# Emily Harvey
## Research question/interests
I am interested in how much of the money made overall in Canada comes from tourism in BC, and in the breakdown of which categories within tourism earn BC the most money, as well as the overall growth of tourism in Canada and BC from 2014 to 2017.
```
import pandas as pd
pd.read_csv('../data/raw/tourism.csv')
```
## Milestone 3:
### Task 1: EDA
```
#Importing Libraries
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import missingno
import pandas as pd
%matplotlib inline
df = pd.read_csv('../data/raw/tourism.csv')
# Simple Preview and Stats of Data
# Shows column names, non-null count, and Dtype
# Shows us that columns 13 and 14 only have null values and need to be deleted.
# Also shows that the most common Dtype in our data is object
df.info()
df.head()
# summary stats table
df.describe().T
# generate preview of entries with null values
if df.isnull().any(axis=None):
    print("\nPreview of data with null values:\n")
    print(df[df.isnull().any(axis=1)].head(5))
missingno.matrix(df)
plt.show()
# Shows us that there are only null values in the SYMBOL & TERMINATED columns and many null values in the STATUS and VALUE columns
# Both the status, symbol, & terminated columns are not being used in the analysis so can be deleted as well
# We also need to get rid of the rows with the unknown values
#checking for duplicated entries
if len(df[df.duplicated()]) > 0:
    print("No. of duplicated entries: ", len(df[df.duplicated()]))
    print(df[df.duplicated(keep=False)].sort_values(by=list(df.columns)).head())
else:
    print("No duplicated entries found")
#tells us how many unique values there are in our dataframe
df.nunique(axis=0)
#further helps us decide which columns to get rid of as they are not useful
#looking at the unique values of the products
df.Products.unique()
#gives us a list of the values in the products column
#looking at the unique values of the indicators
df.Indicators.unique()
#gives us a list of the values in the indicators column
#looking at the unique values of the geo
df.GEO.unique()
#gives us a list of the values in the geo column
#helps us see which rows we want to get rid of as we only want to study BC and Canada
sns.set_theme(style="ticks",
font_scale=1
)
plt.rc("axes.spines", top=False, right=False)
totalDomesticSupply = [3654954, 3910989.7, 421110.5, 483396]
totalDemand = [84580.6, 95998, 15570.4, 19541.5]
totalExports = [32668.7, 42535.5, 8801.2, 11839]
totalImports = [60274.6, 67791.4, 10269.8, 11689.5]
fig, ax = plt.subplots(figsize=(25, 25))
new_df = pd.DataFrame([['Total Domestic Supply', 'Canada 2014', 3654954],
['Total Domestic Supply', 'Canada 2017', 3910989.7],
['Total Domestic Supply', 'BC 2014', 421110.5],
['Total Domestic Supply', 'BC 2017', 483396],
['Total Demand', 'Canada 2014', 84580.6],
['Total Demand', 'Canada 2017', 95998],
['Total Demand', 'BC 2014', 15570.4],
['Total Demand', 'BC 2017', 19541.5],
['Total Exports', 'Canada 2014', 32668.7],
['Total Exports', 'Canada 2017', 42535.5],
['Total Exports', 'BC 2014', 8801.2],
['Total Exports', 'BC 2017', 11839],
['Total Imports', 'Canada 2014', 60274.6],
['Total Imports', 'Canada 2017', 67791.4],
['Total Imports','BC 2014',10269.8],
['Total Imports','BC 2017',11689.5]], columns=['Total Tourism Expenditures', 'Location and Year', 'Dollars (in millions)'])
sns.barplot(data=new_df, x='Location and Year', y='Dollars (in millions)', hue='Total Tourism Expenditures')
# this graph would be better split into separate graphs so the values are easier to see
TotalDomesticSupplyDf = pd.DataFrame([[ 'Canada 2014', 3654954],
['Canada 2017', 3910989.7],
['BC 2014', 421110.5],
['BC 2017', 483396]], columns=['Location and Year', 'Dollars (in millions)'])
fig, ax = plt.subplots(figsize=(16, 8))
sns.barplot(data=TotalDomesticSupplyDf, x='Location and Year', y='Dollars (in millions)')
plt.title('Total Domestic Supply of BC and Canada in 2014 and 2017 in Millions')
fig, ax = plt.subplots(figsize=(18, 8))
new_df = pd.DataFrame([['Total Demand', 'Canada 2014', 84580.6],
['Total Demand', 'Canada 2017', 95998],
['Total Demand', 'BC 2014', 15570.4],
['Total Demand', 'BC 2017', 19541.5],
['Total Exports', 'Canada 2014', 32668.7],
['Total Exports', 'Canada 2017', 42535.5],
['Total Exports', 'BC 2014', 8801.2],
['Total Exports', 'BC 2017', 11839],
['Total Imports', 'Canada 2014', 60274.6],
['Total Imports', 'Canada 2017', 67791.4],
['Total Imports','BC 2014',10269.8],
['Total Imports','BC 2017',11689.5]], columns=['Indicator', 'Location and Year', 'Dollars (in millions)'])
sns.barplot(data=new_df, x='Location and Year', y='Dollars (in millions)', hue='Indicator')
plt.title('Total Tourism Expenditures per Indicator for 2014 and 2017')
fig, ax = plt.subplots(figsize=(18, 8))
new_df = pd.DataFrame([['Canada 2014','Total Demand', 84580.6],
['Canada 2017','Total Demand', 95998],
['BC 2014', 'Total Demand', 15570.4],
['BC 2017','Total Demand', 19541.5],
['Canada 2014','Total Exports', 32668.7],
['Canada 2017','Total Exports', 42535.5],
['BC 2014','Total Exports', 8801.2],
['BC 2017', 'Total Exports',11839],
['Canada 2014','Total Imports', 60274.6],
['Canada 2017','Total Imports', 67791.4],
['BC 2014','Total Imports', 10269.8],
['BC 2017','Total Imports', 11689.5]], columns=['Location and Year','Indicator', 'Dollars (in millions)'])
sns.barplot(data=new_df, x='Indicator', y='Dollars (in millions)', hue='Location and Year')
plt.title('Total Tourism Expenditures per Indicator for 2014 and 2017')
# shows the difference between years better
```
### Task 2: Analysis Pipeline
#### Load in data
```
df = pd.read_csv('../data/raw/tourism.csv')
df
```
#### Clean Data
```
#remove unwanted columns
df_cleaned = df.drop(['DGUID','UOM_ID','SCALAR_ID','VECTOR', 'COORDINATE', 'STATUS','SYMBOL','TERMINATED', 'DECIMALS'],axis=1)
#remove unwanted rows
df_cleaned = df_cleaned[df_cleaned['GEO'].isin(['Canada','British Columbia'])]
df_cleaned = df_cleaned[df_cleaned['UOM'].isin(['Dollars'])]
#remove rows with null values
df_cleaned = df_cleaned.dropna(axis=0)
#reset index
df_cleaned = df_cleaned.reset_index(drop=True)
```
#### Process/Wrangle Data
```
df_cleaned = df_cleaned.rename(columns = {'REF_DATE':'Year', 'GEO': 'Location', 'SCALAR_FACTOR': 'Scalar factor', 'VALUE':'Value' })
df_cleaned
```
### Task 3: Method Chain
#### Step 1
```
df = (
pd.read_csv("../data/raw/tourism.csv")
.drop(columns=['DGUID','UOM_ID','SCALAR_ID','VECTOR', 'COORDINATE', 'STATUS','SYMBOL','TERMINATED', 'DECIMALS'])
.dropna()
.query("GEO != ['Nunavut', 'Northwest Territories','Yukon','Newfoundland and Labrador','Prince Edward Island','Nova Scotia','New Brunswick','Quebec','Ontario','Manitoba','Saskatchewan', 'Alberta']")
.query("UOM != ['Percentage']")
.rename(columns={"REF_DATE":"Year", "GEO":"Location", "SCALAR_FACTOR":"Scalar Factor", "VALUE":"Value"})
)
```
#### Step 2
```
def load_and_process(url_or_path_to_csv_file):
    # Method Chain 1
    df = (
        pd.read_csv(url_or_path_to_csv_file)
        .drop(columns=['DGUID','UOM_ID','SCALAR_ID','VECTOR', 'COORDINATE', 'STATUS','SYMBOL','TERMINATED', 'DECIMALS'])
        .dropna()
        .query("GEO != ['Nunavut', 'Northwest Territories','Yukon','Newfoundland and Labrador','Prince Edward Island','Nova Scotia','New Brunswick','Quebec','Ontario','Manitoba','Saskatchewan', 'Alberta']")
        .query("UOM != ['Percentage']")
        .rename(columns={"REF_DATE":"Year", "GEO":"Location", "SCALAR_FACTOR":"Scalar Factor", "VALUE":"Value"})
    )
    return df
load_and_process('../data/raw/tourism.csv')
```
#### Step 3
```
import project_functions1
df = project_functions1.load_and_process("../data/raw/tourism.csv")
df
df.to_csv('emily_cleaned_data.csv')
```
### Task 4: Conduct your analysis
#### Research Question 1: Which categories within tourism does BC make the most money in?
##### In the table below, we can see which categories have the highest total demand in BC. We can compare these numbers with the total demand in Canada to see what percentage of each category comes from BC.
```
dfBC = df[df['Location'].isin(['British Columbia'])]
dfBC = dfBC[dfBC['Indicators'].isin(['Total demand'])]
dfBC = dfBC[['Year','Products','Value']]
dfBC = dfBC.sort_values(by=["Value"], ascending=False)
dfBC.head(10)
# note: the earlier .groupby("Value") after sorting made head() return rows per group,
# which is why head appeared not to work; sorting alone gives the top rows directly
dfCan = df[df['Location'].isin(['Canada'])]
dfCan = dfCan[dfCan['Indicators'].isin(['Total demand'])]
dfCan = dfCan[['Year','Products','Value']]
dfCan = dfCan.sort_values(by=["Value"], ascending=False)
dfCan.head(10)
```
##### The top 3 categories of highest demand in BC are Total Tourism Expenditures, Total Tourism Products, and Total Transportation. These categories remain in the top 3 in 2017 as well. We can see some growth from 2014 to 2017 in these categories within BC as well as Canada. We can use this data to find out what percentage of each of these categories comes from BC. When we do this we find that in 2014, BC contributed 15.55% of Total Tourism Expenditures, 15.80% of Total Tourism Products, and 16.44% of Total Transportation. In 2017, BC contributed 16.91% of Total Tourism Expenditures, 17.08% of Total Tourism Products, and 17.01% of Total Transportation. From these values, we can see that although Total Tourism Expenditures makes the most money for BC, it is not the category where BC contributes the largest share to Canada; that is Total Transportation. We can also see that from 2014-2017, BC's share of Total Tourism Expenditures grew 1.36 percentage points, Total Tourism Products grew 1.28 points, and Total Transportation grew 0.57 points.
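The BC percentage contributions quoted above come from dividing each BC value by the matching Canada value. A minimal sketch of that calculation, using the total demand figures already shown in the charts above (the helper name is illustrative):

```python
# Share of the Canada-wide total that comes from BC, as a percentage
def bc_share(bc_value, canada_value):
    return round(bc_value / canada_value * 100, 2)

# Total demand (in millions of dollars) from the charts above
print(bc_share(15570.4, 84580.6))   # BC share of total demand, 2014
print(bc_share(19541.5, 95998))     # BC share of total demand, 2017
```

The same helper applies to any of the categories by substituting the BC and Canada values for that category.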
#### Research Question 2: Overall growth of tourism in Canada and BC from 2014-2017.
##### On the graphs below, we can see that there is growth in the total demand, the exports and the imports from 2014 to 2017 in both BC and Canada.
```
sns.barplot(data=new_df, x='Indicator', y='Dollars (in millions)', hue='Location and Year')
plt.title('Total Tourism Expenditures per Indicator for 2014 and 2017')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0)
new_Can = pd.DataFrame([['2014','Total Demand', 84580.6],
['2017','Total Demand', 95998],
['2014','Total Exports', 32668.7],
['2017','Total Exports', 42535.5],
['2014','Total Imports', 60274.6],
['2017','Total Imports', 67791.4],
], columns=['Year','Indicator', 'Dollars (in millions)'])
sns.lineplot(x='Year', y='Dollars (in millions)', hue='Indicator',
data=new_Can)
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0)
plt.title('Tourism Growth Per Indicator in Canada From 2014-2017')
new_BC = pd.DataFrame([['2014', 'Total Demand', 15570.4],
['2017','Total Demand', 19541.5],
['2014','Total Exports', 8801.2],
['2017', 'Total Exports',11839],
['2014','Total Imports', 10269.8],
['2017','Total Imports', 11689.5]], columns=['Year','Indicator', 'Dollars (in millions)'])
sns.lineplot(x='Year', y='Dollars (in millions)', hue='Indicator',
data=new_BC)
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0)
plt.title('Tourism Growth Per Indicator in BC From 2014-2017')
```
##### Looking at these graphs we can tell that growth occurred in all of these categories. Using this data we can calculate the growth rate of each category over the period. Focusing on total demand, growth from 2014 to 2017 was 13.50% in Canada and 25.50% in BC. For exports and imports, the growth rates in BC were 34.52% and 13.82% respectively, and in Canada 30.20% and 12.47%.
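The quoted rates match the simple overall growth across the whole 2014-2017 period (end value over start value, minus one). A short sketch of that calculation using the total demand and export figures from the charts (the helper name is illustrative):

```python
# Overall growth across the period, as a percentage
def period_growth(start, end):
    return round((end / start - 1) * 100, 2)

# Total demand (in millions of dollars), 2014 -> 2017
print(period_growth(84580.6, 95998))    # Canada
print(period_growth(15570.4, 19541.5))  # BC
# Exports in BC, 2014 -> 2017
print(period_growth(8801.2, 11839))
```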
#### Conclusion
##### The main things I found out were that:
- the three biggest contributors to total demand in BC are Total Tourism Expenditures, Total Tourism Products and Total Transportation.
- Although Total Tourism Expenditures was the biggest contributor in BC, Total Transportation contributed a larger share to Canada.
- All of these categories grew from 2014-2017, and the one whose BC share grew the most was Total Tourism Expenditures.
- The overall growth of total demand from 2014 to 2017 was 25.50% in BC and 13.50% in Canada.
- Exports had the largest growth rate in both Canada and BC.
```
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
from keras.models import load_model
import sys
import pickle
import time
from prepare_data import *
np.random.seed(7)
data = Data('l', shuffle_all_inputs=False)
content = data._read_content('data/SlovarIJS_BESEDE_utf8.lex')
dictionary, max_word, max_num_vowels, vowels, accented_vowels = data._create_dict(content)
feature_dictionary = data._create_slovene_feature_dictionary()
syllable_dictionary = data._create_syllables_dictionary(content, vowels)
accented_vowels = ['ŕ', 'á', 'ä', 'é', 'ë', 'ě', 'í', 'î', 'ó', 'ô', 'ö', 'ú', 'ü']
environment = {}
environment['dictionary'] = dictionary
environment['max_word'] = max_word
environment['max_num_vowels'] = max_num_vowels
environment['vowels'] = vowels
environment['accented_vowels'] = accented_vowels
environment['feature_dictionary'] = feature_dictionary
environment['eng_feature_dictionary'] = feature_dictionary
environment['syllable_dictionary'] = syllable_dictionary
output = open('environment.pkl', 'wb')
pickle.dump(environment, output)
output.close()
i = 0
for el in syllable_dictionary:
    if el == "da":
        print(i)
    i += 1
%run prepare_data.py
data = Data('l', shuffle_all_inputs=False)
letter_location_model, syllable_location_model, syllabled_letters_location_model = data.load_location_models(
'cnn/word_accetuation/cnn_dictionary/v5_3/20_final_epoch.h5',
'cnn/word_accetuation/syllables/v3_3/20_final_epoch.h5',
'cnn/word_accetuation/syllabled_letters/v3_3/20_final_epoch.h5')
letter_location_co_model, syllable_location_co_model, syllabled_letters_location_co_model = data.load_location_models(
'cnn/word_accetuation/cnn_dictionary/v5_2/20_final_epoch.h5',
'cnn/word_accetuation/syllables/v3_2/20_final_epoch.h5',
'cnn/word_accetuation/syllabled_letters/v3_2/20_final_epoch.h5')
letter_type_model, syllable_type_model, syllabled_letter_type_model = data.load_type_models(
'cnn/accent_classification/letters/v3_1/20_final_epoch.h5',
'cnn/accent_classification/syllables/v2_1/20_final_epoch.h5',
'cnn/accent_classification/syllabled_letters/v2_1/20_final_epoch.h5')
letter_type_co_model, syllable_type_co_model, syllabled_letter_type_co_model = data.load_type_models(
'cnn/accent_classification/letters/v3_0/20_final_epoch.h5',
'cnn/accent_classification/syllables/v2_0/20_final_epoch.h5',
'cnn/accent_classification/syllabled_letters/v2_0/20_final_epoch.h5')
test_input = [['uradni', '', 'Agpmpn', 'uradni'], ['podatki', '', 'Ncmpn', 'podatki'], ['policije', '', 'Ncfsg', 'policije'], ['kažejo', '', 'Vmpr3p', 'kažejo'], ['na', '', 'Sa', 'na'], ['precej', '', 'Rgp', 'precej'], ['napete', '', 'Appfpa', 'napete'], ['razmere', '', 'Ncfpa', 'razmere'], ['v', '', 'Sl', 'v'], ['piranskem', '', 'Agpmsl', 'piranskem'], ['zalivu', '', 'Ncmsl', 'zalivu'], ['je', '', 'Va-r3s-n', 'je'], ['danes', '', 'Rgp', 'danes'], ['poročala', '', 'Vmpp-sf', 'poročala'], ['oddaja', '', 'Ncfsn', 'oddaja'], ['do', '', 'Sg', 'do'], ['danes', '', 'Rgp', 'danes'], ['se', '', 'Px------y', 'se'], ['je', '', 'Va-r3s-n', 'je'], ['zgodilo', '', 'Vmep-sn', 'zgodilo']]
accented_vowels = ['ŕ', 'á', 'ä', 'é', 'ë', 'ě', 'í', 'î', 'ó', 'ô', 'ö', 'ú', 'ü']
words = [["Gorejevemu", "", "Psnsed", "Gorejevemu"]]
pos = 4282
print(location_accented_words)
print(accented_words)
print(words)
data = Data('s', shuffle_all_inputs=False)
new_content = data._read_content('data/sloleks-sl_v1.2.tbl')
words = [[el[0], '', el[2], el[0]] for el in new_content][1146450:1146550]
words.append(['nadnaravno', '', 'Ppnsei'])
print(words)
#Words proccesed: 650250
#Word indeks: 50023
#Word number: 50023
#done_lexical_entries = 33522
#new_content = data._read_content('sloleks-sl_v1.2.tbl')
rate = 100000
start_timer = time.time()
with open("data/new_sloleks/new_sloleks.tab", "a") as myfile:
    for index in range(0, len(new_content), rate):
        if index+rate >= len(new_content):
            words = [[el[0], '', el[2], el[0]] for el in new_content][index:len(new_content)]
        else:
            words = [[el[0], '', el[2], el[0]] for el in new_content][index:index+rate]
        data = Data('l', shuffle_all_inputs=False)
        location_accented_words, accented_words = data.accentuate_word(words, letter_location_model, syllable_location_model, syllabled_letters_location_model,
                                                                       letter_location_co_model, syllable_location_co_model, syllabled_letters_location_co_model,
                                                                       letter_type_model, syllable_type_model, syllabled_letter_type_model,
                                                                       letter_type_co_model, syllable_type_co_model, syllabled_letter_type_co_model,
                                                                       dictionary, max_word, max_num_vowels, vowels, accented_vowels, feature_dictionary, syllable_dictionary)
        res = ''
        for i in range(index, index + len(words)):
            res += new_content[i][0] + '\t' + new_content[i][1] + '\t' + new_content[i][2] + '\t' \
                   + new_content[i][3][:-1] + '\t' + location_accented_words[i-index] + '\t' + accented_words[i-index] + '\n'
        print('Writing data from ' + str(index) + ' onward.')
        end_timer = time.time()
        print("Elapsed time: " + "{0:.2f}".format((end_timer - start_timer)/60.0) + " minutes")
        myfile.write(res)
```
```
from linebot import LineBotApi
from linebot.exceptions import LineBotApiError
```
# Official demo - Message Type: https://developers.line.me/en/docs/messaging-api/message-types/
# Doc : https://github.com/line/line-bot-sdk-python/blob/master/linebot/models/send_messages.py
```
CHANNEL_ACCESS_TOKEN = "YOUR CHANNEL TOKEN"
# user ID - Get by reply message object.
to = "YOUR USER ID"
from linebot.models import TextSendMessage
```
# TextSendMessage
```
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
try:
    line_bot_api.push_message(to, TextSendMessage(text='台科大電腦研習社'))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

```
from linebot.models import ImageSendMessage, VideoSendMessage, LocationSendMessage, StickerSendMessage
```
# ImageSendMessage
### Links must use HTTPS
Both original_content_url and preview_image_url must be provided in the object, or an error is raised.<br>
The URL has to point to an actual image (strictly an image); otherwise no error is raised, but what arrives is a grey, unusable picture.<br>
The image features are all a bit flaky; I had to upload everything to the imgur image host to get a working link...<br>
Dropping the image into GitHub and linking the raw file does not work either.<br>
```
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
image_url = "https://i.imgur.com/eTldj2E.png?1"
try:
    line_bot_api.push_message(to, ImageSendMessage(original_content_url=image_url, preview_image_url=image_url))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

```
from linebot.models import LocationSendMessage
```
# LocationSendMessage
```
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
title = "國立臺灣科技大學"
address = "106台北市大安區基隆路四段43號"
latitude = 25.0136906
longitude = 121.5406792
try:
    line_bot_api.push_message(to, LocationSendMessage(title=title,
                                                      address=address,
                                                      latitude=latitude,
                                                      longitude=longitude))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

```
from linebot.models import StickerSendMessage
```
# StickerSendMessage
According to the passage below, only the default stickers can be used!!!<br>
Message object which contains the sticker data sent from the source.<br>
For a list of basic LINE stickers and sticker IDs, see sticker list.<br>
sticker list: https://developers.line.me/media/messaging-api/messages/sticker_list.pdf
```
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
package_id = "1"
sticker_id = "1"
# package_id = "1181660"
# sticker_id = "7389429"
try:
    line_bot_api.push_message(to, StickerSendMessage(package_id=package_id, sticker_id=sticker_id))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

# ImagemapSendMessage
```
from linebot.models import ImagemapSendMessage, BaseSize, URIImagemapAction, MessageImagemapAction, ImagemapArea
```
A quick explanation:
you pass in the HTTPS URL of an image,
and it is displayed as a single picture,
but you can attach behaviour to clickable regions of that picture,
e.g. clicking on a certain area triggers a certain action.
Example: take an image (shown below) made with Coolors https://coolors.co/ffb8d1-e4b4c2-e7cee3-e0e1e9-ddfdfe
Imagemap lets us act on regions of the image (each given as a coordinate range),<br>
for example:<br>
the leftmost colour<br>
prints its colour swatch when clicked<br>
<br>
the rightmost colour<br>
opens a URL when clicked<br>
The image features are all a bit flaky; I had to upload everything to the imgur image host to get a working link...<br>
Dropping the image into GitHub and linking the raw file does not work either.<br>
Also, the image is not scaled automatically; it gets cropped.

```
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
# image_url = "https://raw.githubusercontent.com/xiaosean/Line_tutorial/master/img/colors.png"
image_url = "https://i.imgur.com/mB9yDO0.png"
text = "#FFB8D1"
click_link_1 = "https://www.facebook.com/ntustcc"
try:
    line_bot_api.push_message(to, ImagemapSendMessage(
        base_url=image_url,
        alt_text="ImageMap Example",
        base_size=BaseSize(height=1040, width=1040),
        actions=[
            MessageImagemapAction(
                text=text,
                area=ImagemapArea(
                    x=0, y=0, width=int(1040/5), height=1040
                )
            ),
            URIImagemapAction(
                link_uri=click_link_1,
                area=ImagemapArea(
                    x=int(1040*0.8), y=0, width=int(1040/5), height=1040
                )
            )
        ]))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

# TemplateSendMessage - ButtonsTemplate (only displayed on smartphones)
doc: https://github.com/line/line-bot-sdk-python/blob/master/linebot/models/template.py
For this part I recommend this writeup; the author's illustrations are very well done!!
https://ithelp.ithome.com.tw/articles/10195640?sc=iThomeR
```
from linebot.models import TemplateSendMessage, ButtonsTemplate, PostbackTemplateAction, MessageTemplateAction, URITemplateAction
button_template_message =ButtonsTemplate(
thumbnail_image_url="https://i.imgur.com/eTldj2E.png?1",
title='Menu',
text='Please select',
ratio="1.51:1",
image_size="cover",
actions=[
# PostbackTemplateAction: after the option is clicked,
# the text is shown in the chat room,
# and the data payload is also sent back;
# handle it via a Postback event.
PostbackTemplateAction(
label='postback also returns the data parameter',
text='postback text',
data='action=buy&itemid=1'
),
MessageTemplateAction(
label='message returns the text field', text='message text'
),
URITemplateAction(
label='uri opens a URL', uri='http://www.xiaosean.website/'
)
]
)
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
try:
    # since templates are only displayed on phones, the PC client shows alt_text instead
    line_bot_api.push_message(to, TemplateSendMessage(alt_text="Template Example", template=button_template_message))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

# TemplateSendMessage - CarouselTemplate (only displayed on smartphones)
The difference from the previous template is that it sends several templates at once, and the user can swipe through them 1...n.
```
from linebot.models import TemplateSendMessage, CarouselTemplate, CarouselColumn, ButtonsTemplate, PostbackTemplateAction, MessageTemplateAction, URITemplateAction
image_url_1 = "https://i.imgur.com/eTldj2E.png?1"
image_url_2 = "https://i.imgur.com/mB9yDO0.png"
click_link_1 = "https://www.facebook.com/ntustcc"
click_link_2 = "https://www.facebook.com/ntustcc"
carousel_template = CarouselTemplate(
columns=[
CarouselColumn(
thumbnail_image_url=image_url_1,
title='template-1',
text='text-1',
actions=[
PostbackTemplateAction(
label='postback-1',
text='postback text1',
data='result=1'
),
MessageTemplateAction(
label='message-1',
text='message text1'
),
URITemplateAction(
label='uri-1',
uri=click_link_1
)
]
),
CarouselColumn(
thumbnail_image_url=image_url_2,
title='template-2',
text='text-2',
actions=[
PostbackTemplateAction(
label='postback-2',
text='postback text2',
data='result=2'
),
MessageTemplateAction(
label='message-2',
text='message text2'
),
URITemplateAction(
label='link-2',
uri=click_link_2
)
]
)]
)
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
try:
    # since templates are only displayed on phones, the PC client shows alt_text instead
    line_bot_api.push_message(to, TemplateSendMessage(alt_text="Carousel Template Example", template=carousel_template))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

# TemplateSendMessage - ImageCarouselTemplate (only displayed on smartphones)
The difference from the previous template is that the whole layout is just an image plus one line of text, which is more compact; see the result below.
```
from linebot.models import TemplateSendMessage, ImageCarouselTemplate, ImageCarouselColumn, PostbackTemplateAction, MessageTemplateAction, URITemplateAction
image_url_1 = "https://i.imgur.com/eTldj2E.png?1"
image_url_2 = "https://i.imgur.com/mB9yDO0.png"
carousel_template = ImageCarouselTemplate(
columns=[
ImageCarouselColumn(
image_url=image_url_1,
action=MessageTemplateAction(
label='message-1',
text='message text1'
)
),
ImageCarouselColumn(
image_url=image_url_2,
action=PostbackTemplateAction(
label='postback-2',
text='postback text2',
data='result=2'
),
)]
)
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
try:
    # since templates are only displayed on phones, the PC client shows alt_text instead
    line_bot_api.push_message(to, TemplateSendMessage(alt_text="Image Carousel Template Example", template=carousel_template))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

# TemplateAction also offers a DatetimePickerTemplateAction
```
from linebot.models import TemplateSendMessage, ButtonsTemplate, DatetimePickerTemplateAction
button_template_message =ButtonsTemplate(
thumbnail_image_url="https://i.imgur.com/eTldj2E.png?1",
title='Menu',
text='Please select',
actions=[
DatetimePickerTemplateAction(
label="datetime picker date",
# same as the data field in PostbackTemplateAction: returned in the postback.data property of the postback event
data="action=sell&itemid=2&mode=date",
mode="date",
initial="2013-04-01",
min="2011-06-23",
max="2017-09-08"
),
DatetimePickerTemplateAction(
label="datetime picker time",
data="action=sell&itemid=2&mode=time",
mode="time",
initial="10:00",
min="00:00",
max="23:59"
)
# below part failed, I have reported issue
# https://github.com/line/line-bot-sdk-python/issues/100
# DatetimePickerTemplateAction(
# label="datetime picker datetime",
# data="action=sell&itemid=2&mode=datetime",
# mode="datetime",
# initial="2018-04-01T10:00",
# min="2011-06-23T00:00",
# max="2019-09-08T23:59"
# )
]
)
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
try:
    # since templates are only displayed on phones, the PC client shows alt_text instead
    line_bot_api.push_message(to, TemplateSendMessage(alt_text="Template Example", template=button_template_message))
except LineBotApiError as e:
    # error handling
    raise e
```
# Output

# FileMessage - not implemented
DOC https://github.com/line/line-bot-sdk-python/blob/master/linebot/models/messages.py
```
from linebot.models import VideoSendMessage
```
# VideoSendMessage - I could not get this to work (failed)
The docs say the video input "must be less than 1 minute long".<br>
<br>
My guess is that any URL ending in .mp4 should work,<br>
but I could not find such a video.<br>
```
line_bot_api = LineBotApi(CHANNEL_ACCESS_TOKEN)
video_url = ""
image_url = ""
try:
    line_bot_api.push_message(to, VideoSendMessage(original_content_url=video_url, preview_image_url=image_url))
except LineBotApiError as e:
    # error handling
    raise e
```
```
import pandas as pd
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.inspection import plot_partial_dependence
from dtreeviz.trees import *
import scipy as sp
from scipy.cluster import hierarchy as hc
import sys
sys.path.append('..')
from fpl_predictor.util import *
# path to project directory
path = Path('../')
# read in training dataset
train_df = pd.read_csv(path/'fpl_predictor/data/train_v8.csv',
index_col=0,
dtype={'season':str,
'squad':str,
'comp':str})
```
## Random Forest
Random Forest is an ensemble tree-based predictive algorithm. In this case we will be using it for regression - we want to predict a continuous number, predicted points, for each player each game. It works by training many separate decision trees, each using a subset of the training data, and outputs the average prediction across all trees.
Applying it to a time series problem, where metrics from recent time periods can be predictive, requires us to add window features (e.g. points scored last gameweek). These are created using the player_lag_features function from 00_fpl_features.
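The exact window logic lives in player_lag_features (defined in 00_fpl_features and not shown here); as a rough sketch of the idea, a lag feature can be built with pandas groupby/shift/rolling. The frame and column names below are illustrative, not the project schema:

```python
import pandas as pd

# Toy frame: one row per player per gameweek
toy = pd.DataFrame({
    'player': ['a', 'a', 'a', 'b', 'b', 'b'],
    'gw': [1, 2, 3, 1, 2, 3],
    'total_points': [2, 6, 9, 1, 0, 5],
})

# points scored last gameweek: shift within each player's own history
toy['points_last_1'] = toy.groupby('player')['total_points'].shift(1)

# mean points over the previous 2 gameweeks (shift first so the current week is excluded)
toy['points_pg_last_2'] = (toy.groupby('player')['total_points']
                              .transform(lambda s: s.shift(1).rolling(2).mean()))
print(toy)
```

Shifting before taking the rolling mean matters: it keeps the current gameweek's points out of its own feature, which would otherwise leak the target into the inputs.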
```
# add a bunch of player lag features
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
```
Similar to the simple model, we'll set the validation period to be gameweeks 20-25 of the 2020/21 season (the '2021' season string in the code); the model will be trained on all data prior to that period. This time, however, we'll be using some additional features: the season, gameweek, player position, home/away, and both teams, as well as all the lag features we created above.
```
# set validaton point/length and categorical/continuous variables
valid_season = '2021'
valid_gw = 20
valid_len = 6
cat_vars = ['season', 'position', 'was_home', 'team', 'opponent_team']
cont_vars = ['gw']#, 'minutes']
dep_var = ['total_points']
```
Some of the features have an order (e.g. the 2019/20 season comes after the 2018/19 season) whereas others do not (position). We can set this in the data where appropriate using an ordered category (e.g. 1617 < 1718 < 1819 < 1920 < 2021).
```
# we want to set gw and season as ordered categorical variables
# need lists with ordered categories
ordered_gws = list(range(1,39))
ordered_seasons = ['1617', '1718', '1819', '1920', '2021']
# set as categories with correct order
lag_train_df['gw'] = lag_train_df['gw'].astype('category')
lag_train_df['season'] = lag_train_df['season'].astype('category')
lag_train_df['gw'].cat.set_categories(ordered_gws, ordered=True, inplace=True)
lag_train_df['season'].cat.set_categories(ordered_seasons, ordered=True, inplace=True)
lag_train_df['season']
```
And now we can go ahead and create our training and validation sets using the function we defined in the last notebook.
```
# create dataset with adjusted post-validation lag numbers
train_valid_df, train_idx, valid_idx = create_lag_train(lag_train_df,
cat_vars, cont_vars,
player_lag_vars, team_lag_vars, dep_var,
valid_season, valid_gw, valid_len)
```
The way we calculate our lag features means that there will be null values in our dataset. These will cause an error when using random forest in scikit-learn, so we will set them all to zero for now (although note that this may not be the best fill strategy).
```
lag_train_df[~np.isfinite(lag_train_df['total_points_pg_last_1'])]
# imp = SimpleImputer(missing_values=np.nan, strategy='mean')
# need to think about imputing NaN instead of setting to zero
# imp.fit(X_train[team_lag_vars + player_lag_vars])
train_valid_df[team_lag_vars + player_lag_vars] = train_valid_df[team_lag_vars + player_lag_vars].fillna(0)
```
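The commented-out SimpleImputer hints at the alternative fill strategy: replacing missing lag values with the column mean rather than zero. A minimal sketch of that approach on a toy frame (column names here are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy stand-in for the lag columns
lag_cols = pd.DataFrame({'lag_1': [2.0, np.nan, 4.0],
                         'lag_2': [np.nan, 3.0, 5.0]})

imp = SimpleImputer(missing_values=np.nan, strategy='mean')
filled = pd.DataFrame(imp.fit_transform(lag_cols), columns=lag_cols.columns)
print(filled)
```

In the real pipeline the imputer should be fit on the training rows only and then applied to the validation rows, so that no validation information leaks into the fill values.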
The random forest regressor only takes numbers as inputs, so we need to transform our categorical features into a format it can use: numbers instead of strings, spread across one or more columns.
```
# split out dependent variable
X, y = train_valid_df[cat_vars + cont_vars + team_lag_vars + player_lag_vars].copy(), train_valid_df[dep_var].copy()
# since position is categorical, it should be a string
X['position'] = X['position'].apply(str)
# need to transform season
enc = LabelEncoder()
X['season'] = enc.fit_transform(X['season'])
X_dict = X.to_dict("records")
# Create the DictVectorizer object: dv
dv = DictVectorizer(sparse=False, separator='_')
# Apply dv on df: df_encoded
X_encoded = dv.fit_transform(X_dict)
X_df = pd.DataFrame(X_encoded, columns=dv.feature_names_)
```
For example, season is now represented by a number (0 -> 2016/17, 1 -> 2017/18, etc.) in a single column, and position is represented by a 1 or 0 in multiple columns.
```
X_df[['season', 'position_1', 'position_2', 'position_3', 'position_4']]
X_df.columns
```
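The transformation can be illustrated on a toy example: numeric values pass through as a single column, while string-valued categories are expanded into one indicator column per category (the toy rows below are illustrative):

```python
from sklearn.feature_extraction import DictVectorizer

# Two toy rows: 'season' is numeric, 'position' is a string category
rows = [{'season': 0, 'position': '1'},
        {'season': 1, 'position': '2'}]

dv = DictVectorizer(sparse=False, separator='_')
encoded = dv.fit_transform(rows)
print(dv.feature_names_)  # feature names are sorted alphabetically
print(encoded)
```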
Let's now split out our training set (everything prior to the validation gameweek) and validation set (6 gameweeks from the validation gameweek; the filter for rows with >0 minutes is left commented out below).
```
# split out training and validation sets
X_train = X_df.loc[train_idx]
y_train = y.loc[train_idx]
X_test = X_df.loc[valid_idx]
# we only want look at rows with >0 minutes (i.e. the player played)
# test_mask = (X_test['minutes'] > 0)
# X_test = X_test[test_mask]
# y_test = y.loc[valid_idx][test_mask]
y_test = y.loc[valid_idx]
# X_train = X_train.drop('minutes', axis=1)
# X_test = X_test.drop('minutes', axis=1)
```
We can now create the RandomForestRegressor with set parameters, train it on the training data, and look at the error on the validation set.
```
# def rf(xs, y, n_estimators=40, max_samples=50_000,
# max_features=0.5, min_samples_leaf=5, **kwargs):
# return RandomForestRegressor(n_jobs=-1, n_estimators=n_estimators,
# max_samples=max_samples, max_features=max_features,
# min_samples_leaf=min_samples_leaf, oob_score=True).fit(xs, y)
def rf(xs, y, max_depth=7, **kwargs):
    return RandomForestRegressor(n_jobs=-1, max_depth=max_depth, oob_score=True).fit(xs, y)
# fit training data
m = rf(X_train, y_train.values.ravel())
# predict validation set and output metrics
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test.values.ravel())))
print("MAE: %f" % mae(preds, y_test.values.ravel()))
```
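The helpers `r_mse` and `mae` were defined earlier in the notebook; assuming the usual definitions, they amount to something like:

```python
import numpy as np

def r_mse(pred, y):
    # root mean squared error
    return round(np.sqrt(((pred - y) ** 2).mean()), 6)

def mae(pred, y):
    # mean absolute error
    return np.abs(pred - y).mean()

print(mae(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # 1.5
```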
Right away this looks like a significant improvement on the simple model, which is good to see. Let's go ahead and use the same approach, validating across a whole season.
```
def rf_season(df, valid_season='2021'):
    # empty list for scores
    scores = []
    valid_len = 6
    for valid_gw in range(1,40-valid_len):
        # create dataset with adjusted post-validation lag numbers
        train_valid_df, train_idx, valid_idx = create_lag_train(df, cat_vars, cont_vars,
                                                                player_lag_vars, team_lag_vars, dep_var,
                                                                valid_season, valid_gw, valid_len)
        train_valid_df[team_lag_vars + player_lag_vars] = train_valid_df[team_lag_vars + player_lag_vars].fillna(0)
        # split out dependent variable
        X, y = train_valid_df[cat_vars + cont_vars + team_lag_vars + player_lag_vars].copy(), train_valid_df[dep_var].copy()
        # since position is categorical, it should be a string
        X['position'] = X['position'].apply(str)
        # need to transform season
        enc = LabelEncoder()
        X['season'] = enc.fit_transform(X['season'])
        X_dict = X.to_dict("records")
        # Create the DictVectorizer object: dv
        dv = DictVectorizer(sparse=False, separator='_')
        # Apply dv on df: df_encoded
        X_encoded = dv.fit_transform(X_dict)
        X_df = pd.DataFrame(X_encoded, columns=dv.feature_names_)
        # split out training and validation sets
        X_train = X_df.loc[train_idx]
        y_train = y.loc[train_idx]
        X_test = X_df.loc[valid_idx]
        # we only want to look at rows with >0 minutes (i.e. the player played)
        # test_mask = (X_test['minutes'] > 0)
        # X_test = X_test[test_mask]
        # y_test = y.loc[valid_idx][test_mask]
        y_test = y.loc[valid_idx]
        m = rf(X_train, y_train.values.ravel())
        preds, targs = m.predict(X_test), y_test.values.ravel()
        gw_mae = mae(preds, targs)
        print("GW%d MAE: %f" % (valid_gw, gw_mae))
        scores.append(gw_mae)
    return scores
scores = rf_season(lag_train_df)
plt.plot(scores)
plt.ylabel('GW MAE')
plt.xlabel('GW')
plt.text(15, 1.55, 'Season Avg MAE: %.2f' % np.mean(scores), bbox={'facecolor':'white', 'alpha':1, 'pad':5})
plt.show()
```
Looking across the whole season we see about a 10% improvement versus the simple model. Also interesting is that performance again improves as the season progresses - this makes sense: more data about each of the teams and players (particularly new ones) means an improved ability to predict the next 6 gameweeks.
Let's add these validation scores to our comparison dataset.
```
model_validation_scores = pd.read_csv(path/'charts/model_validation_scores.csv', index_col=0)
model_validation_scores['random_forest'] = scores
model_validation_scores.to_csv(path/'charts/model_validation_scores.csv')
```
A feature of the random forest algorithm is that we can see how often features are being used in trees. This gives us an indication of how important each feature is, i.e. whether it is predictive of total points scored. Simpler models are usually better, so this also gives us a way of seeing if there are any features that are not particularly useful and can therefore be removed.
```
def rf_feat_importance(m, df):
    return pd.DataFrame({'cols':df.columns, 'imp':m.feature_importances_}
                        ).sort_values('imp', ascending=False)
fi = rf_feat_importance(m, X_train)
fi[:32]
def plot_fi(fi):
    return fi.plot('cols', 'imp', kind='barh', figsize=(12,7), legend=False).invert_yaxis()
plot_fi(fi[:30]);
```
At the moment this algorithm is given minutes played in the gameweek so it's unsurprising that this is by far the most important feature - the more minutes a player plays, the more opportunity to score points. But strictly speaking we don't actually have this information prior to a gameweek (in practice it is estimated using previous minutes and injury status), so we can ignore it for now.
Below that the top features are:
1. minutes_last_1 - number of minutes in the last fixture for the player
2. minutes_last_2 - number of minutes in the last two fixtures for the player
3. total_points_pg_last_all - the player's average points per game in all of history (since start of 2016/17 season)
4. total_points_team_pg_last_all_opponent - the opposition's average points per game in all of history
5. minutes_last_3 - number of minutes in the last three fixtures for the player
6. total_points_team_pg_last_all - the player's team's average points per game in all of history
7. total_points_pg_last_10 - the player's average points per game in the last 10 fixtures
8. total_points_pg_last_1 - the player's average points per game in the last fixture
This is interesting. It seems to be saying that the number of minutes a player has played recently and their underlying ability to score points across all of history, along with their team's and opponent's historic points scoring, are the most important factors.
Recent performance (i.e. 'form') is also important, but to a lesser extent.
It also shows that the lag features are far more useful than the categorical features such as team, opponent and position. Again not too surprising since information on these categories are already captured in the lag features.
Let's test this... we can remove anything with a feature importance of less than 0.005 and see how the model performs on the original 2019/20 week 20 validation point (going from 94 features to just 32).
```
to_keep = fi[fi.imp>0.005].cols
len(to_keep)
len(X_train.columns)
X_train_imp = X_train[to_keep]
X_test_imp = X_test[to_keep]
m = rf(X_train_imp, y_train.values.ravel())
mae(m.predict(X_test_imp), y_test.values.ravel())
# mae(m.predict(X_train_imp), y_train.values.ravel())
```
Very similar, albeit with a slightly higher error (less than 1% worse) than previously, and still a long way ahead of the simple model.
Continuing our thinking about improving/simplifying the model features, we can also look to see if there are any similar features - quite often we will find that some features are so similar that some of them may be redundant.
The following function determines the similarity between columns in a dataset and visualises it using a dendrogram.
```
def cluster_columns(df, figsize=(10,6), font_size=12):
    corr = np.round(sp.stats.spearmanr(df).correlation, 4)
    corr_condensed = hc.distance.squareform(1-corr)
    z = hc.linkage(corr_condensed, method='average')
    fig = plt.figure(figsize=figsize)
    hc.dendrogram(z, labels=df.columns, orientation='left', leaf_font_size=font_size)
    plt.show()
cluster_columns(X_train_imp)
```
We can see that our lagging features are somewhat similar - absolutely expected since, for example, minutes_last_5 is equal to minutes_last_4 + minutes 5 games ago. They are still different enough to be of value separately, but it does make me wonder whether separating out each historic game in some way (up to a point) would be valuable.
A final useful tool we can use is partial dependency plots. These try to look at the impact of single features on the dependent variable (points scored).
```
fig,ax = plt.subplots(figsize=(12, 3))
plot_partial_dependence(m, X_test_imp, ['total_points_pg_last_all',
'total_points_team_pg_last_all_opponent',
'total_points_pg_last_1'],
grid_resolution=20, ax=ax);
```
Again, these make sense. The higher a player's historic points per game (defined as 90 minutes) is, the higher we predict their score will be. Conversely, the higher their opposition's historic points per game, the harder they are as an opponent and the lower their predicted score will be.
Looking at the player's most recent game, again the higher their score, the more it will push up our prediction (the impact of their 'form'), but the relationship is far weaker than the player's underlying per minute scoring stats.
Here we only look at features in isolation; there will be lots of interactions going on between features that improve performance. For example, a player may have a high 'total_points_pg_last_1' from the previous fixture but only played 5 minutes in total - in this case the algorithm is likely to have learned that a high 'total_points_pg_last_1' coupled with a low 'minutes_last_1' is not an indicator that the player will score higher in the next fixture.
Ok, now we can move onto the next algorithm - xgboost.
# Heikin-Ashi PSAR Strategy
_Roshan Mahes_
In this tutorial, we implement the so-called _Parabolic Stop and Reverse (PSAR)_ strategy. Given any stock, currency or commodity, this indicator tells us whether to buy or sell the stock at any given time. The momentum strategy is based on the open, high, low and close price for each time period. This can be represented with a traditional Japanese candlestick chart. Later on, we apply the PSAR strategy on so-called Heikin-Ashi ('average bar') data, which reduces some noise, making it easier to identify trends.
The following packages are required:
```
%pip install pandas
%pip install yfinance
%pip install plotly
```
Now we can import the following modules:
```
import os
import pandas as pd
import yfinance as yf
import plotly.graph_objects as go
```
This strategy works on any stock. In this notebook, we take the stock of Apple, represented by the ticker symbol AAPL. Let's download the pricing data and plot a (Japanese) candlestick chart:
```
symbol = 'AAPL'
df = yf.download(symbol, start='2020-01-01')
df.index = df.index.strftime('%Y-%m-%d') # format index as dates only
candles = go.Candlestick(x=df.index, open=df.Open, high=df.High, low=df.Low, close=df.Close)
# plot figure
fig = go.Figure(candles)
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'Japanese Candlestick Chart ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
## The PSAR Indicator
The _Parabolic Stop and Reverse (PSAR) indicator,_ developed by J. Wells Wilder, is a momentum indicator used by traders to determine trend direction and potential reversals in price. It is a trend-following (lagging) indicator that uses a trailing stop and reverse method called SAR (Stop and Reverse), to identify suitable exit and entry points. The concept draws on the idea that 'time is the enemy', i.e., unless a security can continue to generate more profits over time, it should be liquidated.
The PSAR indicator appears on a chart as a series of dots, either above or below an asset's price, depending on the direction the price is moving. A dot is placed below the price when it is trending upward, and above the price when it is trending downward. There is a dot for every price bar, hence the indicator is always producing information.
The parabolic SAR is calculated almost independently for each trend in the price. When the price is in an uptrend, the SAR emerges below the price and converges upwards towards it. Similarly, on a downtrend, the SAR emerges above the price and converges downwards. At each step within a trend, the SAR is calculated one period in advance, i.e., tomorrow's SAR value is built using data available today. The general formula used for this is:
\begin{align*}
SAR_t = SAR_{t-1} + \alpha_t (EP_t - SAR_{t-1}),
\end{align*}
where $SAR_t$ is the SAR value at time $t$.
The _extreme point_ $EP$ is a record kept during each trend that represents the highest value reached by the price during the current uptrend, or lowest value during a downtrend. During each period, if a new maximum (or minimum) is observed, the EP is updated with that value.
The $\alpha$ value is the _acceleration factor._ Usually, this is initially set to a value of $0.02$. The factor is increased by $0.02$ each time a new EP is recorded. The rate will then quicken to a point where the SAR converges towards the price. To prevent it from getting too large, a maximum value for the acceleration factor is normally set to $0.20$. Generally, it is preferable in stocks to set the acceleration factor to $0.01$ so that it is not too sensitive to local decreases, whereas for commodity or currency trading the preferred value is $0.02$.
There are special cases that modify the SAR value:
1. If the next period's SAR value is inside (or beyond) the current period or the previous period's price range, the SAR must be set to the closest price bound. For example, in an upward trend, if the newly calculated SAR value turns out to be higher than today's or yesterday's lowest price, it must be set equal to that lower boundary.
2. If the next period's SAR value is inside (or beyond) the next period's price range, a new trend direction is then signaled. The SAR must then switch sides.
3. Upon a trend switch, the first SAR value for this new trend is set to the last $EP$ recorded on the prior trend. Then, the $EP$ is reset accordingly to this period's maximum, and the acceleration factor is reset to its initial value of $0.01$ (stocks) or $0.02$ (commodities/currencies).
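As a one-step numeric sketch of the general formula (made-up numbers, not real prices): with yesterday's SAR at 100, an extreme point of 110, and $\alpha = 0.02$, the next SAR is $100 + 0.02 \cdot (110 - 100) = 100.2$:

```python
def sar_step(sar_prev, ep, alpha):
    # SAR_t = SAR_{t-1} + alpha * (EP_t - SAR_{t-1})
    return sar_prev + alpha * (ep - sar_prev)

print(sar_step(100.0, 110.0, 0.02))  # 100.2
```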
As we can see, it's quite a difficult strategy as the formulas are not that straightforward. We have implemented it in the following function:
```
def PSAR(df, alpha_start=0.01):
    """
    Returns the dataframe with the given PSAR indicator for each time period.
    """
    trend = 0
    alpha = alpha_start
    SAR = [df['Open'][0]] + [0] * (len(df) - 1)
    isUpTrend = lambda x: x > 0
    trendSwitch = lambda x: abs(x) == 1
    # initialisation
    if df['Close'][1] > df['Close'][0]:
        trend = 1
        SAR[1] = df['High'][0]
        EP = df['High'][1]
    else:
        trend = -1
        SAR[1] = df['Low'][0]
        EP = df['Low'][1]
    # recursion
    for t in range(2,len(df)):
        # general formula
        SAR_new = SAR[t-1] + alpha * (EP - SAR[t-1])
        # case 1 & 2
        if isUpTrend(trend):
            SAR[t] = min(SAR_new, df['Low'][t-1], df['Low'][t-2])
            if SAR[t] > df['Low'][t]:
                trend = -1
            else:
                trend += 1
        else:
            SAR[t] = max(SAR_new, df['High'][t-1], df['High'][t-2])
            if SAR[t] < df['High'][t]:
                trend = 1
            else:
                trend -= 1
        # case 3
        if trendSwitch(trend):
            SAR[t] = EP
            alpha = alpha_start
            if isUpTrend(trend):
                EP_new = df['High'][t]
            else:
                EP_new = df['Low'][t]
        else:
            if isUpTrend(trend):
                EP_new = max(df['High'][t], EP)
            else:
                EP_new = min(df['Low'][t], EP)
            if EP != EP_new:
                alpha = min(alpha + 0.02, 0.20)
        # update EP
        EP = EP_new
    # store values
    df['SAR'] = SAR
    df['Signal'] = (df['SAR'] < df['Close']).apply(int).diff() # records trend switches
    return df
```
After applying the PSAR strategy on Apple's stock, we end up with the following trading decisions:
```
# apply PSAR
df = PSAR(df)
# extract trend switches (buying/selling advice)
buy = df.loc[df['Signal'] == 1]
sell = df.loc[df['Signal'] == -1]
# candles & psar
candles = go.Candlestick(x=df.index, open=df.Open, high=df.High, low=df.Low, close=df.Close, name='candles')
psar = go.Scatter(x=df.index, y=df['SAR'], mode='markers', name='PSAR', line={'width': 10, 'color': 'midnightblue'})
# buy & sell symbols
buys = go.Scatter(x=buy.index, y=buy.Close, mode='markers', marker_size=15, marker_symbol=5,
marker_color='green', name='Buy', marker_line_color='black', marker_line_width=1)
sells = go.Scatter(x=sell.index, y=sell.Close, mode='markers', marker_size=15, marker_symbol=6,
marker_color='red', name='Sell', marker_line_color='black', marker_line_width=1)
# plot figure
fig = go.Figure(data=[candles, psar, buys, sells])
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'PSAR indicator ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
We see that most of the time our indicator predicted a correct trend! Instead of using the open, high, low and close data, represented by this traditional candlestick chart, we can also apply the PSAR strategy on so-called _Heikin-Ashi charts_.
## Heikin-Ashi Charts
_Heikin-Ashi_ means 'average bar' in Japanese. Heikin-Ashi charts, developed by Munehisa Homma in the 1700s, display prices that, at a glance, look similar to a traditional Japanese chart. The Heikin-Ashi technique averages price data to create a Japanese candlestick chart that filters out market noise. Instead of using the open, high, low, and close like standard candlestick charts, the Heikin-Ashi technique uses a modified formula based on two-period averages. This gives the chart a smoother appearance, making it easier to spot trends and reversals, but it also obscures gaps and some price data.
The formulas are as follows:
\begin{align*}
H_{open,t} &= \frac{H_{open,t-1} + H_{close,t-1}}{2}, \\
H_{close,t} &= \frac{C_{open,t} + C_{high,t} + C_{low,t} + C_{close,t}}{4}, \\
H_{high,t} &= \max\{H_{open,t}, H_{close,t}, C_{high,t}\}, \\
H_{low,t} &= \min\{H_{open,t}, H_{close,t}, C_{low,t}\},
\end{align*}
with initial condition $H_{open, 0} = C_{open,0}$. Here, $H_{open,t}$ is the opening value in the Heikin-Ashi chart at time $t \in \mathbb{N}_0$, and $C_{open,t}$ is the opening value of the stock as used in the traditional Japanese candlestick chart, and so on for the other quantities.
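A quick one-bar sanity check of these formulas with made-up prices (yesterday's Heikin-Ashi open/close of 10 and 12; today's candle open 11, high 14, low 10, close 13):

```python
def heikin_ashi_bar(ha_open_prev, ha_close_prev, o, h, l, c):
    # one step of the four Heikin-Ashi formulas above
    ha_open = (ha_open_prev + ha_close_prev) / 2
    ha_close = (o + h + l + c) / 4
    ha_high = max(ha_open, ha_close, h)
    ha_low = min(ha_open, ha_close, l)
    return ha_open, ha_high, ha_low, ha_close

print(heikin_ashi_bar(10, 12, 11, 14, 10, 13))  # (11.0, 14, 10, 12.0)
```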
In the following function we transform a given dataframe of stock prices to a Heikin-Ashi one.
```
def heikin_ashi(df):
    """
    Converts a dataframe of prices according to the Heikin-Ashi technique.
    """
    df_HA = pd.DataFrame(index=df.index, columns=['Open', 'High', 'Low', 'Close'])
    df_HA['Open'][0] = df['Open'][0]
    df_HA['Close'] = (df['Open'] + df['High'] + df['Low'] + df['Close']) / 4
    for t in range(1,len(df)):
        df_HA.iat[t,0] = (df_HA['Open'][t-1] + df_HA['Close'][t-1]) / 2 # change H_open without warnings
    df_HA['High'] = df_HA[['Open', 'Close']].join(df['High']).max(axis=1)
    df_HA['Low'] = df_HA[['Open', 'Close']].join(df['Low']).min(axis=1)
    return df_HA
```
Let's convert Apple's (Japanese) candlestick chart to a Heikin-Ashi chart:
```
df_HA = heikin_ashi(df)
candle = go.Candlestick(x=df_HA.index, open=df_HA['Open'], high=df_HA['High'], low=df_HA['Low'], close=df_HA['Close'])
# plot figure
fig = go.Figure(candle)
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'Heikin-Ashi Chart ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
As we can see, the Heikin-Ashi technique can be used to identify a trend more easily. Because the Heikin-Ashi technique smooths price information over two periods, it makes trends, price patterns, and reversal points easier to spot. Candles on a traditional candlestick chart frequently change from up to down, which can make them difficult to interpret. Heikin-Ashi charts typically have more consecutive colored candles, helping traders to identify past price movements easily.
The Heikin-Ashi technique reduces false trading signals in sideways and choppy markets to help traders avoid placing trades during these times. For example, instead of getting two false reversal candles before a trend commences, a trader who uses the Heikin-Ashi technique is likely only to receive the valid signal.
## Heikin-Ashi PSAR indicator
It is straightforward to apply the PSAR strategy on our Heikin-Ashi data:
```
# apply PSAR
df = PSAR(df_HA)
# extract trend switches (buying/selling advice)
buy = df.loc[df['Signal'] == 1]
sell = df.loc[df['Signal'] == -1]
# candles & psar
candles = go.Candlestick(x=df.index, open=df.Open, high=df.High, low=df.Low, close=df.Close, name='candles')
psar = go.Scatter(x=df.index, y=df['SAR'], mode='markers', name='PSAR', line={'width': 10, 'color': 'midnightblue'})
# buy & sell symbols
buys = go.Scatter(x=buy.index, y=buy.Close, mode='markers', marker_size=15, marker_symbol=5,
marker_color='green', name='Buy', marker_line_color='black', marker_line_width=1)
sells = go.Scatter(x=sell.index, y=sell.Close, mode='markers', marker_size=15, marker_symbol=6,
marker_color='red', name='Sell', marker_line_color='black', marker_line_width=1)
# plot figure
fig = go.Figure(data=[candles, psar, buys, sells])
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'Heikin-Ashi PSAR indicator on Heikin-Ashi ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
In this case, there are small differences. In fact, only on one date is the Heikin-Ashi SAR value different from the traditional SAR value. This might change when clear trends are less visible, so feel free to try other stocks!
## 1. Introduction to pyLHD
pyLHD is a python implementation of the R package [LHD](https://cran.r-project.org/web/packages/LHD/index.html) by Hongzhi Wang, Qian Xiao, and Abhyuday Mandal. As of now, only the algebraic constructions of Latin hypercube designs (LHDs) are implemented in this package. For search algorithms to construct LHDs, such as simulated annealing, particle swarm optimization, and genetic algorithms, refer to the R package.
In Section 2, algebraic construction methods for LHDs are discussed.
To evaluate the generated LHDs, we consider the following criteria.
### Maximin distance Criterion
Let $X$ denote an LHD matrix. Define the $L_q$-distance between two runs $x_i$ and $x_j$ of $X$ as $d_q(x_i,x_j) = \left( \sum_{k=1}^m |x_{ik}-x_{jk}|^q \right)^{1/q}$, where $q$ is an integer. Define the $L_q$-distance of design $X$ as $d_q(X) = \min \{ d_q(x_i,x_j), 1 \leq i < j \leq n \}$. If $q=1$, we are considering the Manhattan $(L_1)$ distance. If $q=2$, the Euclidean $(L_2)$ distance is considered. A design $X$ is called a maximin $L_q$-distance design if it has the unique largest $d_q(X)$ value.
Morris and Mitchell (1995) and Jin et al. (2005) proposed the $\phi_p$ criterion, which is defined as
$$
\phi_p = \left( \sum_{i=1}^{n-1} \sum_{j=i+1}^n d_q (x_i,x_j)^{-p} \right)^{1/p}
$$
The $\phi_p$ criterion is asymptotically equivalent to the Maximin distance criterion as $p \rightarrow \infty$. In practice $p=15$ often suffices.
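For intuition, $\phi_p$ can be computed directly from the definitions above (a minimal NumPy sketch; `pl.phi_p` used later is the package implementation):

```python
import numpy as np

def phi_p_sketch(X, p=15, q=1):
    # pairwise L_q distances over all run pairs i < j, then the phi_p aggregation
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    dists = [np.sum(np.abs(X[i] - X[j]) ** q) ** (1 / q)
             for i in range(n - 1) for j in range(i + 1, n)]
    return np.sum(np.array(dists) ** (-p)) ** (1 / p)

# two runs at Euclidean distance 5, so phi_p reduces to approximately 1/5
print(phi_p_sketch([[0, 0], [3, 4]], p=15, q=2))
```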
### Maximum Projection Criterion
Joseph et al (2015) proposed the maximum projection LHDs that consider designs' space-filling properties in all possible dimensional spaces. Such designs minimize the maximum projection criterion, which is defined as
$$
\underset{X}{\min} \psi(X) = \left( \frac{1}{{n \choose 2}} \sum_{i=1}^{n-1} \sum_{j=i+1}^n \frac{1}{ \prod_{l=1}^k (x_{il}-x_{jl})^2} \right)^{1/k}
$$
We can see that any two design points should be far apart from each other in any projection to minimize the value of $\psi(X)$.
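A direct NumPy translation of $\psi(X)$ (a sketch; `pl.MaxProCriterion` used later is the package implementation):

```python
import numpy as np
from math import comb

def max_pro_sketch(X):
    # psi(X): average over run pairs of 1 / prod_l (x_il - x_jl)^2, then the k-th root
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    total = sum(1.0 / np.prod((X[i] - X[j]) ** 2)
                for i in range(n - 1) for j in range(i + 1, n))
    return (total / comb(n, 2)) ** (1 / k)

print(max_pro_sketch([[0, 0], [1, 2]]))  # 0.5
```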
### Orthogonality Criteria
Two major correlation-based criteria to measure designs' orthogonality are the average absolute correlation criterion and the maximum absolute correlation criterion:
$$
ave(|q|) = \frac{2 \sum_{i=1}^{k-1} \sum_{j=i+1}^k |q_{ij}|}{k(k-1)} \quad \text{and} \quad \max |q| = \underset{i,j}{\max} |q_{ij}|
$$
where $q_{ij}$ is the correlation between the $i$th and $j$th columns of the design matrix $X$. Orthogonal designs have $ave(|q|)=0$ and $\max|q|=0$, which may not exist for all design sizes. Designs with smaller $ave(|q|)$ or $\max|q|$ are generally preferred in practice.
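Both criteria follow directly from the column correlation matrix (a NumPy sketch; `pl.AvgAbsCor` and `pl.MaxAbsCor` used below are the package versions):

```python
import numpy as np

def abs_cor_criteria(X):
    # q_ij: correlations between columns; take |q_ij| over all pairs i < j
    q = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    off_diag = np.abs(q[np.triu_indices_from(q, k=1)])
    return off_diag.mean(), off_diag.max()

# two perfectly correlated columns and one anti-correlated column
ave_q, max_q = abs_cor_criteria([[1, 1, 3], [2, 2, 2], [3, 3, 1]])
print(ave_q, max_q)  # both 1.0 up to floating point
```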
```
import pyLHD as pl
```
Let's start by generating a random LHD with 5 rows and 3 columns:
```
X = pl.rLHD(nrows=5,ncols=3)
X
```
We evaluate the above design with the different optimality criteria described earlier.
The maximin distance criterion (Manhattan)
```
pl.phi_p(X,p=15,q=1) # using default parameters
```
The maximin distance criterion (Euclidean)
```
pl.phi_p(X,p=10,q=2) # different p used than above
```
The average absolute correlation
```
pl.AvgAbsCor(X)
```
The maximum absolute correlation
```
pl.MaxAbsCor(X)
```
The maximum projection criterion
```
pl.MaxProCriterion(X)
```
We can apply Williams transformation on X defined as:
$$
W(x) = \begin{cases}
2x & 0 \leq x \leq (p-1)/2 \\
2(p-x)-1 & (p+1)/2 \leq x \leq p-1
\end{cases}
$$
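Elementwise, the transformation can be sketched as follows (assuming levels $0, \dots, p-1$; `pl.williams_transform` is the package implementation):

```python
import numpy as np

def williams_sketch(x, p):
    # W(x) = 2x for x <= (p-1)/2, else 2(p-x)-1
    x = np.asarray(x)
    return np.where(x <= (p - 1) / 2, 2 * x, 2 * (p - x) - 1)

print(williams_sketch([0, 1, 2, 3, 4], p=5).tolist())  # [0, 2, 4, 3, 1]
```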
```
W_x = pl.williams_transform(X)
W_x
```
Let's evaluate the new transformed design:
```
pl.phi_p(W_x)
```
The $\phi_p$ value of the transformed design $W_x$ is smaller than that of the original design $X$.
## 2. Algebraic Construction Functions
The algebraic construction methods are demonstrated in the table below
| | Ye98 | Cioppa07 | Sun10 | Tang93 | Lin09 | Butler01 |
|------------|---|---|---|---|---|----|
| Run # $n$ | $2^m +1$ | $2^m +1$ | $r2^{c+1}$ or $r2^{c+1} +1$ | $n$ | $n^2$ | $n$ |
| Factor # $k$ | $2m-2$ | $m + {m-1 \choose 2}$ | $2^c$ | $m$ | $2fp$ | $k \leq n-1$ |
| Note | $m$ is a positive integer $m\geq 2$ | $m$ is a positive integer $m\geq 2$ | $r$ and $c$ are positive integers | $n$ and $m$ are from $OA(n,m,s,r)$ | $n^2,2f$ and $p$ are from $OA(n^2,2f,n,2)$ and $OLHD(n,p)$ | $n$ is an odd prime number |
For theoretical details on the construction methods, a good overview is **Section 4.2: Algebraic Constructions for Orthogonal LHDs** from [Musings about Constructions of Efficient Latin Hypercube Designs with Flexible Run-sizes](https://arxiv.org/abs/2010.09154)
We start with the Ye 1998 construction; the resulting design will have $2^m+1$ runs and $2m-2$ factors.
```
Ye98 = pl.OLHD_Ye98(m=4)
Ye98
pl.MaxAbsCor(Ye98) # column-wise correlation are 0
```
Cioppa and Lucas 2007 construction, the resulting design will be a $2^m+1$ by $m+ {m-1 \choose 2}$ orthogonal LHD. Note $m \geq 2$
```
Cioppa07 = pl.OLHD_Cioppa07(m=3)
Cioppa07
pl.MaxAbsCor(Cioppa07) # column-wise correlation are 0
```
Sun et al. 2010 construction, the resulting design will be $r2^{c+1}$ by $2^c$ if type='even'. If type='odd'
the resulting design will be $r2^{c+1} + 1$ by $2^c$, where $r$ and $c$ are positive integers.
```
Sun10_odd = pl.OLHD_Sun10(C=2,r=2,type='odd')
Sun10_odd
Sun10_even = pl.OLHD_Sun10(C=2,r=2,type='even')
Sun10_even
```
Lin et al. 2009 construction, the resulting design will be $n^2$ by $2fp$. This is obtained by coupling an
$n$ by $p$ orthogonal LHD with an $n^2$ by $2f$ strength-2, level-$n$ orthogonal array.
Start by generating an orthogonal LHD
```
OLHD_example = pl.OLHD_Cioppa07(m=2)
```
Next, create an orthogonal array with 25 rows, 6 columns, 5 levels, and strength 2 OA(25,6,5,2)
```
import numpy as np
OA_example = np.array([[2,2,2,2,2,1],[2,1,5,4,3,5],
[3,2,1,5,4,5],[1,5,4,3,2,5],
[4,1,3,5,2,3],[1,2,3,4,5,2],
[1,3,5,2,4,3],[1,1,1,1,1,1],
[4,3,2,1,5,5],[5,5,5,5,5,1],
[4,4,4,4,4,1],[3,1,4,2,5,4],
[3,3,3,3,3,1],[3,5,2,4,1,3],
[3,4,5,1,2,2],[5,4,3,2,1,5],
[2,3,4,5,1,2],[2,5,3,1,4,4],
[1,4,2,5,3,4],[4,2,5,3,1,4],
[2,4,1,3,5,3],[5,3,1,4,2,4],
[5,2,4,1,3,3],[5,1,2,3,4,2],
[4,5,1,2,3,2] ])
```
Now, using the Lin et al. 2009 construction, we couple the OLHD and OA to obtain:
```
Lin09 = pl.OLHD_Lin09(OLHD=OLHD_example,OA=OA_example)
Lin09
```
We can convert an orthogonal array into an LHD using the function OA2LHD. Consider the
earlier OA_example with 25 rows and 6 columns.
```
pl.OA2LHD(OA_example)
```
Lastly, we consider the Butler 2001 construction by generating an $n$ by $k$ OLHD:
```
Butler01 = pl.OLHD_Butler01(nrows=11,ncols=5)
Butler01
```
# Update the Human Proteome Reference File and KinPred Final Data
This notebook shows the steps to upgrade from the 2019-12-11 to the 2020-02-26 reference human proteome, and to update the KinPred final data with the new reference human proteome.
```
# IMPORTS
import pandas as pd
import os
import sys
sys.path.append('../PreprocessingPredictionData/')
import humanProteomesReference
##################
# File Location #
##################
# local (../../)
base = '../../'
#####################
# Defining File Dir #
#####################
# Human Proteome fasta file dir
HP_fasta = base + 'Data/Raw/HumanProteome/'
# human proteome referece csv file dir
HP_csv = base + 'Data/Map/'
#########################################################
# Defining the file names for the current (old) version #
#########################################################
# Old version Date
old_version = '2019-12-11'
old_HP_fasta = HP_fasta + 'humanProteome_' + old_version + '.fasta'
old_HP_csv = HP_csv + 'humanProteome_' + old_version + '.csv'
#########################################################
# Defining the file names for the updated (new) version #
#########################################################
# New version Date: uniprot.org (Date of last sequence modification)
new_version = '2020-02-26'
new_HP_fasta = HP_fasta + 'humanProteome_' + new_version + '.fasta'
new_HP_csv = HP_csv + 'humanProteome_' + new_version + '.csv'
seq_mod_fasta = HP_fasta + 'UpdatedSeq_' + new_version + '.fasta'
```
### Download the updated (new) version of Human Proteome fasta file
```
# Downloads the updated Human Proteome (canonical) from Uniprot.org
# saves as fasta format at the given dir/name + last sequence modification date.
humanProteomesReference.downloadHumanProteomes(new_HP_fasta)
# Convert the input fasta file into a dataframe
# saves as csv format at the given dir/name + last sequence modification date.
humanProteomesReference.fastaToCSV(new_HP_fasta, new_HP_csv)
```
### Compare the Current (old) version and the Updated (new) version
- get a list of UniprotIDs from the current (old) human proteome reference file that become obsolete/secondary in the updated (new) human proteome reference file
- get a list of UniprotIDs from the updated (new) human proteome reference file that have a different sequence in the current (old) human proteome reference file
- predictions with those UniprotIDs (substrate_acc) will be removed from the current prediction data files
```
df_old = pd.read_csv(old_HP_csv, usecols = ['UniprotID', 'sequence'], sep = '\t')
df_new = pd.read_csv(new_HP_csv, usecols = ['UniprotID', 'sequence'], sep = '\t')
common_acc = df_old.merge(df_new, on=['UniprotID'])
common_seq = df_old.merge(df_new, on=['UniprotID' , 'sequence'])
old_id = df_old[(~df_old.UniprotID.isin(common_acc.UniprotID))][['UniprotID']]
new_seq = df_new[(~df_new.UniprotID.isin(common_seq.UniprotID))|(~df_new.sequence.isin(common_seq.sequence))][['UniprotID']]
print ('Outdated Protein UniprotIDs: \n', old_id, '\n')
print ('Protein UniprotIDs with Sequence Modification:\n', new_seq)
```
- download the fasta file of the Protein UniprotIDs with Sequence Modification
```
humanProteomesReference.downloadFasta (new_seq, seq_mod_fasta)
```
### Re-run predictions of the Protein UniprotIDs with Sequence Modification in each predictor
Please see the `Get Results` section in
[FormattingPhosphoPICK.ipynb](https://github.com/NaegleLab/KinPred/blob/master/Code/PreprocessingPredictionData/FormattingPhosphoPICK.ipynb), [FormattingGPS.ipynb](https://github.com/NaegleLab/KinPred/blob/master/Code/PreprocessingPredictionData/FormattingGPS.ipynb), and
[FormattingNetworKIN.ipynb](https://github.com/NaegleLab/KinPred/blob/master/Code/PreprocessingPredictionData/FormattingNetworKIN.ipynb)
for instructions on how to run predictions with each predictor.
### Update the Prediction Data
```
# IMPORTS
sys.path.append('../PreprocessingPredictionData/')
import gps_convert, phosphoPick_convert, networKin_convert
#####################
# Defining File Dir #
#####################
# Resource Files
SubstrateMap = base + 'Data/Map/globalSubstrateMap.csv' # add all unique substrate in HPRD to the global file
KinaseMap = base + 'Data/Map/globalKinaseMap.csv' # add all unique kinase in HPRD to the global file
# GPS
# Current (old) prediction data file
gps_old = base + 'Data/Formatted/GPS/GPS_formatted_' + old_version + '.csv'
# updated (new) prediction data file
gps_new = base + 'Data/Formatted/GPS/GPS_formatted_' + new_version + '.csv'
# manually prepared GPS valid kinase table
gps_kinase = base + 'Data/Raw/GPS/gps_valid_kinases.csv'
# dir for the predictions of the updated sequences
gps_update_dir = base + 'Data/Raw/GPS/updated/updated/'
# temp dir for processing the predictions for the updated sequences
gps_temp_dir_acc_update = base + 'Data/Temp/GPS/mappedAcc/updated/updated/'
gps_temp_dir_site_update = base + 'Data/Temp/GPS/mappedSite/updated/updated/'
# PhosphoPICK
# Current (old) prediction data file
pick_old = base + 'Data/Formatted/PhosphoPICK/PhosphoPICK_formatted_' + old_version + '.csv'
# updated (new) prediction data file
pick_new = base + 'Data/Formatted/PhosphoPICK/PhosphoPICK_formatted_' + new_version + '.csv'
# dir for the predictions of the updated sequences
pick_update_dir = base + 'Data/Raw/PhosphoPICK/updated/updated/'
# temp dir for processing the predictions for the updated sequences
pick_temp_dir_acc_update = base + 'Data/Temp/PhosphoPICK/mappedAcc/updated/updated/'
pick_temp_dir_site_update = base + 'Data/Temp/PhosphoPICK/mappedSite/updated/updated/'
# NetworKIN
# Current (old) prediction data file
kin_old = base + 'Data/Formatted/NetworKIN/NetworKIN_formatted_' + old_version + '.csv'
# updated (new) prediction data file
kin_new = base + 'Data/Formatted/NetworKIN/NetworKIN_formatted_' + new_version + '.csv'
# dir for the predictions of the updated sequences
kin_update_dir = base + 'Data/Raw/NetworKIN/updated/updated/'
# temp dir for processing the predictions for the updated sequences
kin_temp_dir_acc_update = base + 'Data/Temp/NetworKIN/mappedAcc/updated/updated/'
kin_temp_dir_site_update = base + 'Data/Temp/NetworKIN/mappedSite/updated/updated/'
```
**Remove outdated data** for the outdated Protein UniprotIDs and the Protein UniprotIDs with Sequence Modification from the prediction data of each predictor
```
# append old_id and new_seq together
rm_id = pd.concat([old_id, new_seq]).reset_index(drop = True)
rm_id
def rmOutdated(predictor_old, predictor_new, rm_id_df):
    # stream the old prediction file in chunks and drop rows whose
    # substrate accession is in the removal list
    for chunk in pd.read_csv(predictor_old, chunksize = 1000000):
        chunk = chunk[~chunk.substrate_acc.isin(rm_id_df.UniprotID)]
        if not os.path.isfile(predictor_new):
            # first chunk creates the file and writes the header
            chunk.to_csv(predictor_new, mode='a', index=False, sep=',')
        else:
            chunk.to_csv(predictor_new, mode='a', index=False, sep=',', header=False)
```
- **GPS**
```
rmOutdated(gps_old, gps_new, rm_id)
```
- **PhosphoPICK**
```
rmOutdated(pick_old, pick_new, rm_id)
```
- **NetworKIN**
```
rmOutdated(kin_old, kin_new, rm_id)
```
**Process the rerun predictions** of each predictor
```
#get Gene Name(substrate) from the globalSubstrateMap.csv
df_unique_sub = pd.read_csv(SubstrateMap, usecols = ['Gene Name','UniprotID'])
#get Kinase Name from the globalKinaseMap.csv
df_unique_kin = pd.read_csv(KinaseMap, usecols = ['Kinase Name','UniprotID'])
def addNameCol(predictor_df):
    # add Gene Name (substrate) column
    predictor_df = predictor_df.merge(df_unique_sub, left_on=['substrate_acc'], right_on=['UniprotID'], how = 'left')
    # drop the duplicated uniprotID column for substrates
    predictor_df = predictor_df.drop(columns = 'UniprotID')
    # add Kinase Name column
    predictor_df = predictor_df.merge(df_unique_kin, left_on=['kinase_acc'], right_on=['UniprotID'], how = 'left')
    # drop the duplicated uniprotID column for kinases
    predictor_df = predictor_df.drop(columns = 'UniprotID')
    return predictor_df
# removing unmatched kinase type and phosphosite type
def rm_unmatched_kinase_type(df):
df_y = df[df['Kinase Name'].isin(y_kin['Kinase Name'])]
df_y = df_y[df_y['site'].str.contains('Y')]
df_st = df[df['Kinase Name'].isin(st_kin['Kinase Name'])]
df_st = df_st[df_st['site'].str.contains('S|T')]
df_dual = df[df['Kinase Name'].isin(dual_kin['Kinase Name'])]
df_final = pd.concat([df_y, df_st, df_dual])
df_final = df_final.reset_index()
return df_final
```
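The residue filter in `rm_unmatched_kinase_type` relies on `str.contains`, which interprets `'S|T'` as a regular expression matching either residue letter. A minimal sketch on a hypothetical toy table (the real code additionally checks kinase class membership against the `y_kin`/`st_kin`/`dual_kin` tables):

```python
import pandas as pd

# Hypothetical toy predictions: kinase name plus a phosphosite label such as "Y15" or "S10"
df = pd.DataFrame({
    'Kinase Name': ['SRC', 'AKT1', 'SRC', 'GSK3B'],
    'site':        ['Y15', 'S10', 'T20', 'Y5'],
})

# str.contains takes a regular expression, so 'S|T' keeps rows whose
# site label contains either an S or a T residue
st_sites = df[df['site'].str.contains('S|T')]
print(list(st_sites['site']))  # ['S10', 'T20']
```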
- **GPS**
```
# convert substrate_acc and kinase_acc
convert_type = 'acc'
gps_convert.gps_convert_directory(gps_update_dir, gps_kinase, gps_temp_dir_acc_update, convert_type)
# map the site to the updated (new) human proteome reference
convert_type = 'site'
gps_convert.gps_convert_directory(gps_temp_dir_acc_update, new_HP_csv, gps_temp_dir_site_update, convert_type)
# print the converted df for the updated seq predictions
df_gps_update = pd.read_csv(gps_temp_dir_site_update+'UpdatedSeq_' + new_version + '_mappedSite.csv')
df_gps_update
# remove the ones that are not kinases (from FormattingGPS.ipynb)
not_kinase = ['PDK2', 'PDK3', 'PDK4', 'MSN', 'GTF2F1', 'MPS1']
df_gps_update = df_gps_update[~df_gps_update['kinase'].isin(not_kinase)]
df_gps_update
# add Gene (substrate) and Kinase Name columns
df_gps_update = addNameCol(df_gps_update)
#rename columns
df_gps_update = df_gps_update[['substrate_id','substrate_acc','Gene Name','mapped site','pep', 'score', 'Kinase Name']]
df_gps_update = df_gps_update.rename(columns={'mapped site' : 'site', 'Gene Name' : 'substrate_name'})
df_gps_update
```
- **PhosphoPICK**
```
# convert substrate_acc and kinase_acc
convert_type = 'acc'
phosphoPick_convert.pick_convert_directory(pick_update_dir, 'na', pick_temp_dir_acc_update, convert_type)
# map the site to the updated (new) human proteome reference
convert_type = 'site'
phosphoPick_convert.pick_convert_directory(pick_temp_dir_acc_update, new_HP_csv, pick_temp_dir_site_update, convert_type)
# print the converted df for the updated seq predictions
df_pick_update = pd.read_csv(pick_temp_dir_site_update+'UpdatedSeq_' + new_version + '_mappedSite.csv')
df_pick_update
# add the uniprotID of the protein kinase 'RPSK6A5'
id_dict = {'RPSK6A5':'O75582'}
for key in id_dict:
df_pick_update.loc[df_pick_update.kinase == key, ["kinase_acc"]] = id_dict[key]
# remove MAP3KB : didn't find any record in human
df_pick_update = df_pick_update[df_pick_update['kinase_acc'] != '(no hit in human)']
df_pick_update
# add Gene (substrate) and Kinase Name columns
df_pick_update = addNameCol(df_pick_update)
#rename columns
df_pick_update = df_pick_update[['substrate_id','substrate_acc','Gene Name','site','pep', 'combined-p-value', 'Kinase Name']]
df_pick_update = df_pick_update.rename(columns={'combined-p-value' : 'score', 'Gene Name' : 'substrate_name'})
df_pick_update
```
- **NetworKIN**
```
# convert substrate_acc and kinase_acc
convert_type = 'acc'
networKin_convert.kin_convert_directory(kin_update_dir, 'na', kin_temp_dir_acc_update, convert_type)
# map the site to the updated (new) human proteome reference
convert_type = 'site'
networKin_convert.kin_convert_directory(kin_temp_dir_acc_update, new_HP_csv, kin_temp_dir_site_update, convert_type)
# print the converted df for the updated seq predictions
df_kin_update = pd.read_csv(kin_temp_dir_site_update+'UpdatedSeq_' + new_version + '_mappedSite.csv')
df_kin_update
# remove the ones that are not kinases (from FormattingNetworKIN.ipynb)
not_kinase = ['PDK2','PDK3','PDK4','LCA5']
df_kin_update = df_kin_update[~df_kin_update['kinase_name'].isin(not_kinase)]
df_kin_update
# add Gene (substrate) and Kinase Name columns
df_kin_update = addNameCol(df_kin_update)
#rename columns
df_kin_update = df_kin_update[['substrate_id','substrate_acc','Gene Name','site','pep', 'score', 'Kinase Name']]
df_kin_update = df_kin_update.rename(columns={'Gene Name' : 'substrate_name'})
df_kin_update
```
**Append the rerun predictions** of each predictor to the prediction files and save them as the updated prediction files with a version date
```
def appendUpdates(predictor_new, update_df):
update_df = rm_unmatched_kinase_type(update_df)
update_df.to_csv(predictor_new, mode='a', index = False, header=False)
```
- **GPS**
```
appendUpdates(gps_new, df_gps_update)
```
- **PhosphoPICK**
```
appendUpdates(pick_new, df_pick_update)
```
- **NetworKIN**
```
appendUpdates(kin_new, df_kin_update)
```
### Cross Referencing with ProteomeScout Phosphorylation Data
see [CrossReferenceWithProteomeScout.ipynb](https://github.com/NaegleLab/KinPred/blob/master/Code/CrossReferenceWithProteomeScout/CrossReferenceWithProteomeScout.ipynb) for details
```
# IMPORTS
sys.path.append('../CrossReferenceWithProteomeScout/')
import XRefProteomeScout
from datetime import date
# version date
pscout_version = date.today().strftime('%Y-%m-%d')
# file location
ref_proteome = base+"Data/Raw/HumanProteome/humanProteome_"+ref_version+".fasta"
pscout_data = base+'Data/Raw/ProteomeScout_'+pscout_version+'/data.tsv'
# download current ProteomeScout Data
XRefProteomeScout.getPScoutData()
# run cross referencing
XRefProteomeScout.XRefProteomeScout(pscout_data, ref_proteome, new_version)
```
```
%matplotlib inline
import pyross
import numpy as np
import matplotlib.pyplot as plt
```
# Introduction: Forecast for SEAIRQ model with stochastic parameters
In this notebook, we consider the SEAIRQ model.
We assume that the parameters
* $\beta$ (probability of infection on contact),
* $\gamma_{E}$ (rate of progression for exposed individual to class A),
* $\gamma_{AA}$ (rate of progression from class A to asymptomatic infective class),
* $\gamma_{AS}$ (rate of progression from class A to symptomatic infective class),
* $\gamma_{I_a}$ (rate of removal for asymptomatic infected individuals), and
* $\gamma_{I_s}$ (rate of removal for symptomatic infected individuals)
* $ \tau_S$ (quarantining rate for susceptibles)
* $ \tau_E$ (quarantining rate for exposed)
* $ \tau_A$ (quarantining rate for A)
* $ \tau_{I_a}$ (quarantining rate for asymptomatic infectives)
* $ \tau_{I_s}$ (quarantining rate for symptomatic infectives)
are not known exactly, but rather are characterized by an 11D Gaussian distribution with known mean and covariance matrix. The Gaussian distribution function is truncated, i.e. set to zero if any parameter is $< 0$.
**We now illustrate how uncertainties in the parameters affect the predictions of the SEAIRQ model.**
For this we simulate the SEAIRQ model $N_s = 500$ times; for each simulation the above parameters are sampled from a given 11D Gaussian distribution. The resulting 500 trajectories are shown together with their mean, standard deviation, median, and 5th as well as 95th percentiles.
We perform this analysis for the deterministic SEAIRQ model.
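pyross samples these parameters internally; conceptually, drawing from a truncated Gaussian can be sketched as rejection sampling, where any draw with a negative component is discarded and redrawn. The mean and covariance below are illustrative 2D values for $(\beta, \gamma_E)$ only, not the full 11D setup:

```python
import numpy as np

def sample_truncated_gaussian(mean, cov, n_samples, rng=None):
    """Draw multivariate-normal samples, rejecting any with a negative entry."""
    rng = np.random.default_rng(rng)
    samples = []
    while len(samples) < n_samples:
        draw = rng.multivariate_normal(mean, cov)
        if np.all(draw >= 0):   # truncation: all parameters must be non-negative
            samples.append(draw)
    return np.array(samples)

mean = np.array([0.2, 0.04])            # illustrative means for (beta, gE)
cov = np.array([[0.1 * 0.2**2, 0.0],
                [0.0, 0.01 * 0.04**2]])
params = sample_truncated_gaussian(mean, cov, 500, rng=42)
print(params.shape)  # (500, 2)
```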
# Define model parameters and initialise pyross.forecast.SEAIRQ
```
M = 1 # the SEAIRQ model we consider has no age structure
Ni = 50000*np.ones(M) # so there is only one age group
N = np.sum(Ni) # and the total population is the size of this age group
E0 = np.array([0])
A0 = np.array([1])
Ia0 = np.array([0]) # the SEAIRQ model we consider has only one kind of infective
Is0 = np.array([20]) # we take these to be symptomatic
Q0 = np.array([0])
R0 = np.array([0]) # and assume there are no recovered individuals initially
S0 = N-(E0+A0+Ia0+Is0+R0) # The initial susceptibles are obtained from S + E + A + Ia + Is + R = N
# there is no contact structure
def contactMatrix(t):
return np.identity(M)
# duration of simulation and output datapoints
Tf = 500; Nt=Tf+1
# These parameters we consider exact
fsa = 1 # the self-isolation parameter
tE = 0.00 # rate E -> Q
tA = 0.00 # rate A -> Q
tIa = 0.00 # rate Ia -> Q
tIs = 0.05 # rate Is -> Q
# These are the parameters that we sample stochastically
# means
alpha = 0.0 # fraction of asymptomatic infectives
beta = 0.2 # infection rate
gIa = 0.1 # removal rate of asymptomatic infectives
gIs = 0.1 # removal rate of symptomatic infectives
gE = 0.04 # removal rate of E
gA = 0.2 # rate to go from A to Ia
# order in covariance matrix:
# alpha, beta, gIa, gIs, gA, gE
#
cov = np.zeros([6,6],dtype=float)
cov[0,0] = 0*alpha**2 # cov(alpha, alpha) = Var(alpha)
cov[1,1] = 0.1*beta**2 # cov(beta, beta) = Var(beta)
cov[2,2] = 0.01*gIa**2 # cov(gIa,gIa) = Var(gIa)
cov[3,3] = 0.01*gIs**2 # cov(gIs,gIs) = Var(gIs)
cov[4,4] = 0.01*gA**2 # cov(gA, gA) = Var(gA)
cov[5,5] = 0.01*gE**2 # cov(gE, gE) = Var(gE)
#
cov[1,5] = 0.01*beta*gE # cov(beta, gE)
cov[5,1] = cov[1,5] # covariance matrix is symmetric
#
cov[2,3] = cov[2,2] # cov(gIa, gIs)
cov[3,2] = cov[2,3]
# Define parameters for simulations
parameters = {'alpha':alpha, 'beta':beta,
'gE':gE,'gA':gA,
              'gIa':gIa, 'gIs':gIs, 'fsa':fsa,
'tE':tE,'tA':tA,'tIa':tIa,'tIs':tIs,
'cov':cov
}
# Initialise pyross forecast module
model_forecast = pyross.forecast.SEAIRQ(parameters, M, Ni)
# Number of simulations over which we average, use 500
Ns = 10
# Define a function which we use below to plot simulation results
def plot_trajectories(result,
percentile=-1,
plot_index = 4, # which time series should be plotted?
filename='None'): # set filename for saving figures
# plot_index class
# 0 susceptibles
# 1 exposed
# 2 asymptomatic and infectious
# 3 asymptomatic infectives
# 4 symptomatic infectives
# 5 quarantined
if plot_index == 0:
title='Susceptibles'
ylabel = r'$N_S$'
elif plot_index == 1:
title='Exposed'
ylabel = r'$N_{E}$'
elif plot_index == 2:
title=r'Asymptomatic, infectious (A)'
ylabel = r'$N_{A}$'
elif plot_index == 3:
title='Asymptomatic infectives'
ylabel = r'$N_{I,a}$'
elif plot_index == 4:
title='Symptomatic infectives'
ylabel = r'$N_{I,s}$'
elif plot_index == 5:
title='Quarantined'
ylabel = r'$N_{Q}$'
else:
        raise RuntimeError("plot_index should be between 0 and 5.")
#
fontsize=25
#
#
trajectories = result['X']
t_arr = result['t']
traj_mean = result['X_mean']
traj_std = result['X_std']
#
#
# Plot trajectories
#
fig, ax = plt.subplots(1,1,figsize=(7,5))
ax.set_title(title,
y=1.05,
fontsize=fontsize)
for i,e in enumerate(trajectories):
ax.plot(t_arr,e[plot_index],
alpha=0.15,
)
ax.fill_between(t_arr,traj_mean[plot_index] - traj_std[plot_index],
traj_mean[plot_index] + traj_std[plot_index],
alpha=0.7,
color='limegreen',
label='Std deviation')
ax.plot(t_arr,traj_mean[plot_index] - traj_std[plot_index],
alpha=1,
label='Std deviation',
lw=1.5,
ls='--',
color='black')
ax.plot(t_arr,traj_mean[plot_index] + traj_std[plot_index],
alpha=1,
#label='Std deviation',
lw=1.5,
ls='--',
color='black')
ax.plot(t_arr,traj_mean[plot_index],
alpha=1,
lw=2,
color='black',
label='Mean')
ax.set_xlim(np.min(t_arr),np.max(t_arr))
ax.set_ylabel(ylabel,fontsize=fontsize)
ax.set_xlabel(r'$t$ [days]',fontsize=fontsize)
ax.legend(loc='upper right',fontsize=18)
plt.show()
if filename != 'None':
fig.savefig(filename + '_trajs.png', bbox_inches='tight',dpi=100)
plt.close()
#
#
#
# Plot percentiles
#
if percentile > 0:
percentiles_lower = np.percentile(trajectories[:,plot_index],percentile,axis=0)
percentiles_upper = np.percentile(trajectories[:,plot_index],100-percentile,axis=0)
percentiles_median = np.percentile(trajectories[:,plot_index],50,axis=0)
print("In the following plot, red dashed lines denote {0} and {1} percentiles of the numerical data:".format(percentile,
100-percentile))
fig, ax = plt.subplots(1,1,figsize=(7,5))
ax.set_title(title,
y=1.05,
fontsize=fontsize)
for i,e in enumerate(trajectories):
ax.plot(t_arr,e[plot_index],
alpha=0.15,
)
ax.fill_between(t_arr,percentiles_lower,
percentiles_upper,
alpha=0.1,
color='red',
label='Percentiles')
ax.plot(t_arr,percentiles_lower,
alpha=1,
lw=2,
label='Percentiles',
ls='--',
color='red',
)
ax.plot(t_arr,percentiles_upper,
alpha=1,
lw=2,
color='red',
ls='--',
)
ax.plot(t_arr,percentiles_median,
alpha=1,
lw=2,
color='red',
label='Median')
ax.plot(t_arr,traj_mean[plot_index],
alpha=1,
lw=2,
color='black',
label='Mean')
ax.set_xlim(np.min(t_arr),np.max(t_arr))
ax.set_ylabel(ylabel,fontsize=fontsize)
ax.set_xlabel(r'$t$ [days]',fontsize=fontsize)
ax.legend(loc='upper right',fontsize=18)
plt.show()
if filename != 'None':
fig.savefig(filename + '_trajs2.png', bbox_inches='tight',dpi=100)
plt.close()
# Define a function which we use below to plot parameters used for simulations
def plot_sample_parameters(result,
filename='None'): # set filename for saving figures
#
fontsize=25
#
# Scatterplot of used parameters
#
sample_parameters = result['sample_parameters'].T
beta = result['beta']
gE = result['gE']
gIa = result['gIa']
gIs = result['gIs']
#
title = r'Samples for stochastic $\beta$, $\gamma_{E}$'
labelx = r'$\beta $'
labely = r'$\gamma_{E}$'
x_mean = beta
y_mean = gE
labelx_mean = r'$\langle \beta \rangle$'
labely_mean = r'$\langle \gamma_{E} \rangle$'
data_index_x = 1
data_index_y = 4
fig, ax = plt.subplots(1,1,figsize=(7,5))
ax.set_title(title,y=1.05,fontsize=fontsize)
ax.axvline(x_mean,color='limegreen',ls='--',lw=2,label=labelx_mean)
ax.axhline(y_mean,color='dodgerblue',ls='--',lw=2,label=labely_mean)
ax.scatter(sample_parameters[data_index_x], sample_parameters[data_index_y] ,
label='sampled data',
color='black',s=10) #, c = truth)
ax.set_xlabel(labelx,fontsize=fontsize)
ax.set_ylabel(labely,fontsize=fontsize)
ax.set_xlim(0,1.05*np.max(sample_parameters[data_index_x]))
ax.set_ylim(0,1.05*np.max(sample_parameters[data_index_y]))
ax.legend(loc='best',fontsize=15)
plt.show()
if filename != 'None':
fig.savefig(filename + '_samples1.png', bbox_inches='tight',dpi=100)
plt.close()
#
```
# Forecast based on deterministic model
```
result = model_forecast.simulate(S0, E0, A0, Ia0, Is0, Q0,
contactMatrix, Tf, Nt,
verbose=True,
Ns=Ns)
plot_trajectories(result,
plot_index = 2,
percentile=5,
)
plot_trajectories(result,
# filename='forecast_deterministic',
percentile=5,
)
plot_trajectories(result,
plot_index = 5,
percentile=5,
)
plot_sample_parameters(result)
```
```
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
tf.keras.backend.clear_session()
from src.models import places_ontop_model
from src import custom_losses, custom_metrics, optimizers
from src.data import data
batch_size = 128
n_classes = 6
epochs = 100
img_size = 224
n_channels = 3
model = places_ontop_model.PlacesOntop_Model(batch_size, n_classes, epochs, img_size, n_channels, version=8)
from src.data import data
paths = data.PATH()
dataset_path = f'{paths.PROCESSED_DATA_PATH}/'
dataset = 'vision_based_dataset'
test_dataset_path = f'{dataset_path}/{dataset}/'
train_generator, validation_generator, test_generator = model.get_image_data_generator(test_dataset_path, train=True, validation=True, test=True, class_mode_validation='categorical', class_mode_test='categorical')
weights = model.get_class_weights(train_generator.classes, model)
model.compile(loss=custom_losses.weighted_categorical_crossentropy(weights), metrics=['categorical_accuracy'],)
# model.model.compile(optimizer='adam', loss=custom_losses.weighted_categorical_crossentropy(weights), metrics=['categorical_accuracy'],)
# instance_model.compile(optimizer='adam', loss=custom_losses.weighted_categorical_crossentropy(weights), metrics=['categorical_accuracy'],)
model.show_summary()
model.fit_from_generator(path=f'{dataset_path}/{dataset}',
train_generator=train_generator, validation_generator=validation_generator,
test_generator=test_generator,
evaluate_net=False, use_model_check_point=True, use_early_stop=True, weighted=True,
show_activations=False, n_workers=2)
model.model_path
model = model.load_model(model.model_path)
model.model_is_trained = True
model.save_model()
```
### Notes:
#### - Try configurations 6, 7, 8 and 9.
#### - Compare the best result with notebook placescnn_v2.1
#### - Try configurations with the convolutional blocks of the network unfrozen
```
model.fit_from_generator(path=f'{dataset_path}/{dataset}',
train_generator=train_generator, validation_generator=validation_generator,
test_generator=test_generator,
evaluate_net=False, use_model_check_point=True, use_early_stop=True, weighted=True,
show_activations=False,)
model.model_is_trained = True
model.save_model()
model.predict_from_generator()
```
```
from influxdb import InfluxDBClient
client = InfluxDBClient(host='localhost', port=8086)
print(client)
client.create_database('pyexample')
client.switch_database('pyexample')
json_body = [
{
"measurement": "brushEvents",
"tags": {
"user": "Carol",
"brushId": "6c89f539-71c6-490d-a28d-6c5d84c0ee2f"
},
"time": "2018-03-28T8:01:00Z",
"fields": {
"duration": 127
}
},
{
"measurement": "brushEvents",
"tags": {
"user": "Carol",
"brushId": "6c89f539-71c6-490d-a28d-6c5d84c0ee2f"
},
"time": "2018-03-29T8:04:00Z",
"fields": {
"duration": 132
}
},
{
"measurement": "brushEvents",
"tags": {
"user": "Carol",
"brushId": "6c89f539-71c6-490d-a28d-6c5d84c0ee2f"
},
"time": "2018-03-30T8:02:00Z",
"fields": {
"duration": 129
}
}
]
test = [{"measurement": "test_temperature",
"tags": {"room": "room1", "id": "1"},
"time": "2020-05-12T8:11:00Z",
"fields": { "temperature": "22"}
},
{"measurement": "test_temperature",
"tags": {"room": "room1", "id": "2"},
"time": "2020-05-12T8:13:00Z",
"fields": { "temperature": "25"}
},
{"measurement": "test_temperature",
"tags": {"room": "room1", "id": "3"},
"time": "2020-05-12T8:14:00Z",
"fields": { "temperature": "24"}
},
{"measurement": "test_temperature",
"tags": {"room": "room1", "id": "4"},
"time": "2020-05-12T8:15:00Z",
"fields": { "temperature": "23"}
},
{"measurement": "test_temperature",
"tags": {"room": "room1", "id": "5"},
"time": "2020-05-12T8:17:00Z",
"fields": { "temperature": "26"}
}]
client.write_points(test)
test = b'45'
test.decode("ascii")
client.write_points([{"measurement": "room_temperature", "time": "2020-05-12T8:09:00Z",
"fields": { "temperature": "25"}}])
results = client.query('SELECT "temperature" FROM "pyexample"."autogen"."room_temperature"')
results.raw
client.write_points([{'measurement': 'test_temperature', 'tags': {'room': 'room1', 'id': '6'}, 'time': '2020-08-05T11:44:25Z', 'fields': {'temperature': '53.6'}}]
)
results = client.query('SELECT "temperature" FROM "pyexample"."autogen"."test_temperature"')
results.raw
client.query('DELETE FROM "pyexample"."test_temperature" WHERE time < now()')
client.write_points(json_body)
results = client.query('SELECT "duration" FROM "pyexample"."autogen"."brushEvents" GROUP BY "user"')
results.raw
points = results.get_points(tags={'user':'Carol'})
for point in points:
print("Time: %s, Duration: %i" % (point['time'], point['duration']))
results = client.query('SELECT * FROM "pyexample"."temperature"')
```
# NEXUS tool: case study for the Souss-Massa basin - energy demand calculations
In this notebook a case study for the Souss-Massa basin is covered using the `nexustool` package. The water requirements for agricultural irrigation and domestic use were previously calculated using the Water Evaluation and Planning System (WEAP) model. In this case study, the energy requirements for groundwater pumping, wastewater treatment, desalination of seawater and pumping energy for water conveyance are estimated.
First import the package by running the following block:
```
import sys
sys.path.append("..") # this adds the parent folder to the module search path
import os
import nexustool
import pandas as pd
from dashboard.scripts.plotting import water_delivered_plot, unmet_demand_plot, water_supply_plot, wtd_plot, energy_demand_plot, crop_production
```
## 1. Read scenario data
After importing all required packages, the input GIS data is loaded into the variable `df`. Change the `data_folder`, `scenario` and `climate` variables to reflect the name and relative location of your data file. This dataset should already have the water demand for irrigation results.
```
data_folder = os.path.join('data', 'processed results')
scenario = 'Desalination'
climate = 'Climate Change'
input_folder = os.path.join(data_folder, scenario, climate)
```
## 2. Create nexus model
To create a model, simply create an instance of the `nexustool.Model()` class and store it in a variable. The `nexustool.Model()` class requires a dataframe as input data. Several other properties and parameter values can be defined by explicitly passing values to them. To see a full list of parameters and their explanation, refer to the documentation of the package. We will create a model using the `demand_data.gz` data:
```
#Define the path to read the scenario input data and reads it in
file_path = os.path.join(input_folder, 'demand_data.gz')
df = pd.read_csv(file_path)
#Creates the nexus model with the input dataframe
souss_massa = nexustool.Model(df)
```
## 3. Define variable names
The names of the properties of the model can be changed at any time. This is important for the model to know what each property is called within your input data. To check the current property names, run the `.print_properties()` method; a list with the name and current value of each property will be displayed.
Then you can provide the right names for each property, calling them and assigning a value as:
```python
souss_massa.elevation_diff = 'elevation_delta'
souss_massa.gw_depth = 'name_of_ground_water_depth'
```
In this particular case we will need to change the following default values:
```
souss_massa.elevation_diff = 'elevation_diff' #for the case of GW, the elevation_diff is set to be the wtd
souss_massa.L = 'distance' #for the case of GW, the distance is set to be the wtd
souss_massa.D = 'Pipe_diameter'
#Defines the name of the variable for Peak Water Demand and Seasonal Scheme Water demand (monthly)
souss_massa.pwd = 'pwd' # Peak Water Demand
souss_massa.sswd = 'sswd' # Seasonal Scheme Water Demand
souss_massa.df.rename(columns={'value': 'sswd'}, inplace=True) #Renames the name of the column value to sswd
souss_massa.pp_e = 'pp_e' # Peak Pumping Energy
souss_massa.pa_e = 'pa_e' # Pumping Average Energy
```
## 4. Define pipelines diameters and average pumping hours, pumping efficiency
Now we need to define the specifications of the water network, giving pipeline / canal diameter values:
```
souss_massa.df['Pipe_diameter'] = 1.2
souss_massa.df.loc[souss_massa.df['type'].str.contains('GW'), 'Pipe_diameter'] = 1000
souss_massa.df.loc[souss_massa.df['type'].str.contains('DS'), 'Pipe_diameter'] = 1.2
souss_massa.df.loc[souss_massa.df['type'].str.contains('Pipeline'), 'Pipe_diameter'] = 1.2
souss_massa.pumping_hours_per_day = 10
souss_massa.pump_eff = 0.6
```
## 5. Peak Water Demand (PWD)
The $PWD$ is defined as the daily peak cubic meters of water pumped per second within the month. To compute it, the $SSWD$ (m<sup>3</sup>/month) is divided by 30 days per month, 3600 seconds per hour and the average number of pumping hours per day. This provides the $PWD$ in m<sup>3</sup>/s:
$$
PWD\,(m^3/s) = \frac{SSWD\,(m^3/month)}{30\,(day/month)\cdot PumpHours\,(h/day)\cdot 3600\, (s/h)}
$$
Moreover, the $PWD$ for agricultural irrigation is assumed to be double the normal $PWD$. We make these calculations as per the following cell:
```
#Defines the PWD. It is defined as double the seasonal demand for agricultural sites
souss_massa.df[souss_massa.pwd] = souss_massa.df[souss_massa.sswd] / 30 / souss_massa.pumping_hours_per_day / 3600 #to convert to cubic meter per second [m3/s]
souss_massa.df.loc[souss_massa.df['type']=='Agriculture', souss_massa.pwd] *= 2
```
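As a quick sanity check of the unit conversion above, assume an illustrative monthly demand of 1,080,000 m³ and 10 pumping hours per day (these numbers are not from the model):

```python
# Sanity check of the PWD unit conversion with an assumed monthly demand
sswd = 1_080_000            # m3/month (illustrative value)
pumping_hours_per_day = 10  # h/day

pwd = sswd / 30 / pumping_hours_per_day / 3600  # m3/s
print(pwd)        # 1.0 m3/s
print(pwd * 2)    # 2.0 m3/s for an agricultural site (doubled peak)
```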
## 6. Calculate pumping energy requirements
To estimate the pumping energy requirements for conveyance, first we need to calculate the Total Dynamic Head (TDH). This is a measure in meters that accounts for the elevation difference between two points and the pressure loss in distribution.
For that, the area $A$ `.pipe_area()`, the velocity $V$ `.flow_velocity()`, the Reynolds number $Re$ `.reynolds()` and the friction factor $f$ `.friction_factor()` need to be estimated. The `nexustool` package provides simple functions that allow us to easily estimate these variables, with the following formulas implemented in the background:
$$
A\,(m^2) = \pi\cdot \frac{D^2}{4}
$$
$$
V\,(m/s) = \frac{SSWD\,(m^3/month)}{PumpHours\,(h/day)\cdot 30\,(day/month)\cdot 3600\,(s/h)\cdot A\,(m^2)}
$$
$$
Re = \frac{V\,(m/s)\cdot D\,(m)}{v\,(m^2/s)}
$$
Where $v$ is the kinematic viscosity of water, around 1.004e-06 m<sup>2</sup>/s. The friction factor is estimated according to the Swamee–Jain equation:
$$
f = \frac{0.25}{\left[log_{10}\left(\frac{\epsilon}{3.7D}+\frac{5.74}{Re^{0.9}}\right)\right]^2}
$$
Where $\epsilon$ is the roughness of the material.
```
souss_massa.pipe_area()
souss_massa.flow_velocity()
souss_massa.reynolds()
souss_massa.friction_factor()
```
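The same four quantities can be reproduced outside the package directly from the formulas above. The diameter, demand and roughness values below are illustrative assumptions, not values taken from the model:

```python
import math

# Assumed illustrative inputs
D = 1.2                 # pipe diameter (m)
sswd = 1_080_000        # monthly water demand (m3/month)
pump_hours = 10         # pumping hours per day
nu = 1.004e-6           # kinematic viscosity of water (m2/s)
eps = 0.061e-3          # assumed pipe roughness (m)

A = math.pi * D**2 / 4                           # pipe cross-sectional area (m2)
V = sswd / (pump_hours * 30 * 3600 * A)          # flow velocity (m/s)
Re = V * D / nu                                  # Reynolds number
f = 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2  # Swamee-Jain

print(f"V = {V:.3f} m/s, Re = {Re:.3e}, f = {f:.4f}")
```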
Then, the TDH can be calculated by simply calling the `.get_tdh()` function.
$$
TDH\,(m) = f\cdot \frac{L\,(m)}{D\,(m)}\cdot \frac{V(m/s)^2}{2\cdot g\,(m/s^2)}
$$
The conveyance pumping energy requirements are then obtained by calling the `.get_pumping_energy()` method. The equation used to calculate the Electricity Demand ($E_D$) for pumping is as follows:
$$
E_D\,(kW_h) = \frac{SSWD\,(m^3)\cdot \rho\,(kg/m^3)\cdot g\,(m/s^2)\cdot TDH\,(m)}{PP_{eff}\,(\%)\cdot 3600\,(s/h)\cdot 1000\,(W/kW)}
$$
The variable within the Model for $E_D$ is `pa_e`, the Pumping Average Electricity requirements.
Moreover, the Power Demand for pumping ($PD$) is denoted by the variable `pp_e` and calculated by the following formula:
$$
PD\,(kW) = \frac{PWD\,(m^3/s)\cdot \rho\,(kg/m^3)\cdot g\,(m/s^2)\cdot TDH\,(m)}{PP_{eff}\,(\%)\cdot 1000\,(W/kW)}
$$
The `.get_pumping_energy()` method calculates both the $E_D$ (`pa_e`) and $PD$ (`pp_e`).
```
souss_massa.get_tdh()
souss_massa.get_pumping_energy()
souss_massa.df.loc[souss_massa.df.pp_e<0, souss_massa.pp_e] = 0 # ensures no negative power values are considered
souss_massa.df.loc[souss_massa.df.pa_e<0, souss_massa.pa_e] = 0 # ensures no negative energy values are considered
# We exclude energy for pumping calculations done for the Complexe Aoulouz Mokhtar Soussi,
# as this pipeline is known to be driven by gravity only
souss_massa.df.loc[souss_massa.df['Supply point'].str.contains('Complexe Aoulouz Mokhtar Soussi'), 'pa_e'] = None
```
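Putting the TDH and energy formulas together in plain Python (all numeric inputs below are illustrative assumptions; adding the static lift to the friction head is an assumption of this sketch, not necessarily how `.get_tdh()` combines them):

```python
# Assumed illustrative inputs
rho, g = 1000.0, 9.81     # water density (kg/m3), gravity (m/s2)
pump_eff = 0.6            # pumping efficiency
f, L, D, V = 0.013, 5000.0, 1.2, 0.88   # friction factor, length (m), diameter (m), velocity (m/s)
sswd = 1_080_000          # monthly demand (m3/month)
pwd = 1.0                 # peak water demand (m3/s)
elevation_diff = 50.0     # static lift (m)

# Static lift plus friction head loss
tdh = elevation_diff + f * (L / D) * V**2 / (2 * g)

# Monthly pumping energy (kWh) and peak power demand (kW)
pa_e = sswd * rho * g * tdh / (pump_eff * 3600 * 1000)
pp_e = pwd * rho * g * tdh / (pump_eff * 1000)

print(f"TDH = {tdh:.1f} m, energy = {pa_e:.0f} kWh/month, power = {pp_e:.0f} kW")
```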
## 7. Calculating desalination energy requirements
Desalination energy requirements are estimated by multiplying the monthly average desalinated water (`sswd`) by an energy intensity factor (`desal_energy_int`) based on the characteristics of the desalination plant.
```
#Define energy intensity for seawater desalination project
desal_energy_int = 3.31 # kWh/m3
#Create a new nexus Model with the data relevant to the desalination plant only, filtering by the keyword DS (Desalination)
sm_desal = nexustool.Model(souss_massa.df.loc[souss_massa.df['type'].str.contains('DS')].copy())
#Multiply the sswd by the energy intensity for treatment
sm_desal.df[souss_massa.pa_e] = sm_desal.df[souss_massa.sswd] * desal_energy_int
```
## 8. Calculating wastewater treatment energy requirements
Wastewater treatment energy is dependent on the type of treatment required. Wastewater treatment can be subdivided into three stages: primary, secondary and tertiary. The treatment stages used are then dependent on the final quality requirements of the treated wastewater. Thus, for wastewater that will be treated and returned to the ecosystem, primary to secondary treatment is often enough. On the other hand, treated wastewater intended for agricultural irrigation or drinking purposes should go through secondary to tertiary treatment to ensure proper disinfection levels.
Depending on the scenario run, we then need to use the proper wastewater treatment energy intensity. In general, the more stages, the higher the energy requirements. In this model, we used an energy intensity of **0.1 kWh/m<sup>3</sup>** for treated wastewater that is not being reused, and **0.8 kWh/m<sup>3</sup>** for treated wastewater reused in agricultural irrigation.
```
#Here we load the WWTP inflow data
file_path = os.path.join(input_folder, 'wwtp_inflow.gz')
df_wwtp = pd.read_csv(file_path)
#We define an energy intensity for wastewater treatment and compute the energy demand
wwtp_energy_int = 0.1 # kWh/m3
df_wwtp['pa_e'] = df_wwtp.value * wwtp_energy_int
```
## 9. Saving the results
Finally, we save the resulting dataframes as `.gz` files, which is a compressed version of a `csv` file:
```
#Define and create the output folder
results_folder = os.path.join('dashboard', 'data', scenario, climate)
os.makedirs(results_folder, exist_ok=True)
#Save the results
souss_massa.df.to_csv(os.path.join(results_folder, 'results.gz'), index=False)
sm_desal.df.to_csv(os.path.join(results_folder, 'desal_data.gz'), index=False)
df_wwtp.to_csv(os.path.join(results_folder, 'wwtp_data.gz'), index=False)
```
## 10. Visualizing some results
Using some functions imported from the visualization tool, we can plot some general results for the scenario:
### Water delivered (Mm<sup>3</sup>)
```
water_delivered_plot(souss_massa.df, 'Year', {})
```
### Energy demand (GWh)
```
energy_demand_plot(souss_massa.df, df_wwtp, sm_desal.df, 'Year', {})
```
### Unmet demand (%)
```
unmet_demand_plot(souss_massa.df, 'Year', {})
```
### Water supplied (Mm<sup>3</sup>/year)
```
water_supply_plot(souss_massa.df, 'Year', {})
```
### Groundwater depth (m)
```
wtd_plot(souss_massa.df, 'Date', {})
```
### Crop production (ton/year)
```
crop = pd.read_csv(os.path.join(input_folder, 'production.gz'))
crop_production(crop, 'crop', {})
```
# Advanced Vision
- **Instructor**: Jongwoo Lim / Jiun Bae
- **Email**: [jlim@hanyang.ac.kr](mailto:jlim@hanyang.ac.kr) / [jiunbae.623@gmail.com](mailto:jiunbae.623@gmail.com)
## Machine Learning Basics
In this example we will take a quick look at how machine learning works. The goals of this example are as follows:
- Understand **Machine Learning** and how it works.
- Learn the basics of how to **write and use code**.
This example is also written in [IPython Notebook](https://ipython.org/notebook.html), an interactive computational environment in which you can run code directly.
## Machine Learning Model
You can think of a basic machine learning model as a function that returns a predicted value for an input.

To make this model return the results we want, we can use the training data to update the model with the differences from the desired results.

## Perceptron: <small>Artificial Neuron</small>
An artificial neuron is a mathematical function based on a model of biological neurons: each neuron takes inputs, weighs them separately, sums them up, and passes this sum through a nonlinear function to produce an output. A perceptron is a neural network unit (an artificial neuron) that performs certain computations to detect features in the input data.

```
import numpy as np
import matplotlib.pyplot as plt
data_size = 1000
dimension = 2
```
### Random points
1000 dots within the range of $x: 0..1, y: 0..1$.
Points with $y>x$ will be green (label=0) and points with $x>y$ will be blue (label=1).
```
points = np.random.rand(data_size, dimension)
labels = np.zeros(data_size)
labels[points[:, 0] > points[:, 1]] = 1
line = np.arange(0, 1, .001)
plt.scatter(points[labels == 0][:, 0], points[labels == 0][:, 1], c='g')
plt.scatter(points[labels == 1][:, 0], points[labels == 1][:, 1], c='b')
plt.plot(line, line, '-r')
```
### Simple perceptron
$w$ is weight, $b$ is bias
$y = wx + b$
```
weight = np.random.rand(dimension)
bias = np.random.rand(dimension)
def forward(weight, bias, X):
return np.sum(np.multiply(X, weight) + bias)
prediction = forward(weight, bias, points[0])
print(f'We expected {labels[0]}, prediction is {1 if prediction > .5 else 0}')
```
Calculate `error` and update `weight`, `bias`
```
error = prediction - labels[0]
weight = weight - .1 * error * points[0]
bias = bias - .1 * error
```
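A self-contained check (hypothetical point, same update rule and learning rate as above) that one such step shrinks the error on that point:

```python
import numpy as np

rng = np.random.default_rng(0)
weight = rng.random(2)
bias = rng.random(2)
x = np.array([0.2, 0.8])  # y > x here, so the label is 0
label = 0.0

def forward(weight, bias, X):
    return np.sum(np.multiply(X, weight) + bias)

prediction = forward(weight, bias, x)
error = prediction - label
# One gradient step, mirroring the update rule above
weight_new = weight - .1 * error * x
bias_new = bias - .1 * error

# The updated parameters produce a smaller error on the same point.
assert abs(forward(weight_new, bias_new, x) - label) < abs(error)
```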
### Train & Test
```
train_size = int(data_size * .7)
train_points = points[:train_size]
train_labels = labels[:train_size]
test_points = points[train_size:]
test_labels = labels[train_size:]
for epoch in range(500):
# train
for x, y in zip(train_points, train_labels):
# get prediction
pred = forward(weight, bias, x)
# calculate error
error = pred - y
# update model
weight -= .01 * error * x
bias -= .01 * error
# test
if not (epoch % 100):
predictions = np.array([forward(weight, bias, x).item() > .5 for x, _ in zip(test_points, test_labels)])
print(f'Acc: {(predictions == test_labels).sum() / len(test_labels):.4f}')
plt.scatter(test_points[predictions == 0][:, 0], test_points[predictions == 0][:, 1], c='g')
plt.scatter(test_points[predictions == 1][:, 0], test_points[predictions == 1][:, 1], c='b')
plt.plot(line, line, '-r')
plt.show()
```
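The per-sample loop above can also be vectorized so that each epoch updates on the whole training batch at once. A sketch under the same setup (random points, label 1 when $x>y$), using a scalar bias for simplicity rather than the notebook's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 2))
labels = (points[:, 0] > points[:, 1]).astype(float)

weight = rng.random(2)
bias = rng.random()

for epoch in range(500):
    # Forward pass for all points at once, shape (1000,)
    preds = points @ weight + bias
    errors = preds - labels
    # Average the per-sample gradient updates over the batch
    weight -= .1 * (errors[:, None] * points).mean(axis=0)
    bias -= .1 * errors.mean()

accuracy = (((points @ weight + bias) > .5) == labels.astype(bool)).mean()
```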
<a href="https://colab.research.google.com/github/NicoleRichards1998/FinRL/blob/master/FinRL_Raytune_with_Alpaca_Paper_Trading.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
print("Setting up colab environment")
!pip uninstall -y -q pyarrow
!pip install -q -U ray[tune]
!pip install -q ray[debug]
# A hack to force the runtime to restart, needed to include the above dependencies.
print("Done installing! Restarting via forced crash (this is not an issue).")
import os
os._exit(0)
## If you are running on Google Colab, please install TensorFlow 2.0 by uncommenting below..
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
#Installing FinRL
%%capture
!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
%%capture
!pip install "ray[tune]" optuna
%%capture
!pip install int_date==0.1.8
!pip install tensorboardX
!pip install bayesian-optimization
# Load the TensorBoard notebook extension
%load_ext tensorboard
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import random
# matplotlib.use('Agg')
import datetime
import optuna
%matplotlib inline
from finrl.apps import config
from finrl.apps.config import DOW_30_TICKER
from finrl.apps.config import TECHNICAL_INDICATORS_LIST
from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
from finrl.finrl_meta.env_stock_trading.env_stocktrading_np import StockTradingEnv
#from intraday.env import SingleAgentEnv
from finrl.finrl_meta.env_stock_trading.env_stock_papertrading import AlpacaPaperTrading
from finrl.drl_agents.rllib.models import DRLAgent as DRLAgent_rllib
#from stable_baselines3.common.vec_env import DummyVecEnv
from finrl.finrl_meta.data_processor import DataProcessor
from finrl.plot import backtest_stats, backtest_plot, get_daily_return, get_baseline
import ray
from pprint import pprint
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.agents.ddpg import DDPGTrainer
from ray.rllib.agents.a3c import A2CTrainer
from ray.rllib.agents.a3c import a2c
from ray.rllib.agents.ddpg import ddpg, td3
from ray.rllib.agents.ppo import ppo
from ray.rllib.agents.sac import sac
import sys
sys.path.append("../FinRL-Library")
import os
import itertools
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.schedulers import AsyncHyperBandScheduler, PopulationBasedTraining
from ray.tune.suggest.hebo import HEBOSearch
from ray.tune.suggest.optuna import OptunaSearch
from ray.tune.logger import (
CSVLoggerCallback,
JsonLoggerCallback,
JsonLogger,
CSVLogger,
TBXLoggerCallback,
TBXLogger,
)
from ray.tune.result import (
EXPR_PARAM_FILE,
EXPR_PARAM_PICKLE_FILE,
EXPR_PROGRESS_FILE,
EXPR_RESULT_FILE,
)
from ray.tune.registry import register_env
from ray.tune import ExperimentAnalysis
from ray.tune.suggest import Repeater
import time
from typing import Dict, Optional, Any
import tensorflow as tf
import datetime, os
from google.colab import files
#!pip install 'scipy<1.7.0' 'pymoo<0.5.0' 'HEBO==0.1.0'
ticker_list = DOW_30_TICKER
action_dim = len(DOW_30_TICKER)
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
API_KEY = "<YOUR_ALPACA_API_KEY>"
API_SECRET = "<YOUR_ALPACA_API_SECRET>"
APCA_API_BASE_URL = 'https://paper-api.alpaca.markets'
data_url = 'wss://data.alpaca.markets'
env = StockTradingEnv
def sample_ddpg_params():
return {
"buffer_size": tune.choice([int(1e4), int(1e5), int(1e6)]),
"lr": tune.loguniform(1e-5, 1),
"train_batch_size": tune.choice([32, 64, 128, 256, 512])
}
def sample_a2c_params():
return{
"lambda": tune.choice([0.1,0.3,0.5,0.7,0.9,1.0]),
"entropy_coeff": tune.loguniform(0.00000001, 0.1),
"lr": tune.loguniform(1e-5, 1)
}
# cite : https://medium.com/aureliantactics/ppo-hyperparameters-and-ranges-6fc2d29bccbe
def sample_ppo_params():
return {
"entropy_coeff": tune.loguniform(0.00001, 0.001),
"lr": tune.loguniform(5e-6, 0.003),
#"sgd_minibatch_size": tune.choice([ 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 ]),
"lambda": tune.loguniform(0.9, 1)
}
MODELS = {"a2c": a2c, "ddpg": ddpg, "td3": td3, "sac": sac, "ppo": ppo}
def get_train_env(start_date, end_date, ticker_list, data_source, time_interval,
technical_indicator_list, env, model_name, if_vix = True,
**kwargs):
#fetch data
DP = DataProcessor(data_source = data_source,
API_KEY = API_KEY,
API_SECRET = API_SECRET,
APCA_API_BASE_URL = APCA_API_BASE_URL
)
data = DP.download_data(ticker_list, start_date, end_date, time_interval)
data = DP.clean_data(data)
data = DP.add_technical_indicator(data, technical_indicator_list)
if if_vix:
data = DP.add_vix(data)
price_array, tech_array, turbulence_array = DP.df_to_array(data, if_vix)
train_env_config = {'price_array':price_array,
'tech_array':tech_array,
'turbulence_array':turbulence_array,
'if_train':True}
return train_env_config
#Function to calculate the sharpe ratio from the list of total_episode_reward
def calculate_sharpe(episode_reward:list):
perf_data = pd.DataFrame(data=episode_reward,columns=['reward'])
perf_data['daily_return'] = perf_data['reward'].pct_change(1)
if perf_data['daily_return'].std() !=0:
sharpe = (252**0.5)*perf_data['daily_return'].mean()/ \
perf_data['daily_return'].std()
return sharpe
else:
return 0
def get_test_config(start_date, end_date, ticker_list, data_source, time_interval,
technical_indicator_list, env, model_name, if_vix = True,
**kwargs):
DP = DataProcessor(data_source = data_source,
API_KEY = API_KEY,
API_SECRET = API_SECRET,
APCA_API_BASE_URL = APCA_API_BASE_URL
)
data = DP.download_data(ticker_list, start_date, end_date, time_interval)
data = DP.clean_data(data)
data = DP.add_technical_indicator(data, technical_indicator_list)
if if_vix:
data = DP.add_vix(data)
price_array, tech_array, turbulence_array = DP.df_to_array(data, if_vix)
test_env_config = {'price_array':price_array,
'tech_array':tech_array,
'turbulence_array':turbulence_array,'if_train':False}
return test_env_config
def val_or_test(test_env_config,agent_path,model_name,env):
episode_total_reward = DRL_prediction(model_name,test_env_config,
env = env,
agent_path=agent_path)
return calculate_sharpe(episode_total_reward),episode_total_reward
TRAIN_START_DATE = '2022-01-17'
TRAIN_END_DATE = '2022-01-20'
VAL_START_DATE = '2022-01-24'
VAL_END_DATE = '2022-01-25'
TEST_START_DATE = '2022-01-26'
TEST_END_DATE = '2022-01-27'
technical_indicator_list = TECHNICAL_INDICATORS_LIST
model_name = 'ppo'
env = StockTradingEnv
ticker_list = DOW_30_TICKER
data_source = 'alpaca'
time_interval = '1Min'
train_env_config = get_train_env(TRAIN_START_DATE, VAL_END_DATE,
ticker_list, data_source, time_interval,
technical_indicator_list, env, model_name)
from ray.tune.registry import register_env
env_name = 'StockTrading_train_env'
register_env(env_name, lambda config: env(train_env_config))
'''
pbt = PopulationBasedTraining(
time_attr="training_iteration",
#metric="episode_reward_mean",
#mode="max",
perturbation_interval=10, # every 10 `time_attr` units
# (training_iterations in this case)
hyperparam_mutations={
# Perturb factor1 by scaling it by 0.8 or 1.2. Resampling
# resets it to a value sampled from the lambda function.
"factor_1": lambda: random.uniform(0.0, 20.0),
# Alternatively, use tune search space primitives.
# The search space for factor_1 is equivalent to factor_2.
"factor_2": tune.uniform(0.0, 20.0),
# Perturb factor3 by changing it to an adjacent value, e.g.
# 10 -> 1 or 10 -> 100. Resampling will choose at random.
"factor_3": [1, 10, 100, 1000, 10000],
# Using tune.choice is NOT equivalent to the above.
# factor_4 is treated as a continuous hyperparameter.
"factor_4": tune.choice([1, 10, 100, 1000, 10000]),
})
'''
MODEL_TRAINER = {'a2c':A2CTrainer,'ppo':PPOTrainer,'ddpg':DDPGTrainer}
if model_name == "ddpg":
sample_hyperparameters = sample_ddpg_params()
elif model_name == "ppo":
sample_hyperparameters = sample_ppo_params()
elif model_name == "a2c":
sample_hyperparameters = sample_a2c_params()
def run_optuna_tune():
#bayesopt = BayesOptSearch(metric="episode_reward_mean", mode="max")
#hebo = HEBOSearch()
#re_search_alg = Repeater(search_alg, repeat=10)
algo = OptunaSearch()
algo = ConcurrencyLimiter(algo,max_concurrent=4)
scheduler = AsyncHyperBandScheduler()
#scheduler=pbt
num_samples = 1
training_iterations = 100
analysis = tune.run(
MODEL_TRAINER[model_name],
metric="episode_reward_mean", #The metric to optimize for tuning
mode="max", #Maximize the metric
search_alg = algo,
scheduler=scheduler, #To prune bad trials
config = {**sample_hyperparameters,
'env':'StockTrading_train_env','num_workers':1,
'num_gpus':1,'framework':'tf2'},
num_samples = num_samples, #Number of hyperparameters to test out
stop = {'training_iteration':training_iterations},#Time attribute to validate the results
verbose=1,
local_dir="./tuned_models",#Saving tensorboard plots
# resources_per_trial={'gpu':1,'cpu':1},
max_failures = 1,#Extra Trying for the failed trials
raise_on_failed_trial=False,#Don't return error even if you have errored trials
keep_checkpoints_num = num_samples,
checkpoint_score_attr ='episode_reward_mean',#Only store keep_checkpoints_num trials based on this score
checkpoint_freq=training_iterations, #Checkpointing all the trials
callbacks=[TBXLoggerCallback()]
)
print("Best hyperparameter: ", analysis.best_config)
return analysis
analysis = run_optuna_tune()
%tensorboard --logdir ./tuned_models/PPOTrainer_2022-02-23_08-43-01
dfs = analysis.trial_dataframes
ax = None # This plots everything on the same plot
for d in dfs.values():
ax = d.episode_reward_mean.plot(ax=ax, legend=False)
ax.set_xlabel("Epochs")
ax.set_ylabel("Episode reward Mean");
# trial_dataframes is a dict keyed by trial logdir, so take the first value
df = pd.DataFrame(list(dfs.values())[0])
df.to_csv('training_data')
files.download('training_data')
ExpAnalysis = ExperimentAnalysis(
experiment_checkpoint_path="~/tune_results/my_exp/state.json")
best_logdir = analysis.get_best_logdir(metric='episode_reward_mean',mode='max')
best_logdir
best_checkpoint = analysis.best_checkpoint
best_checkpoint
best_config = analysis.best_config
test_env_config = get_test_config(TEST_START_DATE, TEST_END_DATE, ticker_list, data_source, time_interval,
technical_indicator_list, env, model_name)
def DRL_prediction(
model_name,
test_env_config,
env,
model_config,
agent_path,
env_name_test='StockTrading_test_env'
):
env_instance = env(test_env_config)
register_env(env_name_test, lambda config: env(test_env_config))
model_config['env'] = env_name_test
# ray.init() # Other Ray APIs will not work until `ray.init()` is called.
if model_name == "ppo":
trainer = MODELS[model_name].PPOTrainer(config=model_config)
elif model_name == "a2c":
trainer = MODELS[model_name].A2CTrainer(config=model_config)
elif model_name == "ddpg":
trainer = MODELS[model_name].DDPGTrainer(config=model_config)
elif model_name == "td3":
trainer = MODELS[model_name].TD3Trainer(config=model_config)
elif model_name == "sac":
trainer = MODELS[model_name].SACTrainer(config=model_config)
try:
trainer.restore(agent_path)
print("Restoring from checkpoint path", agent_path)
except BaseException:
raise ValueError("Fail to load agent!")
# test on the testing env
state = env_instance.reset()
episode_returns = list() # the cumulative_return / initial_account
episode_total_assets = list()
episode_total_assets.append(env_instance.initial_total_asset)
done = False
while not done:
action = trainer.compute_single_action(state)
state, reward, done, _ = env_instance.step(action)
total_asset = (
env_instance.amount
+ (env_instance.price_ary[env_instance.day] * env_instance.stocks).sum()
)
episode_total_assets.append(total_asset)
episode_return = total_asset / env_instance.initial_total_asset
episode_returns.append(episode_return)
ray.shutdown()
print("episode return: " + str(episode_return))
print("Test Finished!")
return episode_total_assets
episode_total_assets = DRL_prediction(
model_name,
test_env_config,
env,
best_config,
best_checkpoint,
env_name_test='StockTrading_test_env')
print('The test sharpe ratio is: ',calculate_sharpe(episode_total_assets))
df_account_test = pd.DataFrame(data=episode_total_assets,columns=['account_value'])
df_account_test.to_csv('account_memory')
files.download('account_memory')
```
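The `calculate_sharpe` helper defined above can be sanity-checked in isolation. A standalone copy with hand-made account-value series (the numbers are illustrative only):

```python
import pandas as pd

# Standalone copy of the calculate_sharpe helper from the notebook above.
def calculate_sharpe(episode_reward: list) -> float:
    perf_data = pd.DataFrame(data=episode_reward, columns=['reward'])
    perf_data['daily_return'] = perf_data['reward'].pct_change(1)
    if perf_data['daily_return'].std() != 0:
        return (252 ** 0.5) * perf_data['daily_return'].mean() / \
               perf_data['daily_return'].std()
    return 0

assert calculate_sharpe([100, 101, 102, 103]) > 0   # rising equity
assert calculate_sharpe([103, 102, 101, 100]) < 0   # falling equity
assert calculate_sharpe([100, 100, 100, 100]) == 0  # flat equity
```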
# Paper Trading
```
ray.shutdown()
ray.init()
state_dim = 1 + 2 + 3 * action_dim + len(TECHNICAL_INDICATORS_LIST) * action_dim
import datetime
import threading
from finrl.finrl_meta.data_processors.processor_alpaca import AlpacaProcessor
import alpaca_trade_api as tradeapi
import time
import pandas as pd
import numpy as np
import torch
import gym
class AlpacaPaperTrading():
def __init__(self,ticker_list, time_interval, drl_lib, agent, cwd, net_dim,
state_dim, action_dim, API_KEY, API_SECRET,
APCA_API_BASE_URL, tech_indicator_list, turbulence_thresh=30,
max_stock=1e2, latency = None):
#load agent
self.drl_lib = drl_lib
if agent =='ppo':
if drl_lib == 'elegantrl':
from elegantrl.agent import AgentPPO
from elegantrl.run import train_and_evaluate, init_agent
from elegantrl.config import Arguments
#load agent
config = {'state_dim':state_dim,
'action_dim':action_dim,}
args = Arguments(agent=AgentPPO, env=StockEnvEmpty(config))
args.cwd = cwd
args.net_dim = net_dim
# load agent
try:
agent = init_agent(args, gpu_id = 0)
self.act = agent.act
self.device = agent.device
except BaseException:
raise ValueError("Fail to load agent!")
elif drl_lib == 'rllib':
from ray.rllib.agents import ppo
from ray.rllib.agents.ppo.ppo import PPOTrainer
from ray.tune.registry import register_env
train_env_config = {
'state_dim':state_dim,
'action_dim':action_dim,
"if_train": False,}
env = StockEnvEmpty(train_env_config)
register_env("Stock_Env_Empty", lambda config: env)
print("environment is registered")
model_config = best_config
model_config['env'] = "Stock_Env_Empty"
model_config["log_level"] = "WARN"
model_config['env_config'] = train_env_config
print("model config done")
trainer = PPOTrainer(env="Stock_Env_Empty", config=model_config)
print("the ppo trainer is initialised")
try:
trainer.restore(cwd)
self.agent = trainer
print("Restoring from checkpoint path", cwd)
except:
raise ValueError('Fail to load agent!')
elif drl_lib == 'stable_baselines3':
from stable_baselines3 import PPO
try:
#load agent
self.model = PPO.load(cwd)
print("Successfully load model", cwd)
except:
raise ValueError('Fail to load agent!')
else:
raise ValueError('The DRL library input is NOT supported yet. Please check your input.')
else:
raise ValueError('Agent input is NOT supported yet.')
#connect to Alpaca trading API
try:
self.alpaca = tradeapi.REST(API_KEY,API_SECRET,APCA_API_BASE_URL, 'v2')
except:
raise ValueError('Fail to connect Alpaca. Please check account info and internet connection.')
#read trading time interval
if time_interval == '1s':
self.time_interval = 1
elif time_interval == '5s':
self.time_interval = 5
elif time_interval == '1Min':
self.time_interval = 60
elif time_interval == '5Min':
self.time_interval = 60 * 5
elif time_interval == '15Min':
self.time_interval = 60 * 15
else:
raise ValueError('Time interval input is NOT supported yet.')
#read trading settings
self.tech_indicator_list = tech_indicator_list
self.turbulence_thresh = turbulence_thresh
self.max_stock = max_stock
#initialize account
self.stocks = np.asarray([0] * len(ticker_list)) #stocks holding
self.stocks_cd = np.zeros_like(self.stocks)
self.cash = None #cash record
self.stocks_df = pd.DataFrame(self.stocks, columns=['stocks'], index = ticker_list)
self.asset_list = []
self.price = np.asarray([0] * len(ticker_list))
self.stockUniverse = ticker_list
self.turbulence_bool = 0
self.equities = []
def test_latency(self, test_times = 10):
total_time = 0
for i in range(0, test_times):
time0 = time.time()
self.get_state()
time1 = time.time()
temp_time = time1 - time0
total_time += temp_time
latency = total_time/test_times
print('latency for data processing: ', latency)
return latency
def run(self):
orders = self.alpaca.list_orders(status="open")
for order in orders:
self.alpaca.cancel_order(order.id)
# Wait for market to open.
print("Waiting for market to open...")
tAMO = threading.Thread(target=self.awaitMarketOpen)
tAMO.start()
tAMO.join()
print("Market opened.")
while True:
# Figure out when the market will close so we can prepare to sell beforehand.
clock = self.alpaca.get_clock()
closingTime = clock.next_close.replace(tzinfo=datetime.timezone.utc).timestamp()
currTime = clock.timestamp.replace(tzinfo=datetime.timezone.utc).timestamp()
self.timeToClose = closingTime - currTime
if(self.timeToClose < (60)):
# Stop trading when 1 minute til market close.
print("Market closing soon. Stop trading.")
break
'''# Close all positions when 1 minutes til market close.
print("Market closing soon. Closing positions.")
positions = self.alpaca.list_positions()
for position in positions:
if(position.side == 'long'):
orderSide = 'sell'
else:
orderSide = 'buy'
qty = abs(int(float(position.qty)))
respSO = []
tSubmitOrder = threading.Thread(target=self.submitOrder(qty, position.symbol, orderSide, respSO))
tSubmitOrder.start()
tSubmitOrder.join()
# Run script again after market close for next trading day.
print("Sleeping until market close (15 minutes).")
time.sleep(60 * 15)'''
else:
trade = threading.Thread(target=self.trade)
trade.start()
trade.join()
last_equity = float(self.alpaca.get_account().last_equity)
cur_time = time.time()
self.equities.append([cur_time,last_equity])
time.sleep(self.time_interval)
def awaitMarketOpen(self):
isOpen = self.alpaca.get_clock().is_open
while(not isOpen):
clock = self.alpaca.get_clock()
openingTime = clock.next_open.replace(tzinfo=datetime.timezone.utc).timestamp()
currTime = clock.timestamp.replace(tzinfo=datetime.timezone.utc).timestamp()
timeToOpen = int((openingTime - currTime) / 60)
print(str(timeToOpen) + " minutes til market open.")
time.sleep(60)
isOpen = self.alpaca.get_clock().is_open
def trade(self):
state = self.get_state()
if self.drl_lib == 'elegantrl':
with torch.no_grad():
s_tensor = torch.as_tensor((state,), device=self.device)
a_tensor = self.act(s_tensor)
action = a_tensor.detach().cpu().numpy()[0]
action = (action * self.max_stock).astype(int)
elif self.drl_lib == 'rllib':
action = self.agent.compute_single_action(state)
elif self.drl_lib == 'stable_baselines3':
action = self.model.predict(state)[0]
else:
raise ValueError('The DRL library input is NOT supported yet. Please check your input.')
self.stocks_cd += 1
if self.turbulence_bool == 0:
min_action = 10 # stock_cd
for index in np.where(action < -min_action)[0]: # sell_index:
sell_num_shares = min(self.stocks[index], -action[index])
qty = abs(int(sell_num_shares))
respSO = []
tSubmitOrder = threading.Thread(target=self.submitOrder(qty, self.stockUniverse[index], 'sell', respSO))
tSubmitOrder.start()
tSubmitOrder.join()
self.cash = float(self.alpaca.get_account().cash)
self.stocks_cd[index] = 0
for index in np.where(action > min_action)[0]: # buy_index:
if self.cash < 0:
tmp_cash = 0
else:
tmp_cash = self.cash
buy_num_shares = min(tmp_cash // self.price[index], abs(int(action[index])))
qty = abs(int(buy_num_shares))
respSO = []
tSubmitOrder = threading.Thread(target=self.submitOrder(qty, self.stockUniverse[index], 'buy', respSO))
tSubmitOrder.start()
tSubmitOrder.join()
self.cash = float(self.alpaca.get_account().cash)
self.stocks_cd[index] = 0
else: # sell all when turbulence
positions = self.alpaca.list_positions()
for position in positions:
if(position.side == 'long'):
orderSide = 'sell'
else:
orderSide = 'buy'
qty = abs(int(float(position.qty)))
respSO = []
tSubmitOrder = threading.Thread(target=self.submitOrder(qty, position.symbol, orderSide, respSO))
tSubmitOrder.start()
tSubmitOrder.join()
self.stocks_cd[:] = 0
def get_state(self):
alpaca = AlpacaProcessor(api=self.alpaca)
price, tech, turbulence = alpaca.fetch_latest_data(ticker_list = self.stockUniverse, time_interval='1Min',
tech_indicator_list=self.tech_indicator_list)
turbulence_bool = 1 if turbulence >= self.turbulence_thresh else 0
turbulence = (self.sigmoid_sign(turbulence, self.turbulence_thresh) * 2 ** -5).astype(np.float32)
tech = tech * 2 ** -7
positions = self.alpaca.list_positions()
stocks = [0] * len(self.stockUniverse)
for position in positions:
ind = self.stockUniverse.index(position.symbol)
stocks[ind] = ( abs(int(float(position.qty))))
stocks = np.asarray(stocks, dtype = float)
cash = float(self.alpaca.get_account().cash)
self.cash = cash
self.stocks = stocks
self.turbulence_bool = turbulence_bool
self.price = price
amount = np.array(self.cash * (2 ** -12), dtype=np.float32)
scale = np.array(2 ** -6, dtype=np.float32)
state = np.hstack((amount,
turbulence,
self.turbulence_bool,
price * scale,
self.stocks * scale,
self.stocks_cd,
tech,
)).astype(np.float32)
print(len(self.stockUniverse))
return state
def submitOrder(self, qty, stock, side, resp):
if(qty > 0):
try:
self.alpaca.submit_order(stock, qty, side, "market", "day")
print("Market order of | " + str(qty) + " " + stock + " " + side + " | completed.")
resp.append(True)
except:
print("Order of | " + str(qty) + " " + stock + " " + side + " | did not go through.")
resp.append(False)
else:
print("Quantity is 0, order of | " + str(qty) + " " + stock + " " + side + " | not completed.")
resp.append(True)
@staticmethod
def sigmoid_sign(ary, thresh):
def sigmoid(x):
return 1 / (1 + np.exp(-x * np.e)) - 0.5
return sigmoid(ary / thresh) * thresh
class StockEnvEmpty(gym.Env):
#Empty Env used for loading rllib agent
def __init__(self,config):
state_dim = config['state_dim']
action_dim = config['action_dim']
#price_ary = config["price_array"]
self.env_num = 1
self.max_step = 10000
self.env_name = 'StockEnvEmpty'
self.state_dim = state_dim
self.action_dim = action_dim
#self.price_ary = price_ary.astype(np.float32)
self.if_discrete = False
self.target_return = 9999
self.observation_space = gym.spaces.Box(low=-3000, high=3000, shape=(state_dim,), dtype=np.float32)
self.action_space = gym.spaces.Box(low=-1, high=1, shape=(action_dim,), dtype=np.float32)
def reset(self):
return
def step(self, actions):
return
paper_trading_erl = AlpacaPaperTrading(ticker_list = DOW_30_TICKER,
time_interval = '1Min',
drl_lib="rllib",
agent = 'ppo',
cwd = best_checkpoint,
net_dim = 512,
state_dim = state_dim,
action_dim= action_dim,
API_KEY = API_KEY,
API_SECRET = API_SECRET,
APCA_API_BASE_URL = 'https://paper-api.alpaca.markets',
tech_indicator_list = TECHNICAL_INDICATORS_LIST,
turbulence_thresh=30,
max_stock=1e2)
paper_trading_erl.run()
```
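The `sigmoid_sign` scaling used in `get_state` is worth inspecting on its own: it squashes the turbulence value into `(-thresh/2, thresh/2)` while preserving sign and monotonicity. A standalone copy:

```python
import numpy as np

# Standalone copy of the sigmoid_sign static method defined above.
def sigmoid_sign(ary, thresh):
    def sigmoid(x):
        return 1 / (1 + np.exp(-x * np.e)) - 0.5
    return sigmoid(ary / thresh) * thresh

vals = sigmoid_sign(np.array([-100.0, 0.0, 30.0, 100.0]), 30)
assert vals[1] == 0                # zero maps to zero
assert np.all(np.abs(vals) < 15)   # bounded by thresh / 2
assert np.all(np.diff(vals) > 0)   # strictly increasing
```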
```
from __future__ import absolute_import
from __future__ import print_function
import autograd.numpy as np
from autograd import grad
from autograd.extend import notrace_primitive
@notrace_primitive
def resampling(w, rs):
"""
Stratified resampling with "notrace_primitive" to ensure autograd
takes no derivatives through it.
"""
N = w.shape[0]
bins = np.cumsum(w)
ind = np.arange(N)
u = (ind + rs.rand(N))/N
return np.digitize(u, bins)
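# Quick sanity check of the resampler with hypothetical weights: stratified
# resampling should map almost all ancestor indices to the particle that
# carries most of the weight (index 2 below).
_demo_w = np.array([0.05, 0.05, 0.85, 0.05])
_demo_idx = resampling(_demo_w, np.random.RandomState(0))
assert (_demo_idx == 2).sum() >= _demo_w.shape[0] // 2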
def vsmc_lower_bound(prop_params, model_params, y, smc_obj, rs, verbose=False, adapt_resamp=False):
"""
Estimate the VSMC lower bound. Amenable to (biased) reparameterization
gradients
.. math::
ELBO(\theta,\lambda) =
\mathbb{E}_{\phi}\left[\log \hat p(y_{1:T}) \right]
Requires an SMC object with 2 member functions:
-- sim_prop(t, x_{t-1}, y, prop_params, model_params, rs)
-- log_weights(t, x_t, x_{t-1}, y, prop_params, model_params)
"""
# Extract constants
T = y.shape[0]
Dx = smc_obj.Dx
N = smc_obj.N
# Initialize SMC
X = np.zeros((N,Dx))
Xp = np.zeros((N,Dx))
logW = np.zeros(N)
W = np.exp(logW)
W /= np.sum(W)
logZ = 0.
ESS = 1./np.sum(W**2)/N
for t in range(T):
# Resampling
if adapt_resamp:
if ESS < 0.5:
ancestors = resampling(W, rs)
Xp = X[ancestors]
logZ = logZ + max_logW + np.log(np.sum(W)) - np.log(N)
logW = np.zeros(N)
else:
Xp = X
else:
if t > 0:
ancestors = resampling(W, rs)
Xp = X[ancestors]
else:
Xp = X
# Propagation
X = smc_obj.sim_prop(t, Xp, y, prop_params, model_params, rs)
# Weighting
if adapt_resamp:
logW = logW + smc_obj.log_weights(t, X, Xp, y, prop_params, model_params)
else:
logW = smc_obj.log_weights(t, X, Xp, y, prop_params, model_params)
max_logW = np.max(logW)
W = np.exp(logW-max_logW)
if adapt_resamp:
if t == T-1:
logZ = logZ + max_logW + np.log(np.sum(W)) - np.log(N)
else:
logZ = logZ + max_logW + np.log(np.sum(W)) - np.log(N)
W /= np.sum(W)
ESS = 1./np.sum(W**2)/N
if verbose:
print('ESS: '+str(ESS))
return logZ
def sim_q(prop_params, model_params, y, smc_obj, rs, verbose=False):
"""
Simulates a single sample from the VSMC approximation.
Requires an SMC object with 2 member functions:
-- sim_prop(t, x_{t-1}, y, prop_params, model_params, rs)
-- log_weights(t, x_t, x_{t-1}, y, prop_params, model_params)
"""
# Extract constants
T = y.shape[0]
Dx = smc_obj.Dx
N = smc_obj.N
# Initialize SMC
X = np.zeros((N,T,Dx))
logW = np.zeros(N)
W = np.zeros((N,T))
ESS = np.zeros(T)
for t in range(T):
# Resampling
if t > 0:
ancestors = resampling(W[:,t-1], rs)
X[:,:t,:] = X[ancestors,:t,:]
# Propagation
X[:,t,:] = smc_obj.sim_prop(t, X[:,t-1,:], y, prop_params, model_params, rs)
# Weighting
logW = smc_obj.log_weights(t, X[:,t,:], X[:,t-1,:], y, prop_params, model_params)
max_logW = np.max(logW)
W[:,t] = np.exp(logW-max_logW)
W[:,t] /= np.sum(W[:,t])
ESS[t] = 1./np.sum(W[:,t]**2)
# Sample from the empirical approximation
bins = np.cumsum(W[:,-1])
u = rs.rand()
B = np.digitize(u,bins)
if verbose:
print('Mean ESS', np.mean(ESS)/N)
print('Min ESS', np.min(ESS))
return X[B,:,:]
import autograd.numpy.random as npr
def init_model_params(Dx, Dy, alpha, r, obs, rs = npr.RandomState(0)):
mu0 = np.zeros(Dx)
Sigma0 = np.eye(Dx)
A = np.zeros((Dx,Dx))
for i in range(Dx):
for j in range(Dx):
A[i,j] = alpha**(abs(i-j)+1)
Q = np.eye(Dx)
C = np.zeros((Dy,Dx))
if obs == 'sparse':
C[:Dy,:Dy] = np.eye(Dy)
else:
C = rs.normal(size=(Dy,Dx))
R = r * np.eye(Dy)
return (mu0, Sigma0, A, Q, C, R)
def init_prop_params(T, Dx, scale = 0.5, rs = npr.RandomState(0)):
return [(scale * rs.randn(Dx), # Bias
1. + scale * rs.randn(Dx), # Linear times A/mu0
scale * rs.randn(Dx)) # Log-var
for t in range(T)]
def generate_data(model_params, T = 5, rs = npr.RandomState(0)):
mu0, Sigma0, A, Q, C, R = model_params
Dx = mu0.shape[0]
Dy = R.shape[0]
x_true = np.zeros((T,Dx))
y_true = np.zeros((T,Dy))
for t in range(T):
if t > 0:
x_true[t,:] = rs.multivariate_normal(np.dot(A,x_true[t-1,:]),Q)
else:
x_true[0,:] = rs.multivariate_normal(mu0,Sigma0)
y_true[t,:] = rs.multivariate_normal(np.dot(C,x_true[t,:]),R)
return x_true, y_true
def log_marginal_likelihood(model_params, T, y_true):
mu0, Sigma0, A, Q, C, R = model_params
Dx = mu0.shape[0]
Dy = R.shape[1]
log_likelihood = 0.
xfilt = np.zeros(Dx)
Pfilt = np.zeros((Dx,Dx))
xpred = mu0
Ppred = Sigma0
for t in range(T):
if t > 0:
# Predict
xpred = np.dot(A,xfilt)
Ppred = np.dot(A,np.dot(Pfilt,A.T)) + Q
# Update
yt = y_true[t,:] - np.dot(C,xpred)
S = np.dot(C,np.dot(Ppred,C.T)) + R
K = np.linalg.solve(S, np.dot(C,Ppred)).T
xfilt = xpred + np.dot(K,yt)
Pfilt = Ppred - np.dot(K,np.dot(C,Ppred))
sign, logdet = np.linalg.slogdet(S)
log_likelihood += -0.5*(np.sum(yt*np.linalg.solve(S,yt)) + logdet + Dy*np.log(2.*np.pi))
return log_likelihood
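# Hypothetical 1-D sanity check of the Kalman recursion above: with T = 1
# the marginal likelihood is just the density of N(0, Sigma0 + R) at y_1.
_mp1 = (np.zeros(1), np.eye(1), np.eye(1), np.eye(1), np.ones((1, 1)), np.eye(1))
_y1 = np.array([[0.3]])
_expected = -0.5 * (0.3 ** 2 / 2. + np.log(2.) + np.log(2. * np.pi))
assert np.allclose(log_marginal_likelihood(_mp1, 1, _y1), _expected)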
class lgss_smc:
"""
Class for defining functions used in variational SMC.
"""
def __init__(self, T, Dx, Dy, N):
self.T = T
self.Dx = Dx
self.Dy = Dy
self.N = N
def log_normal(self, x, mu, Sigma):
dim = Sigma.shape[0]
sign, logdet = np.linalg.slogdet(Sigma)
log_norm = -0.5*dim*np.log(2.*np.pi) - 0.5*logdet
Prec = np.linalg.inv(Sigma)
return log_norm - 0.5*np.sum((x-mu)*np.dot(Prec,(x-mu).T).T,axis=1)
def log_prop(self, t, Xc, Xp, y, prop_params, model_params):
mu0, Sigma0, A, Q, C, R = model_params
mut, lint, log_s2t = prop_params[t]
s2t = np.exp(log_s2t)
if t > 0:
mu = mut + np.dot(A, Xp.T).T*lint
else:
mu = mut + lint*mu0
return self.log_normal(Xc, mu, np.diag(s2t))
def log_target(self, t, Xc, Xp, y, prop_params, model_params):
mu0, Sigma0, A, Q, C, R = model_params
if t > 0:
logF = self.log_normal(Xc,np.dot(A,Xp.T).T, Q)
else:
logF = self.log_normal(Xc, mu0, Sigma0)
logG = self.log_normal(np.dot(C,Xc.T).T, y[t], R)
return logF + logG
    # The following two functions are the only ones needed by variational-smc.py
def log_weights(self, t, Xc, Xp, y, prop_params, model_params):
return self.log_target(t, Xc, Xp, y, prop_params, model_params) - \
self.log_prop(t, Xc, Xp, y, prop_params, model_params)
def sim_prop(self, t, Xp, y, prop_params, model_params, rs = npr.RandomState(0)):
mu0, Sigma0, A, Q, C, R = model_params
mut, lint, log_s2t = prop_params[t]
s2t = np.exp(log_s2t)
if t > 0:
mu = mut + np.dot(A, Xp.T).T*lint
else:
mu = mut + lint*mu0
return mu + rs.randn(*Xp.shape)*np.sqrt(s2t)
# Model hyper-parameters
T = 10
Dx = 5
Dy = 3
alpha = 0.42
r = .1
obs = 'sparse'
# Training parameters
param_scale = 0.5
num_epochs = 1000
step_size = 0.001
N = 4
data_seed = npr.RandomState(0)
model_params = init_model_params(Dx, Dy, alpha, r, obs, data_seed)
print("Generating data...")
x_true, y_true = generate_data(model_params, T, data_seed)
lml = log_marginal_likelihood(model_params, T, y_true)
print("True log-marginal likelihood: "+str(lml))
seed = npr.RandomState(0)
# Initialize proposal parameters
prop_params = init_prop_params(T, Dx, param_scale, seed)
combined_init_params = (model_params, prop_params)
lgss_smc_obj = lgss_smc(T, Dx, Dy, N)
# Define training objective
def objective(combined_params, iter):
model_params, prop_params = combined_params
return -vsmc_lower_bound(prop_params, model_params, y_true, lgss_smc_obj, seed)
# Get gradients of objective using autograd.
objective_grad = grad(objective)
from autograd.misc.optimizers import adam
def print_perf(combined_params, iter, grad):
if iter % (num_epochs/10) == 0:
model_params, prop_params = combined_params
bound = -objective(combined_params, iter)
message = "{:15}|{:20}".format(iter, bound)
print(message)
#with open(f_head+'_ELBO.csv', 'a') as f_handle:
# np.savetxt(f_handle, [[iter,bound]], fmt='%i,%f')
# SGD with adaptive step-size "adam"
optimized_params = adam(objective_grad, combined_init_params, step_size=step_size,
num_iters=num_epochs, callback=print_perf)
opt_model_params, opt_prop_params = optimized_params
opt_model_params[0].shape, opt_model_params[1].shape, opt_model_params[2].shape, opt_model_params[3].shape, opt_model_params[4].shape
```
|
github_jupyter
|
[Sebastian Raschka](http://sebastianraschka.com), 2015
https://github.com/rasbt/python-machine-learning-book
# Python Machine Learning - Code Examples
# Chapter 13 - Parallelizing Neural Network Training with Theano
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,matplotlib,theano,keras
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
```
### Overview
- [Building, compiling, and running expressions with Theano](#Building,-compiling,-and-running-expressions-with-Theano)
- [What is Theano?](#What-is-Theano?)
- [First steps with Theano](#First-steps-with-Theano)
- [Configuring Theano](#Configuring-Theano)
- [Working with array structures](#Working-with-array-structures)
- [Wrapping things up – a linear regression example](#Wrapping-things-up:-A--linear-regression-example)
- [Choosing activation functions for feedforward neural networks](#Choosing-activation-functions-for-feedforward-neural-networks)
- [Logistic function recap](#Logistic-function-recap)
- [Estimating probabilities in multi-class classification via the softmax function](#Estimating-probabilities-in-multi-class-classification-via-the-softmax-function)
- [Broadening the output spectrum by using a hyperbolic tangent](#Broadening-the-output-spectrum-by-using-a-hyperbolic-tangent)
- [Training neural networks efficiently using Keras](#Training-neural-networks-efficiently-using-Keras)
- [Summary](#Summary)
<br>
<br>
```
from IPython.display import Image
```
# Building, compiling, and running expressions with Theano
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
```
Image(filename='./images/13_01.png', width=500)
```
<br>
<br>
## What is Theano?
...
## First steps with Theano
Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
```
import theano
from theano import tensor as T
# initialize
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
net_input(2.0, 1.0, 0.5)
```
<br>
<br>
## Configuring Theano
Configuring Theano. For more options, see
- http://deeplearning.net/software/theano/library/config.html
- http://deeplearning.net/software/theano/library/floatX.html
```
print(theano.config.floatX)
theano.config.floatX = 'float32'
```
To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Note that `float32` is recommended for GPUs; `float64` on GPUs is currently still relatively slow.
```
print(theano.config.device)
```
You can run a Python script on CPU via:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU via
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
It may also be convenient to create a `.theanorc` file in your home directory to make those configurations permanent. For example, to always use `float32`, execute
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
Or, create a `.theanorc` file manually with the following contents
[global]
floatX = float32
device = gpu
<br>
<br>
## Working with array structures
```
import numpy as np
# initialize
# if you are running Theano on 64 bit mode,
# you need to use dmatrix instead of fmatrix
x = T.fmatrix(name='x')
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
```
Updating shared arrays.
More info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html
```
# initialize
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[x],
updates=update,
outputs=z)
# execute
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, net_input(data))
```
We can use the `givens` variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use `inputs`, a dataset is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via `givens`, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
```
# initialize
data = np.array([[1, 2, 3]],
dtype=theano.config.floatX)
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[],
updates=update,
givens={x: data},
outputs=z)
# execute
for i in range(5):
print('z:', net_input())
```
<br>
<br>
## Wrapping things up: A linear regression example
Creating some training data.
```
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
```
Implementing the training function.
```
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.fscalar('eta0')
y = T.fvector(name='y')
X = T.fmatrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
net_input = T.dot(X, w[1:]) + w[0]
errors = y - net_input
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w)
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train,})
for _ in range(epochs):
costs.append(train(eta))
return costs, w
```
Plotting the sum of squared errors cost vs epochs.
```
%matplotlib inline
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
```
Making predictions.
```
def predict_linreg(X, w):
Xt = T.matrix(name='X')
net_input = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=net_input)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
```
<br>
<br>
# Choosing activation functions for feedforward neural networks
...
## Logistic function recap
The logistic function, often simply called the "sigmoid function," is in fact a special case of a sigmoid function.
Net input $z$:
$$z = w_1x_{1} + \dots + w_mx_{m} = \sum_{j=1}^{m} x_{j}w_{j} \\ = \mathbf{w}^T\mathbf{x}$$
Logistic activation function:
$$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$
Output range: (0, 1)
```
# note that the first element (X[0] = 1) denotes the bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return logistic(z)
print('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])
```
Now, imagine an MLP with 3 hidden units + 1 bias unit in the hidden layer. The output layer consists of 3 output units.
```
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
```
<br>
<br>
## Estimating probabilities in multi-class classification via the softmax function
The softmax function is a generalization of the logistic function and allows us to compute meaningful class probabilities in multi-class settings (multinomial logistic regression).
The input to the function is the result of $K$ distinct linear functions, and the predicted probability for the $j$th class given a sample vector $\mathbf{x}$ is:
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
Output range: (0, 1)
```
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
return softmax(z)
y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
y_probas.sum()
y_class = np.argmax(Z, axis=0)
y_class
```
<br>
<br>
## Broadening the output spectrum by using a hyperbolic tangent
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range: (-1, 1)
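To make the "rescaled logistic" interpretation concrete: the identity $\phi_{tanh}(z) = 2 \cdot \phi_{logistic}(2z) - 1$ holds for all $z$. A quick NumPy check of this identity (a small verification sketch, not code from the book):

```python
import numpy as np

z = np.linspace(-5.0, 5.0, 101)

# tanh expressed as a rescaled and shifted logistic of 2z:
# 2 * logistic(2z) - 1 = (1 - e^{-2z}) / (1 + e^{-2z}) = tanh(z)
tanh_via_logistic = 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0

# the identity holds elementwise (up to floating-point rounding)
print(np.allclose(np.tanh(z), tanh_via_logistic))  # prints True
```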
```
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
%matplotlib inline
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
Image(filename='./images/13_05.png', width=700)
```
<br>
<br>
# Training neural networks efficiently using Keras
### Loading MNIST
1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/
- train-images-idx3-ubyte.gz: training set images (9912422 bytes)
- train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
- t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
- t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
2) Unzip those files
3) Copy the unzipped files to a directory `./mnist`
```
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
"""Load MNIST data from `path`"""
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte'
% kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte'
% kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
```
### Multi-layer Perceptron in Keras
Once you have Theano installed, [Keras](https://github.com/fchollet/keras) can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
```
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
```
One-hot encoding of the class variable:
```
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1,
show_accuracy=True)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
```
<br>
<br>
# Summary
...
```
# HIDDEN
from datascience import *
%matplotlib inline
path_data = '../../../data/'
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
import numpy as np
```
### Confidence Intervals ###
We have developed a method for estimating a parameter by using random sampling and the bootstrap. Our method produces an interval of estimates, to account for chance variability in the random sample. By providing an interval of estimates instead of just one estimate, we give ourselves some wiggle room.
In the previous example we saw that our process of estimation produced a good interval about 95% of the time, a "good" interval being one that contains the parameter. We say that we are *95% confident* that the process results in a good interval. Our interval of estimates is called a *95% confidence interval* for the parameter, and 95% is called the *confidence level* of the interval.
The situation in the previous example was a bit unusual. Because we happened to know the value of the parameter, we were able to check whether an interval was good or a dud, and this in turn helped us to see that our process of estimation captured the parameter about 95 out of every 100 times we used it.
But usually, data scientists don't know the value of the parameter. That is the reason they want to estimate it in the first place. In such situations, they provide an interval of estimates for the unknown parameter by using methods like the one we have developed. Because of statistical theory and demonstrations like the one we have seen, data scientists can be confident that their process of generating the interval results in a good interval a known percent of the time.
### Confidence Interval for a Population Median: Bootstrap Percentile Method ###
We will now use the bootstrap method to estimate an unknown population median. The data come from a sample of newborns in a large hospital system; we will treat it as if it were a simple random sample though the sampling was done in multiple stages. [Stat Labs](https://www.stat.berkeley.edu/~statlabs/) by Deborah Nolan and Terry Speed has details about a larger dataset from which this set is drawn.
The table `baby` contains the following variables for mother-baby pairs: the baby's birth weight in ounces, the number of gestational days, the mother's age in completed years, the mother's height in inches, pregnancy weight in pounds, and whether or not the mother smoked during pregnancy.
```
baby = Table.read_table(path_data + 'baby.csv')
baby
```
Birth weight is an important factor in the health of a newborn infant – smaller babies tend to need more medical care in their first days than larger newborns. It is therefore helpful to have an estimate of birth weight before the baby is born. One way to do this is to examine the relationship between birth weight and the number of gestational days.
A simple measure of this relationship is the ratio of birth weight to the number of gestational days. The table `ratios` contains the first two columns of `baby`, as well as a column of the ratios. The first entry in that column was calculated as follows:
$$
\frac{120~\mbox{ounces}}{284~\mbox{days}} ~\approx ~ 0.4225~ \mbox{ounces per day}
$$
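This first ratio can be confirmed with a quick computation (a sanity check, not part of the original table code):

```python
# birth weight of 120 ounces over 284 gestational days
ratio = 120 / 284
print(round(ratio, 4))  # 0.4225
```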
```
ratios = baby.select('Birth Weight', 'Gestational Days').with_column(
'Ratio BW/GD', baby.column('Birth Weight')/baby.column('Gestational Days')
)
ratios
```
Here is a histogram of the ratios.
```
ratios.select('Ratio BW/GD').hist()
```
At first glance the histogram looks quite symmetric, with the density at its maximum over the interval 0.4 ounces per day to 0.45 ounces per day. But a closer look reveals that some of the ratios were quite large by comparison. The maximum value of the ratios was just over 0.78 ounces per day, almost double the typical value.
```
ratios.sort('Ratio BW/GD', descending=True).take(0)
```
The median gives a sense of the typical ratio because it is unaffected by the very large or very small ratios. The median ratio in the sample is about 0.429 ounces per day.
```
np.median(ratios.column(2))
```
But what was the median in the population? We don't know, so we will estimate it.
Our method will be exactly the same as in the previous section. We will bootstrap the sample 5,000 times resulting in 5,000 estimates of the median. Our 95% confidence interval will be the "middle 95%" of all of our estimates.
Recall the function `bootstrap_median` defined in the previous section. We will call this function and construct a 95% confidence interval for the median ratio in the population. Remember that the table `ratios` contains the relevant data from our original sample.
```
def bootstrap_median(original_sample, label, replications):
"""Returns an array of bootstrapped sample medians:
original_sample: table containing the original sample
label: label of column containing the variable
replications: number of bootstrap samples
"""
just_one_column = original_sample.select(label)
medians = make_array()
for i in np.arange(replications):
bootstrap_sample = just_one_column.sample()
resampled_median = percentile(50, bootstrap_sample.column(0))
medians = np.append(medians, resampled_median)
return medians
# Generate the medians from 5000 bootstrap samples
bstrap_medians = bootstrap_median(ratios, 'Ratio BW/GD', 5000)
# Get the endpoints of the 95% confidence interval
left = percentile(2.5, bstrap_medians)
right = percentile(97.5, bstrap_medians)
make_array(left, right)
```
The 95% confidence interval goes from about 0.425 ounces per day to about 0.433 ounces per day. We are estimating that the median "birth weight to gestational days" ratio in the population is somewhere in the interval 0.425 ounces per day to 0.433 ounces per day.
The estimate of 0.429 based on the original sample happens to be exactly halfway between the two ends of the interval, though that need not be true in general.
To visualize our results, let us draw the empirical histogram of our bootstrapped medians and place the confidence interval on the horizontal axis.
```
resampled_medians = Table().with_column(
'Bootstrap Sample Median', bstrap_medians
)
resampled_medians.hist(bins=15)
plots.plot(make_array(left, right), make_array(0, 0), color='yellow', lw=8);
```
This histogram and interval resemble those we drew in the previous section, with one big difference – there is no red dot showing where the parameter is. We don't know where that dot should be, or whether it is even in the interval.
We just have an interval of estimates. It is a 95% confidence interval of estimates, because the process that generates it produces a good interval about 95% of the time. That certainly beats guessing at random!
Keep in mind that this interval is an approximate 95% confidence interval. There are many approximations involved in its computation. The approximation is not bad, but it is not exact.
### Confidence Interval for a Population Mean: Bootstrap Percentile Method ###
What we have done for medians can be done for means as well. Suppose we want to estimate the average age of the mothers in the population. A natural estimate is the average age of the mothers in the sample. Here is the distribution of their ages, along with their average age, which was about 27.2 years.
```
baby.select('Maternal Age').hist()
np.mean(baby.column('Maternal Age'))
```
What was the average age of the mothers in the population? We don't know the value of this parameter.
Let's estimate the unknown parameter by the bootstrap method. To do this, we will edit the code for `bootstrap_median` to instead define the function `bootstrap_mean`. The code is the same except that the statistics are means instead of medians, and are collected in an array called `means` instead of `medians`.
```
def bootstrap_mean(original_sample, label, replications):
"""Returns an array of bootstrapped sample means:
original_sample: table containing the original sample
label: label of column containing the variable
replications: number of bootstrap samples
"""
just_one_column = original_sample.select(label)
means = make_array()
for i in np.arange(replications):
bootstrap_sample = just_one_column.sample()
resampled_mean = np.mean(bootstrap_sample.column(0))
means = np.append(means, resampled_mean)
return means
# Generate the means from 5000 bootstrap samples
bstrap_means = bootstrap_mean(baby, 'Maternal Age', 5000)
# Get the endpoints of the 95% confidence interval
left = percentile(2.5, bstrap_means)
right = percentile(97.5, bstrap_means)
make_array(left, right)
```
The 95% confidence interval goes from about 26.9 years to about 27.6 years. That is, we are estimating that the average age of the mothers in the population is somewhere in the interval 26.9 years to 27.6 years.
Notice how close the two ends are to the average of about 27.2 years in the original sample. The sample size is very large – 1,174 mothers – and so the sample averages don't vary much. We will explore this observation further in the next chapter.
The empirical histogram of the 5,000 bootstrapped means is shown below, along with the 95% confidence interval for the population mean.
```
resampled_means = Table().with_column(
'Bootstrap Sample Mean', bstrap_means
)
resampled_means.hist(bins=15)
plots.plot(make_array(left, right), make_array(0, 0), color='yellow', lw=8);
```
Once again, the average of the original sample (27.23 years) is close to the center of the interval. That's not very surprising, because each bootstrapped sample is drawn from that same original sample. The averages of the bootstrapped samples are about symmetrically distributed on either side of the average of the sample from which they were drawn.
Notice also that the empirical histogram of the resampled means has roughly a symmetric bell shape, even though the histogram of the sampled ages was not symmetric at all:
```
baby.select('Maternal Age').hist()
```
This is a consequence of the Central Limit Theorem of probability and statistics. In later sections, we will see what the theorem says.
### An 80% Confidence Interval ###
You can use the bootstrapped sample means to construct an interval of any level of confidence. For example, to construct an 80% confidence interval for the mean age in the population, you would take the "middle 80%" of the resampled means. So you would want 10% of the distribution in each of the two tails, and hence the endpoints would be the 10th and 90th percentiles of the resampled means.
```
left_80 = percentile(10, bstrap_means)
right_80 = percentile(90, bstrap_means)
make_array(left_80, right_80)
resampled_means.hist(bins=15)
plots.plot(make_array(left_80, right_80), make_array(0, 0), color='yellow', lw=8);
```
This 80% confidence interval is much shorter than the 95% confidence interval. It only goes from about 27.0 years to about 27.4 years. While that's a tight set of estimates, you know that this process only produces a good interval about 80% of the time.
The earlier process produced a wider interval but we had more confidence in the process that generated it.
To get a narrow confidence interval at a high level of confidence, you'll have to start with a larger sample. We'll see why in the next chapter.
### Confidence Interval for a Population Proportion: Bootstrap Percentile Method ###
In the sample, 39% of the mothers smoked during pregnancy.
```
baby.where('Maternal Smoker', are.equal_to(True)).num_rows/baby.num_rows
```
For what follows, it is useful to observe that this proportion can also be calculated by an array operation:
```
smoking = baby.column('Maternal Smoker')
np.count_nonzero(smoking)/len(smoking)
```
What percent of mothers in the population smoked during pregnancy? This is an unknown parameter which we can estimate by a bootstrap confidence interval. The steps in the process are analogous to those we took to estimate the population mean and median.
We will start by defining a function `bootstrap_proportion` that returns an array of bootstrapped sampled proportions. Once again, we will achieve this by editing our definition of `bootstrap_median`. The only change in computation is in replacing the median of the resample by the proportion of smokers in it. The code assumes that the column of data consists of Boolean values. The other changes are only to the names of arrays, to help us read and understand our code.
```
def bootstrap_proportion(original_sample, label, replications):
"""Returns an array of bootstrapped sample proportions:
original_sample: table containing the original sample
label: label of column containing the Boolean variable
replications: number of bootstrap samples
"""
just_one_column = original_sample.select(label)
proportions = make_array()
for i in np.arange(replications):
bootstrap_sample = just_one_column.sample()
resample_array = bootstrap_sample.column(0)
resampled_proportion = np.count_nonzero(resample_array)/len(resample_array)
proportions = np.append(proportions, resampled_proportion)
return proportions
```
Let us use `bootstrap_proportion` to construct an approximate 95% confidence interval for the percent of smokers among the mothers in the population. The code is analogous to the corresponding code for the mean and median.
```
# Generate the proportions from 5000 bootstrap samples
bstrap_props = bootstrap_proportion(baby, 'Maternal Smoker', 5000)
# Get the endpoints of the 95% confidence interval
left = percentile(2.5, bstrap_props)
right = percentile(97.5, bstrap_props)
make_array(left, right)
```
The confidence interval goes from about 36% to about 42%. The original sample percent of 39% is very close to the center of the interval, as you can see below.
```
resampled_proportions = Table().with_column(
'Bootstrap Sample Proportion', bstrap_props
)
resampled_proportions.hist(bins=15)
plots.plot(make_array(left, right), make_array(0, 0), color='yellow', lw=8);
```
### Care in Using the Bootstrap ###
The bootstrap is an elegant and powerful method. Before using it, it is important to keep some points in mind.
- Start with a large random sample. If you don't, the method might not work. Its success is based on large random samples (and hence also resamples from the sample) resembling the population. The Law of Averages says that this is likely to be true provided the random sample is large.
- To approximate the probability distribution of a statistic, it is a good idea to replicate the resampling procedure as many times as possible. A few thousand replications will result in decent approximations to the distribution of sample median, especially if the distribution of the population has one peak and is not very asymmetric. We used 5,000 replications in our examples but would recommend 10,000 in general.
- The bootstrap percentile method works well for estimating the population median or mean based on a large random sample. However, it has limitations, as do all methods of estimation. For example, it is not expected to do well in the following situations.
- The goal is to estimate the minimum or maximum value in the population, or a very low or very high percentile, or parameters that are greatly influenced by rare elements of the population.
- The probability distribution of the statistic is not roughly bell shaped.
- The original sample is very small, say less than 10 or 15.
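The bootstrap percentile method used throughout this section does not depend on the `datascience` library. As a minimal illustration, the same procedure can be sketched with plain NumPy, assuming `data` is a 1-D array drawn from a large random sample (the synthetic data here is only a stand-in):

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.normal(loc=10.0, scale=2.0, size=1000)  # stand-in for a sample

# bootstrap: resample with replacement, record the median each time
medians = np.array([
    np.median(rng.choice(data, size=len(data), replace=True))
    for _ in range(5000)
])

# approximate 95% confidence interval: the "middle 95%" of the medians
left, right = np.percentile(medians, [2.5, 97.5])
print(left, right)
```

As in the examples above, the interval is just the 2.5th and 97.5th percentiles of the resampled statistics; changing those cutoffs to 10 and 90 would give an 80% confidence interval.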
```
!pip install --upgrade tables
!pip install eli5
!pip install xgboost
!pip install hyperopt
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score, KFold
from hyperopt import hp, fmin, tpe, STATUS_OK
import eli5
from eli5.sklearn import PermutationImportance
```
## Loading the data
```
cd '/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car'
df = pd.read_hdf('data/car.h5')
df.shape
```
## Feature Engineering
```
SUFFIX_CAT = '__cat'
for feat in df.columns:
  if isinstance(df[feat][0], list): continue  # skip list-valued columns
  factorized_values = df[feat].factorize()[0]
  if SUFFIX_CAT in feat:
    df[feat] = factorized_values  # refresh an already-factorized column
  else:
    df[feat + SUFFIX_CAT] = factorized_values  # add a factorized copy of the column
# cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
# cat_feats = [x for x in cat_feats if 'price' not in x]
# cat_feats
# Cast the numeric params to int (missing values -> -1) so the model can use them directly
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ', '')))
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat']
xgb_params = {
'max_depth' : 5,
'n_estimators' : 50,
'learning_rate' : 0.1,
'seed': 0
}
model = xgb.XGBRegressor(**xgb_params)
run_model(model, feats)
```
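One detail worth noting about `run_model` above: scikit-learn's `neg_mean_absolute_error` scorer returns *negative* MAE values, which is why the Hyperopt objective below takes `np.abs` of the mean. A self-contained sketch on synthetic data (a linear model instead of XGBoost, purely to illustrate the sign convention):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# 'neg_mean_absolute_error' is MAE with the sign flipped, so that
# "greater is better" holds uniformly across scikit-learn scorers.
scores = cross_val_score(LinearRegression(), X, y, cv=3,
                         scoring='neg_mean_absolute_error')
print(scores)          # all entries are <= 0
print(-scores.mean())  # the actual (positive) mean MAE
```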
## Hyperopt
```
def obj_func(params):
print("Training with params: ")
print(params)
  mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
  # cross_val_score returns negative MAE, so take its absolute value as the loss
  return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
# space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',
'n_estimators' : 100,
'seed': 0,
}
## run
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)
best
```
|
github_jupyter
|
Interactive analysis with python
--------------------------------
Before starting this tutorial, ensure that you have set up _tangos_ [as described here](https://pynbody.github.io/tangos/) and the data sources [as described here](https://pynbody.github.io/tangos/data_exploration.html).
We get started by importing the modules we'll need:
```
%matplotlib inline
import tangos
import pylab as p
```
First let's inspect what simulations are available in our database:
```
tangos.all_simulations()
```
For any of these simulations, we can generate a list of available timesteps as follows:
```
tangos.get_simulation("tutorial_changa").timesteps
```
For any timestep, we can access the halos using `.halos` and a specific halo using standard python 0-based indexing:
```
tangos.get_simulation("tutorial_changa").timesteps[3].halos[3]
```
One can skip straight to getting a specific halo as follows:
```
tangos.get_halo("tutorial_changa/%384/halo_4")
```
Note the use of the SQL wildcard % character which avoids us having to type out the entire path. Whatever way you access it, the resulting object allows you to query what properties have been calculated for that specific halo. We can then access those properties using the normal python square-bracket dictionary syntax.
```
halo = tangos.get_halo("tutorial_changa/%960/halo_1")
halo.keys()
halo['Mvir']
p.imshow(halo['uvi_image'])
```
One can also get meta-information about the computed property. It would be nice to know
the physical size of the image we just plotted. We retrieve the underlying property object
and ask it:
```
halo.get_description("uvi_image").plot_extent()
```
This tells us that the image is 15 kpc across. The example properties that come with _tangos_
use _pynbody_'s units system to convert everything to physical kpc, solar masses and km/s. When
you implement your own properties, you can of course store them in whichever units you like.
Getting a time sequence of properties
-------------------------------------
Often we would like to see how a property varies over time. _Tangos_ provides convenient ways to extract this information, automatically finding
major progenitors or descendants for a halo. Let's see this illustrated on the SubFind _mass_ property:
```
halo = tangos.get_halo("tutorial_gadget/snapshot_020/halo_10")
# Calculate on major progenitor branch:
Mvir, t = halo.calculate_for_progenitors("mass","t()")
# Now perform plotting:
p.plot(t,1e10*Mvir)
p.xlabel("t/Gyr")
p.ylabel(r"$M/h^{-1} M_{\odot}$")
p.semilogy()
```
In the example above, `calculate_for_progenitors` retrieves properties on the major progenitor branch of the chosen halo. One can ask for as many properties as you like, each one being returned as a numpy array in order. In this particular example the first property is the mass (as reported by subfind) and the second is the time. In fact the second property isn't really stored - if you check `halo.keys()` you won't find `t` in there. It's a simple example of a _live property_ which means it's calculated on-the-fly from other data. The time is actually stored in the TimeStep rather than the Halo database entry, so the `t()` live property simply retrieves it from the appropriate location.
Live properties are a powerful aspect of _tangos_. We'll see more of them momentarily.
Histogram properties
--------------------
While the approach above is the main way to get time series of data with _tangos_, sometimes one
wants to be able to use finer time bins than the number of outputs available. For example, star
formation rates or black hole accretion rates often vary on short timescales and the output files
from simulations are sufficient to reconstruct these variations in between snapshots.
_Tangos_ implements `TimeChunkedHistogram` for this purpose. As the name suggests, a _chunk_ of
historical data is stored with each timestep. The full history is then reconstructed by combining
the chunks through the merger tree; this process is customizable. Let's start with the simplest
possible request:
```
halo = tangos.get_halo("tutorial_changa_blackholes/%960/halo_1")
SFR = halo["SFR_histogram"]
# The above is sufficient to retrieve the histogram; however you probably also want to check
# the size of the time bins. The easiest approach is to request a suitable time array to go with
# the SF history:
SFR_property_object = halo.get_objects("SFR_histogram")[0]
SFR_time_bins = SFR_property_object.x_values()
p.plot(SFR_time_bins, SFR)
p.xlabel("Time/Gyr")
p.ylabel("SFR/$M_{\odot}\,yr^{-1}$")
```
The advantage of storing the histogram in chunks is that one can reconstruct it
in different ways. The default is to go along the major progenitor branch, but
one can also sum over all progenitors. The following code shows the fraction of
star formation in the major progenitor:
```
SFR_all = halo.calculate('reassemble(SFR_histogram, "sum")')
p.plot(SFR_time_bins, SFR/SFR_all)
p.xlabel("Time/Gyr")
p.ylabel("Frac. SFR in major progenitor")
```
_Technical note_: It's worth being aware that the merger information is, of course, quantized to the
output timesteps even though the SFR information is stored in small chunks. This is rarely an issue
but with coarse timesteps (such as those in the tutorial simulations), the quantization can cause
noticeable artefacts – here, the jump to 100% in the major progenitor shortly before _t_ = 3 Gyr
corresponds to the time of the penultimate stored step, after which no mergers are recorded.
For more information, see the [time-histogram properties](https://pynbody.github.io/tangos/histogram_properties.html) page.
Let's see another example of a histogram property: the black hole accretion rate
```
BH_accrate = halo.calculate('BH.BH_mdot_histogram')
p.plot(SFR_time_bins, BH_accrate)
p.xlabel("Time/Gyr")
p.ylabel("BH accretion rate/$M_{\odot}\,yr^{-1}$")
```
This works fine, but you may have noticed the warning that more than one black hole
is in the halo of interest. There is more information about the way that links between
objects work in _tangos_, and disambiguating between them, in the "using links" section
below.
Getting properties for multiple halos
-------------------------------------
Quite often one wants to collect properties from multiple halos simultaneously. Suppose we want to plot the mass against the vmax for all halos at
a specific snapshot:
```
timestep = tangos.get_timestep("tutorial_gadget/snapshot_019")
mass, vmax = timestep.calculate_all("mass","VMax")
p.plot(mass*1e10,vmax,'k.')
p.loglog()
p.xlabel("$M/h^{-1} M_{\odot}$")
p.ylabel(r"$v_{max}/{\rm km s^{-1}}$")
```
Often when querying multiple halos we still want to know something about their history, and live calculations enable that. Suppose we want to know how much the mass has grown since the previous snapshot:
```
mass, fractional_delta_2 = timestep.calculate_all("mass", "(mass-earlier(2).mass)/mass")
p.hlines(0.0,1e10,1e15, colors="gray")
p.plot(mass*1e10, fractional_delta_2,"r.", alpha=0.2)
p.semilogx()
p.ylim(-0.1,0.9)
p.xlim(1e12,1e15)
p.xlabel("$M/h^{-1} M_{\odot}$")
p.ylabel("Fractional growth in mass")
```
This is a much more ambitious use of the live calculation system. Consider the last property retrieved, which is `(mass-earlier(2).mass)/mass`. This combines algebraic operations with _redirection_: `earlier(2)` finds the major progenitor two steps prior to this one, after which `.mass` retrieves the mass at that earlier timestep. This is another example of a "link", as previously used to retrieve
black hole information above.
Using Links
-----------
_Tangos_ has a concept of "links" between objects including halos and black holes. For example,
the merger tree information that you have already used indirectly is stored as links.
Returning to our example of black holes above, we used a link named `BH`; however this issued a
warning that the result was technically ambiguous. Let's see that warning again. For clarity,
we will use the link named `BH_central` this time around -- it's an alternative set of links
which only includes black holes associated with the central galaxy (rather than any satellites).
```
halo = tangos.get_halo("tutorial_changa_blackholes/%960/halo_1")
BH_mass = halo.calculate('BH_central.BH_mass')
```
We still get the warning, so there's more than one black hole in the central galaxy.
To avoid such warnings, you can specify more about which link you are referring to. For example,
we can specifically ask for the black hole with the _largest mass_ and _smallest impact parameters_
using the following two queries:
```
BH_max_mass = halo.calculate('link(BH_central, BH_mass, "max")')
BH_closest = halo.calculate('link(BH_central, BH_central_distance, "min")')
```
The `link` live-calculation function returns the halo with either the maximum or minimum value of an
associated property, here the `BH_mass` and `BH_central_distance` properties respectively.
Either approach disambiguates the black holes we mean (in fact, they unsurprisingly lead to
the same disambiguation):
```
BH_max_mass == BH_closest
```
However one doesn't always have to name a link to make use of it. The mere existence of a link
is sometimes enough. An example is the merger tree information already used. Another useful
example is when two simulations have the same initial conditions, as in the `tutorial_changa`
and `tutorial_changa_blackholes` examples; these two simulations differ only in that the latter
has AGN feedback. We can identify halos between simulations using the following syntax:
```
SFR_in_other_sim = halo.calculate("match('tutorial_changa').SFR_histogram")
p.plot(SFR_time_bins, halo['SFR_histogram'],color='r', label="With AGN feedback")
p.plot(SFR_time_bins, SFR_in_other_sim, color='b',label="No AGN feedback")
p.legend(loc="lower right")
p.semilogy()
p.xlabel("t/Gyr")
p.ylabel("SFR/$M_{\odot}\,yr^{-1}$")
```
The `match` syntax simply tries to follow links until it finds a halo in the named
_tangos_ context. One can use it to match halos across entire timesteps too; let's
compare the stellar masses of our objects:
```
timestep = tangos.get_timestep("tutorial_changa/%960")
Mstar_no_AGN, Mstar_AGN = timestep.calculate_all("star_mass_profile[-1]",
"match('tutorial_changa_blackholes').star_mass_profile[-1]")
# note that we use star_mass_profile[-1] to get the last entry of the star_mass_profile array,
# as a means to get the total stellar mass from a profile
p.plot(Mstar_no_AGN, Mstar_AGN, 'k.')
p.plot([1e6,1e11],[1e6,1e11],'k-',alpha=0.3)
p.loglog()
p.xlabel("$M_{\star}/M_{\odot}$ without AGN")
p.ylabel("$M_{\star}/M_{\odot}$ with AGN")
```
|
github_jupyter
|
```
# Binary representation ---> Microsoft
# Difficulty: School Marks: 0
'''
Write a program to print Binary representation of a given number N.
Input:
The first line of input contains an integer T, denoting the number of test cases. Each test case contains an integer N.
Output:
For each test case, print the binary representation of the number N in 14 bits.
Constraints:
1 ≤ T ≤ 100
1 ≤ N ≤ 5000
Example:
Input:
2
2
5
Output:
00000000000010
00000000000101
'''
for _ in range(int(input())):
n=int(input())
x=bin(n).split('b')[1]
print('0'*(14-len(x))+x)
# Alone in couple ---> Ola Cabs
# Difficulty: School Marks: 0
'''
In a party everyone is in couple except one. People who are in couple have same numbers. Find out the person who is not in couple.
Input:
The first line contains an integer 'T' denoting the total number of test cases. In each test cases, the first line contains an integer 'N' denoting the size of array. The second line contains N space-separated integers A1, A2, ..., AN denoting the elements of the array. (N is always odd)
Output:
In each separate line print the number of the person not in couple.
Constraints:
1<=T<=30
1<=N<=500
1<=A[i]<=500
N%2==1
Example:
Input:
1
5
1 2 3 2 1
Output:
3
'''
for _ in range(int(input())):
    n=int(input())
    a=list(map(int,input().split()))
    res=0
    for x in a:
        res^=x  # paired numbers cancel out, leaving the lone one
    print(res)
# Count total set bits ---> Amazon,Adobe
# Difficulty: Basic Marks: 1
'''
You are given a number N. Find the total count of set bits for all numbers from 1 to N(both inclusive).
Input:
The first line of input contains an integer T denoting the number of test cases. T testcases follow. The first line of each test case is N.
Output:
For each testcase, in a new line, print the total count of all bits.
Constraints:
1 ≤ T ≤ 100
1 ≤ N ≤ 10^3
Example:
Input:
2
4
17
Output:
5
35
Explanation:
Testcase1:
An easy way to look at it is to consider the number, n = 4:
0 0 0 = 0
0 0 1 = 1
0 1 0 = 1
0 1 1 = 2
1 0 0 = 1
Therefore, the total number of set bits is 5.
'''
for _ in range(int(input())):
n=int(input())
s=0
for i in range(n+1):
s+=bin(i).split('b')[1].count('1')
print(s)
```
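The brute-force loop above is O(N log N). A sketch of the standard O(log N) recurrence (not part of the original solution): within [0, 2^b - 1] the total set-bit count is b·2^(b-1), and the numbers from 2^b up to N each contribute one top bit plus whatever their low bits contribute.

```python
def count_set_bits_upto(n):
    # Total set bits in 1..n using the recurrence described above.
    if n <= 0:
        return 0
    if n == 1:
        return 1
    b = n.bit_length() - 1           # 2^b is the largest power of two <= n
    full_block = b * (1 << (b - 1))  # set bits in 0 .. 2^b - 1
    top_bits = n - (1 << b) + 1      # leading 1-bit of each of 2^b .. n
    return full_block + top_bits + count_set_bits_upto(n - (1 << b))

print(count_set_bits_upto(4))   # 5
print(count_set_bits_upto(17))  # 35
```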
***IMP***
```
# ------------------------------------------IMP---------------------------------------
"https://practice.geeksforgeeks.org/problems/toggle-bits-given-range/0/?track=sp-bit-magic&batchId=152"
# Toggle bits given range
# Difficulty: Basic Marks: 1
'''
Given a non-negative number N and two values L and R. The problem is to toggle the bits in the range L to R in the binary representation of N, i.e, to toggle bits from the rightmost Lth bit to the rightmost Rth bit. A toggle operation flips a bit 0 to 1 and a bit 1 to 0.
Input:
First line of input contains a single integer T which denotes the number of test cases. Then T test cases follows. First line of each test case contains three space separated integers N, L and R.
Output:
For each test case , print the number obtained by toggling bits from the rightmost Lth bit to the rightmost Rth bit in binary representation of N.
Constraints:
1<=T<=100
1<=N<=1000
1<=L<=R
L<=R<= Number of bits(N)
Example:
Input:
2
17 2 3
50 2 5
Output:
23
44
'''
for _ in range(int(input())):
l=list(map(int,input().split()))
c=0
s1=''
s=bin(l[0])[2:]
n=len(s)
for i in s:
if c>=(n-l[2]) and c<=(n-l[1]):
if i=='0':
s1+='1'
else:
s1+='0'
else:
s1+=i
c+=1
print(int(s1,base=2))
"https://practice.geeksforgeeks.org/problems/set-kth-bit/0/?track=sp-bit-magic&batchId=152"
# Set kth bit ---> Cisco, Qualcomm
# Difficulty: Basic Marks: 1
'''
Given a number N and a value K. From the right, set the Kth bit in the binary representation of N. The position of LSB(or last bit) is 0, second last bit is 1 and so on. Also, 0 <= K < X, where X is the number of bits in the binary representation of N.
Input:
First line of input contains a single integer T, which denotes the number of test cases. T test cases follows. First line of each testcase contains two space separated integers N and K.
Output:
For each test case, print the new number after setting the Kth bit of N.
Constraints:
1 <= T <= 100
1 <= N <= 1000
Example:
Input:
2
10 2
15 3
Output:
14
15
Explanation:
Testcase 1: Binary representation of the given number 10 is 1010; the number of bits in the binary representation is 4. Thus the 2nd bit from the right is 0. The number after changing this bit to 1 is 14 (1110).
'''
for _ in range(int(input())):
l=list(map(int,input().split()))
s=bin(l[0])[2:]
s1=''
c=0
if (l[1]+1)>len(s):
s1='0'*(l[1]+1-len(s))+s
s=s1
s1=''
for i in s:
if c==(len(s)-(l[1]+1)):
s1+='1'
else:
s1+=i
c+=1
print(int(s1,2))
"https://practice.geeksforgeeks.org/problems/bit-difference/0/?track=sp-bit-magic&batchId=152"
# Bit Difference ---> Amazon Qualcomm, Samsung
# Difficulty: Basic Marks: 1
'''
You are given two numbers A and B. Write a program to count number of bits needed to be flipped to convert A to B.
Input:
The first line of input contains an integer T denoting the number of test cases. T testcases follow. The first line of each test case is A and B separated by a space.
Output:
For each testcase, in a new line, print the number of bits needed to be flipped.
Constraints:
1 ≤ T ≤ 100
1 ≤ A, B ≤ 10^3
Example:
Input:
1
10 20
Output:
4
Explanation:
Testcase1:
A = 01010
B = 10100
Number of bits need to flipped = 4
'''
for _ in range(int(input())):
a,c=input().split()
a=bin(int(a))[2:]
c=bin(int(c))[2:]
an=len(a)
cn=len(c)
if an!=cn:
if (an-cn)>0:
c='0'*(an-cn)+c
else:
a='0'*(cn-an)+a
count=0
for i,j in zip(a,c):
if i !=j:
count+=1
print(count)
"https://practice.geeksforgeeks.org/problems/swap-two-nibbles-in-a-byte/0/?track=sp-bit-magic&batchId=152"
# Swap two nibbles in a byte ---> Accolite, Cisco, Amazon, Qualcomm
# Difficulty: Basic Marks: 1
'''
***Given a byte, swap the two nibbles in it. For example, 100 is represented as 01100100 in a byte (or 8 bits).
The two nibbles are (0110) and (0100). If we swap the two nibbles, we get 01000110 which is 70 in decimal.
Input:
The first line contains 'T' denoting the number of testcases. Each testcase contains a single positive integer X.
Output:
In each separate line print the result after swapping the nibbles.
Constraints:
1 ≤ T ≤ 70
1 ≤ X ≤ 255
Example:
Input:
2
100
129
Output:
70
24
'''
for _ in range(int(input())):
a=bin(int(input()))[2:]
if len(a)%4!=0:
a='0'*(4-len(a)%4)+a
c=[]
for i in range(1,(len(a)//4)+1):
c.append(a[4*(i-1):4*i])
c=c[::-1]
print(int(''.join(c),2))
```
### [Check whether K-th bit is set or not](https://practice.geeksforgeeks.org/problems/check-whether-k-th-bit-is-set-or-not/0/?track=sp-bit-magic&batchId=152)
- Company Tag: Cisco
- Difficulty: Basic
- Marks: 1
***Given a number N and a bit number K, check if Kth bit of N is set or not. A bit is called set if it is 1. Position of set bit '1' should be indexed starting with 0 from RSB side in binary representation of the number. Consider N = 4(100): 0th bit = 0, 1st bit = 0, 2nd bit = 1.***
***Input:***\
The first line of input contains an integer T denoting the number of test cases. Then T test cases follow.\
Each test case consists of two lines. The first line of each test case contain an integer N. \
The second line of each test case contains an integer K.\
\
***Output:***\
Corresponding to each test case, print "Yes" (without quotes) if Kth bit is set else print "No" (without quotes) in a new line.\
\
***Constraints:***\
1 ≤ T ≤ 200\
1 ≤ N ≤ 10^9\
0 ≤ K ≤ floor(log2(N) + 1)\
\
***Example:***\
***Input:***\
3\
4\
0\
4\
2\
500\
3\
\
***Output:***\
No\
Yes\
No\
\
***Explanation:***\
***Testcase 1:*** Binary representation of 4 is 100, in which 0th bit from LSB is not set. So, answer is No.\
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
k=int(input())
if a[(len(a)-1)-k]=='1':
print('Yes')
else:
print('No')
```
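The same check can be done without building a binary string, using a shift and a mask; a small sketch:

```python
def kth_bit_set(n, k):
    # Shift the Kth bit (0-indexed from the LSB) into position 0 and mask it.
    return (n >> k) & 1 == 1

print('Yes' if kth_bit_set(4, 2) else 'No')    # Yes
print('Yes' if kth_bit_set(4, 0) else 'No')    # No
print('Yes' if kth_bit_set(500, 3) else 'No')  # No
```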
### [Rightmost different bit](https://practice.geeksforgeeks.org/problems/rightmost-different-bit/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***Given two numbers M and N. The task is to find the position of rightmost different bit in binary representation of numbers.***
***Input:***\
The input line contains T, denoting the number of testcases. Each testcase follows. First line of each testcase contains two space separated integers M and N.
***Output:***\
For each testcase in new line, print the position of rightmost different bit in binary representation of numbers. If both M and N are same then print -1 in this case.
***Constraints:***\
1 <= T <= 100\
1 <= M <= 10^3\
1 <= N <= 10^3
***Example:***\
***Input:***\
2\
11 9\
52 4
***Output:***\
2\
5
***Explanation:***\
***Testcase 1:*** Binary representations of the given numbers are 1011 and 1001; the 2nd bit from the right differs.
```
for _ in range(int(input())):
a,c=input().split()
a=bin(int(a))[2:]
c=bin(int(c))[2:]
an=len(a)
cn=len(c)
if an!=cn:
if (an-cn)>0:
c='0'*(an-cn)+c
else:
a='0'*(cn-an)+a
k=len(a)
for i in range(k):
if a[k-1-i]!=c[k-1-i]:
print(i+1)
break
else:
print(-1)
```
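An equivalent bitwise sketch: XOR the two numbers, isolate the lowest set bit with `x & -x`, and read its 1-indexed position from `bit_length()`:

```python
def rightmost_diff_bit(m, n):
    x = m ^ n
    if x == 0:
        return -1                   # identical numbers
    return (x & -x).bit_length()    # position of lowest set bit, 1-indexed

print(rightmost_diff_bit(11, 9))   # 2
print(rightmost_diff_bit(52, 4))   # 5
print(rightmost_diff_bit(7, 7))    # -1
```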
### [Number is sparse or not](https://practice.geeksforgeeks.org/problems/number-is-sparse-or-not/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***Given a number N, check whether it is sparse or not. A number is said to be a sparse number if in the binary representation of the number no two or more consecutive bits are set.***
***Input:***\
The first line of input contains an integer T denoting the number of test cases. The first line of each test case is number 'N'.
***Output:***\
Print '1' if the number is sparse and '0' if the number is not sparse.
***Constraints:***\
1 <= T <= 100\
1 <= N <= 10^3
***Example:***\
***Input:***\
2\
2\
3
***Output:***\
1\
0
***Explanation:***\
***Testcase 1:*** Binary Representation of 2 is 10, which is not having consecutive set bits. So, it is sparse number.\
***Testcase 2:*** Binary Representation of 3 is 11, which is having consecutive set bits in it. So, it is not a sparse number.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
if a.count('11')>0:
print(0)
else:
print(1)
```
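A bitwise one-liner that avoids the string conversion: a number has two adjacent set bits exactly when `n & (n >> 1)` is non-zero.

```python
def is_sparse(n):
    # Adjacent set bits overlap when n is ANDed with itself shifted right by 1.
    return int(n & (n >> 1) == 0)

print(is_sparse(2))  # 1
print(is_sparse(3))  # 0
```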
### [Gray Code](https://practice.geeksforgeeks.org/problems/gray-code/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***You are given a decimal number n. You need to find the gray code of the number n and convert it into decimal.***
***Input:***\
The first line contains an integer T, the number of test cases. For each test case, there is an integer n denoting the number
***Output:***\
For each test case, the output is gray code equivalent of n.
***Constraints:***\
1 <= T <= 100\
0 <= n <= 10^8
***Example:***\
***Input***\
2\
7\
10
***Output***\
4\
15
***Explanation:***\
***Testcase1:*** 7 is represented as 111 in binary form. The gray code of 111 is 100, in the binary form whose decimal equivalent is 4.
***Testcase2:*** 10 is represented as 1010 in binary form. The gray code of 1010 is 1111, in the binary form whose decimal equivalent is 15.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
c=a[0]
for i in range(1,len(a)):
k=(int(a[i])+int(a[i-1]))
if k==0 or k==1:
c+=str(k)
else:
c+='0'
print(int(c,2))
```
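The loop above applies the usual rule bit by bit; the whole conversion collapses to a single XOR with a right shift:

```python
def to_gray(n):
    # Each gray bit is the XOR of a binary bit with its left neighbour.
    return n ^ (n >> 1)

print(to_gray(7))   # 4
print(to_gray(10))  # 15
```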
### [Gray to Binary equivalent](https://practice.geeksforgeeks.org/problems/gray-to-binary-equivalent/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***Given N in Gray code equivalent. Find its binary equivalent.***
***Input:***\
The first line contains an integer T, number of test cases. For each test cases, there is an integer N denoting the number in gray equivalent.
***Output:***\
For each test case, in a new line, the output is the decimal equivalent number N of binary form.
***Constraints:***\
1 <= T <= 100\
0 <= n <= 10^8
***Example:***\
***Input***\
2\
4\
15
***Output***\
7\
10
***Explanation:***\
***Testcase1.*** 4 is represented as 100 and its binary equivalent is 111 whose decimal equivalent is 7.\
***Testcase2.*** 15 is represented as 1111 and its binary equivalent is 1010 i.e. 10 in decimal.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
c=a[0]
for i in range(1,len(a)):
k=(int(a[i])+int(c[i-1]))
if k==0 or k==1:
c+=str(k)
else:
c+='0'
print(int(c,2))
```
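The inverse conversion also has a compact bitwise form: repeatedly XOR the gray code into an accumulator while shifting it right, which undoes the `n ^ (n >> 1)` encoding.

```python
def gray_to_binary(g):
    b = 0
    while g:
        b ^= g    # accumulate g ^ (g >> 1) ^ (g >> 2) ^ ...
        g >>= 1
    return b

print(gray_to_binary(4))   # 7
print(gray_to_binary(15))  # 10
```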
### [Check if a Integer is power of 8 or not](https://practice.geeksforgeeks.org/problems/check-if-a-integer-is-power-of-8-or-not/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Easy
- Marks: 2
***Given a positive integer N, The task is to find if it is a power of eight or not.***
***Input:***\
The first line of input contains an integer T denoting the number of test cases. Then T test cases follow. Each test case contains an integer N.
***Output:***\
In new line print "Yes" if it is a power of 8, else print "No".
***Constraints:***\
1<=T<=100\
1<=N<=10^18
***Example:***\
***Input:***\
2\
64\
75
***Output:***\
Yes\
No
```
for _ in range(int(input())):
n=int(input())
i=1
while 8**i<=n:
i+=1
if 8**(i-1)==n:
print('Yes')
else:
print('No')
```
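For N up to 10^18 the multiplication loop is fine, but a bitwise sketch runs in O(1): N must be a power of two *and* its single set bit must sit at a position divisible by 3.

```python
def is_power_of_8(n):
    # Power of two: exactly one set bit. Power of eight: that bit's
    # position (bit_length - 1) is a multiple of 3.
    return n > 0 and n & (n - 1) == 0 and (n.bit_length() - 1) % 3 == 0

print('Yes' if is_power_of_8(64) else 'No')  # Yes
print('Yes' if is_power_of_8(75) else 'No')  # No
```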
### [Is Binary Number Multiple of 3](https://practice.geeksforgeeks.org/problems/is-binary-number-multiple-of-3/0/?track=sp-bit-magic&batchId=152)
- Company Tags : Adobe, Amazon, Microsoft
- Difficulty: Medium
- Marks: 4
***Given a binary number, write a program that prints 1 if given binary number is a multiple of 3. Else prints 0. The given number can be big upto 2^100. It is recommended to finish the task using one traversal of input binary string.***
***Input:***\
The first line contains T denoting the number of testcases. Then follows description of testcases.
Each case contains a string containing 0's and 1's.
***Output:***\
For each test case, output a 1 if string is multiple of 3, else 0.
***Constraints:***\
1<=T<=100\
1<=Length of Input String<=100
***Example:***\
***Input:***\
2\
011\
100
***Output:***\
1\
0
```
for _ in range(int(input())):
n=int(input(),2)
if n%3==0:
print(1)
else:
print(0)
```
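Converting with `int(s, 2)` works, but for inputs up to 2^100 the statement asks for a single traversal. A sketch of the classic trick: since 2 ≡ -1 (mod 3), a set bit at position i contributes (-1)^i, so the number is divisible by 3 iff the difference between set-bit counts at even and odd positions is.

```python
def is_mult_of_3(bits):
    diff = 0
    # Position 0 is the rightmost (least significant) bit.
    for i, b in enumerate(reversed(bits)):
        if b == '1':
            diff += 1 if i % 2 == 0 else -1
    return int(diff % 3 == 0)

print(is_mult_of_3('011'))  # 1 (binary 3)
print(is_mult_of_3('100'))  # 0 (binary 4)
```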
### [Reverse Bits](https://practice.geeksforgeeks.org/problems/reverse-bits/0/?track=sp-bit-magic&batchId=152)
- Company Tags : Amazon, Cisco, HCL, Nvidia, Qualcomm
- Difficulty: Easy
- Marks: 2
***Given a 32 bit number x, reverse its binary form and print the answer in decimal.***
***Input:***\
The first line of input consists T denoting the number of test cases. T testcases follow. Each test case contains a single 32 bit integer
***Output:***\
For each test case, in a new line, print the reverse of integer.
***Constraints:***\
1 <= T <= 100\
0 <= x <= 4294967295
***Example:***\
***Input:***\
2\
1\
5
***Output:***\
2147483648\
2684354560
***Explanation:***\
***Testcase1:***\
00000000000000000000000000000001 =1\
10000000000000000000000000000000 =2147483648
```
for _ in range(int(input())):
a=bin(int(input()))[2:][::-1]
a+='0'*(32-len(a))
print(int(a,2))
```
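The string reversal above works; an equivalent bitwise sketch shifts bits out of one end and into the other:

```python
def reverse_bits32(x):
    r = 0
    for _ in range(32):
        r = (r << 1) | (x & 1)  # append x's lowest bit to r
        x >>= 1
    return r

print(reverse_bits32(1))  # 2147483648
print(reverse_bits32(5))  # 2684354560
```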
### [Swap all odd and even bits](https://practice.geeksforgeeks.org/problems/swap-all-odd-and-even-bits/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Easy
- Marks: 2
***Given an unsigned integer N. The task is to swap all odd bits with even bits. For example, if the given number is 23 (00010111), it should be converted to 43(00101011). Here, every even position bit is swapped with adjacent bit on right side(even position bits are highlighted in binary representation of 23), and every odd position bit is swapped with adjacent on left side.***
***Input:***\
The first line of input contains T, denoting the number of testcases. Each testcase contains single line.
***Output:***\
For each testcase in new line, print the converted number.
***Constraints:***\
1 ≤ T ≤ 100\
1 ≤ N ≤ 100
***Example:***\
***Input:***\
2\
23\
2
***Output:***\
43\
1
***Explanation:***\
***Testcase 1:*** Binary representation of the given number is 00010111; after swapping it becomes 00101011.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
if len(a)%4!=0:
a='0'*(4-len(a)%4)+a
s=''
for i,j in zip(a[1::2],a[::2]):
s=s+i+j
print(int(s,2))
# Bit Difference summed over all ordered pairs (i, j): pairwise O(n^2) approach
def f(a,c):
a=bin(a)[2:]
c=bin(c)[2:]
an=len(a)
cn=len(c)
if an!=cn:
if (an-cn)>0:
c='0'*(an-cn)+c
else:
a='0'*(cn-an)+a
count=0
for i,j in zip(a,c):
if i !=j:
count+=1
return count
for _ in range(int(input())):
count=0
n=int(input())
a=list(map(int,input().split()))
for i in a:
for j in a:
count+=f(i,j)
print(count)
# Same pairwise bit-difference sum via per-bit counting: O(32*n), printed modulo 1e9+7
if __name__ == '__main__':
n = int(input())
while n != 0:
p = int(input())
lis = [int(x) for x in input().split()]
bits = 0
for i in range(0, 32):
k = 0
for j in range(0, len(lis)):
if lis[j] & (1 << i):
k = k + 1
bits += k * (len(lis) - k)
print(2 * bits % 1000000007)
n = n-1
```
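The first solution in the block above swaps bits via strings; the classic mask-based sketch does it in O(1) for 32-bit values: keep even-position bits with `0x55555555`, odd-position bits with `0xAAAAAAAA`, and shift each group toward the other.

```python
def swap_odd_even_bits(n):
    even = n & 0x55555555   # bits at even positions (0, 2, 4, ...)
    odd = n & 0xAAAAAAAA    # bits at odd positions (1, 3, 5, ...)
    return (even << 1) | (odd >> 1)

print(swap_odd_even_bits(23))  # 43
print(swap_odd_even_bits(2))   # 1
```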
### [Bleak Numbers](https://practice.geeksforgeeks.org/problems/bleak-numbers/0/?track=sp-bit-magic&batchId=152)
- Company Tags : SAP Labs
- Difficulty: Medium
- Marks: 4
***Given an integer, check whether it is Bleak or not.***
***A number ‘n’ is called Bleak if it cannot be represented as sum of a positive number x and set bit count in x, i.e., x + [countSetBits(x)](http://www.geeksforgeeks.org/count-set-bits-in-an-integer/) is not equal to n for any non-negative number x.***
***Examples:***
3 is not Bleak as it can be represented as 2 + countSetBits(2).
4 is Bleak as it cannot be represented as x + countSetBits(x) for any number x.
***Input:***\
The first line of input contains an integer T denoting the number of test cases. Then T test cases follow. Each test case consists of a single line. The first line of each test case contains a single integer N to be checked for Bleak.
***Output:***\
Print "1" or "0" (without quotes) depending on whether the number is Bleak or not.
***Constraints:***\
1 <= T <= 1000\
1 <= N <= 10000
***Example:***\
***Input:***\
3\
4\
167\
3
***Output:***\
1\
0\
0
```
for _ in range(int(input())):
    n=int(input())
    for i in range(n+1):  # a step of 2 would skip odd x (e.g. x=9 for n=11)
        if (i+bin(i).count('1'))==n:
            print(0)
            break
    else:
        print(1)
# Scratch cell: maximum XOR over all pairs (brute force)
a=list(map(int,input().split()))
xor=0
for i in range(len(a)):
for j in range(i+1,len(a)):
if a[i]^a[j]>xor:
xor=a[i]^a[j]
print(xor)
```
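Since countSetBits(x) can be at most the bit length of N, any x with x + countSetBits(x) = N must lie within bit_length(N) of N; a sketch that shrinks the search accordingly:

```python
def is_bleak(n):
    # Only candidates within n.bit_length() of n can reach n.
    for x in range(max(0, n - n.bit_length()), n):
        if x + bin(x).count('1') == n:
            return 0   # representable, so not Bleak
    return 1           # Bleak

print(is_bleak(4))    # 1
print(is_bleak(167))  # 0
print(is_bleak(3))    # 0
```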
|
github_jupyter
|
___
<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
# Introduction to Forecasting
In the previous section we fit various smoothing models to existing data. The purpose behind this is to predict what happens next.<br>
What's our best guess for next month's value? For the next six months?
In this section we'll look to extend our models into the future. First we'll divide known data into training and testing sets, and evaluate the performance of a trained model on known test data.
* Goals
* Compare a Holt-Winters forecasted model to known data
* Understand <em>stationarity</em>, <em>differencing</em> and <em>lagging</em>
* Introduce ARIMA and describe next steps
### <font color=blue>Simple Exponential Smoothing / Simple Moving Average</font>
This is the simplest to forecast. $\hat{y}$ is equal to the most recent value in the dataset, and the forecast plot is simply a horizontal line extending from the most recent value.
### <font color=blue>Double Exponential Smoothing / Holt's Method</font>
This model takes trend into account. Here the forecast plot is still a straight line extending from the most recent value, but it has slope.
### <font color=blue>Triple Exponential Smoothing / Holt-Winters Method</font>
This model has (so far) the "best" looking forecast plot, as it takes seasonality into account. When we expect regular fluctuations in the future, this model attempts to map the seasonal behavior.
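To make the shapes of these forecasts concrete, here is a minimal NumPy sketch (not the statsmodels implementation used below) of simple and double exponential smoothing; the `alpha` and `beta` values are arbitrary illustration choices:

```python
import numpy as np

def ses_forecast(y, alpha=0.5, steps=5):
    # Simple exponential smoothing: the forecast is the final smoothed level,
    # repeated for every future step (a horizontal line)
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return np.full(steps, level)

def holt_forecast(y, alpha=0.5, beta=0.3, steps=5):
    # Holt's linear method adds a trend term, so the forecast is a sloped line
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend * np.arange(1, steps + 1)
```

Running both on the same upward-trending series shows a flat line for the first and a rising line for the second, matching the descriptions above.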
## Forecasting with the Holt-Winters Method
For this example we'll use the same airline_passengers dataset, and we'll split the data into 108 training records and 36 testing records. Then we'll evaluate the performance of the model.
```
import pandas as pd
import numpy as np
%matplotlib inline
df = pd.read_csv('../Data/airline_passengers.csv',index_col='Month',parse_dates=True)
df.index.freq = 'MS'
df.head()
df.tail()
df.info()
```
## Train Test Split
```
train_data = df.iloc[:108] # Goes up to but not including 108
test_data = df.iloc[108:]
```
## Fitting the Model
```
from statsmodels.tsa.holtwinters import ExponentialSmoothing
fitted_model = ExponentialSmoothing(train_data['Thousands of Passengers'],trend='mul',seasonal='mul',seasonal_periods=12).fit()
```
## Evaluating Model against Test Set
```
# YOU CAN SAFELY IGNORE WARNINGS HERE!
# THIS WILL NOT AFFECT YOUR FORECAST, IT'S JUST SOMETHING STATSMODELS NEEDS TO UPDATE UPON NEXT RELEASE.
test_predictions = fitted_model.forecast(36).rename('HW Forecast')
test_predictions
train_data['Thousands of Passengers'].plot(legend=True,label='TRAIN')
test_data['Thousands of Passengers'].plot(legend=True,label='TEST',figsize=(12,8));
train_data['Thousands of Passengers'].plot(legend=True,label='TRAIN')
test_data['Thousands of Passengers'].plot(legend=True,label='TEST',figsize=(12,8))
test_predictions.plot(legend=True,label='PREDICTION');
train_data['Thousands of Passengers'].plot(legend=True,label='TRAIN')
test_data['Thousands of Passengers'].plot(legend=True,label='TEST',figsize=(12,8))
test_predictions.plot(legend=True,label='PREDICTION',xlim=['1958-01-01','1961-01-01']);
```
## Evaluation Metrics
```
from sklearn.metrics import mean_squared_error,mean_absolute_error
mean_absolute_error(test_data,test_predictions)
mean_squared_error(test_data,test_predictions)
np.sqrt(mean_squared_error(test_data,test_predictions))
test_data.describe()
```
## Forecasting into Future
```
final_model = ExponentialSmoothing(df['Thousands of Passengers'],trend='mul',seasonal='mul',seasonal_periods=12).fit()
forecast_predictions = final_model.forecast(36)
df['Thousands of Passengers'].plot(figsize=(12,8))
forecast_predictions.plot();
```
# Stationarity
Time series data is said to be <em>stationary</em> if it does <em>not</em> exhibit trends or seasonality. That is, the mean, variance and covariance should be the same for any segment of the series, and are not functions of time.<br>
The file <tt>samples.csv</tt> contains made-up datasets that illustrate stationary and non-stationary data.
<div class="alert alert-info"><h3>For Further Reading:</h3>
<strong>
<a href='https://otexts.com/fpp2/stationarity.html'>Forecasting: Principles and Practice</a></strong> <font color=black>Stationarity and differencing</font></div>
```
df2 = pd.read_csv('../Data/samples.csv',index_col=0,parse_dates=True)
df2.head()
df2['a'].plot(ylim=[0,100],title="STATIONARY DATA").autoscale(axis='x',tight=True);
df2['b'].plot(ylim=[0,100],title="NON-STATIONARY DATA").autoscale(axis='x',tight=True);
df2['c'].plot(ylim=[0,10000],title="MORE NON-STATIONARY DATA").autoscale(axis='x',tight=True);
```
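As a quick, informal check (not the formal test we cover later), comparing rolling means of a synthetic stationary series against a trending one shows how the mean drifts when data is non-stationary; the window size and noise levels here are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 300
# Constant mean and variance: stationary
stationary = pd.Series(rng.normal(50, 5, n))
# A linear trend plus noise: non-stationary
trending = pd.Series(np.linspace(0, 100, n) + rng.normal(0, 5, n))

# A rolling mean that drifts over time is a quick visual hint of non-stationarity
drift_stat = stationary.rolling(50).mean().dropna()
drift_trend = trending.rolling(50).mean().dropna()
```

Plotting `drift_stat` and `drift_trend` side by side makes the contrast obvious: one hovers around a constant level, the other climbs steadily.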
In an upcoming section we'll learn how to test for stationarity.
# Differencing
## First Order Differencing
Non-stationary data can be made to look stationary through <em>differencing</em>. A simple method called <em>first order differencing</em> calculates the difference between consecutive observations.
$y^{\prime}_t = y_t - y_{t-1}$
In this way a linear trend is transformed into a horizontal set of values.
```
# Calculate the first difference of the non-stationary dataset "b"
df2['d1b'] = df2['b'] - df2['b'].shift(1)
df2[['b','d1b']].head()
```
Notice that differencing eliminates one or more rows of data from the beginning of the series.
```
df2['d1b'].plot(title="FIRST ORDER DIFFERENCE").autoscale(axis='x',tight=True);
```
An easier way to perform differencing on a pandas Series or DataFrame is to use the built-in <tt>.diff()</tt> method:
```
df2['d1b'] = df2['b'].diff()
df2['d1b'].plot(title="FIRST ORDER DIFFERENCE").autoscale(axis='x',tight=True);
```
### Forecasting on first order differenced data
When forecasting with first order differences, the predicted values have to be added back in to the original values in order to obtain an appropriate forecast.
Let's say that the next five forecasted values after applying some model to <tt>df['d1b']</tt> are <tt>[7,-2,5,-1,12]</tt>. We need to perform an <em>inverse transformation</em> to obtain values in the scale of the original time series.
```
# For our example we need to build a forecast series from scratch
# First determine the most recent date in the training set, to know where the forecast set should start
df2[['b']].tail(3)
# Next set a DateTime index for the forecast set that extends 5 periods into the future
idx = pd.date_range('1960-01-01', periods=5, freq='MS')
z = pd.DataFrame([7,-2,5,-1,12],index=idx,columns=['Fcast'])
z
```
The idea behind an inverse transformation is to start with the most recent value from the training set, and to add a cumulative sum of Fcast values to build the new forecast set. For this we'll use the pandas <tt>.cumsum()</tt> function which does the reverse of <tt>.diff()</tt>
```
z['forecast']=df2['b'].iloc[-1] + z['Fcast'].cumsum()
z
df2['b'].plot(figsize=(12,5), title="FORECAST").autoscale(axis='x',tight=True)
z['forecast'].plot();
```
## Second order differencing
Sometimes the first difference is not enough to attain stationarity, particularly if the trend is not linear. We can difference the already differenced values again to obtain a second order set of values.
$\begin{split}y_{t}^{\prime\prime} &= y_{t}^{\prime} - y_{t-1}^{\prime} \\
&= (y_t - y_{t-1}) - (y_{t-1} - y_{t-2}) \\
&= y_t - 2y_{t-1} + y_{t-2}\end{split}$
```
# First we'll look at the first order difference of dataset "c"
df2['d1c'] = df2['c'].diff()
df2['d1c'].plot(title="FIRST ORDER DIFFERENCE").autoscale(axis='x',tight=True);
```
Now let's apply a second order difference to dataset "c".
```
# We can do this from the original time series in one step
df2['d2c'] = df2['c'].diff().diff()
df2[['c','d1c','d2c']].head()
df2['d2c'].plot(title="SECOND ORDER DIFFERENCE").autoscale(axis='x',tight=True);
```
<div class="alert alert-info"><strong>NOTE: </strong>This is different from <font color=black><tt>df2['c'].diff(2)</tt></font>, which would provide a first order difference spaced 2 lags apart.<br>
We'll use this technique later to address seasonality.</div>
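A tiny sketch makes the distinction explicit: on a quadratic series the second order difference is constant, while the 2-lag first difference keeps growing.

```python
import pandas as pd

s = pd.Series([1, 4, 9, 16, 25])          # y_t = t^2
second_order = s.diff().diff()            # y_t - 2*y_{t-1} + y_{t-2}
lag2_first_order = s.diff(2)              # y_t - y_{t-2}
```

Here `second_order` settles to a constant 2, whereas `lag2_first_order` yields 8, 12, 16 and keeps increasing.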
### Forecasting on second order differenced data
As before, the prediction values have to be added back in to obtain an appropriate forecast.
To invert the second order transformation and obtain forecasted values for $\hat y_t$ we have to solve the second order equation for $y_t$:
$\begin{split}y_{t}^{\prime\prime} &= y_t - 2y_{t-1} + y_{t-2} \\
y_t &= y_{t}^{\prime\prime} + 2y_{t-1} - y_{t-2}\end{split}$
Let's say that the next five forecasted values after applying some model to <tt>df['d2c']</tt> are <tt>[7,-2,5,-1,12]</tt>.
```
# For our example we need to build a forecast series from scratch
idx = pd.date_range('1960-01-01', periods=5, freq='MS')
z = pd.DataFrame([7,-2,5,-1,12],index=idx,columns=['Fcast'])
z
```
One way to invert a 2nd order transformation is to follow the formula above:
```
forecast = []
# Capture the two most recent values from the training set
v2,v1 = df2['c'].iloc[-2:]
# Apply the formula
for i in z['Fcast']:
newval = i + 2*v1 - v2
forecast.append(newval)
v2,v1 = v1,newval
z['forecast']=forecast
z
```
Another, perhaps more straightforward method is to create a first difference set from the second, then build the forecast set from the first difference. We'll again use the pandas <tt>.cumsum()</tt> function which does the reverse of <tt>.diff()</tt>
```
# Add the most recent first difference from the training set to the Fcast cumulative sum
z['firstdiff'] = (df2['c'].iloc[-1]-df2['c'].iloc[-2]) + z['Fcast'].cumsum()
# Now build the forecast values from the first difference set
z['forecast'] = df2['c'].iloc[-1] + z['firstdiff'].cumsum()
z[['Fcast','firstdiff','forecast']]
df2['c'].plot(figsize=(12,5), title="FORECAST").autoscale(axis='x',tight=True)
z['forecast'].plot();
```
<div class="alert alert-danger"><strong>NOTE:</strong> statsmodels has a built-in differencing tool:<br>
<tt><font color=black> from statsmodels.tsa.statespace.tools import diff<br><br>
df2['d1'] = diff(df2['b'],k_diff=1)</font></tt><br><br>
that performs the same first order differencing operation shown above. We chose not to use it here because seasonal differencing is somewhat complicated. To difference based on 12 lags, the code would be<br><br>
<tt><font color=black> df2['d12'] = diff(df2['b'],k_diff=0,k_seasonal_diff=1,seasonal_periods=12)
</font></tt><br><br>
whereas with pandas it's simply<br><br>
<tt><font color=black> df2['d12'] = df2['b'].diff(12)
</font></tt>
</div>
## Lagging
Also known as "backshifting", lagging notation reflects the value of $y$ at a prior point in time. This is a useful technique for performing <em>regressions</em> as we'll see in upcoming sections.
\begin{split}L{y_t} = y_{t-1} & \text{ one lag shifts the data back one period}\\
L^{2}{y_t} = y_{t-2} & \text{ two lags shift the data back two periods} \end{split}
<br><br>
<table>
<tr><td>$y_t$</td><td>6</td><td>8</td><td>3</td><td>4</td><td>9</td><td>2</td><td>5</td></tr>
<tr><td>$y_{t-1}$</td><td></td><td>6</td><td>8</td><td>3</td><td>4</td><td>9</td><td>2</td></tr>
<tr><td>$y_{t-2}$</td><td></td><td></td><td>6</td><td>8</td><td>3</td><td>4</td><td>9</td></tr>
</table>
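In pandas, lagging is just the <tt>.shift()</tt> method; a short sketch reproducing the idea of the table above:

```python
import pandas as pd

y = pd.Series([6, 8, 3, 4, 9, 2, 5])
lag1 = y.shift(1)   # L y_t   = y_{t-1}: data moved back one period
lag2 = y.shift(2)   # L^2 y_t = y_{t-2}: data moved back two periods
```

The first `k` entries of a `k`-lagged series are `NaN`, mirroring the blank cells in the table.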
# Introduction to ARIMA Models
We'll investigate a variety of different forecasting models in upcoming sections, but they all stem from ARIMA.
<strong>ARIMA</strong>, or <em>Autoregressive Integrated Moving Average</em> is actually a combination of 3 models:
* <strong>AR(p)</strong> Autoregression - a regression model that utilizes the dependent relationship between a current observation and observations over a previous period
* <strong>I(d)</strong> Integration - uses differencing of observations (subtracting an observation from an observation at the previous time step) in order to make the time series stationary
* <strong>MA(q)</strong> Moving Average - a model that uses the dependency between an observation and a residual error from a moving average model applied to lagged observations.
<strong>Moving Averages</strong> we've already seen with EWMA and the Holt-Winters Method.<br>
<strong>Integration</strong> will apply differencing to make a time series stationary, which ARIMA requires.<br>
<strong>Autoregression</strong> is explained in detail in the next section. Here we're going to correlate a current time series with a lagged version of the same series.<br>
Once we understand the components, we'll investigate how to best choose the $p$, $d$ and $q$ values required by the model.
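As a preview of the autoregression idea, here is a hedged sketch that simulates an AR(1) process (the 0.8 coefficient is an arbitrary illustration choice) and correlates the series with its own one-lag copy:

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 2000, 0.8
x = np.zeros(n)
for t in range(1, n):
    # Each value depends linearly on the previous one, plus noise
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Correlate the series with a one-lag (backshifted) copy of itself
lag1_corr = np.corrcoef(x[1:], x[:-1])[0, 1]
```

For an AR(1) process the lag-1 autocorrelation is close to the coefficient $\phi$, which is exactly the dependency the AR component of ARIMA exploits.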
### Great, let's get started!
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm import tqdm
from hiv_patient import HIVPatient
from buffer import Buffer
```
### Create dataset
```
patient = HIVPatient(clipping=False,logscale=False)
FQI_buffer = Buffer(50000)
for j in tqdm(range(30)):
    s = patient.reset(mode="unhealthy")
    for i in range(200):
        a = np.random.choice(4)
        s_, r, d, _ = patient.step(a)
        FQI_buffer.append(s, a, r, s_, d)
        s = s_
```
### FQI
```
from sklearn.ensemble import RandomForestRegressor
from typing import Optional,List, Callable
from sklearn.base import BaseEstimator
def estimator_factory(*args, **kwargs):
    return RandomForestRegressor(*args, **kwargs)

def update(memory: Buffer,
           gamma: float = 0.98,
           estimator: Optional[BaseEstimator] = None,
           estimator_factory: Callable = estimator_factory):
    states, actions, rewards, next_state, done = memory.get()
    actions = np.expand_dims(actions, axis=1)
    target = np.expand_dims(rewards, axis=1)
    if estimator is not None:
        # Bootstrap the target with the greedy Q-value of the next state
        q_values = np.zeros((len(rewards), 4))
        for a in range(4):
            actions_ = a * np.ones((len(rewards), 1))
            X = np.concatenate((next_state, actions_), axis=1)
            q_values[:, a] = estimator.predict(X)
        qmax = np.expand_dims(np.max(q_values, axis=1), axis=1)
        target += gamma * qmax * (1 - np.expand_dims(done, axis=1))
    if estimator is None:
        estimator = estimator_factory()
    # Fit Q(s, a) on the concatenated state-action pairs
    data = np.concatenate((states, actions), axis=1)
    estimator.fit(data, target)
    return estimator
```
### First training run
```
estimator = None
for _ in tqdm(range(100)):
    estimator = update(FQI_buffer, estimator=estimator)
```
### Second training run
```
for j in tqdm(range(30)):
    s = patient.reset(mode="unhealthy")
    for step in range(200):
        if np.random.random() < 0.15:
            action = np.random.choice(4)
        else:
            if estimator is not None:
                # Evaluate Q(s, a) for each action and act greedily
                greedy = np.zeros((4, 2))
                for a in range(4):
                    sta = np.expand_dims(s, axis=1)
                    act = np.expand_dims(np.array([a]), axis=1)
                    X = np.concatenate((sta, act), axis=0).T
                    q = estimator.predict(X)
                    greedy[a, 0], greedy[a, 1] = a, q
                # argmax over the Q-values (column 1), not the action indices
                action = greedy[np.argmax(greedy[:, 1]), 0]
            else:
                action = np.random.choice(4)
        s_, r, d, _ = patient.step(int(action))
        FQI_buffer.append(s, int(action), r, s_, d)
        s = s_

for _ in tqdm(range(100)):
    estimator = update(FQI_buffer, estimator=estimator)
```
### Plotting the results
```
s = patient.reset(mode="unhealthy")
T1, T2, T1_, T2_, V, E = (np.zeros(200) for _ in range(6))
actions = []
for step in tqdm(range(200)):
    T1[step], T2[step], T1_[step], T2_[step], V[step], E[step] = s
    greedy = np.zeros((4, 2))
    for a in range(4):
        sta = np.expand_dims(s, axis=1)
        act = np.expand_dims(np.array([a]), axis=1)
        X = np.concatenate((sta, act), axis=0).T
        q = estimator.predict(X)
        greedy[a, 0], greedy[a, 1] = a, q
    # Pick the action with the highest predicted Q-value
    action = greedy[np.argmax(greedy[:, 1]), 0]
    actions += [action]
    s_, r, d, _ = patient.step(int(action))
    FQI_buffer.append(s, int(action), r, s_, d)
    s = s_
```
Post-processing
```
actions
plt.plot(np.log10(E))
plt.plot(np.log10(T1))
FQI_buffer.get()
np.array(FQI_buffer.actions).shape
_, _, rewards, _, _ = FQI_buffer.get()
```
# Generative models - variational auto-encoders
### Author: Philippe Esling (esling@ircam.fr)
In this course we will cover
1. A [quick recap](#recap) on simple probability concepts (and in TensorFlow)
2. A formal introduction to [Variational Auto-Encoders](#vae) (VAEs)
3. An explanation of the [implementation](#implem) of VAEs
4. Some [modifications and tips to improve the reconstruction](#improve) of VAEs **(exercise)**
<a id="recap"> </a>
## Quick recap on probability
The field of probability aims to model random or uncertain events. Hence, a random variable $X$ denotes a quantity that is uncertain, such as the result of an experiment (flipping a coin) or the measurement of an uncertain property (measuring the temperature). If we observe several occurrences of the variable $\{\mathbf{x}_{i}\}_{i=1}$, it might take different values on each occasion, but some values may occur more often than others. This information is captured by the _probability distribution_ $p(\mathbf{x})$ of the random variable.
To understand these concepts graphically, we will rely on the `Tensorflow Probability` package.
```
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
### Probability distributions
#### Discrete distributions
Let $\mathbf{x}$ be a discrete random variable with range $R_{X}=\{x_1,\cdots,x_n\}$ (finite or countably infinite). The function
\begin{equation}
p_{X}(x_{i})=p(X=x_{i}), \forall i\in\{1,\cdots,n\}
\end{equation}
is called the probability mass function (PMF) of $X$.
Hence, the PMF defines the probabilities of all possible values for a random variable. The above notation expresses that the PMF is defined for the random variable $X$, so that $p_{X}(1)$ gives the probability that $X=1$. For discrete random variables, the PMF is also called the _probability distribution_. The PMF is a probability measure, therefore it satisfies all the corresponding properties
- $0 \leq p_{X}(x_i) \leq 1, \forall x_i$
- $\sum_{x_i\in R_{X}} p_{X}(x_i) = 1$
- $\forall A \subset R_{X}, p(X \in A)=\sum_{x_a \in A}p_{X}(x_a)$
A very simple example of a discrete distribution is the `Bernoulli` distribution. With this distribution, we can model a coin flip: if we throw the coin a very large number of times, we expect to see on average an equal number of _heads_ and _tails_.
```
bernoulli = tfp.distributions.Bernoulli(probs=0.5)
samples = bernoulli.sample(10000)
sns.distplot(samples)
plt.title("Samples from a Bernoulli (coin toss)")
plt.show()
```
However, we can also _sample_ from the distribution to have individual values of a single throw. In that case, we obtain a series of separate events that _follow_ the distribution
```
vals = ['heads', 'tails']
samples = bernoulli.sample(10)
for s in samples:
    print('Coin lands on ' + vals[int(s)])
```
#### Continuous distributions
The same ideas apply to _continuous_ random variables, which can model for instance the height of human beings. If we try to guess the height of someone that we do not know, there is a higher probability that this person will be around 1m70, instead of 20cm or 3m. For the rest of this course, we will use the shorthand notation $p(\mathbf{x})$ for the distribution $p(\mathbf{x}=x_{i})$, which expresses for a real-valued random variable $\mathbf{x}$, evaluated at $x_{i}$, the probability that $\mathbf{x}$ takes the value $x_i$.
One notorious example of such distributions is the Gaussian (or Normal) distribution, which is defined as
\begin{equation}
p(x)=\mathcal{N}(\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}
\end{equation}
Similarly as before, we can observe the behavior of this distribution with the following code
```
normal = tfp.distributions.Normal(loc=0., scale=1.)
samples = normal.sample(10000)
sns.distplot(samples)
plt.title("Samples from a standard Normal")
plt.show()
```
### Comparing distributions (KL divergence)
$
\newcommand{\R}{\mathbb{R}}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\bx}{\bb{x}}
\newcommand{\by}{\bb{y}}
\newcommand{\bz}{\bb{z}}
\newcommand{\KL}[2]{\mathcal{D}_{\text{KL}}\left[#1 \| #2\right]}$
Originally defined in the field of information theory, the _Kullback-Leibler (KL) divergence_ (usually denoted $\KL{p(\bx)}{q(\bx)}$) is a dissimilarity measure between two probability distributions $p(\bx)$ and $q(\bx)$. In the view of information theory, it can be understood as the cost in number of bits necessary for coding samples from $p(\bx)$ by using a code optimized for $q(\bx)$ rather than the code optimized for $p(\bx)$. In the view of probability theory, it represents the amount of information lost when we use $q(\bx)$ to approximate the true distribution $p(\bx)$.
Given two probability distributions $p(\bx)$ and $q(\bx)$, the Kullback-Leibler divergence of $q(\bx)$ _from_ $p(\bx)$ is defined to be
\begin{equation}
\KL{p(\bx)}{q(\bx)}=\int_{\R} p(\bx) \log \frac{p(\bx)}{q(\bx)}d\bx
\end{equation}
Note that this dissimilarity measure is _asymmetric_; therefore, we have
\begin{equation}
\KL{p(\bx)}{q(\bx)}\neq \KL{q(\bx)}{p(\bx)}
\end{equation}
This asymmetry also describes an interesting behavior of the KL divergence, depending on the order in which it is evaluated: the KL divergence can act either as a _mode-seeking_ or as a _mode-covering_ measure.
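A small numerical sketch (plain NumPy rather than `tfp`, with two arbitrary discrete distributions) confirms the asymmetry:

```python
import numpy as np

def kl(p, q):
    # Discrete KL divergence D_KL[p || q] = sum_i p_i * log(p_i / q_i)
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.5]   # a fair coin
q = [0.9, 0.1]   # a heavily biased coin

forward = kl(p, q)   # ~0.511
reverse = kl(q, p)   # ~0.368
```

Both directions are positive, but they disagree, so the KL divergence cannot serve as a metric.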
<a id="vae"></a>
## Variational auto-encoders
As we have seen in the previous AE course, VAEs are also a form of generative model. However, they are defined from a sounder probabilistic perspective: the goal is to find the underlying probability distribution of the data $p(\mathbf{x})$ based on a set of examples $\mathbf{x}\in\mathbb{R}^{d_{x}}$. To do so, we consider *latent variables* defined in a lower-dimensional space $\mathbf{z}\in\mathbb{R}^{d_{z}}$ ($d_{z} \ll d_{x}$) with the joint probability distribution $p(\mathbf{x}, \mathbf{z}) = p(\mathbf{x} \vert \mathbf{z})p(\mathbf{z})$. Recovering $p(\mathbf{x})$ then requires the marginalization $p(\mathbf{x})=\int p(\mathbf{x}\vert\mathbf{z})p(\mathbf{z})d\mathbf{z}$. Unfortunately, for complex distributions this integral is intractable and cannot be found in closed form.
### Variational inference
The idea of *variational inference* (VI) allows to solve this problem through *optimization* by assuming a simpler approximate distribution $q_{\phi}(\mathbf{z}\vert\mathbf{x})\in\mathcal{Q}$ from a family $\mathcal{Q}$ of approximate densities. Hence, the goal is to minimize the difference between this approximation and the real distribution. Therefore, this turns into the optimization problem of minimizing the Kullback-Leibler (KL) divergence between the parametric approximation and the original density
$$
q_{\phi}^{*}(\mathbf{z}\vert \mathbf{x})=\text{argmin}_{q_{\phi}(\mathbf{z} \vert \mathbf{x})\in\mathcal{Q}} \mathcal{D}_{KL} \big[ q_{\phi}\left(\mathbf{z} \vert \mathbf{x}\right) \parallel p\left(\mathbf{z} \vert \mathbf{x}\right) \big]
\tag{2}
$$
By developing this KL divergence and re-arranging terms (the detailed development can be found in [3](#reference1)), we obtain
$$
\log{p(\mathbf{x})} - D_{KL} \big[ q_{\phi}(\mathbf{z} \vert \mathbf{x}) \parallel p(\mathbf{z} \vert \mathbf{x}) \big] =
\mathbb{E}_{\mathbf{z}} \big[ \log{p(\mathbf{x} \vert \mathbf{z})}\big] - D_{KL} \big[ q_{\phi}(\mathbf{z} \vert \mathbf{x}) \parallel p(\mathbf{z}) \big]
\tag{3}
$$
This formulation describes the quantity we want to maximize $\log p(\mathbf{x})$ minus the error we make by using an approximate $q$ instead of $p$. Therefore, we can optimize this alternative objective, called the *evidence lower bound* (ELBO)
$$
\begin{equation}
\mathcal{L}_{\theta, \phi} = \mathbb{E} \big[ \log{ p_\theta (\mathbf{x|z}) } \big] - \beta \cdot D_{KL} \big[ q_\phi(\mathbf{z|x}) \parallel p_\theta(\mathbf{z}) \big]
\end{equation}
\tag{4}
$$
We can see that this equation involves $q_{\phi}(\mathbf{z} \vert \mathbf{x})$ which *encodes* the data $\mathbf{x}$ into the latent representation $\mathbf{z}$ and a *decoder* $p(\mathbf{x} \vert \mathbf{z})$, which allows generating a data vector $\mathbf{x}$ given a latent configuration $\mathbf{z}$. Hence, this structure defines the *Variational Auto-Encoder* (VAE).
The VAE objective can be interpreted intuitively. The first term increases the likelihood of the data generated given a configuration of the latent, which amounts to minimize the *reconstruction error*. The second term represents the error made by using a simpler posterior distribution $q_{\phi}(\mathbf{z} \vert \mathbf{x})$ compared to the true prior $p_{\theta}(\mathbf{z})$. Therefore, this *regularizes* the choice of approximation $q$ so that it remains close to the true posterior distribution [3].
### Reparametrization trick
Now, while this formulation has some very interesting properties, it involves sampling operations, where we need to draw the latent point $\mathbf{z}$ from the distribution $q_{\phi}(\mathbf{z}\vert\mathbf{x})$. The simplest choice for this variational approximate posterior is a multivariate Gaussian with a diagonal covariance structure (which leads to independent Gaussians on every dimension, called the *mean-field* family) so that
$$
\text{log}q_\phi(\mathbf{z}\vert\mathbf{x}) = \text{log}\mathcal{N}(\mathbf{z};\mathbf{\mu}^{(i)},\mathbf{\sigma}^{(i)})
\tag{5}
$$
where the mean $\mathbf{\mu}^{(i)}$ and standard deviation $\mathbf{\sigma}^{(i)}$ of the approximate posterior are different for each input point and are produced by our encoder parametrized by its variational parameters $\phi$. Now the KL divergence between this distribution and a simple prior $\mathcal{N}(\mathbf{0}, \mathbf{I})$ can be very simply obtained with
$$
D_{KL} \big[ q_\phi(\mathbf{z|x}) \parallel \mathcal{N}(\mathbf{0}, \mathbf{I}) \big] = -\frac{1}{2}\sum_{j=1}^{D}\left(1+\text{log}((\sigma^{(i)}_j)^2)-(\mu^{(i)}_j)^2-(\sigma^{(i)}_j)^2\right)
\tag{6}
$$
While this looks convenient, we will still have to perform gradient descent through a sampling operation, which is non-differentiable. To solve this issue, we can use the *reparametrization trick*, which takes the sampling operation outside of the gradient flow by considering $\mathbf{z}^{(i)}=\mathbf{\mu}^{(i)}+\mathbf{\sigma}^{(i)}\odot\mathbf{\epsilon}^{(l)}$ with $\mathbf{\epsilon}^{(l)}\sim\mathcal{N}(\mathbf{0}, \mathbf{I})$
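A minimal NumPy sketch of the trick, with arbitrary $\mu=1.5$ and $\sigma=0.5$: the stochastic part is confined to $\epsilon$, while the transform itself stays deterministic and differentiable in $\mu$ and $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.5

# Sample eps from a fixed N(0, 1): the randomness lives outside the
# deterministic transform, so gradients can flow through mu and sigma
eps = rng.standard_normal(100_000)
z = mu + sigma * eps
```

The resulting samples follow $\mathcal{N}(\mu, \sigma)$ exactly as if we had sampled from it directly.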
<a id="implem"> </a>
## VAE implementation
As we have seen, VAEs can be simply implemented by decomposing the above series of operations into an `encoder` which represents the distribution $q_\phi(\mathbf{z}\vert\mathbf{x})$, from which we will sample some values $\tilde{\mathbf{z}}$ (using the reparametrization trick) and compute the Kullback-Leibler (KL) divergence. Then, we use these values as input to a `decoder` which represents the distribution $p_\theta(\mathbf{x}\vert\mathbf{z})$ so that we can produce a reconstruction $\tilde{\mathbf{x}}$ and compute the reconstruction error.
Therefore, we can define the VAE based on our previous implementation of the AE that we recall here
```
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
class AE(Model):
    def __init__(self, encoder, decoder, encoding_dim):
        super(AE, self).__init__()
        self.encoding_dim = encoding_dim
        self.encoder = encoder
        self.decoder = decoder

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded
```
In order to move to a probabilistic version, we need to add the latent space sampling mechanism, and change the behavior of our `call` function. This process is implemented in the following `VAE` class.
Note that we purposely rely on an implementation of the `encode` function where the `encoder` first produces an intermediate representation of size `encoding_dims`. Then, this representation goes through two separate layers for encoding $\mathbf{\mu}$ and $\mathbf{\sigma}$. This provides a clearer implementation, with the added bonus that we can ensure that $\mathbf{\sigma} > 0$
```
class VAE(AE):
    def __init__(self, encoder, decoder, encoding_dims, latent_dims):
        super(VAE, self).__init__(encoder, decoder, encoding_dims)
        self.latent_dims = latent_dims
        # Linear head for the mean (it can be negative); softplus keeps sigma > 0
        self.mu = layers.Dense(self.latent_dims)
        self.sigma = layers.Dense(self.latent_dims, activation='softplus')

    def encode(self, x):
        x = self.encoder(x)
        mu = self.mu(x)
        sigma = self.sigma(x)
        return mu, sigma

    def decode(self, z):
        return self.decoder(z)

    def call(self, x):
        # Encode the inputs
        z_params = self.encode(x)
        # Obtain latent samples and latent loss
        z_tilde, kl_div = self.latent(x, z_params)
        # Decode the samples
        x_tilde = self.decode(z_tilde)
        return x_tilde, kl_div

    def latent(self, x, z_params):
        n_batch = x.shape[0]
        # Retrieve mean and standard deviation
        mu, sigma = z_params
        # Re-parametrize: z = mu + sigma * eps, with eps ~ N(0, I)
        q = tfp.distributions.Normal(np.zeros(mu.shape[1]), np.ones(sigma.shape[1]))
        z = (sigma * tf.cast(q.sample(n_batch), 'float32')) + mu
        # Compute KL divergence; sigma is a standard deviation, not a log-variance
        kl_div = -0.5 * tf.reduce_sum(1 + tf.math.log(tf.pow(sigma, 2)) - tf.pow(mu, 2) - tf.pow(sigma, 2))
        kl_div = kl_div / n_batch
        return z, kl_div
```
Now the interesting aspect of VAEs is that we can define any parametric function as `encoder` and `decoder`, as long as we can optimize them. Here, we will rely on simple feed-forward neural networks, but these can be largely more complex (with limitations that we will discuss later in the tutorial).
```
def construct_encoder_decoder(nin, n_latent=16, n_hidden=512, n_classes=1):
    # Encoder network
    encoder = tf.keras.Sequential([
        layers.Flatten(),
        layers.Dense(n_hidden, activation='relu'),
        layers.Dense(n_hidden, activation='relu'),
        layers.Dense(n_hidden, activation='relu'),
    ])
    # Decoder network
    decoder = tf.keras.Sequential([
        layers.Dense(n_hidden, activation='relu'),
        layers.Dense(n_hidden, activation='relu'),
        layers.Dense(nin * n_classes, activation='sigmoid'),
        layers.Reshape((28, 28))
    ])
    return encoder, decoder
```
### Evaluating the error
In the definition of the `VAE` class, we directly included the computation of the $D_{KL}$ term to regularize our latent space. However, remember that the complete loss of equation (4) also contains a *reconstruction loss* which compares our reconstructed output to the original data.
While there are several options to compare the error between two elements, there are usually two preferred choices among the generative literature depending on how we consider our problem
1. If we consider each dimension (pixel) to be a binary unit (following a Bernoulli distribution), we can rely on the `binary cross entropy` between the two distributions
2. If we turn our problem to a set of classifications, where each dimension can belong to a given set of *intensity classes*, then we can compute the `multinomial loss` between the two distributions
In the following, we define the reconstruction term inside the `compute_loss` function (the choice would depend on the `num_classes` considered). However, as the `multinomial loss` requires a large computational overhead, and for the sake of simplicity, we will train all our first models by relying on the `binary cross entropy`
```
optimizer = tf.keras.optimizers.Adam(1e-4)

def compute_loss(model, x):
    x_tilde, kl_div = model(x)
    # The decoder ends with a sigmoid, so x_tilde holds probabilities, not logits
    cross_ent = -(x * tf.math.log(x_tilde + 1e-7) + (1 - x) * tf.math.log(1 - x_tilde + 1e-7))
    logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2])
    # Negative ELBO: reconstruction term minus the KL penalty
    return -tf.reduce_mean(logpx_z - kl_div)

@tf.function
def train_step(model, x, optimizer):
    """Executes one training step and returns the loss."""
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```
### Optimizing a VAE on a real dataset
For this tutorial, we are going to take a quick shot at a real-life problem by trying to train our VAEs on the `FashionMNIST` dataset. This dataset can be loaded natively in Keras through the `tensorflow.keras.datasets` module as follows
```
# Load (and eventually download) the dataset
(x_train, _), (x_test, _) = fashion_mnist.load_data()
# Normalize the dataset to the [0, 1] range
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
```
The `FashionMNIST` dataset is composed of simple 28x28 black and white images of different items of clothing (such as shoes, bags, pants and shirts). We define a simple function here to display one batch of the test set (note that we keep a fixed batch from the test set in order to evaluate the different variations that we will try in this tutorial).
```
def plot_batch(batch, nslices=8):
# Create one big image for plot
img = np.zeros(((batch.shape[1] + 1) * nslices, (batch.shape[2] + 1) * nslices))
for b in range(batch.shape[0]):
row = int(b / nslices); col = int(b % nslices)
r_p = row * batch.shape[1] + row; c_p = col * batch.shape[2] + col
img[r_p:(r_p+batch.shape[1]),c_p:(c_p+batch.shape[2])] = batch[b]
    im = plt.imshow(img, cmap='Greys', interpolation='nearest')
return im
# Select a random set of fixed data
fixed_batch = x_test[:64]
print(x_test.shape)
plt.figure(figsize=(10, 10))
plot_batch(fixed_batch);
```
Now, based on our proposed implementation, the optimization aspects are defined in the usual way
```
# Using Bernoulli or Multinomial loss
num_classes = 1
# Number of hidden and latent
n_hidden = 512
n_latent = 2
# Compute input dimensionality
nin = fixed_batch.shape[1] * fixed_batch.shape[2]
# Construct encoder and decoder
encoder, decoder = construct_encoder_decoder(nin, n_hidden = n_hidden, n_latent = n_latent, n_classes = num_classes)
# Build the VAE model
model = VAE(encoder, decoder, n_hidden, n_latent)
```
Now all that is left to do is train the model. We define here a `train_vae` function that we will reuse along the future implementations and variations of VAEs and flows. Note that this function is set to run for only a few `epochs` and, most importantly, *only considers a subsample of the full dataset at each epoch*. This option is just here so that you can test the different models quickly on any CPU or laptop.
```
def generate_and_save_images(model, epoch, test_sample):
predictions, _ = model(test_sample)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
epochs=50
test_sample = x_test[0:16, :, :]
for epoch in range(1, epochs + 1):
for train_x in x_train:
train_step(model, tf.expand_dims(train_x, axis=0), optimizer)
loss = tf.keras.metrics.Mean()
for test_x in x_test:
loss(compute_loss(model, tf.expand_dims(test_x, axis=0)))
elbo = -loss.result()
print('Epoch: {}, Test set ELBO: {}'.format(epoch, elbo))
generate_and_save_images(model, epoch, test_sample)
```
### Evaluating generative models
In order to evaluate our upcoming generative models, we will rely on the computation of the negative log-likelihood (NLL). The following `evaluate_nll_bpd` function is inspired by the [Sylvester flow repository](https://github.com/riannevdberg/sylvester-flows)
```
from scipy.special import logsumexp
def evaluate_nll_bpd(data_loader, model, batch = 500, R = 5):
# Set of likelihood tests
likelihood_test = []
# Go through dataset
for batch_idx, (x, _) in enumerate(data_loader):
for j in range(x.shape[0]):
a = []
for r in range(0, R):
                cur_x = x[j].unsqueeze(0)
                # Repeat it as a batch (use a new name so the loop variable x is not clobbered)
                x_rep = cur_x.expand(batch, *cur_x.size()[1:]).contiguous()
                x_rep = x_rep.view(batch, -1)
                x_tilde, kl_div = model(x_rep)
                rec = reconstruction_loss(x_tilde, x_rep, average=False)
                a_tmp = (rec + kl_div)
                a.append(- a_tmp.cpu().data.numpy())
# calculate max
a = np.asarray(a)
a = np.reshape(a, (a.shape[0] * a.shape[1], 1))
likelihood_x = logsumexp(a)
likelihood_test.append(likelihood_x - np.log(len(a)))
likelihood_test = np.array(likelihood_test)
nll = - np.mean(likelihood_test)
# Compute the bits per dim (but irrelevant for binary data)
bpd = nll / (np.prod(nin) * np.log(2.))
return nll, bpd
```
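The `logsumexp` call above implements the importance-sampled log-likelihood estimate log((1/R) Σ exp(a_r)) = logsumexp(a) - log R in a numerically stable way; exponentiating large negative log-weights directly would underflow. A small self-contained check (values are illustrative):

```python
import numpy as np
from scipy.special import logsumexp

# R log-importance-weights, e.g. log p(x, z_r) - log q(z_r | x)
a = np.array([-120.0, -118.0, -119.5])

# Naive average: still fine in float64 here, but for a ~ -1000
# exp(a) would be exactly 0 and the log would return -inf
log_mean_naive = np.log(np.mean(np.exp(a)))

# Stable version, as used in evaluate_nll_bpd
log_mean_stable = logsumexp(a) - np.log(len(a))

print(log_mean_naive, log_mean_stable)
```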
Now we can evaluate our VAE model more formally as follows.
```
# Plot final loss
plt.figure()
plt.plot(losses_kld[:, 0].numpy());
# Evaluate log-likelihood and bits per dim
nll, _ = evaluate_nll_bpd(test_loader, model)
print('Negative Log-Likelihood : ' + str(nll))
```
### Limitations of VAEs - (**exercise**)
Although VAEs are extremely powerful tools, they still have some limitations. Here we list the three most important and known limitations (all of them are still debated and topics of active research).
1. **Blurry reconstructions.** As can be witnessed directly in the results of the previous vanilla VAE implementation, the reconstructions appear blurry. The precise origin of this phenomenon is still debated, but the proposed explanations are
1. The use of the KL regularization
2. High variance regions of the latent space
3. The reconstruction criterion (expectation)
4. The use of simplistic latent distributions
2. **Posterior collapse.** The previous *blurry reconstructions* issue can be mitigated by using a more powerful decoder. However, relying on a decoder with a large capacity causes the phenomenon of *posterior collapse* where the latent space becomes useless. A nice intuitive explanation can be found [here](https://ermongroup.github.io/blog/a-tutorial-on-mmd-variational-autoencoders/)
3. **Simplistic Gaussian approximation**. In the derivation of the VAE objective, recall that the KL divergence term needs to be computed analytically. This forces us to rely on quite simplistic distribution families, and the Gaussian family might be too simplistic to model real-world data
In the present tutorial, we show how normalizing flows can be used to largely solve the third limitation, while also addressing the first two problems. Indeed, we will see that normalizing flows lead to sharper reconstructions and also help prevent posterior collapse
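As a preview of the flow machinery, a single planar flow layer [1](#reference1) transforms a latent sample as f(z) = z + u * tanh(wᵀz + b), with log-determinant log|1 + ψ(z)ᵀu| where ψ(z) = (1 - tanh²(wᵀz + b)) w. The following is an illustrative numpy sketch (not the notebook's implementation), checked against a numerical Jacobian:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow step f(z) = z + u * tanh(w.z + b) and its log|det J|."""
    lin = z @ w + b                            # (batch,)
    f = z + np.outer(np.tanh(lin), u)          # (batch, d)
    psi = np.outer(1 - np.tanh(lin) ** 2, w)   # (batch, d)
    log_det = np.log(np.abs(1 + psi @ u))      # (batch,)
    return f, log_det
```

Stacking several such layers on top of the Gaussian posterior yields a much richer family of approximate posteriors while keeping the density tractable.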
<a id="improve"></a>
## Improving the quality of VAEs
As we discussed in the previous section, several known issues have been reported when using the vanilla VAE implementation. We listed some of the major issues as being
1. **Blurry reconstructions.**
2. **Posterior collapse.**
3. **Simplistic Gaussian approximation**.
Here, we discuss some recent developments that were proposed in the VAE literature and simple adjustments that can be made to (at least partly) alleviate these issues. Note, however, that more advanced proposals such as PixelVAE [5](#reference1) and VQ-VAE [6](#reference1) can lead to larger quality gains
### Reducing the blurriness of reconstructions
In this tutorial, we relied on extremely simple decoder functions to show how we could easily define VAEs and normalizing flows together. However, the capacity of the decoder directly influences the quality of the final reconstruction. Therefore, we can address this issue by using deeper networks and, since we are dealing with images, convolutional layers.
First, construct a more complex encoder and decoder
```
def construct_encoder_decoder_complex(nin, n_latent = 16, n_hidden = 512, n_params = 0, n_classes = 1):
# Encoder network
encoder = ...
# Decoder network
decoder = ...
return encoder, decoder
```
### Preventing posterior collapse with Wasserstein-VAE-MMD (InfoVAE)
As we discussed earlier, the reason behind posterior collapse mostly relates to the KL divergence criterion (a nice intuitive explanation can be found [here](https://ermongroup.github.io/blog/a-tutorial-on-mmd-variational-autoencoders/)). This can be mitigated by relying on a different criterion, such as regularizing the latent distribution with the *Maximum Mean Discrepancy* (MMD) instead of the KL divergence. This model was independently proposed as the *InfoVAE* and later also as the *Wasserstein-VAE*.
Here we provide a simple implementation of the `InfoVAEMMD` class based on our previous implementations.
```
def compute_kernel(x, y):
return ...
def compute_mmd(x, y):
return ...
class InfoVAEMMD(VAE):
def __init__(self, encoder, decoder):
super(InfoVAEMMD, self).__init__(encoder, decoder)
def latent(self, x, z_params):
return ...
```
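One possible way to fill in the kernel and MMD stubs is the following numpy sketch (illustrative: an RBF kernel and the biased MMD² estimator; the notebook's actual TensorFlow version may differ):

```python
import numpy as np

def compute_kernel(x, y, sigma=1.0):
    """RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq_dist = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def compute_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate between samples x ~ p and y ~ q."""
    k_xx = compute_kernel(x, x, sigma)
    k_yy = compute_kernel(y, y, sigma)
    k_xy = compute_kernel(x, y, sigma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()
```

In an InfoVAE-style model, `compute_mmd` is evaluated between a batch of posterior samples and a batch drawn from the prior; the kernel bandwidth `sigma` is a tuning choice.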
### Putting it all together
Here we combine all these ideas (except for the MMD, which is not adequate as the flow definition already regularizes the latent space without the KL divergence) to perform a more advanced optimization of the dataset. Hence, we will rely on the complex encoder and decoder with gated convolutions, the multinomial loss and the normalizing flows in order to improve the overall quality of our reconstructions.
```
# Size of latent space
n_latent = 16
# Number of hidden units
n_hidden = 256
# Rely on Bernoulli or multinomial
num_classes = 128
# Construct encoder and decoder
encoder, decoder = ...
# Create VAE or (InfoVAEMMD - WAE) model
model_flow_p = ...
# Create optimizer algorithm
optimizer = ...
# Add learning rate scheduler
scheduler = ...
# Launch our optimization
losses_flow_param = ...
```
*NB*: It seems that the multinomial version has a hard time converging. Although I only let this run for 200 epochs and only on a subsample of 5000 examples, it might need more time, but this might also come from a mistake somewhere in my code... If you spot something odd, please let me know :)
### References
<a id="reference1"></a>
[1] Rezende, Danilo Jimenez, and Shakir Mohamed. "Variational inference with normalizing flows." _arXiv preprint arXiv:1505.05770_ (2015). [link](http://arxiv.org/pdf/1505.05770)
[2] Kingma, Diederik P., Tim Salimans, and Max Welling. "Improving Variational Inference with Inverse Autoregressive Flow." _arXiv preprint arXiv:1606.04934_ (2016). [link](https://arxiv.org/abs/1606.04934)
[3] Kingma, D. P., & Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. (2013). [link](https://arxiv.org/pdf/1312.6114)
[4] Rezende, D. J., Mohamed, S., & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082. (2014). [link](https://arxiv.org/pdf/1401.4082)
[5] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., & Courville, A. (2016). Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013. [link](https://arxiv.org/pdf/1611.05013)
[6] Van den Oord, A., & Vinyals, O. (2017). Neural discrete representation learning. In NIPS 2017 (pp. 6306-6315). [link](http://papers.nips.cc/paper/7210-neural-discrete-representation-learning.pdf)
### Inspirations and resources
https://blog.evjang.com/2018/01/nf1.html
https://github.com/ex4sperans/variational-inference-with-normalizing-flows
https://akosiorek.github.io/ml/2018/04/03/norm_flows.html
https://github.com/abdulfatir/normalizing-flows
https://github.com/riannevdberg/sylvester-flows
# Tutorial 3 - Boosting Search via Symmetry Breaking, Implied Constraints, Randomisation, and Restarts
**Please do not read this until you have fully finished the first 2 tutorials**
Congratulations! You are now a level-one constraint programmer: you know the basics of how to model a problem, how to display solutions, how to evaluate models, and how to choose a good branching strategy!! **I'm so proud of you!**
In this tutorial we slowly dive into advanced techniques. We also start to use arithmetic constraints and solve optimisation problems.
```
from config import setup
setup()
```
## Golomb ruler
Your goal is to place $N$ marks on a ruler, such that no two marks are at the same distance and the total length of the ruler (the position of the last mark) is minimized.
<div class="row" style="margin-top: 10px">
<img src="display/images/Golomb_Ruler-4.svg" style="display: block; margin: auto; width: 400px;" />
<p style="margin: auto; margin-top: 10px; text-align: center;">Golomb ruler of order 4 and length 6. This ruler is both optimal and perfect.</p>
</div>
Golomb rulers are used in information theory to design error-correcting codes and in telecommunications to avoid interference during radio communications. You can read about them here https://en.wikipedia.org/wiki/Golomb_ruler
**In the rest of this tutorial (except the last part), please use the following parameter with the solve method:**
```
SearchType= 'DepthFirst'
```
Also, in order to control the level of filtering (arc consistency, bound consistency, forward checking, etc.), CP Optimizer offers a parameter called `DefaultInferenceLevel`: http://ibmdecisionoptimization.github.io/docplex-doc/cp/docplex.cp.parameters.py.html?highlight=defaultinferencelevel#docplex.cp.parameters.CpoParameters.DefaultInferenceLevel
In the rest of this tutorial, you are required to test all three possibilities
```
DefaultInferenceLevel=Low
DefaultInferenceLevel=Medium
DefaultInferenceLevel=Extended
```
After a while, if you find one that is particularly efficient (in runtime), you can use it for the rest of the tutorial.
Create a model for the decision version of this problem. That is, given $n$ marks, and a ruler of size $m$, place the $n$ markers such that no two markers are at the same distance.
You are free to use any constraint you want. However, you must declare and use the minimum number of constraints (**NOT A SINGLE UNNECESSARY CONSTRAINT**).
Note that for $N$ marks, a ruler of length $2^{N-1}$ can be found (I let you figure out why).
Write a function decision_model(n,m) that builds and returns the corresponding model.
Solve the problem for n=4, m=6. Then try different values of (n,m) (but don't waste too much time).
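Before writing the CP model, it can help to sanity-check the decision problem with a tiny brute-force search (pure Python, illustrative only; for (n, m) = (4, 6) it recovers the two perfect rulers of order 4):

```python
from itertools import combinations

def golomb_solutions(n, m):
    """All placements of n marks on a ruler of length m with pairwise-distinct distances."""
    sols = []
    for marks in combinations(range(m + 1), n):
        # All pairwise distances between marks
        dists = [b - a for a, b in combinations(marks, 2)]
        if len(set(dists)) == len(dists):
            sols.append(marks)
    return sols

print(golomb_solutions(4, 6))
```

This enumeration explodes combinatorially, which is exactly why a CP solver with good filtering and branching is needed for larger $n$.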
You can display the solution using:
```
from display import golomb as display_golomb
display_golomb([sol[m] for m in marks])
```
Print and display all the solutions for (n,m) = (4,6) and (4,7).
Write a function basic_optimisation_model(n) that builds and returns the corresponding model for the
optimisation problem. Note that an optimisation function can be seen as a variable. In order to specify the variable to optimise, we can simply use:
```
model.add(model.minimize(myvariable))
```
or
```
model.add(model.maximize(myvariable))
```
Solve the optimisation problem for $N = 6..10$ and display the solution.
# Symmetry Breaking
In combinatorial optimisation, two (partial) solutions are called symmetric if we can find a transformation from one to the other.
Consider our Golomb ruler problem. Given any solution for the marks variables, if the first mark is not at position $0$, we can always shift everything to the left so that it starts from $0$ and still have a solution.
Constraint programming is extremely flexible to handle symmetries since they can be declared as constraints.
In the case of the above symmetry, we can simply add
```
model.add(marks[0] == 0)
```
This problem has another symmetry, can you find it? In order to help you, display the solution for n=4 and m=6 for the decision problem. You should find 2 solutions that are essentially the same. Can you find the symmetry? How can we model this symmetry as a constraint?
Write a new function nosymmetry_optimisation_model(n) that builds a new model that avoids the two symmetries we found so far.
Compare nosymmetry_optimisation_model and basic_optimisation_model for different values of $n$ (you decide the values of $n$). Plot the runtime and the search tree size
What's your impression about symmetries?
## Implied Constraints
An implied constraint is one that can be deduced from the original constraints of the problem.
For instance, if we have $a<b $ and $b<c$, one can infer that $a<c$.
Such constraints (also called redundant constraints) can help the solver further prune the search tree.
In our problem there is an implied constraint. Can you find it? Please check with one of the supervisors.
Write a new function nosymmetry2_optimisation_model(n) that adds the implied constraint to the nosymmetry_optimisation_model(n) and returns the new model
Compare nosymmetry2_optimisation_model and nosymmetry_optimisation_model
# Randomisation and Restarts
Declare two search strategies: One that uses a lexicographical order on both variables and values,
and the other using an impact-based choice on the variables with a random value selection.
Run the two strategies using the nosymmetry2_optimisation_model for different values of $n$
### The magic of restarts
Combinatorial search usually exhibits a pathological runtime behaviour called the **heavy-tailed phenomenon**.
That is, at any node of the search tree, there is a non-negligible probability that the time needed to explore the current subtree is heavier-tailed than
an exponential distribution (you can read about it here https://aaai.org/Papers/AAAI/1998/AAAI98-061.pdf).
A simple solution to deal with such a bad behaviour is to restart search from time to time.
CPOptimizer offers this choice by using the parameter:
```
SearchType= 'Restart'
```
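For intuition on how restart limits are typically scheduled, here is the classic Luby sequence used by many solvers (an illustrative sketch, not necessarily CP Optimizer's internal policy): the i-th restart is allowed a budget proportional to the i-th Luby term, which provably loses only a logarithmic factor against the best fixed cutoff.

```python
def luby(i):
    """i-th term (1-indexed) of the Luby restart sequence: 1, 1, 2, 1, 1, 2, 4, ..."""
    k = 1
    # Find the smallest k with 2^k - 1 >= i
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    # Otherwise recurse on the position within the previous block
    return luby(i - (1 << (k - 1)) + 1)

print([luby(i) for i in range(1, 16)])
```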
Using a restart search, evaluate the two strategies mentioned above using the nosymmetry2_optimisation_model for different values of $n$. What do you think?
What is the maximum value of $n$ for which you can solve this problem? Use all your techniques!
```
### WARNING : This block takes a lot of time to execute
# Try a lot of configurations, for instance:
```
What did you learn today?
```
!wget https://datahack-prod.s3.amazonaws.com/train_file/train_LZdllcl.csv -O train.csv
!wget https://datahack-prod.s3.amazonaws.com/test_file/test_2umaH9m.csv -O test.csv
!wget https://datahack-prod.s3.amazonaws.com/sample_submission/sample_submission_M0L0uXE.csv -O sample_submission.csv
# Import the required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Read the train and test data
train=pd.read_csv("train.csv")
train.drop('employee_id',inplace=True,axis = 1)
test=pd.read_csv("test.csv")
# Check the variables in train data
train.columns
# Print datatype of each variable
train.dtypes
# Dimension of the train dataset
train.shape
# Print the head of train dataset
train.head()
# Unique values in each variable of train dataset
train.nunique()
```
### Univariate Analysis
#### Target Variable
```
train['is_promoted'].value_counts(normalize=True)
# Around 91% of employees were not promoted
# Highly imbalanced dataset
```
#### Categorical Independent Variables
```
plt.figure(1)
plt.subplot(221)
train['department'].value_counts(normalize=True).plot.bar(figsize=(20,10), title= 'Department')
plt.subplot(222)
train['awards_won?'].value_counts(normalize=True).plot.bar(title= 'Awards won')
plt.subplot(223)
train['education'].value_counts(normalize=True).plot.bar(title= 'Education')
plt.subplot(224)
train['gender'].value_counts(normalize=True).plot.bar(title= 'Gender')
plt.show()
# Bar plots of the department, awards won, education, and gender distributions
train['KPIs_met >80%'].value_counts(normalize=True).plot.bar(title= 'KPI met greater than 80')
plt.figure(1)
plt.subplot(221)
train['region'].value_counts(normalize=True).plot.bar(figsize=(20,10), title= 'Region')
plt.subplot(222)
train['recruitment_channel'].value_counts(normalize=True).plot.bar(title='Recruitment Channels')
plt.subplot(223)
train['no_of_trainings'].value_counts(normalize=True).plot.bar(title= 'No of Trainings')
plt.subplot(224)
train['previous_year_rating'].value_counts(normalize=True).plot.bar(title= 'Previous year ratings')
plt.show()
# Bar plots of the region, recruitment channel, number of trainings, and previous year rating distributions
```
#### Numerical Independent Variables
```
sns.histplot(train['age'], kde=True);
# Distribution of employee age
sns.histplot(train['length_of_service'], kde=True);
sns.histplot(train['avg_training_score'], kde=True);
```
### Bivariate Analysis
```
# Correlation between numerical variables
matrix = train.corr()
f, ax = plt.subplots(figsize=(9, 6))
sns.heatmap(matrix, vmax=.8, square=True, cmap="BuPu");
# Not much correlation between the variables
# department vs is_promoted
plt.figure(figsize=(12,4))
sns.barplot(x=train['department'], y=train['is_promoted'])
plt.figure(figsize=(20,8))
# region vs is_promoted
sns.barplot(x=train['region'], y=train['is_promoted'])
# Promotion rates vary noticeably across regions
# recruitment_channel vs is_promoted
sns.barplot(x=train['recruitment_channel'], y=train['is_promoted'])
# no_of_trainings vs is_promoted
sns.barplot(x=train['no_of_trainings'], y=train['is_promoted'])
# previous_year_rating vs is_promoted
sns.barplot(x=train['previous_year_rating'], y=train['is_promoted'])
# Higher previous year ratings are associated with higher promotion rates
# education vs is_promoted
plt.figure(figsize=(12,4))
sns.barplot(x=train['education'], y=train['is_promoted'])
plt.figure(figsize=(20,8))
# length_of_service vs is_promoted
sns.barplot(x=train['length_of_service'], y=train['is_promoted'])
# KPIs_met >80% vs is_promoted
sns.barplot(x=train['KPIs_met >80%'], y=train['is_promoted'])
# Employees who met more than 80% of their KPIs are promoted far more often
# awards_won? vs is_promoted
sns.barplot(x=train['awards_won?'], y=train['is_promoted'])
# Employees who won awards show a much higher promotion rate
```
### Missing Values Treatment
```
# Check the number of missing values in each variable
train.isnull().sum()
# The education and previous_year_rating variables have missing values
test = pd.read_csv('test.csv')
test.drop('employee_id',inplace=True,axis = 1)
test.head()
test['education'].fillna('other',inplace=True)
test['previous_year_rating'].fillna(99,inplace=True)
train['education'].fillna('other',inplace=True)
train['previous_year_rating'].fillna(99,inplace=True)
```
### Gradient Boosting
```
train.head()
# Save target variable in separate dataset
X = train.drop('is_promoted',axis=1)
y = train.is_promoted
test.head()
# Apply dummies to the dataset
X=pd.get_dummies(X)
test=pd.get_dummies(test)
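# NOTE (added comment): pd.get_dummies applied to train and test separately can
# produce mismatched columns when a category appears in only one of the files.
# Aligning the two frames guards against this: columns missing from test are
# added and filled with 0, and test-only columns are dropped.
X, test = X.align(test, join='left', axis=1, fill_value=0)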
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import metrics  # additional sklearn metrics
from sklearn.model_selection import cross_val_score, GridSearchCV  # performing grid search
#same function as xgboost tuning one!
def modelfit(alg, dtrain, predictors, performCV=True, printFeatureImportance=True, cv_folds=5):
    #Fit the algorithm on the data
    alg.fit(dtrain[predictors],y)
    #Predict training set:
    dtrain_predictions = alg.predict(dtrain[predictors])
    dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
    #Perform cross-validation:
    if performCV:
        cv_score = cross_val_score(alg, dtrain[predictors], y, cv=cv_folds, scoring='f1')
#Print model report:
print("\nModel Report")
print("F1 Score :",metrics.f1_score(y, dtrain_predictions))
if performCV:
print("CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" % (np.mean(cv_score),np.std(cv_score),np.min(cv_score),np.max(cv_score)))
#Print Feature Importance:
if printFeatureImportance:
feat_imp = pd.Series(alg.feature_importances_, predictors).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
#Choose all predictors except target & IDcols
predictors = [x for x in X.columns]
gbm0 = GradientBoostingClassifier(random_state=42,verbose = 1)
modelfit(gbm0,X, predictors)
param_test1 = {'n_estimators':np.arange(180,400,20)}
gsearch1 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1,verbose = 1, min_samples_split=500,min_samples_leaf=50,max_depth=5,max_features='sqrt',subsample=0.8,random_state=10),
                        param_grid = param_test1, scoring='f1', n_jobs=-1, cv=3, verbose=1)
gsearch1.fit(X,y)
gsearch1.best_params_, gsearch1.best_score_
#tuning max depth and min samples split
param_test2 = {'max_depth':np.arange(5,10,2),'min_samples_split':np.arange(500,1001,100)}
gsearch2 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1,verbose = 1, n_estimators=600, max_features='sqrt', subsample=0.8, random_state=10),
                        param_grid = param_test2, scoring='f1', n_jobs=-1, cv=3, verbose=1)
gsearch2.fit(X,y)
gsearch2.best_params_, gsearch2.best_score_
#Tuning min_samples_leaf after updating the latest hyperparameter values i.e max_depth and min_samples_split
param_test3 = {'min_samples_leaf':np.arange(50,100,10)}
gsearch3 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=600,min_samples_split=600,max_depth=7,max_features='sqrt',verbose = 1, subsample=0.8, random_state=10),
                        param_grid = param_test3, scoring='f1', n_jobs=-1, cv=3, verbose=1)
gsearch3.fit(X,y)
gsearch3.best_params_, gsearch3.best_score_
param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, verbose = 1 , n_estimators=600,max_depth=7,min_samples_split=600, min_samples_leaf=60, subsample=0.8, random_state=10,max_features=7),
                        param_grid = param_test5, scoring='f1', n_jobs=-1, cv=3, verbose=1)
gsearch5.fit(X,y)
gsearch5.best_params_, gsearch5.best_score_
gbm_tuned_1 = GradientBoostingClassifier(learning_rate=0.1, n_estimators=600,max_depth=7, min_samples_split=600,min_samples_leaf=60, subsample=0.8, random_state=10, max_features=7,verbose=1 )
modelfit(gbm_tuned_1,X,predictors)
pred = gbm_tuned_1.predict(test)
# Read the submission file
submission=pd.read_csv("sample_submission.csv")
submission.head()
# Fill the is_promoted variable with the predictions
submission['is_promoted']=pred
submission['is_promoted'] = submission['is_promoted'].astype(np.int64)
submission.head()
submission['is_promoted'].value_counts()
# Converting the submission file to csv format
submission.to_csv('logistic_submission.csv', index=False)
```
score on leaderboard - 0.71145
[View in Colaboratory](https://colab.research.google.com/github/3catz/DeepLearning-NLP/blob/master/Time_Series_Forecasting_with_EMD_and_Fully_Convolutional_Neural_Networks_on_the_IRX_data_set.ipynb)
# TIME SERIES FORECASTING -- using Empirical Mode Decomposition with Fully Convolutional Networks for One-step ahead forecasting on the IRX time series.
# Summary
A noisy time series is additively decomposed into Intrinsic Mode Functions (IMFs)--oscillating, orthogonal basis functions--using the Empirical Mode Decomposition method pioneered by Norden Huang. The IMF components are then used as features for a deep convolutional neural network, which can "learn" the decomposition--divide and conquer--and thereby improve forecasting performance, offering forecasts not only for the series but also for the IMF components going into the future. This allows us to focus on forecasting physically significant or interesting IMFs. Note: this is additive, not multiplicative, decomposition, which means that you consider the time series to be the sum of various components rather than the product of various component functions. Which is the better model is something you have to explore; it helps to have domain knowledge, though more advanced forms of spectral analysis can also be used to glean insights in this regard.
In this notebook, I demonstrate that using the IMFs as features alongside the original time series can do very well in out-of-sample forecasting, in this case forecasting 1 step ahead. We use a lookback window of 10 lags of the signal as well as the IMFs to predict 1 step ahead in the future. Using the R2 coefficient of determination, we see that the model accounts for over 98% of the variation on an out-of-sample forecast.
# Data
**IRX opening prices**
IRX is the stock ticker for the [13 Week Treasury Bill](https://finance.yahoo.com/quote/%5EIRX/history/).
I downloaded the data from [Comp-engine.org, a self-organizing database of time series](https://www.comp-engine.org/#!visualize/25c6285e-3872-11e8-8680-0242ac120002) fully accessible to the public.
# Architecture and Process
1. 4 Conv layers, all from the original input, each with 128 hidden units, filter size of 3, dilation rates exponential powers of 2.
2. Concatenate these 4 layers with the original input--no adding or multiplying, just concatenate on axis = -1.
3. Deconv with hidden units equal to number of IMF-components, in this case 11.
4. Add the predicted IMF-components together to reconstruct the signal, which is your yhat prediction for a step ahead.
5. Compare with ground truth to see how you did.
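To make step 1 concrete, here is what a single causal, dilated 1-D convolution filter computes (a numpy sketch of the per-filter operation of such a conv layer; names are illustrative):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """y[t] = sum_k w[k] * x[t - k * dilation], with implicit zero padding on the left.

    Stacking layers with dilation rates 1, 2, 4, 8 (powers of 2) grows the
    receptive field exponentially while the filter size stays fixed.
    """
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for k, wk in enumerate(w):
            idx = t - k * dilation
            if idx >= 0:
                y[t] += wk * x[idx]
    return y
```

A framework Conv layer applies this operation once per filter (128 times in the architecture above) and adds a bias and nonlinearity.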
```
!pip install pyhht
!pip install PeakUtils
from sklearn.preprocessing import MinMaxScaler, RobustScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
#import pandas_datareader.data as web
from pandas import Series
from pandas import DataFrame
from pandas import concat
import matplotlib.pyplot as plt
import os
from scipy.integrate import odeint
#keras
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import *
from keras import backend as K
from keras.engine.topology import Layer
import peakutils
#!pip install pyramid-arima
#from pyramid.arima import auto_arima
```
# Utilities: series to supervised
```
def series_to_supervised(data, n_in, n_out, dropnan=True):
n_vars = 1 if type(data) is list else data.shape[1]
df = DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
```
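As a quick sanity check of the windowing logic, here is a compact standalone variant (illustrative, with `n_out` fixed to 1) applied to a toy series:

```python
import numpy as np
from pandas import DataFrame, concat

# Compact copy of the series_to_supervised idea (n_out = 1) so this snippet runs standalone
def to_supervised(data, n_in):
    df = DataFrame(data)
    # Shifted copies: t-n_in, ..., t-1, t
    cols = [df.shift(i) for i in range(n_in, -1, -1)]
    agg = concat(cols, axis=1)
    agg.columns = ['t-%d' % i for i in range(n_in, 0, -1)] + ['t']
    # Drop the first n_in rows, which contain NaNs from the shifting
    return agg.dropna()

frame = to_supervised([10, 20, 30, 40, 50], n_in=2)
print(frame)
```

Each row is one training example: the lag columns are the inputs and the `t` column is the one-step-ahead target.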
# Loading Data
```
from google.colab import files
files.upload()
import numpy as np
data = np.fromfile("yourfilehere.dat", sep = "\n")
print(data)
len(data)
import numpy as np
data = np.genfromtxt("FLYahooop_IRX.csv", delimiter = ","); data = np.asarray(data); data.shape
#Plot of Time Series
from scipy.interpolate import interp1d
plt.figure(figsize=(20,6))
plt.plot(data)
plt.tight_layout()
plt.xlim([0,len(data)])
plt.show()
#Scale the Data to -1,1
scaler = MinMaxScaler(feature_range = (-1,1))
scaled_data = scaler.fit_transform(data.reshape(-1,1))
scaled_data.shape
scaled_data = np.squeeze(scaled_data)
scaled_data.shape
scaled_data = np.transpose(scaled_data)
# before you do the EMD, cut out the out of sample part so that the EMDs are not constructed with those future values and information contained within them
in_sample = scaled_data[:-1000]; out_sample = scaled_data[-1000:]
print(in_sample.shape)
```
# Empirical Mode Decomposition
From Yang et al. (2015), a summary:
**Empirical mode decomposition (EMD)** technique to decompose the nonstationary signal into a series of intrinsic mode functions (IMFs) [9–11]. This ability makes HHT competitive in processing various composite signals [12–14]. With HHT, complex signals can be decomposed into multiple single-frequency signals that can further be processed by intrinsic mode function of EMD. *After the nonstationary signals have been decomposed into IMFs through EMD, these signals can easily be obtained by Hilbert transform of each mode function*. By doing so, researchers can obtain the instantaneous frequency and amplitude of each IMF. With the Hilbert spectrum and Hilbert marginal spectrum of IMFs, people can accurately get the joint distribution of energy with frequency and time and further predict whether IoT equipment is normal or not. Compared with FFT and VT, HHT is a strong adaptive time frequency analysis method.
```
from pyhht.emd import EMD
from pyhht.visualization import plot_imfs
decomposer1 = EMD(in_sample, maxiter = 10000)
imfs1 = decomposer1.decompose()
print("There are a total of %s IMFs" % len(imfs1))
#Plot the IMFs, from highest frequency to lowest. The last one should be a monotonic trend function. It is known as the residue,
#the irreducible trend left after the detrending of the EMD process.
for i in range(len(imfs1)):
fig, ax = plt.subplots(figsize=(25,2))
fig = plt.plot(imfs1[i])
plt.show()
import numpy as np
import pylab as plt
from scipy.signal import hilbert
#from PyEMD import EMD
def instant_phase(imfs):
"""Extract analytical signal through Hilbert Transform."""
analytic_signal = hilbert(imfs) # Apply Hilbert transform to each row
phase = np.unwrap(np.angle(analytic_signal)) # Compute angle between img and real
return phase
t = np.linspace(0,len(scaled_data),len(scaled_data))
dt = 1
# Extract instantaneous phases and frequencies using Hilbert transform
instant_phases = instant_phase(imfs1)
instant_freqs = np.diff(instant_phases)/(2*np.pi*dt)
# Create a figure consisting of 3 panels which from the top are the input signal, IMFs and instantaneous frequencies
fig, axes = plt.subplots(3, figsize=(20,18))
# The top panel shows the input signal
ax = axes[0]
ax.plot(t, scaled_data)
ax.set_ylabel("Amplitude [arb. u.]")
ax.set_title("Input signal Channel 1")
# The middle panel shows all IMFs
ax = axes[1]
for num, imf in enumerate(imfs1):
ax.plot(t, imf, label='IMF %s' %(num+1))
# Label the figure
ax.legend()
ax.set_ylabel("Amplitude [arb. u.]")
ax.set_title("IMFs")
# The bottom panel shows all instantaneous frequencies
ax = axes[2]
for num, instant_freq in enumerate(instant_freqs):
ax.plot(t[:-1], instant_freq, label='IMF %s'%(num+1))
# Label the figure
ax.legend()
ax.set_xlabel("Time [s]")
ax.set_ylabel("Inst. Freq. [Hz]")
ax.set_title("Huang-Hilbert Transform")
plt.tight_layout()
plt.savefig('hht_example', dpi=120)
plt.show()
```
# Creating Datasets
Raw data, using a certain number of lags; most of my experimentation has been with either 10 or 20.
```
in_sample = in_sample.reshape(-1,1); print(in_sample.shape)
lookback = 10
data_f = series_to_supervised(in_sample, n_in = lookback, n_out = 1, dropnan = True)
print(data_f.shape)
data_f = np.asarray(data_f)
Xr = data_f[:,:-1]
Y = data_f[:,-1]
print(Xr.shape, Y.shape)
```
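`series_to_supervised` is used above but defined earlier in the notebook (not shown in this excerpt). A minimal pandas sketch of the usual sliding-window helper with the same signature, in case you need a stand-in:

```python
import numpy as np
import pandas as pd

def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    """Frame a (possibly multivariate) series as a supervised-learning table:
    columns are the n_in lagged values followed by the n_out current/future values."""
    df = pd.DataFrame(data)
    cols = []
    for i in range(n_in, 0, -1):   # input lags t-n_in ... t-1
        cols.append(df.shift(i))
    for i in range(0, n_out):      # outputs t ... t+n_out-1
        cols.append(df.shift(-i))
    agg = pd.concat(cols, axis=1)
    if dropnan:
        agg.dropna(inplace=True)
    return agg

# Tiny demo: 6 values, 2 lags, 1 output column -> 4 usable rows of 3 columns
demo = series_to_supervised(np.arange(6).reshape(-1, 1), n_in=2, n_out=1)
print(demo.shape)
```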
# Use the IMFs (time series of the same length as the original signal) as features for a convolutional/recurrent network.
```
imfs1.shape
imfs1 = np.transpose(imfs1, (1,0)); imfs1.shape
imf_df = series_to_supervised(imfs1, n_in = lookback, n_out = 1, dropnan = True)
imf_df = np.expand_dims(imf_df, axis = 1)
print(imf_df.shape)
imf_df = np.reshape(imf_df, (imf_df.shape[0], (lookback +1), imfs1.shape[-1]))
print(imf_df.shape)
targets = imf_df[:,-1,:]
print(targets.shape)
print(Xr.shape)
#so reshape everything properly
input_data = np.reshape(Xr, (targets.shape[0],1,lookback))
targets = np.reshape(targets,(targets.shape[0],1,targets.shape[1]))
print(input_data.shape, targets.shape)
#test Y values--completely out of sample. The calculation of the IMFs
#was not influenced by these values. No information contamination from future to past.
out_df = series_to_supervised(out_sample.reshape(-1,1), n_in = lookback, n_out = 1, dropnan = True)
print(out_df.shape); out_df = np.asarray(out_df)
testY = out_df[:,-1]
testX = out_df[:,:-1]
testX = np.expand_dims(testX, axis = 1)
print(testX.shape,testY.shape)
```
# Partial autocorrelation
If you were doing SARIMA analysis, you would want to know whether this series is autoregressive and to what extent. This helps when choosing a good lag for prediction, that is, how many past values you need to accurately predict a future value.
```
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
fig, axes = plt.subplots(2, figsize=(20,6))
fig1 = plot_acf(scaled_data,lags = 60, ax = axes[0])
fig2 = plot_pacf(scaled_data, lags = 100, ax = axes[1])
plt.show()
```
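The PACF plot can also be read programmatically: a lag is commonly treated as significant when its partial autocorrelation falls outside the approximate plus/minus 1.96/sqrt(N) confidence band. A small sketch, assuming the PACF values have already been computed (the values below are illustrative, not from this series):

```python
import numpy as np

def significant_lags(pacf_vals, n_obs, alpha_z=1.96):
    """Return lags whose partial autocorrelation lies outside the
    +/- z/sqrt(N) band (lag 0 is always 1 and is skipped)."""
    threshold = alpha_z / np.sqrt(n_obs)
    return [lag for lag, v in enumerate(pacf_vals) if lag > 0 and abs(v) > threshold]

# Hypothetical PACF values for a series of 400 observations:
pacf_vals = np.array([1.0, 0.85, 0.40, 0.08, 0.02, -0.12])
print(significant_lags(pacf_vals, n_obs=400))
```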
# Network Architecture and Model fitting
```
from keras.models import Model
from keras.layers import Input, Conv1D, Dropout, Activation, BatchNormalization, concatenate
from keras.layers import ConvLSTM2D, LeakyReLU
from keras.layers.advanced_activations import *
from keras.regularizers import l1, l2
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from sklearn.metrics import r2_score
import keras.backend as K
np.random.seed(2018) # inputs are (1, lookback) and outputs are (1 time step, n_imfs features)
def convs(x, n, f, rate, bn = False):
    x = Conv1D(n, f, padding = "causal", dilation_rate = rate, activation = "tanh")(x)
    if bn:
        x = BatchNormalization()(x)
    return x
inputs = Input(shape = (1, lookback))
x = convs(x = inputs, n = 128, f = 3, rate = 2, bn = False)
y = convs(x = inputs, n = 128, f = 3, rate = 4, bn = False)
u = convs(x = inputs, n = 128, f = 3, rate = 8, bn = False)
v = convs(x = inputs, n = 128, f = 3, rate = 16, bn = False)
z = concatenate([inputs, x, y, u, v], axis = -1)
z = Activation("tanh")(z)
z = Dropout(0.3)(z)
predictions = Conv1D(11, 3, padding = "causal", dilation_rate = 1)(z)
model = Model(inputs = inputs, outputs = predictions)
opt = Adam(lr = 1e-3, clipnorm = 1.)
reduce_lr = ReduceLROnPlateau(monitor='loss', factor = 0.9, patience = 3, min_lr = 1e-5, verbose = 1)
checkpointer = ModelCheckpoint(filepath = "timeseries_weights.hdf5", verbose = 1, save_best_only = True)
early = EarlyStopping(monitor = 'loss', min_delta = 1e-4, patience = 10, verbose = 1)
model.compile(optimizer=opt, loss='mse', metrics = [])
model.summary()
history = model.fit(input_data, targets,
epochs = 20,
batch_size = 128,
verbose = 1,
#validation_data = (validX, validY),
callbacks = [reduce_lr, early],
shuffle = False)
```
```
preds = model.predict(testX, batch_size = 1)
summed = np.sum(preds, axis = -1); print(summed.shape)
test_preds = summed[:,0]
plt.plot(test_preds)
```
# R2 analysis
In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variance in the dependent variable that is predictable from the independent variable(s).
It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.[1][2][3]
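A minimal numpy check of the definition, computing 1 - SS_res/SS_tot by hand on a small example (the values are chosen only for illustration):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(round(r2(y_true, y_pred), 4))
```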
```
print("Final R2 Score is: {}".format(r2_score(testY, test_preds)))
fig = plt.figure(figsize = (20,6))
fig = plt.plot(test_preds, label = "PREDICTIONS")
fig = plt.plot(testY, label = "TRUE DATA")
plt.xlim([0,990])
plt.legend()
plt.show()
plt.clf()
plt.cla()
plt.close()
```
<a href="https://colab.research.google.com/github/mizoru/blog/blob/master/2022-05-24-thunder-speech-pronunciation-trainer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Finetuning a pretrained QuartzNet on TIMIT
using [thunder-speech](https://github.com/scart97/thunder-speech)
I talk more about this project [here on Twitter](https://twitter.com/Irmuzy/status/1529087355377836032)
```
# hide
!pip install thunder-speech wandb
```
Cloning the repository that contains `.csv`'s with processed labels and filepaths, courtesy of my coursemates.
```
!git clone https://github.com/mizoru/pronunciation-trainer.git
```
### Getting the data and the imports ready
```
from kaggle import api
api.dataset_download_files('mfekadu/darpa-timit-acousticphonetic-continuous-speech')
import zipfile
archive = zipfile.ZipFile('darpa-timit-acousticphonetic-continuous-speech.zip')
archive.extractall()
```
This dataset is going to be used as noise
```
api.dataset_download_files('chrisfilo/urbansound8k')
import zipfile
archive = zipfile.ZipFile('urbansound8k.zip')
archive.extractall('data')
import thunder
from thunder.callbacks import FinetuneEncoderDecoder
from thunder.finetune import FinetuneCTCModule
from thunder.data.dataset import BaseSpeechDataset
from thunder.data.datamodule import BaseDataModule
from thunder.blocks import conv1d_decoder
from thunder.quartznet.compatibility import load_quartznet_checkpoint
from typing import Any, List, Sequence, Tuple, Union
import torch
from torch import Tensor, nn
from thunder.registry import load_pretrained
from thunder.quartznet.compatibility import QuartznetCheckpoint
from pathlib import Path
import pandas as pd
import librosa
import numpy as np
import torchaudio
import pytorch_lightning as pl
from math import ceil
from IPython.display import Audio
labels = pd.read_csv('pronunciation-trainer/dataDS.csv')
noise_files = pd.read_csv('data/UrbanSound8K.csv')
noise_files = list('data/fold1/' + noise_files[noise_files.fold==1].slice_file_name)
```
### Setting up Dataset and DataModule for training
The commented-out code shows the transforms I tried.
```
class TimitDataset(BaseSpeechDataset):
def __init__(
self, items: Sequence, force_mono: bool = True, sample_rate: int = 16000,
time_stretch = None, volume = None, pitch = None, noise_files = None
# 0.2, 0.2, 2
):
super().__init__(items, force_mono, sample_rate)
self.librosa_transforms = bool(time_stretch)
self.time_stretch = time_stretch
self.volume = volume
self.pitch = pitch
self.noise_files = noise_files
def open_audio(self, item) -> Tuple[Tensor, int]:
audio,sr = self.loader.open_audio(item.Path)
# adding noise
if self.noise_files:
idx = int(torch.randint(0, len(self.noise_files), (1,)))
noise = self.loader(self.noise_files[idx])
            # this bit of code I got from a course; it gets the loudness ratio right
            noise_level = torch.rand(1) * 40  # target noise level in dB, from 0 to 40
            noise_energy = torch.norm(noise)
            audio_energy = torch.norm(audio)
            alpha = (audio_energy / noise_energy) * torch.pow(10, -noise_level / 20)
            # repeat the noise as many times as we need
if noise.shape[1] < audio.shape[1]:
noise = torch.cat([noise] * ceil(audio.shape[1] / noise.shape[1]), 1)
noise = noise[:,:audio.shape[1]]
audio = audio + alpha * noise
audio.clamp_(-1, 1)
# THIS TRANSFORM TAKES FOREVER
if self.pitch: # AND PROBABLY DOESN'T WORK
audio = torchaudio.functional.pitch_shift(audio, sr, self.pitch * torch.randn(1))
if self.volume: # this transform led to CUDA out of memory
audio = torchaudio.transforms.Vol(torch.abs(1+self.volume*torch.randn(1)))(audio)
# this works, but I didn't get better results with it, might need tuning
if self.librosa_transforms: audio = audio.numpy().squeeze()
if self.time_stretch:
audio = librosa.effects.time_stretch(audio, np.abs(1 + self.time_stretch * np.random.randn()))
if self.librosa_transforms: audio = torch.Tensor(audio).unsqueeze(0)
return audio, sr
def open_text(self, item) -> str:
return item.Transcription
def get_item(self, index: int) -> Any:
return self.items.iloc[index]
Audio(TimitDataset(labels, noise_files=noise_files)[159][0], rate=16000)
class TimitDataModule(BaseDataModule):

    def __init__(
        self,
        batch_size: int = 32,
        num_workers: int = 2,
        time_stretch = 0.2, volume = 0.2, pitch = 2, noise_files = None
    ):
        super().__init__(batch_size, num_workers)
        self.time_stretch = time_stretch
        self.volume = volume
        self.pitch = pitch
        self.noise_files = noise_files

    def get_dataset(self, split):
        if split != "train":
            return TimitDataset(labels[labels["is_valid"]], time_stretch = False, volume = False, pitch = False)
        else:
            return TimitDataset(labels[~labels["is_valid"]],
                                time_stretch = self.time_stretch, volume = self.volume, pitch = self.pitch,
                                noise_files = self.noise_files)

dm = TimitDataModule(batch_size=32, noise_files=noise_files)
```
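The noise-mixing code above scales the noise by alpha = (E_audio / E_noise) * 10**(-SNR_dB / 20), so the mixed signal-to-noise ratio should come out equal to the drawn noise level. A quick numpy check of that identity (the signals here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
noise = rng.standard_normal(16000)

target_snr_db = 20.0
# Same scaling rule as in the dataset's open_audio method
alpha = (np.linalg.norm(audio) / np.linalg.norm(noise)) * 10 ** (-target_snr_db / 20)
scaled_noise = alpha * noise

# Achieved SNR in dB: 20 * log10(||audio|| / ||alpha * noise||)
achieved = 20 * np.log10(np.linalg.norm(audio) / np.linalg.norm(scaled_noise))
print(round(achieved, 6))
```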
Getting the tokens from the data
```
whole = '.'.join([t for t in labels.Transcription])
tokens = list(set(whole.split('.')))
len(tokens)
def dot_tokenizer(s:str):
return s.split('.')
```
### Adapting pretrained weights
```
model = FinetuneCTCModule(QuartznetCheckpoint.QuartzNet15x5Base_En,
decoder_class = conv1d_decoder, tokens = tokens,
text_kwargs={'custom_tokenizer_function':dot_tokenizer})
```
These next five cells import the weights of the decoder from a trained model and adapt them into the new decoder.
`correspondences` is a dictionary that assigns every token in the new decoder the corresponding token in the trained decoder to take the model parameters from.
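To make the remapping concrete, here is a toy numpy illustration of copying rows of a weight matrix according to a correspondence map (all names and values below are hypothetical):

```python
import numpy as np

old_vocab = {'a': 0, 'b': 1, 'c': 2}        # token -> row index in the old decoder
new_vocab = {'ah': 0, 'buh': 1}             # token -> row index in the new decoder
correspondences = {'ah': 'a', 'buh': 'b'}   # new token -> old token to copy from

old_weight = np.arange(6, dtype=float).reshape(3, 2)   # one row per old token
new_weight = np.zeros((len(new_vocab), 2))
for new_tok, old_tok in correspondences.items():
    new_weight[new_vocab[new_tok]] = old_weight[old_vocab[old_tok]]
print(new_weight.tolist())
```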
```
correspnodences = {'s': 's', 'n': 'n', 'dʒ': 'j', 'd̚': 'd', 'w': 'w', 'b': 'b', 'g': 'g', 'm': 'm',
'l̩': 'l', 'f': 'f', 'l': 'l', 'j': 'y', 'k': 'k', 'eɪ': 'a', 'p̚': 'p', 'm̩': 'm',
'r': 'r', 't': 't', 'h': 'h', 'aʊ': 'o', 'n̩': 'n', 'i': 'e', 'b̚': 'b', 'p': 'p',
'k̚': 'k', 'd': 'd', 'u': 'o', 't̚': 't', 'z': 'z', 'aɪ': 'i', 'v': 'v', 'tʃ': 'c',
'oʊ': 'o', '<blank>' : '<blank>', 'ɝ' : 'e', 'ʉ' : 'o', 'ð' : 't', 'θ' : 't', 'ɚ' : 'e',
'ɦ' : 'h', 'ŋ' : 'n', 'ʔ' : 't', 'ʒ' : 's', 'ʊ' : 'o', 'ɾ' : 't', 'ɪ' : 'i', 'ə̥' : 'u',
'ɑ' : 'a', 'ə' : 'e', 'ɛ' : 'e', 'ɔɪ' : 'o', 'ɡ̚' : 'g', 'ɔ' : 'o', 'ɨ̞' : 'i', 'ŋ̩' : 'n',
'ʌ' : 'u', 'ɾ̃' : 'n', 'ʃ' : 's', 'æ' : 'a'}
def adapt_into_new_decoder(decoder, old_vocab, new_vocab, correspnodences = None):
if correspnodences == None:
correspnodences = {k:k[0] for k in new_vocab.keys() if k and k[0] in old_vocab.keys()}
with torch.no_grad():
new_decoder = conv1d_decoder(1024, len(new_vocab))
weight = decoder.weight
bias = decoder.bias
for new_token,old_token in correspnodences.items():
new_decoder.weight[new_vocab[new_token]] = weight[old_vocab[old_token]]
new_decoder.bias[new_vocab[new_token]] = bias[old_vocab[old_token]]
return new_decoder
checkpoint_model = load_quartznet_checkpoint(QuartznetCheckpoint.QuartzNet15x5Base_En)
```
These `vocab` dictionaries give the function `adapt_into_new_decoder` the indices in the weight matrix of the decoder for the corresponding tokens.
```
old_vocab = checkpoint_model.text_transform.vocab.itos
old_vocab = {k:v for (v, k) in enumerate(old_vocab)}
new_vocab = {k:v for (v, k) in enumerate(model.text_transform.vocab.itos)}
model.decoder = adapt_into_new_decoder(checkpoint_model.decoder, old_vocab, new_vocab, correspnodences)
del checkpoint_model
```
### Training
```
import wandb
from pytorch_lightning.loggers import WandbLogger
wandb_logger = WandbLogger(project='pronunciation-trainer', name='transform-thunder')
```
Setting a higher `encoder_initial_lr_div` led to less overfitting.
```
trainer = pl.Trainer(
gpus=-1, # Use all gpus
max_epochs=30,
callbacks=[FinetuneEncoderDecoder(unfreeze_encoder_at_epoch=15, encoder_initial_lr_div=100)],
logger = wandb_logger
)
trainer.fit(model = model, datamodule=dm)
trainer.validate(model = model, datamodule=dm)
```
Let's save our model for inference.
```
model.to_torchscript("QuartzNet_thunderspeech.pt")
wandb.save('QuartzNet_thunderspeech.pt', policy='now')
wandb.finish()
```
### Getting predictions for the app
```
from thunder.data.dataset import AudioFileLoader  # assuming this is where thunder-speech defines it; it was not imported above
loader = AudioFileLoader(sample_rate=16000)
natives = pd.read_csv('pronunciation-trainer/natives.csv')
```
I came up with a small list of words that learners might struggle to differentiate.
```
subset = ["thin", "thing", "think", "fit", "feet", "bald", "bold", "food", "foot",
"death", "deaf", "worm", "warm"]
subset_df = natives[natives.replica.isin(subset)].copy()  # .copy() avoids a SettingWithCopyWarning when a column is added later
```
This dataset contains audio for single words.
```
!wget https://lingualibre.org/datasets/Q22-eng-English.zip
import zipfile
archive = zipfile.ZipFile('Q22-eng-English.zip')
archive.extractall()
```
I get the raw prediction tensors and then convert them into the format I need.
```
model.eval()
predicts = []
for i in range(len(subset_df)):
path = str(Path('Q22-eng-English') / '/'.join(subset_df.path.iloc[i].split('/')[2:]))
# print(path)
try:
audio = loader(path)
predicts.append(model(audio, torch.tensor(audio.shape[0] * [audio.shape[-1]], device=audio.device)))
except Exception:
predicts.append(None)
# print(predicts[-1])
vocab = model.text_transform.vocab.itos
vocab[-1] = ''
for i in range(len(predicts)):
    if predicts[i] is not None:
ids = predicts[i][0].argmax(1)[0]
s = []
# print(ids)
if vocab[ids[0]]: s.append(vocab[ids[0]])
for l in range(1,len(ids)):
if ids[l-1] != ids[l]:
new = vocab[ids[l]]
if new: s.append(new)
predicts[i] = '.'.join(s)
predicts
subset_df["transcription"] = predicts
subset_df.to_csv("native_words_subset.csv", index=False)
```
<a href="https://colab.research.google.com/github/unicamp-dl/IA025_2022S1/blob/main/ex07/Guilherme_Pereira/Aula_7_Guilherme_Pereira.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
nome = 'Guilherme Pereira'
print(f'My name is {nome}')
```
# Exercise: Language Model (Bengio 2003) - MLP + Embeddings
In this exercise we will train a simple neural network to predict the next word of a text, given the previous words as input. This task is called "Language Modeling".
This dataset is already reasonably large, and you will very likely need a GPU to run your experiments.
Some useful advice:
- **ATTENTION:** the dataset is quite large. Do not try to print it.
- While debugging, make your dataset very small so that debugging is faster and does not need a GPU. Only turn the GPU on once your training loop is already working.
- Do not leave this exercise for the last minute. It is labor-intensive.
```
# we will use the transformers library to get access to BERT's tokenizer.
!pip install transformers
```
## Package imports
```
import collections
import itertools
import functools
import math
import random
import torch
import torch.nn as nn
import numpy as np
from torch.utils.data import DataLoader
from tqdm import tqdm_notebook
# Check which GPU we are using
!nvidia-smi
if torch.cuda.is_available():
dev = "cuda:0"
else:
dev = "cpu"
device = torch.device(dev)
print('Using {}'.format(device))
```
## MyDataset implementation
```
from typing import List
def tokenize(text: str, tokenizer):
return tokenizer(text, return_tensors=None, add_special_tokens=False).input_ids
class MyDataset():
def __init__(self, texts: List[str], tokenizer, context_size: int):
        # Write your code here
self.tokens, self.target = [], []
for text in texts:
ids = tokenize(text, tokenizer)
for i in range(len(ids)-context_size):
self.tokens.append(ids[i:i + context_size])
self.target.append(ids[i + context_size])
self.tokens = torch.tensor(self.tokens)
self.target = torch.tensor(self.target)
def __len__(self):
        # Write your code here
return len(self.target)
def __getitem__(self, idx):
        # Write your code here
return self.tokens[idx], self.target[idx]
```
## Test that your MyDataset implementation is correct
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
dummy_texts = ['Eu gosto de correr', 'Ela gosta muito de comer pizza']
dummy_dataset = MyDataset(texts=dummy_texts, tokenizer=tokenizer, context_size=3)
dummy_loader = DataLoader(dummy_dataset, batch_size=6, shuffle=False)
assert len(dummy_dataset) == 5
print('passed the dataset-size assert')
first_batch_input, first_batch_target = next(iter(dummy_loader))
correct_first_batch_input = torch.LongTensor(
[[ 3396, 10303, 125],
[ 1660, 5971, 785],
[ 5971, 785, 125],
[ 785, 125, 1847],
[ 125, 1847, 13779]])
correct_first_batch_target = torch.LongTensor([13239, 125, 1847, 13779, 15616])
assert torch.equal(first_batch_input, correct_first_batch_input)
print('Passed the input assert')
assert torch.equal(first_batch_target, correct_first_batch_target)
print('Passed the target assert')
```
# Loading the dataset
We will use a small sample of the [BrWaC](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) dataset to train and evaluate our language model.
```
!wget -nc https://storage.googleapis.com/unicamp-dl/ia025a_2022s1/aula7/sample_brwac.txt
# Load datasets
context_size = 9
valid_examples = 100
test_examples = 100
texts = open('sample_brwac.txt').readlines()
# print('Truncating for debugging purposes.')
# texts = texts[:500]
training_texts = texts[:-(valid_examples + test_examples)]
valid_texts = texts[-(valid_examples + test_examples):-test_examples]
test_texts = texts[-test_examples:]
training_dataset = MyDataset(texts=training_texts, tokenizer=tokenizer, context_size=context_size)
valid_dataset = MyDataset(texts=valid_texts, tokenizer=tokenizer, context_size=context_size)
test_dataset = MyDataset(texts=test_texts, tokenizer=tokenizer, context_size=context_size)
print(f'training examples: {len(training_dataset)}')
print(f'valid examples: {len(valid_dataset)}')
print(f'test examples: {len(test_dataset)}')
class LanguageModel(torch.nn.Module):
def __init__(self, vocab_size, context_size, embedding_dim, hidden_size):
"""
        Implements the neural language model proposed by Bengio et al.
Args:
vocab_size (int): Size of the input vocabulary.
context_size (int): Size of the sequence to consider as context for prediction.
embedding_dim (int): Dimension of the embedding layer for each word in the context.
hidden_size (int): Size of the hidden layer.
"""
        # Write your code here.
super(LanguageModel, self).__init__()
self.context_size = context_size
self.embeddings_dim = embedding_dim
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.hidden_layer1 = nn.Linear(self.context_size*self.embeddings_dim, hidden_size*4)
self.hidden_layer2 = nn.Linear(hidden_size*4, hidden_size*2)
self.hidden_layer3 = nn.Linear(hidden_size*2, hidden_size)
self.output_layer = nn.Linear(hidden_size, vocab_size, bias=False)
self.relu = nn.ReLU()
def forward(self, inputs):
"""
Args:
inputs is a LongTensor of shape (batch_size, context_size)
"""
        # Write your code here.
out = self.embeddings(inputs).view(-1, self.context_size*self.embeddings_dim)
out = self.relu(self.hidden_layer1(out))
out = self.relu(self.hidden_layer2(out))
out = self.relu(self.hidden_layer3(out))
return self.output_layer(out)
```
## Test the model with an example
```
model = LanguageModel(
vocab_size=tokenizer.vocab_size,
context_size=context_size,
embedding_dim=64,
hidden_size=128,
).to(device)
sample_train, _ = next(iter(DataLoader(training_dataset)))
sample_train_gpu = sample_train.to(device)
model(sample_train_gpu).shape
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'Number of model parameters: {num_params}')
```
## Perplexity assert
```
random.seed(123)
np.random.seed(123)
torch.manual_seed(123)
def perplexity(logits, target):
"""
Computes the perplexity.
Args:
logits: a FloatTensor of shape (batch_size, vocab_size)
target: a LongTensor of shape (batch_size,)
Returns:
A float corresponding to the perplexity.
"""
    # Write your code here.
return torch.exp(nn.functional.cross_entropy(logits,target))
n_examples = 1000
sample_train, target_token_ids = next(iter(DataLoader(training_dataset, batch_size=n_examples)))
sample_train_gpu = sample_train.to(device)
target_token_ids = target_token_ids.to(device)
logits = model(sample_train_gpu)
my_perplexity = perplexity(logits=logits, target=target_token_ids)
print(f'my perplexity: {int(my_perplexity)}')
print(f'correct initial perplexity: {tokenizer.vocab_size}')
assert math.isclose(my_perplexity, tokenizer.vocab_size, abs_tol=2000)
print('Passed the perplexity assert')
```
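A quick numpy mirror of the formula above: with uniform logits, the cross-entropy is log(vocab_size), so the perplexity equals the vocabulary size. The sizes below are illustrative.

```python
import numpy as np

def perplexity_np(logits, target):
    """exp of the mean cross-entropy; numpy mirror of the torch version above."""
    logits = np.asarray(logits, dtype=float)
    # log-softmax, then pick the log-probability of each target token
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(target)), target]
    return float(np.exp(nll.mean()))

vocab_size = 1000
logits = np.zeros((8, vocab_size))   # all-zero logits -> uniform distribution
target = np.arange(8)
print(round(perplexity_np(logits, target)))
```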
## Training and Validation Loop
```
max_examples = 200_000_000
eval_every_steps = 5000
lr = 3.5e-5
batch_size = 1024
model = LanguageModel(
vocab_size=tokenizer.vocab_size,
context_size=context_size,
embedding_dim=128,
hidden_size=256,
).to(device)
train_loader = DataLoader(training_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
validation_loader = DataLoader(valid_dataset, batch_size=batch_size)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
def train_step(input, target):
model.train()
model.zero_grad()
logits = model(input.to(device))
loss = nn.functional.cross_entropy(logits, target.to(device))
loss.backward()
optimizer.step()
return loss.item()
def validation_step(input, target):
model.eval()
logits = model(input)
loss = nn.functional.cross_entropy(logits, target)
return loss.item()
train_losses = []
n_examples = 0
step = 0
while n_examples < max_examples:
for input, target in train_loader:
loss = train_step(input.to(device), target.to(device))
train_losses.append(loss)
if step % eval_every_steps == 0:
train_ppl = np.exp(np.average(train_losses))
with torch.no_grad():
valid_ppl = np.exp(np.average([
validation_step(input.to(device), target.to(device))
for input, target in validation_loader]))
print(f'{step} steps; {n_examples} examples so far; train ppl: {train_ppl:.2f}, valid ppl: {valid_ppl:.2f}')
train_losses = []
n_examples += len(input) # Increment of batch size
step += 1
if n_examples >= max_examples:
break
```
## Final evaluation on the test dataset
Bonus: the model with the lowest perplexity on the test dataset will earn 0.5 extra points on the final grade.
```
test_loader = DataLoader(test_dataset, batch_size=64)
with torch.no_grad():
test_ppl = np.exp(np.average([
validation_step(input.to(device), target.to(device))
for input, target in test_loader
]))
print(f'test perplexity: {test_ppl}')
```
## Test your model with a sentence
Choose a sentence generated by the model that you find interesting.
```
prompt = 'Eu estou sozinho, sinto muita falta da minha namorada'
max_output_tokens = 10
for _ in range(max_output_tokens):
input_ids = tokenize(text=prompt, tokenizer=tokenizer)
    input_ids_truncated = input_ids[-context_size:]  # We use only the last <context_size> tokens as input to the model.
logits = model(torch.LongTensor([input_ids_truncated]).to(device))
    # With argmax, the model's output at each step is the most likely token.
    # This is called greedy decoding.
predicted_id = torch.argmax(logits).item()
    input_ids += [predicted_id]  # Append the token chosen at this step to the input.
prompt = tokenizer.decode(input_ids)
print(prompt)
```
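Greedy decoding always emits the single most likely token, which tends to produce repetitive text. A common alternative is top-k sampling; here is a minimal numpy sketch (a hypothetical helper, not part of the exercise):

```python
import numpy as np

def sample_top_k(logits, k=5, temperature=1.0, rng=None):
    """Sample a token id from the k most likely logits (softmax with temperature)."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(logits)[-k:]                 # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

# With k=2 only the two most likely token ids (1 and 3 here) can be drawn
logits = np.array([0.1, 2.0, 0.3, 1.5, -1.0])
token = sample_top_k(logits, k=2)
print(token in (1, 3))
```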
# Network Training
## Includes
```
# mass includes
import os, sys, warnings
import ipdb
import torch as t
import torchnet as tnt
from tqdm.notebook import tqdm
# add paths for all sub-folders
paths = [root for root, dirs, files in os.walk('.')]
for item in paths:
sys.path.append(item)
from ipynb.fs.full.config import r2rNetConf
from ipynb.fs.full.monitor import Visualizer
from ipynb.fs.full.network import r2rNet
from ipynb.fs.full.dataLoader import r2rSet
from ipynb.fs.full.util import *
```
## Initialization
```
# for debugging only
%pdb off
warnings.filterwarnings('ignore')
# choose GPU if available
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
device = t.device('cuda' if t.cuda.is_available() else 'cpu')
# define model
opt = r2rNetConf()
model = r2rNet().to(device)
# load pre-trained model if necessary
if opt.save_root:
last_epoch = model.load(opt.save_root)
last_epoch += opt.save_epoch
else:
last_epoch = 0
# dataloader for training
train_dataset = r2rSet(opt, mode='train')
train_loader = t.utils.data.DataLoader(train_dataset,
batch_size=opt.batch_size,
shuffle=True,
num_workers=opt.num_workers,
pin_memory=True)
# dataloader for validation
val_dataset = r2rSet(opt, mode='val')
val_loader = t.utils.data.DataLoader(val_dataset)
# optimizer
last_lr = opt.lr * opt.lr_decay**(last_epoch // opt.upd_freq)
optimizer = t.optim.Adam(model.parameters(), lr=last_lr)
scheduler = t.optim.lr_scheduler.StepLR(optimizer,
step_size=opt.upd_freq,
gamma=opt.lr_decay)
# visualizer
vis = Visualizer(env='r2rNet', port=8686)
loss_meter = tnt.meter.AverageValueMeter()
```
## Validation
```
def validate():
# set to evaluation mode
model.eval()
psnr = 0.0
for (raw_patch, srgb_patch, cam_wb) in val_loader:
with t.no_grad():
# copy to device
raw_patch = raw_patch.to(device)
srgb_patch = srgb_patch.to(device)
rggb_patch = toRGGB(srgb_patch)
cam_wb = cam_wb.to(device)
# inference
pred_patch = model(rggb_patch, cam_wb)
pred_patch = t.clamp(pred_patch, 0.0, 1.0)
# compute psnr
mse = t.mean((pred_patch - raw_patch)**2)
psnr += 10 * t.log10(1 / mse)
psnr /= len(val_loader)
# set to training mode
model.train(mode=True)
return psnr
```
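The validation loop computes PSNR from the MSE as 10 * log10(max_val**2 / mse), with max_val = 1 because predictions are clamped to [0, 1]. A standalone numpy version of the same formula, on toy tensors chosen for illustration:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

target = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)   # constant error of 0.1 -> MSE of 0.01 -> 20 dB
print(round(psnr(pred, target), 2))
```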
## Training entry
```
for epoch in tqdm(range(last_epoch, opt.max_epoch),
desc='epoch',
total=opt.max_epoch - last_epoch):
    # reset meter and update learning rate
    loss_meter.reset()
    # note: since PyTorch 1.1, scheduler.step() is meant to be called after the
    # epoch's optimizer.step() calls; calling it here steps the schedule one epoch early
    scheduler.step()
for (raw_patch, srgb_patch, cam_wb) in train_loader:
# reset gradient
optimizer.zero_grad()
# copy to device
raw_patch = raw_patch.to(device)
srgb_patch = srgb_patch.to(device)
rggb_patch = toRGGB(srgb_patch)
cam_wb = cam_wb.to(device)
# inference
pred_patch = model(rggb_patch, cam_wb)
# compute loss
loss = t.mean(t.abs(pred_patch - raw_patch))
# backpropagation
loss.backward()
optimizer.step()
# add to loss meter for logging
loss_meter.add(loss.item())
# show training status
vis.plot('loss', loss_meter.value()[0])
gt_img = raw2Img(raw_patch[0, :, :, :],
wb=opt.d65_wb,
cam_matrix=opt.cam_matrix)
pred_img = raw2Img(pred_patch[0, :, :, :],
wb=opt.d65_wb,
cam_matrix=opt.cam_matrix)
vis.img('gt/pred/mask', t.cat([gt_img, pred_img], dim=2).cpu() * 255)
# save model and do validation
if (epoch + 1) > opt.save_epoch or (epoch + 1) % 50 == 0:
model.save()
psnr = validate()
vis.log('epoch: %d, psnr: %.2f' % (epoch, psnr))
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Implement breadth-first traversal on a binary tree.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Can we assume we already have a Node class with an insert method?
* Yes
* Can we assume this fits in memory?
* Yes
* What should we do with each node when we process it?
* Call an input method `visit_func` on the node
## Test Cases
### Breadth-First Traversal
* 5, 2, 8, 1, 3 -> 5, 2, 8, 1, 3
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_bfs/bfs_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
# %load ../bst/bst.py
class Node(object):
def __init__(self, data):
self.data = data
self.left = None
self.right = None
self.parent = None
def __repr__(self):
return str(self.data)
class Bst(object):
def __init__(self, root=None):
self.root = root
def insert(self, data):
if data is None:
raise TypeError('data cannot be None')
if self.root is None:
self.root = Node(data)
return self.root
else:
return self._insert(self.root, data)
def _insert(self, node, data):
if node is None:
return Node(data)
if data <= node.data:
if node.left is None:
node.left = self._insert(node.left, data)
node.left.parent = node
return node.left
else:
return self._insert(node.left, data)
else:
if node.right is None:
node.right = self._insert(node.right, data)
node.right.parent = node
return node.right
else:
return self._insert(node.right, data)
from collections import deque


class BstBfs(Bst):

    def bfs(self, visit_func):
        if self.root is None:
            raise TypeError('root cannot be None')
        queue = deque([self.root])  # deque gives O(1) pops from the front
        while queue:
            node = queue.popleft()
            visit_func(node)
            if node.left is not None:
                queue.append(node.left)
            if node.right is not None:
                queue.append(node.right)
```
## Unit Test
```
%run ../utils/results.py
# %load test_bfs.py
from nose.tools import assert_equal
class TestBfs(object):
def __init__(self):
self.results = Results()
def test_bfs(self):
bst = BstBfs(Node(5))
bst.insert(2)
bst.insert(8)
bst.insert(1)
bst.insert(3)
bst.bfs(self.results.add_result)
assert_equal(str(self.results), '[5, 2, 8, 1, 3]')
print('Success: test_bfs')
def main():
test = TestBfs()
test.test_bfs()
if __name__ == '__main__':
main()
```
## Solution Notebook
Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_bfs/bfs_solution.ipynb) for a discussion on algorithms and code solutions.
# Spatial merge census and precinct data
This notebook will join precincts with census data.
Spatial unit of analysis is the precinct.
The aim is to join census data to each precinct. The problem is the precinct and block group boundaries don't match up.
So, calculate census values for each precinct this way:
For each precinct, the variable value is a weighted average of the values of the block groups (bgs) that the precinct overlaps.
x_A = p_A1 \* x_1 + p_A2 \* x_2 + ...
where
x_A = value of variable x for precinct A
x_1 = value of variable x in block group 1
p_A1 = proportion of precinct A's area that is in block group 1
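As a toy illustration of this weighted average (the precinct names, block-group ids and income values below are made up), each intersection piece is weighted by its overlap proportion and then summed back up to the precinct:

```
# Toy illustration of the area-weighted average described above.
# Precinct names, block-group ids and income values are made up.
import pandas as pd

# Each row is one precinct x block-group intersection piece.
pieces = pd.DataFrame({
    'precname':  ['A', 'A', 'B'],
    'geoid':     ['bg1', 'bg2', 'bg2'],
    'med_inc':   [50000.0, 70000.0, 70000.0],
    'prop_area': [0.25, 0.75, 1.0],  # share of the precinct's area in that bg
})

# Weight each block-group value by the overlap proportion...
pieces['med_inc_wgt'] = pieces['med_inc'] * pieces['prop_area']
# ...then aggregate back up to the precinct level.
by_precinct = pieces.groupby('precname', as_index=False)['med_inc_wgt'].sum()
print(by_precinct)  # precinct A: 0.25*50000 + 0.75*70000 = 65000
```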
```
%matplotlib inline
from geopandas import GeoDataFrame, read_file
from geopandas.tools import overlay
import pandas as pd
import spatial_processing_functions as spf
#import importlib
#importlib.reload(spf)
```
SF voting precincts. Boundaries are updated every ten years, and become active two years after the census.
We have 1992, 2002, and 2012.
years = ['1990','2000','2010','2009','2014']
1990 census data -> 1992 precinct + 1990 bg (missing)
2000, 2009 census data -> 2002 precincts + 2000 bgs
2010, 2014 census data -> 2012 precincts + 2010 bgs
## Step 1: load precinct and census geography shapefiles
We'll need the following combinations of censusXprecinct:
- ce2000pre1992, ce2000pre2002, ce2007pre2002, ce2012pre2012 <- for census data
- 'bg2000pre1992', 'bg2000pre2002', 'bg2010pre2012' <- for block groups (since ce2007 data uses 2000 bg boundaries)
```
bgXprec = dict.fromkeys(['bg2000pre1992', 'bg2000pre2002', 'bg2010pre2012'])
for yr_key in bgXprec.keys():
bgs = spf.load_bg_shp(yr_key[2:6])
precincts = spf.load_prec_shp(yr_key[9:13])
precincts = spf.reproject_prec(precincts)
bgXprec[yr_key] = spf.merge_precinct_bg(precincts,bgs,yr_key)
#yr_key ='bg2010pre2012'
#bgs = load_bg_shp(yr_key[2:6])
#precincts = load_prec_shp(yr_key[9:13])
#bgXprec[yr_key] = merge_precinct_bg(precincts,bgs,yr_key)
bgXprec.keys()
```
## Merge with census data
```
# We'll need the following combinations of censusXprecinct
#ce2000pre1992, ce2000pre2002, ce2007pre2002, ce2012pre2012 <- for census data
#'bg2000pre1992', 'bg2000pre2002', 'bg2010pre2012' <- for block groups
# dictionary for matching correct year.
# (although we don't actually need 1990 data. )
census2bg_year = {'1990':'1990', '2000':'2000','2010':'2010','2007':'2000','2012':'2010'}
ce2bgXpre={'ce2000pre1992':'bg2000pre1992','ce2000pre2002':'bg2000pre2002','ce2007pre2002':'bg2000pre2002','ce2012pre2012':'bg2010pre2012'}
# load census data, for each year. Then merge with the appropriate bg/precinct file.
census_data_by_precinct = dict.fromkeys(['ce2000pre1992', 'ce2000pre2002', 'ce2007pre2002', 'ce2012pre2012'])
for yr_key in census_data_by_precinct.keys():
print('\n',yr_key)
census_yr = yr_key[2:6]
census_df = spf.load_census_data(census_yr)
#lookup correct bgXprec dataframe to use.
bg_key = ce2bgXpre[yr_key]
# now merge.
print('{} precincts before'.format(len(bgXprec[bg_key].precname.unique())))
df_merged = pd.merge(bgXprec[bg_key], census_df, on = 'geoid')
print('{} precincts after'.format(len(df_merged.precname.unique())))
vars_to_use = spf.get_vars_to_use()
cols_to_keep = vars_to_use + ['precname','area_m','intersect_area','geoid']
df_merged = df_merged[cols_to_keep]
df_merged_calc = spf.calc_variables(df_merged, vars_to_use) # leave off geo columns, obviously
# aggregate back to precinct level.
df_new = spf.agg_vars_by_prec(df_merged_calc)
# clean up by dropping unweighted and other unneeded columns
df_new.drop(vars_to_use, axis=1, inplace=True)
df_new.drop(['intersect_area','prop_area'], axis=1, inplace=True)
df_new = spf.rename_wgt_cols(df_new, vars_to_use)
# store data frame in a dictionary
census_data_by_precinct[yr_key] = df_new
# also save as csv.
spf.save_census_data(df_new,yr_key)
```
Let's check out the precincts that don't total 1.0... something may be wrong.
Precinct 2009(2002) is on the SF border and has 931 registered voters.
The 5 precincts(1992) with weird results are all on the southern SF border.
These that are on the border probably don't add up to 1.0 because the boundaries are slightly different from the census shapefiles.
I think it's close enough that it's not a problem.
TODO: fix these two precincts.
Precincts(2012) 7025 and 7035 are the Hunter's Point Shipyard area. I wonder if this is messed up because boundaries changed?
Something's clearly wrong with 7035 because there are 327 registered voters and a tot population of only ~34.
7025 has 441 registered voters and tot pop of ~1323.
For these, it might be more of a problem because they're really far off.
Probably have to omit them until I can come back and figure out what to do.
```
# look for other missing data. # can't find any other missing data here.
for yr_key in census_data_by_precinct.keys():
print(len(census_data_by_precinct[yr_key][pd.isnull(census_data_by_precinct[yr_key]['med_inc_wgt'])]))
```
```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
%matplotlib inline
# Import our required modules and classes
import Image_Classifier as img_clf
import Labeled_Image as li
import classifiers as clfs
from skimage import io
from skimage.color import rgb2gray
from skimage.transform import rescale
import matplotlib.pyplot as plt
from IPython.display import display
import fileupload
import os
import PIL.Image
import io as io2
import numpy as np
# Initialize the class in charge of classifying images
clf = img_clf.Image_Classifier(clfs.classifiers.get('svm'))
lbl_img = li.Labeled_Image(clf)
'''Function that applies the operations needed to convert
the data obtained from the FileUpload widget into an image'''
def imageConverter(change):
ch = change['owner']
image = io2.BytesIO(ch.data)
image = PIL.Image.open(image)
image = np.array(image)
return rgb2gray(image)
'''Function used to select the classifier
that will be used to classify the image'''
def set_classifier_wrapper(classifier_index):
clf.set_classifier(clfs.classifiers[classifier_index][0],
is_probs_classifier = clfs.classifiers[classifier_index][1])
'''Function that displays the image'''
def plotter_wrapper():
lbl_img.boxes_generator_with_nms()
lbl_img.plotter()
'''Function used to choose the image'''
def _upload(lbl_img):
_upload_widget = fileupload.FileUploadWidget()
def _cb(change):
image = imageConverter(change)
lbl_img.set_image(image)
#lbl_img.predict()
_upload_widget.observe(_cb, names='data')
display(_upload_widget)
'''Function that rescales the image'''
def rescale_image_selector(lbl_img, rescale_coef):
if lbl_img.get_original_image() is not None:
lbl_img.image_rescale(rescale_coef)
def patch_size_selector(Ni, Nj):
clf.set_patch_size((Ni,Nj))
clf_button = widgets.Button(description="Classify")
def on_button_clicked(b):
# Label the image
lbl_img.predict()
# And display it
plotter_wrapper()
#clf_button.on_click(on_button_clicked)#, clf)
def step_size_selector(istep, jstep):
clf.set_istep(istep)
clf.set_jstep(jstep)
def probabilities_selector(probs):
lbl_img.set_probs(probs)
lbl_img.predict()
plotter_wrapper()
def alfa_selector(alfa):
lbl_img.set_alfa(alfa)
# Display the widget for choosing the classifier
interact(set_classifier_wrapper, classifier_index = list(clfs.classifiers.keys()));
# Display the widget for choosing the image to classify
_upload(lbl_img)
# Allow choosing the image rescaling factor (default 1)
interact(rescale_image_selector, rescale_coef=(0.3,1,0.001), lbl_img=fixed(lbl_img))
# Allow choosing the height and width of the window subdivisions
#interact(patch_size_selector, Ni=(0,100), Nj=(0,100))
# Allow choosing the step size for the image subdivisions
interact(step_size_selector, istep=(0,100), jstep=(0,100))
interact(alfa_selector, alfa=(0,1,0.001))
# Finally, display the image and let it show the windows
# according to the probabilities
interact_manual(probabilities_selector, probs=(0.5,1,0.001))
# Call the classifier
#display(clf_button)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno
pd.set_option('display.max_rows', 40)
pd.set_option('display.max_columns', 20)
pd.set_option('display.width', 200)
def explore_df(df):
print(df.shape)
print(df.head())
print(df.info())
```
# Extract RGB values from image
There are broadly three steps to find the dominant colors in an image:
Extract RGB values into three lists.
Perform k-means clustering on scaled RGB values.
Display the colors of cluster centers.
To extract RGB values, we use the imread() function from matplotlib's image module. Empty lists r, g and b have been initialized.
For the purpose of finding dominant colors, we will be using the following image.
```
import matplotlib.image as img
r = []
g = []
b = []
# Read batman image
batman_image = img.imread('datasets/batman.jpg')
print(batman_image.shape)
for row in batman_image:
for temp_r, temp_g, temp_b in row:
r.append(temp_r)
g.append(temp_g)
b.append(temp_b)
batman_df = pd.DataFrame({'red':r, 'green':g, 'blue':b})
display(batman_df.head())
from scipy.cluster.vq import whiten
columns = ['red', 'green', 'blue']
for column in columns:
batman_df['scaled_'+ column] = whiten(batman_df[column])
display(batman_df.head())
from sklearn.preprocessing import StandardScaler
# Create scaler
scaler = StandardScaler(with_mean=False, with_std=True)
batman_df[['scaled_red', 'scaled_green', 'scaled_blue']] = scaler.fit_transform(batman_df[['red','green','blue']])
display(batman_df.shape)
display(batman_df.head())
```
# How many dominant colors?
We have loaded the following image using the imread() function of the image class of matplotlib.
The RGB values are stored in a data frame, batman_df. The RGB values have been standardized using the whiten() function and stored in the columns scaled_red, scaled_green and scaled_blue.
Construct an elbow plot with the data frame. How many dominant colors are present?
```
from scipy.cluster.vq import kmeans, vq
distortions = []
num_clusters = range(1, 7)
# Create a list of distortions from the kmeans function
for i in num_clusters:
cluster_centers, distortion = kmeans(batman_df[['scaled_red', 'scaled_green', 'scaled_blue']], i)
distortions.append(distortion)
# Create a data frame with two lists, num_clusters and distortions
elbow_plot = pd.DataFrame({'num_clusters':num_clusters, 'distortions':distortions})
# Create a line plot of num_clusters and distortions
sns.lineplot(x='num_clusters', y='distortions', data = elbow_plot)
plt.xticks(num_clusters)
plt.show()
from sklearn.cluster import KMeans
num_clusters = range(1, 7)
inertias = []
for k in num_clusters:
# Create a KMeans with k clusters
model = KMeans(n_clusters=k)
# Fit model to samples
model.fit(batman_df[['scaled_red', 'scaled_green', 'scaled_blue']])
# Append inertia to the list of inertias
inertias.append(model.inertia_)
# Create a data frame with two lists - num_clusters, inertias
elbow_plot = pd.DataFrame({'num_clusters': num_clusters, 'inertias': inertias})
# Create a line plot of num_clusters and inertias
sns.lineplot(x='num_clusters', y='inertias', data = elbow_plot)
plt.show()
```
### RESULT: Notice that there are three distinct colors present in the image, which is supported by the elbow plot.
# Display dominant colors
We have loaded the following image using the imread() function of the image class of matplotlib.
To display the dominant colors, we convert the colors of the cluster centers back to their raw values and then scale them to the range 0-1, using the following formula:
```converted_pixel = standardized_pixel * pixel_std / 255```
The RGB values are stored in a data frame, batman_df. The scaled RGB values are stored in columns, scaled_red, scaled_blue and scaled_green. The cluster centers are stored in the variable cluster_centers, which were generated using the kmeans() function with three clusters.
```
colors = []
cluster_centers, distortion = kmeans(batman_df[['scaled_red', 'scaled_green', 'scaled_blue']], 3)
# Get standard deviations of each color
r_std, g_std, b_std = batman_df[['red', 'green', 'blue']].std()
display(r_std, g_std, b_std)
for cluster_center in cluster_centers:
scaled_r, scaled_g, scaled_b = cluster_center
# Convert each standardized value to scaled value
colors.append((
scaled_r * r_std / 255,
scaled_g * g_std / 255,
scaled_b * b_std / 255
))
plt.imshow([colors])
plt.show()
model = KMeans(n_clusters=3)
model.fit(batman_df[['scaled_red', 'scaled_green', 'scaled_blue']])
centroids = model.cluster_centers_
display(centroids)
centroids_unscaled = scaler.inverse_transform(centroids)
display(centroids_unscaled)
centroids_unscaled /= 255
display(centroids_unscaled)
plt.imshow([centroids_unscaled])
plt.show()
```
# COMP90051 Workshop 3
## Logistic regression
***
In this workshop we'll be implementing L2-regularised logistic regression using `scipy` and `numpy`.
Our key objectives are:
* to become familiar with the optimisation problem that sits behind L2-regularised logistic regression;
* to apply polynomial basis expansion and recognise when it's useful; and
* to experiment with the effect of L2 regularisation.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
### 1. Binary classification data
Let's begin by generating some binary classification data.
To make it easy for us to visualise the results, we'll stick to a two-dimensional feature space.
```
from sklearn.datasets import make_circles
X, Y = make_circles(n_samples=300, noise=0.1, factor=0.7, random_state=90051)
plt.plot(X[Y==0,0], X[Y==0,1], 'o', label = "y=0")
plt.plot(X[Y==1,0], X[Y==1,1], 's', label = "y=1")
plt.legend()
plt.xlabel("$x_0$")
plt.ylabel("$x_1$")
plt.show()
```
**Question:** What's interesting about this data? Do you think logistic regression will perform well?
**Answer:** *This question is answered in section 3.*
In preparation for fitting and evaluating a logistic regression model, we randomly partition the data into train/test sets. We use the `train_test_split` function from `sklearn`.
```
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=90051)
print("Training set has {} instances. Test set has {} instances.".format(X_train.shape[0], X_test.shape[0]))
```
### 2. Logistic regression objective function
Recall from lectures, that logistic regression models the distribution of the binary class $y$ *conditional* on the feature vector $\mathbf{x}$ as
$$
y | \mathbf{x} \sim \mathrm{Bernoulli}[\sigma(\mathbf{w}^T \mathbf{x} + b)]
$$
where $\mathbf{w}$ is the weight vector, $b$ is the bias term and $\sigma(z) = 1/(1 + e^{-z})$ is the logistic function.
To simplify the notation, we'll collect the model parameters $\mathbf{w}$ and $b$ in a single vector $\mathbf{v} = [b, \mathbf{w}]$.
Fitting this model amounts to choosing $\mathbf{v}$ that minimises the sum of cross-entropies over the instances ($i = 1,\ldots,n$) in the training set
$$
f_\mathrm{cross-ent}(\mathbf{v}; \mathbf{X}, \mathbf{Y}) = - \sum_{i = 1}^{n} \left\{ y_i \log \sigma(\mathbf{w}^T \mathbf{x}_i + b) + (1 - y_i) \log (1 - \sigma(\mathbf{w}^T \mathbf{x}_i + b)) \right\}
$$
Often a regularisation term of the form $f_\mathrm{reg}(\mathbf{w}; \lambda) = \frac{1}{2} \lambda \mathbf{w}^T \mathbf{w}$ is added to the objective to penalize large weights (this can help to prevent overfitting). Note that $\lambda \geq 0$ controls the strength of the regularisation term.
Putting this together, our goal is to minimise the following objective function with respect to $\mathbf{w}$ and $b$:
$$
f(\mathbf{v}; \mathbf{X}, \mathbf{Y}, \lambda) = f_\mathrm{reg}(\mathbf{w}; \lambda) + f_\mathrm{cross-ent}(\mathbf{v}; \mathbf{X}, \mathbf{Y})
$$
**Question:** Why aren't we regularising the entire parameter vector $\mathbf{v}$? Notice that only $\mathbf{w}$ is included in $f_\mathrm{reg}$—in other words $b$ is excluded from regularisation.
**Answer:** *If we were to replace $\mathbf{w}$ with $\mathbf{v}$ in the regularisation term, we'd be penalising large $b$. This is not a good idea, because a large bias may be required for some data sets—and restricting the bias doesn't help with generalisation.*
We're going to find a solution to this minimisation problem using the BFGS algorithm (named after the inventors Broyden, Fletcher, Goldfarb and Shanno). BFGS is a "hill-climbing" algorithm like gradient descent, however it additionally makes use of second-order derivative information (by approximating the Hessian). It converges in fewer iterations than gradient descent (its convergence rate is *superlinear* whereas gradient descent is only *linear*).
We'll use an implementation of BFGS provided in `scipy` called `fmin_bfgs`. The algorithm requires two functions as input: (i) a function that evaluates the objective $f(\mathbf{v}; \ldots)$ and (ii) a function that evaluates the gradient $\nabla_{\mathbf{v}} f(\mathbf{v}; \ldots)$.
Let's start by writing a function to compute $f(\mathbf{v}; \ldots)$.
```
from scipy.special import expit # this is the logistic function
# v: parameter vector
# X: feature matrix
# Y: class labels
# Lambda: regularisation constant
def obj_fn(v, X, Y, Lambda):
prob_1 = expit(np.dot(X,v[1::]) + v[0])
reg_term = 0.5 * Lambda * np.dot(v[1::],v[1::]) # fill in
cross_entropy_term = - np.dot(Y, np.log(prob_1)) - np.dot(1. - Y, np.log(1. - prob_1))
return reg_term + cross_entropy_term # fill in
```
Now for the gradient, we use the following result (if you're familiar with vector calculus, you may wish to derive this yourself):
$$
\nabla_{\mathbf{v}} f(\mathbf{v}; \ldots) = \left[\frac{\partial f(\mathbf{w}, b;\ldots)}{\partial b}, \nabla_{\mathbf{w}} f(\mathbf{w}, b; \ldots) \right] = \left[\sum_{i = 1}^{n} \sigma(\mathbf{w}^T \mathbf{x}_i + b) - y_i, \lambda \mathbf{w} + \sum_{i = 1}^{n} (\sigma(\mathbf{w}^T \mathbf{x}_i + b) - y_i)\mathbf{x}_i\right]
$$
The function below implements $\nabla_{\mathbf{v}} f(\mathbf{v}; \ldots)$.
```
# v: parameter vector
# X: feature matrix
# Y: class labels
# Lambda: regularisation constant
def grad_obj_fn(v, X, Y, Lambda):
prob_1 = expit(np.dot(X, v[1::]) + v[0])
grad_b = np.sum(prob_1 - Y)
grad_w = Lambda * v[1::] + np.dot(prob_1 - Y, X)
return np.insert(grad_w, 0, grad_b)
```
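As an optional sanity check (not part of the workshop), we can compare this analytic gradient against a finite-difference approximation with `scipy.optimize.check_grad`. The block below restates the two functions in compact form and uses small random data, so it runs on its own:

```
# Optional sanity check: analytic gradient vs finite differences.
# The data here is random and purely illustrative.
import numpy as np
from scipy.special import expit
from scipy.optimize import check_grad

def obj_fn(v, X, Y, Lambda):
    prob_1 = expit(X @ v[1:] + v[0])
    reg = 0.5 * Lambda * (v[1:] @ v[1:])
    return reg - Y @ np.log(prob_1) - (1. - Y) @ np.log(1. - prob_1)

def grad_obj_fn(v, X, Y, Lambda):
    prob_1 = expit(X @ v[1:] + v[0])
    grad_b = np.sum(prob_1 - Y)
    grad_w = Lambda * v[1:] + (prob_1 - Y) @ X
    return np.insert(grad_w, 0, grad_b)

rng = np.random.RandomState(0)
X = rng.randn(20, 2)
Y = (rng.rand(20) > 0.5).astype(float)
v = rng.randn(3) * 0.1

# check_grad returns the norm of the difference between the two gradients;
# for a correct gradient it should be tiny.
err = check_grad(obj_fn, grad_obj_fn, v, X, Y, 1.0)
print(err)
```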
### 3. Solving the minimization problem using BFGS
Now that we've implemented functions to compute the objective and the gradient, we can plug them into `fmin_bfgs`.
Specifically, we define a function `my_logistic_regression` which calls `fmin_bfgs` and returns the optimal weight vector.
```
from scipy.optimize import fmin_bfgs
# X: feature matrix
# Y: class labels
# Lambda: regularisation constant
# v_initial: initial guess for parameter vector
def my_logistic_regression(X, Y, Lambda, v_initial, disp=True):
# Function for displaying progress
def display(v):
print('v is', v, 'objective is', obj_fn(v, X, Y, Lambda))
return fmin_bfgs(f=obj_fn, fprime=grad_obj_fn,
x0=v_initial, args=(X, Y, Lambda), disp=disp,
callback=display)
```
Let's try it out!
```
Lambda = 1
v_initial = np.zeros(X_train.shape[1] + 1) # fill in a vector of zeros of appropriate length
v_opt = my_logistic_regression(X_train, Y_train, Lambda, v_initial)
# Function to plot the data points and decision boundary
def plot_results(X, Y, v, trans_func = None):
# Scatter plot in feature space
plt.plot(X[Y==0,0], X[Y==0,1], 'o', label = "y=0")
plt.plot(X[Y==1,0], X[Y==1,1], 's', label = "y=1")
# Compute axis limits
x0_lower = X[:,0].min() - 0.1
x0_upper = X[:,0].max() + 0.1
x1_lower = X[:,1].min() - 0.1
x1_upper = X[:,1].max() + 0.1
# Generate grid over feature space
x0, x1 = np.mgrid[x0_lower:x0_upper:.01, x1_lower:x1_upper:.01]
grid = np.c_[x0.ravel(), x1.ravel()]
if (trans_func is not None):
grid = trans_func(grid) # apply transformation to features
arg = (np.dot(grid, v[1::]) + v[0]).reshape(x0.shape)
# Plot decision boundary (where w^T x + b == 0)
plt.contour(x0, x1, arg, levels=[0], cmap="Greys", vmin=-0.2, vmax=0.2)
plt.legend()
plt.show()
plot_results(X, Y, v_opt)
```
**Question:** Is the solution what you expected? Is it a good fit for the data?
**Answer:** *It's not a good fit because logistic regression is a linear classifier, and the data is not linearly separable.*
**Question:** What's the accuracy of this model? Fill in the code below assuming the following decision function
$$
\hat{y} = \begin{cases}
1, &\mathrm{if} \ p(y = 1|\mathbf{x}) \geq \tfrac{1}{2}, \\
0, &\mathrm{otherwise}.
\end{cases}
$$
```
from sklearn.metrics import accuracy_score
Y_test_pred = ((np.dot(X_test, v_opt[1::]) + v_opt[0]) >= 0)*1 # fill in
accuracy_score(Y_test, Y_test_pred)
```
### 4. Adding polynomial features
We've seen that ordinary logistic regression does poorly on this data set, because the data is not linearly separable in the $x_0,x_1$ feature space.
We can get around this problem using basis expansion. In this case, we'll augment the feature space by adding polynomial features of degree 2. In other words, we replace the original feature matrix $\mathbf{X}$ by a transformed feature matrix $\mathbf{\Phi}$ which contains additional columns corresponding to $x_0^2$, $x_0 x_1$ and $x_1^2$. This is done using the function `add_quadratic_features` defined below.
**Note:** There's a built-in function in `sklearn` for adding polynomial features located at `sklearn.preprocessing.PolynomialFeatures`.
```
# X: original feature matrix
def add_quadratic_features(X):
return np.c_[X, X[:,0]**2, X[:,0]*X[:,1], X[:,1]**2]
Phi_train = add_quadratic_features(X_train)
Phi_test = add_quadratic_features(X_test)
```
Let's apply our custom logistic regression function again on the augmented feature space.
```
Lambda = 1
v_initial = np.zeros(Phi_train.shape[1] + 1) # fill in a vector of zeros of appropriate length
v_opt = my_logistic_regression(Phi_train, Y_train, Lambda, v_initial)
plot_results(X, Y, v_opt, trans_func=add_quadratic_features)
```
This time we should get a better result for the accuracy on the test set.
```
from sklearn.metrics import accuracy_score
Y_test_pred = ((np.dot(Phi_test, v_opt[1::]) + v_opt[0]) >= 0)*1 # fill in
accuracy_score(Y_test, Y_test_pred)
```
### 5. Effect of regularisation
So far, we've fixed the regularisation constant so that $\lambda = 1$. (Note it's possible to choose an "optimal" value for $\lambda$ by applying cross-validation.)
**Question:** What do you think will happen if we switch the regularisation off? Try setting $\lambda$ to a small value (say $10^{-3}$) and check whether the accuracy of the model is affected.
**Answer:** *Generally speaking, we risk overfitting if the regularisation constant is too small (or switched off entirely). You should observe that the accuracy on the test set reduces slightly with $\lambda = 10^{-3}$ vs. $\lambda = 1$.*
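To see the effect for yourself, here's a rough self-contained sweep over the regularisation strength using sklearn's built-in `LogisticRegression` (where `C` plays the role of $1/\lambda$, approximately). It regenerates similar circles data with quadratic features rather than reusing the variables above, so exact accuracies will differ from our custom implementation:

```
# Self-contained sketch: sweep lambda via sklearn's C = 1/lambda.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, Y = make_circles(n_samples=300, noise=0.1, factor=0.7, random_state=90051)
# degree-2 features: x0, x1, x0^2, x0*x1, x1^2 (same augmentation as above)
Phi = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
Phi_train, Phi_test, Y_train, Y_test = train_test_split(
    Phi, Y, test_size=0.33, random_state=90051)

accs = {}
for Lambda in [1e-3, 1e-1, 1.0, 10.0]:
    clf = LogisticRegression(C=1.0 / Lambda, max_iter=1000).fit(Phi_train, Y_train)
    accs[Lambda] = accuracy_score(Y_test, clf.predict(Phi_test))
    print(f"lambda={Lambda:g}  test accuracy={accs[Lambda]:.3f}")
```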
### 6. Logistic regression using sklearn
Now that you have some insight into the optimisation problem behind logistic regression, you should feel confident in using the built-in implementation in `sklearn` (or other packages).
Note that the `sklearn` implementation handles floating point underflow/overflow more carefully than we have done, and uses faster numerical optimisation algorithms.
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1)
clf.fit(Phi_train, Y_train)
from sklearn.metrics import accuracy_score
Y_test_pred = clf.predict(Phi_test)
accuracy_score(Y_test, Y_test_pred)
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
## Does nn.Conv2d init work well?
[Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=21)
```
#export
from exp.nb_02 import *
def get_data():
path = datasets.download_data(MNIST_URL, ext='.gz')
with gzip.open(path, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
return map(tensor, (x_train,y_train,x_valid,y_valid))
def normalize(x, m, s): return (x-m)/s
torch.nn.modules.conv._ConvNd.reset_parameters??
```
```
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5)) # why 5???
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
```
Let's test that with MNIST, observing how initialization affects performance.
```
# Load data
x_train,y_train,x_valid,y_valid = get_data()
train_mean,train_std = x_train.mean(),x_train.std()
x_train = normalize(x_train, train_mean, train_std)
x_valid = normalize(x_valid, train_mean, train_std)
# Reshape
x_train = x_train.view(-1,1,28,28)
x_valid = x_valid.view(-1,1,28,28)
x_train.shape,x_valid.shape
n,*_ = x_train.shape
c = y_train.max()+1
nh = 32
n,c
l1 = nn.Conv2d(in_channels=1, out_channels=nh, kernel_size=5)
```
The parameters are: channels in (1, because the image is black & white), channels out (the number of "hidden layer neurons") and kernel size.
We take 100 samples from the original input:
```
x = x_valid[:100]
x.shape
```
Write a function for a repetitive task like obtaining the stats ;)
```
def stats(x):
return x.mean(),x.std()
```
We can access `l1.weight` and `l1.bias`:
```
l1.weight.shape, l1.bias.shape
```
The weights tensor dimensions: 32 output filters, 1 input filter (because we only have 1 channel), and 5x5 because of the filter/kernel size.
The initialization of Conv2d gives us these stats:
```
stats(l1.weight),stats(l1.bias)
```
Pass our input through the conv layer...
```
t = l1(x)
stats(t)
```
The std dev is far from the expected 1 value. This looks like a problem.
Comparing to the regular kaiming normal (remember it's designed to be used followed by a ReLU layer, or more generally by a Leaky ReLU layer, where `a` is the slope of the negative part):
```
init.kaiming_normal_(l1.weight, a=1.)
stats(l1(x))
```
Let's test our conv layer followed by a Leaky ReLU layer...
```
import torch.nn.functional as F
def f1(x,a=0):
return F.leaky_relu(l1(x),a)
```
With a=0 (ReLU) the mean rises to approximately 0.5, but the std dev remains close to 1...
```
init.kaiming_normal_(l1.weight, a=0)
stats(f1(x))
```
Compared to the default init (which uses a=math.sqrt(5)):
```
l1 = nn.Conv2d(1, nh, 5)
stats(f1(x))
```
Let's write our own kaiming normalization function :)
```
l1.weight.shape
# receptive field size (number of kernel elements)
rec_fs = l1.weight[0,0].numel()
rec_fs
nf,ni,*_ = l1.weight.shape
nf,ni
# Effective fan in and fan out of the convolutional layer
fan_in = ni*rec_fs
fan_out = nf*rec_fs
fan_in,fan_out
```
The gain for the normalization, having `a` into account:
```
def gain(a):
return math.sqrt(2.0 / (1 + a**2))
gain(1),gain(0),gain(0.01),gain(0.1),gain(math.sqrt(5.))
```
The last value is the one corresponding to PyTorch's default `a` init value.
Remember that PyTorch uses kaiming_uniform instead of kaiming_normal, and the std dev of a uniform distribution is not 1:
```
torch.zeros(10000).uniform_(-1,1).std()
1/math.sqrt(3.)
```
Our complete kaiming function, with the compensated gain:
```
def kaiming2(x,a, use_fan_out=False):
nf,ni,*_ = x.shape
rec_fs = x[0,0].shape.numel()
fan = nf*rec_fs if use_fan_out else ni*rec_fs
std = gain(a) / math.sqrt(fan)
bound = math.sqrt(3.) * std
x.data.uniform_(-bound,bound)
kaiming2(l1.weight, a=0);
stats(f1(x))
kaiming2(l1.weight, a=math.sqrt(5.))
stats(f1(x))
class Flatten(nn.Module):
def forward(self,x): return x.view(-1)
m = nn.Sequential(
nn.Conv2d(1,8, 5,stride=2,padding=2), nn.ReLU(),
nn.Conv2d(8,16,3,stride=2,padding=1), nn.ReLU(),
nn.Conv2d(16,32,3,stride=2,padding=1), nn.ReLU(),
nn.Conv2d(32,1,3,stride=2,padding=1),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
y = y_valid[:100].float()
t = m(x)
stats(t)
```
That std dev is very low: the variance has almost vanished by the last layer, which is a problem.
```
l = mse(t,y)
l.backward()
stats(m[0].weight.grad)
```
Let's return to `kaiming_uniform`, and apply that to the previous network:
```
init.kaiming_uniform_??
for l in m:
if isinstance(l,nn.Conv2d):
init.kaiming_uniform_(l.weight)
l.bias.data.zero_()
t = m(x)
stats(t)
l = mse(t,y)
l.backward()
stats(m[0].weight.grad)
```
It's better than the default Conv2d.
The PyTorch team said it was a bug (not multiplying by math.sqrt(3.)), but it empirically worked really well :S
## Export
```
!python notebook2script.py 02a_why_sqrt5.ipynb
```
```
import numpy as np
import pandas as pd
from sklearn import *
import warnings; warnings.filterwarnings("ignore")
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
sub = pd.read_csv('../input/sample_submission.csv')
train.shape, test.shape, sub.shape
```
Wordplay in Column Names
==============================
```
import matplotlib.pyplot as plt
import networkx as nx
G=nx.Graph()
col = [c for c in train.columns if c not in ['id', 'target']]
G.add_node('Start')
for i in range(4):
G.add_node('Column Section '+ str(i))
G.add_edge('Start','Column Section '+ str(i))
for c in train[col].columns:
if c.split('-')[i] not in G.nodes():
G.add_node(c.split('-')[i])
G.add_edge('Column Section '+ str(i), c.split('-')[i])
if c not in G.nodes():
G.add_node(c)
G.add_edge(c.split('-')[i],c)
plt.figure(1,figsize=(12,12))
nx.draw_networkx(G, node_size=1,font_size=6)
plt.axis('off'); plt.show()
```
How unique are the column values
==========
```
df = []
for c in train.columns:
if c not in ['target', 'id', 'wheezy-copper-turtle-magic']:
l1 = test[c].unique()
l2 = train[c].unique()
df.append([c, len(l1), len(l2), len(l1)- 131073, len(l2) - 262144])
df = pd.DataFrame(df, columns=['col', 'test_unique', 'train_unique', 'test_diff', 'train_diff'])
for c in ['test_unique', 'train_unique', 'test_diff', 'train_diff']:
print(df[c].min(), df[c].max())
#col = list(df[((df['test_diff']<-1900) & (df['train_diff']<-7500))]['col'].values)
df.head()
```
Getting wheezy
=====
```
col = [c for c in train.columns if c not in ['id', 'target', 'wheezy-copper-turtle-magic']]
df_all = pd.concat((train,test), axis=0, ignore_index=True).reset_index(drop=True)
df_all['wheezy-copper-turtle-magic'] = df_all['wheezy-copper-turtle-magic'].astype('category')
train = df_all[:train.shape[0]].reset_index(drop=True)
test = df_all[train.shape[0]:].reset_index(drop=True)
del df_all
train.shape, test.shape
```
Lets Race
======
```
test_ = []
kn = neighbors.KNeighborsClassifier(n_neighbors=17, p=2.9)
sv = svm.NuSVC(kernel='poly', degree=4, random_state=4, probability=True, coef0=0.08)
for s in sorted(train['wheezy-copper-turtle-magic'].unique()):
train2 = train[train['wheezy-copper-turtle-magic']==s].reset_index(drop=True).copy()
test2 = test[test['wheezy-copper-turtle-magic']==s].reset_index(drop=True).copy()
kn.fit(train2[col], train2['target'])
sv.fit(train2[col], train2['target'])
test2['target'] = (kn.predict_proba(test2[col])[:,1] * 0.2) + (sv.predict_proba(test2[col])[:,1] * 0.8)
test_.append(test2)
test_ = pd.concat(test_).reset_index(drop=True)
test_[['id','target']].to_csv("submission.csv", index=False)
```
# Accessing the Trigger
In ATLAS all access to event trigger decision is via the Trigger Decision Tool (TDT). There is quite a bit of information attached to the trigger, and its layout is quite complex - for that reason one should use the TDT to access the data. It is not really possible for a human to navigate the data structures quickly!
```
import matplotlib.pyplot as plt
from config import ds_zee as ds
from func_adl_servicex_xaodr21 import tdt_chain_fired, tmt_match_object
```
## Looking for events that fired a chain
Lets look at $Z \rightarrow ee$ Monte Carlo for a single electron trigger in the event.
```
n_electrons = (ds.Select(lambda e:
{
"n_ele": e.Electrons().Where(lambda e: abs(e.eta()) < 2.5).Count(),
"fired": tdt_chain_fired("HLT_e60_lhmedium_nod0"),
})
.AsAwkwardArray()
.value()
)
plt.hist(n_electrons.n_ele, bins=4, range=(0, 4), label='All Events')
plt.hist(n_electrons.n_ele[n_electrons.fired], bins=4, range=(0, 4), label='Fired Events')
plt.xlabel('Number of Electrons')
plt.ylabel('Number of Events')
plt.title('Electron Trigger and Number of Electrons in the Event')
_ = plt.legend()
```
## Trigger Matching
Next, let's find the electrons that matched the trigger that fired above. We'll do this by looking only at events where the trigger has fired, and then asking each electron if it matches within a given $\Delta R$.
```
matched_electrons = (
ds.Where(lambda e: tdt_chain_fired("HLT_e60_lhmedium_nod0"))
.SelectMany(lambda e: e.Electrons())
.Select(
lambda e: {
"pt": e.pt() / 1000.0,  # MeV to GeV
"eta": e.eta(),
"is_trig": tmt_match_object("HLT_e60_lhmedium_nod0", e, 0.7),
}
)
.AsAwkwardArray()
.value()
)
```
To understand the `tmt_match_object` arguments, you'll need to look up its definition on the ATLAS twiki pages linked below.
```
plt.hist(matched_electrons.pt, bins=100, range=(0, 100), label='All Electrons')
trigger_electrons = matched_electrons[matched_electrons.is_trig]
plt.hist(trigger_electrons.pt, bins=100, range=(0, 100), label='Trigger Electrons')
plt.xlabel('Electron $p_T$ [GeV]')
plt.ylabel('Number of Electrons')
_ = plt.legend()
```
## Further Information
* Tutorial on [trigger for analysis](https://indico.cern.ch/event/860971/contributions/3626403/attachments/1973400/3283452/200122_TriggerTutorial.pdf).
* Trigger Group's [Trigger Analysis Tool](https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerAnalysisTools) twiki page (with a [page devoted to the TDT](https://twiki.cern.ch/twiki/bin/view/Atlas/TrigDecisionTool)).
* [Lowest un-prescaled triggers](https://twiki.cern.ch/twiki/bin/view/Atlas/LowestUnprescaled) per data-taking period twiki.
|
github_jupyter
|
# Additive Secret Sharing
Author:
- Carlos Salgado - [email](mailto:csalgado@uwo.ca) - [linkedin](https://www.linkedin.com/in/eng-socd/) - [github](https://github.com/socd06)
## Additive Secret Sharing
Additive Secret Sharing is a mechanism to share data among parties and to perform computation on it.

## Sharing
A secret `s` is uniformly split into `n` shares, one per shareholder (also known as a worker, node, user or party), using some randomness `r`, supplied in practice by a **very large random prime** number `Q`.
$ F_s (s, r, n) = ( s_1, s_2, ..., s_n ) $
## Reconstruction
`s` can be reconstructed (decrypted) by adding up **all the shares** and taking the [*modulo*](https://en.wikipedia.org/wiki/Modulo_operation) of the random prime number `Q`, used to encrypt the shares originally.
$ s = ( \: \sum \limits _{i=1} ^n s_i \: ) \; mod \; Q $
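As a quick illustration of the reconstruction formula (with toy numbers, different from the quiz values below):

```python
# Toy reconstruction example: sum the shares and reduce modulo Q
shares = [20, 50]
Q = 59  # a small prime for readability; use a very large prime in practice
s = sum(shares) % Q
print(s)  # (20 + 50) mod 59 = 11
```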
## 32-bit Integer Secrets
A secret is the data or message that a party wants to secure. In additive secret sharing, secrets (and therefore, shares) must be members of a fixed [finite field](https://en.wikipedia.org/wiki/Finite_field). Particularly, the literature mentions shares should be members of the $ {\mathbb{Z}_{2^{32}}} $ [ring](https://en.wikipedia.org/wiki/Ring_(mathematics)), which is the [ring of integers](https://en.wikipedia.org/wiki/Ring_of_integers) that fit within [32-bits](https://en.wikipedia.org/wiki/32-bit_computing).

Rings are [sets](https://en.wikipedia.org/wiki/Set_(mathematics)) with two operations, addition and multiplication, which allow the rationale of secret sharing and reconstruction to work.
Plainly, secrets and secret shares **must** be integers between -2,147,483,648 and +2,147,483,647.
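A minimal sketch of what working in a fixed ring means in plain Python (the modulus `2**32` mirrors the ring above; real secret-sharing libraries handle this reduction internally):

```python
RING_SIZE = 2 ** 32

def to_ring(x):
    """Map an integer into the ring Z_{2^32} by reducing modulo the ring size."""
    return x % RING_SIZE

# Addition wraps around instead of overflowing
a = RING_SIZE - 1
print(to_ring(a + 5))  # wraps around to 4
```

This wraparound is exactly why sums of shares stay inside the ring no matter how large the individual shares are.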
## Governance
Additive secret sharing provides shared governance. The threshold `t` to reconstruct `s` is equal to `n`, which means **no party can recover the data** alone because all the shares are required to decrypt the secret *(t = n)*. This scheme allows us to do computation on the shares while each shareholder is only aware of their **own** share.
### [Quiz] Find the secret `s`
In practice, we use a **very large prime number** Q to add a **great deal of uniform randomness** to our shares. Here we will use a very small Q, so you can try to solve the quiz without any programming yet.
Let $ s_1 = 10 \; and \; s_2 = 74 \; and \; Q = 59 $
What is the original secret `s`? Fill the ____ space below with your answer.
Try **not** to use a calculator or programming.
```
# Run this cell to import the quizzes
from quiz import q0, q1, q2
# run to check your answer
q0.check(___)
# Uncomment the line below to see a hint
# q0.hint
# Uncomment the line below to see the solution
# q0.solution
```
### [Quiz] Find the final share s<sub>2</sub>
Using a small `Q` to facilitate calculation (it needs to be a **very high prime number** in production), let
$ s = 7, n = 2 $ with $ Q = 59 $ and $ s_1 = 9 $
plugged in on the secret reconstruction equation, find the final share s<sub>2</sub>.
Fill the ____ space below with your answer. Feel free to implement the equation in a new cell or use whatever tool you'd like (e.g. a calculator), it's your call.
```
# Fill the ____ space below with your answer
final_share =
# run to check your answer
q1.check(final_share)
# Uncomment the line below to see a hint
# q1.hint
# Uncomment the line below to see the solution
# q1.solution
```
## In Practice
Just as an educational example, we can generate a list of prime numbers using [sympy](https://www.sympy.org/en/index.html)
```
# Verify we have all the tools we need to run the notebook
!pip install -r requirements.txt
import sympy
# An arbitrary constant, feel free to play with it
CONST = 999
BIT_DEPTH = 31
# Range start
start = 2**BIT_DEPTH-CONST
# Range end: the top of the signed 32-bit integer range (2**31)
end = 2**BIT_DEPTH
prime_lst = list(sympy.primerange(start,end+1))
print("Prime numbers in range: " , prime_lst)
```
And **randomly** choose one every time using [NumPy](https://numpy.org/devdocs/contents.html)'s [randint](https://numpy.org/doc/stable/reference/random/generated/numpy.random.randint.html)
```
from numpy.random import randint
Q = prime_lst[randint(len(prime_lst))]
Q
```
As an additional note, the [Secrets module](https://docs.python.org/3/library/secrets.html), introduced in Python 3.6, provides randomness as secure as your operating system.
```
import secrets
Q = secrets.choice(prime_lst)
Q
```
## The Final Share and 2-party Additive Secret Sharing
Knowing that $ s_n = Q - (\; \sum \limits _{i=1} ^{n-1} s_i \; mod \; Q \; ) + s $
How do we implement 2-party ($ n=2 $) additive secret sharing using Python?
Keep reading and find out!
```
def dual_share(s, r):
'''
s = secret
r = randomness
'''
share_lst = list()
share_lst.append(randint(0,r))
final_share = r - (share_lst[0] % r) + s
share_lst.append(final_share)
return share_lst
# Let's generate a couple of shares
secret = 5
dual_shares = dual_share(secret, Q)
dual_shares
```
Now go back to the previous cell and **run it again**. Notice anything?
...
...
...
See it yet? The shares are never the same because they are **randomly generated**.
Now let's implement the reconstruction (or decryption) function.
```
def decrypt(shares, r):
'''
shares = iterable made of additive secret shares
r = randomness
'''
return sum(shares) % r
# And let's decrypt our secret for the first time
decrypt(dual_shares, Q)
```
## Exercise: Implement n-party additive secret sharing
Fill the function below with your code.
```
def n_share(s, r, n):
'''
s = secret
r = randomness
n = number of nodes, workers or participants
returns a tuple of n-shares
'''
# replace with your code
pass
five_shares = n_share(s=686,r=Q,n=5)
five_shares
# run this cell to check your solution
q2.check(decrypt(five_shares, Q))
# Uncomment the line below to see a hint
# q2.hint
# Uncomment the line below to see the solution
# q2.solution
```
## Addition
Given two shared values $a$ and $b$, a party $P_i$ can compute the added shares as:
$ c_i = ( a_i + b_i ) \; mod \; Q$
In Python, we can implement this type of addition like this:
```
def addition(a, b, r):
'''
a = iterable of the same length of b
b = iterable of the same length of a
r = randomness AKA randomly generated very high prime number
'''
c = list()
for i in range(len(a)):
c.append((a[i] + b[i]) % r)
return tuple(c)
```
Considering Alice and Bob are our parties, with secrets $s_a$ and $s_b$ to be shared (2-way) and wanting to compute addition.
Let $s_a = 5 $ and $s_b = 11 $
Alice's shares would be something like:
```
# Alice's secret
sa = 5
alice_shares = dual_share(sa, Q)
alice_shares
```
While Bob's shares would be
```
# Bob's secret
sb = 11
bob_shares = dual_share(sb, Q)
bob_shares
secret_sum = addition(alice_shares, bob_shares, Q)
secret_sum
```
Doesn't make a lot of sense, does it?
Secret shares must only reveal information about their secrets when they are all combined. Otherwise all data must be hidden, which defines the **privacy** property.
These are still secret shares so there is one more step to get the sum of the original secrets.
```
decrypt(secret_sum, Q)
```
Et Voilà!
## Public (scalar) Multiplication
Given a list of shared values $a$ and a **scalar** $b$, a party $P_i$ can compute the multiplied shares as:
$ c_i = a_i \times b \; mod \; Q$
In Python, we can implement this type of multiplication like this:
```
def public_mul(a, b, r):
'''
a = iterable of the same length of b
b = scalar to multiply a by
r = randomness AKA randomly generated very high prime number
'''
c = list()
for i in range(len(a)):
c.append((a[i] * b) % r)
return tuple(c)
```
Let's say another party wants to multiply Alice's shares by the **scalar** value of 3.
```
alice_times3 = public_mul(alice_shares, 3, Q)
```
Then we can decrypt (with Alice's permission) to double check we did multiply what we intended.
```
decrypt(alice_times3,Q)
```
And this is `True` because Alice's secret $s_a = 5$, remember?
```
decrypt(alice_times3,Q) == sa * 3
```
## PyTorch + PySyft implementation
Now that you know how additive secret sharing works under the hood, let's see how we can leverage PyTorch and PySyft to do it for us.
```
import torch
import syft as sy
hook = sy.TorchHook(torch)
```
Let's say Alice, Bob and Charlie are all enrolled on the **Foundations of Privacy** course and we, as instructors, want to know on average, how far in the course they are. We don't want to breach their privacy so each percentage of completion will be their own secret (a, b and c).
For educational purposes, we will define our parties (nodes, workers, etc) using `VirtualWorker` PySyft objects.
```
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")
charlie = sy.VirtualWorker(hook, id="charlie")
```
We also need a "secure worker", also known as the `Crypto Provider` to provide us with random prime numbers.
```
secure_worker = sy.VirtualWorker(hook, "secure_worker")
```
We define our secrets using `torch.tensor` PyTorch tensor objects and we `Additive Share` them with our fellow workers.
```
# Let a, b and c be our students' completion percentage
a = torch.tensor([35])
b = torch.tensor([77])
c = torch.tensor([10])
# And we additive share with our parties
a = a.share(alice, bob, charlie, crypto_provider=secure_worker)
b = b.share(alice, bob, charlie, crypto_provider=secure_worker)
c = c.share(alice, bob, charlie, crypto_provider=secure_worker)
# And we compute the mean of our tensor
mean = torch.mean(torch.stack(list([a,b,c])))
mean
```
Also, see that the object type is **[AdditiveSharingTensor]**.
For this example, we can decrypt our computation result using the get() method
```
decrypted_mean = mean.get()
decrypted_mean
```
And get the scalar using the item() method (Only works for 1-dimensional tensors).
```
scalar_mean = decrypted_mean.item()
scalar_mean
```
Now, the average completion should actually be 40 and $ \frac{2}{3} $ (or 40.6666666666...), but this is something we will learn about in the next lessons.
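A plain-Python sketch of why a fractional mean cannot survive integer-only arithmetic (illustrative only, not the PySyft internals):

```python
# The students' completion percentages from above
completions = [35, 77, 10]

# Float division keeps the fractional part...
print(sum(completions) / len(completions))   # 40.666...

# ...but integer division, as in a ring of integers, truncates it
print(sum(completions) // len(completions))  # 40
```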
Let’s now tackle private multiplication!
|
github_jupyter
|
### HMM Software
- [hmmlearn](https://hmmlearn.readthedocs.io/en/latest/tutorial.html)
- [pomegranate]()
- [r calculate AIC/BIC of model](https://rdrr.io/cran/HMMpa/man/AIC_HMM.html)
- [comparison between pomegranate and hmmlearn (with notebook)](https://kyso.io/share/pomegranate-vs-hmmlearn#files)
- [discussion of AIC/BIC from hmmlearn](https://waterprogramming.wordpress.com/2018/07/03/fitting-hidden-markov-models-part-ii-sample-python-script/)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=2
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm.autonotebook import tqdm
from joblib import Parallel, delayed
import umap
import pandas as pd
from avgn.utils.paths import DATA_DIR, most_recent_subdirectory, ensure_dir
from avgn.signalprocessing.create_spectrogram_dataset import flatten_spectrograms
from avgn.visualization.spectrogram import draw_spec_set
from avgn.visualization.quickplots import draw_projection_plots
from avgn.visualization.projections import (
scatter_projections,
draw_projection_transitions,
)
```
### Collect data
```
DATASET_ID = 'bengalese_finch_sober'
DATA_DIR
syllable_df = pd.concat([pd.read_pickle(i) for i in list((DATA_DIR / 'indv_dfs' / DATASET_ID).glob('*.pickle'))])
syllable_df[:3]
label = "hdbscan_labels"
```
#### Compare hdbscan to hand
```
from avgn.visualization.projections import scatter_spec, scatter_projections
from avgn.utils.general import save_fig
from avgn.utils.paths import FIGURE_DIR, ensure_dir
indvs = syllable_df.indv.unique()
label_dict = {lab:i for i, lab in enumerate(np.unique(syllable_df['labels'].values))}
syllable_df['labels_num'] = [label_dict[i] for i in syllable_df.labels.values]
for indv in tqdm(indvs):
print(indv)
indv_df = syllable_df[syllable_df.indv == indv]
z = np.vstack(indv_df.umap.values)
fig, axs = plt.subplots(ncols=2, figsize=(10,5))
ax = axs[0]
ax.scatter(z[:,0], z[:,1], c = indv_df['labels_num'].values, s = 0.5, cmap = plt.cm.tab20, alpha = 0.25)
ax.axis('off')
ax.set_title('Hand Labels')
ax = axs[1]
ax.scatter(z[:,0], z[:,1], c = indv_df['hdbscan_labels'].values, s = 0.5, cmap = plt.cm.tab20, alpha = 0.25)
ax.set_title('HDBSCAN Labels')
ax.axis('off')
plt.show()
```
### train HMM on data
```
indv = 'gy6or6'
indv_df = syllable_df[syllable_df.indv == indv]
indv_df = indv_df.sort_values(by=['syllables_sequence_id', 'syllables_sequence_pos'])
indv_df = indv_df.reset_index()
print(len(indv_df))
indv_df[:3]
element_prob = {i: np.sum(indv_df.labels_num.values== i) for i in np.unique(indv_df.labels_num.values)}
element_prob
for key, val in element_prob.items():
if val < 100:
indv_df = indv_df.drop(indv_df[indv_df.labels_num == key].index)
len(indv_df)
hand_seqs = [
list(indv_df[indv_df.syllables_sequence_id == seqid]["labels_num"].values)
for seqid in indv_df.syllables_sequence_id.unique()
]
print(hand_seqs[:3])
seq_lens = [len(i) for i in hand_seqs]
from hmmlearn import hmm
def AIC(log_likelihood, k):
""" AIC given log_likelihood and # parameters (k)
"""
aic = 2 * k - 2 * log_likelihood
return aic
def BIC(log_likelihood, n, k):
""" BIC given log_likelihood, number of observations (n) and # parameters (k)
"""
bic = np.log(n)*k - 2 * log_likelihood
return bic
training_mask = np.random.choice(np.arange(len(hand_seqs)), size = int(len(hand_seqs)/2), replace=False)
testing_mask = np.array([i for i in np.arange(len(hand_seqs)) if i not in training_mask])
training_mask[:4], testing_mask[:4]
seqs_train = np.array(hand_seqs)[training_mask]
seqs_test = np.array(hand_seqs)[testing_mask]
len(hand_seqs), len(seqs_train), len(seqs_test)
print(np.unique(np.concatenate(seqs_train).reshape(-1, 1)))
print(np.unique(np.concatenate(seqs_test).reshape(-1, 1)))
from joblib import Parallel, delayed
def fit_hmm(seqs_train, seqs_test, n_components):
# model
model = hmm.MultinomialHMM(n_components=n_components).fit(
np.concatenate(seqs_train).reshape(-1, 1), [len(i) for i in seqs_train]
)
# params
num_params = (
np.product(model.transmat_.shape)
+ np.product(model.emissionprob_.shape)
+ np.product(model.startprob_.shape)
)
# number of observations in the held-out data
n_data = np.sum([len(i) for i in seqs_test])
# probability of data given model
log_probability = model.score(
np.concatenate(seqs_test).reshape(-1, 1), [len(i) for i in seqs_test]
)
# AIC and BIC
aic = AIC(log_probability, num_params)
bic = BIC(log_probability, n_data, num_params)
return (model,
n_components,
num_params,
log_probability,
aic,
bic)
n_repeats = 5
results = Parallel(n_jobs=-1, verbose=15)(
delayed(fit_hmm)(seqs_train, seqs_test, n_components)
for n_components in tqdm(np.repeat(np.arange(10, 50, 1), n_repeats))
)
results_df = pd.DataFrame(results,
columns=["model", "n_components", "n_params", "log_prob", "AIC", "BIC"]
)
results_df[:3]
fig, axs = plt.subplots(ncols =3, figsize=(20,5))
ax = axs[0]
ax.scatter(results_df.n_components, results_df.log_prob)
#ax.plot(results_df.n_components, results_df.log_prob)
ax = axs[1]
ax.scatter(results_df.n_components, results_df.AIC)
#ax.plot(results_df.n_components, results_df.AIC)
ax = axs[2]
ax.scatter(results_df.n_components, results_df.BIC)
#ax.plot(results_df.n_components, results_df.BIC)
print(seqs_train[0])
train_second_order = [['-'.join([str(state), str(seq[si+1])]) for si, state in enumerate(seq[:-1])] for seq in seqs_train]
test_second_order = [['-'.join([str(state), str(seq[si+1])]) for si, state in enumerate(seq[:-1])] for seq in seqs_test]
print(train_second_order[0])
train_third_order = [['-'.join([str(state), str(seq[si+1]), str(seq[si+2])]) for si, state in enumerate(seq[:-2])] for seq in seqs_train]
print(train_third_order[0])
second_order_dict = {s:si for si, s in enumerate(np.unique(np.concatenate(train_second_order)))}
train_second_order_num = [[second_order_dict[i] for i in seq] for seq in train_second_order]
#test_second_order_num = [[second_order_dict[i] for i in seq] for seq in test_second_order]
n_repeats = 1
results = Parallel(n_jobs=-1, verbose=15)(
delayed(fit_hmm)(train_second_order_num, train_second_order_num, n_components)
for n_components in tqdm(np.repeat(np.arange(10, 40, 5), n_repeats))
)
results_df_second = pd.DataFrame(results,
columns=["model", "n_components", "n_params", "log_prob", "AIC", "BIC"]
)
results_df_second[:3]
fig, axs = plt.subplots(ncols =3, figsize=(20,5))
ax = axs[0]
ax.scatter(results_df.n_components, results_df.log_prob)
ax.scatter(results_df_second.n_components, results_df_second.log_prob)
#ax.plot(results_df.n_components, results_df.log_prob)
ax = axs[1]
ax.scatter(results_df_second.n_components, results_df_second.AIC)
#ax.plot(results_df.n_components, results_df.AIC)
ax = axs[2]
ax.scatter(results_df_second.n_components, results_df_second.BIC)
#ax.plot(results_df.n_components, results_df.BIC)
```
#### TODO
- create a HMM where latent states are just normal states (e.g. Markov model)
- compute likelihood of markov model
- create a HMM where latent states are HDBSCAN states
- compute likelihood of model
- create a second order markov model
- compute likelihood of markov model
- create a HMM where latent states are learned using Baum Welch - chose highest AIC
- compute likelihood of model
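As a starting point for the first two TODO items, a plain first-order Markov model (observed states, no hidden states) can be fit and scored like this. This is a hedged sketch: `seqs_train`/`seqs_test` stand for the integer-coded sequences above, and the add-alpha smoothing constant is an assumption to avoid `log(0)` on unseen transitions.

```python
import numpy as np

def fit_markov(seqs, n_states, alpha=1e-6):
    """Estimate start and transition probabilities with add-alpha smoothing."""
    start = np.full(n_states, alpha)
    trans = np.full((n_states, n_states), alpha)
    for seq in seqs:
        start[seq[0]] += 1
        for a, b in zip(seq[:-1], seq[1:]):
            trans[a, b] += 1
    # Normalize counts into probability distributions
    start /= start.sum()
    trans /= trans.sum(axis=1, keepdims=True)
    return start, trans

def markov_log_likelihood(seqs, start, trans):
    """Total log-likelihood of the sequences under the fitted Markov model."""
    ll = 0.0
    for seq in seqs:
        ll += np.log(start[seq[0]])
        for a, b in zip(seq[:-1], seq[1:]):
            ll += np.log(trans[a, b])
    return ll
```

The number of free parameters is roughly `n_states - 1` start probabilities plus `n_states * (n_states - 1)` transition probabilities, which plugs straight into the `AIC`/`BIC` helpers above.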
```
from hmmlearn.utils import iter_from_X_lengths
results_df.iloc[0]
model = results_df.iloc[0].model
lengths = [len(i) for i in seqs_test]
X = np.concatenate(seqs_test).reshape(-1, 1)
len(X), len(lengths)
curr_logprob = 0
for i,j in iter_from_X_lengths(X, lengths):
framelogprob = model._compute_log_likelihood(X[i:j])
logprob, fwdlattice = model._do_forward_pass(framelogprob)
curr_logprob += logprob
bwdlattice = model._do_backward_pass(framelogprob)
posteriors = model._compute_posteriors(fwdlattice, bwdlattice)
np.shape(framelogprob)
fwdlattice[-1]
fwdlattice
bwdlattice
np.shape(bwdlattice)
curr_logprob
results_df.iloc[0].model.__dict__
```
|
github_jupyter
|
# TensorFlow Tutorial #05
# Ensemble Learning
by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)
/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)
## Introduction
This tutorial shows how to use a so-called ensemble of convolutional neural networks. Instead of using a single neural network, we use several neural networks and average their outputs.
This is used on the MNIST data-set for recognizing hand-written digits. The ensemble improves the classification accuracy slightly on the test-set, but the difference is so small that it is possibly random. Furthermore, the ensemble mis-classifies some images that are correctly classified by some of the individual networks.
This tutorial builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text here is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
## Flowchart
The following chart shows roughly how the data flows in a single Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
This tutorial implements an ensemble of 5 such neural networks, where the network structure is the same but the weights and other variables are different for each network.
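The averaging step itself is simple: each network outputs class probabilities, the ensemble averages them across networks, and the predicted class is the argmax of the average. A small NumPy sketch with made-up probabilities (purely illustrative):

```python
import numpy as np

# Predicted probabilities for 2 images from 3 hypothetical networks,
# shape (n_networks, n_images, n_classes)
pred = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]],
    [[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]],
    [[0.5, 0.4, 0.1], [0.1, 0.4, 0.5]],
])

ensemble_pred = np.mean(pred, axis=0)        # average over the networks
ensemble_cls = np.argmax(ensemble_pred, axis=1)
print(ensemble_cls)  # class 0 for the first image, class 2 for the second
```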
```
from IPython.display import Image
Image('images/02_network_flowchart.png')
```
## Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
```
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
```
tf.__version__
```
PrettyTensor version:
```
pt.__version__
```
## Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
```
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
```
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below.
```
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
```
### Class numbers
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
```
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
```
### Helper-function for creating random training-sets
We will train 5 neural networks on different training-sets that are selected at random. First we combine the original training- and validation-sets into one big set. This is done for both the images and the labels.
```
combined_images = np.concatenate([data.train.images, data.validation.images], axis=0)
combined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)
```
Check that the shape of the combined arrays is correct.
```
print(combined_images.shape)
print(combined_labels.shape)
```
Size of the combined data-set.
```
combined_size = len(combined_images)
combined_size
```
Define the size of the training-set used for each neural network. You can try and change this.
```
train_size = int(0.8 * combined_size)
train_size
```
We do not use a validation-set during training, but this would be the size.
```
validation_size = combined_size - train_size
validation_size
```
Helper-function for splitting the combined data-set into a random training- and validation-set.
```
def random_training_set():
# Create a randomized index into the full / combined training-set.
idx = np.random.permutation(combined_size)
# Split the random index into training- and validation-sets.
idx_train = idx[0:train_size]
idx_validation = idx[train_size:]
# Select the images and labels for the new training-set.
x_train = combined_images[idx_train, :]
y_train = combined_labels[idx_train, :]
# Select the images and labels for the new validation-set.
x_validation = combined_images[idx_validation, :]
y_validation = combined_labels[idx_validation, :]
# Return the new training- and validation-sets.
return x_train, y_train, x_validation, y_validation
```
## Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
```
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
```
### Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
```
def plot_images(images, # Images to plot, 2-d array.
cls_true, # True class-no for images.
ensemble_cls_pred=None, # Ensemble predicted class-no.
best_cls_pred=None): # Best-net predicted class-no.
assert len(images) == len(cls_true)
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing if we need to print ensemble and best-net.
if ensemble_cls_pred is None:
hspace = 0.3
else:
hspace = 1.0
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# For each of the sub-plots.
for i, ax in enumerate(axes.flat):
# There may not be enough images for all sub-plots.
if i < len(images):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if ensemble_cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
msg = "True: {0}\nEnsemble: {1}\nBest Net: {2}"
xlabel = msg.format(cls_true[i],
ensemble_cls_pred[i],
best_cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
```
### Plot a few images to see if data is correct
```
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
```
## TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
* Placeholder variables used for inputting data to the graph.
* Variables that are going to be optimized so as to make the convolutional network perform better.
* The mathematical formulas for the neural network.
* A loss measure that can be used to guide the optimization of the variables.
* An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
### Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
```
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
```
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
```
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
```
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
```
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
```
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
```
y_true_cls = tf.argmax(y_true, dimension=1)
```
### Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.
The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
```
x_pretty = pt.wrap(x_image)
```
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
```
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
```
### Optimization Method
Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
```
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
```
### Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
```
y_pred_cls = tf.argmax(y_pred, dimension=1)
```
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
```
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
```
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
```
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
### Saver
In order to save the variables of the neural network, we now create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.
Note that if you have more than 100 neural networks in the ensemble then you must increase `max_to_keep` accordingly.
```
saver = tf.train.Saver(max_to_keep=100)
```
This is the directory used for saving and retrieving the data.
```
save_dir = 'checkpoints/'
```
Create the directory if it does not exist.
```
if not os.path.exists(save_dir):
os.makedirs(save_dir)
```
This function returns the save-path for the data-file with the given network number.
```
def get_save_path(net_number):
return save_dir + 'network' + str(net_number)
```
## TensorFlow Run
### Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
```
session = tf.Session()
```
### Initialize variables
The variables for `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times below.
```
def init_variables():
    # Note: tf.initialize_all_variables() is deprecated; this matches
    # the tf.global_variables_initializer() call used later in the Notebook.
    session.run(tf.global_variables_initializer())
```
### Helper-function to create a random training batch
There are thousands of images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
```
train_batch_size = 64
```
Function for selecting a random training-batch of the given size.
```
def random_batch(x_train, y_train):
# Total number of images in the training-set.
num_images = len(x_train)
# Create a random index into the training-set.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random images and labels.
x_batch = x_train[idx, :] # Images.
y_batch = y_train[idx, :] # Labels.
# Return the batch.
return x_batch, y_batch
```
### Helper-function to perform optimization iterations
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
```
def optimize(num_iterations, x_train, y_train):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch(x_train, y_train)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if i % 100 == 0 or i == num_iterations - 1:
# Calculate the accuracy on the training-batch.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Status-message for printing.
msg = "Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
```
### Create ensemble of neural networks
Number of neural networks in the ensemble.
```
num_networks = 5
```
Number of optimization iterations for each neural network.
```
num_iterations = 10000
```
Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. The variables are then saved to disk so they can be reloaded later.
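The `random_training_set()` helper used below is defined earlier in the Notebook. As a rough sketch of what such a helper might do (this body is an assumption for illustration, not the tutorial's actual implementation — note the real helper takes no arguments because the data arrays are already in scope):

```python
import numpy as np

def random_training_set(images, labels, train_size=1000):
    # Shuffle the data, then split off a random training-set and
    # return the remainder as a validation-set.
    idx = np.random.permutation(len(images))
    train_idx, val_idx = idx[:train_size], idx[train_size:]
    return images[train_idx], labels[train_idx], images[val_idx], labels[val_idx]
```

Because each network in the ensemble gets a different random split (and different random initial weights), the networks make somewhat different errors, which is what makes averaging them useful.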
You may want to skip this computation if you just want to re-run the Notebook with different analysis of the results.
```
if True:
# For each of the neural networks.
for i in range(num_networks):
print("Neural network: {0}".format(i))
# Create a random training-set. Ignore the validation-set.
x_train, y_train, _, _ = random_training_set()
# Initialize the variables of the TensorFlow graph.
session.run(tf.global_variables_initializer())
# Optimize the variables using this training-set.
optimize(num_iterations=num_iterations,
x_train=x_train,
y_train=y_train)
# Save the optimized variables to disk.
saver.save(sess=session, save_path=get_save_path(i))
# Print newline.
print()
```
### Helper-functions for calculating and predicting classifications
This function calculates the predicted labels of images, that is, for each image it calculates a vector of length 10 indicating how likely the image is to be each of the 10 possible classes.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
```
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_labels(images):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted labels which
# will be calculated in batches and filled into this array.
pred_labels = np.zeros(shape=(num_images, num_classes),
dtype=float)
# Now calculate the predicted labels for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images between index i and j.
feed_dict = {x: images[i:j, :]}
# Calculate the predicted labels using TensorFlow.
pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
return pred_labels
```
Calculate a boolean array whether the predicted classes for the images are correct.
```
def correct_prediction(images, labels, cls_true):
# Calculate the predicted labels.
pred_labels = predict_labels(images=images)
# Calculate the predicted class-number for each image.
cls_pred = np.argmax(pred_labels, axis=1)
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct
```
Calculate a boolean array whether the images in the test-set are classified correctly.
```
def test_correct():
return correct_prediction(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
```
Calculate a boolean array whether the images in the validation-set are classified correctly.
```
def validation_correct():
return correct_prediction(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
```
### Helper-functions for calculating the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. `classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
```
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
return correct.mean()
```
Calculate the classification accuracy on the test-set.
```
def test_accuracy():
# Get the array of booleans whether the classifications are correct
# for the test-set.
correct = test_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
```
Calculate the classification accuracy on the original validation-set.
```
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
correct = validation_correct()
# Calculate the classification accuracy and return it.
return classification_accuracy(correct)
```
## Results and analysis
Function for calculating the predicted labels for all the neural networks in the ensemble. The labels are combined further below.
```
def ensemble_predictions():
# Empty list of predicted labels for each of the neural networks.
pred_labels = []
# Classification accuracy on the test-set for each network.
test_accuracies = []
# Classification accuracy on the validation-set for each network.
val_accuracies = []
# For each neural network in the ensemble.
for i in range(num_networks):
# Reload the variables into the TensorFlow graph.
saver.restore(sess=session, save_path=get_save_path(i))
# Calculate the classification accuracy on the test-set.
test_acc = test_accuracy()
# Append the classification accuracy to the list.
test_accuracies.append(test_acc)
# Calculate the classification accuracy on the validation-set.
val_acc = validation_accuracy()
# Append the classification accuracy to the list.
val_accuracies.append(val_acc)
# Print status message.
msg = "Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}"
print(msg.format(i, val_acc, test_acc))
# Calculate the predicted labels for the images in the test-set.
# This is already calculated in test_accuracy() above but
# it is re-calculated here to keep the code a bit simpler.
pred = predict_labels(images=data.test.images)
# Append the predicted labels to the list.
pred_labels.append(pred)
return np.array(pred_labels), \
np.array(test_accuracies), \
np.array(val_accuracies)
pred_labels, test_accuracies, val_accuracies = ensemble_predictions()
```
Summarize the classification accuracies on the test-set for the neural networks in the ensemble.
```
print("Mean test-set accuracy: {0:.4f}".format(np.mean(test_accuracies)))
print("Min test-set accuracy: {0:.4f}".format(np.min(test_accuracies)))
print("Max test-set accuracy: {0:.4f}".format(np.max(test_accuracies)))
```
The predicted labels of the ensemble form a 3-dim array: the first dim is the network-number, the second dim is the image-number, and the third dim is the classification vector.
```
pred_labels.shape
```
### Ensemble predictions
There are different ways to calculate the predicted labels for the ensemble. One way is to calculate the predicted class-number for each neural network, and then select the class-number with most votes. But this requires a large number of neural networks relative to the number of classes.
The method used here is instead to take the average of the predicted labels for all the networks in the ensemble. This is simple to calculate and does not require a large number of networks in the ensemble.
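For comparison, the voting alternative mentioned above could be sketched as follows. This is a hypothetical illustration using a tiny fabricated `pred_labels` array (3 networks, 2 images, 2 classes), not part of the tutorial's code:

```python
import numpy as np

# Fabricated predictions: shape (num_networks, num_images, num_classes).
pred_labels = np.array([
    [[0.1, 0.9], [0.6, 0.4]],   # network 0
    [[0.2, 0.8], [0.3, 0.7]],   # network 1
    [[0.7, 0.3], [0.4, 0.6]],   # network 2
])
num_classes = pred_labels.shape[2]

# Each network's predicted class for each image.
votes = np.argmax(pred_labels, axis=2)   # shape (num_networks, num_images)

# Count the votes per class for each image and pick the winner.
vote_counts = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=num_classes), 0, votes)
majority_cls = np.argmax(vote_counts, axis=0)
print(majority_cls)   # class with the most votes for each image
```

With only a handful of networks and many classes, ties and thin majorities are common, which is one reason the tutorial averages the predicted labels instead.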
```
ensemble_pred_labels = np.mean(pred_labels, axis=0)
ensemble_pred_labels.shape
```
The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual.
```
ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)
ensemble_cls_pred.shape
```
Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks.
```
ensemble_correct = (ensemble_cls_pred == data.test.cls)
```
Negate the boolean array so we can use it to lookup incorrectly classified images.
```
ensemble_incorrect = np.logical_not(ensemble_correct)
```
### Best neural network
Now we find the single neural network that performed best on the test-set.
First list the classification accuracies on the test-set for all the neural networks in the ensemble.
```
test_accuracies
```
The index of the neural network with the highest classification accuracy.
```
best_net = np.argmax(test_accuracies)
best_net
```
The best neural network's classification accuracy on the test-set.
```
test_accuracies[best_net]
```
Predicted labels of the best neural network.
```
best_net_pred_labels = pred_labels[best_net, :, :]
```
The predicted class-number.
```
best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)
```
Boolean array whether the best neural network classified each image in the test-set correctly.
```
best_net_correct = (best_net_cls_pred == data.test.cls)
```
Boolean array whether each image is incorrectly classified.
```
best_net_incorrect = np.logical_not(best_net_correct)
```
### Comparison of ensemble vs. the best single network
The number of images in the test-set that were correctly classified by the ensemble.
```
np.sum(ensemble_correct)
```
The number of images in the test-set that were correctly classified by the best neural network.
```
np.sum(best_net_correct)
```
Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network.
```
ensemble_better = np.logical_and(best_net_incorrect,
ensemble_correct)
```
Number of images in the test-set where the ensemble was better than the best single network:
```
ensemble_better.sum()
```
Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble.
```
best_net_better = np.logical_and(best_net_correct,
ensemble_incorrect)
```
Number of images in the test-set where the best single network was better than the ensemble.
```
best_net_better.sum()
```
### Helper-functions for plotting and printing comparisons
Function for plotting images from the test-set and their true and predicted class-numbers.
```
def plot_images_comparison(idx):
plot_images(images=data.test.images[idx, :],
cls_true=data.test.cls[idx],
ensemble_cls_pred=ensemble_cls_pred[idx],
best_cls_pred=best_net_cls_pred[idx])
```
Function for printing the predicted labels.
```
def print_labels(labels, idx, num=1):
# Select the relevant labels based on idx.
labels = labels[idx, :]
# Select the first num labels.
labels = labels[0:num, :]
# Round numbers to 2 decimal points so they are easier to read.
labels_rounded = np.round(labels, 2)
# Print the rounded labels.
print(labels_rounded)
```
Function for printing the predicted labels for the ensemble of neural networks.
```
def print_labels_ensemble(idx, **kwargs):
print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs)
```
Function for printing the predicted labels for the best single network.
```
def print_labels_best_net(idx, **kwargs):
print_labels(labels=best_net_pred_labels, idx=idx, **kwargs)
```
Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image.
```
def print_labels_all_nets(idx):
for i in range(num_networks):
print_labels(labels=pred_labels[i, :, :], idx=idx, num=1)
```
## Examples: Ensemble is better than the best network
Plot examples of images that were correctly classified by the ensemble and incorrectly classified by the best single network.
```
plot_images_comparison(idx=ensemble_better)
```
The ensemble's predicted labels for the first of these images (top left image):
```
print_labels_ensemble(idx=ensemble_better, num=1)
```
The best network's predicted labels for the first of these images:
```
print_labels_best_net(idx=ensemble_better, num=1)
```
The predicted labels of all the networks in the ensemble, for the first of these images:
```
print_labels_all_nets(idx=ensemble_better)
```
## Examples: Best network is better than ensemble
Now plot examples of images that were incorrectly classified by the ensemble but correctly classified by the best single network.
```
plot_images_comparison(idx=best_net_better)
```
The ensemble's predicted labels for the first of these images (top left image):
```
print_labels_ensemble(idx=best_net_better, num=1)
```
The best single network's predicted labels for the first of these images:
```
print_labels_best_net(idx=best_net_better, num=1)
```
The predicted labels of all the networks in the ensemble, for the first of these images:
```
print_labels_all_nets(idx=best_net_better)
```
## Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
```
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
```
## Conclusion
This tutorial created an ensemble of 5 convolutional neural networks for classifying hand-written digits in the MNIST data-set. The ensemble worked by averaging the predicted class-labels of the 5 individual neural networks. This resulted in slightly improved classification accuracy on the test-set, with the ensemble having an accuracy of 99.1% compared to 98.9% for the best individual network.
However, the ensemble did not always perform better than the individual neural networks, which sometimes classified images correctly while the ensemble misclassified those images. This suggests that the effect of using an ensemble of neural networks is somewhat random and may not provide a reliable way of improving the performance over a single neural network.
The form of ensemble learning used here is called [bagging](https://en.wikipedia.org/wiki/Bootstrap_aggregating) (or Bootstrap Aggregating), which is mainly useful for avoiding overfitting and may not be necessary for this particular neural network and data-set. So it is still possible that ensemble learning may work in other settings.
### Technical Note
This implementation of ensemble learning used the TensorFlow `Saver()`-object to save and reload the variables of the neural network. But this functionality was really designed for another purpose and becomes very awkward to use for ensemble learning with different types of neural networks, or if you want to load multiple neural networks at the same time. There's an add-on package for TensorFlow called [sk-flow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/learn/python/learn) which makes this much easier, but it is still in the early stages of development as of August 2016.
## Exercises
These are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.
You may want to backup this Notebook before making any changes.
* Change different aspects of this program to see how it affects the performance:
* Use more neural networks in the ensemble.
* Change the size of the training-sets.
* Change the number of optimization iterations, try both more and less.
* Explain to a friend how the program works.
* Do you think Ensemble Learning is worth more research effort, or should you rather focus on improving a single neural network?
## License (MIT)
Copyright (c) 2016 by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#### _Speech Processing Labs 2020: Signals: Module 1_
```
## Run this first!
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import cmath
from math import floor
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
plt.style.use('ggplot')
## some helper functions:
from dspMisc import *
```
# 4 The Discrete Fourier Transform
### Learning Outcomes
* Understand the DFT equation: what input does it take and what outputs does it produce.
### Need to know
* Topic Videos: Series Expansion, Fourier Analysis, Frequency Domain
* Vector operations: dot product
* [Digital Signals: Sampling sinusoids](./sp-m1-3-sampling-sinusoids.ipynb)
<div class="alert alert-warning">
<em> <strong>Optional-ish:</strong> This notebook goes through the DFT equation in quite a lot of detail - more than is strictly necessary for this course. It's perfectly fine to just focus on the visualizations or to skip it for now.
That said, you might like to run the code anyway and look at the visualization of the different phasors (i.e. basis functions) that correspond to the different DFT outputs in the DFT work through section called [Generate Phasors](#genphasors), and the corresponding magnitude and phase response graphs for that example</em>
</div>
Do you remember learning a bunch of math (trig, algebra, calculus) in high school and thinking you'd never have to use it? While you might not use it directly, a lot of the technology you use every day depends on it. Pretty much all current speech technology depends on the Fourier Transform in some way! It has been (and continues to be) used to solve problems in many different fields, from image processing to quantum mechanics.
In the previous notebook, you looked at how you can add 'pure' sinusoids together to form complicated waveforms. The Fourier Transform gives us a way of doing the opposite: taking a sequence of measurements over time (like a speech wave) and decomposing it into sinusoids of different frequencies, magnitudes, and phases.
Since we are interested in digital signals (i.e. measurements over discrete time steps), we'll need to use the **Discrete Fourier Transform**. This makes it possible to actually do these calculations on a computer, but it also leads to some unexpected side-effects.
<div class="alert alert-warning">
<strong>Equation alert</strong>: If you're viewing this on github, please note that the equation rendering is not always perfect. You should view the notebooks through a jupyter notebook server for an accurate view.
</div>
## 4.1 The DFT equation
You can think of the DFT as a machine that takes $N$ amplitude samples over time and returns the frequency spectrum of the input signal (just like when you view a spectral slice in Praat).
You can treat this as a black box. But, now that we've gone through complex numbers, phasors, etc., we can look at how the machine works and, more importantly, what its limitations are.
* Input: $x[n]$, for $n=0,\dots,N-1$ (i.e. a time series of $N$ samples)
* Output: $N$ complex numbers $\mapsto$ magnitudes and phases of specific phasors:
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi n}{N} k} \\
&= \sum_{n=0}^{N-1} x[n]\big[\cos(\frac{2\pi n}{N} k) - j \sin(\frac{2\pi n}{N} k) \big]
\end{align}
$$
You'll usually see this written as $X[k]$ in signal processing textbooks, but we'll just say $DFT[k]$ to be a bit more transparent (and reduce the probability of typos!)
Notice the $-j$ in the phasor! This means that as $n$ increases, the 'clock hand' of the phasor rotates clockwise!
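To make the equation concrete, here is a minimal direct implementation of the sum above (a sketch for illustration, not part of the lab code), checked against NumPy's FFT, which uses the same sign convention:

```python
import numpy as np

def dft(x):
    """Naive O(N^2) DFT: DFT[k] = sum_n x[n] * e^{-j 2 pi n k / N}."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-1j * 2 * np.pi * n * k / N))
                     for k in range(N)])

x = np.array([-1.0] * 8 + [1.0] * 8)       # a 16-sample square wave
assert np.allclose(dft(x), np.fft.fft(x))  # matches NumPy's FFT
```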
## 4.2 The DFT Step-by-Step
But what does the DFT do for us? Very broadly, the DFT takes a series of $N$ values in time as input (e.g. waveform amplitudes over time for our sound waves) and outputs the **correlations** between the input and $N$ pure cosine waves with specific frequencies. That is, it tells you how you would weight (and shift) these cosine waves so that you can add them all up and recover the original input.
_Let's break it down!_
### The DFT for $k=0$
Let's first look at the DFT for $k=0$:
$$
\begin{align}
DFT[0] = \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi n}{N} 0}
\end{align}
$$
This is usually referred to as the **bias** term. It tells us whether the input function is shifted up or down the amplitude axis (in a time v amplitude plot of the waveform).
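A quick numerical check (an illustrative sketch, not part of the lab code): when $k=0$, every exponential term is $e^{0}=1$, so $DFT[0]$ is just the sum of the samples, i.e. $N$ times the mean of the input.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

# k = 0 makes every term e^{-j 2 pi n 0 / N} = 1
dft0 = np.sum(x * np.exp(-1j * 2 * np.pi * np.arange(N) * 0 / N))

assert np.isclose(dft0, x.sum())   # DFT[0] = 10.0 = N * mean(x)
```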
### Exercise:
* Q: Why doesn't DFT[0] tell us anything about the frequency of the input?
### The DFT for $k=1$:
Now we can start to look at the correlation between the input sequence and some phasors of different frequencies. For $k=1$, we have:
$$ DFT[1] = \sum_{n=0}^{N-1} x[n] e^{-j \frac{2 \pi n}{N}}$$
Each of the $e^{-j 2\pi n/N}$ terms in the sum is a step of $2\pi n/N$ radians around the circle. That is, our $k=1$ phasor $e^{-j \frac{2 \pi}{N}n}$ completes one full circle every $N$ time steps.
Let's plot the $k=1$ phasor for $N=16$:
```
## plot complex numbers in the DFT[1] equation
N=16
## make an array of N steps: 0,...,N-1
nsteps = np.array(range(N))
tsteps = 2*np.pi*nsteps/N
Tmin = np.min(tsteps)
Tmax = np.max(tsteps)
## Corresponding complex numbers z[n] = e^{-j 2 pi n/N}
zn_k1 = np.exp(-1j*2*np.pi*nsteps/N)
## Plot the phasor corresponding to DFT[k=1]
X_k1, Y_k1, fig, phasor, iproj, rproj = plot_phasor_static(zn_k1, tsteps, plot_real=True, plot_imag=True)
## Animate the phasor!
phasor1_vid = get_phasor_animation(X_k1, Y_k1, tsteps, phasor, iproj, rproj, fig)
phasor1_vid
```
What this animation shows is that for $k=1$, the $e^{-j\varphi}$ terms in the DFT represent $N$ evenly spaced samples around the unit circle, i.e. one period of a sinusoid. So, the term $x[n] e^{-j2\pi n/N}$ represents multiplying the $n$th step around the unit circle by the $n$th input value.
### Exercise (optional)
We saw from Euler's formula previously that projecting the phasor (left fig) onto the imaginary (vertical) axis, would give us a sine wave, i.e. $\sin(t)$. This isn't quite what we see above. Instead the sinusoid on the right, looks like the mirror image of $\sin(t)$, i.e. $-\sin(t)$. Why is this?
### Notes
So, we can interpret the DFT as taking the dot product between the input and a cosine (and sine) wave of specific frequency. That is, it tells us if the input sequence repeats itself with a specific frequency.
### The dot product with the input $x[n]$
It's useful at this point to break the DFT equation into two parts using Euler's formula. This essentially gives us two dot products (i.e. elementwise multiplication and summation). It allows us to calculate the real and imaginary components of the DFT output separately: a dot product with a cosine (the real part) and a dot product with a sine (the imaginary part):
$$
\begin{align}
DFT[1] &= \sum_{n=0}^{N-1} x[n] \big(\cos(2\pi n/N) -j\sin(2\pi n/N)\big)\\
&= \sum_{n=0}^{N-1} x[n] \cos(2\pi n/N) - j \sum_{n=0}^{N-1} x[n] \sin(2\pi n/N)
\end{align}
$$
### Example:
Let's look at what's happening in these sum terms using a simple sampled square wave with the same period as the $k=1$ phasor for input. The following visualizes the second sum (i.e. the dot product with $\sin(2\pi n/N)$).
```
## Make the background and plot the k=1 phasor
X_k1, Y_k1, fig, phasor, iproj, rproj = plot_phasor_static(zn_k1, tsteps, plot_real=True, plot_imag=True)
## Now, let's say our input x looks like this square wave:
xn = np.array([-1]*8 + [1]*8)
## Exercise: Try some other inputs later
#xn = np.array([-1, -1,-1,-1, 1, 1,1,1]*2)
#xn = np.array([-1]*4 + [1]*8 + [-1]*4)
#xn = np.array([1] + [-1, -1,-1, 1, 1,1,1]*2 + [-1])
#xn = zn_imag_k1
#xn = np.concatenate([Ys[np.arange(0, 16, 2)], Ys[np.arange(0, 16, 2)]])
#xn = np.array([1] + [-1, -1,-1, 1, 1,1,1]*2 + [-1])
print("x[n]:", xn)
## Plot the input x[n]:
# with respect to the imaginary component (top right)
iproj.scatter(tsteps, xn, color='C1')
# and the real component (bottom left)
rproj.scatter(xn, tsteps, color='C1')
## Do the elementwise multiplication of input (xn) and the k=1 phasor (zn_k1)
xn_times_zn = xn * zn_k1
## Get the real and imaginary parts
real_xn_zn = np.real(xn_times_zn)
imag_xn_zn = np.imag(xn_times_zn)
## Plot the imaginary parts = -x[n]*sin(theta) (top right) in yellow
iproj.plot(tsteps, imag_xn_zn, color='C4')
iproj.scatter(tsteps, imag_xn_zn, color='C4', s=100, alpha=0.5)
## Plot the real parts = x[n]*cos(theta) (bottom left) in yellow
rproj.plot(real_xn_zn, tsteps, color='C4')
rproj.scatter(real_xn_zn, tsteps, color='C4', s=100, alpha=0.5)
```
In the plot above, you can see that:
* The grey dots show our samples from the $k=1$ phasor (top left), and projected imaginary component, i.e. the sine in the DFT sum (top right), and the projected real component, i.e. the cosine (bottom left)
* The blue dots show our input sequence (top right and bottom left)
* The yellow dots show the elementwise multiplication of the phasor values and the input, projected onto the imaginary and real axes.
Let's just look at the imaginary (sine) part of the $x[n] \cdot z_1[n]$ multiplication (i.e., `xn_times_zn`):
When we multiply the values in both sequences together, we can see that (1) the values in the input and phasor don't exactly match, but (2) they always have the same sign. That is, the input and the sine component of the $k=1$ phasor are correlated to some extent. This means that the multiplication terms (in yellow) are all positive, so adding them all up will give us a positive term for the imaginary component of the DFT.
We can also see that, even though we basically chose this example to match the sine component, we do also get a positive correlation with the real (cos) component.
The following cell shows that you get the same result from doing the dot products with the real (cos) and imaginary (sin) parts of the phasor separately, or doing the dot product with the phasor and then projecting the real and imaginary parts.
```
## The dot product: sum all the elementwise multiplications
dft_1_real = np.sum(real_xn_zn)
dft_1_imag = np.sum(imag_xn_zn)
print("* projection and then two separate products: DFT[k] = %f + i%f" %(dft_1_real, dft_1_imag))
## check these are the same!
dft_1 = np.sum(xn*zn_k1)
print("* one dot product and then projection: DFT[k] = %f + i%f" %(dft_1.real, dft_1.imag))
```
### Exercise: Change the input
What happens when we change the input `xn` so that:
* it has a different period
* exactly matches the sine component of the k=1 phasor
* is out of phase with the sine component of the k=1 phasor
* has a different magnitude
* something else ...
There are some commented-out options that you can try in the cell above.
### Notes
### Example: Phase shifted input
Let's see what happens when our input matches the imaginary component of the $k=1$ phasor but has its phase shifted a bit. Remember, we can shift the starting point of our phasor by multiplying the whole phasor sequence by a complex number.
$$ \sin(\theta + \phi) = Imag(e^{j\phi}e^{j\theta}) = Imag(e^{j(\theta+\phi)})$$
```
## Let's generate our DFT[1] phasor for an input of N=16 points:
N=16
## make an array of N steps: 0,...,N
nsteps = np.array(range(N))
tsteps = 2*np.pi*nsteps/N
## The k=1 phasor:
zn_k1 = np.exp(-1j*tsteps)
```
Now let's create an input that's the same as the sine component of the DFT[1] phasor, but shifted by $\pi/3$ radians.
```
## Plot the DFT[1] phasor
X_k1, Y_k1, fig, phasor, iproj, rproj = plot_phasor_static(zn_k1, tsteps, plot_real=False, plot_imag=True)
## Now, for the input let's use a phase shifted version of the sine component of our phasor, zn_k1 (defined above)
# Remember that multiplying a complex number by e^{j theta} rotates it by theta degrees
# (anticlockwise if theta is positive)
# So to shift the whole sequence we just multiply everything in the sequence by the same complex number
## For this example, let's shift our -sin(2 pi n/N) by pi/3
zn_shift = np.exp(1j*np.pi/3) * zn_k1
## And take as input just the imaginary component of this shifted sine wave
xn = np.imag(zn_shift)
## Plot the phase shifted sine wave it in blue!
iproj.scatter(2*np.pi*nsteps/N, xn, color='C1')
iproj.plot(2*np.pi*nsteps/N, xn, color='C1')
```
In the plot above, you should see the input (in blue) is the same as the sine component of the $k=1$ phasor but *phase shifted*. Multiplying everything by $e^{j\pi/3}$ _delays_ our $-\sin(\theta)$ function by $\pi/3$ seconds (in this example we're assuming that it takes $\theta$ seconds to travel $\theta$ degrees around the phasor).
Now let's see how this affects the DFT output:
```
## Elementwise multiplication of the input and the k=1 phasor
xn_times_zn = xn * zn_k1
## Add it all up to get the dot product
dft_1 = np.sum(xn_times_zn)
print("DFT[1] = %f + i%f" %(dft_1.real, dft_1.imag))
print("in polar coordinates: magnitude=%f, phase=%f" %cmath.polar(dft_1))
## Plot the sequence of multiplications (yellow)
iproj.plot(2*np.pi*nsteps/N, np.imag(xn_times_zn), color='C4')
iproj.scatter(2*np.pi*nsteps/N, np.imag(xn_times_zn), color='C4')
fig
```
The result of the DFT is 6.93 + j4, which in polar coordinates has a magnitude=8 and phase angle=0.52
* Non-zero magnitude means that the input has a frequency component that matches the frequency of the $k=1$ phasor
* Non-zero phase means that the input is like the $k=1$ phasor in frequency, but starting from a different starting angle.
### How does the detected phase angle relate to the phase we used to shift the input?
The magnitude of the DFT output is relatively straightforward to interpret. The bigger the magnitude, the bigger the amplitude of this frequency component in the input.
When we convert the $DFT[1]$ output to polar coordinates we can, as before, interpret $DFT[1]$ as a scaling (magnitude) and rotation (phase) factor. The magnitude signals the strength of this frequency in the input, while the phase angle rotates the starting position of the DFT[1] phasor 'clock hand' by that angle. When we convert this to time vs amplitude, this essentially means starting from a future point in the sinusoid for positive phase (or a past point for negative phase) relative to the direction the phasor is rotating.
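As a quick numerical check of the polar interpretation, we can rebuild the $DFT[1]$ value from the example above out of its magnitude (8) and phase ($\pi/6$) using the standard library `cmath` module, and confirm that converting back recovers both quantities:

```python
import cmath
import math

# Build the complex number with magnitude 8 and phase pi/6
# (the DFT[1] output from the example above)
dft_1 = cmath.rect(8, math.pi / 6)
print(dft_1)  # roughly (6.928+4j), matching the rectangular form we computed

# Converting back to polar coordinates recovers the magnitude and phase
mag, phase = cmath.polar(dft_1)
print(mag, phase)  # 8.0 and ~0.5236 (= pi/6)
```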
### DFT phase is relative to cosine!
An important thing to note is that the phase output of the DFT is relative to the **cosine** function with the same frequency as the phasor. This is why the phase value we calculated isn't actually the same as the angle we used to shift the input sequence ($\pi/3=1.047$ radians), since that input sequence was actually based on a sine function.
### Exercise (optional)
Show that our input above $-\sin(t-\pi/3)$ is the same as a cosine wave with the phase shift derived above $\pi/6$.
You'll need to use these trig identities:
$$ \cos(\alpha - \pi/2) = \sin(\alpha) $$
$$\sin(t+\pi) = -\sin(t)$$
**Note** We definitely won't be asking you to do these sorts of calculations for assessment, but going through it will help consolidate your intuitions about sinusoids and the relationship between cosine and sine waves.
### Notes
Even if you don't do the exercise above, we can see that shifting a cosine function by the DFT output phase gives us the same sinusoid as our input.
That is:
$$ \cos(t + \pi/6) = -\sin(t-\pi/3)$$
just by plotting them:
```
## Plot the input from above and the equivalent cos and sin function based on DFT phase output.
_, _, fig, phasor, iproj, rproj = plot_phasor_static(zn_k1, tsteps, plot_real=False, plot_imag=True)
#fig, phasor, iproj, rproj = create_phasor_iproj_bkg(Tmin, Tmax, ymax=1.5)
#phasor.scatter(zn_real_k1, zn_imag_k1)
## Our input (C1=blue)
tsteps = 2*np.pi*nsteps/N
iproj.scatter(tsteps, xn, color='C1', s=300)
iproj.plot(tsteps, xn, color='C1')
## cos(t + pi/6) (C0=red)
iproj.scatter(tsteps, np.cos(tsteps+np.pi/6), color='C0', s=200)
iproj.plot(tsteps, np.cos(tsteps+np.pi/6), color='C0')
## -sin(t-pi/3) (C5=green)
iproj.scatter(tsteps, -np.sin(tsteps-np.pi/3), color='C5', s=80)
iproj.plot(tsteps, -np.sin(tsteps-np.pi/3), color='C5')
```
In the plot above, you should see:
* the DFT[1] phasor in grey, i.e. $-\sin(t)$
* Our phase shifted input in blue
* The phase shift determined by DFT[1] applied as a cosine wave: $\cos(t + \pi/6)$ in red
* Our phase shifted input generated directly using the `np.sin` function: $-\sin(t-\pi/3)$ in green
You should see that the last three functions are all the same! We can always write a sine wave as a cosine wave with a phase shift.
### The DFT for k = 2 and beyond
We can think of DFT[2] as representing the contribution of a phasor that spins around the unit circle
twice as fast as the $k=1$ DFT phasor:
* For $k=2$, Each $e^{i \frac{2\pi n}{N}k}$ is a step of $\frac{2\pi}{N}\times 2$ radians around the unit circle
* i.e. we only sample every second point compared to the $k=1$ case.
* This means it only takes half the time to make a full cycle. So, this phasor represents a sinusoid with twice the frequency of the one for $k=1$
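We can check this doubling relationship numerically: since squaring a unit-magnitude complex number doubles its angle, the $k=2$ phasor samples are just the $k=1$ samples squared, elementwise.

```python
import numpy as np

N = 16
tsteps = 2 * np.pi * np.arange(N) / N
zn_k1 = np.exp(-1j * tsteps)   # the k=1 phasor samples
zn_k2 = np.exp(-2j * tsteps)   # the k=2 phasor samples

# Each k=2 step covers twice the angle of a k=1 step,
# so the k=2 phasor is the k=1 phasor squared, pointwise
print(np.allclose(zn_k2, zn_k1 ** 2))  # True
```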
### Exercise:
Plot sinusoids for different values of `k` using the code below
* What happens when $k=N/2$?
* What happens when $k > N/2$?
* What if you increase $N$?
* How many distinct frequencies can the DFT actually tell us about?
* How does this relate to the aliasing problem we saw previously?
```
## Plot the phasor and sinusoid for different values of k
## Number of samples
N=16
## DFT output we're looking at: a sinusoid with k times the freq of the k=1 phasor
k=15
## indices of points in sample
nsteps = np.array(range(N))
## times of samples
tsteps = 2*np.pi*nsteps/N
## N sample values of the kth DFT phasor
zn_k = np.exp(k*-1j*tsteps)
X_k, Y_k, fig, phasor, iproj, rproj = plot_phasor_static(zn_k, tsteps, plot_real=False, plot_imag=True)
kphasor_anim= get_phasor_animation(X_k, Y_k, tsteps, phasor, iproj, rproj, fig)
kphasor_anim
```
### Notes
## 4.3 The full DFT
Now that we've seen what happens when we calculate the individual DFT components, let's do the whole thing!
We need
1. **Input:** a sequence of $N$ samples
1. generate $N$ phasors, with $N$ sampled points
1. generate $N$ dot products between the input and the phasors
1. **Output:** $N$ complex numbers representing magnitude and phase
The magnitudes and phases give us the decomposition of the input into pure cosine waves
### Set the input $x[n]$
Let's use the same input as above $x[n] = -\sin(2\pi n/N -\pi/3)$
```
## 1) Input: -sin(t-pi/3) as above
N=16
nsteps = np.array(range(N))
theta_step = 2*np.pi/N
theta_steps = theta_step * nsteps
## set the phase shift
phase_in = np.pi/3
## set the input as -sin with phase shift
x = -np.sin(theta_steps-phase_in)
## Plot the input x
fig, tdom = plt.subplots(figsize=(16, 4))
tdom.scatter(tsteps, x, color='magenta')
tdom.plot(tsteps, x, color='magenta')
```
You should see the input (16 points), `x`, plotted in magenta
### Generate phasors
<a name="genphasors"></a>
Let's use some functions to generate all the DFT phasors in one go:
```
## 2) Generate the phasors
# Given a sequence length N, return dictionary representing N DFT outputs
# the kth element of the return dictionary contains the complex numbers sampled around the unit circle after
# N steps, their real and imaginary parts, and the angles of those complex numbers (magnitude is always 1)
def get_dft_phasors(N, centered=False):
    ## N input steps, N phasors
    nsteps = np.array(range(N))
    ## DFT works for negatively indexed samples, i.e. x[-n], but we don't need this right now!
    if centered:
        nsteps = nsteps - N // 2
        print(nsteps)
    # We know the smallest step around the unit circle we can take is 2pi/N
    theta_step = 2*np.pi/N
    # And we're going to take N steps
    theta_steps = theta_step * nsteps
    ## Generate N phasors
    phasors = {}
    for k in range(N):
        ## samples around the unit circle
        zs = np.exp(k*-1j*theta_steps)
        real = np.real(zs)
        imag = np.imag(zs)
        ## Since we're here, let's return all these things for convenience
        # zs: the phasor samples, real: the real component, imag: the imaginary component
        # theta_steps: the angles for each phasor sample, theta_step: the smallest angle step size
        phasors[k] = {'zs':zs, 'real':real, 'imag':imag, 'thetas':theta_steps, 'theta_step':theta_step}
    return phasors
## get the list of phasors
phasors = get_dft_phasors(N)
## plot the different phasors and sine (imaginary) components
## You should be able to see the frequency relationship between each DFT component
for k in range(N):
    X_k, Y_k, fig, phasor, iproj, rproj = plot_phasor_static(phasors[k]['zs'], tsteps, plot_real=False, plot_imag=True, color='C'+str(k))
```
You should see in the plots above that phasors $k > N/2$ repeat the frequencies we see for $k < N/2$. For example,
the $k=1$ phasor is equivalent in frequency to the $k=15$ phasor!
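As a small check of that equivalence, the $k=15$ phasor samples are the complex conjugates of the $k=1$ samples: the same frequency, but rotating the opposite way around the unit circle.

```python
import numpy as np

N = 16
tsteps = 2 * np.pi * np.arange(N) / N
zn_k1 = np.exp(-1j * tsteps)     # the k=1 phasor samples
zn_k15 = np.exp(-15j * tsteps)   # the k=15 phasor samples

# At these sample points exp(-16j*theta_n) = exp(-2*pi*j*n) = 1,
# so exp(-15j*theta_n) = exp(+1j*theta_n), the conjugate of the k=1 phasor
print(np.allclose(zn_k15, np.conj(zn_k1)))  # True
```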
### Calculate dot products (i.e. input and phasor correlations)
```
## 3) Get the dot products for each for each phasor and the input
def get_dft_outputs(x, phasors):
    DFT = []
    ## Go through our list of N phasors
    for k, phasor in phasors.items():
        ## Do the dot product between the input and each phasor sequence
        DFT.append(np.sum(x * phasor['zs']))
    return DFT
## do the DFT
dft = get_dft_outputs(x, phasors)
```
### Get the output magnitudes and phases
Now we convert the results of the dot products we just calculated into polar form to get magnitudes and phases. We can then plot them!
```
## convert to polar coordinates to get magnitude and phase
dft_polar = [cmath.polar(z) for z in dft]
mags = [m for m, _ in dft_polar]
phases = [ph if mag > 0.00001 else 0 for mag,ph in dft_polar]
## we need to put a condition into the phase calculation because phase is calculated by
## a ratio of the imaginary component and the real component. Both of these values might be very, very small.
## This makes it susceptible to floating point errors.
## Then, the ratio of two very small things can actually end up to be quite (spuriously) big!
## Plot the magnitudes
#print(mags)
fig, freqdom = plt.subplots(figsize=(16, 4))
freqdom.set(xlim=(-1, N), ylim=(-1, 10))
freqdom.scatter(range(N), mags)
## plot the phase angle for each DFT component
#print(phases)
fig, fdom = plt.subplots(figsize=(16, 4))
fdom.set(xlim=(-1, N), ylim=(-2, 2))
fdom.scatter(range(N), phases)
```
### How do we interpret the magnitude and phase graphs?
The top plot shows the magnitude response of the DFT with respect to our input. The bottom plot shows the phase response.
The magnitude plot shows positive magnitudes for the 1st and the N-1th DFT components, and zero for all the rest. This means that the input is a sinusoid with the same frequency as the 1st DFT component.
The positive phase for the first component indicates a phase shift. We see the opposite phase shift for the 15th component because the phasor for that has the same frequency but rotates the opposite way.
So, this is what we expect since we generated the input as exactly the sine projection of the k=1 phasor, just shifted by $\pi/3$ radians!
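As a sanity check, the dot-product DFT we built by hand should agree with an off-the-shelf FFT implementation, since `np.fft.fft` uses the same $e^{-j 2\pi kn/N}$ convention:

```python
import numpy as np

N = 16
n = np.arange(N)
# The same input as above: -sin(2 pi n/N - pi/3)
x = -np.sin(2 * np.pi * n / N - np.pi / 3)

# Naive DFT: one dot product between the input and each phasor
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# It should match numpy's FFT exactly (up to floating point error)
print(np.allclose(X, np.fft.fft(x)))  # True
```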
### Plotting DFT outputs on the complex plane
So far, we've looked at the real and imaginary components of the DFT outputs separately. But we can also think about their properties on the complex plane.
The following plots the DFT outputs for the example above on the complex plane.
```
## Plot the actual DFT outputs as complex numbers on the complex plane
## For the example above, we can see that most of the outputs are at (0,0), i.e. no contribution,
## but we get positive magnitudes for the 1st and 15th components, with the same
## magnitude but 'mirrored' phase angle:
ymax=(N/2) + 2
fig, ax = plt.subplots(figsize=(5, 5))
ax.set(xlim=(-10, 10), ylim=(-10, 10))
ax.plot([-ymax,ymax], [0,0], color='grey')
ax.plot([0,0],[-ymax,ymax], color='grey')
circle1 = plt.Circle((0, 0), 1, color='grey',fill=False)
ax.add_artist(circle1)
for k in range(N):
    zs = phasors[k]['zs']
    xz = x * zs
    dftk = sum(xz)
    #print(dftk, dftk.real, cmath.polar(dftk))
    ax.plot(dftk.real, dftk.imag, 'o')
    ## print the components with non-zero magnitude
    if abs(dftk) > 0.0001:
        print("k=%d, %f + j%f" % (k, dftk.real, dftk.imag))
```
You can see from this plot that all but two of the DFT outputs have zero magnitude. The two that have non-zero magnitude are the 1st and 15th components. You can see that they have the same magnitude, but mirror each other in phase (they are complex conjugates). This symmetry is something you see a lot of in digital signal processing. We won't go into it in this class, but it is one thing we can take advantage of to save some computations!
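A quick sketch of that symmetry and the saving it allows: for any real-valued input, $DFT[N-k]$ is the complex conjugate of $DFT[k]$, which is why `np.fft.rfft` only needs to return about half the outputs.

```python
import numpy as np

N = 16
# A real-valued input (the phase-shifted sine from above)
x = -np.sin(2 * np.pi * np.arange(N) / N - np.pi / 3)
X = np.fft.fft(x)

# Conjugate symmetry for real inputs: DFT[N-k] == conj(DFT[k])
print(all(np.isclose(X[N - k], np.conj(X[k])) for k in range(1, N)))  # True

# rfft exploits this and only computes the non-redundant half
print(len(np.fft.rfft(x)))  # 9, i.e. N//2 + 1
```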
### Exercise (optional)
You can use the code below to visualize the dot products of the input with the cos (real) and sin (imaginary) parts of each DFT component by varying $k$:
* What's the relationship between the DFT phasors for $k=1$ and $k=N-1$ in terms of the dot product between the real (left plot) and imaginary (right plot) components?
* What happens for components that don't have the same frequency as the input? e.g. $k=2$
```
## 1) Input: -sin(t-pi/3) as above
nsteps = np.array(range(N))
theta_step = 2*np.pi/N
theta_steps = theta_step * nsteps
Tmin = np.min(theta_steps)
Tmax = np.max(theta_steps)
phase_in = np.pi/3
x = -np.sin(theta_steps-phase_in)
fig, tdom = plt.subplots(figsize=(16, 4))
tdom.scatter(tsteps, x, color='magenta')
tdom.plot(tsteps, x, color='magenta')
## let's break it down into cos and sin components
k=15
## zcos is our sampling of the real part of the kth phasor
zcos = phasors[k]['real']
zsin = phasors[k]['imag']
## the elementwise multiplication between the input x and the real part of the phasor, zcos
xz_real = x * zcos
xz_imag = x * zsin
## This initializes our plots, makes them the appropriate size etc, but leave the plot empty
_, _, fig, phasor, iproj, rproj = plot_phasor_static(phasors[k]['zs'], tsteps, plot_phasor=False, plot_real=True, plot_imag=True, color='grey')
## plot the zero line: if the sequence we get after the elementwise multiplication is
## symmetric around zero, we know that the dot product will be zero since we're adding all the
## values together
#inusoid.plot([Tmin-1,Tmax+1], [0,0], color='grey')
## Plot the input in magenta
rproj.plot(tsteps, x, color='magenta')
rproj.scatter(tsteps, x, color='magenta')
## Plot the elementwise multiplication in orange
rproj.plot(tsteps, xz_real, color='orange')
rproj.scatter(tsteps, xz_real, color='orange')
## Plot the input in magenta
iproj.plot(tsteps, x, color='magenta')
iproj.scatter(tsteps, x, color='magenta')
## Plot the elementwise multiplication in orange
iproj.plot(tsteps, xz_imag, color='orange')
iproj.scatter(tsteps, xz_imag, color='orange')
## Add it all up to get the dot product
dftk_real, dftk_imag = sum(xz_real), sum(xz_imag)
print("dft[%d] = %f + j%f " % (k, dftk_real, dftk_imag))
```
### Notes
```
import pandas as pd
#This is the Richmond USGS Data gage
river_richmnd = pd.read_csv('JR_Richmond02037500.csv')
river_richmnd = river_richmnd.dropna()
#Hurricane data for the basin - Names of Relevant Storms - This will be used for getting the storms from the larger set
JR_stormnames = pd.read_csv('gis_match.csv')
# Bring in the Big HURDAT data, from 1950 forward (satellites and data quality, etc.)
HURDAT = pd.read_csv('hurdatcleanva_1950_present.csv')
VA_JR_stormmatch = JR_stormnames.merge(HURDAT)
# Now the common storms for the James Basin have been created. We now have time and storms together for the basin
#checking some things about the data
# How many unique storms within the basin since 1950? 62 here vs. 53 on NOAA's coast.noaa.gov site.
#We are close enough here; more digging may show some other storms, but we have at least captured the ones
#from NOAA
len(VA_JR_stormmatch['Storm Number'].unique());
#double ck the lat and long parameters
print(VA_JR_stormmatch['Lat'].min(),
VA_JR_stormmatch['Lon'].min(),
VA_JR_stormmatch['Lat'].max(),
VA_JR_stormmatch['Lon'].max())
#Make a csv of this data
VA_JR_stormmatch.to_csv('storms_in_basin.csv', sep=',',encoding = 'utf-8')
#names of storms
len(VA_JR_stormmatch['Storm Number'].unique())
VA_JR_stormmatch['Storm Number'].unique()
numbers = VA_JR_stormmatch['Storm Number']
#grab a storm from this list and look at the times
#Bill = pd.DataFrame(VA_JR_stormmatch['Storm Number'=='AL032003'])
storm = VA_JR_stormmatch[(VA_JR_stormmatch["Storm Number"] == 'AL021955')]
storm
#so this is the data for the selected storm (AL021955) that had a path through the basin
# plotting for the USGS river Gage data
import matplotlib
import matplotlib.pyplot as plt
from climata.usgs import DailyValueIO
from datetime import datetime
from pandas.plotting import register_matplotlib_converters
import numpy as np
register_matplotlib_converters()
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (20.0, 10.0)
# set parameters
nyears = 1
ndays = 365 * nyears
station_id = "02037500"
param_id = "00060"
datelist = pd.date_range(end=datetime.today(), periods=ndays).tolist()
#take an annual average for the river
annual_data = DailyValueIO(
start_date="1955-01-01",
end_date="1956-01-01",
station=station_id,
parameter=param_id,)
for series in annual_data:
    flow = [r[1] for r in series.data]
    si_flow_annual = np.asarray(flow) * 0.0283168
    flow_mean = np.mean(si_flow_annual)
#now for the storm
dischg = DailyValueIO(
start_date="1955-08-10",
end_date="1955-08-24",
station=station_id,
parameter=param_id,)
#create lists of date-flow values
for series in dischg:
    flow = [r[1] for r in series.data]
    si_flow = np.asarray(flow) * 0.0283168
    dates = [r[0] for r in series.data]
plt.plot(dates, si_flow)
plt.axhline(y=flow_mean, color='r', linestyle='-')
plt.xlabel('Date')
plt.ylabel('Discharge (m^3/s)')
plt.title("HU Connie - 1955 (Atlantic)")
plt.xticks(rotation='vertical')
plt.show()
max(si_flow)
percent_incr= (abs(max(si_flow)-flow_mean)/abs(flow_mean))*100
percent_incr
#take an annual average for the river
annual_data = DailyValueIO(
start_date="1955-03-01",
end_date="1955-10-01",
station=station_id,
parameter=param_id,)
for series in annual_data:
    flow = [r[1] for r in series.data]
    si_flow_annual = np.asarray(flow) * 0.0283168
    flow_mean_season = np.mean(si_flow_annual)
print(abs(flow_mean-flow_mean_season))
print(flow_mean)
print(flow_mean_season)
```
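The percent-increase calculation in the cell above can be wrapped in a small helper for reuse across storms (a sketch; the name `percent_increase` is introduced here and is not part of the original analysis):

```python
def percent_increase(peak_flow, baseline_flow):
    """Percent increase of a storm's peak discharge over a baseline mean flow,
    mirroring the inline percent_incr calculation above."""
    return abs(peak_flow - baseline_flow) / abs(baseline_flow) * 100

# e.g. a storm peak of 150 m^3/s against a 100 m^3/s annual mean
print(percent_increase(150.0, 100.0))  # 50.0
```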
# Scalable Recommendation with Poisson Factorization - Computation Statistics Project 1
#### The following notebook demonstrates the implementation of the mean-field variational algorithm for approximate posterior inference in Hierarchical Poisson Factorization, by Gopalan et al. (2013).
The variational inference algorithm for HPF has been implemented and stored in the module **hpf_vi**.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy.special
import scipy.stats
import sklearn.metrics
from hpf_vi import hpf_vi
from sklearn.metrics import mean_squared_error
```
The dataset is a sparse matrix of 0 and 1, which represents the interaction between users and items.
The matrix contains clusters of similar users in terms of items consumed. There are more than 1700 users and approximately 17000 items.
```
ratings = pd.read_pickle("ratings_df.pkl")
ratings = np.array(ratings)
```
As one can read in the documentation of **hpf_vi**, the algorithm stops either when the difference between two successive log-likelihood values is smaller than a tolerance (which can be changed) or after a user-defined number of iterations.
In the following code, a function that splits training and testing data is defined.
As suggested by the authors of the paper, the test set consists of randomly selected ratings, which are set to zero during training. In the original paper, the algorithm's convergence is ultimately evaluated on this held-out set.
```
def train_val_split(data, valid_dim=0.2):
    '''
    Creating two additional objects, i.e. training and validation set, which can be used in the fitting process
    Parameters:
    data = np.array
    valid_dim = float
    '''
    if valid_dim >= 1:
        raise ValueError("valid_dim must be lower than 1")
    train = data.copy()
    valid = np.zeros(data.shape)
    for u in np.unique(data.nonzero()[0]):
        ind = data[u].nonzero()[0]
        if len(ind) > 0:
            valid_ind = np.random.choice(ind, round(len(ind)*valid_dim), replace=False)
            for i in valid_ind:
                valid[u,i], train[u,i] = data[u,i], 0
    return train, valid
train, valid = train_val_split(ratings)
train, valid = train_val_split(ratings)
```
We can now fit the model using the variational inference algorithm. Note that the specification of a validation set is optional.
```
model = hpf_vi() # instantiating the model
model.fit(train = train, valid = valid, tol = 1, iterations = 100)
```
As one can see in the next graph, the log-likelihood, as expected, increases toward a plateau, until one of the two stopping criteria is met.
Recall that the log-likelihood is the following:
$$
p(y) = \prod_{u,i}\frac{(\theta_u^T\beta_i)^{y_{u,i}}\cdot e ^{-\theta_u^T\beta_i}}{y_{u,i}!} = \prod_{u,i:\;y_{u,i} > 0} \frac{(\theta_u^T\beta_i)^{y_{u,i}}}{y_{u,i}!} \cdot \prod_{u,i} e ^{-\theta_u^T\beta_i}
$$
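This log-likelihood can be evaluated directly from the factors (a sketch, assuming user factors `theta` of shape `(U, K)` and item factors `beta` of shape `(I, K)`; the helper name is ours, not part of the `hpf_vi` module):

```python
import numpy as np
from scipy.special import gammaln

def poisson_loglik(y, theta, beta):
    """Poisson log-likelihood log p(y) with rate[u, i] = theta_u^T beta_i."""
    rate = theta @ beta.T
    # log p(y) = sum_{u,i} [ y*log(rate) - rate - log(y!) ]
    return np.sum(y * np.log(rate) - rate - gammaln(y + 1))
```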
```
fig, ax = plt.subplots(figsize=(9,5))
ax.plot(model.ll[1:model.its]) # trimming the loglik computed on the random initialization
ax.set_xlabel("Epochs")
ax.set_ylabel("- Loglikelihood")
```
The **fit** method will generate a sequence of Mean Squared Errors for both the training and, if provided, the validation set. The validation MSE can be used, as the authors suggest, as a stopping criterion for the algorithm. Nevertheless, we opt for the in-sample log-likelihood.
In the following, a graph of the MSE for the validation and the training set is plotted as function of the number of iterations.
```
fig, ax = plt.subplots(figsize=(9, 5))
ax.plot(model.mse_train[1:model.its],label="Training set MSE")
ax.plot(model.mse_valid[1:model.its], label="Validation set MSE")
ax.set_xlabel("Epochs")
ax.set_ylabel("MSE")
ax.legend();
```
Recommendations can now be made using the estimated parameters. The method **recommend** suggests, for each user, the items that obtained the highest score.
## Plotting some results ##
In the following, we can qualitatively assess the in-sample performance of the model by comparing the actual vs predicted observations for 3 users of our dataset.
```
fig, ax = plt.subplots(figsize=(14, 10), nrows=3)
for u,i in enumerate([1,100,1000]):
    ax[u].plot(model.predicted[i]/model.predicted[i].max(), label = "Prediction")
    ax[u].plot(ratings[i], label = "Actual observations")
    ax[u].set_xlabel("Item")
    ax[u].set_ylabel("Interaction")
    ax[u].set_yticklabels([])
    ax[u].legend();
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        file = os.path.join(dirname, filename)
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
```
Importing necessary libraries.
```
from tensorflow.keras.layers import Input, Dense, Flatten
from keras import Model
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.models import Sequential
```
As we are using the VGG16 architecture, it expects an input size of 224 by 224 (although you can set your own size). We will set the image size accordingly.
```
image_size = [224, 224]
vgg = VGG16(input_shape = image_size + [3], weights = 'imagenet', include_top = False)
```
The first argument is the shape of the input image plus **3** colour channels (as the image is coloured [RGB]; for black-and-white use **1**).
The second is the weights, pretrained on ImageNet (which, as we know, has 1000 output classes).
The third, `include_top = False`, excludes the top (classification) layer.
```
for layer in vgg.layers:
    layer.trainable = False
```
The layers of VGG16 are already pretrained; retraining them is not good practice, so we freeze them by setting `trainable` to False.
```
from glob import glob
folders = glob('/kaggle/input/tomato/New Plant Diseases Dataset(Augmented)/train/*')
folders
```
Flattening the output layer
```
x = Flatten()(vgg.output)
prediction = Dense(len(folders), activation = 'softmax')(x)
model = Model(inputs = vgg.input, outputs = prediction)
model.summary()
```
Compiling the model
```
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
```
Generating more images
```
from keras.preprocessing.image import ImageDataGenerator
train_data_gen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True)
test_data_gen = ImageDataGenerator(rescale = 1./255)
train_set = train_data_gen.flow_from_directory('/kaggle/input/tomato/New Plant Diseases Dataset(Augmented)/train/', target_size = (224,224), batch_size = 32, class_mode = 'categorical')
test_set = test_data_gen.flow_from_directory('/kaggle/input/tomato/New Plant Diseases Dataset(Augmented)/valid/', target_size = (224,224), batch_size = 32, class_mode = 'categorical')
```
Fitting the model
```
mod = model.fit_generator(
    train_set,
    validation_data=test_set,
    epochs=10,
    steps_per_epoch=len(train_set),
    validation_steps=len(test_set)
)
import matplotlib.pyplot as plt
plt.plot(mod.history['loss'], label='train loss')
plt.plot(mod.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.plot(mod.history['accuracy'], label='train accuracy')
plt.plot(mod.history['val_accuracy'], label='val_accuracy')
plt.legend()
plt.show()
```
# Lecture 8: Fitting Generalized Linear Models (Part II)
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
sns.set_style('white')
%matplotlib inline
```
## Objectives
+ Maximum Posterior Estimate
+ Bayesian Linear Regression
+ Evidence Approximation
+ Automatic Relevance Determination
## Readings
Before coming to class, please read the following:
+ [Ch. 3 of Bishop, 2006](http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738)
+ [Ohio State University, Bayesian Linear Regression](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0ahUKEwikxsiPuJPKAhVE32MKHRoMCtsQFggyMAI&url=http%3A%2F%2Fweb.cse.ohio-state.edu%2F~kulis%2Fteaching%2F788_sp12%2Fscribe_notes%2Flecture5.pdf&usg=AFQjCNFvxuyBfFkRN8bdJAvd_dlZdsShEw&sig2=UqakvfANehNUUK1J9rXIiQ)
You can also check out this 10 minutes short Youtube video on Bayesian Linear Regression -
+ [Mathematicalmonk, Bayesian Linear Regression](https://www.youtube.com/watch?v=dtkGq9tdYcI)
Please see the 7th handout before you start on this one.
We just repeat some of the code that we developed there.
In particular, we load the essential modules and we redefine the basis function classes.
```
# Linear Basis
class LinearBasis(object):
"""
Represents a 1D linear basis.
"""
def __init__(self):
self.num_basis = 2 # The number of basis functions
def __call__(self, x):
"""
``x`` should be a 1D array.
"""
return [1., x[0]]
# We need a generic function that computes the design matrix
def compute_design_matrix(X, phi):
"""
Arguments:
X - The observed inputs (1D array)
phi - The basis functions.
"""
num_observations = X.shape[0]
num_basis = phi.num_basis
Phi = np.ndarray((num_observations, num_basis))
for i in xrange(num_observations):
Phi[i, :] = phi(X[i, :])
return Phi
# Here is a class for the polynomials:
class PolynomialBasis(object):
"""
A set of linear basis functions.
Arguments:
degree - The degree of the polynomial.
"""
def __init__(self, degree):
self.degree = degree
self.num_basis = degree + 1
def __call__(self, x):
return np.array([x[0] ** i for i in range(self.degree + 1)])
class FourierBasis(object):
"""
A set of linear basis functions.
Arguments:
num_terms - The number of Fourier terms.
L - The period of the function.
"""
def __init__(self, num_terms, L):
self.num_terms = num_terms
self.L = L
self.num_basis = 2 * num_terms
def __call__(self, x):
res = np.ndarray((self.num_basis,))
for i in xrange(num_terms):
res[2 * i] = np.cos(2 * i * np.pi / self.L * x[0])
res[2 * i + 1] = np.sin(2 * (i+1) * np.pi / self.L * x[0])
return res
class RadialBasisFunctions(object):
"""
A set of linear basis functions.
Arguments:
X - The centers of the radial basis functions.
ell - The assumed lengthscale.
"""
def __init__(self, X, ell):
self.X = X
self.ell = ell
self.num_basis = X.shape[0]
def __call__(self, x):
return np.exp(-.5 * (x - self.X) ** 2 / self.ell ** 2).flatten()
class StepFunctionBasis(object):
"""
A set of step functions.
Arguments:
X - The centers of the step functions.
"""
def __init__(self, X):
self.X = X
self.num_basis = X.shape[0]
def __call__(self, x):
res = np.ones((self.num_basis, ))
res[x < self.X.flatten()] = 0.
return res
```
We will use the motorcycle data set of lecture 7.
```
data = np.loadtxt('motor.dat')
X = data[:, 0][:, None]
Y = data[:, 1]
```
## Probabilistic Regression - Version 2
+ We wish to model the data using some **fixed** basis/features:
$$
y(\mathbf{x};\mathbf{w}) = \sum_{j=1}^{m} w_{j}\phi_{j}(\mathbf{x}) = \mathbf{w}^{T}\boldsymbol{\phi}(\mathbf{x})
$$
+ We *model the measurement process* using a **likelihood** function:
$$
\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w} \sim p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w}).
$$
+ We *model the uncertainty in the model parameters* using a **prior**:
$$
\mathbf{w} \sim p(\mathbf{w}).
$$
### Gaussian Prior on the Weights
+ Consider the following **prior** on $\mathbf{w}$:
$$
p(\mathbf{w}|\alpha) = \mathcal{N}\left(\mathbf{w}|\mathbf{0},\alpha^{-1}\mathbf{I}\right) =
\left(\frac{\alpha}{2\pi}\right)^{\frac{m}{2}}\exp\left\{-\frac{\alpha}{2}\lVert\mathbf{w}\rVert^2\right\}.
$$
+ We say:
> Before we see the data, we believe that $\mathbf{w}$ must be around zero with a precision of $\alpha$.
### The Posterior of the Weights
+ Combining the likelihood and the prior, we get using Bayes rule:
$$
p(\mathbf{w}|\mathbf{x}_{1:n},\mathbf{y}_{1:n}, \sigma,\alpha) =
\frac{p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w}, \sigma)p(\mathbf{w}|\alpha)}
{\int p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w}', \sigma)p(\mathbf{w}'|\alpha)d\mathbf{w}'}.
$$
+ We say
> The posterior summarizes our state of knowledge about $\mathbf{w}$ after we see the data,
if we know $\alpha$ and $\sigma$.
### Maximum Posterior Estimate
+ We can find a point estimate of $\mathbf{w}$ by solving:
$$
\mathbf{w}_{\mbox{MPE}} = \arg\max_{\mathbf{w}} p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w}, \sigma)p(\mathbf{w}|\alpha).
$$
+ For Gaussian likelihood and weights:
$$
\log p(\mathbf{w}|\mathbf{x}_{1:n},\mathbf{y}_{1:n}, \sigma,\alpha) =
- \frac{1}{2\sigma^2}\lVert\mathbf{\Phi}\mathbf{w}-\mathbf{y}_{1:n}\rVert^2 -\frac{\alpha}{2}\lVert\mathbf{w}\rVert^2.
$$
+ With maximum:
$$
\mathbf{w}_{\mbox{MPE}} = \sigma^{-2}\left(\sigma^{-2}\mathbf{\Phi}^T\mathbf{\Phi}+\alpha\mathbf{I}\right)^{-1}\mathbf{\Phi}^T\mathbf{y}_{1:n}.
$$
+ But, no analytic formula for $\sigma$...
### The Stable Way to Compute the MAP Estimate
+ Construct the positive-definite matrix:
$$
\mathbf{A} = \left(\sigma^{-2}\mathbf{\Phi}^T\mathbf{\Phi}+\alpha\mathbf{I}\right)
$$
+ Compute the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) of $\mathbf{A}$:
$$
\mathbf{A} = \mathbf{L}\mathbf{L}^T,
$$
where $\mathbf{L}$ is lower triangular.
+ Then, solve the system:
$$
\mathbf{L}\mathbf{L}^T\mathbf{w} = \sigma^{-2}\mathbf{\Phi}^T\mathbf{y}_{1:n},
$$
doing a forward and a backward substitution.
+ [scipy.linalg.cho_factor](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.cho_factor.html#scipy.linalg.cho_factor) and [scipy.linalg.cho_solve](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.cho_solve.html) can be used for this.
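Here is a minimal, self-contained sketch of this recipe (with random stand-in data, since the design matrix for our example is only constructed below):

```
import numpy as np
import scipy.linalg

rng = np.random.default_rng(1)
Phi = rng.standard_normal((50, 5))  # stand-in design matrix
y = rng.standard_normal(50)         # stand-in targets
sigma, alpha = 0.5, 2.0

# A is symmetric positive definite, so it admits a Cholesky factorization
A = Phi.T @ Phi / sigma ** 2 + alpha * np.eye(5)
L = scipy.linalg.cho_factor(A)      # A = L L^T
# Forward and backward substitution in one call:
w = scipy.linalg.cho_solve(L, Phi.T @ y / sigma ** 2)

# Agrees with a direct (less stable for ill-conditioned A) solve
print(np.allclose(w, np.linalg.solve(A, Phi.T @ y / sigma ** 2)))  # True
```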
### Radial Basis Functions
```
import scipy.linalg
ell = 2.
alpha = 5
sigma = 20.28
Xc = np.linspace(0, 60, 20)
phi = RadialBasisFunctions(Xc, ell)
Phi = compute_design_matrix(X, phi)
A = np.dot(Phi.T, Phi) / sigma ** 2. + alpha * np.eye(Phi.shape[1])
L = scipy.linalg.cho_factor(A)
w_MPE = scipy.linalg.cho_solve(L, np.dot(Phi.T, Y) / sigma ** 2)
print('w_MPE:')
print(w_MPE)
# Let's predict on these points:
X_p = np.linspace(0, 60, 100)[:, None]
Phi_p = compute_design_matrix(X_p, phi)
Y_p = np.dot(Phi_p, w_MPE)
Y_l = Y_p - 2. * sigma # Lower predictive bound
Y_u = Y_p + 2. * sigma # Upper predictive bound
fig, ax = plt.subplots()
ax.plot(X, Y, 'x', markeredgewidth=2, label='Observations')
ax.plot(X_p, Y_p, label='MPE Prediction (Radial Basis Functions, alpha=%1.2f)' % alpha)
ax.fill_between(X_p.flatten(), Y_l, Y_u, color=sns.color_palette()[1], alpha=0.25)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.legend(loc='best');
```
### Hands-on
+ Experiment with different alphas.
+ When are we underfitting?
+ When are we overfitting?
+ Which one (if any) gives you the best fit?
### Issues with Maximum Posterior Estimate
+ How many basis functions should I use?
+ Which basis functions should I use?
+ How do I pick the parameters of the basis functions, e.g., the lengthscale $\ell$ of the RBFs, $\alpha$, etc.?
## Probabilistic Regression - Version 3 - Bayesian Linear Regression
+ For Gaussian likelihood and weights, the posterior is Gaussian:
$$
p(\mathbf{w}|\mathbf{x}_{1:n},\mathbf{y}_{1:n}, \sigma, \alpha) = \mathcal{N}\left(\mathbf{w}|\mathbf{m}, \mathbf{S}\right),
$$
where
$$
\mathbf{S} = \left(\sigma^{-2}\mathbf{\Phi}^T\mathbf{\Phi}+\alpha\mathbf{I}\right)^{-1},
$$
and
$$
\mathbf{m} = \sigma^{-2}\mathbf{S}\Phi^T\mathbf{y}_{1:n}.
$$
+ In general (non-Gaussian likelihood or prior), there is no closed form and we must sample the posterior, e.g., with [Markov Chain Monte Carlo](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo).
### Posterior Predictive Distribution
+ Using probability theory, we ask: what do we know about $y$ at a new $\mathbf{x}$ after seeing the data?
+ Using the sum rule, we have:
$$
p(y|\mathbf{x}, \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) =
\int p(y | \mathbf{x}, \mathbf{w}, \sigma) p(\mathbf{w}|\mathbf{x}_{1:n}, \mathbf{y}_{1:n},\sigma,\alpha)d\mathbf{w}.
$$
+ For Gaussian likelihood and prior:
$$
p(y|\mathbf{x}, \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) = \mathcal{N}\left(y|m(\mathbf{x}), s^2(\mathbf{x})\right),
$$
where
$$
m(\mathbf{x}) = \mathbf{m}^T\boldsymbol{\phi}(\mathbf{x})\;\mbox{and}\;s^2(\mathbf{x}) = \boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}\boldsymbol{\phi}(\mathbf{x}) + \sigma^2.
$$
### Predictive Uncertainty
+ The **predictive uncertainty** is:
$$
s^2(\mathbf{x}) = \boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}\boldsymbol{\phi}(\mathbf{x}) + \sigma^2.
$$
+ $\sigma^2$ corresponds to the measurement noise.
+ $\boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}\boldsymbol{\phi}(\mathbf{x})$ is the epistemic uncertainty induced by limited data.
```
import scipy.linalg
ell = 2.
alpha = 0.001
sigma = 20.28
Xc = np.linspace(0, 60, 20)
phi = RadialBasisFunctions(Xc, ell)
Phi = compute_design_matrix(X, phi)
A = np.dot(Phi.T, Phi) / sigma ** 2. + alpha * np.eye(Phi.shape[1])
L = scipy.linalg.cho_factor(A)
m = scipy.linalg.cho_solve(L, np.dot(Phi.T, Y) / sigma ** 2) # The posterior mean of w
S = scipy.linalg.cho_solve(L, np.eye(Phi.shape[1])) # The posterior covariance of w
Phi_p = compute_design_matrix(X_p, phi)
Y_p = np.dot(Phi_p, m) # The mean prediction
V_p_ep = np.einsum('ij,jk,ik->i', Phi_p, S, Phi_p) # The epistemic uncertainty
S_p_ep = np.sqrt(V_p_ep)
V_p = V_p_ep + sigma ** 2 # Full uncertainty
S_p = np.sqrt(V_p)
Y_l_ep = Y_p - 2. * S_p_ep # Lower epistemic predictive bound
Y_u_ep = Y_p + 2. * S_p_ep # Upper epistemic predictive bound
Y_l = Y_p - 2. * S_p # Lower predictive bound
Y_u = Y_p + 2. * S_p # Upper predictive bound
fig, ax = plt.subplots()
ax.plot(X, Y, 'x', markeredgewidth=2, label='Observations')
ax.plot(X_p, Y_p, label='Bayesian Prediction (Radial Basis Functions, alpha=%1.2f)' % alpha)
ax.fill_between(X_p.flatten(), Y_l_ep, Y_u_ep, color=sns.color_palette()[2], alpha=0.25)
ax.fill_between(X_p.flatten(), Y_l, Y_l_ep, color=sns.color_palette()[1], alpha=0.25)
ax.fill_between(X_p.flatten(), Y_u_ep, Y_u, color=sns.color_palette()[1], alpha=0.25)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.legend(loc='best');
```
### Sampling Posterior Models
+ We can actually sample models (functions) from the posterior. Here is how:
+ Sample a $\mathbf{w}$ from $p(\mathbf{w}|\mathbf{x}_{1:n},\mathbf{y}_{1:n}, \sigma, \alpha)$.
+ Look at the sampled model:
$$
y(\mathbf{x};\mathbf{w}) = \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}).
$$
```
# We have m, S, X_p, and Phi_p from before
fig, ax = plt.subplots()
ax.plot(X, Y, 'x', markeredgewidth=2, label='Observations')
for i in range(10):
w = np.random.multivariate_normal(m, S)
Y_p_s = np.dot(Phi_p, w)
ax.plot(X_p, Y_p_s, color=sns.color_palette()[2], linewidth=0.5);
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
```
### Issues with Bayesian Linear Regression
+ How many basis functions should I use?
+ Which basis functions should I use?
+ How do I pick the parameters of the basis functions, e.g., the lengthscale $\ell$ of the RBFs, $\alpha$, etc.?
### Hands-on
+ Experiment with different alphas, ells, and sigmas.
+ When are we underfitting?
+ When are we overfitting?
+ Which one (if any) gives you the best fit?
+ In the figure, right above: Increase the number of posterior $\mathbf{w}$ samples to get a sense of the epistemic uncertainty induced by the limited data.
## Probabilistic Regression - Version 4 - Hierarchical Priors
+ So, how do we find all the parameters like $\sigma$, $\alpha$, $\ell$, etc?
+ These are all called **hyper-parameters** of the model.
+ Call all of them
$$
\boldsymbol{\theta} = \{\sigma, \alpha, \ell,\dots\}.
$$
### Hierarchical Priors
+ Model:
$$
y(\mathbf{x};\mathbf{w}) = \sum_{j=1}^{m} w_{j}\phi_{j}(\mathbf{x}) = \mathbf{w^{T}\boldsymbol{\phi}(\mathbf{x})}
$$
+ Likelihood:
$$
\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w}, \boldsymbol{\theta} \sim p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w}, \boldsymbol{\theta}).
$$
+ Weight prior:
$$
\mathbf{w} | \boldsymbol{\theta} \sim p(\mathbf{w}| \boldsymbol{\theta}).
$$
+ Hyper-prior:
$$
\boldsymbol{\theta} \sim p(\boldsymbol{\theta}).
$$
### Fully Bayesian Solution
+ Just write down the posterior of everything:
$$
p(\mathbf{w}, \boldsymbol{\theta}|\mathbf{x}_{1:n}, \mathbf{y}_{1:n}) \propto p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w},\boldsymbol{\theta})p(\mathbf{w}|\boldsymbol{\theta})p(\boldsymbol{\theta}).
$$
+ and, somehow, sample from it...
### The Evidence Approximation
+ Look at the marginal posterior of $\boldsymbol{\theta}$:
$$
p(\boldsymbol{\theta}|\mathbf{x}_{1:n}, \mathbf{y}_{1:n}) \propto
\int p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w},\boldsymbol{\theta})p(\mathbf{w}|\boldsymbol{\theta})p(\boldsymbol{\theta})d\mathbf{w}.
$$
+ Assume that the hyper-prior is relatively flat:
$$
p(\boldsymbol{\theta}) \propto 1,
$$
+ Use a MAP estimate for $\boldsymbol{\theta}$:
$$
\boldsymbol{\theta}_{\mbox{EV}} = \arg\max_{\boldsymbol{\theta}}\int p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{w},\boldsymbol{\theta})p(\mathbf{w}|\boldsymbol{\theta})d\mathbf{w}.
$$
+ Analytical for Gaussian likelihood and prior.
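To make this concrete: for a Gaussian likelihood and prior, marginalizing out $\mathbf{w}$ gives $\mathbf{y}_{1:n} \sim \mathcal{N}\left(\mathbf{0}, \sigma^2\mathbf{I} + \alpha^{-1}\mathbf{\Phi}\mathbf{\Phi}^T\right)$, so the evidence can be evaluated directly and handed to an optimizer. A sketch on made-up data:

```
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
Phi = rng.standard_normal((30, 4))  # made-up design matrix
y = Phi @ rng.standard_normal(4) + 0.1 * rng.standard_normal(30)

def log_evidence(sigma, alpha, Phi, y):
    # Marginalizing w analytically: y ~ N(0, sigma^2 I + alpha^{-1} Phi Phi^T)
    C = sigma ** 2 * np.eye(len(y)) + Phi @ Phi.T / alpha
    return multivariate_normal(mean=np.zeros(len(y)), cov=C).logpdf(y)

# The evidence prefers noise levels compatible with the data:
print(log_evidence(0.1, 1.0, Phi, y) > log_evidence(100.0, 1.0, Phi, y))  # True
```

Maximizing `log_evidence` over $\sigma$ and $\alpha$ is exactly what the fast algorithm below does, just far more efficiently.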
### Implementing the Evidence Approximation
+ There is a fast algorithm for the evidence approximation for Bayesian linear regression.
+ It would take about an hour to go over it. See Ch. 3 of (Bishop, 2006).
+ We will use the implementation found in [scikit-learn](http://scikit-learn.org).
+ If you don't have it:
```
conda install scikit-learn
```
### Radial Basis Functions
```
from sklearn.linear_model import BayesianRidge
ell = 2.
Xc = np.linspace(0, 60, 50)
phi = RadialBasisFunctions(Xc, ell)
Phi = compute_design_matrix(X, phi)
regressor = BayesianRidge()
regressor.fit(Phi, Y)
# They are using different names:
sigma = np.sqrt(1. / regressor.alpha_)
print('best sigma:', sigma)
alpha = regressor.lambda_
print('best alpha:', alpha)
A = np.dot(Phi.T, Phi) / sigma ** 2. + alpha * np.eye(Phi.shape[1])
L = scipy.linalg.cho_factor(A)
m = scipy.linalg.cho_solve(L, np.dot(Phi.T, Y) / sigma ** 2) # The posterior mean of w
S = scipy.linalg.cho_solve(L, np.eye(Phi.shape[1])) # The posterior covariance of w
Phi_p = compute_design_matrix(X_p, phi)
Y_p = np.dot(Phi_p, m) # The mean prediction
V_p_ep = np.einsum('ij,jk,ik->i', Phi_p, S, Phi_p) # The epistemic uncertainty
S_p_ep = np.sqrt(V_p_ep)
V_p = V_p_ep + sigma ** 2 # Full uncertainty
S_p = np.sqrt(V_p)
Y_l_ep = Y_p - 2. * S_p_ep # Lower epistemic predictive bound
Y_u_ep = Y_p + 2. * S_p_ep # Upper epistemic predictive bound
Y_l = Y_p - 2. * S_p # Lower predictive bound
Y_u = Y_p + 2. * S_p # Upper predictive bound
fig, ax = plt.subplots()
ax.plot(X, Y, 'x', markeredgewidth=2, label='Observations')
ax.plot(X_p, Y_p, label='Bayesian Prediction (Radial Basis Functions, alpha=%1.2f)' % alpha)
ax.fill_between(X_p.flatten(), Y_l_ep, Y_u_ep, color=sns.color_palette()[2], alpha=0.25)
ax.fill_between(X_p.flatten(), Y_l, Y_l_ep, color=sns.color_palette()[1], alpha=0.25)
ax.fill_between(X_p.flatten(), Y_u_ep, Y_u, color=sns.color_palette()[1], alpha=0.25)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.legend(loc='best');
```
### Issues with Bayesian Linear Regression
+ How many basis functions should I use?
+ Which basis functions should I use?
### Hands-on
+ Try the evidence approximation with the Fourier basis.
+ Try the evidence approximation with the Step function basis.
## Probabilistic Regression - Version 5 - Automatic Relevance Determination
+ Use a different precision $\alpha_i$ for each weight:
$$
p(w_j | \alpha_j) \propto \exp\left\{-\alpha_jw_j^2\right\},
$$
+ so that:
$$
p(\mathbf{w}|\boldsymbol{\alpha}) \propto \prod_{j=1}^mp(w_j|\alpha_j).
$$
+ Then maximize the **evidence** with respect to all the $\alpha_j$'s.
+ **Sparsity**: When $\alpha_j\rightarrow\infty$, $w_j=0$ identically!
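The pruning effect is easy to see on a toy problem in which only the first of several candidate features actually matters (a sketch on synthetic data):

```
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)  # only feature 0 is relevant

ard = ARDRegression(fit_intercept=False)
ard.fit(X, y)
print(np.round(ard.coef_, 3))  # weight ~2 on feature 0, the rest driven to ~0
```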
### Radial Basis Functions
```
from sklearn.linear_model import ARDRegression
ell = 2.
Xc = np.linspace(0, 60, 50)
phi = RadialBasisFunctions(Xc, ell)
Phi = compute_design_matrix(X, phi)
regressor = ARDRegression()
regressor.fit(Phi, Y)
# They are using different names:
sigma = np.sqrt(1. / regressor.alpha_)
print('best sigma:', sigma)
alpha = regressor.lambda_  # for ARD, this is a vector of per-weight precisions
print('best alpha:', alpha)
A = np.dot(Phi.T, Phi) / sigma ** 2. + alpha * np.eye(Phi.shape[1])
L = scipy.linalg.cho_factor(A)
m = scipy.linalg.cho_solve(L, np.dot(Phi.T, Y) / sigma ** 2) # The posterior mean of w
S = scipy.linalg.cho_solve(L, np.eye(Phi.shape[1])) # The posterior covariance of w
Phi_p = compute_design_matrix(X_p, phi)
Y_p = np.dot(Phi_p, m) # The mean prediction
V_p_ep = np.einsum('ij,jk,ik->i', Phi_p, S, Phi_p) # The epistemic uncertainty
S_p_ep = np.sqrt(V_p_ep)
V_p = V_p_ep + sigma ** 2 # Full uncertainty
S_p = np.sqrt(V_p)
Y_l_ep = Y_p - 2. * S_p_ep # Lower epistemic predictive bound
Y_u_ep = Y_p + 2. * S_p_ep # Upper epistemic predictive bound
Y_l = Y_p - 2. * S_p # Lower predictive bound
Y_u = Y_p + 2. * S_p # Upper predictive bound
fig, ax = plt.subplots()
ax.plot(X, Y, 'x', markeredgewidth=2, label='Observations')
ax.plot(X_p, Y_p, label='Bayesian Prediction (Radial Basis Functions, ARD)')
ax.fill_between(X_p.flatten(), Y_l_ep, Y_u_ep, color=sns.color_palette()[2], alpha=0.25)
ax.fill_between(X_p.flatten(), Y_l, Y_l_ep, color=sns.color_palette()[1], alpha=0.25)
ax.fill_between(X_p.flatten(), Y_u_ep, Y_u, color=sns.color_palette()[1], alpha=0.25)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.legend(loc='best');
```
### Issues with Automatic Relevance Determination
+ What about input-dependent (heteroscedastic) noise? (ADVANCED)
### Hands-on
+ Try ARD with the Fourier basis.
+ Try ARD with the Step function basis.
+ Try ARD with a basis that consists of both Fourier and RBF functions. Which ones survive?
# Optimization of a Dissipative Quantum Gate
```
# NBVAL_IGNORE_OUTPUT
%load_ext watermark
import sys
import os
import qutip
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
import copy
from functools import partial
from itertools import product
%watermark -v --iversions
```
$\newcommand{tr}[0]{\operatorname{tr}}
\newcommand{diag}[0]{\operatorname{diag}}
\newcommand{abs}[0]{\operatorname{abs}}
\newcommand{pop}[0]{\operatorname{pop}}
\newcommand{aux}[0]{\text{aux}}
\newcommand{int}[0]{\text{int}}
\newcommand{opt}[0]{\text{opt}}
\newcommand{tgt}[0]{\text{tgt}}
\newcommand{init}[0]{\text{init}}
\newcommand{lab}[0]{\text{lab}}
\newcommand{rwa}[0]{\text{rwa}}
\newcommand{bra}[1]{\langle#1\vert}
\newcommand{ket}[1]{\vert#1\rangle}
\newcommand{Bra}[1]{\left\langle#1\right\vert}
\newcommand{Ket}[1]{\left\vert#1\right\rangle}
\newcommand{Braket}[2]{\left\langle #1\vphantom{#2}\mid{#2}\vphantom{#1}\right\rangle}
\newcommand{ketbra}[2]{\vert#1\rangle\!\langle#2\vert}
\newcommand{op}[1]{\hat{#1}}
\newcommand{Op}[1]{\hat{#1}}
\newcommand{dd}[0]{\,\text{d}}
\newcommand{Liouville}[0]{\mathcal{L}}
\newcommand{DynMap}[0]{\mathcal{E}}
\newcommand{identity}[0]{\mathbf{1}}
\newcommand{Norm}[1]{\lVert#1\rVert}
\newcommand{Abs}[1]{\left\vert#1\right\vert}
\newcommand{avg}[1]{\langle#1\rangle}
\newcommand{Avg}[1]{\left\langle#1\right\rangle}
\newcommand{AbsSq}[1]{\left\vert#1\right\vert^2}
\newcommand{Re}[0]{\operatorname{Re}}
\newcommand{Im}[0]{\operatorname{Im}}$
This example illustrates the optimization for a quantum gate in an open quantum system, where the dynamics is governed by the Liouville-von Neumann equation. A naive extension of a gate optimization to Liouville space would seem to imply that it is necessary to optimize over the full basis of Liouville space (16 matrices, for a two-qubit gate). However, [Goerz et al., New J. Phys. 16, 055012 (2014)][1] showed that this is not necessary: a set of 3 density matrices is sufficient to track the optimization.
This example reproduces the "Example II" from that paper, considering the optimization towards a $\sqrt{\text{iSWAP}}$ two-qubit gate on a system of two transmons with a shared transmission line resonator.
[1]: https://michaelgoerz.net/research/Goerz_NJP2014.pdf
**Note**: This notebook uses some parallelization features (`parallel_map`/`multiprocessing`). Unfortunately, on Windows (and macOS with Python >= 3.8), `multiprocessing` does not work correctly for functions defined in a Jupyter notebook (due to the [spawn method](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) being used on Windows, instead of Unix-`fork`, see also https://stackoverflow.com/questions/45719956). We can use the third-party [loky](https://loky.readthedocs.io/) library to fix this, but this significantly increases the overhead of multi-process parallelization. The use of parallelization here is for illustration only and makes no guarantee of actually improving the runtime of the optimization.
```
if sys.platform != 'linux':
krotov.parallelization.set_parallelization(use_loky=True)
from krotov.parallelization import parallel_map
```
## The two-transmon system
We consider the Hamiltonian from Eq (17) in the paper, in the rotating wave approximation, together with spontaneous decay and dephasing of each qubit. Altogether, we define the Liouvillian as follows:
```
def two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
):
from qutip import tensor, identity, destroy
b1 = tensor(identity(n_qubit), destroy(n_qubit))
b2 = tensor(destroy(n_qubit), identity(n_qubit))
H0 = (
(ω1 - ωd - δ1 / 2) * b1.dag() * b1
+ (δ1 / 2) * b1.dag() * b1 * b1.dag() * b1
+ (ω2 - ωd - δ2 / 2) * b2.dag() * b2
+ (δ2 / 2) * b2.dag() * b2 * b2.dag() * b2
+ J * (b1.dag() * b2 + b1 * b2.dag())
)
H1_re = 0.5 * (b1 + b1.dag() + b2 + b2.dag()) # 0.5 is due to RWA
H1_im = 0.5j * (b1.dag() - b1 + b2.dag() - b2)
H = [H0, [H1_re, Omega], [H1_im, ZeroPulse]]
A1 = np.sqrt(1 / q1T1) * b1 # decay of qubit 1
A2 = np.sqrt(1 / q2T1) * b2 # decay of qubit 2
A3 = np.sqrt(1 / q1T2) * b1.dag() * b1 # dephasing of qubit 1
A4 = np.sqrt(1 / q2T2) * b2.dag() * b2 # dephasing of qubit 2
L = krotov.objectives.liouvillian(H, c_ops=[A1, A2, A3, A4])
return L
```
We will use internal units GHz and ns. Values in GHz contain an implicit factor 2π, and MHz and μs are converted to GHz and ns, respectively:
```
GHz = 2 * np.pi
MHz = 1e-3 * GHz
ns = 1
μs = 1000 * ns
```
This implicit factor $2 \pi$ is because frequencies ($\nu$) convert to energies as $E = h \nu$, but our propagation routines assume a unit $\hbar = 1$ for energies. Thus, the factor $h / \hbar = 2 \pi$.
We will use the same parameters as those given in Table 2 of the paper:
```
ω1 = 4.3796 * GHz # qubit frequency 1
ω2 = 4.6137 * GHz # qubit frequency 2
ωd = 4.4985 * GHz # drive frequency
δ1 = -239.3 * MHz # anharmonicity 1
δ2 = -242.8 * MHz # anharmonicity 2
J = -2.3 * MHz # effective qubit-qubit coupling
q1T1 = 38.0 * μs # decay time for qubit 1
q2T1 = 32.0 * μs # decay time for qubit 2
q1T2 = 29.5 * μs # dephasing time for qubit 1
q2T2 = 16.0 * μs # dephasing time for qubit 2
T = 400 * ns # gate duration
tlist = np.linspace(0, T, 2000)
```
While in the original paper, each transmon was cut off at 6 levels, here we truncate at 5 levels. This makes the propagation faster, while potentially introducing a slightly larger truncation error.
```
n_qubit = 5 # number of transmon levels to consider
```
In the Liouvillian, note that the control is split into separate real and imaginary parts. As a guess control, we use a real-valued constant pulse with an amplitude of 35 MHz, acting over 400 ns, with a switch-on and switch-off in the first and last 20 ns (see plot below).
```
def Omega(t, args):
E0 = 35.0 * MHz
return E0 * krotov.shapes.flattop(t, 0, T, t_rise=(20 * ns), func='sinsq')
```
The imaginary part starts out as zero:
```
def ZeroPulse(t, args):
return 0.0
```
We can now instantiate the Liouvillian:
```
L = two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
)
```
The guess pulse looks as follows:
```
def plot_pulse(pulse, tlist, xlimit=None):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, None) for t in tlist])
ax.plot(tlist, pulse/MHz)
ax.set_xlabel('time (ns)')
ax.set_ylabel('pulse amplitude (MHz)')
if xlimit is not None:
ax.set_xlim(xlimit)
plt.show(fig)
plot_pulse(L[1][1], tlist)
```
## Optimization objectives
Our target gate is $\Op{O} = \sqrt{\text{iSWAP}}$:
```
SQRTISWAP = qutip.Qobj(np.array(
[[1, 0, 0, 0],
[0, 1 / np.sqrt(2), 1j / np.sqrt(2), 0],
[0, 1j / np.sqrt(2), 1 / np.sqrt(2), 0],
[0, 0, 0, 1]]),
dims=[[2, 2], [2, 2]]
)
```
The key idea explored in the paper is that a set of three density matrices is sufficient to track the optimization
$$
\begin{align}
\Op{\rho}_1
&= \sum_{i=1}^{d} \frac{2 (d-i+1)}{d (d+1)} \ketbra{i}{i} \\
\Op{\rho}_2
&= \sum_{i,j=1}^{d} \frac{1}{d} \ketbra{i}{j} \\
\Op{\rho}_3
&= \sum_{i=1}^{d} \frac{1}{d} \ketbra{i}{i}
\end{align}
$$
In our case, $d=4$ for a two-qubit gate, and the $\ket{i}$, $\ket{j}$ are the canonical basis states $\ket{00}$, $\ket{01}$, $\ket{10}$, $\ket{11}$:
```
ket00 = qutip.ket((0, 0), dim=(n_qubit, n_qubit))
ket01 = qutip.ket((0, 1), dim=(n_qubit, n_qubit))
ket10 = qutip.ket((1, 0), dim=(n_qubit, n_qubit))
ket11 = qutip.ket((1, 1), dim=(n_qubit, n_qubit))
basis = [ket00, ket01, ket10, ket11]
```
The three density matrices play different roles in the optimization, and, as shown in the paper, convergence may improve significantly by weighting the states relative to each other. For this example, we place a strong emphasis on the optimization $\Op{\rho}_1 \rightarrow \Op{O} \Op{\rho}_1 \Op{O}^\dagger$, by a factor of 20. This reflects that the hardest part of the optimization is identifying the basis in which the gate is diagonal. We will be using the real-part functional ($J_{T,\text{re}}$) to evaluate the success of $\Op{\rho}_i \rightarrow \Op{O}\Op{\rho}_i\Op{O}^\dagger$. Because $\Op{\rho}_1$ and $\Op{\rho}_3$ are mixed states, the Hilbert-Schmidt overlap will take values smaller than one in the optimal case. To compensate, we divide the weights by the purity of the respective states.
```
weights = np.array([20, 1, 1], dtype=np.float64)
weights *= len(weights) / np.sum(weights) # manual normalization
weights /= np.array([0.3, 1.0, 0.25]) # purities
```
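As a sanity check, the three density matrices and the purities we just divided out can be constructed directly for $d=4$ (a sketch restricted to the logical subspace, not the full transmon Hilbert space):

```
import numpy as np

d = 4
# rho_1: decreasing populations 2(d-i+1)/(d(d+1)) on the diagonal
rho1 = np.diag([2 * (d - i + 1) / (d * (d + 1)) for i in range(1, d + 1)])
# rho_2: all matrix elements equal to 1/d
rho2 = np.full((d, d), 1 / d)
# rho_3: the totally mixed state
rho3 = np.eye(d) / d

for rho in (rho1, rho2, rho3):
    assert np.isclose(np.trace(rho), 1.0)  # all are valid density matrices

purities = [np.trace(rho @ rho) for rho in (rho1, rho2, rho3)]
print(np.round(purities, 3))  # 0.3, 1.0, 0.25 -- the values divided out above
```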
The `krotov.gate_objectives` routine can initialize the density matrices $\Op{\rho}_1$, $\Op{\rho}_2$, $\Op{\rho}_3$ automatically, via the parameter `liouville_states_set`. Alternatively, we could also use the `'full'` basis of 16 matrices or the extended set of $d+1 = 5$ pure-state density matrices.
```
objectives = krotov.gate_objectives(
basis,
SQRTISWAP,
L,
liouville_states_set='3states',
weights=weights,
normalize_weights=False,
)
objectives
```
The use of `normalize_weights=False` is because we have included the purities in the weights, as discussed above.
## Dynamics under the Guess Pulse
For the analysis of both the guess and the optimized controls, we will use a stateful density-matrix propagator for numerical efficiency.
A true physical measure for the success of the optimization is the "average gate fidelity". Evaluating this fidelity requires simulating the dynamics of the full basis of Liouville space:
```
full_liouville_basis = [psi * phi.dag() for (psi, phi) in product(basis, basis)]
```
We propagate these under the guess control:
```
def propagate_guess(initial_state):
return objectives[0].mesolve(
tlist,
rho0=initial_state,
).states[-1]
full_states_T = parallel_map(
propagate_guess, values=full_liouville_basis,
)
print("F_avg = %.3f" % krotov.functionals.F_avg(full_states_T, basis, SQRTISWAP))
```
Note that we use $J_{T,\text{re}}$, not $F_{\text{avg}}$ to steer the optimization, as the Krotov boundary condition $\frac{\partial F_{\text{avg}}}{\partial \rho^\dagger}$ would be non-trivial.
Before doing the optimization, we can look at the population dynamics under the guess pulse. For this purpose we propagate the pure-state density matrices corresponding to the canonical logical basis in Hilbert space, and obtain the expectation values for the projection onto these same states:
```
rho00, rho01, rho10, rho11 = [qutip.ket2dm(psi) for psi in basis]
def propagate_guess_for_expvals(initial_state):
return objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
def plot_population_dynamics(dyn00, dyn01, dyn10, dyn11):
fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(16, 8))
axs = np.ndarray.flatten(axs)
labels = ['00', '01', '10', '11']
dyns = [dyn00, dyn01, dyn10, dyn11]
for (ax, dyn, title) in zip(axs, dyns, labels):
for (i, label) in enumerate(labels):
ax.plot(dyn.times, dyn.expect[i], label=label)
ax.legend()
ax.set_title(title)
plt.show(fig)
plot_population_dynamics(
*parallel_map(
propagate_guess_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
```
## Optimization
We now define the optimization parameters for the controls, the Krotov step size $\lambda_a$ and the update-shape that will ensure that the pulse switch-on and switch-off stays intact.
```
pulse_options = {
L[i][1]: dict(
lambda_a=1.0,
update_shape=partial(
krotov.shapes.flattop, t_start=0, t_stop=T, t_rise=(20 * ns))
)
for i in [1, 2]
}
```
Then we run the optimization. A fully converged optimization would take on the order of 2000 iterations; here, we stop after just a few to keep the runtime manageable:
```
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=3,
)
```
(this takes a while)...
```
dumpfile = "./3states_opt_result.dump"
if os.path.isfile(dumpfile):
opt_result = krotov.result.Result.load(dumpfile, objectives)
else:
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=5,
continue_from=opt_result
)
opt_result.dump(dumpfile)
opt_result
```
## Optimization result
```
optimized_control = opt_result.optimized_controls[0] + 1j * opt_result.optimized_controls[1]
plot_pulse(np.abs(optimized_control), tlist)
def propagate_opt(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
).states[-1]
opt_full_states_T = parallel_map(
propagate_opt, values=full_liouville_basis,
)
print("F_avg = %.3f" % krotov.functionals.F_avg(opt_full_states_T, basis, SQRTISWAP))
def propagate_opt_for_expvals(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
```
Plotting the population dynamics, we see the expected behavior for the $\sqrt{\text{iSWAP}}$ gate.
```
plot_population_dynamics(
*parallel_map(
propagate_opt_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
def plot_convergence(result):
fig, ax = plt.subplots()
ax.semilogy(result.iters, result.info_vals)
ax.set_xlabel('OCT iteration')
ax.set_ylabel(r'optimization error $J_{T, re}$')
plt.show(fig)
plot_convergence(opt_result)
```
# Working with Scikit-learn
This notebook shows how PySINDy objects interface with some useful tools from [Scikit-learn](https://scikit-learn.org/stable/).
## Setup
```
import numpy as np
from scipy.integrate import odeint
import pysindy as ps
```
Let's generate some training data from the [Lorenz system](https://en.wikipedia.org/wiki/Lorenz_system) with which to experiment.
```
def lorenz(z, t):
return [
10 * (z[1] - z[0]),
z[0] * (28 - z[2]) - z[1],
z[0] * z[1] - (8 / 3) * z[2]
]
# Generate training data
dt = .002
t_train = np.arange(0, 10, dt)
x0_train = [-8, 8, 27]
x_train = odeint(lorenz, x0_train, t_train)
# Evolve the Lorenz equations in time using a different initial condition
t_test = np.arange(0, 15, dt)
x0_test = np.array([8, 7, 15])
x_test = odeint(lorenz, x0_test, t_test)
```
## Cross-validation
PySINDy supports Scikit-learn-type cross-validation with a few caveats.
1. We must use **uniform timesteps**, specified via the `t_default` parameter. This is because the `fit` and `score` methods of `SINDy` differ from those used in Scikit-learn in the sense that they both have an optional `t` parameter. Setting `t_default` is a workaround.
2. We have to be careful about the way we split up testing and training data during cross-validation. Because the `SINDy` object needs to differentiate the data, we need the training and test data to consist of sequential intervals of time. If we randomly sample the data, then the computed derivatives will be horribly inaccurate. Luckily, Scikit-learn has a `TimeSeriesSplit` object for such situations. If we really want to randomly sample the data during cross-validation, there is a way to do so. However, it's more complicated.
Note that we need to prepend `optimizer__`, `feature_library__`, or `differentiation_method__` to the parameter names.
### Cross-validation with TimeSeriesSplit
```
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import TimeSeriesSplit
model = ps.SINDy(t_default=dt)
param_grid = {
"optimizer__threshold": [0.001, 0.01, 0.1],
"optimizer__alpha": [0.01, 0.05, 0.1],
"feature_library": [ps.PolynomialLibrary(), ps.FourierLibrary()],
"differentiation_method__order": [1, 2]
}
search = GridSearchCV(
model,
param_grid,
cv=TimeSeriesSplit(n_splits=5)
)
search.fit(x_train)
print("Best parameters:", search.best_params_)
search.best_estimator_.print()
```
### Cross-validation without TimeSeriesSplit
If we want to use another cross-validation splitter, we'll need to (a) define a wrapper class which uses the argument "y" instead of "x_dot" and (b) precompute the derivatives. Note that (b) means that we will not be able to perform cross-validation on the parameters of the differentiation method.
```
from sklearn.metrics import r2_score
class SINDyCV(ps.SINDy):
def __init__(
self,
optimizer=None,
feature_library=None,
differentiation_method=None,
feature_names=None,
t_default=1,
discrete_time=False,
n_jobs=1
):
super(SINDyCV, self).__init__(
optimizer=optimizer,
feature_library=feature_library,
differentiation_method=differentiation_method,
feature_names=feature_names,
t_default=t_default,
discrete_time=discrete_time,
n_jobs=n_jobs
)
def fit(self, x, y, **kwargs):
return super(SINDyCV, self).fit(x, x_dot=y, **kwargs)
def score(
self,
x,
y,
t=None,
u=None,
multiple_trajectories=False,
metric=r2_score,
**metric_kws
):
return super(SINDyCV, self).score(
x,
x_dot=y,
t=t,
u=u,
multiple_trajectories=multiple_trajectories,
metric=metric,
**metric_kws
)
from sklearn.model_selection import ShuffleSplit
model = SINDyCV()
x_dot = model.differentiate(x_train, t=t_train)
param_grid = {
"optimizer__threshold": [0.002, 0.01, 0.1],
"optimizer__alpha": [0.01, 0.05, 0.1],
"feature_library__degree": [1, 2, 3],
}
search = GridSearchCV(
model,
param_grid,
cv=ShuffleSplit(n_splits=3, test_size=0.25)
)
search.fit(x_train, y=x_dot)
print("Best parameters:", search.best_params_)
search.best_estimator_.print()
```
## Sparse optimizers
Any of Scikit-learn's [linear models](https://scikit-learn.org/stable/modules/linear_model.html) can be used for the `optimizer` parameter of a `SINDy` object, though we only recommend using those designed for sparse regression.
In the examples below we set `fit_intercept` to `False` since the default feature library (polynomials of degree up to two) already includes constant functions.
```
from sklearn.linear_model import ElasticNet
model = ps.SINDy(optimizer=ElasticNet(l1_ratio=0.9, fit_intercept=False), t_default=dt)
model.fit(x_train)
model.print()
from sklearn.linear_model import OrthogonalMatchingPursuit
model = ps.SINDy(
optimizer=OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False),
t_default=dt
)
model.fit(x_train)
model.print()
```
# An Introduction to Natural Language in Python using spaCy
## Introduction
This tutorial provides a brief introduction to working with natural language (sometimes called "text analytics") in Python, using [spaCy](https://spacy.io/) and related libraries.
Data science teams in industry must work with lots of text, one of the top four categories of data used in machine learning.
Usually that's human-generated text, although not always.
Think about it: how does the "operating system" for business work? Typically, there are contracts (sales contracts, work agreements, partnerships), there are invoices, there are insurance policies, there are regulations and other laws, and so on.
All of those are represented as text.
You may run across a few acronyms: _natural language processing_ (NLP), _natural language understanding_ (NLU), _natural language generation_ (NLG) — which are roughly speaking "read text", "understand meaning", "write text" respectively.
Increasingly these tasks overlap and it becomes difficult to categorize any given feature.
The _spaCy_ framework — along with a wide and growing range of plug-ins and other integrations — provides features for a wide range of natural language tasks.
It's become one of the most widely used natural language libraries in Python for industry use cases, and has quite a large community — and with that, much support for commercialization of research advances as this area continues to evolve rapidly.
## Getting Started
Check out the excellent _spaCy_ [installation notes](https://spacy.io/usage) for a "configurator" which generates installation commands based on which platforms and natural languages you need to support.
Some people tend to use `pip` while others use `conda`, and there are instructions for both. For example, to get started with _spaCy_ working with text in English and installed via `conda` on a Linux system:
```
conda install -c conda-forge spacy
python -m spacy download en_core_web_sm
```
BTW, the second line above downloads language resources (models, etc.), and the `_sm` at the end of the download's name indicates a "small" model. There are also "medium" and "large" models, which are considerably bigger. Some of the more advanced features depend on those larger models, although we won't quite be diving to the bottom of that ocean in this (brief) tutorial.
Now let's load _spaCy_ and run some code:
```
import spacy
nlp = spacy.load("en_core_web_sm")
```
That `nlp` variable is now your gateway to all things _spaCy_ and loaded with the `en_core_web_sm` small model for English.
Next, let's run a small "document" through the natural language parser:
```
text = "The rain in Spain falls mainly on the plain."
doc = nlp(text)
for token in doc:
print(token.text, token.lemma_, token.pos_, token.is_stop)
```
First we created a [doc](https://spacy.io/api/doc) from the text, which is a container for a document and all of its annotations. Then we iterated through the document to see what _spaCy_ had parsed.
Good, but it's a lot of info and a bit difficult to read. Let's reformat the _spaCy_ parse of that sentence as a [pandas](https://pandas.pydata.org/) dataframe:
```
import pandas as pd
cols = ("text", "lemma", "POS", "explain", "stopword")
rows = []
for t in doc:
row = [t.text, t.lemma_, t.pos_, spacy.explain(t.pos_), t.is_stop]
rows.append(row)
df = pd.DataFrame(rows, columns=cols)
df
```
Much more readable!
In this simple case, the entire document is merely one short sentence.
For each word in that sentence _spaCy_ has created a [token](https://spacy.io/api/token), and we accessed fields in each token to show:
- raw text
- [lemma](https://en.wikipedia.org/wiki/Lemma_%28morphology%29) – a root form of the word
- [part of speech](https://en.wikipedia.org/wiki/Part_of_speech)
- a flag for whether the word is a _stopword_ – i.e., a common word that may be filtered out
Next let's use the [displaCy](https://ines.io/blog/developing-displacy) library to visualize the parse tree for that sentence:
```
from spacy import displacy
displacy.render(doc, style="dep", jupyter=True)
```
Does that bring back memories of grade school? Frankly, for those of us coming from more of a computational linguistics background, that diagram sparks joy.
But let's back up for a moment. How do you handle multiple sentences?
There are features for _sentence boundary detection_ (SBD) – also known as _sentence segmentation_ – based on the builtin/default [sentencizer](https://spacy.io/api/sentencizer):
```
text = "We were all out at the zoo one day, I was doing some acting, walking on the railing of the gorilla exhibit. I fell in. Everyone screamed and Tommy jumped in after me, forgetting that he had blueberries in his front pocket. The gorillas just went wild."
doc = nlp(text)
for sent in doc.sents:
print(">", sent)
```
When _spaCy_ creates a document, it uses a principle of _non-destructive tokenization_ meaning that the tokens, sentences, etc., are simply indexes into a long array. In other words, they don't carve the text stream into little pieces. So each sentence is a [span](https://spacy.io/api/span) with a _start_ and an _end_ index into the document array:
```
for sent in doc.sents:
print(">", sent.start, sent.end)
```
We can index into the document array to pull out the tokens for one sentence:
```
doc[48:54]
```
Or simply index into a specific token, such as the verb `went` in the last sentence:
```
token = doc[51]
print(token.text, token.lemma_, token.pos_)
```
At this point we can parse a document, segment that document into sentences, then look at annotations about the tokens in each sentence. That's a good start.
## Acquiring Text
Now that we can parse texts, where do we get texts?
One quick source is to leverage the interwebs.
Of course when we download web pages we'll get HTML, and then need to extract text from them.
[Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) is a popular package for that.
First, a little housekeeping:
```
import sys
import warnings
warnings.filterwarnings("ignore")
```
### Character Encoding
The following shows examples of how to use [codecs](https://docs.python.org/3/library/codecs.html) and [normalize unicode](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize). NB: the example text comes from the article "[Metal umlaut](https://en.wikipedia.org/wiki/Metal_umlaut)".
```
x = "Rinôçérôse screams flow not unlike an encyclopædia, \
'TECHNICIÄNS ÖF SPÅCE SHIP EÅRTH THIS IS YÖÜR CÄPTÅIN SPEÄKING YÖÜR ØÅPTÅIN IS DEA̋D' to Spın̈al Tap."
type(x)
```
The variable `x` is a *string* in Python:
```
repr(x)
```
Its translation into [ASCII](http://www.asciitable.com/) is unusable by parsers:
```
ascii(x)
```
Encoding as [UTF-8](http://unicode.org/faq/utf_bom.html) doesn't help much:
```
x.encode('utf8')
```
Ignoring difficult characters is perhaps an even worse strategy:
```
x.encode('ascii', 'ignore')
```
However, one can *normalize* text, then encode…
```
import unicodedata
unicodedata.normalize('NFKD', x).encode('ascii','ignore')
```
Even before this normalization and encoding, you may need to convert some characters explicitly **before** parsing. For example:
```
x = "The sky “above” the port … was the color of ‘cable television’ – tuned to the Weather Channel®"
ascii(x)
```
Consider the results for that line:
```
unicodedata.normalize('NFKD', x).encode('ascii', 'ignore')
```
...which still drops characters that may be important for parsing a sentence.
So a more advanced approach could be:
```
x = x.replace('“', '"').replace('”', '"')
x = x.replace("‘", "'").replace("’", "'")
x = x.replace('…', '...').replace('–', '-')
x = unicodedata.normalize('NFKD', x).encode('ascii', 'ignore').decode('utf-8')
print(x)
```
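These ad-hoc replacements can be collected into a small reusable helper. This is a sketch; the replacement table contains only the handful of characters seen above and would grow with real-world text:

```python
import unicodedata

# Mapping of common "smart" punctuation to ASCII equivalents (extend as needed)
REPLACEMENTS = {
    '“': '"', '”': '"',
    '‘': "'", '’': "'",
    '…': '...', '–': '-',
}

def to_ascii(text: str) -> str:
    """Replace known troublesome characters, then normalize and strip the rest."""
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')

print(to_ascii('The sky “above” the port … was the color of ‘cable television’'))
```

Characters with no known replacement and no ASCII decomposition (such as `®`) are silently dropped, which is the same trade-off as the `'ignore'` strategy above.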
### Parsing HTML
In the following function `get_text()` we'll parse the HTML to find all of the `<p/>` tags, then extract the text for those:
```
from bs4 import BeautifulSoup
import requests
import traceback
def get_text(url):
    buf = []
    try:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        # collect the text of every <p/> element
        for p in soup.find_all("p"):
            buf.append(p.get_text())
        return "\n".join(buf)
    except Exception:
        print(traceback.format_exc())
        sys.exit(-1)
```
Now let's grab some text from online sources.
We can compare open source licenses hosted on the [Open Source Initiative](https://opensource.org/licenses/) site:
```
lic = {}
lic["mit"] = nlp(get_text("https://opensource.org/licenses/MIT"))
lic["asl"] = nlp(get_text("https://opensource.org/licenses/Apache-2.0"))
lic["bsd"] = nlp(get_text("https://opensource.org/licenses/BSD-3-Clause"))
for sent in lic["bsd"].sents:
print(">", sent)
```
One common use case for natural language work is to compare texts. For example, with those open source licenses we can download their text, parse, then compare [similarity](https://spacy.io/api/doc#similarity) metrics among them:
```
pairs = [
["mit", "asl"],
["asl", "bsd"],
["bsd", "mit"]
]
for a, b in pairs:
print(a, b, lic[a].similarity(lic[b]))
```
That is interesting, since the [BSD](https://opensource.org/licenses/BSD-3-Clause) and [MIT](https://opensource.org/licenses/MIT) licenses appear to be the most similar documents.
In fact they are closely related.
Admittedly, there was some extra text included in each document due to the OSI disclaimer in the footer – but this provides a reasonable approximation for comparing the licenses.
## Natural Language Understanding
Now let's dive into some of the _spaCy_ features for NLU.
Given that we have a parse of a document, from a purely grammatical standpoint we can pull the [noun chunks](https://spacy.io/usage/linguistic-features#noun-chunks), i.e., each of the noun phrases:
```
text = "Steve Jobs and Steve Wozniak incorporated Apple Computer on January 3, 1977, in Cupertino, California."
doc = nlp(text)
for chunk in doc.noun_chunks:
print(chunk.text)
```
Not bad. The noun phrases in a sentence generally carry more of the information content, so extracting them works as a simple filter for reducing a long document to a more "distilled" representation.
We can take this approach further and identify [named entities](https://spacy.io/usage/linguistic-features#named-entities) within the text, i.e., the proper nouns:
```
for ent in doc.ents:
print(ent.text, ent.label_)
```
The _displaCy_ library provides an excellent way to visualize named entities:
```
displacy.render(doc, style="ent", jupyter=True)
```
If you're working with [knowledge graph](https://www.akbc.ws/) applications and other [linked data](http://linkeddata.org/), your challenge is to construct links between the named entities in a document and other related information for the entities – which is called [entity linking](http://nlpprogress.com/english/entity_linking.html).
Identifying the named entities in a document is the first step in this particular kind of AI work.
For example, given the text above, one might link the `Steve Wozniak` named entity to a [lookup in DBpedia](http://dbpedia.org/page/Steve_Wozniak).
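As a toy sketch of that last step, one could map an entity's surface text onto a DBpedia resource URL. The URL pattern here mirrors the link above but is only illustrative; real entity linking also requires disambiguation between candidate resources:

```python
from urllib.parse import quote

def dbpedia_url(entity_text: str) -> str:
    """Turn a named entity's surface text into an illustrative DBpedia page URL."""
    # DBpedia resource names use underscores in place of spaces
    resource = quote(entity_text.strip().replace(" ", "_"))
    return f"http://dbpedia.org/page/{resource}"

print(dbpedia_url("Steve Wozniak"))
```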
In more general terms, one can also link _lemmas_ to resources that describe their meanings.
For example, in an early section we parsed the sentence `The gorillas just went wild` and were able to show that the lemma for the word `went` is the verb `go`. At this point we can use a venerable project called [WordNet](https://wordnet.princeton.edu/) which provides a lexical database for English – in other words, it's a computable thesaurus.
There's a _spaCy_ integration for WordNet called
[spacy-wordnet](https://github.com/recognai/spacy-wordnet) by [Daniel Vila Suero](https://twitter.com/dvilasuero), an expert in natural language and knowledge graph work.
Then we'll load the WordNet data via NLTK (these things happen):
```
import nltk
nltk.download("wordnet")
```
Note that _spaCy_ runs as a "pipeline" and provides means for customizing parts of the pipeline in use.
That's excellent for supporting really interesting workflow integrations in data science work.
Here we'll add the `WordnetAnnotator` from the _spacy-wordnet_ project:
```
!pip install spacy-wordnet
from spacy_wordnet.wordnet_annotator import WordnetAnnotator
print("before", nlp.pipe_names)
if "WordnetAnnotator" not in nlp.pipe_names:
nlp.add_pipe(WordnetAnnotator(nlp.lang), after="tagger")
print("after", nlp.pipe_names)
```
Within the English language, some words are infamous for having many possible meanings. For example, click through the results of an online [WordNet](http://wordnetweb.princeton.edu/perl/webwn?s=star&sub=Search+WordNet&o2=&o0=1&o8=1&o1=1&o7=&o5=&o9=&o6=&o3=&o4=&h=) search to see how many senses a single word can carry, then consider the meanings related to the word `withdraw`.
Now let's use _spaCy_ to perform that lookup automatically:
```
token = nlp("withdraw")[0]
token._.wordnet.synsets()
token._.wordnet.lemmas()
token._.wordnet.wordnet_domains()
```
Again, if you're working with knowledge graphs, those "word sense" links from WordNet could be used along with graph algorithms to help identify the meanings for a particular word. They can also be used to develop summaries for larger sections of text through a technique called _summarization_. That's beyond the scope of this tutorial, but it's an interesting current application of natural language in industry.
Going in the other direction, if you know _a priori_ that a document was about a particular domain or set of topics, then you can constrain the meanings returned from _WordNet_. In the following example, we want to consider NLU results that are within Finance and Banking:
```
domains = ["finance", "banking"]
sentence = nlp("I want to withdraw 5,000 euros.")
enriched_sent = []
for token in sentence:
# get synsets within the desired domains
synsets = token._.wordnet.wordnet_synsets_for_domain(domains)
if synsets:
lemmas_for_synset = []
for s in synsets:
# get synset variants and add to the enriched sentence
lemmas_for_synset.extend(s.lemma_names())
enriched_sent.append("({})".format("|".join(set(lemmas_for_synset))))
else:
enriched_sent.append(token.text)
print(" ".join(enriched_sent))
```
That example may look simple but, if you play with the `domains` list, you'll find that the results have a kind of combinatorial explosion when run without reasonable constraints.
Imagine having a knowledge graph with millions of elements: you'd want to constrain searches where possible to avoid having every query take days/weeks/months/years to compute.
Sometimes the problems encountered when trying to understand a text – or better yet when trying to understand a _corpus_ (a dataset with many related texts) – become so complex that you need to visualize it first.
Here's an interactive visualization for understanding texts: [scattertext](https://spacy.io/universe/project/scattertext), a product of the genius of [Jason Kessler](https://twitter.com/jasonkessler).
To install:
```
conda install -c conda-forge scattertext
```
Let's analyze text data from the party conventions during the 2012 US Presidential elections. It may take a minute or two to run, but the results from all that number crunching are worth the wait.
```
!pip install scattertext
import scattertext as st
if "merge_entities" not in nlp.pipe_names:
nlp.add_pipe(nlp.create_pipe("merge_entities"))
if "merge_noun_chunks" not in nlp.pipe_names:
nlp.add_pipe(nlp.create_pipe("merge_noun_chunks"))
convention_df = st.SampleCorpora.ConventionData2012.get_data()
corpus = st.CorpusFromPandas(convention_df,
category_col="party",
text_col="text",
nlp=nlp).build()
```
Once you have the `corpus` ready, generate an interactive visualization in HTML:
```
html = st.produce_scattertext_explorer(
corpus,
category="democrat",
category_name="Democratic",
not_category_name="Republican",
width_in_pixels=1000,
metadata=convention_df["speaker"]
)
```
Now we'll render the HTML – give it a minute or two to load, it's worth the wait...
```
from IPython.display import IFrame
from IPython.core.display import display, HTML
import sys
IN_COLAB = "google.colab" in sys.modules
print(IN_COLAB)
```
**NB: use the following cell on Google Colab:**
```
if IN_COLAB:
display(HTML("<style>.container { width:98% !important; }</style>"))
display(HTML(html))
```
**NB: use the following cell instead on Jupyter in general:**
```
file_name = "foo.html"
with open(file_name, "wb") as f:
f.write(html.encode("utf-8"))
IFrame(src=file_name, width = 1200, height=700)
```
Imagine if you had text from the past three years of customer support for a particular product in your organization. Suppose your team needed to understand how customers have been talking about the product. This _scattertext_ library might come in quite handy! You could cluster (k=2) on _NPS scores_ (a customer evaluation metric), then replace the Democrat/Republican dimension with the top two components from the clustering.
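The clustering idea above can be sketched quickly. The NPS scores here are synthetic (we have no real customer data), and `labels` would then stand in for the party column when building the scattertext corpus:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic NPS scores: a detractor-ish group and a promoter-ish group
rng = np.random.default_rng(42)
nps_scores = np.concatenate([rng.normal(2, 1.5, 50),
                             rng.normal(9, 0.8, 50)])

# k=2 clustering on the one-dimensional scores
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    nps_scores.reshape(-1, 1))

# `labels` could replace the "party" category column in CorpusFromPandas
print(np.bincount(labels))
```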
## Summary
Five years ago, if you’d asked about open source in Python for natural language, a default answer from many people working in data science would've been [NLTK](https://www.nltk.org/).
That project includes just about everything but the kitchen sink and has components which are relatively academic.
Another popular natural language project is [CoreNLP](https://stanfordnlp.github.io/CoreNLP/) from Stanford.
Also quite academic, albeit powerful, though _CoreNLP_ can be challenging to integrate with other software for production use.
Then a few years ago everything in this natural language corner of the world began to change.
The two principal authors for _spaCy_ -- [Matthew Honnibal](https://twitter.com/honnibal) and [Ines Montani](https://twitter.com/_inesmontani) -- launched the project in 2015 and industry adoption was rapid.
They focused on an _opinionated_ approach (do what's needed, do it well, no more, no less) which provided simple, rapid integration into data science workflows in Python, as well as faster execution and better accuracy than the alternatives.
Based on those priorities, _spaCy_ became sort of the opposite of _NLTK_.
Since 2015, _spaCy_ has consistently focused on being an open source project (i.e., depending on its community for directions, integrations, etc.) and being commercial-grade software (not academic research).
That said, _spaCy_ has been quick to incorporate the SOTA advances in machine learning, effectively becoming a conduit for moving research into industry.
It's important to note that machine learning for natural language got a big boost during the mid-2000s as Google began to win international language translation competitions.
Another big change occurred during 2017-2018 when, following the many successes of _deep learning_, those approaches began to out-perform previous machine learning models.
For example, see the [ELMo](https://arxiv.org/abs/1802.05365) work on _language embedding_ by Allen AI, followed by [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) from Google, and more recently [ERNIE](https://medium.com/syncedreview/baidus-ernie-tops-google-s-bert-in-chinese-nlp-tasks-d6a42b49223d) by Baidu -- in other words, the search engine giants of the world have gifted the rest of us with a Sesame Street repertoire of open source embedded language models based on deep learning, which is now _state of the art_ (SOTA).
Speaking of which, to keep track of SOTA for natural language, keep an eye on [NLP-Progress](http://nlpprogress.com/) and [Papers with Code](https://paperswithcode.com/sota).
The use cases for natural language have shifted dramatically over the past two years, as deep learning techniques came to the fore.
Circa 2014, a natural language tutorial in Python might have shown _word count_ or _keyword search_ or _sentiment detection_ where the target use cases were relatively underwhelming.
Circa 2019 we're talking about analyzing thousands of documents for vendor contracts in an industrial supply chain optimization ... or hundreds of millions of documents for policy holders of an insurance company, or gazillions of documents regarding financial disclosures.
More contemporary natural language work tends to be in NLU, often to support construction of _knowledge graphs,_ and increasingly in NLG where large numbers of similar documents can be summarized at human scale.
The [spaCy Universe](https://spacy.io/universe) is a great place to check for deep-dives into particular use cases, and to see how this field is evolving. Some selections from this "universe" include:
- [Blackstone](https://spacy.io/universe/project/blackstone) – parsing unstructured legal texts
- [Kindred](https://spacy.io/universe/project/kindred) – extracting entities from biomedical texts (e.g., Pharma)
- [mordecai](https://spacy.io/universe/project/mordecai) – parsing geographic information
- [Prodigy](https://spacy.io/universe/project/prodigy) – human-in-the-loop annotation to label datasets
- [spacy-raspberry](https://spacy.io/universe/project/spacy-raspberry) – Raspberry PI image for running _spaCy_ and deep learning on edge devices
- [Rasa NLU](https://spacy.io/universe/project/rasa) – Rasa integration for voice apps
Also, a couple super new items to mention:
- [spacy-pytorch-transformers](https://explosion.ai/blog/spacy-pytorch-transformers) to fine tune (i.e., use _transfer learning_ with) the Sesame Street characters and friends: BERT, GPT-2, XLNet, etc.
- [spaCy IRL 2019](https://irl.spacy.io/2019/) conference – check out videos from the talks!
There's so much more that can be done with _spaCy_ – hopefully this tutorial provides an introduction. We wish you all the best in your natural language work.
|
github_jupyter
|
```
%load_ext watermark
%watermark -d -u -a 'Andreas Mueller, Kyle Kastner, Sebastian Raschka' -v -p numpy,scipy,matplotlib,scikit-learn
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
# SciPy 2016 Scikit-learn Tutorial
# In Depth - Support Vector Machines
SVM stands for "support vector machines". They are efficient and easy to use estimators.
They come in two kinds: SVCs, Support Vector Classifiers, for classification problems, and SVRs, Support Vector Regressors, for regression problems.
## Linear SVMs
The SVM module contains LinearSVC, which we already discussed briefly in the section on linear models.
Using ``SVC(kernel="linear")`` will also yield a linear predictor that is only different in minor technical aspects.
## Kernel SVMs
The real power of SVMs lies in using kernels, which allow for non-linear decision boundaries. A kernel defines a similarity measure between data points. The most common are:
- **linear** will give linear decision frontiers. It is the most computationally efficient approach and the one that requires the least amount of data.
- **poly** will give decision frontiers that are polynomial. The order of this polynomial is given by the `degree` argument.
- **rbf** uses 'radial basis functions' centered at each support vector to assemble a decision frontier. The size of the RBFs ultimately controls the smoothness of the decision frontier. RBFs are the most flexible approach, but also the one that will require the largest amount of data.
Predictions in a kernel-SVM are made using the formula
$$
\hat{y} = \text{sign}(\alpha_0 + \sum_{j}\alpha_j y_j k(\mathbf{x^{(j)}}, \mathbf{x}))
$$
where $\mathbf{x}^{(j)}$ are training samples, $y_j$ the corresponding labels, $\mathbf{x}$ is a test sample to predict on, $k$ is the kernel, and $\alpha$ are learned parameters.
What this says is "if $\mathbf{x}$ is similar to $\mathbf{x}^{(j)}$ then they probably have the same label", where the importance of each $\mathbf{x}^{(j)}$ for this decision is learned. [Or something much less intuitive about an infinite dimensional Hilbert-space]
Often only a few samples have non-zero $\alpha$; these are called the "support vectors", from which SVMs get their name.
They are the most discriminant samples.
The most important parameter of the SVM is the regularization parameter $C$, which bounds the influence of each individual sample:
- Low C values: many support vectors... Decision frontier = mean(class A) - mean(class B)
- High C values: small number of support vectors: Decision frontier fully driven by most discriminant samples
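These two regimes are easy to observe directly. The following is a small sketch on synthetic, overlapping blobs (not part of the original tutorial code):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two overlapping classes, so the regularization regime is clearly visible
X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.0, random_state=0)

for C in [0.01, 1.0, 100.0]:
    svm = SVC(kernel="linear", C=C).fit(X, y)
    # low C -> many support vectors; high C -> only the most discriminant ones
    print(f"C={C:g}: {svm.n_support_.sum()} support vectors")
```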
The other important parameters are those of the kernel. Let's look at the RBF kernel in more detail:
$$k(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma ||\mathbf{x} - \mathbf{x'}||^2)$$
```
from sklearn.metrics.pairwise import rbf_kernel
line = np.linspace(-3, 3, 100)[:, np.newaxis]
kernel_value = rbf_kernel(line, [[0]], gamma=1)
plt.plot(line, kernel_value);
```
The RBF kernel has an inverse bandwidth parameter gamma: a large gamma means a very localized influence for each data point, while small values mean a more global influence.
Let's see these two parameters in action:
```
from figures import plot_svm_interactive
plot_svm_interactive()
```
## Exercise: tune a SVM on the digits dataset
```
from sklearn.datasets import load_digits
from sklearn.svm import SVC
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset, apply grid-search
#%load solutions/18_svc_grid.py
```
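One possible sketch of a solution follows. The official solution is loaded from `solutions/18_svc_grid.py`; the parameter grid below is just a reasonable guess:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Grid-search C and gamma for an RBF-kernel SVC
param_grid = {"C": [1, 10, 100], "gamma": [0.0001, 0.001, 0.01]}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
grid.fit(X_train, y_train)

print("best parameters:", grid.best_params_)
print("test-set accuracy:", grid.score(X_test, y_test))
```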
|
github_jupyter
|
# Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Hist, Pmf, Suite, Beta
import thinkplot
```
## Unreliable observation
Suppose that instead of observing coin tosses directly, you measure the outcome using an instrument that is not always correct. Specifically, suppose there is a probability `y` that an actual heads is reported as tails, or actual tails reported as heads.
Write a class that estimates the bias of a coin given a series of outcomes and the value of `y`.
How does the spread of the posterior distribution depend on `y`?
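Before looking at the solution, it helps to write the likelihood out explicitly. Taking $x$ as the coin's probability of heads and $y$ as the probability that the instrument reports the outcome correctly (the convention used in the solution class below):

$$
P(\text{report H} \mid x, y) = x\,y + (1 - x)(1 - y)
$$

$$
P(\text{report T} \mid x, y) = x\,(1 - y) + (1 - x)\,y
$$

At $y = 0.5$ both expressions reduce to $\tfrac{1}{2}$ regardless of $x$, which is why the measurement then carries no information.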
```
# Solution
# Here's a class that models an unreliable coin
class UnreliableCoin(Suite):
def __init__(self, prior, y):
"""
prior: seq or map
y: probability of accurate measurement
"""
super().__init__(prior)
self.y = y
def Likelihood(self, data, hypo):
"""
data: outcome of unreliable measurement, either 'H' or 'T'
hypo: probability of heads, 0-100
"""
x = hypo / 100
y = self.y
if data == 'H':
return x*y + (1-x)*(1-y)
else:
return x*(1-y) + (1-x)*y
# Solution
# Now let's initialize an UnreliableCoin with `y=0.9`:
prior = range(0, 101)
suite = UnreliableCoin(prior, y=0.9)
thinkplot.Pdf(suite)
# Solution
# And update with 3 heads and 7 tails.
for outcome in 'HHHTTTTTTT':
suite.Update(outcome)
thinkplot.Pdf(suite)
# Solution
# Now let's try it out with different values of `y`:
def plot_posterior(y, data):
prior = range(0, 101)
suite = UnreliableCoin(prior, y=y)
for outcome in data:
suite.Update(outcome)
thinkplot.Pdf(suite, label='y=%g' % y)
# Solution
# The posterior distribution gets wider as the measurement gets less reliable.
data = 'HHHTTTTTTT'
plot_posterior(1, data)
plot_posterior(0.8, data)
plot_posterior(0.6, data)
thinkplot.decorate(xlabel='Probability of heads (x)',
ylabel='PMF')
# Solution
# At `y=0.5`, the measurement provides no information, so the posterior equals the prior:
plot_posterior(0.5, data)
thinkplot.decorate(xlabel='Probability of heads (x)',
ylabel='PMF')
# Solution
# As the coin gets less reliable (below `y=0.5`) the distribution gets narrower again.
# In fact, a measurement with `y=0` is just as good as one with `y=1`,
# provided that we know what `y` is.
plot_posterior(0.4, data)
plot_posterior(0.2, data)
plot_posterior(0.0, data)
thinkplot.decorate(xlabel='Probability of heads (x)',
ylabel='PMF')
```
|
github_jupyter
|
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# YahooFinance - Get Stock Update
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/YahooFinance/YahooFinance_Get_Stock_Update.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #yahoofinance #usdinr #plotly #investors #analytics #automation
**Author:** [Megha Gupta](https://github.com/megha2907)
Description: With this template you will get the INR/USD rate visualized on a chart.
## Input
### Import Libraries
```
import naas
from naas_drivers import yahoofinance, plotly
import markdown2
from IPython.display import Markdown as md
```
### Setup Yahoo parameters
👉 Here you can input:<br>
- yahoo ticker : get tickers <a href='https://finance.yahoo.com/trending-tickers?.tsrc=fin-srch'>here</a>
- date from
- date to
```
TICKER = 'INR=X'
date_from = -30
date_to = 'today'
```
### Setup your email parameters
👉 Here you can input your sender email and destination email
Note: emails are sent from notification@naas.ai by default
```
email_to = ["template@naas.ai"]
email_from = None
```
## Model
### Get the data from yahoo finance using naas drivers
```
#data cleaning
df = yahoofinance.get(TICKER, date_from=date_from, date_to=date_to)
df = df.dropna()  # drop rows with missing values
df = df.sort_values("Date", ascending=False).reset_index(drop=True)
df.head()
```
### Extract value from data
```
LASTOPEN = round(df.loc[0, "Open"], 2)
LASTCLOSE = round(df.loc[0, "Close"], 2)
YESTERDAYOPEN = round(df.loc[1, "Open"], 2)
YESTERDAYCLOSE = round(df.loc[1, "Close"], 2)
MAXRATE = round(df['Open'].max(),2)
MXDATEOPEN = df.loc[df['Open'].idxmax(), "Date"].strftime("%Y-%m-%d")
MINRATE = round(df['Open'].min(),2)
MNDATEOPEN = df.loc[df['Open'].idxmin(), "Date"].strftime("%Y-%m-%d")
```
### Plot the data
```
last_date = df.loc[df.index[0], "Date"].strftime("%Y-%m-%d")
output = plotly.linechart(df,
x="Date",
y=['Open','Close'],
title=f"<b>INR USD rates of last month</b><br><span style='font-size: 13px;'>Last value as of {last_date}: Open={LASTOPEN}, Close={LASTCLOSE}</span>")
```
## Output
### Save the dataset in csv
```
df.to_csv(f"{TICKER}_LastMonth.csv", index=False)
```
### Create markdown template
```
%%writefile message.md
Hello world,
The **TICKER** price is Open LASTOPEN and Close LASTCLOSE right now. <br>
**Yesterday Open**: YESTERDAYOPEN <br>
**Yesterday Close**: YESTERDAYCLOSE <br>
The Max Open rate of **TICKER** was on MXDATEOPEN which was MAXRATE. <br>
The Min Open rate of **TICKER** was on MNDATEOPEN which was MINRATE. <br>
Attached is the excel file for your reference. <br>
Have a nice day.
<br>
PS: You can [send the email again](link_webhook) if you need a fresh update.<br>
<div><strong>Full Name</strong></div>
<div>Open source lover | <a href="http://www.naas.ai/" target="_blank">Naas</a></div>
<div>+ 33 1 23 45 67 89</div>
<div><small>This is an automated email from my Naas account</small></div>
```
### Add email template as dependency
```
naas.dependency.add("message.md")
```
### Replace values in template
```
markdown_file = "message.md"
with open(markdown_file, "r") as f:
    content = f.read()
md = markdown2.markdown(content)
md
post = md.replace("LASTOPEN", str(LASTOPEN))
post = post.replace("LASTCLOSE", str(LASTCLOSE))
post = post.replace("YESTERDAYOPEN", str(YESTERDAYOPEN))
post = post.replace("YESTERDAYCLOSE", str(YESTERDAYCLOSE))
post = post.replace("MXDATEOPEN", str(MXDATEOPEN))
post = post.replace("MAXRATE", str(MAXRATE))
post = post.replace("MNDATEOPEN", str(MNDATEOPEN))
post = post.replace("MINRATE", str(MINRATE))
post = post.replace("TICKER", str(TICKER))
post
```
### Add webhook to run your notebook again
```
link_webhook = naas.webhook.add()
```
### Send by email
```
subject = f"📈 {TICKER} Open and close rates as of today"
content = post
files = [f"{TICKER}_LastMonth.csv"]
naas.notification.send(email_to=email_to,
subject=subject,
html=content,
email_from=email_from,
files=files)
```
### Schedule your notebook
Please uncomment and run the cell below to schedule your notebook every day at 8:00 on business days
```
# import naas
# naas.scheduler.add("0 8 * * 1-5")
```
|
github_jupyter
|
Before we begin, let's execute the cell below to display information about the CUDA driver and GPUs running on the server by running the `nvidia-smi` command. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
```
!nvidia-smi
```
## Learning objectives
The **goal** of this lab is to:
- Dig deeper into kernels by analyzing it with Nsight Compute
In the previous section, we learned to optimize the parallel [RDF](../serial/rdf_overview.ipynb) application using OpenACC, and we used NVIDIA Nsight Systems to get a system-wide performance analysis. Now, let's dig deeper and profile the kernel with the Nsight Compute profiler to get detailed performance metrics and find out how OpenACC maps onto the Compute Unified Device Architecture (CUDA) hardware. Note: you will get a better understanding of the GPU architecture in the CUDA notebooks.
To do this, let's use the [solution](../../source_code/openacc/SOLUTION/rdf_collapse.f90) as a reference to get a similar report from Nsight Compute. First, compile the application and profile it with Nsight Systems.
```
#compile the solution for Tesla GPU
!cd ../../source_code/openacc && nvfortran -acc -ta=tesla,lineinfo -Minfo=accel -o rdf nvtx.f90 SOLUTION/rdf_collapse.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
#profile the solution with Nsight Systems
!cd ../../source_code/openacc && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_collapse_solution ./rdf
```
Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openacc/rdf_collapse_solution.qdrep) and open it via the GUI. Now, right-click on the kernel `rdf_98_gpu` and click on "Analyze the Selected Kernel with NVIDIA Nsight Compute" (see the screenshot below).
<img src="../images/f_compute_analyz.png">
Then, make sure to tick the radio button next to "Display the command line to use NVIDIA Nsight Compute CLI".
<img src="../images/compute_command_line.png" width="50%" height="50%">
Then simply copy the command, run it, and analyze the selected kernel.
<img src="../images/f_compute_command.png" width="50%" height="50%">
To profile the selected kernel, run the below cell (by adding `--set full` we make sure to capture all the sections in Nsight Compute profiler):
```
#profile the selected kernel in the solution with Nsight compute
!cd ../../source_code/openacc && ncu --set full --launch-skip 1 --launch-count 1 -o rdf_collapse_solution ./rdf
```
Let's check out the Nsight Compute profiler's report together. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openacc/rdf_collapse_solution.ncu-rep) and open it via the GUI. Start with the first section, "GPU Speed Of Light", which gives an overview of the utilization of compute and memory resources on the GPU. As you can see from the screenshot below, the Speed of Light (SOL) report shows an achieved utilization of 30.04% for the SM and 70.10% for memory.
<img src="../images/f_sol.png">
**Extra**: If you use the baseline feature in Nsight Compute to compare the analysis of this kernel (the RDF version using data directives and the collapse clause) with the very first parallel version, where we only added parallel directives and used managed memory, you can see how much improvement we got (see the screenshot below for reference):
<img src="../images/f_sol_baseline.png">
It is clear that we were able to cut the execution time in half (red rectangle) and increase the SM and memory utilization (green rectangle). However, the device is still underutilized. The roofline analysis indicates that the application is bandwidth bound: the kernel exhibits low compute throughput, memory is more heavily utilized than compute, and memory is clearly the bottleneck.
<img src="../images/f_roofline_collapse.png">
The Nsight Compute profiler suggests checking out the "Memory Workload Analysis" report sections to see where the memory system bottleneck is. There are 9.85 M instructions loading from or storing to the global memory space. The link going from L1/TEX Cache to Global shows 8.47 M requests generated due to global load instructions.
<img src="../images/f_memory_collapse.png">
Let's have a look at the table showing the L1/TEX Cache. The "Sectors/Req" column shows the average ratio of sectors to requests for the L1 cache. For the same number of active threads in a warp, smaller numbers imply a more efficient memory access pattern. For warps with 32 active threads, the optimal ratios per access size are: `32-bit: 4`, `64-bit: 8`, `128-bit: 16`. Smaller ratios indicate some degree of uniformity or overlapped loads within a cache line. Check out the [GPU Architecture Terminologies](../GPU_Architecture_Terminologies.ipynb) notebook to learn more about threads and warps.
In the example screenshot, we can see that this number is higher than the optimal ratio. This implies uncoalesced memory accesses and results in increased memory traffic: we are not efficiently utilizing the bytes transferred.
<img src="../images/f_memory_sec.png">
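As a sanity check on those optimal ratios, here is a quick back-of-the-envelope calculation (a sketch assuming the usual 32-byte memory sectors and 32 active threads per warp):

```python
SECTOR_BYTES = 32  # size of one memory sector
WARP_SIZE = 32     # threads per warp

def ideal_sectors_per_request(access_bytes):
    # a fully coalesced warp touches WARP_SIZE * access_bytes contiguous bytes,
    # which span this many 32-byte sectors
    return WARP_SIZE * access_bytes // SECTOR_BYTES

print(ideal_sectors_per_request(4))   # 32-bit accesses -> 4
print(ideal_sectors_per_request(8))   # 64-bit accesses -> 8
print(ideal_sectors_per_request(16))  # 128-bit accesses -> 16
```

Any measured Sectors/Req above these values means the warp is spreading its loads over more sectors than a coalesced pattern would need.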
Now, let's have a look at the "Source Counters" section located at the end of "Details" page of the profiler report. The section contains tables indicating the N highest or lowest values of one or more metrics in the selected kernel source code. Hotspot tables point out performance problems in the source.
<img src="../images/f_source_loc.png">
We can select the location links to navigate directly to this location in the "Source" page. Moreover, you can hover the mouse over a value to see which metrics contribute to it.
<img src="../images/f_source_hover.png">
The "Source" page displays metrics that can be correlated with source code. It is filtered to only show (SASS) functions that were executed in the kernel launch.
<!--<img src="../images/source_sass_collapse.png">-->
<img src="../images/f_source_sass.png">
The "Source" section in the "Details" page indicates that the issue is *uncoalesced Global memory access*.
<img src="../images/uncoalesced_hint.png">
**Memory Coalescing**
On GPUs, threads are executed in warps. When a group of 32 contiguous threads (a *warp*) accesses adjacent locations in memory, we have *coalesced memory* access, which results in few transactions and high utilization. However, if a warp of 32 threads accesses scattered memory locations, we have *uncoalesced memory* access, which results in a high number of transactions and low utilization.
<img src="../images/coalesced_mem.png">
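The effect can be illustrated with a small model of how many 32-byte sectors a single warp touches (a sketch, not the profiler's exact accounting):

```python
def sectors_touched(byte_addresses, sector_bytes=32):
    # count the distinct 32-byte sectors a warp's addresses fall into
    return len({addr // sector_bytes for addr in byte_addresses})

# coalesced: 32 threads read consecutive 4-byte values
coalesced = [tid * 4 for tid in range(32)]
# uncoalesced: each thread reads from a different 128-byte row
strided = [tid * 128 for tid in range(32)]

print(sectors_touched(coalesced))  # 4 sectors
print(sectors_touched(strided))    # 32 sectors -> 8x the memory traffic
```

The strided pattern moves the same amount of useful data but forces eight times as many sector transfers, which is exactly the kind of waste the "Sectors/Req" metric exposes.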
Without changing the data structure and refactoring the code, we cannot fix this issue and improve the performance further using OpenACC in a straightforward way. The next step would be to look into how to optimize this application further with CUDA and perhaps take advantage of shared memory.
## Post-Lab Summary
If you would like to download this lab for later viewing, it is recommended you go to your browser's File menu (not the Jupyter notebook File menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip file of the files you've been working on, and download it with the link below.
```
%%bash
cd ..
rm -f nways_files.zip
zip -r nways_files.zip *
```
**After** executing the above zip command, you should be able to download and save the zip file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../nways_files.zip).
Let us now go back to parallelizing our code using other approaches.
**IMPORTANT**: Please click on **HOME** to go back to the main notebook for *N ways of GPU programming for MD* code.
-----
# <p style="text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em"> <a href=../../../nways_MD_start.ipynb>HOME</a></p>
-----
# Links and Resources
[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)
[NVIDIA Nsight Compute](https://developer.nvidia.com/nsight-compute)
[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)
**NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).
Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
---
## Licensing
This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
|
github_jupyter
|
## Face and Facial Keypoint detection
After you've trained a neural network to detect facial keypoints, you can apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input, so to detect any face, you'll first have to do some pre-processing.
1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.
---
In the next python cell we load in required libraries for this section of the project.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
#### Select an image
Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.
```
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```
## Detect all faces in an image
Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.
In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.
An example of face detection on a variety of images is shown below.
<img src='images/haar_cascade_ex.png' width=80% height=80%/>
```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
# draw a rectangle around each detected face
# you may also need to change the width of the rectangle drawn depending on image resolution
cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```
## Loading in a trained model
Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.
First, load your best model by its filename.
```
import torch
from models import Net
net = Net()
## COMPLETED: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('saved_models/keypoints_model_rms_prop_20.pt'))
## print out your net and prepare it for testing (uncomment the line below)
net.eval()
```
## Keypoint detection
Now, we'll loop over each detected face in an image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input images.
### TODO: Transform each detected face into an input Tensor
You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.
**Hint**: The sizes of faces detected by a Haar detector and the faces your network has been trained on are of different sizes. If you find that your model is generating keypoints that are too small for a given face, try adding some padding to the detected `roi` before giving it as input to your model.
You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps.
### TODO: Detect and display the predicted keypoints
After each face has been appropriately converted into an input Tensor for your network, you can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following, with facial keypoints that closely match the facial features on each individual face:
<img src='images/michelle_detected.png' width=30% height=30%/>
```
def show_keypoints(image, keypoints):
plt.figure(figsize=(5,5))
keypoints = keypoints.data.numpy()
keypoints = keypoints * 60.0 + 96
keypoints = np.reshape(keypoints, (68, -1)) # reshape to (68, 2) keypoints
image = image.numpy()
image = np.transpose(image, (1, 2, 0)) # (H x W x C)
image = np.squeeze(image)
plt.imshow(image, cmap='gray')
plt.scatter(keypoints[:, 0], keypoints[:, 1], s=40, marker='.', c='m')
from torch.autograd import Variable
image_copy = np.copy(image)
# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:
# Select the region of interest that is the face in the image
# roi = image_copy[y:y+h, x:x+w]
# pad the detected box, clamping to the image border so negative slice indices don't wrap
roi = image_copy[max(0, y-30):y+h+50, max(0, x-30):x+w+50]
## COMPLETED: Convert the face region from RGB to grayscale
roi = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)
## COMPLETED: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
roi = roi/255
## COMPLETED: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
roi = cv2.resize(roi, (224, 224))
## COMPLETED: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
if(len(roi.shape) == 2):
roi = roi.reshape(roi.shape[0], roi.shape[1], 1)
roi = roi.transpose((2, 0, 1))
print(roi.shape)
## COMPLETED: Make facial keypoint predictions using your loaded, trained network
tensor = Variable(torch.from_numpy(roi))
tensor = tensor.type(torch.FloatTensor)
tensor.unsqueeze_(0)
keypoints = net(tensor)
## COMPLETED: Display each detected face and the corresponding keypoints
show_keypoints(tensor.squeeze(0), keypoints)
```
|
github_jupyter
|
# Recruitment Across Datasets
In this notebook, we further examine the capability of ODIF to transfer across datasets, building upon the prior FTE/BTE experiments on MNIST and Fashion-MNIST. Using the datasets found in [this repo](https://github.com/neurodata/LLF_tidy_images), we perform a series of experiments to evaluate the transfer efficiency and recruitment capabilities of ODIF across five different datasets. The datasets and their content are as follows:
- Caltech-101: contains images of objects in 101 categories
- CIFAR-10: contains 32x32 color images of objects in 10 classes
- CIFAR-100: contains 32x32 color images of objects in 100 classes
- Food-101: contains images of dishes in 101 categories
- DTD: contains images of describable textures
```
import functions.recruitacrossdatasets_functions as fn
```
**Note:** This notebook tutorial uses functions stored externally within `functions/recruitacrossdatasets_functions.py` to simplify presentation of code. These functions are imported above, along with other libraries.
## FTE/BTE Experiment
We begin our examination of ODIF's transfer capabilities across datasets with the FTE/BTE experiment, which provides background metrics for what the expected performance should be. This helps inform the later recruitment experiment.
### Base Experiment
#### Import and Process Data
Let's first import the data and perform some preprocessing so that it is in the correct format for feeding to ODIF. The following function does so for us:
```
data, classes = fn.import_data(normalize=False)
```
#### Define Hyperparameters
We then define the hyperparameters to be used for the experiment:
- `model`: model to be used for FTE/BTE experiment
- `num_tasks`: number of tasks
- `num_trees`: number of trees
- `reps`: number of repetitions, fewer than in the actual figures to reduce running time
```
##### MAIN HYPERPARAMS ##################
model = "odif"
num_tasks = 5
num_trees = 10
reps = 4
#########################################
```
Taking each dataset as a separate task, we have `5` tasks, and we also set a default of `10` trees, with the experiment being run for `4` reps.
Note, in comparison to previous FTE/BTE experiments, the lack of the `num_points_per_task` parameter. Here, we sample based on the label with the fewest samples and take 31 samples from each label.
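The balanced sampling described above can be sketched as follows (`balanced_subsample` is an illustrative helper, not the function used inside `fn.import_data`):

```python
import random

def balanced_subsample(samples_by_label, n_per_label=31, seed=0):
    # draw the same number of samples from every label so no dataset
    # dominates purely by size
    rng = random.Random(seed)
    return {
        label: rng.sample(samples, min(n_per_label, len(samples)))
        for label, samples in samples_by_label.items()
    }

subset = balanced_subsample({"cat": list(range(100)), "dog": list(range(40))})
print({k: len(v) for k, v in subset.items()})  # 31 samples per label
```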
#### Run Experiment and Plot Results
First, we call the function to run the experiment:
```
accuracy_all_task = fn.ftebte_exp(
data, classes, model, num_tasks, num_trees, reps, shift=0
)
```
Using the accuracies over all tasks, we can calculate the error, the forwards transfer efficiency (FTE), the backwards transfer efficiency (BTE), and the overall transfer efficiency (TE).
```
err, bte, fte, te = fn.get_metrics(accuracy_all_task, num_tasks)
```
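Roughly speaking, these metrics are ratios of error rates: forward transfer compares a single-task learner against the learner after seeing tasks 1..t, and backward transfer compares that against the learner after seeing all tasks. A sketch with hypothetical error values (not taken from this experiment):

```python
# hypothetical error rates on some task t
err_single = 0.40    # learner trained on task t alone
err_up_to_t = 0.32   # learner after seeing tasks 1..t
err_final = 0.28     # learner after seeing all tasks

fte = err_single / err_up_to_t   # forward transfer efficiency (>1 is positive transfer)
bte = err_up_to_t / err_final    # backward transfer efficiency
te = err_single / err_final      # overall transfer efficiency

assert abs(fte * bte - te) < 1e-12  # TE factors into FTE * BTE
```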
These results are therefore plotted using the function as follows:
```
fn.plot_ftebte(num_tasks, err, bte, fte, te)
```
As can be seen above, there is generally positive forward and backward transfer efficiency when evaluating transfer across datasets, even though the datasets contain very different content.
### Varying the Number of Trees
We were also curious how changing the number of trees would affect the results of the FTE/BTE experiment across datasets, and therefore also reran the experiment using `50` trees:
```
##### MAIN HYPERPARAMS ##################
model = "odif"
num_tasks = 5
num_trees = 50
reps = 4
#########################################
```
Running the experiment, we find the following results:
```
accuracy_all_task = fn.ftebte_exp(
data, classes, model, num_tasks, num_trees, reps, shift=0
)
err, bte, fte, te = fn.get_metrics(accuracy_all_task, num_tasks)
fn.plot_ftebte(num_tasks, err, bte, fte, te)
```
It seems that more trees lead to lower transfer efficiency.
We use `10` trees for the remainder of the experiments to save on computing power.
## Recruitment Experiment
Now that we have roughly assessed the performance of ODIF via the FTE/BTE experiment, we are also interested in which recruitment scheme works the best for this set of data.
### Base Experiment
To quickly reiterate some of the background on the recruitment experiment, there are generally two main schemes for developing lifelong learning algorithms: building and reallocating. The former involves adding new resources as new data comes in, whereas the latter involves compressing current representations to make room for new ones. We want to examine whether current resources could be better leveraged by testing a range of approaches:
1. **Building (default for Omnidirectional Forest):** train `num_trees` new trees
2. **Uncertainty forest:** ignore all prior trees
3. **Recruiting:** select the `num_trees` trees (out of all trees trained on the previous tasks) that perform best on the newly introduced final task
4. **Hybrid:** build `num_trees/2` new trees AND recruit the `num_trees/2` best-performing trees
We compare the results of these approaches based on varying training sample sizes, in the range of `[1, 5, 10, 25]` samples per label.
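Scheme 3 ("recruiting") amounts to a top-k selection on a validation split; `recruit_best_trees` and `accuracy` below are illustrative stand-ins for the library's internals:

```python
def recruit_best_trees(trees, X_val, y_val, num_trees, accuracy):
    # score every existing tree on the new task's validation split
    # and keep only the top performers
    ranked = sorted(trees, key=lambda t: accuracy(t, X_val, y_val), reverse=True)
    return ranked[:num_trees]

# toy example: "trees" are just numbers and "accuracy" returns the tree itself
best = recruit_best_trees([3, 1, 4, 1, 5], None, None, 2, lambda t, X, y: t)
print(best)  # [5, 4]
```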
#### Define Hyperparameters
As always, we define the hyperparameters:
- `num_tasks`: number of tasks
- `num_trees`: number of trees
- `reps`: number of repetitions
- `estimation_set`: proportion of the final task's data used for training (the remaining `1-estimation_set` is used for validation, i.e., selecting the best trees)
```
############################
### Main hyperparameters ###
############################
num_tasks = 5
num_trees = 10
reps = 4
estimation_set = 0.63
```
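The `estimation_set` split can be sketched as follows (an illustrative helper, assuming the final task's samples arrive as a list):

```python
import random

def estimation_validation_split(samples, estimation_set=0.63, seed=0):
    # shuffle, then use the first fraction for training ("estimation")
    # and the remainder for validating/selecting trees
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * estimation_set)
    return shuffled[:cut], shuffled[cut:]

train, val = estimation_validation_split(list(range(100)))
print(len(train), len(val))  # 63 37
```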
#### Run Experiment and Plot Results
We call our experiment function and input the main hyperparameters:
```
# run recruitment experiment
means, stds, last_task_sample = fn.recruitment_exp(
data, classes, num_tasks, num_trees, reps, estimation_set, shift=0
)
```
And then we plot the results:
```
# plot results
fn.recruitment_plot(means, stds, last_task_sample, num_tasks)
```
We therefore see that though generalization error remains high on the final task, the lifelong learning algorithm still outperforms the other recruitment schemes overall.
### Shifting Dataset Order
Since the above experiment involves fixing DTD as the final dataset, a further experiment involves shifting the order of datasets, so that there is a different dataset as task 5 each time. This allows us to see whether different dataset content would significantly impact the results on the final task.
To do so, we define the `shift` parameter in our call to the `recruitment_exp` function. This, in turn, calls the `shift_data` function, which moves the first task to the end and thus reorders the sequence of tasks.
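The rotation performed by `shift_data` can be sketched as a simple list rotation (a sketch; the real function presumably also reorders the corresponding class labels):

```python
def shift_data(tasks, shift):
    # move the first `shift` tasks to the end of the sequence
    return tasks[shift:] + tasks[:shift]

datasets = ["Caltech-101", "CIFAR-10", "CIFAR-100", "Food-101", "DTD"]
print(shift_data(datasets, 1))
# ['CIFAR-10', 'CIFAR-100', 'Food-101', 'DTD', 'Caltech-101']
```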
More specifically, if we define `shift=1`, as done below, we would get the following order of datasets:
1. CIFAR-10
2. CIFAR-100
3. Food-101
4. DTD
5. Caltech-101
```
# run recruitment experiment
means, stds, last_task_sample = fn.recruitment_exp(
data, classes, num_tasks, num_trees, reps, estimation_set, shift=1
)
# plot results
fn.recruitment_plot(means, stds, last_task_sample, num_tasks)
```
A `shift=2` results in a dataset order of:
1. CIFAR-100
2. Food-101
3. DTD
4. Caltech-101
5. CIFAR-10
```
# run recruitment experiment
means, stds, last_task_sample = fn.recruitment_exp(
data, classes, num_tasks, num_trees, reps, estimation_set, shift=2
)
# plot results
fn.recruitment_plot(means, stds, last_task_sample, num_tasks)
```
`shift=3` gives us:
1. Food-101
2. DTD
3. Caltech-101
4. CIFAR-10
5. CIFAR-100
```
# run recruitment experiment
means, stds, last_task_sample = fn.recruitment_exp(
data, classes, num_tasks, num_trees, reps, estimation_set, shift=3
)
# plot results
fn.recruitment_plot(means, stds, last_task_sample, num_tasks)
```
And finally, `shift=4` yields:
1. DTD
2. Caltech-101
3. CIFAR-10
4. CIFAR-100
5. Food-101
```
# run recruitment experiment
means, stds, last_task_sample = fn.recruitment_exp(
data, classes, num_tasks, num_trees, reps, estimation_set, shift=4
)
# plot results
fn.recruitment_plot(means, stds, last_task_sample, num_tasks)
```
Throughout all the above experiments, even though generalization error remains high due to the sheer number of different labels across all the different datasets, our lifelong learning algorithm still outperforms the other recruitment methods.
## Other Experiments
### Effect of Normalization
When examining data across different datasets, normalization and standardization of data is often of interest. However, this can also lead to loss of information, as we are placing all the images on the same scale. As a final experiment, we also look into the effect of normalization on the FTE/BTE results.
#### Import and Process Data
The `import_data` function has a `normalize` parameter, which specifies whether to skip normalization, normalize across each dataset, or normalize each image individually. Previously, for the original FTE/BTE experiment, we set `normalize=False`.
Here, we look at the other two options.
```
# normalize across dataset
data1, classes1 = fn.import_data(normalize="dataset")
# normalize across each image
data2, classes2 = fn.import_data(normalize="image")
```
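A plausible reading of the two options, sketched with NumPy (the exact behavior lives inside `fn.import_data`):

```python
import numpy as np

# toy batch of 4 RGB "images" with values in [0, 255]
imgs = np.arange(4 * 8 * 8 * 3, dtype=float).reshape(4, 8, 8, 3) % 256

# normalize across the dataset: one min/max per channel over all images
lo = imgs.min(axis=(0, 1, 2))
hi = imgs.max(axis=(0, 1, 2))
dataset_norm = (imgs - lo) / (hi - lo)

# normalize per image: each image's channels are scaled to [0, 1] independently
lo_im = imgs.min(axis=(1, 2), keepdims=True)
hi_im = imgs.max(axis=(1, 2), keepdims=True)
image_norm = (imgs - lo_im) / (hi_im - lo_im)

print(dataset_norm.min(), dataset_norm.max())  # 0.0 1.0
print(image_norm.min(), image_norm.max())      # 0.0 1.0
```

Both land in [0, 1], but per-image scaling discards brightness differences between images, while dataset-wide scaling preserves them.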
#### Define Hyperparameters
We use the same parameters as before:
```
##### MAIN HYPERPARAMS ##################
model = "odif"
num_tasks = 5
num_trees = 10
reps = 4
#########################################
```
#### Run Experiment and Plot Results
We first run the FTE/BTE experiment by normalizing across each dataset, such that the images in each dataset have a range of [0,1] in each channel.
```
accuracy_all_task = fn.ftebte_exp(
data1, classes1, model, num_tasks, num_trees, reps, shift=0
)
err, bte, fte, te = fn.get_metrics(accuracy_all_task, num_tasks)
fn.plot_ftebte(num_tasks, err, bte, fte, te)
```
We then run the FTE/BTE experiment with normalizing per image, so that each channel in each image is scaled to a range of [0,1].
```
accuracy_all_task = fn.ftebte_exp(
data2, classes2, model, num_tasks, num_trees, reps, shift=0
)
err, bte, fte, te = fn.get_metrics(accuracy_all_task, num_tasks)
fn.plot_ftebte(num_tasks, err, bte, fte, te)
```
It seems that normalizing across the dataset and normalizing within each image both yield results similar to not normalizing, so we did not perform further experiments to explore this area at this time.
|
github_jupyter
|
```
# noexport
import os
os.system('export_notebook browser_libs.ipynb')
import os
if 'R_HOME' not in os.environ:
os.environ['R_HOME'] = '/usr/lib/R'
%load_ext rpy2.ipython
import json
import urllib.request as req
from memoize import memoize # pip install memoize2
from pymongo import MongoClient
from getsecret import getsecret
import urllib.parse
import moment
import datetime
import pandas as pd
#def get_user_to_all_install_ids():
# user_to_install = json.loads(req.urlopen("http://localhost:5001/get_user_to_all_install_ids").read().decode("utf-8"))
# return user_to_install
#def get_collection_names():
# collection_names = json.loads(req.urlopen("http://localhost:5001/listcollections").read().decode("utf-8"))
# return collection_names
def get_session_info_list_for_user(userid):
output = json.loads(req.urlopen("http://localhost:5001/get_session_info_list_for_user?userid=" + userid).read().decode("utf-8"))
return output
#collection_names = get_collection_items('collections')
#print(len(collection_names))
#print(get_collection_for_user('e0ea34c81d4b50cddc7bd752', 'synced:seconds_on_domain_per_session')[0])
@memoize
def download_url(url):
return req.urlopen(url).read().decode("utf-8")
def getjson(path, params={}):
querystring = urllib.parse.urlencode(params)
url = 'http://localhost:5001/' + path + '?' + querystring
return json.loads(download_url(url))
def make_getjson_func(path, *param_list):
def f(*arg_list):
if len(param_list) != len(arg_list):
print('missing some number of arguments. expected parameters: ' + str(param_list))
param_dict = {}
for param,arg in zip(param_list, arg_list):
param_dict[param] = arg
return getjson(path, param_dict)
return f
def expose_getjson(func_name, *args):
f = make_getjson_func(func_name, *args)
globals()[func_name] = f
return f
expose_getjson('get_session_info_list_for_user', 'userid')
#expose_getjson('get_user_to_all_install_ids')
#print(get_user_to_all_install_ids()['e0ea34c81d4b50cddc7bd752'])
#get_session_info_list = make_getjson_func('get_session_info_list_for_user', 'userid')
#print(get_session_info_list_for_user('e0ea34c81d4b50cddc7bd752')[0])
#def get_user_to_all_install_ids(user):
# return getjson
@memoize
def get_db(): # this is for the browser
client = MongoClient(getsecret("EXT_URI"))
db = client[getsecret("DB_NAME")]
return db
@memoize
def get_collection_items(collection_name):
db = get_db()
return [x for x in db[collection_name].find({})]
def get_collection_for_user(user, collection_name):
return get_collection_items(user + '_' + collection_name)
def get_collection_names():
collection_names = get_collection_items('collections')
return [x['_id'] for x in collection_names]
def get_users_with_goal_frequency_set():
output = []
collection_names = get_collection_names()
for collection_name in collection_names:
if not collection_name.endswith('_synced:goal_frequencies'):
continue
username = collection_name.replace('_synced:goal_frequencies', '')
output.append(username)
return output
@memoize
def get_user_to_all_install_ids():
install_info_list = get_collection_items('installs')
output = {}
for install_info in install_info_list:
if 'user_id' not in install_info:
continue
user_id = install_info['user_id']
install_id = install_info.get('install_id', None)
if user_id not in output:
output[user_id] = []
if install_id not in output[user_id]:
output[user_id].append(install_id)
return output
#print(get_user_to_all_install_ids()['e0ea34c81d4b50cddc7bd752'])
@memoize
def get_all_install_ids_for_user(user):
seconds_on_domain_per_session = get_collection_for_user(user, 'synced:seconds_on_domain_per_session')
interventions_active_for_domain_and_session = get_collection_for_user(user, 'synced:interventions_active_for_domain_and_session')
user_to_all_install_ids = get_user_to_all_install_ids()
output = []
output_set = set()
if user in user_to_all_install_ids:
for install_id in user_to_all_install_ids[user]:
if install_id not in output_set:
output_set.add(install_id)
output.append(install_id)
for item in seconds_on_domain_per_session:
if 'install_id' not in item:
continue
install_id = item['install_id']
if install_id not in output_set:
output_set.add(install_id)
output.append(install_id)
for item in interventions_active_for_domain_and_session:
if 'install_id' not in item:
continue
install_id = item['install_id']
if install_id not in output_set:
output_set.add(install_id)
output.append(install_id)
return output
@memoize
def get_is_user_unofficial(user):
seconds_on_domain_per_session = get_collection_for_user(user, 'synced:seconds_on_domain_per_session')
interventions_active_for_domain_and_session = get_collection_for_user(user, 'synced:interventions_active_for_domain_and_session')
#print(seconds_on_domain_per_session[0])
#print(seconds_on_domain_per_session[0]['developer_mode'])
for item in seconds_on_domain_per_session:
if 'unofficial_version' in item:
return True
if 'developer_mode' in item and item['developer_mode'] == True:
return True
return False
@memoize
def get_is_valid_user(user):
install_ids = get_all_install_ids_for_user(user)
if len(install_ids) != 1:
return False
return True
@memoize
def get_valid_user_list():
user_list = get_users_with_goal_frequency_set()
output = []
for user in user_list:
if not get_is_valid_user(user):
continue
output.append(user)
return output
#get_sessions_for_user('e0ea34c81d4b50cddc7bd752')
#valid_user_list = get_valid_user_list()
#print(len(valid_user_list))
#get_is_user_unofficial('e0ea34c81d4b50cddc7bd752')
#get_is_user_unofficial('c11e5f2d93f249b5083989b2')
'''
function convert_date_to_epoch(date) {
let start_of_epoch = moment().year(2016).month(0).date(1).hours(0).minutes(0).seconds(0).milliseconds(0)
let year = parseInt(date.substr(0, 4))
let month = parseInt(date.substr(4, 2)) - 1
let day = parseInt(date.substr(6, 2))
let date_moment = moment().year(year).month(month).date(day).hours(0).minutes(0).seconds(0).milliseconds(0)
return date_moment.diff(start_of_epoch, 'days')
}
function convert_epoch_to_date(epoch) {
let start_of_epoch = moment().year(2016).month(0).date(1).hours(0).minutes(0).seconds(0).milliseconds(0)
start_of_epoch.add(epoch, 'days')
return start_of_epoch.format('YYYYMMDD')
}
function timestamp_to_epoch(timestamp) {
let start_of_epoch = moment().year(2016).month(0).date(1).hours(0).minutes(0).seconds(0).milliseconds(0)
return moment(timestamp).diff(start_of_epoch, 'days')
}
'''
def convert_date_to_epoch(date):
#start_of_epoch = moment.now().timezone("US/Pacific").replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
start_of_epoch = moment.now().replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
year = int(date[0:4])
month = int(date[4:6])
day = int(date[6:8])
#date_moment = moment.now().timezone("US/Pacific").replace(years=year, months=month, days=day, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
date_moment = moment.now().replace(years=year, months=month, days=day, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
return date_moment.diff(start_of_epoch).days
def convert_epoch_to_date(epoch):
#start_of_epoch = moment.now().timezone("US/Pacific").replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
start_of_epoch = moment.now().replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
start_of_epoch.add(days=epoch)
return start_of_epoch.format('YYYYMMDD')
def timestamp_to_epoch(timestamp):
#start_of_epoch = moment.now().timezone("US/Pacific").replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
#return moment.unix(timestamp).timezone("US/Pacific").diff(start_of_epoch).days
start_of_epoch = moment.now().replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
return moment.unix(timestamp).diff(start_of_epoch).days
def timestamp_to_isoweek(timestamp):
isoWeek = int(datetime.datetime.fromtimestamp(timestamp/1000).isocalendar()[1])
return isoWeek
def epoch_to_isoweek(epoch):
#start_of_epoch = moment.now().timezone("US/Pacific").replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
start_of_epoch = moment.now().replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
start_of_epoch.add(days=epoch)
timestamp_seconds = start_of_epoch.epoch()
isoWeek = int(datetime.datetime.fromtimestamp(timestamp_seconds).isocalendar()[1])
return isoWeek
#print(timestamp_to_epoch(1537059309631))
#print(convert_epoch_to_date(988))
#print(convert_date_to_epoch('20180915'))
#print(convert_date_to_epoch('20180917'))
#print(epoch_to_isoweek(990))
#a=moment.unix(1537221946630)
#dir(a)
#print(timestamp_to_isoweek(1537221946630))
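# A stdlib-only sketch of the epoch/date/isoweek conversions above, assuming
# only that "epoch" means days since 2016-01-01 (the underscore-prefixed names
# are illustrative, not part of the codebase; it avoids the moment dependency):
import datetime as _dt
_EPOCH_START = _dt.date(2016, 1, 1)
def _date_str_to_epoch(date_str):
    d = _dt.date(int(date_str[0:4]), int(date_str[4:6]), int(date_str[6:8]))
    return (d - _EPOCH_START).days
def _epoch_to_date_str(epoch):
    return (_EPOCH_START + _dt.timedelta(days=epoch)).strftime('%Y%m%d')
def _epoch_to_isoweek(epoch):
    return (_EPOCH_START + _dt.timedelta(days=epoch)).isocalendar()[1]
#print(_date_str_to_epoch('20180915'))  # 988
#print(_epoch_to_date_str(988))         # '20180915'
#print(_epoch_to_isoweek(990))          # 38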
'''
@memoize
def get_frequency_info_for_user_epoch(user, epochnum):
# returns a dictionary mapping goal name -> 1 if frequent, 0 if infrequent
isoweek_input = epoch_to_isoweek(epochnum)
goal_frequencies = get_collection_for_user(user, 'synced:goal_frequencies')
output = {}
conflict_info_list = []
for item in goal_frequencies:
timestamp_local = item['timestamp_local']
isoweek_local = timestamp_to_isoweek(timestamp_local)
algorithm_info = json.loads(item['val'])
algorithm_name = algorithm_info['algorithm']
onweeks = algorithm_info['onweeks']
timestamp = algorithm_info['timestamp']
if algorithm_name == 'isoweek_random':
is_frequent = onweeks[isoweek_input] == 1
elif algorithm_name == 'isoweek_alternating':
is_frequent = isoweek_input % 2 == onweeks
else:
raise Exception('unknown frequency selection algorithm ' + algorithm)
goal = item['key']
if goal in output:
conflict_info = {'item': item, 'existing_is_frequent': output[goal], 'is_frequent': is_frequent}
conflict_info_list.append(conflict_info)
continue
output[goal] = is_frequent
#print(goal)
#print(is_frequent)
#print(algorithm_info)
#print(item)
return output
'''
@memoize
def get_frequency_info_for_user_epoch(user, epochnum):
# returns a dictionary mapping goal name -> 1 if frequent, 0 if infrequent
isoweek_input = epoch_to_isoweek(epochnum)
goal_frequencies = get_collection_for_user(user, 'synced:goal_frequencies')
output = {}
conflict_info_list = []
goal_frequencies.sort(key=lambda x: x['timestamp'])
for item in goal_frequencies:
timestamp_local = item['timestamp_local']
isoweek_local = timestamp_to_isoweek(timestamp_local)
algorithm_info = json.loads(item['val'])
algorithm_name = algorithm_info['algorithm']
onweeks = algorithm_info['onweeks']
timestamp = algorithm_info['timestamp']
if algorithm_name == 'isoweek_random':
is_frequent = onweeks[isoweek_input] == 1
elif algorithm_name == 'isoweek_alternating':
is_frequent = isoweek_input % 2 == onweeks
else:
            raise Exception('unknown frequency selection algorithm ' + algorithm_name)
goal = item['key']
#if goal in output:
# conflict_info = {'item': item, 'existing_is_frequent': output[goal], 'is_frequent': is_frequent}
# conflict_info_list.append(conflict_info)
# continue
output[goal] = is_frequent
#print(goal)
#print(is_frequent)
#print(algorithm_info)
#print(item)
return output
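# The two frequency-selection rules above, restated standalone with made-up
# inputs (not study data) to show the expected behavior:
def _is_frequent_demo(algorithm_name, onweeks, isoweek):
    if algorithm_name == 'isoweek_random':
        return onweeks[isoweek] == 1  # per-isoweek 0/1 assignment
    if algorithm_name == 'isoweek_alternating':
        return isoweek % 2 == onweeks  # on during even or odd isoweeks
    raise Exception('unknown frequency selection algorithm ' + algorithm_name)
#print(_is_frequent_demo('isoweek_alternating', 0, 38))  # True (38 is even)
#print(_is_frequent_demo('isoweek_alternating', 1, 38))  # False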
def get_is_goal_frequent_for_user_on_domain_at_epoch(user, target_domain, epochnum):
goal_to_frequency_info = get_frequency_info_for_user_epoch(user, epochnum)
for goal_name,is_frequent in goal_to_frequency_info.items():
domain = get_domain_for_goal(goal_name)
if domain == target_domain:
return is_frequent
# we probably shouldn't have gotten here
return False
#def get_frequency_info_for_goal_on_timestamp(user, goal, )
#print(get_frequency_info_for_user_epoch('c11e5f2d93f249b5083989b2', 990))
#print(get_is_goal_frequent_for_user_on_domain_at_epoch('c11e5f2d93f249b5083989b2', 'www.youtube.com', 990))
@memoize
def get_goals_enabled_for_user_sorted_by_timestamp(user):
goal_info_list = get_collection_for_user(user, 'logs:goals')
goal_info_list_sorted = []
for goal_info in goal_info_list:
if 'timestamp_local' not in goal_info:
continue
goal_info_list_sorted.append(goal_info)
goal_info_list_sorted.sort(key=lambda k: k['timestamp_local'])
return goal_info_list_sorted
def get_goals_enabled_for_user_at_timestamp(user, target_timestamp_local):
goal_info_list_sorted = get_goals_enabled_for_user_sorted_by_timestamp(user)
enabled_goals = {}
for goal_info in goal_info_list_sorted:
# note this can be replaced with binary search if it is slow
timestamp_local = goal_info['timestamp_local']
if timestamp_local > target_timestamp_local:
return enabled_goals
enabled_goals = goal_info['enabled_goals']
return enabled_goals
def get_is_goal_enabled_for_user_at_timestamp(user, target_goal_name, target_timestamp_local):
goals_enabled_dictionary = get_goals_enabled_for_user_at_timestamp(user, target_timestamp_local)
for goal_name,is_enabled in goals_enabled_dictionary.items():
if goal_name == target_goal_name:
return is_enabled
return False
def get_is_goal_enabled_for_user_on_domain_at_timestamp(user, target_domain, target_timestamp_local):
goals_enabled_dictionary = get_goals_enabled_for_user_at_timestamp(user, target_timestamp_local)
for goal_name,is_enabled in goals_enabled_dictionary.items():
domain = get_domain_for_goal(goal_name)
if domain == target_domain:
return is_enabled
return False
#print(get_goals_enabled_for_user_sorted_by_timestamp('c11e5f2d93f249b5083989b2')[0])
#print(get_goals_active_for_user_at_timestep('c11e5f2d93f249b5083989b2', 1533450980492.0))
@memoize
def get_goal_intervention_info():
return json.load(open('goal_intervention_info.json'))
@memoize
def get_goal_info_list():
goal_intervention_info = get_goal_intervention_info()
return goal_intervention_info['goals']
@memoize
def get_goal_info_dict():
goal_info_list = get_goal_info_list()
output = {}
for goal_info in goal_info_list:
goal_name = goal_info['name']
output[goal_name] = goal_info
return output
@memoize
def get_domain_for_goal(goal_name):
goal_info_dict = get_goal_info_dict()
if goal_name in goal_info_dict:
return goal_info_dict[goal_name]['domain']
if goal_name.startswith('custom/spend_less_time_'): # custom/spend_less_time_www.tumblr.com
        return goal_name[23:] # 23 == len('custom/spend_less_time_')
raise Exception('could not find domain for goal ' + goal_name)
#get_goal_info_dict()
#print(get_domain_for_goal('youtube/spend_less_time'))
#print(get_domain_for_goal('custom/spend_less_time_www.tumblr.com'))
def get_sessions_for_user(user):
seconds_on_domain_per_session = get_collection_for_user(user, 'synced:seconds_on_domain_per_session')
interventions_active_for_domain_and_session = get_collection_for_user(user, 'synced:interventions_active_for_domain_and_session')
#print(seconds_on_domain_per_session[0])
#print(interventions_active_for_domain_and_session[0])
output = []
domain_to_session_id_to_duration_info = {}
domain_to_session_id_to_intervention_info = {}
interventions_deployed_with_no_duration_info = []
seconds_on_domain_per_session.sort(key=lambda k: k['timestamp_local'])
for item in seconds_on_domain_per_session:
domain = item['key']
session_id = item['key2']
if domain not in domain_to_session_id_to_duration_info:
domain_to_session_id_to_duration_info[domain] = {}
domain_to_session_id_to_duration_info[domain][session_id] = item
for item in interventions_active_for_domain_and_session:
domain = item['key']
session_id = item['key2']
if domain not in domain_to_session_id_to_intervention_info:
domain_to_session_id_to_intervention_info[domain] = {}
domain_to_session_id_to_intervention_info[domain][session_id] = item
domain_to_session_id_to_info = {}
domain_session_id_pairs = []
for item in seconds_on_domain_per_session:
domain = item['key']
session_id = item['key2']
duration = item['val']
timestamp_local = item['timestamp_local']
timestamp = item['timestamp']
if domain not in domain_to_session_id_to_info:
domain_to_session_id_to_info[domain] = {}
if session_id not in domain_to_session_id_to_info[domain]:
domain_session_id_pairs.append([domain, session_id])
domain_to_session_id_to_info[domain][session_id] = {
'duration': duration,
'timestamp_local': timestamp_local,
'timestamp': timestamp,
'timestamp_local_last': timestamp_local,
'timestamp_last': timestamp,
}
info = domain_to_session_id_to_info[domain][session_id]
info['duration'] = max(duration, info['duration'])
info['timestamp_local'] = min(timestamp_local, info['timestamp_local'])
info['timestamp'] = min(timestamp, info['timestamp'])
info['timestamp_local_last'] = max(timestamp_local, info['timestamp_local_last'])
info['timestamp_last'] = max(timestamp, info['timestamp_last'])
#for item in seconds_on_domain_per_session:
# #print(item)
# domain = item['key']
# session_id = item['key2']
# duration = item['val']
for [domain, session_id] in domain_session_id_pairs:
item = domain_to_session_id_to_info[domain][session_id]
#print(item)
timestamp_local = item['timestamp_local']
timestamp_local_last = item['timestamp_local_last']
timestamp = item['timestamp']
timestamp_last = item['timestamp_last']
duration = item['duration']
epoch_local = timestamp_to_epoch(timestamp_local)
epoch_local_last = timestamp_to_epoch(timestamp_local_last)
epoch = timestamp_to_epoch(timestamp)
epoch_last = timestamp_to_epoch(timestamp_last)
interventions_active_info = None
interventions_active_list = None
intervention_active = None
have_intervention_info = False
is_preview_mode = False
is_suggestion_mode = False
if (domain in domain_to_session_id_to_intervention_info) and (session_id in domain_to_session_id_to_intervention_info[domain]):
interventions_active_info = domain_to_session_id_to_intervention_info[domain][session_id]
interventions_active_list = json.loads(interventions_active_info['val'])
if len(interventions_active_list) > 0:
intervention_active = interventions_active_list[0]
is_preview_mode = get_is_intervention_preview_mode(user, intervention_active, session_id)
is_suggestion_mode = get_is_intervention_suggestion_mode(user, intervention_active, session_id)
goals_enabled = get_goals_enabled_for_user_at_timestamp(user, timestamp_local)
is_goal_enabled = get_is_goal_enabled_for_user_on_domain_at_timestamp(user, domain, timestamp_local)
is_goal_frequent = get_is_goal_frequent_for_user_on_domain_at_epoch(user, domain, epoch_local)
goal_to_frequency_info = get_frequency_info_for_user_epoch(user, epoch_local)
output.append({
'domain': domain,
'session_id': session_id,
'is_goal_enabled': is_goal_enabled,
'is_goal_frequent': is_goal_frequent,
'is_preview_mode': is_preview_mode,
'is_suggestion_mode': is_suggestion_mode,
'intervention_active': intervention_active,
'duration': duration,
'timestamp_local': timestamp_local,
'timestamp': timestamp,
'timestamp_last': timestamp_last,
'timestamp_local_last': timestamp_local_last,
'epoch_local': epoch_local,
'epoch': epoch,
'epoch_local_last': epoch_local_last,
'epoch_last': epoch_last,
})
#if interventions_active_info != None and interventions_active_list != None and len(interventions_active_list) > 0:
# print(domain)
# print(is_goal_enabled)
# print(intervention_active)
# print(duration)
# print(is_goal_frequent)
# print(goals_enabled)
# print(goal_to_frequency_info)
# return
# duration = item['val']
# print(duration)
return output
def get_is_intervention_preview_mode(user, intervention_name, session_id):
intervention_info_list = get_intervention_info_list_for_user_intervention_session_id(user, intervention_name, session_id)
for x in intervention_info_list:
if 'is_preview_mode' in x and x['is_preview_mode'] == True:
return True
return False
def get_is_intervention_suggestion_mode(user, intervention_name, session_id):
intervention_info_list = get_intervention_info_list_for_user_intervention_session_id(user, intervention_name, session_id)
for x in intervention_info_list:
if 'is_suggestion_mode' in x and x['is_suggestion_mode'] == True:
return True
return False
def have_intervention_info_for_session_id(user, intervention_name, session_id):
intervention_info_list = get_intervention_info_list_for_user_intervention_session_id(user, intervention_name, session_id)
return len(intervention_info_list) > 0
def get_intervention_info_list_for_user_intervention_session_id(user, intervention_name, session_id):
session_to_intervention_info_list = get_session_id_to_intervention_info_list_for_user_and_intervention(user, intervention_name)
if session_id not in session_to_intervention_info_list:
return []
return session_to_intervention_info_list[session_id]
@memoize
def get_session_id_to_intervention_info_list_for_user_and_intervention(user, intervention_name):
output = {}
intervention_info_list = get_collection_for_user(user, intervention_name.replace('/', ':'))
for x in intervention_info_list:
if 'session_id' not in x:
continue
session_id = x['session_id']
if session_id not in output:
output[session_id] = []
output[session_id].append(x)
return output
def get_intervention_info_for_user_and_session(user, intervention_name, session_id):
intervention_collection = get_collection_for_user(user, intervention_name.replace('/', ':'))
print(intervention_collection)
#print(get_sessions_for_user('c11e5f2d93f249b5083989b2'))
#for session_info in get_sessions_for_user('c11e5f2d93f249b5083989b2'):
# print(session_info)
# break
#print(get_intervention_info_for_user_and_session('c11e5f2d93f249b5083989b2', 'generated_www.tumblr.com/toast_notifications', 0))
#print(get_session_id_to_intervention_info_list_for_user_and_intervention('c11e5f2d93f249b5083989b2', 'generated_www.tumblr.com/toast_notifications'))
#print(get_sessions_for_user('c11e5f2d93f249b5083989b2'))
#all_sessions_info_list = []
#for user in get_valid_user_list():
# print(user)
# for info in get_sessions_for_user(user):
# info['user'] = user
# all_sessions_info_list.append(info)
#print(get_sessions_for_user_by_day_and_goal('c11e5f2d93f249b5083989b2'))
def group_sessions_by_domain(session_info_list):
output = {}
for item in session_info_list:
domain = item['domain']
if domain not in output:
output[domain] = []
output[domain].append(item)
return output
def group_sessions_by_epoch(session_info_list):
output = {}
for item in session_info_list:
epoch = item['epoch_local']
if epoch not in output:
output[epoch] = []
output[epoch].append(item)
return output
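# The two group_sessions_by_* helpers above are plain group-bys; the same
# shape with collections.defaultdict, on made-up sessions for illustration:
import collections
_demo_sessions = [
    {'domain': 'a.com', 'epoch_local': 1},
    {'domain': 'b.com', 'epoch_local': 1},
    {'domain': 'a.com', 'epoch_local': 2},
]
_by_domain = collections.defaultdict(list)
for _s in _demo_sessions:
    _by_domain[_s['domain']].append(_s)
#print(sorted(_by_domain))        # ['a.com', 'b.com']
#print(len(_by_domain['a.com']))  # 2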
def get_total_time_on_other_goal_domains(session_info_list_for_day, domain_to_exclude):
output = 0
for domain,session_info_list in group_sessions_by_domain(session_info_list_for_day).items():
if domain == domain_to_exclude:
continue
for session_info in session_info_list:
is_goal_frequent = session_info['is_goal_frequent']
is_goal_enabled = session_info['is_goal_enabled']
duration = session_info['duration']
if is_goal_enabled != True:
continue
output += duration
return output
def get_total_time_on_all_other_domains(session_info_list_for_day, domain_to_exclude):
output = 0
for domain,session_info_list in group_sessions_by_domain(session_info_list_for_day).items():
if domain == domain_to_exclude:
continue
for session_info in session_info_list:
is_goal_frequent = session_info['is_goal_frequent']
is_goal_enabled = session_info['is_goal_enabled']
duration = session_info['duration']
output += duration
return output
def get_total_time_when_goal_is_enabled(session_info_list):
output = 0
for session_info in session_info_list:
is_goal_frequent = session_info['is_goal_frequent']
is_goal_enabled = session_info['is_goal_enabled']
duration = session_info['duration']
if is_goal_enabled != True:
continue
output += duration
return output
def get_is_goal_enabled_from_session_info_list(session_info_list):
output = None
for session_info in session_info_list:
is_goal_frequent = session_info['is_goal_frequent']
is_goal_enabled = session_info['is_goal_enabled']
if output == None:
output = is_goal_enabled
elif output != is_goal_enabled:
return 'inconsistent'
return output
def get_is_goal_frequent_from_session_info_list(session_info_list):
output = None
for session_info in session_info_list:
is_goal_frequent = session_info['is_goal_frequent']
is_goal_enabled = session_info['is_goal_enabled']
if output == None:
output = is_goal_frequent
elif output != is_goal_frequent:
return 'inconsistent'
return output
#def get_is_goal_frequent
@memoize
def get_domain_to_epoch_to_time_for_user(user):
seconds_on_domain_per_day_items = get_collection_for_user(user, 'synced:seconds_on_domain_per_day')
output = {}
for item in seconds_on_domain_per_day_items:
domain = item['key']
epoch = item['key2']
duration = item['val']
if domain not in output:
output[domain] = {}
if epoch not in output[domain]:
output[domain][epoch] = duration
output[domain][epoch] = max(duration, output[domain][epoch])
return output
#print(seconds_on_domain_per_day_items[0])
@memoize
def get_epoch_to_domain_to_time_for_user(user):
seconds_on_domain_per_day_items = get_collection_for_user(user, 'synced:seconds_on_domain_per_day')
output = {}
for item in seconds_on_domain_per_day_items:
domain = item['key']
epoch = item['key2']
duration = item['val']
if epoch not in output:
output[epoch] = {}
if domain not in output[epoch]:
output[epoch][domain] = duration
output[epoch][domain] = max(duration, output[epoch][domain])
return output
def get_time_on_domain_on_epoch_for_user(user, domain, epoch):
epoch_to_domain_to_time = get_epoch_to_domain_to_time_for_user(user)
if epoch not in epoch_to_domain_to_time:
return 0
if domain not in epoch_to_domain_to_time[epoch]:
return 0
return epoch_to_domain_to_time[epoch][domain]
#print(get_sessions_for_user_by_day_and_goal('c11e5f2d93f249b5083989b2'))
#print(get_domain_to_epoch_to_time_for_user('c11e5f2d93f249b5083989b2'))
#print(get_time_on_domain_on_epoch_for_user('c11e5f2d93f249b5083989b2', 'www.cnn.com', 953)) # 75
def get_time_on_all_other_domains_on_epoch_for_user(user, target_domain, epoch):
epoch_to_domain_to_time = get_epoch_to_domain_to_time_for_user(user)
if epoch not in epoch_to_domain_to_time:
return 0
domain_to_time = epoch_to_domain_to_time[epoch]
output = 0
for domain,time in domain_to_time.items():
if domain == target_domain:
continue
output += time
return output
def get_time_on_epoch_on_domains_in_set_for_user_except(user, epoch, enabled_domains_set, target_domain):
epoch_to_domain_to_time = get_epoch_to_domain_to_time_for_user(user)
if epoch not in epoch_to_domain_to_time:
return 0
domain_to_time = epoch_to_domain_to_time[epoch]
output = 0
for domain,time in domain_to_time.items():
if domain == target_domain:
continue
if domain not in enabled_domains_set:
continue
output += time
return output
def difference_ratio(a, b):
diff = abs(a - b)
smaller = min(abs(a), abs(b))
if smaller == 0:
return 1
return diff / smaller
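# Quick check of the symmetric difference ratio above, restated standalone
# on illustrative numbers (same logic as difference_ratio):
def _difference_ratio_demo(a, b):
    diff = abs(a - b)
    smaller = min(abs(a), abs(b))
    return 1 if smaller == 0 else diff / smaller
#print(_difference_ratio_demo(3, 4))  # 0.333... (1/3)
#print(_difference_ratio_demo(0, 5))  # 1, by the zero-operand convention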
def get_enabled_goal_domains_set_in_session_info_list(session_info_list):
#output = []
output_set = set()
for session_info in session_info_list:
is_goal_enabled = session_info['is_goal_enabled']
domain = session_info['domain']
if is_goal_enabled:
if domain not in output_set:
output_set.add(domain)
#output.append(domain)
return output_set
def get_sessions_for_user_by_day_and_goal(user):
output = []
session_info_list = get_sessions_for_user(user)
sessions_grouped_by_epoch = group_sessions_by_epoch(session_info_list)
epoch_list = sessions_grouped_by_epoch.keys()
if len(epoch_list) == 0:
return output
    first_epoch_for_user = min(epoch_list)
    last_epoch_for_user = max(epoch_list)
for epoch,session_info_list_for_day in sessions_grouped_by_epoch.items():
info_for_epoch = {}
info_for_epoch['epoch'] = epoch
info_for_epoch['days_since_install'] = epoch - first_epoch_for_user
info_for_epoch['days_until_last'] = last_epoch_for_user - epoch
info_for_epoch['domains_and_sessions'] = []
enabled_domains_set = get_enabled_goal_domains_set_in_session_info_list(session_info_list_for_day)
for domain,session_info_list_for_domain in group_sessions_by_domain(session_info_list_for_day).items():
info_for_domain = {}
info_for_domain['domain'] = domain
this_goal_domain_total_time = get_total_time_when_goal_is_enabled(session_info_list_for_domain)
other_goal_domain_total_time = get_total_time_on_other_goal_domains(session_info_list_for_day, domain)
other_all_domain_total_time = get_total_time_on_all_other_domains(session_info_list_for_day, domain)
info_for_domain['time_on_domain_today'] = this_goal_domain_total_time
info_for_domain['time_on_domain_today_ref'] = get_time_on_domain_on_epoch_for_user(user, domain, epoch)
info_for_domain['time_on_all_other_domains_today'] = other_all_domain_total_time
info_for_domain['time_on_all_other_domains_today_ref'] = get_time_on_all_other_domains_on_epoch_for_user(user, domain, epoch)
info_for_domain['time_on_other_goal_domains_today'] = other_goal_domain_total_time
info_for_domain['time_on_other_goal_domains_today_ref'] = get_time_on_epoch_on_domains_in_set_for_user_except(user, epoch, enabled_domains_set, domain)
info_for_domain['is_goal_enabled'] = get_is_goal_enabled_from_session_info_list(session_info_list_for_domain)
info_for_domain['is_goal_frequent'] = get_is_goal_frequent_from_session_info_list(session_info_list_for_domain)
info_for_domain['session_info_list_for_domain'] = session_info_list_for_domain
info_for_epoch['domains_and_sessions'].append(info_for_domain)
#print(json.dumps(info_for_epoch))
#print(epoch)
#print(domain)
#print(this_goal_domain_total_time)
#print(other_goal_domain_total_time)
#print(session_info_list_for_domain)
#return
output.append(info_for_epoch)
return output
#get_sessions_for_user_by_day_and_goal('c11e5f2d93f249b5083989b2')
# noexport
'''
def timestamp_to_epoch(timestamp):
#start_of_epoch = moment.now().timezone("US/Pacific").replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
#return moment.unix(timestamp).timezone("US/Pacific").diff(start_of_epoch).days
start_of_epoch = moment.now().replace(years=2016, months=1, days=1, hours=0, minutes=0, seconds=0, milliseconds=0, microseconds=0)
return moment.unix(timestamp).diff(start_of_epoch).days
'''
def test_inconsistent():
#a = {'domain': 'www.youtube.com', 'time_on_domain_today': 26, 'time_on_domain_today_ref': 26, 'time_on_all_other_domains_today': 5754, 'time_on_all_other_domains_today_ref': 6509, 'time_on_other_goal_domains_today': 429, 'time_on_other_goal_domains_today_ref': 429, 'is_goal_enabled': True, 'is_goal_frequent': False, 'session_info_list_for_domain': [{'domain': 'www.youtube.com', 'session_id': 0, 'is_goal_enabled': True, 'is_goal_frequent': False, 'intervention_active': 'youtube/toast_notifications', 'duration': 26, 'timestamp_local': 1533455041603.0, 'timestamp': 1533456061598.0, 'timestamp_last': 1533456061598.0, 'timestamp_local_last': 1533455041603.0, 'epoch_local': 947, 'epoch': 947, 'epoch_local_last': 947, 'epoch_last': 947}]}
#b = {'user': 'c11e5f2d93f249b5083989b2', 'epoch': 947, 'domain': 'www.youtube.com', 'time_on_domain_today': 26, 'time_on_domain_today_ref': 26, 'time_on_all_other_domains_today': 5754, 'time_on_all_other_domains_today_ref': 6509, 'time_on_other_goal_domains_today': 429, 'time_on_other_goal_domains_today_ref': 429, 'is_goal_enabled': True, 'is_goal_frequent': False, 'intensity': 0.8333333333333334, 'intensity_other_goals': 0.8, 'num_days_available_for_user': 34, 'is_user_unofficial': False, 'have_preview_sessions': False, 'have_suggestion_sessions': False, 'consistent_item': False}
a = {'domain': 'www.reddit.com', 'time_on_domain_today': 287, 'time_on_domain_today_ref': 333, 'time_on_all_other_domains_today': 1996, 'time_on_all_other_domains_today_ref': 2309, 'time_on_other_goal_domains_today': 386, 'time_on_other_goal_domains_today_ref': 386, 'is_goal_enabled': True, 'is_goal_frequent': True, 'session_info_list_for_domain': [{'domain': 'www.reddit.com', 'session_id': 5, 'is_goal_enabled': True, 'is_goal_frequent': True, 'is_preview_mode': False, 'is_suggestion_mode': False, 'intervention_active': 'reddit/toast_notifications', 'duration': 40, 'timestamp_local': 1534011266360.0, 'timestamp': 1534011282343.0, 'timestamp_last': 1534011531865.0, 'timestamp_local_last': 1534011266360.0, 'epoch_local': 953, 'epoch': 953, 'epoch_local_last': 953, 'epoch_last': 953}, {'domain': 'www.reddit.com', 'session_id': 6, 'is_goal_enabled': True, 'is_goal_frequent': True, 'is_preview_mode': False, 'is_suggestion_mode': False, 'intervention_active': 'reddit/block_after_interval_per_visit', 'duration': 2, 'timestamp_local': 1534012345961.0, 'timestamp': 1534012468728.0, 'timestamp_last': 1534012468728.0, 'timestamp_local_last': 1534012345961.0, 'epoch_local': 953, 'epoch': 953, 'epoch_local_last': 953, 'epoch_last': 953}, {'domain': 'www.reddit.com', 'session_id': 7, 'is_goal_enabled': True, 'is_goal_frequent': True, 'is_preview_mode': False, 'is_suggestion_mode': False, 'intervention_active': 'reddit/toast_notifications', 'duration': 245, 'timestamp_local': 1534014045708.0, 'timestamp': 1534014076469.0, 'timestamp_last': 1534014329949.0, 'timestamp_local_last': 1534014045708.0, 'epoch_local': 953, 'epoch': 953, 'epoch_local_last': 953, 'epoch_last': 953}]}
b = {'user': 'c11e5f2d93f249b5083989b2', 'epoch': 953, 'domain': 'www.reddit.com', 'time_on_domain_today': 287, 'time_on_domain_today_ref': 333, 'time_on_all_other_domains_today': 1996, 'time_on_all_other_domains_today_ref': 2309, 'time_on_other_goal_domains_today': 386, 'time_on_other_goal_domains_today_ref': 386, 'is_goal_enabled': True, 'is_goal_frequent': True, 'intensity': 1.0, 'intensity_other_goals': 1.0, 'num_days_available_for_user': 34, 'is_user_unofficial': False, 'have_preview_sessions': False, 'have_suggestion_sessions': False, 'consistent_item': False}
epoch_target = b['epoch']
#print(a)
user = b['user']
seconds_on_domain_per_session = get_collection_for_user(user, 'synced:seconds_on_domain_per_session')
print(seconds_on_domain_per_session[0])
epoch_to_domain_to_time = get_epoch_to_domain_to_time_for_user(user)
domain_to_time = epoch_to_domain_to_time[epoch_target]
print(sum(domain_to_time.values()))
total = 0
domain_to_session_id_to_time = {}
for x in seconds_on_domain_per_session:
timestamp_local = x['timestamp_local']
domain = x['key']
session_id = x['key2']
epoch = timestamp_to_epoch(timestamp_local)
if epoch != epoch_target:
continue
duration = x['val']
if domain not in domain_to_session_id_to_time:
domain_to_session_id_to_time[domain] = {}
if session_id not in domain_to_session_id_to_time[domain]:
domain_to_session_id_to_time[domain][session_id] = duration
domain_to_session_id_to_time[domain][session_id] = max(domain_to_session_id_to_time[domain][session_id], duration)
for domain,session_id_to_time in domain_to_session_id_to_time.items():
for session_id,time in session_id_to_time.items():
total += time
print(total)
for domain,ref_time in domain_to_time.items():
session_id_to_time = domain_to_session_id_to_time[domain]
calc_time = sum(session_id_to_time.values())
if ref_time != calc_time:
print(domain)
print(ref_time)
print(calc_time)
test_inconsistent()
def get_sessions_for_user_by_day_and_goal_for_all_users():
output = []
for user in get_valid_user_list():
print(user)
info_for_user = {}
info_for_user['user'] = user
info_for_user['is_user_unofficial'] = get_is_user_unofficial(user)
info_for_user['days_domains_and_sessions'] = get_sessions_for_user_by_day_and_goal(user)
output.append(info_for_user)
return output
# noexport
all_session_info = get_sessions_for_user_by_day_and_goal_for_all_users()
import json
json.dump(all_session_info, open('browser_all_session_info_sept18_v4_dell.json', 'w'))
def convert_list_of_dicts_into_dataframe(dict_list):
output = {}
for keyname in dict_list[0].keys():
output[keyname] = []
for item in dict_list:
for k,v in item.items():
output[k].append(v)
return pd.DataFrame.from_dict(output)
#print(convert_list_of_dicts_into_dataframe([
# {'a': 3, 'b': 5},
# {'a': 4, 'b': 6}
#]))
def make_dataframe_days():
sessions_for_user_by_day_and_goal_for_all_users = get_sessions_for_user_by_day_and_goal_for_all_users()
output = []
for sessions_for_user_by_day_and_goal in sessions_for_user_by_day_and_goal_for_all_users:
user = sessions_for_user_by_day_and_goal['user']
for day_domains_and_sessions in sessions_for_user_by_day_and_goal['days_domains_and_sessions']:
epoch = day_domains_and_sessions['epoch']
for domain_and_sessions in day_domains_and_sessions['domains_and_sessions']:
domain = domain_and_sessions['domain']
time_on_domain_today = domain_and_sessions['time_on_domain_today']
time_on_all_other_domains_today = domain_and_sessions['time_on_all_other_domains_today']
time_on_other_goal_domains_today = domain_and_sessions['time_on_other_goal_domains_today']
is_goal_enabled = domain_and_sessions['is_goal_enabled']
is_goal_frequent = domain_and_sessions['is_goal_frequent']
output.append({
'user': user,
'epoch': epoch,
'domain': domain,
'time_on_domain_today': time_on_domain_today,
'time_on_all_other_domains_today': time_on_all_other_domains_today,
'time_on_other_goal_domains_today': time_on_other_goal_domains_today,
'is_goal_enabled': is_goal_enabled,
'is_goal_frequent': is_goal_frequent,
})
return convert_list_of_dicts_into_dataframe(output)
#df = make_dataframe_days()
#print(make_dataframe_days())
#df.to_csv('browser_time_on_domains_sept18.csv')
#%%R -i df -w 5 -h 5 --units in -r 200
#install.packages('ez')
#install.packages('lme4')
#library(lme4)
#library(sjPlot)
#library(lmerTest)
#library(ez)
# def get_days_and_sessions_for_user(user):
# session_info_list = get_sessions_for_user(user)
# min_epoch = min([x['local_epoch'] for x in session_info_list])
# max_epoch = max([x['local'] for x in session_info_list])
# for epoch in range(min_epoch, max_epoch + 1):
# print(get_days_and_sessions_for_user('c11e5f2d93f249b5083989b2'))
#print(len(get_users_with_goal_frequency_set()))
#print(valid_user_list[0])
def print_stats_on_install_records():
user_to_all_install_ids = get_user_to_all_install_ids()
users_with_goal_frequency_set = get_users_with_goal_frequency_set()
users_with_missing_install_record = []
users_with_zero_installs = []
users_with_multiple_installs = []
users_with_single_install = []
for username in users_with_goal_frequency_set:
if username not in user_to_all_install_ids:
users_with_missing_install_record.append(username)
continue
install_ids = user_to_all_install_ids[username]
if len(install_ids) == 0:
users_with_zero_installs.append(username)
continue
if len(install_ids) > 1:
users_with_multiple_installs.append(username)
continue
users_with_single_install.append(username)
print('users with missing install record', len(users_with_missing_install_record))
print('users with zero installs', len(users_with_zero_installs))
print('users with multiple installs', len(users_with_multiple_installs))
print('users with single install', len(users_with_single_install))
#print_stats_on_install_records()
```
## Plot the throughput of experiment 1 of version 3
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests
import io
import glob
```
## Function Read CSV files of Throughput from Iperf log
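Before the helpers below, a minimal sketch of how `pd.read_csv` splits whitespace-separated iperf-style lines into columns. The two log lines are made up for illustration, not taken from the experiment:

```python
import io
import pandas as pd

# Two made-up iperf3 UDP report lines (illustrative only):
fake_log = (
    "[  5]   0.00-1.00   sec  1.25 MBytes  10.5 Mbits/sec  0.042 ms  0/912 (0%)\n"
    "[  5]   1.00-2.00   sec  1.31 MBytes  11.0 Mbits/sec  0.038 ms  1/955 (0.1%)\n"
)
# delimiter=' ' with skipinitialspace=True collapses runs of spaces
df = pd.read_csv(io.StringIO(fake_log), header=None,
                 delimiter=' ', skipinitialspace=True)
print(df.shape[0])  # one row per report line
```

Each token ("sec", "MBytes", "Mbits/sec", ...) lands in its own column, which is why the functions below drop the unit columns by index afterwards.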
```
def getDataframeThru(df,start_row,measurement_interval,header_range):
'''
    This function imports the data from the txt file and returns the
    dataframe without the txt-file header.
    Input:
        measurement_interval = 30 (sec)
        header_range = 10 lines
        start_row = 0
    Return:
        df1t : dataframe of throughput and jitter
'''
df1 = df.drop(labels=range(start_row, header_range), axis=0)
df1t = df1.drop(labels=range(measurement_interval, len(df)), axis=0)
return df1t
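# Minimal check of the row-windowing above on a toy frame (values are
# illustrative): drop the header rows, then keep only the first
# measurement_interval remaining rows.
import pandas as _pd
_toy = _pd.DataFrame({'v': range(6)})
_no_header = _toy.drop(labels=range(0, 2), axis=0)             # drop rows 0-1
_window = _no_header.drop(labels=range(4, len(_toy)), axis=0)  # keep rows 2-3
#print(_window['v'].tolist())  # [2, 3]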
def getDatafromTxT(filename, headerrange):
    """
    Get a dataframe from a txt file:
    filename : xxx.txt
    headerrange : number of header lines to be removed.
    return : df : dataframe
    """
h = headerrange + 1
skip_1 = list(range(0,h, 1))
df = pd.read_csv(filename,
skiprows=skip_1,
header=None,
delimiter=' ',
skipinitialspace=True,
error_bad_lines=False)
return df
## Find start row index of each iteration
def getStartEndID(df,start_data,end_data):
    """
    Find the start/end row indices of each iteration in the dataframe.
    Input:
        df : dataframe without the header of the txt file
    Output:
        start_indices_list : list of start indices
        end_indices_list : list of end indices
    """
    # mark rows whose interval column starts with the start/end markers
    df["Start"] = df[2].str.find(start_data)
    df["End"] = df[2].str.find(end_data)
    index = df.index
    start_indices_list = index[df["Start"] == 0.0].tolist()
    end_indices_list = index[df["End"] == 0.0].tolist()
    return start_indices_list, end_indices_list
def getCleanData(df,strat_indices_list,end_indices_list):
    """
    Concatenate the measurement rows between each start/end index pair
    into a single cleaned dataframe.
    """
df_all = df.drop(labels=range(1, len(df)), axis=0) # create new df
start_row = 0
c = 0
for i in strat_indices_list:
h = i
print('h =',h)
m = end_indices_list[c]
print('m =', m)
df1 = getDataframeThru(df,start_row,m,h)
print('df1 = ', df1)
result = pd.concat([df_all,df1])
df_all = result
c = c + 1
if i == 0:
df_all = df_all.drop(labels=0, axis=0)
return df_all
def superClean(filename,headerrange,start_data,end_data):
"""
    Clean data from a CSV file, removing the unnecessary headers.
"""
df = getDatafromTxT(filename, headerrange)
    start_indices_list, end_indices_list = getStartEndID(df,start_data,end_data)
    df_all = getCleanData(df,start_indices_list,end_indices_list)
df_all_new = df_all.drop(df_all.columns[[0,1,3,5,7,9]], axis=1) # Replace new columns header
df_all_new.rename({2 :'Interval', 4 : 'Transfer', 6 :'Bitrate', 8 :'Jitter', 10 :'Lost/Total Datagrams'}, axis=1, inplace=True)
df = df_all_new.drop(range(0,1))
df_all_new['Bitrate'] = df['Bitrate'].astype(float)
time = np.array(range(len(df_all_new.index)))
df_all_new['Time'] = time
df_all_new['Time'] = df_all_new['Time'].astype(int)
    # average throughput
sumThroughput = df_all_new['Bitrate'].sum()
avgSumThroughput = sumThroughput/len(time)
var_throughput = df_all_new['Bitrate'].var()
return avgSumThroughput, var_throughput
def readCSV2pd_Thru(directoryPath,tf_load,edge_name,start_data,end_data,headerrange):
    """
    Read the CSV files for each traffic load and return the average
    throughput and its variance.
    Input: directoryPath : path of the log files
           tf_load : list of traffic loads
    """
avg_Thr = []
var_Thr = []
for tf in tf_load:
cpu_data = pd.DataFrame()
for file_name in glob.glob(directoryPath+edge_name+str(tf)+'.csv'):
avg_thr,var_thr = superClean(file_name,headerrange,start_data,end_data)
avg_Thr.append(avg_thr)
var_Thr.append(var_thr)
return avg_Thr, var_Thr
```
## Read File CSV
```
headerrange = 7
start_data = '9.0-10.0'
end_data = '60.0-61.0'
tf_load = [i*2 for i in range(2,20)]
edge_name = 'edge1_M'
directoryPath = '/Users/kalika/PycharmProjects/Privacy_SDN_Edge_IoT/PlanB/CPU_utilization_Experiment/version3_Experiment_style/Experiment1/Edge1_iperf_log/'
avg_thr, var_thr = readCSV2pd_Thru(directoryPath,tf_load,edge_name,start_data,end_data,headerrange)
print('avg',avg_thr)
print('var',var_thr)
headerrange = 7
start_data = '9.0-10.0'
end_data = '60.0-61.0'
tf_load = [i*2 for i in range(2,20)]
edge_name = 'edge2_M'
directoryPath = '/Users/kalika/PycharmProjects/Privacy_SDN_Edge_IoT/PlanB/CPU_utilization_Experiment/version3_Experiment_style/Experiment1/Edge2_iperf_log/'
avg_thr2, var_thr2 = readCSV2pd_Thru(directoryPath,tf_load,edge_name,start_data,end_data,headerrange)
print('avg',avg_thr2)
print('var',var_thr2)
```
## Plot Throughput
```
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(tf_load, avg_thr, color='green', linestyle='dashed', linewidth = 2,
marker='o', markerfacecolor='green', markersize=10,label="Edge 1")
ax.plot(tf_load, avg_thr2, color='red', linestyle='dashed', linewidth = 2,
marker='x', markerfacecolor='red', markersize=10,label="Edge 2")
plt.ylim(0,30)
plt.xlim(0,40)
plt.xlabel(r'Traffic load $\lambda_{1,2}$ (Mbps)')  # raw string so \lambda is not treated as an escape
# naming the y axis
plt.ylabel('Average of Throughput (Mbps)')
plt.legend()
plt.show()
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(tf_load, var_thr, color='green', linestyle='dashed', linewidth = 2,
marker='o', markerfacecolor='green', markersize=10,label="Edge 1")
ax.plot(tf_load, var_thr2, color='red', linestyle='dashed', linewidth = 2,
marker='x', markerfacecolor='red', markersize=10,label="Edge 2")
plt.ylim(0,20)
plt.xlim(0,40)
plt.xlabel(r'Traffic load $\lambda_{1,2}$ (Mbps)')  # raw string so \lambda is not treated as an escape
# naming the y axis
plt.ylabel('Variance of Throughput')
plt.legend()
plt.show()
```
|
github_jupyter
|
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\Chi}{\mathcal{X}}
\newcommand{\R}{\rm I\!R}
\newcommand{\sign}{\text{sign}}
\newcommand{\Tm}{\mathbf{T}}
\newcommand{\Xm}{\mathbf{X}}
\newcommand{\Im}{\mathbf{I}}
\newcommand{\Ym}{\mathbf{Y}}
$
### ITCS8010
# G_np Simulation Experiment
In this experiment I would like to replicate the behaviour of the `Fraction of nodes in largest CC` and the `Fraction of isolated nodes` over `p*log(n)` in the `Erdös-Renyi random graph model` [1].
```
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import collections as collec
%matplotlib inline
# Fraction of nodes in largest CC Vs. p*(n-1)
n = 100000
x1 = []
y1 = []
for kave in np.arange(0.5, 3.0, 0.1):
G = nx.fast_gnp_random_graph(n, kave / (n - 1))
largest_cc = max(nx.connected_components(G), key=len)
x1.append(kave)
y1.append(len(largest_cc)/n)
# print(kave)
# print(len(largest_cc)/n)
fig, ax = plt.subplots()
ax.plot(x1, y1)
ax.set(xlabel='p*(n-1)', ylabel='Fraction of nodes in largest CC',
       title='Fraction of nodes in largest CC Vs. p*(n-1)')
ax.grid()
# fig.savefig("test.png")
plt.show()
# Fraction of isolated nodes Vs. {p*log(n)}
x2 = []
y2 = []
for kave in np.arange(0.3, 1.5, 0.1):
p = kave / (n - 1)
G = nx.fast_gnp_random_graph(n, p)
isolates = len(list(nx.isolates(G)))
x2.append(p * np.log10(n))
y2.append(isolates/n)
# print(kave)
# print(isolates/n)
fig, ax = plt.subplots()
ax.plot(x2, y2)
ax.set(xlabel='p*log(n)', ylabel='Fraction of isolated nodes',
title='Fraction of isolated nodes Vs. p*log(n)')
ax.grid()
# fig.savefig("test.png")
plt.show()
# Fraction of isolated nodes Vs. {p*log(n)}
x2 = []
y2 = []
for kave in np.arange(0.3, 10, 0.1):
p = kave / (n - 1)
G = nx.fast_gnp_random_graph(n, p)
isolates = len(list(nx.isolates(G)))
x2.append(p * np.log10(n))
y2.append(isolates/n)
# print(kave)
# print(isolates/n)
fig, ax = plt.subplots()
ax.plot(x2, y2)
ax.set(xlabel='p*log(n)', ylabel='Fraction of isolated nodes',
title='Fraction of isolated nodes Vs. p*log(n)')
ax.grid()
# fig.savefig("test.png")
plt.show()
```
### Observation:
1. The result of the first experiment (`fraction of nodes in largest CC` versus `p*(n-1)`) shows behaviour similar to what we observed in the class slide.
2. The second experiment (plotting the `fraction of isolated nodes` against `p*log(n)`) gives a somewhat different result from the one in the class slide. When we plot the graph for `p*(n-1)` in the range 0.3 to 1.5 we do not get the long tail, which does appear when we extend the range of `p*(n-1)` from 0.3 to 10. Note that in this experiment we loop over different values of `p*(n-1)` but plot on the `p*log(n)` scale. I am not sure of the reason behind this type of behaviour.
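As a cross-check on the long tail, the expected fraction of isolated nodes has a closed form: each node of `G(n, p)` is isolated with probability `(1 - p)**(n - 1)`, which is roughly `exp(-kave)` when `p = kave / (n - 1)`. A minimal sketch (assuming the same `n` as in the simulation above):

```python
import math

# Closed-form expectation for the fraction of isolated nodes in G(n, p):
# each node is isolated with probability (1 - p)**(n - 1) ~ exp(-kave)
n = 100000
for kave in [0.5, 1.0, 3.0, 10.0]:
    p = kave / (n - 1)
    exact = (1 - p) ** (n - 1)
    print(kave, exact, math.exp(-kave))
```

This expectation decays smoothly with `kave`, which is consistent with the long tail only becoming visible once the range of `p*(n-1)` is extended.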
## Key Network Properties
Now we would like to use the networkx [[2]](https://networkx.github.io/documentation/stable/) library to observe the values of the key network properties in an Erdös-Renyi random graph.
```
# plotting degree distribution
n1 = 180
p1 = 0.11
G = nx.fast_gnp_random_graph(n1, p1)
degree_sequence = sorted([d for n, d in G.degree()], reverse=True) # degree sequence
degreeCount = collec.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
fig, ax = plt.subplots()
plt.bar(deg, cnt, width=0.80, color="b")
plt.title("Degree Histogram")
plt.ylabel("Count")
plt.xlabel("Degree")
ax.set_xticks([d + 0.4 for d in deg])
ax.set_xticklabels(deg)
# draw graph in inset
plt.axes([0.4, 0.4, 0.5, 0.5])
Gcc = G.subgraph(sorted(nx.connected_components(G), key=len, reverse=True)[0])
pos = nx.spring_layout(G)
plt.axis("off")
nx.draw_networkx_nodes(G, pos, node_size=20)
nx.draw_networkx_edges(G, pos, alpha=0.4)
plt.show()
# diameter and path length
dia = nx.diameter(G)
print(dia)
avg_path_len = nx.average_shortest_path_length(G)
print(avg_path_len)
```
# References
[1] Erdős, Paul, and Alfréd Rényi. 1960. “On the Evolution of Random Graphs.” Bull. Inst. Internat. Statis. 38 (4): 343–47.
[2] NetworkX, “Software for Complex Networks,” https://networkx.github.io/documentation/stable/, 2020, accessed: 2020-10.
|
github_jupyter
|
# Project 1: Babynames
## I. Characterise One File
### 1. Read the data
- Read the file yob2000.txt
- Name the columns
- Print the first 10 entries
```
import pandas as pd
from matplotlib import pyplot as plt
popular_names = pd.read_csv('yob2000.csv',
names = ['Names', 'Sex', 'Birth Count'])
len(popular_names)
popular_names.head(10)
top_1000 = popular_names.sort_values(by = 'Birth Count',
ascending=False).reset_index().drop('index', axis=1)
top_1000.head(10)
```
### 2. Calculate total births
- Calculate the sum of the birth count column in the file yob2000.txt.
```
top_1000['Birth Count'].sum()
```
### 3. Separate boys / girls
- Calculate separate sums for boys and girls.
- Plot both sums in a bar plot
```
top_1000.groupby('Sex')['Birth Count'].sum()
plot_boys_girls = top_1000.groupby('Sex')['Birth Count'].sum()
plot_boys_girls.plot.bar()
plt.ylabel('Birth Count')
plt.title('Total births of females and males in Year 2000')
plt.show()
```
But there is a greater number of female names!
```
top_1000['Sex'].value_counts() # counts column values
```
### 4. Frequent names
- Count how many names occur at least 1000 times in the file yob2000.txt.
```
top_1000[top_1000['Birth Count'] >= 1000].head(10)
top_1000[top_1000['Birth Count'] >= 1000]['Birth Count'].count()
```
### 5. Relative amount
- Create a new column containing the percentage of a name on the total births of that year.
- Verify that the sum of percentages is 100%.
```
(top_1000['Birth Count']/(top_1000['Birth Count'].sum()) * 100).head()
top_1000['Percentage of total count'] = top_1000['Birth Count']/(top_1000['Birth Count'].sum()) * 100
top_1000.head()
top_1000['Percentage of total count'].sum().round()
```
### 6. Search your name
- Identify and print all lines containing your name in the year 2000.
```
top_1000[top_1000['Names'].str.contains('Max')]
```
### 7. Bar plot
- Create a bar plot showing 5 selected names for the year 2000.
```
peppermint = top_1000.set_index('Names').loc[['Max', 'Eric','Josh','Daniela','Michael']]
peppermint
peppermint_5 = peppermint.groupby('Names')[['Birth Count']].sum()
peppermint_5
peppermint_5.plot.bar(stacked=True, colormap='Accent')
```
## II. Characterize all files
### 1. Read all names
To read the complete dataset, you need to loop through all file names:
yob1880.txt
yob1881.txt
yob1882.txt
...
Complete the code below by inserting `_csv`, `data`, `df`, `names=['name', 'gender', 'count']`, `y` and `2017`:
```
years = range(1880, ____, 10)
data = []
for y in years:
    fn = f'yob{_____}.txt'
    df = pd.read____(fn, ____)
    df['year'] = y
    data.append(____)
df = pd.concat(____)
```
Run the code and check the size of the resulting data frame.
Hint: In addition to some pandas functions, you may need to look up Python format strings.
```
years = range(1880, 2018)  # up to and including 2017
data = []
for y in years:
fn = f'yob{y}.txt'
df = pd.read_csv(fn, names =['Names', 'Sex', 'Birth Count'])
df['year'] = y
data.append(df)
usa_names = pd.concat(data)
usa_names.head(10)
len(usa_names)
```
### 2. Plot a time series
- extract all rows containing your name from the variable df
- plot the number of babies having your name and gender over time
- make the plot nicer by adding row/column labels and a title
- change the color and thickness of the line
- save the plot as a high-resolution diagram
```
my_name = usa_names[(usa_names['Names']=='Max')
& (usa_names['Sex'] == 'M')]
my_name.head(10)
my_name = my_name.set_index(['Names', 'Sex', 'year']).stack()
my_name = my_name.unstack((0,1,3))
my_name.head(10)
plt.plot(my_name)
plt.plot(my_name, linewidth=3, color= 'red')
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Popularity of the name Max over time')
plt.savefig('Max_over_time.png', dpi = 300)
plt.show()
```
### 3. Name diversity
- Have the baby names become more diverse over time?
- What assumptions is your calculation based upon?
```
usa_names.head(5)
name_diversity = usa_names.groupby('year')[['Names']].count()
name_diversity.head()
plt.plot(name_diversity)
plt.xlabel('Year')
plt.ylabel('Number of different names')
plt.title('Variation of the number of given names over time')
plt.show()
```
The SSA files that we extract our data from cover the 'Top 1000' names, so a certain number of rare names (names with a yearly frequency of less than 5) are not included in the data.
Our calculation essentially assumes that the number of names with a frequency of less than 5 has also increased (or at least stayed the same) from the 1880s to 2017; i.e. the names missing from the Top 1000 list do not affect the data enough to undermine the conclusion that name diversity is greater in the present day.
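One way to probe the cutoff assumption is to check the minimum `Birth Count` per year, which should never drop below 5 if the SSA threshold holds. A minimal sketch on a toy frame (the real check would run on `usa_names`, whose `year` and `Birth Count` columns the toy data imitates):

```python
import pandas as pd

# Toy stand-in for the usa_names frame built above
toy = pd.DataFrame({
    'year': [1880, 1880, 2017, 2017],
    'Birth Count': [7, 5, 5, 12],
})
# Smallest recorded count per year; with the >=5 cutoff this is never below 5
min_per_year = toy.groupby('year')['Birth Count'].min()
print(min_per_year)
print((min_per_year >= 5).all())
```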
### 4. Long names
- add an extra column that contains the length of the name
- print the 10 longest names to the screen.
Hint: If having the name in the index was useful so far, it is not as useful for this task. With df.reset_index(inplace=True) you can move the index to a regular column.
```
usa_names.head()
# vectorised: length of each name
usa_names['Length of name'] = usa_names['Names'].str.len()
usa_names.head(5)
long_names_10 = usa_names.sort_values(by='Length of name', ascending=False).head(10)
long_names_10
```
## III. Plot Celebrities
### 1. Plotting Madonna
- plot time lines of names of celebrities
- try actors, presidents, princesses, Star Wars & GoT characters, boot camp participants…
Hint: When was the hit single “Like a Prayer” released?
```
usa_names.drop(columns='Length of name').head()
celebrity = usa_names[usa_names['Names'] == 'Madonna']
celebrity = celebrity.drop(columns='Length of name')
celeb_stacked = celebrity.set_index(['Names', 'Sex', 'year']).stack()
madonna = celeb_stacked.unstack((0,1,3))
plt.plot(madonna, linewidth=2.5)
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Popularity of Madonna year on year')
plt.show()
```
### 2. Total births over time
- create a plot that shows the total birth rate in the U.S. over time
- plot the total birth rate for girls/boys separately
```
year_sum = usa_names.groupby('year')['Birth Count'].sum()
year_sum = pd.DataFrame(year_sum)
year_sum.head()
year_sum.plot()
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Total Birth Rate in the USA over time')
plt.show()
usa_names.head()
usa_females_males = usa_names.groupby(['year','Sex'])['Birth Count'].sum().unstack()
usa_females_males.head()
usa_names_males = usa_females_males.groupby('year')['M'].sum()
usa_names_females = usa_females_males.groupby('year')['F'].sum()
plt.plot(year_sum)
plt.plot(usa_names_males)
plt.plot(usa_names_females)
plt.xlabel('Year')
plt.ylabel('Birth Count')
plt.title('Total, female, and male birth count year on year')
plt.show()
```
### 3. Normalize
- divide the number of births by the total number of births in each year to obtain the relative frequency
- plot the time series of your name or the celebrity names again.
Hint: To reshape the data for plotting, you may find a combination of df.groupby( ) and df.unstack( ) useful.
```
year_sum = usa_names.groupby('year')[['Birth Count']].sum().reset_index()
year_sum.head()
usa_names = usa_names.drop(columns='Length of name')
usa_names.head()
```
#### Now let's merge! We almost always use `how='left'`, and merge `on` a column the frames have in common, e.g. `year`!
We can change the suffixes too!
```
merged_usa_names = usa_names.merge(year_sum, how='left', on='year',
suffixes=('_name', '_total'))
merged_usa_names.head(10)
merged_usa_names['Name Rel. %'] = merged_usa_names['Birth Count_name']/merged_usa_names['Birth Count_total']*100
merged_usa_names = merged_usa_names.sort_values(by='Name Rel. %', ascending=False)
merged_usa_names.head(10)
my_name = merged_usa_names[(merged_usa_names['Names']=='Max')
& (merged_usa_names['Sex'] == 'M')]
my_name.head()
my_name = my_name.drop(columns=['Birth Count_name','Birth Count_total'])
my_names_stacked = my_name.set_index(['Names','Sex','year']).stack()
my_names_stacked.head()
my_name = my_names_stacked.unstack((0,1,3))
my_name.head()
plt.plot(my_name, linewidth=3, color= 'green')
plt.xlabel('Year')
plt.ylabel('Name Relativity %')
plt.title('Percentage of people named Max relative to the total number of births over time')
plt.show()
```
## II. Letter Statistics
### 1. First letter statistics
- use df.apply(func) to add an extra column that contains the first letter of the name.
- count how many names start with ‘A’.
- plot the relative occurrence of initials over time.
- what can you conclude from your observations?
Hint: You may need to iterate over the names with df.iterrows(). A more elegant solution is possible by writing a Python function and using df.apply()
```
merged_usa_names.head()
def initial(name):
return name[0]
merged_usa_names['initial'] = merged_usa_names['Names'].apply(initial)
merged_usa_names.head()
merged_usa_names[merged_usa_names['initial']== 'A']['initial'].count()
first_letter_sum = merged_usa_names.groupby('year')['initial'].value_counts()
first_letter_sum.head()
df = pd.DataFrame(first_letter_sum)
df.head()
df = df.reset_index(0)
df.columns=['year','sum of initials']
df = df.reset_index()
df.head()
merge = merged_usa_names.merge(df, how='left', on=['year', 'initial'])
merge.head()
merge['initial Rel. %'] = merge['sum of initials']/merge['Birth Count_total']*100
merge.head()
merge = merge.sort_values(by='initial Rel. %', ascending=False)
merge.head()
initials = merge.drop(columns=['Birth Count_name', 'Birth Count_total', 'Name Rel. %', 'Sex', 'Names'])
initials.head()
initials = initials.drop_duplicates()
#initials_s = initials.set_index(['sum of initials', 'initial', 'year']).stack()
#initials_s.unstack((0,1, 3))
#plt.plot(initials_s, linewidth=3)
```
### 2. Last letter statistics
- try the same for the final character
- separate by boys/girls
- what can you conclude from your observations?
```
def last_letter(name):
return name[-1]
merged_usa_names['last letter'] = merged_usa_names['Names'].apply(last_letter)
merged_usa_names.head(5)
```
### 3. e-rich Names
- Find all names that contain the character ‘e’ at least four times.
```
usa_names.head()
```
### USE .APPLY to apply a function!
```
def four_e(name):
    # count occurrences of 'e' or 'E' in the name
    return name.lower().count('e')
usa_names['e occurences'] = usa_names['Names'].apply(four_e)
usa_names.head()
many_es = usa_names[usa_names['e occurences'] > 3]
many_es.head()
len(many_es)
```
|
github_jupyter
|
## Classify Radio Signals from Space using Keras
In this experiment, we attempt to classify radio signals from space.
The dataset has been provided by SETI. Details can be found here:
https://github.com/setiQuest/ML4SETI/blob/master/tutorials/Step_1_Get_Data.ipynb
## Import necessary libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
import seaborn as sns
import tensorflow as tf
%matplotlib inline
# Mount google drive to get data
from google.colab import drive
drive.mount('/content/drive')
!ls -l '/content/drive/My Drive/datasets/seti'
```
## Load data
```
# Load dataset from CSV
train_images = pd.read_csv('/content/drive/My Drive/datasets/seti/train/images.csv', header=None)
train_labels = pd.read_csv('/content/drive/My Drive/datasets/seti/train/labels.csv', header=None)
val_images = pd.read_csv('/content/drive/My Drive/datasets/seti/validation/images.csv', header=None)
val_labels = pd.read_csv('/content/drive/My Drive/datasets/seti/validation/labels.csv', header=None)
train_images.head()
train_labels.head()
# Check shape of train_images, train_labels, val_images and val_labels
print("train_images shape:", train_images.shape)
print("train_labels shape:", train_labels.shape)
print("val_images shape:", val_images.shape)
print("val_labels shape:", val_labels.shape)
# Reshape the image sets
# Get the values as numpy array
x_train = train_images.values.reshape(3200, 64, 128, 1)
x_val = val_images.values.reshape(800, 64, 128, 1)
y_train = train_labels.values
y_val = val_labels.values
```
## Plot 2D spectrogram data
```
plt.figure(figsize=(15,15))
for i in range(1,4):
plt.subplot(1,3,i)
img = np.squeeze(x_train[np.random.randint(x_train.shape[0])])
plt.imshow(img, cmap='gray')
```
## Preprocess data
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen_train = ImageDataGenerator(horizontal_flip=True)
datagen_train.fit(x_train)
datagen_val = ImageDataGenerator(horizontal_flip=True)
datagen_val.fit(x_val)
```
## Build model
```
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from tensorflow.keras.layers import BatchNormalization, Dropout, Activation
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
# Initialize model
model = Sequential()
# 1st CNN block
model.add(Conv2D(32, (5,5), padding='same', input_shape=(64,128,1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# 2nd CNN block
model.add(Conv2D(64, (5,5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# Flatten CNN output to feed to the FC layer
model.add(Flatten())
# Fully connected layer
model.add(Dense(1024))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.4))
# Softmax layer
model.add(Dense(4, activation='softmax'))
```
## Compile the model
```
# Schedule learning rate decay
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
0.005,
decay_steps=5,
decay_rate=0.9,
staircase=True)
model.compile(optimizer=Adam(lr_schedule), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```
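For reference, the staircase schedule configured above multiplies the rate by 0.9 once every 5 optimizer steps: `lr = 0.005 * 0.9 ** floor(step / 5)`. A small stand-alone sketch of that formula (not the TF object itself; the defaults below simply mirror the values used above):

```python
import math

def expected_lr(step, initial=0.005, decay_steps=5, decay_rate=0.9, staircase=True):
    # Mirrors the ExponentialDecay schedule configured above:
    # lr = initial * decay_rate ** (step / decay_steps), floored when staircase
    exponent = step / decay_steps
    if staircase:
        exponent = math.floor(exponent)
    return initial * decay_rate ** exponent

print(expected_lr(0))    # initial rate
print(expected_lr(10))   # after two staircase decays: 0.005 * 0.9 ** 2
```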
## Train the model
```
batch_size = 32
history = model.fit(
datagen_train.flow(x_train, y_train, batch_size=batch_size, shuffle=True),
steps_per_epoch=len(x_train)//batch_size,
validation_data = datagen_val.flow(x_val, y_val, batch_size=batch_size, shuffle=True),
validation_steps = len(x_val)//batch_size,
epochs=10,
)
```
## Evaluation
```
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['training', 'validation'])
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['training', 'validation'])
plt.show()
model.evaluate(x_val, y_val)
y_true = np.argmax(y_val, 1)
y_pred = np.argmax(model.predict(x_val), 1)
print(metrics.classification_report(y_true, y_pred))
print("Classification accuracy: %.2f" % metrics.accuracy_score(y_true, y_pred))
plt.figure(figsize=(8,8))
labels = ["squiggle", "narrowband", "noise", "narrowbanddrd"]
ax = plt.subplot()
sns.heatmap(metrics.confusion_matrix(y_true, y_pred, normalize='true'), annot=True, ax=ax, cmap=plt.cm.Blues)
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels)
```
## Conclusions
The winning submission used a ResNet-based architecture (WRN) on the primary (full) dataset and achieved a classification accuracy of 94.99%.
Reference: https://github.com/sgrvinod/Wide-Residual-Nets-for-SETI
Here we have used a simple CNN-based model. The model did not learn much after the first 2 epochs (accuracy is around 74% after 10 epochs).
Reasons:
* The signals in the dataset have a noise factor added to them.
* Even though the dataset we have used here is simpler than the other datasets provided by SETI, it is still challenging to extract features with a simple model like ours, so this is essentially an underfitting problem.
Possible improvements:
* Add additional CNN blocks and vary the filter sizes (e.g. 7x7, 5x5) to learn more features.
* Add additional fully connected layers.
* Here we have used the Adam optimizer, which can have convergence issues; we could change it to SGD and see what happens.
* Use a different architecture altogether.
|
github_jupyter
|
### building a dask array without knowing sizes
#### from dask.dataframe
```
from dask import array as da, dataframe as ddf, delayed, compute
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
da.from_delayed
def get_chunk_df(array_size,n_cols):
col_names = [f"col_{i}" for i in range(n_cols)]
pd_df = pd.DataFrame(
{nm:pd.Series(np.arange(array_size[0])) for ic,nm in enumerate(col_names)}
)
return pd_df
def get_meta(n_cols):
col_names = [f"col_{i}" for i in range(n_cols)]
return {nm:pd.Series([], dtype=np.float64) for nm in col_names}
n_cols = 5
meta_dict = get_meta(n_cols)
delayed_chunks = [delayed(get_chunk_df)((10000+10*ch,),n_cols) for ch in range(0,5)]
df_delayed = ddf.from_delayed(delayed_chunks,meta=meta_dict)
df_delayed
df = df_delayed.compute()
df.head()
df.size
type(df)
col0 = df_delayed['col_0'].to_dask_array()
col0
col0.min()
col0.max().compute()
col0.max().compute()
col0np = col0.compute()
col0np.shape
col0np.max()
```
### direct from_array?
```
delayed_arrays=[]
for ichunk in range(0,5):
ra_size = 10000+10*ichunk
delayed_array = delayed(np.arange)(ra_size)
delayed_arrays.append(da.from_delayed(delayed_array, (ra_size,), dtype=float))
delayed_arrays
hda = da.hstack(delayed_arrays)
hda
def get_delayed_array(base_chunk_size,n_chunks):
delayed_arrays = []
for ichunk in range(0,n_chunks):
ra_size = base_chunk_size+10*ichunk
delayed_array = delayed(np.arange)(ra_size)
delayed_arrays.append(da.from_delayed(delayed_array, (ra_size,), dtype=float))
return da.hstack(delayed_arrays)
def get_delayed_array_from_df(base_chunk_size,n_chunks):
meta_dict = get_meta(1)
delayed_chunks = [delayed(get_chunk_df)((base_chunk_size+10*ch,),1) for ch in range(0,n_chunks)]
df_delayed = ddf.from_delayed(delayed_chunks,meta=meta_dict)
return df_delayed[list(meta_dict.keys())[0]].to_dask_array()
n_chunks = 5
base_chunk_size = 1000
array_from_hstack = get_delayed_array(base_chunk_size,n_chunks)
array_from_df = get_delayed_array_from_df(base_chunk_size,n_chunks)
array_from_hstack
array_from_df
h_array = array_from_hstack.compute()
df_array = array_from_df.compute()
h_array.shape
df_array.shape
np.all(h_array==df_array)
```
### comparison
```
def array_construct_compute(base_chunk_size,n_chunks,find_mean=False):
res1 = get_delayed_array(base_chunk_size,n_chunks)
if find_mean:
r = res1.mean().compute()
else:
r = res1.compute()
return
def df_construct_compute(base_chunk_size,n_chunks,find_mean=False):
res1 = get_delayed_array_from_df(base_chunk_size,n_chunks)
if find_mean:
r = res1.mean().compute()
else:
r = res1.compute()
return
base_chunk_size = 100000
test_chunks = np.arange(2,100,5)
results = pd.DataFrame()
for n_chunks in test_chunks:
time_result_ar = %timeit -n10 -r5 -o array_construct_compute(base_chunk_size,n_chunks)
time_result_df = %timeit -n10 -r5 -o df_construct_compute(base_chunk_size,n_chunks)
new_row = {
'chunks':n_chunks,'base_size':base_chunk_size,"actual_size":n_chunks * (base_chunk_size + 10),
'direct_mean':time_result_ar.average,'direct_std':time_result_ar.stdev,
'indirect_mean':time_result_df.average,'indirect_std':time_result_df.stdev,
}
results = results.append([new_row],ignore_index=True)
results.head()
def plot_results(results,xvar='chunks',log_x=False,log_y=True,second_x=None):
fig = plt.figure()
clrs = [[0,0,0,1],[1,0,0,1]]
ax1 = plt.subplot(2,1,1)
xvals = results[xvar]
for fld,clr in zip(['direct','indirect'],clrs):
plt.plot(xvals,results[fld+'_mean'],color=clr,marker='.')
clr[3] = 0.5
for pm in [1,-1]:
std_pm = results[fld+'_mean'] + results[fld+'_std']* pm *2
plt.plot(xvals,std_pm,color=clr)
if log_y:
plt.yscale('log')
if log_x:
plt.xscale('log')
plt.ylabel('time [s]')
plt.subplot(2,1,2)
plt.plot(xvals,results.indirect_mean/results.direct_mean,color='k',marker='.')
plt.ylabel('indirect / direct ')
plt.xlabel(xvar)
if log_x:
plt.xscale('log')
return fig
fig = plot_results(results,xvar='chunks')
base_chunk_size = 100000
test_chunks = np.arange(2,100,5)
results_wmn = pd.DataFrame()
for n_chunks in test_chunks:
time_result_ar = %timeit -n10 -r5 -o array_construct_compute(base_chunk_size,n_chunks,True)
time_result_df = %timeit -n10 -r5 -o df_construct_compute(base_chunk_size,n_chunks, True)
new_row = {
'chunks':n_chunks,'base_size':base_chunk_size,"actual_size":n_chunks * (base_chunk_size + 10),
'direct_mean':time_result_ar.average,'direct_std':time_result_ar.stdev,
'indirect_mean':time_result_df.average,'indirect_std':time_result_df.stdev,
}
    results_wmn = results_wmn.append([new_row],ignore_index=True)
fig = plot_results(results_wmn)
test_sizes = np.logspace(3,6,9-3+1)
n_chunks = 10
results_by_size = pd.DataFrame()
for base_chunk_size in test_sizes:
time_result_ar = %timeit -n10 -r5 -o array_construct_compute(base_chunk_size,n_chunks,True)
time_result_df = %timeit -n10 -r5 -o df_construct_compute(base_chunk_size,n_chunks,True)
new_row = {
'chunks':n_chunks,'base_size':base_chunk_size,"actual_size":n_chunks * (base_chunk_size + 10),
'direct_mean':time_result_ar.average,'direct_std':time_result_ar.stdev,
'indirect_mean':time_result_df.average,'indirect_std':time_result_df.stdev,
}
results_by_size = results_by_size.append([new_row],ignore_index=True)
results_by_size
fig = plot_results(results_by_size,xvar='actual_size',log_x=True)
test_sizes = np.logspace(3,6,9-3+1)
n_chunks = 10
results_by_size_nomn = pd.DataFrame()
for base_chunk_size in test_sizes:
time_result_ar = %timeit -n10 -r5 -o array_construct_compute(base_chunk_size,n_chunks)
time_result_df = %timeit -n10 -r5 -o df_construct_compute(base_chunk_size,n_chunks)
new_row = {
'chunks':n_chunks,'base_size':base_chunk_size,"actual_size":n_chunks * (base_chunk_size + 10),
'direct_mean':time_result_ar.average,'direct_std':time_result_ar.stdev,
'indirect_mean':time_result_df.average,'indirect_std':time_result_df.stdev,
}
    results_by_size_nomn = results_by_size_nomn.append([new_row],ignore_index=True)
fig = plot_results(results_by_size_nomn,xvar='actual_size',log_x=True)
```
The question is: is the pre-compute time for counting particles plus building the direct array less than or equal to the cost of the indirect array built from the dataframe?
|
github_jupyter
|
## Linear Regression with PyTorch
#### Part 2 of "PyTorch: Zero to GANs"
*This post is the second in a series of tutorials on building deep learning models with PyTorch, an open source neural networks library developed and maintained by Facebook. Check out the full series:*
1. [PyTorch Basics: Tensors & Gradients](https://jovian.ml/aakashns/01-pytorch-basics)
2. [Linear Regression & Gradient Descent](https://jovian.ml/aakashns/02-linear-regression)
3. [Image Classification using Logistic Regression](https://jovian.ml/aakashns/03-logistic-regression)
4. [Training Deep Neural Networks on a GPU](https://jovian.ml/aakashns/04-feedforward-nn)
5. [Image Classification using Convolutional Neural Networks](https://jovian.ml/aakashns/05-cifar10-cnn)
6. [Data Augmentation, Regularization and ResNets](https://jovian.ml/aakashns/05b-cifar10-resnet)
7. [Generating Images using Generative Adversarial Networks](https://jovian.ml/aakashns/06-mnist-gan)
Continuing where the [previous tutorial](https://jvn.io/aakashns/3143ceb92b4f4cbbb4f30e203580b77b) left off, we'll discuss one of the foundational algorithms of machine learning in this post: *Linear regression*. We'll create a model that predicts crop yields for apples and oranges (*target variables*) by looking at the average temperature, rainfall and humidity (*input variables or features*) in a region. Here's the training data:

In a linear regression model, each target variable is estimated to be a weighted sum of the input variables, offset by a constant known as a *bias*:
```
yield_apple = w11 * temp + w12 * rainfall + w13 * humidity + b1
yield_orange = w21 * temp + w22 * rainfall + w23 * humidity + b2
```
Visually, it means that the yield of apples is a linear or planar function of temperature, rainfall and humidity:

The *learning* part of linear regression is to figure out a set of weights `w11, w12,... w23, b1 & b2` by looking at the training data, to make accurate predictions for new data (i.e. to predict the yields for apples and oranges in a new region using the average temperature, rainfall and humidity). This is done by adjusting the weights slightly many times to make better predictions, using an optimization technique called *gradient descent*.
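The core idea, repeatedly nudging the weights in the direction opposite to the gradient of the loss, can be sketched in plain Python for a one-variable fit (the data and learning rate here are made up for illustration):

```python
# Minimal gradient-descent sketch for fitting y = w*x + b to toy data.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # true relation: y = 2x

def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    # analytic gradients of the MSE w.r.t. w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * dw, b - lr * db   # small step downhill
```

After a few hundred steps `w` approaches 2 and `b` approaches 0; the PyTorch version developed in this tutorial does the same thing, with the gradients computed automatically.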
## System setup
This tutorial takes a code-first approach towards learning PyTorch, and you should try to follow along by running and experimenting with the code yourself. The easiest way to start executing this notebook is to click the **"Run"** button at the top of this page, and select **"Run on Binder"**. This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks.
**NOTE**: *If you're running this notebook on Binder, please skip ahead to the next section.*
### Running on your computer locally
You can clone this notebook hosted on [Jovian.ml](https://www.jovian.ml), install the required dependencies, and start Jupyter by running the following commands on the terminal:
```bash
pip install jovian --upgrade # Install the jovian library
jovian clone aakashns/02-linear-regression # Download notebook & dependencies
cd 02-linear-regression # Enter the created directory
jovian install # Install the dependencies
conda activate 02-linear-regression # Activate virtual environment
jupyter notebook # Start Jupyter
```
On older versions of conda, you might need to run `source activate 02-linear-regression` to activate the environment. For a more detailed explanation of the above steps, check out the *System setup* section in the [previous notebook](https://jovian.ml/aakashns/01-pytorch-basics).
We begin by importing Numpy and PyTorch:
```
# Uncomment the command below if Numpy or PyTorch is not installed
# !conda install numpy pytorch cpuonly -c pytorch -y
import numpy as np
import torch
```
## Training data
The training data can be represented using 2 matrices: `inputs` and `targets`, each with one row per observation, and one column per variable.
```
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
```
We've separated the input and target variables, because we'll operate on them separately. Also, we've created numpy arrays, because this is typically how you would work with training data: read some CSV files as numpy arrays, do some processing, and then convert them to PyTorch tensors as follows:
```
# Convert inputs and targets to tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
print(inputs)
print(targets)
```
## Linear regression model from scratch
The weights and biases (`w11, w12,... w23, b1 & b2`) can also be represented as matrices, initialized as random values. The first row of `w` and the first element of `b` are used to predict the first target variable i.e. yield of apples, and similarly the second for oranges.
```
# Weights and biases
w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)
print(w)
print(b)
```
`torch.randn` creates a tensor with the given shape, with elements picked randomly from a [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) with mean 0 and standard deviation 1.
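To see what that means concretely, we can draw samples with the standard library's `random.gauss` (an illustrative stand-in for `torch.randn`, one sample per tensor element) and check that they are centered at 0 with standard deviation close to 1:

```python
import random
import statistics

# Draw many standard-normal samples, as torch.randn does per element.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = statistics.fmean(samples)   # close to 0
std = statistics.pstdev(samples)   # close to 1
```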
Our *model* is simply a function that performs a matrix multiplication of the `inputs` and the weights `w` (transposed) and adds the bias `b` (replicated for each observation).

We can define the model as follows:
```
def model(x):
return x @ w.t() + b
```
`@` represents matrix multiplication in PyTorch, and the `.t` method returns the transpose of a tensor.
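The shape bookkeeping behind `x @ w.t() + b` can be sketched in plain Python: each row of the input is dotted with each row of `w` (a row of `w` is a column of `w.t()`), and the matching bias element is added. The numbers below are illustrative:

```python
# (n x 3) inputs times transposed (2 x 3) weights give (n x 2) predictions.
def model_sketch(x, w, b):
    return [[sum(xi * wi for xi, wi in zip(row, w_row)) + b_j
             for w_row, b_j in zip(w, b)]
            for row in x]

x = [[73.0, 67.0, 43.0], [91.0, 88.0, 64.0]]   # 2 observations, 3 features
w = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]         # 2 targets x 3 features
b = [1.0, 2.0]
preds = model_sketch(x, w, b)                   # 2 rows x 2 targets
```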
The matrix obtained by passing the input data into the model is a set of predictions for the target variables.
```
# Generate predictions
preds = model(inputs)
print(preds)
```
Let's compare the predictions of our model with the actual targets.
```
# Compare with targets
print(targets)
```
You can see that there's a huge difference between the predictions of our model, and the actual values of the target variables. Obviously, this is because we've initialized our model with random weights and biases, and we can't expect it to *just work*.
## Loss function
Before we improve our model, we need a way to evaluate how well our model is performing. We can compare the model's predictions with the actual targets, using the following method:
* Calculate the difference between the two matrices (`preds` and `targets`).
* Square all elements of the difference matrix to remove negative values.
* Calculate the average of the elements in the resulting matrix.
The result is a single number, known as the **mean squared error** (MSE).
```
# MSE loss
def mse(t1, t2):
diff = t1 - t2
return torch.sum(diff * diff) / diff.numel()
```
`torch.sum` returns the sum of all the elements in a tensor, and the `.numel` method returns the number of elements in a tensor. Let's compute the mean squared error for the current predictions of our model.
```
# Compute loss
loss = mse(preds, targets)
print(loss)
```
Here’s how we can interpret the result: *on average, each element in the prediction differs from the actual target by about 145 (the square root of the loss, 20834)*. And that’s pretty bad, considering the numbers we are trying to predict are themselves in the range 50–200. The result is called the *loss*, because it indicates how bad the model is at predicting the target variables. The lower the loss, the better the model.
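The square-root interpretation can be checked on a small made-up example (the numbers here are illustrative, not the model's actual predictions):

```python
preds = [60.0, 90.0, 110.0]
targets = [56.0, 81.0, 119.0]

# MSE is the average squared error; its square root (RMSE) has the same
# units as the targets, so it reads as a "typical" prediction error.
mse_val = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
rmse = mse_val ** 0.5
```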
## Compute gradients
With PyTorch, we can automatically compute the gradient or derivative of the loss w.r.t. the weights and biases, because they have `requires_grad` set to `True`.
```
# Compute gradients
loss.backward()
```
The gradients are stored in the `.grad` property of the respective tensors. Note that the derivative of the loss w.r.t. the weights matrix is itself a matrix, with the same dimensions.
```
# Gradients for weights
print(w)
print(w.grad)
```
The loss is a [quadratic function](https://en.wikipedia.org/wiki/Quadratic_function) of our weights and biases, and our objective is to find the set of weights where the loss is the lowest. If we plot a graph of the loss w.r.t any individual weight or bias element, it will look like the figure shown below. A key insight from calculus is that the gradient indicates the rate of change of the loss, or the [slope](https://en.wikipedia.org/wiki/Slope) of the loss function w.r.t. the weights and biases.
If a gradient element is **positive**:
* **increasing** the element's value slightly will **increase** the loss.
* **decreasing** the element's value slightly will **decrease** the loss.

If a gradient element is **negative**:
* **increasing** the element's value slightly will **decrease** the loss.
* **decreasing** the element's value slightly will **increase** the loss.

The increase or decrease in loss by changing a weight element is proportional to the value of the gradient of the loss w.r.t. that element. This forms the basis for the optimization algorithm that we'll use to improve our model.
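These sign rules can be verified on a toy quadratic loss (a hypothetical one-parameter example, not the model above):

```python
# Toy loss L(w) = (w - 3)^2 with gradient dL/dw = 2 * (w - 3).
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

# Positive gradient: decreasing w slightly decreases the loss.
w = 5.0
assert grad(w) > 0 and loss(w - 0.1) < loss(w)

# Negative gradient: increasing w slightly decreases the loss.
w = 1.0
assert grad(w) < 0 and loss(w + 0.1) < loss(w)
```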
Before we proceed, we reset the gradients to zero by calling the `.zero_()` method. We need to do this because PyTorch accumulates gradients, i.e. the next time we call `.backward` on the loss, the new gradient values will get added to the existing gradient values, which may lead to unexpected results.
```
w.grad.zero_()
b.grad.zero_()
print(w.grad)
print(b.grad)
```
## Adjust weights and biases using gradient descent
We'll reduce the loss and improve our model using the gradient descent optimization algorithm, which has the following steps:
1. Generate predictions
2. Calculate the loss
3. Compute gradients w.r.t the weights and biases
4. Adjust the weights by subtracting a small quantity proportional to the gradient
5. Reset the gradients to zero
Let's implement the above step by step.
```
# Generate predictions
preds = model(inputs)
print(preds)
```
Note that the predictions are the same as before, since we haven't made any changes to our model. The same holds true for the loss and gradients.
```
# Calculate the loss
loss = mse(preds, targets)
print(loss)
# Compute gradients
loss.backward()
print(w.grad)
print(b.grad)
```
Finally, we update the weights and biases using the gradients computed above.
```
# Adjust weights & reset gradients
with torch.no_grad():
w -= w.grad * 1e-5
b -= b.grad * 1e-5
w.grad.zero_()
b.grad.zero_()
```
A few things to note above:
* We use `torch.no_grad` to indicate to PyTorch that we shouldn't track, calculate or modify gradients while updating the weights and biases.
* We multiply the gradients with a really small number (`10^-5` in this case), to ensure that we don't modify the weights by a really large amount, since we only want to take a small step in the downhill direction of the gradient. This number is called the *learning rate* of the algorithm.
* After we have updated the weights, we reset the gradients back to zero, to avoid affecting any future computations.
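The effect of the learning rate can be illustrated on a toy quadratic loss (hypothetical numbers): a small rate shrinks the error each step, while a too-large rate overshoots the minimum and diverges.

```python
# Gradient descent on L(w) = w^2 (gradient 2w): each step maps w to w*(1 - 2*lr).
def descend(w, lr, steps=10):
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

small = descend(5.0, lr=0.1)   # factor |1 - 0.2| = 0.8 per step: converges
large = descend(5.0, lr=1.5)   # factor |1 - 3.0| = 2.0 per step: diverges
```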
Let's take a look at the new weights and biases.
```
print(w)
print(b)
```
With the new weights and biases, the model should have lower loss.
```
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
```
We have already achieved a significant reduction in the loss, simply by adjusting the weights and biases slightly using gradient descent.
## Train for multiple epochs
To reduce the loss further, we can repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch. Let's train the model for 100 epochs.
```
# Train for 100 epochs
for i in range(100):
preds = model(inputs)
loss = mse(preds, targets)
loss.backward()
with torch.no_grad():
w -= w.grad * 1e-5
b -= b.grad * 1e-5
w.grad.zero_()
b.grad.zero_()
```
Once again, let's verify that the loss is now lower:
```
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
```
As you can see, the loss is now much lower than what we started out with. Let's look at the model's predictions and compare them with the targets.
```
# Predictions
preds
# Targets
targets
```
The predictions are now quite close to the target variables, and we can get even better results by training for a few more epochs.
At this point, we can save our notebook and upload it to [Jovian.ml](https://www.jovian.ml) for future reference and sharing.
```
!pip install jovian --upgrade -q
import jovian
jovian.commit()
```
`jovian.commit` uploads the notebook to [Jovian.ml](https://www.jovian.ml), captures the Python environment and creates a sharable link for the notebook. You can use this link to share your work and let anyone reproduce it easily with the `jovian clone` command. Jovian also includes a powerful commenting interface, so you (and others) can discuss & comment on specific parts of your notebook:

## Linear regression using PyTorch built-ins
The model and training process above were implemented using basic matrix operations. But since this is such a common pattern, PyTorch has several built-in functions and classes to make it easy to create and train models.
Let's begin by importing the `torch.nn` package from PyTorch, which contains utility classes for building neural networks.
```
import torch.nn as nn
```
As before, we represent the inputs and targets as matrices.
```
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58],
[102, 43, 37], [69, 96, 70], [73, 67, 43],
[91, 88, 64], [87, 134, 58], [102, 43, 37],
[69, 96, 70], [73, 67, 43], [91, 88, 64],
[87, 134, 58], [102, 43, 37], [69, 96, 70]],
dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70], [81, 101], [119, 133],
[22, 37], [103, 119], [56, 70],
[81, 101], [119, 133], [22, 37],
[103, 119], [56, 70], [81, 101],
[119, 133], [22, 37], [103, 119]],
dtype='float32')
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
inputs
```
We are using 15 training examples this time, to illustrate how to work with large datasets in small batches.
## Dataset and DataLoader
We'll create a `TensorDataset`, which allows access to rows from `inputs` and `targets` as tuples, and provides standard APIs for working with many different types of datasets in PyTorch.
```
from torch.utils.data import TensorDataset
# Define dataset
train_ds = TensorDataset(inputs, targets)
train_ds[0:3]
```
The `TensorDataset` allows us to access a small section of the training data using the array indexing notation (`[0:3]` in the above code). It returns a tuple (or pair), in which the first element contains the input variables for the selected rows, and the second contains the targets.
We'll also create a `DataLoader`, which can split the data into batches of a predefined size while training. It also provides other utilities like shuffling and random sampling of the data.
```
from torch.utils.data import DataLoader
# Define data loader
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)
```
The data loader is typically used in a `for-in` loop. Let's look at an example.
```
for xb, yb in train_dl:
print(xb)
print(yb)
break
```
In each iteration, the data loader returns one batch of data, with the given batch size. If `shuffle` is set to `True`, it shuffles the training data before creating batches. Shuffling helps randomize the input to the optimization algorithm, which can lead to faster reduction in the loss.
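A rough sketch of that behaviour, using the standard library instead of the real `DataLoader` (the batch size of 5 over 15 samples mirrors the setup above):

```python
import random

def epoch_batches(n_samples, batch_size, seed):
    # Shuffle the sample indices, then cut them into fixed-size batches:
    # roughly what DataLoader(..., shuffle=True) does each epoch.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, n_samples, batch_size)]

epoch1 = epoch_batches(15, 5, seed=0)
epoch2 = epoch_batches(15, 5, seed=1)
# Every sample appears exactly once per epoch, but the batch contents differ.
```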
## nn.Linear
Instead of initializing the weights & biases manually, we can define the model using the `nn.Linear` class from PyTorch, which does it automatically.
```
# Define model
model = nn.Linear(3, 2)
print(model.weight)
print(model.bias)
```
PyTorch models also have a helpful `.parameters` method, which returns a list containing all the weights and bias matrices present in the model. For our linear regression model, we have one weight matrix and one bias matrix.
```
# Parameters
list(model.parameters())
```
We can use the model to generate predictions in the exact same way as before:
```
# Generate predictions
preds = model(inputs)
preds
```
## Loss Function
Instead of defining a loss function manually, we can use the built-in loss function `mse_loss`.
```
# Import nn.functional
import torch.nn.functional as F
```
The `nn.functional` package contains many useful loss functions and several other utilities.
```
# Define loss function
loss_fn = F.mse_loss
```
Let's compute the loss for the current predictions of our model.
```
loss = loss_fn(model(inputs), targets)
print(loss)
```
## Optimizer
Instead of manually manipulating the model's weights & biases using gradients, we can use the optimizer `optim.SGD`. SGD stands for *stochastic gradient descent*. It is called *stochastic* because samples are selected in batches (often with random shuffling) instead of as a single group.
```
# Define optimizer
opt = torch.optim.SGD(model.parameters(), lr=1e-5)
```
Note that `model.parameters()` is passed as an argument to `optim.SGD`, so that the optimizer knows which matrices should be modified during the update step. Also, we can specify a learning rate which controls the amount by which the parameters are modified.
## Train the model
We are now ready to train the model. We'll follow the exact same process to implement gradient descent:
1. Generate predictions
2. Calculate the loss
3. Compute gradients w.r.t the weights and biases
4. Adjust the weights by subtracting a small quantity proportional to the gradient
5. Reset the gradients to zero
The only change is that we'll work with batches of data, instead of processing the entire training data in every iteration. Let's define a utility function `fit` which trains the model for a given number of epochs.
```
# Utility function to train the model
def fit(num_epochs, model, loss_fn, opt, train_dl):
# Repeat for given number of epochs
for epoch in range(num_epochs):
# Train with batches of data
for xb,yb in train_dl:
# 1. Generate predictions
pred = model(xb)
# 2. Calculate loss
loss = loss_fn(pred, yb)
# 3. Compute gradients
loss.backward()
# 4. Update parameters using gradients
opt.step()
# 5. Reset the gradients to zero
opt.zero_grad()
# Print the progress
if (epoch+1) % 10 == 0:
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
```
Some things to note above:
* We use the data loader defined earlier to get batches of data for every iteration.
* Instead of updating parameters (weights and biases) manually, we use `opt.step` to perform the update, and `opt.zero_grad` to reset the gradients to zero.
* We've also added a log statement which prints the loss from the last batch of data for every 10th epoch, to track the progress of training. `loss.item` returns the actual value stored in the loss tensor.
Let's train the model for 100 epochs.
```
fit(100, model, loss_fn, opt,train_dl)
```
Let's generate predictions using our model and verify that they're close to our targets.
```
# Generate predictions
preds = model(inputs)
preds
# Compare with targets
targets
```
Indeed, the predictions are quite close to our targets, and now we have a fairly good model to predict crop yields for apples and oranges by looking at the average temperature, rainfall and humidity in a region.
## Commit and update the notebook
As a final step, we can record a new version of the notebook using the `jovian` library.
```
import jovian
jovian.commit()
```
Note that running `jovian.commit` a second time records a new version of your existing notebook. With Jovian.ml, you can avoid creating copies of your Jupyter notebooks and keep versions organized. Jovian also provides a visual diff ([example](https://jovian.ml/aakashns/keras-mnist-jovian/diff?base=8&remote=2)) so you can inspect what has changed between different versions:

## Further Reading
We've covered a lot of ground in this tutorial, including *linear regression* and the *gradient descent* optimization algorithm. Here are a few resources if you'd like to dig deeper into these topics:
* For a more detailed explanation of derivatives and gradient descent, see [these notes from a Udacity course](https://storage.googleapis.com/supplemental_media/udacityu/315142919/Gradient%20Descent.pdf).
* For an animated visualization of how linear regression works, [see this post](https://hackernoon.com/visualizing-linear-regression-with-pytorch-9261f49edb09).
* For a more mathematical treatment of matrix calculus, linear regression and gradient descent, you should check out [Andrew Ng's excellent course notes](https://github.com/Cleo-Stanford-CS/CS229_Notes/blob/master/lectures/cs229-notes1.pdf) from CS229 at Stanford University.
* To practice and test your skills, you can participate in the [Boston Housing Price Prediction](https://www.kaggle.com/c/boston-housing) competition on Kaggle, a website that hosts data science competitions.
With this, we complete our discussion of linear regression in PyTorch, and we’re ready to move on to the next topic: *Logistic regression*.
|
github_jupyter
|
# Spin-polarized calculations with BigDFT
The goal of this notebook is to explain how to do a spin-polarized calculation with BigDFT (`nspin=2`).
We start with the molecule O$_2$ and a non-spin polarized calculation, which is the code default.
To do that we only have to specify the atomic positions of the molecule.
```
from BigDFT import Calculators as C
calc = C.SystemCalculator()
posO1=3*[0.0]
posO2=[0.0, 0.0, 1.2075] # in angstroem
inpt={'posinp':
{ 'positions': [ {'O': posO1 }, {'O': posO2 }], 'units': 'angstroem' }}
logNSP = calc.run(input=inpt)
```
This calculation produces a converged set of KS LDA orbitals, with the following density of states:
```
%matplotlib inline
DoS=logNSP.get_dos(label='NSP')
DoS.plot()
```
Now we repeat the same calculation, but spin-polarized, by specifying `nspin=2` in the `dft` field.
```
inpt['dft']={'nspin': 2}
logSP = calc.run(input=inpt)
```
We may see that this run did not produce any difference with respect to the previous one. Even though we doubled the number of orbitals, the input guess wavefunctions and densities are identical in both spin sectors. As a consequence, the energy and the DoS are identical to the NSP case:
```
print(logNSP.energy, logSP.energy)
DoS.append_from_bandarray(logSP.evals,label='SP (m 0)')
DoS.plot()
```
This is due to the fact that:
1. We had the same input guess for up and down subspaces;
2. We had the same number of orbitals in both sectors and no empty orbitals during the minimization.
Such problems can be solved at the same time by performing a mixing scheme with *random* initialization of the wavefunctions:
```
inpt['import']='mixing'
inpt['mix']={'iscf': 12, 'itrpmax': 20} # mixing on the potential, just 20 Hamiltonian iterations for a quick look
inpt['dft']['inputpsiid']= 'RANDOM' #for random initialization
logSP_mix = calc.run(input=inpt)
```
We see that with these input parameters the DoS is different from the NSP case, the energy is lower and the net polarization is 2:
```
print(logNSP.energy, logSP_mix.energy)
DoS.append_from_bandarray(logSP_mix.evals,label='SP mix(m 0, RAND)')
DoS.plot()
print('Magnetic Polarization', logSP_mix.magnetization)
```
We see that, to break the symmetry, it is necessary to have different input-guess (IG) subspaces for the up and down orbitals; otherwise the results will be identical to the NSP case.
Now that we know the polarization of the molecule, we may perform a direct minimization calculation of the molecule by specifying from the beginning the `mpol: 2` condition. We can also add some empty orbitals using the keyword `norbsempty`.
```
inpt={'dft': { 'nspin': 2, 'mpol': 2},
'mix': { 'norbsempty': 2 },
'posinp':
{ 'positions': [ {'O': posO1 }, {'O': posO2 }], 'units': 'angstroem' } }
logSP_m2 = calc.run(input=inpt)
print(logSP_mix.energy, logSP_m2.energy)
DoS.append_from_bandarray(logSP_m2.evals,label='SP m 2')
DoS.plot()
```
We show that the total magnetization is 2 in the case of the oxygen dimer. The DoS is not exactly the same because the mixing scheme was not fully converged (check this by increasing the value of `itrpmax`).
```
DoS=logSP_mix.get_dos(label='SP mix')
DoS.append_from_bandarray(logSP_m2.evals,label='SP m 2')
DoS.plot()
```
## Odd electron system: the N atom
What happens when the number of electrons is odd, as in the case of N?
If we do a NSP calculation, the occupation of the last state is 1. Switching only the parameter `nspin` to the value 2, we do the same calculation with averaged-occupation (0.5 for the last up and down state).
To do a spin-polarized calculation, we need to set `mpol`, which is the difference between the number of occupied up and down electrons.
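The bookkeeping behind `mpol` can be sketched in plain Python for the valence electrons of N, assuming Hund's-rule occupations (the dictionary layout below is illustrative, not a BigDFT input):

```python
# N has 5 valence electrons (2s^2 2p^3); Hund's rule puts the three
# 2p electrons in the up channel.
up_occ = {'2s': 1, '2p': 3}     # 4 up electrons
down_occ = {'2s': 1, '2p': 0}   # 1 down electron

n_up = sum(up_occ.values())
n_down = sum(down_occ.values())
mpol = n_up - n_down            # net polarization expected for atomic N
```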
In the same way, we can look for the total magnetization using the mixing scheme.
```
inpt = { 'dft': { 'nspin': 1},
'posinp': { 'units': 'angstroem',
'positions': [ {'N': 3*[0.0] } ] } }
logNSP = calc.run(input=inpt)
inpt['dft']['nspin'] = 2
logSP = calc.run(input=inpt)
print(logNSP.energy, logSP.energy)
print(logNSP.fermi_level, logSP.fermi_level)
DoS=logNSP.get_dos(label='NSP')
DoS.append_from_bandarray(logSP.evals,label='SP')
DoS.plot()
inpt['dft']['inputpsiid']='RANDOM' #Random input guess
inpt['mix']={'iscf': 12, 'itrpmax': 30} # mixing on the potential, just 30 Hamiltonian iterations for a quick look
inpt['import'] = 'mixing'
logSP_mix = calc.run(input=inpt)
print(logSP_mix.magnetization)
DoS.append_from_bandarray(logSP_mix.evals,label='SP mix')
DoS.plot()
```
We found a total magnetization of 3, following Hund's rule.
## Defining the input guess (*ig_occupation* keyword)
We have shown that, by default, the input guess is LCAO (localised atomic orbitals) defined by the pseudo-orbitals.
The occupation is spherically symmetric (same occupation per orbital moment).
So far we have used a random input guess to break the spin symmetry.
We can instead use an LCAO input guess and specify the occupation numbers for the input guess with the keyword `ig_occupation`, which also breaks the spin symmetry.
```
inpt['dft']['inputpsiid']='LCAO' #LCAO input guess
inpt['ig_occupation'] = { 'N': { '2s': { 'up': 1, 'down': 1}, '2p': {'up': [1,1,1], 'down': 0} } }
logLCAO_mix = calc.run(input=inpt)
print(logSP_mix.energy, logLCAO_mix.energy)
DoS=logSP_mix.get_dos(label='SP RAN')
DoS.append_from_bandarray(logLCAO_mix.evals,label='SP LCAO')
DoS.plot()
```
Instead of `ig_occupation`, it is also possible to specify the keyword `IGSpin` per atom in the `posinp` dictionary.
```
inpt = { 'dft': { 'nspin': 2, 'mpol': 3},
'posinp': { 'units': 'angstroem',
'positions': [ {'N': 3*[0.0], 'IGSpin': 3 } ] },
'ig_occupation': { 'N': { '2s': { 'up': 1, 'down': 1},
'2p': { 'up': [1,1,1], 'down': 0} } } }
logIG = calc.run(input=inpt)
print(logSP_mix.energy, logLCAO_mix.energy, logIG.energy)
DoS=logLCAO_mix.get_dos(label='LCAO ig_occ')
DoS.append_from_bandarray(logIG.evals,label='LCAO IGSpin')
DoS.plot()
```
## Occupation numbers
Finally, it is possible to set the occupation numbers for each state with the parameter `occupation`.
In this case, the direct minimization is done with these occupation numbers.
In the case of N, there are 8 orbitals; the first 4 are up and the other ones down.
Here we do a calculation following Hund's rule.
```
del inpt['ig_occupation']
inpt['occupation'] = { 'up': { 'Orbital 1': 1, 'Orbital 2': 1, 'Orbital 3': 1, 'Orbital 4': 1 }, # up
'down': { 'Orbital 1': 1, 'Orbital 2': 0, 'Orbital 3': 0, 'Orbital 4': 0 } }# down
logS = calc.run(input=inpt)
print(logSP_mix.energy, logLCAO_mix.energy, logIG.energy, logS.energy)
DoS.append_from_bandarray(logS.evals,label='SP occup')
DoS.plot()
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/01_MNIST_TPU_Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
<table><tr><td><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/keras-tensorflow-tpu300px.png" width="300" alt="Keras+Tensorflow+Cloud TPU"></td></tr></table>
This sample trains an "MNIST" handwritten digit recognition model on a GPU or TPU backend using a Keras model. Data are handled using the tf.data.Dataset API. This is a very simple sample provided for educational purposes. Do not expect outstanding TPU performance on a dataset as small as MNIST.
<h3><a href="https://cloud.google.com/gpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/gpu-hexagon.png" width="50"></a> Train on GPU or TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
1. Select a GPU or TPU backend (Runtime > Change runtime type)
1. Runtime > Run All (Watch out: the "Colab-only auth" cell requires user input)
<h3><a href="https://cloud.google.com/ml-engine/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/mlengine-hexagon.png" width="50"></a> Deploy to ML Engine</h3>
1. At the bottom of this notebook you can deploy your trained model to ML Engine for a serverless, autoscaled, REST API experience. You will need a GCP project and a GCS bucket for this last part.
TPUs are located in Google Cloud; for optimal performance, they read data directly from Google Cloud Storage (GCS).
### Parameters
```
BATCH_SIZE = 128 # On TPU, this will be the per-core batch size. A Cloud TPU has 8 cores so the global TPU batch size is 1024
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
```
### Imports
```
import os, re, math, json, shutil, pprint
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
```
### Colab-only auth for this notebook and the TPU
```
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
```
### tf.data.Dataset: parse files and prepare training and validation datasets
Please read the [best practices for building input pipelines](https://www.tensorflow.org/guide/performance/datasets) with tf.data.Dataset.
```
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# For TPU, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
```
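The training pipeline above applies cache → shuffle → repeat → batch → prefetch, with `drop_remainder=True` because TPUs need a fixed batch shape. As a framework-free sketch of what dropping the remainder means for step counting (the helper name is ours, not a tf.data API):

```python
def batch_count(n_items, batch_size, drop_remainder=True):
    # with drop_remainder=True the last partial batch is discarded,
    # so every batch has exactly batch_size items (required on TPU)
    full, rest = divmod(n_items, batch_size)
    return full if drop_remainder or rest == 0 else full + 1

steps_per_epoch = batch_count(60000, 32)  # matches 60000//BATCH_SIZE used for training
```

Since 60000 is divisible by 32 only up to a remainder of 32, the TPU simply sees 1875 full batches per epoch.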
### Let's have a look at the data
```
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
```
### Keras model: 3 convolutional layers, 2 dense layers
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: [Tensorflow and deep learning without a PhD](https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd/#featured-code-sample)
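As a quick in-place refresher on two of those terms: softmax converts the last layer's raw scores into a probability distribution, and categorical cross-entropy on a one-hot target reduces to minus the log of the probability assigned to the true class. A minimal dependency-free sketch (ours, not the Keras implementation):

```python
import math

def softmax(scores):
    # subtract the max for numerical stability, then normalize exponentials
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, one_hot):
    # -sum(y * log(p)); a one-hot target picks out -log(p_true)
    return -sum(y * math.log(p) for y, p in zip(one_hot, probs))

probs = softmax([2.0, 1.0, 0.1])
target = [1, 0, 0]  # true class is 0
loss = cross_entropy(probs, target)
```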
```
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs (with a batch size of 32)
l = tf.keras.layers
model = tf.keras.Sequential(
[
l.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
l.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm
l.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before "relu"
l.Activation('relu'), # activation after batch norm
l.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2),
l.BatchNormalization(scale=False, center=True),
l.Activation('relu'),
l.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),
l.BatchNormalization(scale=False, center=True),
l.Activation('relu'),
l.Flatten(),
l.Dense(200, use_bias=False),
l.BatchNormalization(scale=False, center=True),
l.Activation('relu'),
l.Dropout(0.5), # Dropout on dense layer only
l.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
# print model layers
model.summary()
# set up learning rate decay
lr_decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 0.0001 + 0.02 * math.pow(0.5, 1+epoch), verbose=True)
```
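The scheduler above follows $lr(epoch) = 0.0001 + 0.02 \cdot 0.5^{1+epoch}$: it starts near 0.01, halves every epoch, and decays toward a floor of 0.0001. Evaluated standalone as a sanity check:

```python
import math

def lr(epoch):
    # same formula as the LearningRateScheduler above:
    # exponential decay from ~0.01 toward a floor of 0.0001
    return 0.0001 + 0.02 * math.pow(0.5, 1 + epoch)

rates = [lr(e) for e in range(10)]  # one value per training epoch
```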
### Train and validate the model
```
EPOCHS = 10
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
tpu = None
trained_model = model
# Counting steps and batches on TPU: the tpu.keras_to_tpu_model API regards the batch size of the input dataset
# as the per-core batch size. The effective batch size is 8x larger because Cloud TPUs have 8 cores. It increments
# the step count by +8 every time a global batch (8 per-core batches) is processed. Therefore the batch size and steps_per_epoch
# settings can stay as they are for TPU training. The training will just go faster.
# Warning: this might change in the final version of the Keras/TPU API.
try: # TPU detection
tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # Picks up a connected TPU on Google's Colab, ML Engine, Kubernetes and Deep Learning VMs accessed through the 'ctpu up' utility
#tpu = tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME') # If auto-detection does not work, you can pass the name of the TPU explicitly (tip: on a VM created with "ctpu up" the TPU has the same name as the VM)
except ValueError:
print('Training on GPU/CPU')
if tpu: # TPU training
strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu)
trained_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
# Work in progress: reading directly from dataset object not yet implemented
# for Keras/TPU. Keras/TPU needs a function that returns a dataset.
history = trained_model.fit(training_input_fn, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_input_fn, validation_steps=1, callbacks=[lr_decay])
else: # GPU/CPU training
history = trained_model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[lr_decay])
```
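To make the per-core vs. effective batch bookkeeping described in the comments concrete, a tiny sketch (the core count of 8 matches the Cloud TPUs described above; the variable names are ours):

```python
CORES = 8                    # a Cloud TPU device exposes 8 cores
per_core_batch = 32          # the batch size of the input dataset
effective_batch = per_core_batch * CORES  # items consumed per global step
steps_per_global_batch = CORES            # the step counter advances by +8
```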
### Visualize training and validation curves
```
print(history.history.keys())
display_training_curves(history.history['acc'], history.history['val_acc'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
```
### Visualize predictions
```
# recognize digits from local fonts
probabilities = trained_model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = trained_model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
```
## Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
### Configuration
```
PROJECT = "" #@param {type:"string"}
BUCKET = "gs://" #@param {type:"string", default:"jddj"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "colabmnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', BUCKET), 'For this part, you need a GCS bucket. Head to http://console.cloud.google.com/storage and create one.'
```
### Export the model for serving from ML Engine
```
class ServingInput(tf.keras.layers.Layer):
# the important detail in this boilerplate code is "trainable=False"
def __init__(self, name, dtype, batch_input_shape=None):
super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)
def get_config(self):
return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }
def call(self, inputs):
# When the deployed model is called through its REST API,
# the JSON payload is parsed automatically, transformed into
# a tensor and passed to this input layer. You can perform
# additional transformations, such as decoding JPEGs for example,
# before sending the data to your model. However, you can only
# use tf.xxxx operations.
return inputs
# little wrinkle: must copy the model from TPU to CPU manually. This is a temporary workaround.
tf_logging.set_verbosity(tf_logging.INFO)
restored_model = model
restored_model.set_weights(trained_model.get_weights()) # this copies the weights from the TPU; it does nothing on GPU
tf_logging.set_verbosity(tf_logging.WARN)
# add the serving input layer
serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, 28*28)))
serving_model.add(restored_model)
export_path = tf.contrib.saved_model.save_keras_model(serving_model, os.path.join(BUCKET, 'keras_export')) # export the model to your bucket
export_path = export_path.decode('utf-8')
print("Model exported to: ", export_path)
```
### Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
```
# Create the model
if NEW_MODEL:
!gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
```
### Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
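The payload format is newline-delimited JSON: one instance object per line, keyed by the serving layer's input name. A minimal sketch of producing and parsing that format (the tiny 4-element vectors stand in for real 28*28 pixel lists):

```python
import json

# tiny stand-in vectors; real payloads carry 28*28 = 784 pixel values
instances = [[0.0, 0.1, 0.2, 0.3], [1.0, 0.9, 0.8, 0.7]]

# write: one JSON object per line, keyed by the serving input name
lines = [json.dumps({"serving_input": x}) for x in instances]
payload = "\n".join(lines) + "\n"

# read it back the way a line-oriented service would
decoded = [json.loads(line)["serving_input"] for line in payload.splitlines()]
```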
```
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because the ServingInput layer was named "serving". Keras appends "_input"
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
probabilities = np.stack([json.loads(p) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
predictions = np.argmax(probabilities, axis=1)
display_top_unrecognized(digits, predictions, labels, N, 100//N)
```
## License
---
author: Martin Gorner<br>
twitter: @martin_gorner
---
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
This is not an official Google product but sample code provided for an educational purpose
```
import folium
from folium.plugins import MarkerCluster
import pandas as pd
import branca
import json
import numpy as np
import vincent
import os
from folium.plugins import Draw
import numpy as np
from folium.plugins import HeatMap
print(folium.__version__)#show the installed version of the library
```
#### conda install -c conda-forge folium
#### conda install -c anaconda pandas
#### conda install -c conda-forge json-c
#### conda install -c anaconda numpy
#### conda install -c anaconda vincent
#### conda install -c conda-forge branca
#### conda install -c anaconda simplejson
#### conda install -c anaconda jinja2
#### conda install -c anaconda pytest
### Python Data. Leaflet.js Maps.
#### Folium builds on the data-wrangling strengths of the Python ecosystem and the mapping strengths of the Leaflet.js library. Manipulate your data in Python, then visualize it on a Leaflet map via Folium.
### Concepts
#### Folium makes it easy to visualize data that has been manipulated in Python on an interactive Leaflet map. It enables both the binding of data to a map for choropleth visualizations and the passing of Vincent/Vega visualizations as markers on the map.
#### The library has a number of built-in tilesets from OpenStreetMap, Mapbox, and Stamen, and supports custom tilesets with Mapbox or Cloudmade API keys. Folium supports GeoJSON and TopoJSON overlays, as well as binding data to those overlays to create choropleth maps with ColorBrewer color schemes.
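Under the hood, a choropleth assigns each region's value to one of a few ordered color bins. A library-free sketch of that binning step (the thresholds and the ColorBrewer-style hex ramp here are illustrative, not Folium's internals):

```python
def assign_color(value, thresholds, colors):
    # colors has one more entry than thresholds; return the color of the
    # first bin whose upper threshold the value does not exceed
    for upper, color in zip(thresholds, colors):
        if value <= upper:
            return color
    return colors[-1]

thresholds = [10, 20, 30]                              # bin edges (made up)
colors = ["#ffffcc", "#a1dab4", "#41b6c4", "#225ea8"]  # light-to-dark ramp
```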
<img width=650px src='http://www.reactiongifs.com/r/dph.gif'>
http://folium.readthedocs.io/en/latest/
```
#Define the coordinates where we want to center our map
Santiago_coords = [-33.448653, -70.656910] # geographic coordinates (lat, lon)
#Create the map
mi_map = folium.Map(location = Santiago_coords, zoom_start = 13)#The higher the zoom number, the closer in the view starts
#show the map
mi_map
#Define the coordinates we want for our markers
Cartografia_coords = [-33.448653, -70.656910]
U_Central_Campus_coords = [-33.451471, -70.654607]
Instituto_Geografico_coords = [-33.450637, -70.657675]
#Add markers to the map
folium.Marker(Cartografia_coords, popup = 'Escuela').add_to(mi_map)
folium.Marker(U_Central_Campus_coords, popup = 'central Campus').add_to(mi_map)
folium.Marker(Instituto_Geografico_coords, popup = 'IGM').add_to(mi_map)
#show the map
mi_map
#Add a tileset to our map
map_with_tiles = folium.Map(location = Santiago_coords , tiles = 'stamenwatercolor')
map_with_tiles
# Stamen Terrain, stamenwatercolor, openstreetmap, cartodbpositron, cartodbdark_matter, mapboxbright, mapboxcontrolroom
#save the map
# interactive marker
map_with_tiles.add_child(folium.ClickForMarker(popup="papu"))
#save the map
#map_with_tiles.save('bluemap.html')
#Use colored polygon markers instead of the default markers
polygon_map = folium.Map(location = Santiago_coords, zoom_start = 16)
Cartografia_coords = [-33.448653, -70.656910]
U_Central_Campus_coords = [-33.451471, -70.654607]
Instituto_Geografico_coords = [-33.450637, -70.657675]
#add markers to the map
folium.RegularPolygonMarker(Cartografia_coords, popup = 'Carto-geomatica', fill_color = '#00ff40',
number_of_sides = 3, radius = 10).add_to(polygon_map)
folium.RegularPolygonMarker(U_Central_Campus_coords, popup = 'campus central', fill_color = '#bf00ff',
number_of_sides = 5, radius = 10).add_to(polygon_map)
folium.RegularPolygonMarker(Instituto_Geografico_coords, popup = 'IGM', fill_color = '#ff0000',
number_of_sides = 8, radius = 10).add_to(polygon_map)
#circle with blue fill
folium.features.CircleMarker(
location=[-33.448653, -70.656910],
radius=50,
popup='utem',
color='#55cc31',#green
fill=True,
fill_color='#3186cc'
).add_to(polygon_map)
# Interactive marker
polygon_map.add_child(folium.ClickForMarker(popup="papu 1"))
#show the map
polygon_map
m = folium.Map(
location=[-33.448653, -70.656910],
tiles='Stamen Toner',
zoom_start=13
)
#crimson circle
folium.features.Circle(
radius=100,
location=[-33.451471, -70.654607],
popup='cartografia',
color='crimson',
fill=False,#whether to fill the circle
).add_to(m)
#circle with blue outline and magenta fill
folium.features.CircleMarker(
location=[-33.448653, -70.656910],
radius=50,
popup='utem',
color='#3186cc',
fill=True,
fill_color='#c131cc'
).add_to(m)
m
#create an interactive map
map_hooray = folium.Map(location=[-33.448653, -70.656910],
zoom_start = 11)
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
map_hooray.add_child(MeasureControl())
map_hooray
map_hooray = folium.Map(location=[-33.448653, -70.656910],
tiles = "Stamen Toner",
zoom_start = 15)
folium.Marker([-33.448653, -70.656910],
popup='carto',
icon=folium.Icon(color='green')
).add_to(map_hooray)
folium.Marker([-33.451471, -70.654607],
popup='igm',
icon=folium.Icon(color='red',icon='university', prefix='fa')
).add_to(map_hooray)
#icon=folium.Icon(color='blue',icon='bar-chart', prefix='fa') can be swapped in with any Font Awesome icon
#cloud
folium.Marker([-33.450637, -70.657675],
popup='u central',
icon=folium.Icon(color='red',icon='bicycle', prefix='fa')
).add_to(map_hooray)
map_hooray.add_child(folium.ClickForMarker(popup="bici papu"))
map_hooray
import folium
from folium import plugins
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
data =[[ -33.4073, -70.6531, 1900. ],
[ -33.4185, -70.6556, 3200. ],
[ -33.4116, -70.6509, 5800. ],
[ -33.4184, -70.6548, 2900. ],
[ -33.4178, -70.6515, 3312. ],
[ -33.4159, -70.6574, 2600. ],
[ -33.4192, -70.6537, 4299. ],
[ -33.4184, -70.6582 , 5750. ],
[ -33.4112, -70.6596, 3595. ]]
ma = folium.Map([-33.4426, -70.6568],
control_scale = True, zoom_start=12,tiles='mapboxbright')
plugins.HeatMap(data, radius = 20, min_opacity = 0.1, max_val = 50,gradient={.6: 'blue', .98: 'lime', 1: 'red'}).add_to(ma)
ma
center_pos = [-33.418043, -70.648273]
fmap = folium.Map(location=center_pos, zoom_start=15,tiles='cartodbdark_matter')
fmap.add_child(folium.Circle(location=center_pos,
color='green', # Circle color
radius=30, # Circle radius
popup='Skytree', # popup content
fill=True, #fill the interior
fill_opacity=0.5 #set the transparency
))
points = [[-33.4073,-70.6531],
[-33.417417,-70.652628],
[-33.418043, -70.648273]]
fmap.add_child(folium.PolyLine(locations=points, # List of coordinates
                               weight=8)) #Line width
import numpy as np
from folium.plugins import HeatMap
fmap = folium.Map(location=[-33.448653, -70.656910], zoom_start=12)
#Create random data
data = (np.random.normal(size=(100, 3)) * 0.02 *
np.array([[1, 1, 1]]) +
np.array([[-33.448653, -70.656910, 1]])).tolist()
fmap.add_child(HeatMap(data=data))
import numpy as np
from folium.plugins import HeatMapWithTime
center_pos = [-33.448653, -70.656910]
#Use numpy to create the initial data
initial_data = (np.random.normal(size=(200, 2)) *
np.array([[0.02, 0.02]]) +
np.array([center_pos]))
# Create successive frames of data
data = [initial_data.tolist()]
for i in range(20):
data.append((data[i] + np.random.normal(size=(200, 2)) * 0.001).tolist())
fmap = folium.Map(center_pos, zoom_start=11)
fmap.add_child(HeatMapWithTime(data)) # Show a time-animated heat map
#fmap.save('heatmap1.html')
import json
buoy_map = folium.Map(
[-33.055721,-71.708766],
zoom_start=11,
tiles='Stamen Terrain'
)
folium.RegularPolygonMarker(
[-33.055721,-71.708766],
fill_color='#43d9de',
radius=12,
popup=folium.Popup(max_width=450).add_child(
folium.Vega(json.load(open('vis1.json')), width=450, height=250))
).add_to(buoy_map)
folium.RegularPolygonMarker(
[-33.017004,-71.613186],
fill_color='#43d9de',
radius=12,
popup=folium.Popup(max_width=450).add_child(
folium.Vega(json.load(open('vis2.json')), width=450, height=250))
).add_to(buoy_map)
folium.RegularPolygonMarker(
[-32.941004,-71.600571],
fill_color='#43d9de',
radius=12,
popup=folium.Popup(max_width=450).add_child(
folium.Vega(json.load(open('vis3.json')), width=450, height=250))
).add_to(buoy_map)
buoy_map
#save the map
#buoy_map.save('mapacongrafico.html')
m = folium.Map([-33.448653, -70.656910], zoom_start=12,control_scale=True,# show the scale control
prefer_canvas=True
)
html = """
<h1> Esta es una gran ventana emergente</h1><br>
Con unas pocas líneas de código...
<p>
<code>
from folium import *<br>
html
</code>
</p>
"""
folium.Marker([-33.448653, -70.656910], popup=html).add_to(m)
m
# Let's create a figure with a map inside.
f = branca.element.Figure()
folium.Map([-33.448653, -70.656910], zoom_start=10).add_to(f)
# Let's put the figure in an IFrame.
iframe = branca.element.IFrame(width=500, height=300)
f.add_to(iframe)
# Let's put the IFrame in a popup
popup = folium.Popup(iframe, max_width=2650)
# Let's create another map
m = folium.Map([-33.448653, -70.656910], zoom_start=4)
# Let's put the Popup on a marker, on the second map.
folium.Marker([-33.448653, -70.656910], popup=popup).add_to(m)
m
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://tile.thunderforest.com/mobile-atlas/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0'
m = folium.Map(location=[-33.448653, -70.656910], tiles=tiles, attr=attr, zoom_start=14, control_scale=True,  # show the scale control
               prefer_canvas=True
               )
#Create random data
data = (np.random.normal(size=(100, 3)) * 0.02 *
        np.array([[1, 1, 1]]) +
        np.array([[-33.448653, -70.656910, 1]])).tolist()
m.add_child(HeatMap(data=data))
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='exit',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m.save(os.path.join('results', 'trabajofinal.html'))  # folder to save in, name of the html file
m
```
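Several cells above synthesize heat-map points by scaling standard-normal noise and shifting it onto a center coordinate. The same idea in isolation, using only the standard library (the seed and counts are arbitrary):

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
center = (-33.448653, -70.656910)  # Santiago, as in the cells above

# 100 (lat, lon) points scattered ~0.02 degrees around the center,
# mirroring the np.random.normal(...) * 0.02 + center pattern
points = [(random.gauss(center[0], 0.02), random.gauss(center[1], 0.02))
          for _ in range(100)]
mean_lat = sum(p[0] for p in points) / len(points)
mean_lon = sum(p[1] for p in points) / len(points)
```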
OpenCycleMap
https://tile.thunderforest.com/cycle/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Transport
https://tile.thunderforest.com/transport/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Landscape
https://tile.thunderforest.com/landscape/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Outdoors
https://tile.thunderforest.com/outdoors/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Transport Dark
https://tile.thunderforest.com/transport-dark/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Spinal Map
https://tile.thunderforest.com/spinal-map/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Pioneer
https://tile.thunderforest.com/pioneer/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Mobile Atlas
https://tile.thunderforest.com/mobile-atlas/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
Neighbourhood
https://tile.thunderforest.com/neighbourhood/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0
```
#!jupyter nbconvert --to tex Clase_folium_M.ipynb
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://tile.thunderforest.com/mobile-atlas/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0'
center_pos = [-33.448653, -70.656910]
#Use numpy to create the initial data
initial_data = (np.random.normal(size=(200, 2)) *
                np.array([[0.02, 0.02]]) +
                np.array([center_pos]))
# Create successive frames of data
data = [initial_data.tolist()]
for i in range(20):
    data.append((data[i] + np.random.normal(size=(200, 2)) * 0.001).tolist())
m = folium.Map(center_pos, tiles='Stamen Terrain', attr=attr, zoom_start=12, control_scale=True, prefer_canvas=True)  # map controls
m.add_child(HeatMapWithTime(data))  # Show a time-animated heat map
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
# info polyline
points = [[-33.4073,-70.6531],
          [-33.417417,-70.652628],
          [-33.418043, -70.648273]]
m.add_child(folium.PolyLine(locations=points,  # List of coordinates
                            weight=8))  # Line width
# circle with blue outline and magenta fill
folium.features.CircleMarker(
    location=[-33.448653, -70.656910],
    radius=190,
    popup='circle of interest',
    color='#3186cc',
    fill=True,
    fill_color='#c131cc'
).add_to(m)
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='by papu',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m.save(os.path.join('results', 'mapafinalcurso.html'))  # folder to save in, name of the html file
m
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://tile.thunderforest.com/cycle/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0'
m = folium.Map(location=[-33.448653, -70.656910], tiles=tiles, attr=attr, zoom_start=14, control_scale=True,  # show the scale control
               prefer_canvas=True
               )
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='exit',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://tile.thunderforest.com/transport/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0'
m = folium.Map(location=[-33.448653, -70.656910], tiles=tiles, attr=attr, zoom_start=12, control_scale=True,  # show the scale control
               prefer_canvas=True
               )
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='exit',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://tile.thunderforest.com/spinal-map/{z}/{x}/{y}.png?apikey=3cd85f11f4744c0c8c3bdaab8483cde0'
m = folium.Map(location=[-33.448653, -70.656910], tiles=tiles, attr=attr, zoom_start=12, control_scale=True,  # show the scale control
               prefer_canvas=True
               )
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='exit',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}'
m = folium.Map(location=[-33.448653, -70.656910], tiles=tiles, attr=attr, zoom_start=14, control_scale=True,  # show the scale control
               prefer_canvas=True
               )
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='exit',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m
from folium.plugins import Draw
attr = ('© <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> '
        'contributors, © <a href="http://cartodb.com/attributions">Camilo</a>')
tiles = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer/tile/{z}/{y}/{x}'
m = folium.Map(location=[-33.4873, -70.4675], tiles=tiles, attr=attr, zoom_start=14, control_scale=True,  # show the scale control
               prefer_canvas=True
               )
points = [[-33.4892,-70.4689],
          [-33.4850,-70.4676],
          [-33.4771,-70.4663]]
m.add_child(folium.PolyLine(locations=points,  # List of coordinates
                            weight=8))  # Line width
# Add a measure tool to the top-right corner
from folium.plugins import MeasureControl
m.add_child(MeasureControl())
m.add_child(folium.LatLngPopup())
#plugins.ScrollZoomToggler().add_to(m)
plugins.Fullscreen(
    position='topright',
    title='expand',
    title_cancel='exit',
    force_separate_button=True).add_to(m)
draw = Draw()
draw.add_to(m)
m
```
```
# This cell is for the Google Colaboratory
# https://stackoverflow.com/a/63519730
if 'google.colab' in str(get_ipython()):
# https://colab.research.google.com/notebooks/io.ipynb
import google.colab.drive as gcdrive
# may need to visit a link for the Google Colab authorization code
gcdrive.mount("/content/drive/")
import sys
sys.path.insert(0,"/content/drive/My Drive/Colab Notebooks/nmisp/50_ode")
# 그래프, 수학 기능 추가
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# 기호 연산 기능 추가
# Add symbolic operation capability
import sympy as sy
sy.init_printing()
```
# 룽게-쿠타법 (RK4)<br>Runge-Kutta Method (RK4)
오일러법과 훈의 방법은 $t=t_0$, $t=t_1$, ... 에서의 기울기만 사용하였다.<br>Euler Method and Heun's Method used slopes at $t=t_0$, $t=t_1$, ... only.
룽게-쿠타 법은 1900년대에 독일 수학자 칼 룽게와 마틴 쿠타가 공동 개발했던 상미분 방정식의 근사 해법의 모음이다.<br>Runge-Kutta methods are a group of numerical methods to solve ordinary differential equations that two German mathematicians Carl Runge and Martin Kutta developed in 1900s.
실은 오일러법이나 훈의 방법도 룽게-쿠타법에 포함된다.<br>In fact, Euler method or Heun's Method are included in the Runge-Kutta method.
룽게-쿠타법 가운데 **RK4**가 가장 널리 사용된다.<br>Among the Runge-Kutta methods, **RK4** is used the most frequently.
**RK4** 는 $t=t_1$ 에서의 해를 구하기 위해 $t=t_0$, $t=t_\frac{1}{2}=\frac{1}{2}\left(t_0+t_1\right)$, $t=t_1$ 에서의 기울기를 사용한다.<br>
To find a solution at $t=t_1$, **RK4** uses slopes at $t=t_0$, $t=t_\frac{1}{2}=\frac{1}{2}\left(t_0+t_1\right)$, and $t=t_1$.
$$
\begin{cases}
\frac{d}{dt}\mathbf{x}=f(t, \mathbf{x}) \\
\mathbf{x}(t_0)=\mathbf{x}_0
\end{cases}
$$
위 미분방정식의 경우, 시간 간격 $\Delta t=t_1 - t_0$ 일 때 절차는 다음과 같다.<br>
For the differential equation above, when time step $\Delta t=t_1 - t_0$, the procedure is as follows.
1. $t=t_0$ 에서의 기울기 $s_1=f(t_0, \mathbf{x}_0)$을 구한다.<br>
At $t=t_0$, find the slope $s_1=f(t_0, \mathbf{x}_0)$.
1. $t_0$ 로 부터 ${\frac{1}{2}} \Delta t = {\frac{1}{2}}(t_1 - t_0)$ 만큼 기울기 $s_1$을 따라 전진하여 $t=t_\frac{1}{2}$ 에서의 기울기 $s_2=f(t_{\frac{1}{2}}, \mathbf{x}_0 + s_1 {\frac{1}{2}} \Delta t)$을 구한다.<br>
Advancing from $t_0$ by ${\frac{1}{2}} \Delta t = {\frac{1}{2}}(t_1 - t_0)$ in time, along the slope $s_1$, find the slope at $t=t_\frac{1}{2}$, $s_2=f(t_{\frac{1}{2}}, \mathbf{x}_0 + s_1 {\frac{1}{2}} \Delta t)$.
1. 다시 한번, $t_0$에서 $t_{\frac{1}{2}}$ 까지 $s_2$를 따라 전진하여 $t=t_\frac{1}{2}$ 에서의 기울기 $s_3=f(t_{\frac{1}{2}}, \mathbf{x}_0 + s_2 {\frac{1}{2}} \Delta t)$을 구한다.<br>
Once again, by time-advancing from $t_0$ to $t=t_\frac{1}{2}$, along the slope $s_2$, find the slope $s_3=f(t_{\frac{1}{2}}, \mathbf{x}_0 + s_2 {\frac{1}{2}} \Delta t)$.
1. 이번에는 $t_0$에서 $t_1$ 까지 $s_3$를 따라 전진하여 $t=t_1$ 에서의 기울기 $s_4=f(t_1, \mathbf{x}_0 + s_3 \Delta t)$을 구한다.<br>
This time, by going forward from $t_0$ to $t_1$ along the slope $s_3$, find the slope at $t=t_1$, $s_4=f(t_1, \mathbf{x}_0 + s_3 \Delta t)$.
1. $t_0 \le t \le t_1$ 구간을 대표하는 기울기 $s=\frac{1}{6} \left( s_1 + 2s_2 + 2s_3 + s_4 \right)$을 구한다.<br>
Find the slope representing the interval $t_0 \le t \le t_1$, $s=\frac{1}{6} \left( s_1 + 2s_2 + 2s_3 + s_4 \right)$.
1. $t=t_1$ 에서 $\mathbf{x}(t_1) = \mathbf{x}_0 + s \Delta t$ 을 구한다.<br>
At $t=t_1$, find $\mathbf{x}(t_1) = \mathbf{x}_0 + s \Delta t$.
python 으로 써 보자.<br>Let's write it in Python.
```
def rk4_step(f, x0, t0, t1):
"""
One time step of Runge-Kutta method
f: dx_dt function
x0 : initial condition
t0 : this step time
t1 : next step time
"""
delta_t = (t1 - t0)
delta_t_half = delta_t * 0.5
t_half = t0 + delta_t_half
# Step 1
s1 = f(t0, x0)
# Step 2
s2 = f(t_half, x0 + s1 * delta_t_half)
# Step 3
s3 = f(t_half, x0 + s2 * delta_t_half)
# Step 4
s4 = f(t1, x0 + s3 * delta_t)
# Step 5
s = (1.0 / 6.0) * (s1 + (s2 + s3) * 2 + s4)
# Step 6
x1 = x0 + s * delta_t
return x1
def rk4(f, t_array, x_0):
time_list = [t_array[0]]
result_list = [x_0]
x_i = x_0
for k, t_i in enumerate(t_array[:-1]):
# time step
x_i_plus_1 = rk4_step(f, x_i, t_i, t_array[k+1])
time_list.append(t_array[k+1])
result_list.append(x_i_plus_1)
x_i = x_i_plus_1
return time_list, result_list
```
다시 1계 선형 상미분 방정식의 예를 살펴 보자.<br>Let's reconsider the 1st order linear ODE.
$$
\left\{
\begin{align}
a_0 \frac{d}{dt}x(t)+a_1 x(t)&=0 \\
x(0)&=x_0 \\
\end{align}
\right.
$$
룽게-쿠타법 결과를 엄밀해, 오일러법, 훈법과 비교해보자.<br>Let's compare the result from Runge-Kutta method with the exact solution, Euler method, and Heun's method.
```
a_0, a_1 = 2.0, 1.0
def dx_dt(t, x):
return - a_1 * x / a_0
def exact(t):
return x_0 * py.exp((-a_1 / a_0) * t)
import ode_solver
```
$\Delta t$
```
delta_t = 1.0
t_sec_array = np.arange(0, 6 + delta_t*0.5, delta_t)
```
초기값<br>Initial value<br>
$x(t_0)$
```
x_0 = 4.5
```
오일러법<br>Euler method
```
t_euler_out, x_euler_out = ode_solver.euler(dx_dt, t_sec_array, x_0)
```
훈의 방법<br>
Heun's method
```
t_heun__out, x_heun__out = ode_solver.heun(dx_dt, t_sec_array, x_0)
```
RK4
```
t_rk4___out, x_rk4___out = rk4(dx_dt, t_sec_array, x_0)
```
근사해 그림<br>
Plots of approximate solutions
```
import ode_plot
# Approximate solutions
py.plot(t_euler_out, x_euler_out, '.-', label='Euler')
py.plot(t_heun__out, x_heun__out, '.-', label='Heun')
py.plot(t_rk4___out, x_rk4___out, '.-', label='RK4')
# *** Exact Solution
t_exact_array = np.linspace(0, 6)
exact = ode_plot.ExactPlotterFirstOrderODE(t_exact_array)
exact.plot()
py.xlabel('t(sec)')
py.ylabel('x(m)')
py.legend(loc=0)
py.grid(True)
```
룽게-쿠타법의 해가 엄밀해에 더 가까움을 알 수 있다.<br>
We can see that the Runge-Kutta solution is closer to the exact solution.
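The "closer to exact" claim can be quantified by checking RK4's order of accuracy. The sketch below is self-contained (it re-implements `rk4_step` rather than importing the course's `ode_solver` module) and solves the same ODE, $2\frac{d}{dt}x + x = 0$: halving $\Delta t$ should shrink the final error by roughly $2^4 = 16$ for a fourth-order method.

```python
import math

def rk4_step(f, x0, t0, t1):
    # One RK4 step, same formulas as above
    dt = t1 - t0
    t_half = t0 + dt * 0.5
    s1 = f(t0, x0)
    s2 = f(t_half, x0 + s1 * dt * 0.5)
    s3 = f(t_half, x0 + s2 * dt * 0.5)
    s4 = f(t1, x0 + s3 * dt)
    return x0 + (s1 + 2 * s2 + 2 * s3 + s4) / 6.0 * dt

def final_error(delta_t, t_end=6.0, x0=4.5):
    # dx/dt = -x/2 (a_0 = 2, a_1 = 1); exact solution x0 * exp(-t/2)
    f = lambda t, x: -0.5 * x
    x, t = x0, 0.0
    for _ in range(round(t_end / delta_t)):
        x = rk4_step(f, x, t, t + delta_t)
        t += delta_t
    return abs(x - x0 * math.exp(-0.5 * t_end))

ratio = final_error(0.25) / final_error(0.125)
print(ratio)  # close to 16
```

The observed ratio is close to (slightly above) 16, as expected for a 4th-order method once $\Delta t$ is small enough for the leading error term to dominate.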
## Scipy
```
import scipy.integrate as si
sol = si.solve_ivp(dx_dt, (t_heun__out[0], t_heun__out[-1]), [x_0], t_eval=t_heun__out)
py.plot(sol.t, sol.y[0, :], 'o', label='solve_ivp')
py.plot(t_euler_out, x_euler_out, '.-', label='Euler')
py.plot(t_heun__out, x_heun__out, '*-', label='Heun')
py.plot(t_rk4___out, x_rk4___out, '.-', label='RK4')
# plot exact solution
exact = ode_plot.ExactPlotterFirstOrderODE(py.array(t_rk4___out))
exact.plot()
py.grid(True)
py.xlabel('t(sec)')
py.ylabel('y(t)')
py.legend(loc=0);
import pandas as pd
df = pd.DataFrame(
data={
'euler':x_euler_out,
'heun' :x_heun__out,
'rk4' :x_rk4___out,
'solve_ivp':sol.y[0, :],
'exact':exact.exact(py.array(t_heun__out))
},
index=pd.Series(t_heun__out, name='t(sec)'),
columns=['exact', 'euler', 'heun', 'rk4', 'solve_ivp']
)
df['euler_error'] = df.euler - df.exact
df['heun_error'] = df.heun - df.exact
df['rk4_error'] = df.rk4 - df.exact
df['solve_ivp_error'] = df.solve_ivp - df.exact
```
표 형태<br>Table form
```
pd.set_option('display.max_rows', 10)
df
```
각종 통계<br>Statistics
```
df.describe()
```
이 경우, RK4 오차에 대한 의견은?<br>
In this case, what do you think about the error of the RK4?
```
import numpy.linalg as nl
nl.norm(df.euler_error), nl.norm(df.heun_error), nl.norm(df.rk4_error), nl.norm(df.solve_ivp_error),
```
### 계산시간<br>Computation time
$\Delta t$ 을 더 작은 값으로 바꾸어 보자.<br>Let's try a smaller $\Delta t$.
```
delta_t = 1e-3
t_sec_array = np.arange(0, 6 + delta_t*0.5, delta_t)
```
오일러법<br>Euler method
```
%%timeit -n100
t_euler_out, x_euler_out = ode_solver.euler(dx_dt, t_sec_array, x_0)
```
훈의 방법<br>
Heun's method
```
%%timeit -n100
t_heun__out, x_heun__out = ode_solver.heun(dx_dt, t_sec_array, x_0)
```
RK4
```
%%timeit -n100
t_rk4___out, x_rk4___out = rk4(dx_dt, t_sec_array, x_0)
```
계산의 단계 수가 더 많은 해법일 수록 계산 시간이 많이 필요한 것으로 보인다.<br>
Methods with more computational stages per step appear to require more computation time.
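The cost difference comes from the number of slope (derivative) evaluations per time step: Euler uses 1, Heun 2, and RK4 4. A hedged, self-contained sketch counting the calls (these one-step functions re-implement the formulas above rather than importing `ode_solver`):

```python
def euler_step(f, x0, t0, t1):
    return x0 + f(t0, x0) * (t1 - t0)

def heun_step(f, x0, t0, t1):
    dt = t1 - t0
    s1 = f(t0, x0)
    s2 = f(t1, x0 + s1 * dt)
    return x0 + 0.5 * (s1 + s2) * dt

def rk4_step(f, x0, t0, t1):
    dt = t1 - t0
    t_half = t0 + dt * 0.5
    s1 = f(t0, x0)
    s2 = f(t_half, x0 + s1 * dt * 0.5)
    s3 = f(t_half, x0 + s2 * dt * 0.5)
    s4 = f(t1, x0 + s3 * dt)
    return x0 + (s1 + 2 * s2 + 2 * s3 + s4) / 6.0 * dt

def evals_per_step(step):
    calls = []
    f = lambda t, x: calls.append(t) or -x   # count each call to f
    step(f, 1.0, 0.0, 0.1)
    return len(calls)

print([evals_per_step(s) for s in (euler_step, heun_step, rk4_step)])
# [1, 2, 4]
```

So per step RK4 does roughly 4x the work of Euler, which is consistent with the `%%timeit` results above.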
## 연습 문제<br>Exercises
다음 미분방정식의 엄밀해를 구하시오:<br>
Find exact solution of the following differential equation:
$$
\begin{align}
10 \frac{d}{dt}x(t) + 50 x(t) &= 0 \\
x(0) &= 2
\end{align}
$$
위 미분방정식의 수치해를 오일러법으로 구하시오.<br>
Find numerical solution of the above differential equation using Euler Method.
위 미분방정식의 수치해를 훈의 방법으로 구하고 엄밀해, 오일러법과 비교하시오.<br>
Find numerical solution of the above differential equation using Heun's method and compare with exact solution and Euler Method.
위 미분방정식의 수치해를 RK4법으로 구하고 엄밀해, 오일러법, 훈의 방법과 비교하시오.<br>
Find numerical solution of the above differential equation using RK4 and compare with exact solution, Euler Method, and Heun's Method.
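As a self-check for the exercises, here is a hedged sketch that does not use the course's `ode_solver` module: since $10 \frac{d}{dt}x + 50x = 0$ gives $\frac{d}{dt}x = -5x$, the exact solution is $x(t) = 2e^{-5t}$, and a numerical run should show RK4 far closer to it than Euler.

```python
import math

def dx_dt(t, x):
    return -5.0 * x  # from 10 x' + 50 x = 0

def euler_step(f, x0, t0, t1):
    return x0 + f(t0, x0) * (t1 - t0)

def rk4_step(f, x0, t0, t1):
    dt = t1 - t0
    t_half = t0 + dt * 0.5
    s1 = f(t0, x0)
    s2 = f(t_half, x0 + s1 * dt * 0.5)
    s3 = f(t_half, x0 + s2 * dt * 0.5)
    s4 = f(t1, x0 + s3 * dt)
    return x0 + (s1 + 2 * s2 + 2 * s3 + s4) / 6.0 * dt

delta_t, t_end, x0 = 0.01, 1.0, 2.0
x_euler = x_rk4 = x0
t = 0.0
for _ in range(round(t_end / delta_t)):
    x_euler = euler_step(dx_dt, x_euler, t, t + delta_t)
    x_rk4 = rk4_step(dx_dt, x_rk4, t, t + delta_t)
    t += delta_t

exact = x0 * math.exp(-5.0 * t_end)
e_euler, e_rk4 = abs(x_euler - exact), abs(x_rk4 - exact)
print(e_euler, e_rk4)  # RK4 error is orders of magnitude smaller
```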
## Final Bell<br>마지막 종
```
# stackoverflow.com/a/24634221
import os
os.system("printf '\a'");
```
|
github_jupyter
|
```
%pylab inline
import os
import keras
import metrics
import numpy as np
import pandas as pd
import seaborn as sns
import keras.backend as K
import glob
from scipy.io import loadmat
from IPython.display import display, clear_output
from time import time
from keras import callbacks
from keras.models import Model, Sequential
from keras.optimizers import SGD
from keras.layers import Input, Dense, Dropout, Conv2D, MaxPool2D, UpSampling2D, Activation
from keras.initializers import VarianceScaling
from keras.engine.topology import Layer, InputSpec
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, confusion_matrix
images = loadmat("C:\\Users\\ustundag\\GitHub\\2D-3D-Semantics\\noXYZ_area_3_no_xyz_data_rgb_90x90.mat")
images = images["rgb"]
labels = loadmat("C:\\Users\\ustundag\\GitHub\\2D-3D-Semantics\\noXYZ_area_3_no_xyz_data_rgb_90x90_labels.mat")
labels = labels["labels"]
images.shape
# Assign ground truth labels
labels_gt = labels[0]
# Split dataset into train and test
x_train = images[:3000] / 255.0
x_test = images[-704:] / 255.0
y_train = labels_gt[:3000]
y_test = labels_gt[-704:]
set(labels_gt)
def get_room_type(label):
if label == 0: return 'WC'
if label == 1: return 'conferenceRoom'
if label == 2: return 'hallway'
if label == 3: return 'lounge'
if label == 4: return 'office'
if label == 5: return 'storage'
i = 1234
pylab.imshow(x_train[i].reshape(90, 90), cmap='gray')
pylab.show()
print('Room type: ' + get_room_type(y_train[i]))
```
### KMeans Basic Implementation
```
km = KMeans(n_jobs=-1, n_clusters = 6, n_init=20)
km.fit(x_train)
pred = km.predict(x_test)
set(pred)
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
normalized_mutual_info_score(y_test, pred)
```
### Autoencoder + KMeans
```
# this is our input placeholder
input_img = Input(shape=(8100,))
# "encoded" is the encoded representation of the input
encoded = Dense(500, activation='relu')(input_img)
encoded = Dense(500, activation='relu')(encoded)
encoded = Dense(2000, activation='relu')(encoded)
encoded = Dense(30, activation='sigmoid')(encoded)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(2000, activation='relu')(encoded)
decoded = Dense(500, activation='relu')(decoded)
decoded = Dense(500, activation='relu')(decoded)
decoded = Dense(8100)(decoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
autoencoder.summary()
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
autoencoder.compile(optimizer='adam', loss='mse')
train_history = autoencoder.fit(x_train, x_train,
epochs=10,
batch_size=32,
shuffle=True,
validation_data=(x_test, x_test))
pred_auto_train = encoder.predict(x_train)
pred_auto = encoder.predict(x_test)
km.fit(pred_auto_train)
pred = km.predict(pred_auto)
set(pred)
normalized_mutual_info_score(y_test, pred)
```
### ConvAutoencoder + KMeans (currently not working, in progress...)
```
# Reshape the images
x_train_s = x_train.reshape(-1,90,90,1)
x_test_s = x_test.reshape(-1,90,90,1)
x_test_s[0].shape
# Build the autoencoder
model = Sequential()
model.add(Conv2D(45, kernel_size=3, padding='same', activation='relu', input_shape=(90,90,1)))
model.add(MaxPool2D((3,3), padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(15, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPool2D((3,3), padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(15, kernel_size=3, padding='same', activation='relu'))
model.add(UpSampling2D((3,3)))
model.add(Dropout(0.2))
model.add(Conv2D(45, kernel_size=3, padding='same', activation='relu'))
model.add(UpSampling2D((3,3)))
model.add(Dropout(0.2))
model.add(Conv2D(1, kernel_size=3, padding='same', activation='relu'))
model.compile(optimizer='adam', loss="mse")
model.summary()
# Train the model
model.fit(x_train_s, x_train_s, epochs=10, batch_size=64, validation_data=(x_test_s, x_test_s), verbose=1)
# Fitting testing dataset
restored_testing_dataset = model.predict(x_test_s)
# Observe the reconstructed image quality
plt.figure(figsize=(20,5))
for i in range(5):
index = y_test.tolist().index(i)
plt.subplot(2, 6, i+1)
plt.imshow(x_test_s[index].reshape((90,90)), cmap='gray')
plt.gray()
plt.subplot(2, 6, i+7)
plt.imshow(restored_testing_dataset[index].reshape((90,90)), cmap='gray')
plt.gray()
# Extract the encoder
encoder = K.function([model.layers[0].input], [model.layers[4].output])
# Encode the training set
encoded_images = encoder([x_test_s])[0].reshape(-1, 10*10*15)
encoded_images.shape
# Cluster the training set
kmeans = KMeans(n_clusters = 6)
clustered_training_set = kmeans.fit_predict(encoded_images)
# Observe and compare clustering result with actual label using confusion matrix
cm = confusion_matrix(y_test, clustered_training_set)
plt.figure(figsize=(8, 8))
sns.heatmap(cm, annot=True, fmt="d")
plt.title("Confusion matrix", fontsize=20)
plt.ylabel('True label', fontsize=15)
plt.xlabel('Clustering label', fontsize=15)
plt.show()
# Plot the actual pictures grouped by clustering
fig = plt.figure(figsize=(20,20))
for r in range(6):
cluster = cm[r].argmax()
for c, val in enumerate(x_test_s[clustered_training_set == cluster][0:6]):
fig.add_subplot(6, 6, 6*r+c+1)
plt.imshow(val.reshape((90,90)))
plt.gray()
plt.xticks([])
plt.yticks([])
plt.xlabel('cluster: '+str(cluster))
plt.ylabel('digit: '+str(r))
normalized_mutual_info_score(y_test, clustered_training_set)
```
### Deep Embedded Clustering (DEC) implementation
```
from time import time
import numpy as np
import keras.backend as K
from keras.engine.topology import Layer, InputSpec
from keras.layers import Dense, Input
from keras.models import Model
from keras.optimizers import SGD
from keras import callbacks
from keras.initializers import VarianceScaling
from sklearn.cluster import KMeans
"""
Keras implementation for Deep Embedded Clustering (DEC) algorithm:
Original Author:
Xifeng Guo. 2017.1.30
"""
def autoencoder(dims, act='relu', init='glorot_uniform'):
"""
Fully connected auto-encoder model, symmetric.
Arguments:
dims: list of number of units in each layer of encoder. dims[0] is input dim, dims[-1] is units in hidden layer.
The decoder is symmetric with encoder. So number of layers of the auto-encoder is 2*len(dims)-1
act: activation, not applied to Input, Hidden and Output layers
return:
(ae_model, encoder_model), Model of autoencoder and model of encoder
"""
n_stacks = len(dims) - 1
# input
x = Input(shape=(dims[0],), name='input')
h = x
# internal layers in encoder
for i in range(n_stacks-1):
h = Dense(dims[i + 1], activation=act, kernel_initializer=init, name='encoder_%d' % i)(h)
# hidden layer
h = Dense(dims[-1], kernel_initializer=init, name='encoder_%d' % (n_stacks - 1))(h) # hidden layer, features are extracted from here
y = h
# internal layers in decoder
for i in range(n_stacks-1, 0, -1):
y = Dense(dims[i], activation=act, kernel_initializer=init, name='decoder_%d' % i)(y)
# output
y = Dense(dims[0], kernel_initializer=init, name='decoder_0')(y)
return Model(inputs=x, outputs=y, name='AE'), Model(inputs=x, outputs=h, name='encoder')
class ClusteringLayer(Layer):
"""
Clustering layer converts input sample (feature) to soft label, i.e. a vector that represents the probability of the
sample belonging to each cluster. The probability is calculated with student's t-distribution.
# Example
```
model.add(ClusteringLayer(n_clusters=6))
```
# Arguments
n_clusters: number of clusters.
        weights: list of Numpy arrays with shape `(n_clusters, n_features)` which represents the initial cluster centers.
alpha: parameter in Student's t-distribution. Default to 1.0.
# Input shape
2D tensor with shape: `(n_samples, n_features)`.
# Output shape
2D tensor with shape: `(n_samples, n_clusters)`.
"""
def __init__(self, n_clusters, weights=None, alpha=1.0, **kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(ClusteringLayer, self).__init__(**kwargs)
self.n_clusters = n_clusters
self.alpha = alpha
self.initial_weights = weights
self.input_spec = InputSpec(ndim=2)
def build(self, input_shape):
assert len(input_shape) == 2
input_dim = input_shape[1]
self.input_spec = InputSpec(dtype=K.floatx(), shape=(None, input_dim))
self.clusters = self.add_weight((self.n_clusters, input_dim), initializer='glorot_uniform', name='clusters')
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
self.built = True
def call(self, inputs, **kwargs):
""" student t-distribution, as same as used in t-SNE algorithm.
q_ij = 1/(1+dist(x_i, u_j)^2), then normalize it.
Arguments:
inputs: the variable containing data, shape=(n_samples, n_features)
Return:
q: student's t-distribution, or soft labels for each sample. shape=(n_samples, n_clusters)
"""
q = 1.0 / (1.0 + (K.sum(K.square(K.expand_dims(inputs, axis=1) - self.clusters), axis=2) / self.alpha))
q **= (self.alpha + 1.0) / 2.0
q = K.transpose(K.transpose(q) / K.sum(q, axis=1))
return q
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) == 2
return input_shape[0], self.n_clusters
def get_config(self):
config = {'n_clusters': self.n_clusters}
base_config = super(ClusteringLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
class DEC(object):
def __init__(self,
dims,
n_clusters=6,
alpha=1.0,
init='glorot_uniform'):
super(DEC, self).__init__()
self.dims = dims
self.input_dim = dims[0]
self.n_stacks = len(self.dims) - 1
self.n_clusters = n_clusters
self.alpha = alpha
self.autoencoder, self.encoder = autoencoder(self.dims, init=init)
# prepare DEC model
clustering_layer = ClusteringLayer(self.n_clusters, name='clustering')(self.encoder.output)
self.model = Model(inputs=self.encoder.input, outputs=clustering_layer)
def pretrain(self, x, y=None, optimizer='adam', epochs=200, batch_size=256, save_dir='results/temp'):
print('...Pretraining...')
self.autoencoder.compile(optimizer=optimizer, loss='mse')
csv_logger = callbacks.CSVLogger(save_dir + '/pretrain_log.csv')
cb = [csv_logger]
if y is not None:
class PrintACC(callbacks.Callback):
def __init__(self, x, y):
self.x = x
self.y = y
super(PrintACC, self).__init__()
def on_epoch_end(self, epoch, logs=None):
if epoch % int(epochs/10) != 0:
return
feature_model = Model(self.model.input,
self.model.get_layer(
'encoder_%d' % (int(len(self.model.layers) / 2) - 1)).output)
features = feature_model.predict(self.x)
km = KMeans(n_clusters=len(np.unique(self.y)), n_init=20, n_jobs=4)
y_pred = km.fit_predict(features)
# print()
print(' '*8 + '|==> acc: %.4f, nmi: %.4f <==|'
% (metrics.acc(self.y, y_pred), metrics.nmi(self.y, y_pred)))
cb.append(PrintACC(x, y))
# begin pretraining
t0 = time()
self.autoencoder.fit(x, x, batch_size=batch_size, epochs=epochs, callbacks=cb)
print('Pretraining time: ', time() - t0)
self.autoencoder.save_weights(save_dir + '/ae_weights.h5')
print('Pretrained weights are saved to %s/ae_weights.h5' % save_dir)
self.pretrained = True
def load_weights(self, weights): # load weights of DEC model
self.model.load_weights(weights)
def extract_features(self, x):
return self.encoder.predict(x)
def predict(self, x): # predict cluster labels using the output of clustering layer
q = self.model.predict(x, verbose=0)
return q.argmax(1)
@staticmethod
def target_distribution(q):
weight = q ** 2 / q.sum(0)
return (weight.T / weight.sum(1)).T
def compile(self, optimizer='sgd', loss='kld'):
self.model.compile(optimizer=optimizer, loss=loss)
def fit(self, x, y=None, maxiter=2e4, batch_size=256, tol=1e-3,
update_interval=140, save_dir='./results/temp'):
print('Update interval', update_interval)
save_interval = x.shape[0] / batch_size * 5 # 5 epochs
print('Save interval', save_interval)
# Step 1: initialize cluster centers using k-means
t1 = time()
print('Initializing cluster centers with k-means.')
kmeans = KMeans(n_clusters=self.n_clusters, n_init=20)
y_pred = kmeans.fit_predict(self.encoder.predict(x))
y_pred_last = np.copy(y_pred)
self.model.get_layer(name='clustering').set_weights([kmeans.cluster_centers_])
# Step 2: deep clustering
# logging file
import csv
logfile = open(save_dir + '/dec_log.csv', 'w')
logwriter = csv.DictWriter(logfile, fieldnames=['iter', 'acc', 'nmi', 'ari', 'loss'])
logwriter.writeheader()
loss = 0
index = 0
index_array = np.arange(x.shape[0])
for ite in range(int(maxiter)):
if ite % update_interval == 0:
q = self.model.predict(x, verbose=0)
p = self.target_distribution(q) # update the auxiliary target distribution p
# evaluate the clustering performance
y_pred = q.argmax(1)
if y is not None:
acc = np.round(metrics.acc(y, y_pred), 5)
nmi = np.round(metrics.nmi(y, y_pred), 5)
ari = np.round(metrics.ari(y, y_pred), 5)
loss = np.round(loss, 5)
logdict = dict(iter=ite, acc=acc, nmi=nmi, ari=ari, loss=loss)
logwriter.writerow(logdict)
print('Iter %d: acc = %.5f, nmi = %.5f, ari = %.5f' % (ite, acc, nmi, ari), ' ; loss=', loss)
# check stop criterion
delta_label = np.sum(y_pred != y_pred_last).astype(np.float32) / y_pred.shape[0]
y_pred_last = np.copy(y_pred)
if ite > 0 and delta_label < tol:
print('delta_label ', delta_label, '< tol ', tol)
print('Reached tolerance threshold. Stopping training.')
logfile.close()
break
# train on batch
# if index == 0:
# np.random.shuffle(index_array)
idx = index_array[index * batch_size: min((index+1) * batch_size, x.shape[0])]
self.model.train_on_batch(x=x[idx], y=p[idx])
index = index + 1 if (index + 1) * batch_size <= x.shape[0] else 0
# save intermediate model
if ite % save_interval == 0:
print('saving model to:', save_dir + '/DEC_model_' + str(ite) + '.h5')
self.model.save_weights(save_dir + '/DEC_model_' + str(ite) + '.h5')
ite += 1
# save the trained model
logfile.close()
print('saving model to:', save_dir + '/DEC_model_final.h5')
self.model.save_weights(save_dir + '/DEC_model_final.h5')
return y_pred
import sys
sys.path.insert(0, 'Deep_Embedding_Clustering')
from Deep_Embedding_Clustering import metrics
# setting the hyper parameters
init = 'glorot_uniform'
pretrain_optimizer = 'adam'
dataset = 'mnist'
batch_size = 32
maxiter = 2e4
tol = 0.001
save_dir = 'results'
import os
if not os.path.exists(save_dir):
os.makedirs(save_dir)
update_interval = 200
pretrain_epochs = 50
init = VarianceScaling(scale=1. / 3., mode='fan_in',
distribution='uniform') # [-limit, limit], limit=sqrt(1./fan_in)
#pretrain_optimizer = SGD(lr=1, momentum=0.9)
# prepare the DEC model
dec = DEC(dims=[x_train.shape[-1], 500, 500, 2000, 10], n_clusters=6, init=init)
dec.pretrain(x=x_train, y=y_train, optimizer=pretrain_optimizer,
epochs=pretrain_epochs, batch_size=batch_size,
save_dir=save_dir)
dec.model.summary()
dec.compile(optimizer=SGD(0.01, 0.9), loss='kld')
y_pred = dec.fit(x_train, y=y_train, tol=tol, maxiter=maxiter, batch_size=batch_size,
update_interval=update_interval, save_dir=save_dir)
pred_val = dec.predict(x_test)
set(pred_val)
normalized_mutual_info_score(y_test, pred_val)
```
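The two formulas at the heart of DEC above — the Student's-t soft assignment `q` computed in `ClusteringLayer.call`, and the sharpened auxiliary target `p = q²/f` from `target_distribution` — can be sketched in plain NumPy (an illustration with random data, not the Keras code):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 10))    # 5 embedded samples, 10 features
mu = rng.normal(size=(3, 10))   # 3 cluster centers
alpha = 1.0

# soft assignment: q_ij proportional to (1 + ||z_i - mu_j||^2 / alpha)^(-(alpha+1)/2)
d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # squared distances, (5, 3)
q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
q = q / q.sum(axis=1, keepdims=True)          # rows sum to 1 (soft labels)

# auxiliary target distribution: square q, normalize by cluster frequency f
w = q ** 2 / q.sum(axis=0)                    # emphasize confident assignments
p = w / w.sum(axis=1, keepdims=True)          # rows sum to 1 again

print(q.sum(axis=1), p.sum(axis=1))  # both all ones
```

Training then minimizes KL(p‖q), pulling the soft assignments toward the sharper targets, which is why the DEC model above compiles with `loss='kld'`.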
|
github_jupyter
|
```
import numpy as np
import pandas as pd
df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
                       skipfooter=2
)
print('Data downloaded and read into a dataframe!')
df_can.head()
print(df_can.shape)
# clean up the dataset to remove unnecessary columns (eg. REG)
df_can.drop(['AREA', 'REG', 'DEV', 'Type', 'Coverage'], axis=1, inplace=True)
# let's rename the columns so that they make sense
df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace=True)
# for sake of consistency, let's also make all column labels of type string
df_can.columns = list(map(str, df_can.columns))
# set the country name as index - useful for quickly looking up countries using .loc method
df_can.set_index('Country', inplace=True)
# add total column
df_can['Total'] = df_can.sum(axis=1)
# years that we will be using in this lesson - useful for plotting later on
years = list(map(str, range(1980, 2014)))
print('data dimensions:', df_can.shape)
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print('Matplotlib version: ', mpl.__version__) # >= 2.0.0
# group countries by continents and apply sum() function
df_continents = df_can.groupby('Continent', axis=0).sum()
# note: the output of the groupby method is a `groupby` object.
# we cannot use it further until we apply a function (eg .sum())
print(type(df_can.groupby('Continent', axis=0)))
df_continents.head()
# autopct create %, start angle represent starting point
df_continents['Total'].plot(kind='pie',
figsize=(5, 6),
autopct='%1.1f%%', # add in percentages
startangle=90, # start angle 90° (Africa)
shadow=True, # add shadow
)
plt.title('Immigration to Canada by Continent [1980 - 2013]')
plt.axis('equal') # Sets the pie chart to look like a circle.
plt.show()
colors_list = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue', 'lightgreen', 'pink']
explode_list = [0.1, 0, 0, 0, 0.1, 0.1] # ratio for each continent with which to offset each wedge.
df_continents['Total'].plot(kind='pie',
figsize=(15, 6),
autopct='%1.1f%%',
startangle=90,
shadow=True,
labels=None, # turn off labels on pie chart
pctdistance=1.12, # the ratio between the center of each pie slice and the start of the text generated by autopct
colors=colors_list, # add custom colors
explode=explode_list # 'explode' lowest 3 continents
)
# scale the title up by 12% to match pctdistance
plt.title('Immigration to Canada by Continent [1980 - 2013]', y=1.12)
plt.axis('equal')
# add legend
plt.legend(labels=df_continents.index, loc='upper left')
plt.show()
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import warnings
import pandas as pd
import numpy as np
import os
import sys # error msg, add the modules
import operator # sorting
from math import *
import matplotlib.pyplot as plt
sys.path.append('../../')
import read_trace
import cuda_timeline
# from avgblkmodel import *
from ModelParam import *
import cke
from df_util import *
#from model_cke import *
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
```
# gpu info
```
gtx950 = DeviceInfo()
gtx950.sm_num = 6
gtx950.sharedmem_per_sm = 49152
gtx950.reg_per_sm = 65536
gtx950.maxthreads_per_sm = 2048
```
# 2 stream info
```
# 10M for mem_mem : where the h2d between streams are overlapped
trace_file = 'trace_10M_s1.csv'
trace_file_2cke = 'trace_h2d_h2d_ovlp.csv'
df_trace = read_trace.trace2dataframe(trace_file) # read the trace to the dataframe
df_trace_2cke = read_trace.trace2dataframe(trace_file_2cke)
#df_trace
#cuda_timeline.plot_trace(df_trace)
df_trace_2cke
cuda_timeline.plot_trace(df_trace_2cke)
```
# 1cke - read trace and reset the timeline
```
df_single_stream = read_trace.get_timing(df_trace)
df_single_stream
df_s1 = read_trace.reset_starting(df_single_stream)
df_s1
```
### 2cke case
```
df_2stream = read_trace.get_timing(df_trace_2cke)
df_2stream
tot_runtime = read_trace.getTotalRuntime(df_2stream)
print(tot_runtime)
```
# 2 cke
```
stream_num = 2
# find when to start the stream and update the starting pos for the trace
H2D_H2D_OVLP_TH = 3.158431
df_cke_list = cke.init_trace_list(df_s1, stream_num = stream_num, h2d_ovlp_th = H2D_H2D_OVLP_TH)
df_cke_list[0]
df_cke_list[1]
```
### sort
```
df_all_api = cke.init_sort_api_with_extra_cols(df_cke_list)
df_all_api
```
### start algo
```
count = 1
# break_count = 7
while not cke.AllDone(df_all_api):
# pick two api to learn
df_all_api, r1, r2 = cke.PickTwo(df_all_api)
if r1 == None and r2 == None: # go directly updating the last wake api
df_all_api = cke.UpdateStream_lastapi(df_all_api)
else:
df_all_api = cke.StartNext_byType(df_all_api, [r1, r2])
whichType = cke.CheckType(df_all_api, r1, r2) # check whether the same api
# print whichType
if whichType == None:
df_all_api = cke.Predict_noConflict(df_all_api, r1, r2)
elif whichType in ['h2d', 'd2h']: # data transfer in the same direction
df_all_api = cke.Predict_transferOvlp(df_all_api, r1, r2, ways = 2.0)
else: # concurrent kernel: todo
pass
# if count == break_count:
# break
rangeT = cke.Get_pred_range(df_all_api)
# print rangeT
# if count == break_count:
# break
extra_conc = cke.Check_cc_by_time(df_all_api, rangeT) # check whether there is conc during the rangeT
if extra_conc == 0:
if whichType in ['h2d', 'd2h']:
df_all_api = cke.Update_wake_transferOvlp(df_all_api, rangeT, ways = 2.0)
elif whichType == 'kern':
pass
else: # no overlapping
df_all_api = cke.Update_wake_noConflict(df_all_api, rangeT)
# check if any api is done, and update the timing for the other apis in that stream
df_all_api = cke.UpdateStreamTime(df_all_api)
else: # todo : when there is additional overlapping
pass
# if count == break_count:
# break
# next call
count = count + 1
df_all_api
df_all_api.loc[df_all_api.stream_id == 0]
df_all_api.loc[df_all_api.stream_id == 1]
#
# run above
#
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
from pathlib import Path
import matplotlib.pyplot as plt
import plotly.graph_objects as go
from tqdm import tqdm
from scipy.spatial.distance import cdist
from sklearn.metrics import roc_curve, roc_auc_score
timings = Path('timings/')
raw_data = Path('surface_data/raw/protein_surfaces/01-benchmark_surfaces_npy')
experiment_names = ['TangentConv_site_1layer_5A_epoch49', 'TangentConv_site_1layer_9A_epoch49', 'TangentConv_site_1layer_15A_epoch49','TangentConv_site_3layer_5A_epoch49','TangentConv_site_3layer_9A_epoch46', 'TangentConv_site_3layer_15A_epoch17','PointNet_site_1layer_5A_epoch30','PointNet_site_1layer_9A_epoch30','PointNet_site_3layer_5A_epoch46', 'PointNet_site_3layer_9A_epoch37', 'DGCNN_site_1layer_k40_epoch46','DGCNN_site_1layer_k100_epoch32','DGCNN_site_3layer_k40_epoch33']
experiment_names_short = ['Ours 1L 5A', 'Ours 1L 9A', 'Ours 1L 15A','Ours 3L 5A','Ours 3L 9A', 'Ours 3L 15A','PN++ 1L 5A','PN++ 1L 9A','PN++ 3L 5A', 'PN++ 3L 9A', 'DGCNN 1L K40','DGCNN 1L K100','DGCNN 3L K40']
performance = []
times = []
time_errors = []
memory = []
memory_errors = []
for experiment_name in experiment_names:
predpoints_preds = np.load(timings/f'{experiment_name}_predpoints_preds.npy')
predpoints_labels = np.load(timings/f'{experiment_name}_predpoints_labels.npy')
rocauc = roc_auc_score(predpoints_labels,predpoints_preds)
conv_times = np.load(timings/f'{experiment_name}_convtime.npy')
memoryusage = np.load(timings/f'{experiment_name}_memoryusage.npy')
performance.append(rocauc)
times.append(conv_times.mean())
time_errors.append(conv_times.std())
memory.append(memoryusage.mean())
memory_errors.append(memoryusage.std())
performance += [0.849]
times += [0.16402676922934395]
time_errors += [0.04377787154914341]
memory += [1491945956.9371428]
memory_errors += [125881554.73354617]
experiment_names_short += ['MaSIF 3L 9A']
experiment_names_short = [f'{i+1}) {experiment_names_short[i]}' for i in range(len(experiment_names_short))]
times = np.array(times)*1e3
time_errors = np.array(time_errors)*1e3
memory = np.array(memory)*1e-6
memory_errors = np.array(memory_errors)*1e-6
colors = [f'hsl(240,100,{25+i*10.83})' for i in range(6)]+[f'hsl(116,100,{25+i*16.25})' for i in range(4)] + [f'hsl(300,100,{25+i*21.66})' for i in range(3)] + [f'hsl(0,100,50)']
fig = go.Figure()
for i in range(len(times)):
fig.add_trace(go.Scatter(
x=[times[i]],
y=[performance[i]],
mode='markers',
name=experiment_names_short[i],
marker = dict(color=colors[i]),
error_x=dict(
type='data',
symmetric=True,
array=[time_errors[i]])))
fig.update_layout(
xaxis_title='Forward pass time per protein [ms] (log)',
yaxis_title='Site identification ROC-AUC',
legend_title="Models",
)
fig.update_xaxes(type="log")
fig.update_layout(
xaxis = dict(
tickvals = [1e1,2e1,4e1,6e1,8e1,1e2,2e2,4e2,6e2],
#tickvals = [10, 20, 50, 100, 200, 500],
)
)
fig.show()
fig.write_image('figures/time_vs_perf.pdf')
fig = go.Figure()
for i in range(len(times)):
fig.add_trace(go.Scatter(
x=[memory[i]],
y=[performance[i]],
mode='markers',
marker = dict(color=colors[i]),
name=experiment_names_short[i],
error_x=dict(
type='data',
symmetric=True,
array=[memory_errors[i]])))
fig.update_layout(
xaxis_title='Memory usage per protein [MB] (log)',
yaxis_title='Site identification ROC-AUC',
legend_title="Models",
)
fig.update_xaxes(type="log")
fig.update_layout(
xaxis = dict(
tickvals = [100,200,400,600,800,1000,2000,4000],
)
)
fig.show()
fig.write_image('figures/mem_vs_perf.pdf')
```
|
github_jupyter
|
## Student Activity on Advanced Data Structure
In this activity, we will complete the following tasks:
- Look up the definition of permutations, and dropwhile from [itertools documentation](https://docs.python.org/3/library/itertools.html) in Python
- Using permutations generate all possible three digit numbers that can be generated using 0, 1, and 2
- Loop over this iterator and print them and also use `type` and `assert` to make sure that the return types are tuples
- Use a single line of code involving `dropwhile` and a lambda expression to convert all the tuples to lists while dropping any leading zeros (example - `(0, 1, 2)` becomes `[1, 2]`)
- Write a function which takes a list like the above and returns the actual number contained in it. Example - if you pass `[1, 2]` to the function, it will return `12`. Make sure it is indeed a number and not just a concatenated string. (Hint - You will need to treat the incoming list as a stack in the function to achieve this)
### Task 1
Look up the definition of `permutations` and `dropwhile` from itertools.
There is a way to look up the definition of a function inside Jupyter itself: just type the function name followed by a `?` and press `Shift+Enter`. We encourage you to try this way as well.
```
### Write your code below this comment.
```
### Task 2
Write an expression to generate all the possible three digit numbers using 0, 1, and 2
```
### Write your code below this comment
```
### Task 3
Loop over the iterator expression you generated before. Use print to print each element returned by the iterator. Use `assert` and `type` to make sure that the elements are of type tuple
```
### Write your code below this comment
```
### Task 4
Write the loop again, but this time use `dropwhile` with a lambda expression to drop any leading zeros from the tuples. As an example, `(0, 1, 2)` will become `[1, 2]`. Also cast the output of `dropwhile` to a list.
_Extra task can be to check the actual type that dropwhile returns without the casting asked above_
```
### Write your code below this comment
```
### Task 5
Write all the logic you had written above, but this time put it in a separate function to which you will pass the list generated from `dropwhile`; the function will return the whole number contained in the list. As an example, if you pass `[1, 2]` to the function, it will return `12`. Make sure that the return type is indeed a number and not a string. Although this task can be achieved using other tricks, we require that you treat the incoming list as a stack in the function and generate the number there.
```
### Write your code below this comment
```
|
github_jupyter
|
# Using custom containers with Vertex AI Training
**Learning Objectives:**
1. Learn how to create a train and a validation split with BigQuery
1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI
1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters
1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it
In this lab, you develop a training application, package it as a Docker image, and run it on **Vertex AI Training**. The application trains a multi-class classification model that **predicts the type of forest cover from cartographic data**. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository.
The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
```
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
```
## Configure environment settings
Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment.
- `REGION` - the compute region for Vertex AI Training and Prediction
- `ARTIFACT_STORE` - a GCS bucket created in the same region
```
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
```
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you have already a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
```
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
```
## Importing the dataset into BigQuery
```
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
```
## Explore the Covertype dataset
```
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
```
## Create training and validation splits
Use BigQuery to sample the training and validation splits and save them to GCS.
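The splits below work by hashing each row and bucketing the hash modulo 10, so the assignment is deterministic and reproducible. The idea can be sketched in plain Python (a sketch only; MD5 is used here, so the bucket values will differ from BigQuery's `FARM_FINGERPRINT`):

```python
import hashlib
import json

def hash_bucket(row, num_buckets=10):
    """Deterministically map a row (a dict) to one of num_buckets buckets."""
    payload = json.dumps(row, sort_keys=True).encode()
    return int(hashlib.md5(payload).hexdigest(), 16) % num_buckets

# Mirroring the queries below: buckets 1-4 -> training, bucket 8 -> validation
row = {"Elevation": 2841, "Aspect": 45, "Slope": 0}
print(hash_bucket(row))
```

Because the bucket depends only on the row's contents, rerunning the split never moves a row between training and validation.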
### Create a training split
```
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
```
### Create a validation split
## Exercise
```
# TODO: You code to create the BQ table validation split
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
# TODO: Your code to export the validation table to GCS
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
```
## Develop a training application
### Configure the `sklearn` training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
```
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
```
### Convert all numeric features to `float64`
To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
```
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
```
### Run the pipeline locally.
```
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
```
### Calculate the trained model's accuracy.
```
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
```
### Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
```
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
```
### Write the tuning script.
Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
```
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
```
### Package the script into a docker image.
Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container.
Make sure to update the URI for the base image so that it points to your project's **Container Registry**.
### Exercise
Complete the Dockerfile below so that it copies the 'train.py' file into the container
at `/app` and runs it when the container is started.
```
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TODO
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
```
### Build the docker image.
You use **Cloud Build** to build the image and push it to your project's **Container Registry**. Because you use the remote cloud service to build the image, you don't need a local installation of Docker.
```
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --async --tag $IMAGE_URI $TRAINING_APP_FOLDER
```
## Submit a Vertex AI hyperparameter tuning job
### Create the hyperparameter configuration file.
Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:
- Max iterations
- Alpha
The file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.
```
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
```
### Exercise
Complete the `config.yaml` file generated below so that the hyperparameter
tuning engine tries the following parameter values:
* `max_iter`: the two values 10 and 20
* `alpha`: a linear range of values between 1.0e-4 and 1.0e-1
Also complete the `gcloud` command to start the hyperparameter tuning job with the max trial count and
the max number of parallel trials both set to 5.
```
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
# TODO
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
# TODO
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
```
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs".
### Retrieve HP-tuning results.
After the job completes, you can review the results using the GCP Console or programmatically using the following functions (note that this code assumes that the metric optimized by the hyperparameter tuning engine is maximized):
## Exercise
Complete the body of the function below to retrieve the best trial from the job with display name `JOB_NAME`:
```
# TODO
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
match = [job for job in jobs if job.display_name == job_name]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
```
You'll need to wait for the hyperparameter tuning job to complete before you can retrieve the best trial by running the cell below.
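One way to block until the job is done is a small generic polling loop (a sketch only; the `check_done` callable is whatever readiness test fits your setup, for example a hypothetical check that queries the tuning job's state via the `aiplatform` client):

```python
import time

def wait_until(check_done, poll_seconds=30, timeout_seconds=3600):
    """Call check_done() every poll_seconds until it returns True
    or timeout_seconds have elapsed; return whether it succeeded."""
    waited = 0
    while waited <= timeout_seconds:
        if check_done():
            return True
        time.sleep(poll_seconds)
        waited += poll_seconds
    return False

# Trivial condition shown here; in the lab you would pass a real check.
print(wait_until(lambda: True, poll_seconds=1))
```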
```
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
```
## Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
### Configure and run the training job
```
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
```
### Examine the training output
The training script saved the trained model as `model.pkl` in the `JOB_DIR` folder on GCS.
**Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
```
!gsutil ls $JOB_DIR
```
## Deploy the model to Vertex AI Prediction
```
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
```
### Uploading the trained model
## Exercise
Upload the trained model using `aiplatform.Model.upload`:
```
JOB_DIR
# TODO
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
```
### Deploying the uploaded model
## Exercise
Deploy the model using `uploaded_model`:
```
# TODO
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
```
### Serve predictions
#### Prepare the input file with JSON-formatted instances.
## Exercise
Query the deployed model using `endpoint`:
```
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
# TODO
endpoint.predict([instance])
```
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
github_jupyter
|
---
**Universidad de Costa Rica** | Escuela de Ingeniería Eléctrica
*IE0405 - Modelos Probabilísticos de Señales y Sistemas*
### `PyX` - A series of Python tutorials for data analysis
# `Py5` - *Curve fitting of data*
> The models that describe a phenomenon, and their parameters, can be obtained from a data sample. Given the large number of probability models available, it is often necessary to compare the goodness of fit across many of them.
*Fabián Abarca Calderón* \
*Jonathan Rojas Sibaja*
---
## Model fitting
Model fitting is widely used to obtain a mathematical model that characterizes the behavior of a given system based on the experimental data collected. The model should also predict other experimental measurements obtained when the experiment is repeated.
### Maximum likelihood estimation (MLE)
Maximum likelihood estimation (**MLE**) selects the parameters of a model as the values that maximize the likelihood of the observed data.
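As a minimal sketch of MLE in practice, `scipy.stats` distributions expose a `fit` method that returns maximum likelihood estimates of their parameters (the sample size and seed here are our own choices):

```python
from scipy import stats

# Draw a sample from a normal distribution with known parameters
sample = stats.norm.rvs(loc=5, scale=2, size=5000, random_state=42)

# norm.fit returns the MLE of loc (the mean) and scale (the standard deviation)
mu_hat, sigma_hat = stats.norm.fit(sample)
print(mu_hat, sigma_hat)  # both close to the true values 5 and 2
```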
---
## 5.1 - With the `numpy` module
To begin, the `polyfit()` function from the `numpy` library fits experimental data to polynomials of any order. For a first-degree polynomial, it returns the parameters of a line of the form:
$$
f(x) = mx + b
$$
An example using this method follows:
```
import numpy as np
import matplotlib.pyplot as plt
# Experimental data
x = np.array([0., 1., 2., 3., 4.])
y = np.array([10.2, 12.1, 15.5, 18.3, 20.6])
# Fit a line (degree-1 polynomial)
p = np.polyfit(x, y, 1)
# With the fitted parameters known, they can be
# used to plot the fitted line.
y_ajuste = p[0]*x + p[1]
# Plot the experimental data
p_datos, = plt.plot(x, y, 'b.')
# Plot the fitted line
p_ajuste, = plt.plot(x, y_ajuste, 'r-')
plt.title('Least-squares linear fit')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.legend(('Experimental data', 'Linear fit'), loc="upper left")
```
For other kinds of regression, the degree of the polynomial must be increased. For example, a polynomial regression is shown below:
```
import numpy
import matplotlib.pyplot as plt
#First, create the vectors that define the data points
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
#This method creates a polynomial model
mimodelo = numpy.poly1d(numpy.polyfit(x, y, 3))
#This determines how the line is drawn, starting at 1
#and ending at 22
milinea = numpy.linspace(1,22,100)
#Finally, plot the data and the polynomial
#regression curve
plt.scatter(x,y)
plt.plot(milinea, mimodelo(milinea))
plt.show()
```
Once the best-fit curve has been drawn, its value at a given point can be obtained by evaluating the curve there. For example, to obtain the value at x = 17:
```
valor = mimodelo(17)
print(valor)
```
---
## 5.2 - With the `stats` module
Here several commands can be used to build different distributions from given data. For example, starting from the histogram data of a PDF, the curve of that distribution can be built with the `scipy.stats.rv_histogram` command, and the CDF of the data can be plotted as well:
```
import scipy.stats
import numpy as np
import matplotlib.pyplot as plt
data = scipy.stats.norm.rvs(size=100000, loc=0, scale=1.5, random_state=123)
hist = np.histogram(data, bins=100)
hist_dist = scipy.stats.rv_histogram(hist)
X = np.linspace(-5.0, 5.0, 100)
plt.title("Random data")
plt.hist(data, density=True, bins=100)
plt.show()
X = np.linspace(-5.0, 5.0, 100)
plt.title("PDF of the data")
plt.plot(X, hist_dist.pdf(X), label='PDF')
plt.show()
X = np.linspace(-5.0, 5.0, 100)
plt.title("CDF of the data")
plt.plot(X, hist_dist.cdf(X), label='CDF')
plt.show()
```
Another package provided by the `scipy` library is `optimize`, which offers curve-fitting algorithms through the `curve_fit` function; it fits curves of nonlinear systems using least squares. Below is an example of using it to find the best-fit curve for a series of experimental data:
```
import numpy
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
def _polynomial(x, *p):
"""Polynomial fit of arbitrary degree"""
poly = 0.
for i, n in enumerate(p):
poly += n * x**i
return poly
# Define the experimental data:
x = numpy.linspace(0., numpy.pi)
y = numpy.cos(x) + 0.05 * numpy.random.normal(size=len(x))
# p0 is the initial guess for the fit coefficients; its length
# sets the order of the polynomial to fit. Here all the initial
# guesses are set to 1.; you may have a better idea of
# what values to expect based on your data.
p0 = numpy.ones(6,)
coeff, var_matrix = curve_fit(_polynomial, x, y, p0=p0)
yfit = [_polynomial(xx, *tuple(coeff)) for xx in x]
plt.plot(x, y, label='Test data')
plt.plot(x, yfit, label='fitted data')
plt.show()
```
---
## 5.3 - With the `fitter` library
If needed, the `fitter` package provides a simple class which identifies the distribution that data samples were generated from. It uses 80 SciPy distributions and allows plotting the results to verify which distribution best fits the data. The following example generates a sample of 1000 points from a gamma distribution and then uses `fitter`, which checks the SciPy distributions and displays a summary of the ones that best match our data, based on the sum of squared errors. The summary's results can be verified visually in the plots it draws itself:
```
from scipy import stats
from fitter import Fitter
# Create the data
data = stats.gamma.rvs(2, loc=1.5, scale=2, size=1000)
# Define which distributions we want it to evaluate
f = Fitter(data, distributions=['gamma', 'rayleigh', 'uniform'])
f.fit()
f.summary()
```
Finally, an example illustrating the combination of the `scipy.stats` and `fitter` packages uses the `HistFit` module, which plots both the data and the best-fit curves obtained by adding noise to the measurement and computing the fit 10 times (`Nfit=10`). In this case the data series follows a normal distribution (created with `scipy.stats`), 10 best-fit curves were obtained under different noise cases (with `error_rate=0.01`), and estimates of the mean, standard deviation, and amplitude of the best-fit curves were also obtained.
```
from fitter import HistFit
from pylab import hist
import scipy.stats
#Create the normally distributed sample
data = [scipy.stats.norm.rvs(2,3.4) for x in range(10000)]
#Plot the values as a histogram
Y, X, _ = hist(data, bins=30)
#Create the best-fit curves
hf = HistFit(X=X, Y=Y)
#Apply an error rate to simulate noise and compute 10
#best-fit curves
hf.fit(error_rate=0.01, Nfit=10)
#Get the estimated mean, standard deviation, and
#amplitude of the best-fit curves
print(hf.mu, hf.sigma, hf.amplitude)
```
---
### More information
* [Web page](https://www.google.com/)
* A book or something
* [w3schools](https://www.w3schools.com/python/) tutorial
---
**Universidad de Costa Rica** | Facultad de Ingeniería | Escuela de Ingeniería Eléctrica
© 2021
---
|
github_jupyter
|
**Note:** This task is not quite ready, as we don't have an open-source route for simulating geometry that requires imprinting and merging. However, this simulation can be carried out using Trelis.
# Heating Mesh Tally on CAD geometry made from Components
This constructs a reactor geometry from 3 Component objects, each made from points.
The Components made include a breeder blanket, a PF coil, and a central column shield.
2D and 3D mesh tallies are then simulated to show nuclear heating, flux, and tritium production across the model.
This section makes the 3D geometry for the entire reactor from input parameters.
```
import paramak
my_reactor = paramak.BallReactor(
inner_bore_radial_thickness=50,
inboard_tf_leg_radial_thickness=55,
center_column_shield_radial_thickness=50,
divertor_radial_thickness=50,
inner_plasma_gap_radial_thickness=50,
plasma_radial_thickness=100,
outer_plasma_gap_radial_thickness=50,
firstwall_radial_thickness=1,
blanket_radial_thickness=100,
blanket_rear_wall_radial_thickness=10,
elongation=2,
triangularity=0.55,
number_of_tf_coils=16,
rotation_angle=180,
)
# TF and PF coils can be added with additional arguments.
# see the documentation for more details
# https://paramak.readthedocs.io/en/main/paramak.parametric_reactors.html
my_reactor.show()
```
The next section exports the 3D geometry as stp files and creates download links for them.
```
my_reactor.export_stp()
from IPython.display import FileLink
display(FileLink('blanket.stp'))
display(FileLink('pf_coil.stp'))
display(FileLink('center_column.stp'))
display(FileLink('Graveyard.stp'))
```
The next section defines the materials. This can be done using openmc.Materials or in this case strings that look up materials from the neutronics material maker.
```
from neutronics_material_maker import Material
mat1 = Material.from_library(name='Li4SiO4')
mat2 = Material.from_library(name='copper')
mat3 = Material.from_library(name='WC')
```
This next step makes a simple point source.
```
import openmc
# initialises a new source object
source = openmc.Source()
# sets the location of the source to x=100 y=0 z=0
source.space = openmc.stats.Point((100, 0, 0))
# sets the direction to isotropic
source.angle = openmc.stats.Isotropic()
# sets the energy distribution to 100% 14MeV neutrons
source.energy = openmc.stats.Discrete([14e6], [1])
```
This next section combines the geometry with the materials and specifies a few mesh tallies
```
import paramak_neutronics
neutronics_model = paramak_neutronics.NeutronicsModel(
geometry=my_reactor,
cell_tallies=['heating', 'flux', 'TBR', 'spectra'],
mesh_tally_2d=['heating', 'flux', '(n,Xt)'],
mesh_tally_3d=['heating', 'flux', '(n,Xt)'],
source=source,
simulation_batches=2,
simulation_particles_per_batch=10000,
materials={
'blanket_material': mat1,
'pf_coil_material': mat2,
'center_column_material': mat3,
}
)
# You will need to have Trelis installed to run this command
neutronics_model.simulate()
```
The next section produces download links for:
- vtk files that contain the 3D mesh results (open with Paraview)
- png images that show the results of the 2D mesh tally
```
from IPython.display import FileLink
display(FileLink('heating_on_3D_mesh.vtk'))
display(FileLink('flux_on_3D_mesh.vtk'))
display(FileLink('tritium_production_on_3D_mesh.vtk'))
display(FileLink('flux_on_2D_mesh_xy.png'))
display(FileLink('flux_on_2D_mesh_xz.png'))
display(FileLink('flux_on_2D_mesh_yz.png'))
display(FileLink('heating_on_2D_mesh_xy.png'))
display(FileLink('heating_on_2D_mesh_xz.png'))
display(FileLink('heating_on_2D_mesh_yz.png'))
display(FileLink('tritium_production_on_2D_mesh_xy.png'))
display(FileLink('tritium_production_on_2D_mesh_xz.png'))
display(FileLink('tritium_production_on_2D_mesh_yz.png'))
```
|
github_jupyter
|
# MDT Validation Notebook
Validated on a Synthea + MDT population vs. MEPS for pediatric asthma
```
import pandas as pd
import datetime as dt
import numpy as np
from scipy.stats import chi2_contingency
```
# Grab medication RXCUI of interest
Grabs the MEPS product RXCUI lists for filtering Synthea down to the medications of interest.
The path to this will be MDT module - log - rxcui_ndc_df_output.csv
```
rxcui_df = pd.read_csv(r"") # MDT produced medication list
rxcui_df = rxcui_df[['medication_product_name','medication_product_rxcui']].drop_duplicates()
rxcui_df['medication_product_rxcui'] = rxcui_df['medication_product_rxcui'].astype(int)
```
# Read Synthea Population
Reads Synthea Medication file and filters on medications of interest
The path for this will be synthea -> output -> csv -> medications.csv
```
col_list = ['START','PATIENT','CODE']
chunks = []
for x in pd.read_csv(r"", usecols=col_list, chunksize=100000):
    x['CODE'] = x['CODE'].astype(int)
    chunks.append(x.merge(rxcui_df, how="inner", left_on='CODE', right_on='medication_product_rxcui'))
# DataFrame.append was removed in pandas 2.0, so concatenate the filtered chunks instead
syn_med_df = pd.concat(chunks, ignore_index=True)
```
# Synthea Patient Population Filtering
Reads and merges Synthea patient data to allow for patient management.
The path for this will be synthea -> output -> csv -> patients.csv
This step can be skipped if not filtering by patient. For the pediatric use case, we limited the population to patients who received medications when they were < 6 years of age.
```
syn_pat_df = pd.read_csv(r"")
syn_pat_df = syn_pat_df.merge(syn_med_df, how='inner', left_on='Id', right_on='PATIENT')
syn_pat_df['START'] = pd.to_datetime(syn_pat_df['START'])
syn_pat_df['BIRTHDATE'] = pd.to_datetime(syn_pat_df['BIRTHDATE'])
syn_pat_df['age_in_days'] = (syn_pat_df['START'] - syn_pat_df['BIRTHDATE']).dt.days
syn_med_df = syn_pat_df[syn_pat_df['age_in_days'] < 2191]  # 2191 days ~ 6 years
```
# Synthea distributions
Gets total patient counts and medication distributions from Synthea population
```
syn_med_df = syn_med_df.groupby(['medication_product_name']).agg(patient_count=('CODE','count')).reset_index()
total_patients = syn_med_df['patient_count'].sum()
syn_med_df['percent'] = syn_med_df['patient_count']/total_patients
syn_med_df
```
# MEPS Expected
Generates the expected MEPS patient counts for the chi-squared goodness-of-fit test.
The path to the file will be in your MDT module - log - validation_df.csv
```
meps_df = pd.read_csv(r"")
meps_df = meps_df[meps_df['age'] == '0-5'][['medication_product_name','validation_percent_product_patients']]
meps_df['patient_count'] = meps_df['validation_percent_product_patients'] * total_patients
meps_df['patient_count'] = meps_df['patient_count'].round(0)
meps_df
```
# Run Chi-Squared
Runs a chi-squared test comparing the two populations.
Take the patient-count values from syn_med_df and meps_df for this step.
The numbers used here are for the pediatric asthma use case of Synthea+MDT vs. MEPS.
```
obs = np.array([[203, 216],
[977, 979],
[513, 489],
[1819, 1836],
[1, 0],
[2378, 2332],
[1070, 1093]])
# chi2_contingency returns (statistic, p-value, degrees of freedom, expected frequencies)
chi2, p, dof, expected = chi2_contingency(obs)
print(f"""X2 = {chi2}
p-value = {p}
degrees of freedom = {dof}
expected frequencies = {expected}""")
```
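Rather than hand-typing the observed counts, the matrix can be assembled from the two dataframes. A minimal sketch (`build_observed` is a hypothetical helper; it assumes both frames carry `medication_product_name` and `patient_count` columns, as produced above):

```python
import numpy as np
import pandas as pd

def build_observed(syn_df, meps_df):
    """Align the two populations on medication name and return a 2-column
    observed-count matrix suitable for scipy.stats.chi2_contingency."""
    merged = syn_df.merge(meps_df, on='medication_product_name',
                          suffixes=('_syn', '_meps'))
    return merged[['patient_count_syn', 'patient_count_meps']].to_numpy()

# tiny illustrative frames (counts taken from the array above)
syn = pd.DataFrame({'medication_product_name': ['albuterol', 'fluticasone'],
                    'patient_count': [203, 977]})
meps = pd.DataFrame({'medication_product_name': ['albuterol', 'fluticasone'],
                     'patient_count': [216, 979]})
obs = build_observed(syn, meps)
print(obs.shape)  # (2, 2)
```

This keeps the test input in sync with the dataframes instead of a hand-copied literal.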
|
github_jupyter
|
---

---
# Operadores
Os operadores são usados para realizar operações sobre valores e variáveis. Os operadores manipulam e retornam valores de acordo com sua funcionalidade. Eles podem ser representados por palavras reservadas ou caracteres especiais (símbolos).
- [Operadores Aritiméticos](#Operadores-Aritméticos)
- [Operadores de Comparação](#Operadores-de-Comparação)
- [Operadores de Atribuição](#Operadores-de-Atribuição)
- [Operadores Lógicos](#Operadores-Lógicos)
## Operadores Aritméticos
Estes operadores são representados por símbolos e realizam calculos aritméticos como adição, subtração, multiplicação, etc. Segue a lista de operadores aritméticos:
| Operador | Símbolo |
|--|--|
| Adição | + |
| Subtração | - |
| Multiplicação | * |
| Divisão | / |
| Divisão inteira | // |
| Módulo | % |
| Exponenciação | ** |
[← Operadores](#Operadores)
#### Exemplos
```
# Variáveis
a, b = 10, 3
# Adição
print(a + b)
# Subtração
print(a - b)
# Multiplicação
print(a * b)
# Divisão
print(a / b)
# Divisão inteira
print(a // b)
# Módulo
print(a % 2)
print(a % 3)
# Exponenciação
print(a**2)
print(a**4)
print(2**4)
```
## Operadores de Comparação
Estes operadores comparam os valores em cada lado do operando e determinam a relação entre eles. Eles também são conhecidos como operadores relacionais. Este operadores também são descritos por símbolos. Segue a lista de operadores:
| Operador | Símbolo |
|--|--|
| Igualdade | == |
| Diferença | != |
| Maior que | > |
| Maior ou igual a | >= |
| Menor que | < |
| Menor ou igual a | <= |
Independente do operador utilizado, os resultados possíveis são `True` (Verdadeiro) ou `False` (Falso).
[← Operadores](#Operadores)
#### Exemples
```
# Variáveis
x, y = 5, 4
# Igualdade
print('x == y:', x == y)
# Diferença
print('x != y:', x != y)
# Maior
print('x > y: ', x > y)
print('x >= y:', x >= y)
# Menor
print('x < y: ', x < y)
print('x <= y:', x <= y)
```
## Operadores de Atribuição
Realizam calculos aritméticos e atribuições de forma simplificada. Segue a lista de operadores
| Operator | Symbol |
|--|--|
| Atribuição | = |
| Adição | += |
| Subtração | -= |
| Multiplicação | *= |
| Divisão | //= |
| Divisão Inteira | /= |
| Módulo | %= |
| Exponenciação | **= |
[← Operadores](#Operadores)
#### Exemplos
```
# Variáveis
a, b, c = 3, 2, 0
# Adição
c += a
print('C:', c)
# Multiplicação
c *= a
print('C:', c)
# Divisão
c /= a
print('C:', c)
# Módulo
c = 10
c %= a
print('C:', c)
# Exponenciação
c = 2
c **= a
print('C:', c)
# Divisão inteira
c //= a
print('C:', c)
```
## Operadores Lógicos
Utilizado para realizar calculos booleanos entre valores. Funcionam como nas tabelas-verdade. São representados por palavras reservadas.
| Operador | Palavra reservada |
| -- | -- |
| E | `and` |
| Ou | `or` |
| Negação | `not` |
[← Operadores](#Operadores)
#### Exemplos
```
# Variáveis
a, b = True, False
print('a AND b:', a and b)
print('a OR b:', a or b)
print('NOT a:', not a)
print('NOT (a AND b):', not(a and b))
```
## Precedência de Operadores
| Ordem | Operador |
| -- |--|
| 1º | ** |
| 2º | *, /, //, % |
| 3º | +, - |
| 4º | <=, <, >, >= |
| 5º | =, %=, /=, //=, -=, +=, *=, **= |
| 6º | is, is not |
| 7º | in, not in |
| 8º | not, or, and |
[← Operadores](#Operadores)
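The precedence rules above can be checked interactively; a short sketch:

```python
# exponentiation binds tighter than multiplication, which binds tighter than addition
print(2 + 3 * 2 ** 2)            # 2 + (3 * 4) = 14
# comparisons are evaluated before the logical operators
print(1 + 1 == 2 and not 0 > 1)  # True
# parentheses override the default precedence
print((2 + 3) * 2 ** 2)          # 5 * 4 = 20
```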
## Exercises
Write a program that reads a number, adds 2 to it, and prints double the new value.
Write a program that reads two numbers and shows whether the first number is greater than the second. The output must be a Boolean.
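One possible solution sketch, written as functions so the logic is testable (the function names are ours; reading values with `input()` and converting with `int()` works the same way):

```python
def double_plus_two(n):
    """Add 2 to n, then return double the result."""
    return (n + 2) * 2

def first_is_greater(a, b):
    """Return True when the first number is greater than the second."""
    return a > b

print(double_plus_two(5))      # 14
print(first_is_greater(3, 2))  # True
```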
|
github_jupyter
|
# Face Generation
In this project, we will define and train a DCGAN on a dataset of faces. Our goal is to get a generator network to generate *new* images of faces that look as realistic as possible!
### Get the Data
You'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.
This dataset is more complex than digit datasets such as MNIST or SVHN.
### Pre-processed Data
Each CelebA image is a 64x64x3 NumPy array. Some sample data is shown below.
```
data_dir = 'processed_celeba_small/'
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
```
## Visualize the CelebA Data
```
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
transform= transforms.Compose([transforms.Resize(image_size),
transforms.ToTensor()])
data = datasets.ImageFolder(root = data_dir,transform = transform )
data_loader = DataLoader(data , batch_size = batch_size, shuffle = True)
return data_loader
```
## Create a DataLoader
#### Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.
Call the above function and create a dataloader to view images.
* You can decide on any reasonable `batch_size` parameter
* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
```
# Define function hyperparameters
batch_size = 256  # try other batch sizes later
img_size = 32
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
```
Next, you can view some images! You should see square images of somewhat-centered faces.
Note: you'll need to convert the Tensor images to a NumPy type and transpose the dimensions to display an image correctly; suggested `imshow` code is below, but it may not be perfect.
```
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = next(dataiter)  # `_` since there are no labels; `.next()` is removed in newer Python
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size//2, idx+1, xticks=[], yticks=[])  # integer division: subplot counts must be ints
imshow(images[idx])
```
#### Pre-process your image data and scale it to a pixel range of -1 to 1
You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
```
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
min_val,max_val = feature_range
x = x*(max_val - min_val) + min_val
return x
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
```
---
# Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
## Discriminator
Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful.
#### Complete the Discriminator class
* The inputs to the discriminator are 32x32x3 tensor images
* The output should be a single value that will indicate whether a given image is real or fake
```
import torch.nn as nn
import torch.nn.functional as F
##Helper function for Conv and Batch Layer of Discriminator##
def conv(in_channels,out_channels,kernel_size=4,stride=2,padding=1,batch_norm = True):
layers = []
conv_layer = nn.Conv2d(in_channels, out_channels,kernel_size,stride,padding,bias = False)
layers.append(conv_layer)
if batch_norm:
batch_layer = nn.BatchNorm2d(num_features = out_channels)
layers.append(batch_layer)
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
## class variables
self.conv_dim = conv_dim
## class layers (built with the conv helper above; batch norm is skipped on the first layer)
self.layer1 = conv(in_channels =3, out_channels=self.conv_dim,kernel_size=4,stride=2,padding=1,batch_norm=False)
self.layer2 = conv(in_channels = self.conv_dim, out_channels = self.conv_dim*2,kernel_size=4,stride=2,padding=1)
self.layer3 = conv(in_channels =self.conv_dim*2,out_channels=self.conv_dim*4, kernel_size = 4, stride =2,padding =1)
self.layer4 = conv(in_channels =self.conv_dim*4,out_channels=self.conv_dim*8, kernel_size = 4, stride =2,padding =1)
self.FC1 = nn.Linear(2*2*self.conv_dim*8,1)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
x = self.layer1(x)
x = F.leaky_relu(x)
##
x = self.layer2(x)
x = F.leaky_relu(x)
##
x = self.layer3(x)
x = F.leaky_relu(x)
##
x = self.layer4(x)
x = F.leaky_relu(x)
x = x.reshape(-1,2*2*self.conv_dim*8)
x = self.FC1(x)
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
```
## Generator
The generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs.
#### Complete the Generator class
* The inputs to the generator are vectors of some length `z_size`
* The output should be an image of shape `32x32x3`
```
def deconv(in_channels,out_channels,kernel_size, stride,padding,batch_norm):
layers = []
conv_trans_layer = nn.ConvTranspose2d(in_channels,out_channels,kernel_size, stride,padding,bias=False)
layers.append(conv_trans_layer)
if batch_norm:
batch_norm_layer = nn.BatchNorm2d(num_features= out_channels)
layers.append(batch_norm_layer)
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
self.z_size = z_size
self.conv_dim = conv_dim
self.FC1 = nn.Linear(z_size, conv_dim*8*2*2)
self.layer1 = deconv(in_channels= conv_dim*8,out_channels= conv_dim*4,kernel_size=4, stride=2,padding=1,batch_norm=True)
self.layer2 = deconv(in_channels= conv_dim*4,out_channels= conv_dim*2,kernel_size=4, stride=2,padding=1,batch_norm=True)
self.layer3 = deconv(in_channels= conv_dim*2,out_channels= conv_dim,kernel_size=4, stride=2,padding=1,batch_norm=True)
self.layer4 = deconv(in_channels= conv_dim,out_channels= 3,kernel_size=4, stride=2,padding=1,batch_norm=False)
self.tan = nn.Tanh()
##self.dropout = nn.Dropout(0.3)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
batch_size = x.shape[0]
x = self.FC1(x) ##Here I am generating enough dimension to feed to the con2d Layers
x = x.reshape(-1,self.conv_dim*8,2,2) ## Here I am reshaping into accurate dimension
assert (x.shape[0] == batch_size)
x = self.layer1(x)
x = F.leaky_relu(x)
x = self.layer2(x)
x = F.leaky_relu(x)
x = self.layer3(x)
x = F.leaky_relu(x)
x = self.layer4(x)
x = self.tan(x)
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
```
## Initialize the weights of your networks
To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:
> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.
So, your next task will be to define a weight initialization function that does just this!
#### Complete the weight initialization function
* This should initialize only **convolutional** and **linear** layers
* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.
* The bias terms, if they exist, may be left alone or set to 0.
```
from torch.nn import init
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
if (classname.find('Conv') != -1 or classname.find('Linear') != -1):
init.normal_(m.weight.data, 0.0, 0.02)
```
## Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
```
#### Define model hyperparameters
```
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 32
z_size = 100
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
train_on_gpu = torch.cuda.is_available()
if train_on_gpu:
D,G = D.cuda(), G.cuda()
```
### Training on GPU
Check if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that
>* Models,
* Model inputs, and
* Loss function arguments
Are moved to GPU, where appropriate.
```
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
```
---
## Discriminator and Generator Losses
Now we need to calculate the losses for both types of adversarial networks.
### Discriminator Losses
> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
### Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*.
#### Complete real and fake loss functions
**You may choose either a cross-entropy or a least-squares error loss to complete the following `real_loss` and `fake_loss` functions.**
```
from scipy.stats import truncnorm
def get_truncated_normal(mean=0, sd=1, low=0, upp=10):
return truncnorm(
(low - mean) / sd, (upp - mean) / sd, loc=mean, scale=sd)
smooth_factor_for_real_loss = get_truncated_normal(mean=.85, sd=.05, low=.8, upp=.95)
print(smooth_factor_for_real_loss.rvs())
smooth_factor_for_fake_loss = get_truncated_normal(mean=.1, sd=0.05, low=0.0, upp=.15)
print(smooth_factor_for_fake_loss.rvs())
def real_loss(D_out,smooth =False):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
label_smooth = smooth_factor_for_real_loss.rvs()
batch_size = D_out.shape[0]
labels = torch.ones(batch_size)
if smooth:
labels = labels*label_smooth
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
##Fake loss smoothing is not used since it is not giving good result
def fake_loss(D_out,smooth = False):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
batch_size = D_out.shape[0]
label_smooth = smooth_factor_for_fake_loss.rvs()
if smooth:
labels = torch.ones(batch_size) ## Using ones and multiplying with a number in range 0.0 to 0.3
labels = labels*label_smooth
else:
labels = torch.zeros(batch_size)
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
```
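The least-squares alternative mentioned above replaces the cross-entropy with a squared distance to the target label. A minimal NumPy sketch of the idea (the actual training code would compute the same quantity on torch tensors so it stays differentiable):

```python
import numpy as np

def ls_real_loss(d_out):
    """Least-squares loss pushing discriminator outputs toward 1 (real)."""
    return np.mean((d_out - 1.0) ** 2)

def ls_fake_loss(d_out):
    """Least-squares loss pushing discriminator outputs toward 0 (fake)."""
    return np.mean(d_out ** 2)

d_out = np.array([0.9, 0.8, 0.1])
print(ls_real_loss(d_out))  # small when outputs are near 1
print(ls_fake_loss(d_out))  # small when outputs are near 0
```

Compared with BCE, the least-squares formulation (LSGAN) penalizes confidently wrong samples quadratically rather than logarithmically.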
## Optimizers
#### Define optimizers for your Discriminator (D) and Generator (G)
Define optimizers for your models with appropriate hyperparameters.
```
import torch.optim as optim
lr = 0.0002
beta1 = 0.5
beta2 = 0.999  # default value
# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
```
---
## Training
Training will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.
* You should train the discriminator by alternating on real and fake images
* Then the generator, which tries to trick the discriminator and should have an opposing loss function
#### Saving Samples
You've been given some code to print out some loss statistics and save some generated "fake" samples.
#### Complete the training function
Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
```
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
print('Cuda enabled')
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# 1. Train the discriminator on real and fake images
if train_on_gpu:
real_images = real_images.cuda()
real_output = D(real_images)
real_output_loss = real_loss(real_output,smooth=True)
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
fake_output = D(fake_images)
fake_output_loss = fake_loss(fake_output,smooth =False)
d_loss = real_output_loss + fake_output_loss
d_optimizer.zero_grad()
d_loss.backward()
##nn.utils.clip_grad_norm_(d.parameters(), clip=5)
d_optimizer.step()
# 2. Train the generator with an adversarial loss
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
fake_output_for_g = D(fake_images)
g_optimizer.zero_grad()
g_loss = real_loss(fake_output_for_g,smooth =False)
g_loss.backward()
##nn.utils.clip_grad_norm_(g.parameters(), clip=5)
g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
```
Set your number of training epochs and train your GAN!
```
# set number of epochs
n_epochs = 100 ## Best is 50 epochs and lr = .0002
print(train_on_gpu)
# call training function
losses = train(D, G, n_epochs=n_epochs)
## Saving the Generator and the Discriminator
def save_generator(model):
model_name = 'trained_generator.pt'
checkpoint = {'z_size' : model.z_size,
'conv_dim' : model.conv_dim,
'state_dict': model.state_dict()}
with open(model_name, 'wb') as f:
torch.save(checkpoint,f)
def save_discriminator(model):
model_name = 'trained_discrimiator.pt'
checkpoint = {'conv_dim' : model.conv_dim,
'state_dict': model.state_dict()}
with open(model_name, 'wb') as f:
torch.save(checkpoint,f)
save_generator(G)
save_discriminator(D)
## Function for loading the model
def load_generator():
with open('trained_generator.pt', 'rb') as f:
checkpoint = torch.load(f)
G = Generator(checkpoint['z_size'], checkpoint['conv_dim'])
G.load_state_dict(checkpoint['state_dict'])
return G
def load_discrimiator():
with open('trained_discrimiator.pt', 'rb') as f:
checkpoint = torch.load(f)
D = Discriminator(checkpoint['conv_dim'])
D.load_state_dict(checkpoint['state_dict'])
return D
#G = load_generator()
#D = load_discrimiator()
```
## Training loss
Plot the training losses for the generator and discriminator, recorded after each epoch.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
```
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-50, samples) ## 50 epochs gave me the best result
```
|
github_jupyter
|
```
#default_exp transform
#export
from local.torch_basics import *
from local.test import *
from local.notebook.showdoc import show_doc
from PIL import Image
```
# Transforms
> Definition of `Transform` and `Pipeline`
The classes here provide functionality for creating a composition of *partially reversible functions*. By "partially reversible" we mean that a transform can be `decode`d, creating a form suitable for display. This is not necessarily identical to the original form (e.g. a transform that changes a byte tensor to a float tensor does not recreate a byte tensor when decoded, since that may lose precision, and a float tensor can be displayed already).
Classes are also provided for composing transforms and mapping them over collections. `Pipeline` is a transform that composes several `Transform`s and knows how to decode them or show an encoded item.
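The encode/decode idea can be illustrated outside the library with plain Python; a minimal sketch (these class names are ours, not the library's, and ignore type dispatch entirely):

```python
class SimpleTransform:
    """A pair of functions: `encode` to process, `decode` to (partially) undo it."""
    def __init__(self, enc, dec):
        self.enc, self.dec = enc, dec
    def __call__(self, x): return self.enc(x)
    def decode(self, x):   return self.dec(x)

class SimplePipeline:
    """Compose transforms; decoding applies the inverses in reverse order."""
    def __init__(self, *tfms): self.tfms = tfms
    def __call__(self, x):
        for t in self.tfms: x = t(x)
        return x
    def decode(self, x):
        for t in reversed(self.tfms): x = t.decode(x)
        return x

# int -> float is only partially reversible: decoding rounds back to an int
to_float = SimpleTransform(float, lambda x: int(round(x)))
scale    = SimpleTransform(lambda x: x / 255, lambda x: x * 255)
pipe = SimplePipeline(to_float, scale)
print(pipe(128))               # ~0.502
print(pipe.decode(pipe(128)))  # 128
```

The round-trip recovers a displayable form of the input, which is exactly what `decode` is for; it need not reproduce the original bit-for-bit.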
## Helpers
```
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> typing.Tuple[float,float]: return x
test_eq(anno_ret(f), typing.Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p1_anno(f):
"Get the annotation of first param of `f`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
return ann[0] if ann else object
def _f(a, b): pass
test_eq(_p1_anno(_f), object)
def _f(a, b)->str: pass
test_eq(_p1_anno(_f), object)
def _f(a, b:str)->float: pass
test_eq(_p1_anno(_f), str)
def _f(a:int, b:int)->float: pass
test_eq(_p1_anno(_f), int)
def _f(a:int, b:str)->float: pass
test_eq(_p1_anno(_f), int)
test_eq(_p1_anno(attrgetter('foo')), object)
```
## Types
`TensorImage`, `TensorImageBW` and `TensorMask` are subclasses of `torch.Tensor` that know how to show themselves.
```
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, **kwargs):
fig,ax = plt.subplots(nrows,ncols,**kwargs)
if nrows*ncols==1: ax = array([ax])
return fig,ax
#export
class TensorImageBase(TensorBase):
_show_args = {'cmap':'viridis'}
def show(self, ctx=None, **kwargs):
return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
n_samples = min(self.shape[0], max_n)
rows = rows or int(np.ceil(math.sqrt(n_samples)))
cols = cols or int(np.ceil(math.sqrt(n_samples)))
figsize = (cols*3, rows*3) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
return axs.flatten()
#export
class TensorImage(TensorImageBase): pass
#export
class TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}
#export
class TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}
im = Image.open(TEST_IMAGE)
im_t = TensorImage(array(im))
test_eq(type(im_t), TensorImage)
im_t2 = TensorMask(tensor(1))
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
test_fig_exists(ax)
#hide
axes = im_t.get_ctxs(1)
test_eq(axes.shape,[1])
plt.close()
axes = im_t.get_ctxs(4)
test_eq(axes.shape,[4])
plt.close()
```
## TypeDispatch -
The following class is the basis that allows us to do type dispatch with type annotations. It contains a dictionary mapping types to functions, and ensures the proper function is called for a given object depending on its type.
```
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs,self.cache = {},{}
for f in funcs: self.add(f)
self.inst = None
def _reset(self):
self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}
self.cache = {**self.funcs}
def add(self, f):
"Add type `t` and function `f`"
self.funcs[_p1_anno(f) or object] = f
self._reset()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})
def __call__(self, x, *args, **kwargs):
f = self[type(x)]
if not f: return x
if self.inst is not None: f = types.MethodType(f, self.inst)
return f(x, *args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
if k in self.cache: return self.cache[k]
types = [f for f in self.funcs if issubclass(k,f)]
res = self.funcs[types[0]] if types else None
self.cache[k] = res
return res
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_bti(x:TensorMask): return x
def f_fti(x:TensorImage): return x
def f_bll(x:bool): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)
test_eq(t[int], f_nin)
test_eq(t[str], None)
test_eq(t[TensorImage], f_fti)
test_eq(t[float], f_num)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[int], f_nin)
test_eq(t(1), 2)
test_eq(t.returns(1), int)
t
def m_nin(self, x:numbers.Integral): return x+1
def m_bll(self, x:bool): self.foo='a'
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), 2)
test_eq(a.f(1.), 1.)
a.f(False)
test_eq(a.foo, 'a')
```
## Transform -
```
#export
_tfm_methods = 'encodes','decodes','setups'
class _TfmDict(dict):
def __setitem__(self,k,v):
if k not in _tfm_methods or not callable(v): return super().__setitem__(k,v)
if k not in self: super().__setitem__(k,TypeDispatch())
res = self[k]
res.add(v)
#export
class _TfmMeta(type):
def __new__(cls, name, bases, dict):
res = super().__new__(cls, name, bases, dict)
res.__signature__ = inspect.signature(res.__init__)
return res
def __call__(cls, *args, **kwargs):
f = args[0] if args else None
n = getattr(f,'__name__',None)
for nm in _tfm_methods:
if not hasattr(cls,nm): setattr(cls, nm, TypeDispatch())
if callable(f) and n in _tfm_methods:
getattr(cls,n).add(f)
return f
return super().__call__(*args, **kwargs)
@classmethod
def __prepare__(cls, name, bases): return _TfmDict()
#export
class Transform(metaclass=_TfmMeta):
"Delegates (`__call__`,`decode`,`setup`) to (`encodes`,`decodes`,`setups`) if `filt` matches"
filt,init_enc,as_item_force,as_item,order = None,False,None,True,0
def __init__(self, enc=None, dec=None, filt=None, as_item=False):
self.filt,self.as_item = ifnone(filt, self.filt),as_item
self.init_enc = enc or dec
if not self.init_enc: return
# Passing enc/dec, so need to remove (base) class level enc/dec
del(self.__class__.encodes,self.__class__.decodes,self.__class__.setups)
self.encodes,self.decodes,self.setups = TypeDispatch(),TypeDispatch(),TypeDispatch()
if enc:
self.encodes.add(enc)
self.order = getattr(self.encodes,'order',self.order)
if dec: self.decodes.add(dec)
@property
def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
def setup(self, items=None): return self.setups(items)
def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'
def _call(self, fn, x, filt=None, **kwargs):
if filt!=self.filt and self.filt is not None: return x
f = getattr(self, fn)
if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
return retain_type(res, x)
def _do_call(self, f, x, **kwargs):
return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
add_docs(Transform, decode="Delegate to `decodes` to undo transform", setup="Delegate to `setups` to set up transform")
show_doc(Transform)
```
A `Transform` is the main building block of the fastai data pipelines. In the most general terms a transform can be any function you want to apply to your data; however, the `Transform` class provides several mechanisms that make the process of building them easy and flexible.
### The main `Transform` features:
- **Type dispatch** - Type annotations are used to determine if a transform should be applied to the given argument. It also gives an option to provide several implementations, and the one to run is chosen based on the type. This is useful, for example, when running both independent and dependent variables through the pipeline, where some transforms only make sense for one and not the other. Another use case is designing a transform that handles different data formats. Note that if a transform takes multiple arguments, only the type of the first one is used for dispatch.
- **Handling of tuples** - When a tuple (or another collection satisfying `is_listy`) of data is passed to a transform, it will get applied to each element separately. Most commonly it will be an *(x,y)* tuple, but it can be anything, for example a list of images. You can opt out of this behavior by setting the flag `as_item=True`. For transforms that must always operate on the tuple level you can set `as_item_force=True`, which takes precedence over `as_item`; an example of that is `PointScaler`.
- **Reversibility** - A transform can be made reversible by implementing the `decodes` method. This is mainly used to turn something like a category, which is encoded as a number, back into a human-readable label for display purposes.
- **Type propagation** - Whenever possible a transform tries to return data of the same type it received. This is mainly used to maintain the semantics of things like `TensorImage`, which is a thin wrapper around PyTorch's `Tensor`. You can opt out of this behavior by adding a `->None` return type annotation.
- **Preprocessing** - The `setup` method can be used to perform any one-time calculations to be later used by the transform, for example generating a vocabulary to encode categorical data.
- **Filtering based on the dataset type** - By setting the `filt` flag you can make the transform be used only in a specific `DataSource` subset like in training, but not validation.
- **Ordering** - You can set the `order` attribute which the `Pipeline` uses when it needs to merge two lists of transforms.
- **Appending new behavior with decorators** - You can easily extend an existing `Transform` by creating `encodes` or `decodes` methods for new data types. You can put those new methods outside the original transform definition and decorate them with the class you wish them patched into. This can be used by fastai library users to add their own behavior, or by multiple modules contributing to the same transform.
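The type-dispatch idea in the first bullet can be sketched in a few lines of plain Python (a simplified stand-in for illustration, not the actual `TypeDispatch` implementation):

```python
# Minimal dispatch-on-first-argument sketch (illustration only)
class SimpleDispatch:
    def __init__(self): self.funcs = {}
    def add(self, t, f): self.funcs[t] = f
    def __call__(self, x):
        # Walk the MRO so a subclass falls back to a base class's handler
        for t in type(x).__mro__:
            if t in self.funcs: return self.funcs[t](x)
        return x  # no handler: return the input unchanged

d = SimpleDispatch()
d.add(int, lambda x: x + 1)
d.add(str, lambda s: s.upper())
print(d(1), d("ab"), d(1.5))  # 2 AB 1.5
```

Like `Transform`, the sketch falls back to the identity function when no registered type matches the argument.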
### Defining a `Transform`
There are a few ways to create a transform with different ratios of simplicity to flexibility.
- **Extending the `Transform` class** - Use inheritance to implement the methods you want.
- **Passing methods to the constructor** - Instantiate the `Transform` class and pass your functions as `enc` and `dec` arguments.
- **@Transform decorator** - Turn any function into a `Transform` by just adding a decorator - very straightforward if all you need is a single `encodes` implementation.
- **Passing a function to fastai APIs** - Same as above, but when passing a function to other transform aware classes like `Pipeline` or `TfmdDS` you don't even need a decorator. Your function will get converted to a `Transform` automatically.
```
class A(Transform): pass
@A
def encodes(self, x): return x+1
f1 = A()
test_eq(f1(1), 2)
class B(A): pass
f2 = B()
test_eq(f2(1), 2)
class A(Transform): pass
f3 = A()
test_eq_type(f3(2), 2)
test_eq_type(f3.decode(2.0), 2.0)
```
`Transform` can be used as a decorator, to turn a function into a `Transform`.
```
f = Transform(lambda o:o//2)
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
@Transform
def f(x): return x//2
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
```
You can derive from `Transform` and use `encodes` for your encoding function.
```
class A(Transform):
def encodes(self, x:TensorImage): return -x
def decodes(self, x:TensorImage): return x+1
def setups (self, x:TensorImage): x.foo = 'a'
f = A()
t = f(im_t)
test_eq(t, -im_t)
test_eq(f(1), 1)
test_eq(type(t), TensorImage)
test_eq(f.decode(t), -im_t+1)
test_eq(f.decode(1), 1)
f.setup(im_t)
test_eq(im_t.foo, 'a')
t2 = tensor(1)
f.setup(t2)
assert not hasattr(t2,'foo')
f
```
Without return annotation we get an `Int` back since that's what was passed.
```
class A(Transform): pass
@A
def encodes(self, x:Int): return x//2
@A
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), Int(1))
test_eq_type(f(2), 2)
test_eq_type(f(2.), 3.)
```
Without return annotation we don't cast if we're not a subclass of the input type.
```
class A(Transform):
def encodes(self, x:Int): return x/2
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), 1.)
test_eq_type(f(2), 2)
test_eq_type(f(Float(2.)), Float(3.))
```
With a `None` return annotation, we get back whatever type Python would normally create.
```
def func(x)->None: return x/2
f = Transform(func)
test_eq_type(f(2), 1.)
test_eq_type(f(2.), 1.)
```
Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.
```
def func(x): return Int(x+1)
def dec (x): return x-1
f = Transform(func,dec)
t = f(1)
test_eq_type(t, Int(2))
test_eq_type(f.decode(t), Int(1))
```
If the transform has `filt` then it's only applied if `filt` param matches.
```
f.filt = 1
test_eq(f(1, filt=1),2)
test_eq_type(f(1, filt=0), 1)
```
If `as_item=True` the transform takes tuples as a whole and is applied to them.
```
class A(Transform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = A(as_item=True)
t = f((1,2))
test_eq(t, (3,2))
test_eq(f.decode(t), (1,2))
f.filt = 1
test_eq(f((1,2), filt=1), (3,2))
test_eq(f((1,2), filt=0), (1,2))
class AL(Transform): pass
@AL
def encodes(self, x): return L(x_+1 for x_ in x)
@AL
def decodes(self, x): return L(x_-1 for x_ in x)
f = AL(as_item=True)
t = f([1,2])
test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
```
If `as_item=False` the transform is applied to each element of a listy input.
```
def neg_int(x:numbers.Integral): return -x
f = Transform(neg_int, as_item=False)
test_eq(f([1]), (-1,))
test_eq(f([1.]), (1.,))
test_eq(f([1.,2,3.]), (1.,-2,3.))
test_eq(f.decode([1,2]), (1,2))
#export
class InplaceTransform(Transform):
"A `Transform` that modifies in-place and just returns whatever it's passed"
def _call(self, fn, x, filt=None, **kwargs):
super()._call(fn,x,filt,**kwargs)
return x
```
## TupleTransform
```
#export
class TupleTransform(Transform):
"`Transform` that always treats `as_item` as `False`"
as_item_force=False
#export
class ItemTransform (Transform):
"`Transform` that always treats `as_item` as `True`"
as_item_force=True
def float_to_int(x:(float,int)): return Int(x)
f = TupleTransform(float_to_int)
test_eq_type(f([1.]), (Int(1),))
test_eq_type(f([1]), (Int(1),))
test_eq_type(f(['1']), ('1',))
test_eq_type(f([1,'1']), (Int(1),'1'))
test_eq(f.decode([1]), [1])
test_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))
class B(TupleTransform): pass
class C(TupleTransform): pass
f = B()
test_eq(f([1]), [1])
@B
def encodes(self, x:int): return x+1
@B
def encodes(self, x:str): return x+'1'
@B
def encodes(self, x)->None: return str(x)+'!'
b,c = B(),C()
test_eq(b([1]), [2])
test_eq(b(['1']), ('11',))
test_eq(b([1.0]), ('1.0!',))
test_eq(c([1]), [1])
test_eq(b([1,2]), (2,3))
test_eq(b.decode([2]), [2])
assert pickle.loads(pickle.dumps(b))
@B
def decodes(self, x:int): return x-1
test_eq(b.decode([2]), [1])
test_eq(b.decode(('2',)), ('2',))
```
Non-type-constrained functions are applied to all elements of a tuple.
```
class A(TupleTransform): pass
@A
def encodes(self, x): return x+1
@A
def decodes(self, x): return x-1
f = A()
t = f((1,2.0))
test_eq_type(t, (2,3.0))
test_eq_type(f.decode(t), (1,2.0))
```
Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.
```
class B(TupleTransform):
def encodes(self, x:int): return Int(x+1)
def encodes(self, x:str): return x+'1'
def decodes(self, x:Int): return x//2
f = B()
start = (1.,2,'3')
t = f(start)
test_eq_type(t, (1.,Int(3),'31'))
test_eq(f.decode(t), (1.,Int(1),'31'))
```
The same behavior also works with `typing` module type classes.
```
class A(Transform): pass
@A
def encodes(self, x:numbers.Integral): return x+1
@A
def encodes(self, x:float): return x*3
@A
def decodes(self, x:int): return x-1
f = A()
start = 1.0
t = f(start)
test_eq(t, 3.)
test_eq(f.decode(t), 3)
f = A(as_item=False)
start = (1.,2,3.)
t = f(start)
test_eq(t, (3.,3,9.))
test_eq(f.decode(t), (3.,2,9.))
```
Transform accepts lists
```
def a(x): return L(x_+1 for x_ in x)
def b(x): return L(x_-1 for x_ in x)
f = TupleTransform(a,b)
t = f((L(1,2),))
test_eq(t, (L(2,3),))
test_eq(f.decode(t), (L(1,2),))
```
## Func -
```
#export
def get_func(t, name, *args, **kwargs):
"Get the `t.name` (potentially partial-ized with `args` and `kwargs`) or `noop` if not defined"
f = getattr(t, name, noop)
return f if not (args or kwargs) else partial(f, *args, **kwargs)
```
This works for any kind of `t` supporting `getattr`, so a class or a module.
```
test_eq(get_func(operator, 'neg', 2)(), -2)
test_eq(get_func(operator.neg, '__call__')(2), -2)
test_eq(get_func(list, 'foobar')([2]), [2])
t = get_func(torch, 'zeros', dtype=torch.int64)(5)
test_eq(t.dtype, torch.int64)
a = [2,1]
get_func(list, 'sort')(a)
test_eq(a, [1,2])
```
Transforms are built with multiple dispatch: a given function can have several implementations, chosen depending on the type of the object received. This is done directly with the `TypeDispatch` class and type annotations in `Transform`, but you can also use the following class.
```
#export
class Func():
"Basic wrapper around a `name` with `args` and `kwargs` to call on a given type"
def __init__(self, name, *args, **kwargs): self.name,self.args,self.kwargs = name,args,kwargs
def __repr__(self): return f'sig: {self.name}({self.args}, {self.kwargs})'
def _get(self, t): return get_func(t, self.name, *self.args, **self.kwargs)
def __call__(self,t): return mapped(self._get, t)
```
You can call the `Func` object on any module name or type, even a list of types. It will return the corresponding function (defaulting to `noop` if nothing is found) or list of functions.
```
test_eq(Func('sqrt')(math), math.sqrt)
test_eq(Func('sqrt')(torch), torch.sqrt)
@patch
def powx(x:math, a): return math.pow(x,a)
@patch
def powx(x:torch, a): return torch.pow(x,a)
tst = Func('powx',a=2)([math, torch])
test_eq([f.func for f in tst], [math.powx, torch.powx])
for t in tst: test_eq(t.keywords, {'a': 2})
#export
class _Sig():
def __getattr__(self,k):
def _inner(*args, **kwargs): return Func(k, *args, **kwargs)
return _inner
Sig = _Sig()
show_doc(Sig, name="Sig")
```
`Sig` is just syntactic sugar to create a `Func` object more easily, with the syntax `Sig.name(*args, **kwargs)`.
```
f = Sig.sqrt()
test_eq(f(math), math.sqrt)
test_eq(f(torch), torch.sqrt)
```
## Pipeline -
```
#export
def compose_tfms(x, tfms, is_enc=True, reverse=False, **kwargs):
    "Apply all `tfms` to `x`, maybe in `reverse` order; use `decode` methods if `is_enc=False`"
if reverse: tfms = reversed(tfms)
for f in tfms:
if not is_enc: f = f.decode
x = f(x, **kwargs)
return x
def to_int (x): return Int(x)
def to_float(x): return Float(x)
def double (x): return x*2
def half(x)->None: return x/2
def test_compose(a, b, *fs): test_eq_type(compose_tfms(a, tfms=map(Transform,fs)), b)
test_compose(1, Int(1), to_int)
test_compose(1, Float(1), to_int,to_float)
test_compose(1, Float(2), to_int,to_float,double)
test_compose(2.0, 2.0, to_int,double,half)
class A(Transform):
def encodes(self, x:float): return Float(x+1)
def decodes(self, x): return x-1
tfms = [A(), Transform(math.sqrt)]
t = compose_tfms(3., tfms=tfms)
test_eq_type(t, Float(2.))
test_eq(compose_tfms(t, tfms=tfms, is_enc=False), 1.)
test_eq(compose_tfms(4., tfms=tfms, reverse=True), 3.)
tfms = [A(as_item=False), Transform(math.sqrt, as_item=False)]
test_eq(compose_tfms((9,3.), tfms=tfms), (3,2.))
#export
def mk_transform(f, as_item=True):
"Convert function `f` to `Transform` if it isn't already one"
f = instantiate(f)
return f if isinstance(f,Transform) else Transform(f, as_item=as_item)
def neg(x): return -x
test_eq(type(mk_transform(neg)), Transform)
test_eq(type(mk_transform(math.sqrt)), Transform)
test_eq(type(mk_transform(lambda a:a*2)), Transform)
#export
def gather_attrs(o, k, nm):
"Used in __getattr__ to collect all attrs `k` from `self.{nm}`"
if k.startswith('_') or k==nm: raise AttributeError(k)
att = getattr(o,nm)
res = [t for t in att.attrgot(k) if t is not None]
if not res: raise AttributeError(k)
return res[0] if len(res)==1 else L(res)
#export
class Pipeline:
"A pipeline of composed (for encode/decode) transforms, setup with types"
def __init__(self, funcs=None, as_item=False, filt=None):
self.filt,self.default = filt,None
if isinstance(funcs, Pipeline): self.fs = funcs.fs
else:
if isinstance(funcs, Transform): funcs = [funcs]
self.fs = L(ifnone(funcs,[noop])).mapped(mk_transform).sorted(key='order')
for f in self.fs:
name = camel2snake(type(f).__name__)
a = getattr(self,name,None)
if a is not None: f = L(a)+f
setattr(self, name, f)
self.set_as_item(as_item)
def set_as_item(self, as_item):
self.as_item = as_item
for f in self.fs: f.as_item = as_item
def setup(self, items=None):
tfms = self.fs[:]
self.fs.clear()
for t in tfms: self.add(t,items)
def add(self,t, items=None):
t.setup(items)
self.fs.append(t)
def __call__(self, o): return compose_tfms(o, tfms=self.fs, filt=self.filt)
def decode (self, o): return compose_tfms(o, tfms=self.fs, is_enc=False, reverse=True, filt=self.filt)
def __repr__(self): return f"Pipeline: {self.fs}"
def __getitem__(self,i): return self.fs[i]
def decode_batch(self, b, max_n=10): return batch_to_samples(b, max_n=max_n).mapped(self.decode)
def __setstate__(self,data): self.__dict__.update(data)
def __getattr__(self,k): return gather_attrs(self, k, 'fs')
def show(self, o, ctx=None, **kwargs):
for f in reversed(self.fs):
res = self._show(o, ctx, **kwargs)
if res is not None: return res
o = f.decode(o, filt=self.filt)
return self._show(o, ctx, **kwargs)
def _show(self, o, ctx, **kwargs):
o1 = [o] if self.as_item or not is_listy(o) else o
if not all(hasattr(o_, 'show') for o_ in o1): return
for o_ in o1: ctx = o_.show(ctx=ctx, **kwargs)
return ifnone(ctx,1)
add_docs(Pipeline,
__call__="Compose `__call__` of all `fs` on `o`",
decode="Compose `decode` of all `fs` on `o`",
show="Show `o`, a single item from a tuple, decoding as needed",
add="Add transform `t`",
decode_batch="`decode` all samples in the batch `b`",
set_as_item="Set value of `as_item` for all transforms",
setup="Call each tfm's `setup` in order")
```
`Pipeline` is a wrapper for `compose_tfms`. You can pass instances of `Transform` or regular functions in `funcs`; the `Pipeline` will wrap them all in `Transform` (and instantiate them if needed) during initialization. It handles the transform `setup` by adding the transforms one at a time and calling `setup` on each, goes through them in order in `__call__` or `decode`, and can `show` an object by decoding the transforms in reverse until it reaches an object that knows how to show itself.
```
# Empty pipeline is noop
pipe = Pipeline()
test_eq(pipe(1), 1)
pipe.set_as_item(False)
test_eq(pipe((1,)), (1,))
# Check pickle works
assert pickle.loads(pickle.dumps(pipe))
class IntFloatTfm(Transform):
def encodes(self, x): return Int(x)
def decodes(self, x): return Float(x)
foo=1
int_tfm=IntFloatTfm()
def neg(x): return -x
neg_tfm = Transform(neg, neg)
pipe = Pipeline([neg_tfm, int_tfm])
start = 2.0
t = pipe(start)
test_eq_type(t, Int(-2))
test_eq_type(pipe.decode(t), Float(start))
test_stdout(lambda:pipe.show(t), '-2')
pipe.set_as_item(False)
test_stdout(lambda:pipe.show(pipe((1.,2.))), '-1\n-2')
```
Transforms are available as attributes named with the snake_case version of the names of their types. Attributes in transforms can be directly accessed as attributes of the pipeline.
```
test_eq(pipe.int_float_tfm, int_tfm)
test_eq(pipe.foo, 1)
pipe = Pipeline([int_tfm, int_tfm])
pipe.int_float_tfm
test_eq(pipe.int_float_tfm[0], int_tfm)
test_eq(pipe.foo, [1,1])
# Check opposite order
pipe = Pipeline([int_tfm,neg_tfm])
t = pipe(start)
test_eq(t, -2)
test_stdout(lambda:pipe.show(t), '-2')
class A(Transform):
def encodes(self, x): return int(x)
def decodes(self, x): return Float(x)
pipe = Pipeline([neg_tfm, A])
t = pipe(start)
test_eq_type(t, -2)
test_eq_type(pipe.decode(t), Float(start))
test_stdout(lambda:pipe.show(t), '-2.0')
s2 = (1,2)
pipe.set_as_item(False)
t = pipe(s2)
test_eq_type(t, (-1,-2))
test_eq_type(pipe.decode(t), (Float(1.),Float(2.)))
test_stdout(lambda:pipe.show(t), '-1.0\n-2.0')
class B(Transform):
def encodes(self, x): return x+1
def decodes(self, x): return x-1
from PIL import Image
def f1(x:TensorImage): return -x
def f2(x): return Image.open(x).resize((128,128))
def f3(x:Image.Image): return(TensorImage(array(x)))
pipe = Pipeline([f2,f3,f1])
t = pipe(TEST_IMAGE)
test_eq(type(t), TensorImage)
test_eq(t, -tensor(f3(f2(TEST_IMAGE))))
pipe = Pipeline([f2,f3])
t = pipe(TEST_IMAGE)
ax = pipe.show(t)
test_fig_exists(ax)
#Check filtering is properly applied
add1 = B()
add1.filt = 1
pipe = Pipeline([neg_tfm, A(), add1])
test_eq(pipe(start), -2)
pipe.filt=1
test_eq(pipe(start), -1)
pipe.filt=0
test_eq(pipe(start), -2)
for t in [None, 0, 1]:
pipe.filt=t
test_eq(pipe.decode(pipe(start)), start)
test_stdout(lambda: pipe.show(pipe(start)), "-2.0")
```
### Methods
```
#TODO: method examples
show_doc(Pipeline.__call__)
show_doc(Pipeline.decode)
show_doc(Pipeline.decode_batch)
pipe.set_as_item(False)
t = tensor([1,2,3])
pipe.filt=1
test_eq(pipe.decode_batch([t,t+1], max_n=2), [(0,-1),(-1,-2)])
show_doc(Pipeline.setup)
```
During setup, the `Pipeline` starts with no transforms and adds them one at a time, so that each transform's `setup` receives the items processed by the transforms that precede it, and not those after.
```
#hide
#Test is with TfmdList
```
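The progressive setup described above can be sketched with a toy pipeline (a simplified stand-in for illustration, not the fastai `Pipeline`):

```python
# Each transform's setup sees items processed only by transforms added before it
class ToyTfm:
    def setup(self, items): self.seen = list(items)
    def __call__(self, x): return x + 1

class ToyPipeline:
    def __init__(self, tfms): self.tfms, self._pending = [], list(tfms)
    def setup(self, items):
        for t in self._pending:
            t.setup(self(x) for x in items)  # items through transforms added so far
            self.tfms.append(t)
    def __call__(self, x):
        for t in self.tfms: x = t(x)
        return x

p = ToyPipeline([ToyTfm(), ToyTfm()])
p.setup([0, 1])
print(p.tfms[0].seen, p.tfms[1].seen)  # [0, 1] [1, 2]
```

The first transform's `setup` sees the raw items, while the second sees them after the first transform has been applied.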
## Export -
```
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
```
```
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook, tqdm
from scipy.spatial.distance import jaccard
from surprise import Dataset, Reader, KNNBasic, KNNWithMeans, SVD, SVDpp, accuracy
from surprise.model_selection import KFold, train_test_split, cross_validate, GridSearchCV
import warnings
warnings.simplefilter('ignore')
# !find * -iname 'movies.c*' -or -iname 'ratings.csv' -print -or -iname 'Library' -prune -or -iname 'Dropbox' -prune
# !find * -iname 'movies.c*' -or -iname 'ratings.csv' -print -or -iname 'Library' -prune
movies = pd.read_csv('movies.csv') # Load the data
ratings = pd.read_csv('ratings.csv')
movies_with_ratings = movies.join(ratings.set_index('movieId'), on='movieId').reset_index(drop=True) # Join the movies with their ratings
movies_with_ratings.dropna(inplace=True) # Drop missing values
movies_with_ratings.head()
num_movies = movies_with_ratings.movieId.unique().shape[0] # Number of unique movie IDs
uniques = movies_with_ratings.movieId.unique() # Array of unique movie IDs (numpy.ndarray)
user_vector = {} # Build a dict of user vectors: {user_id: array of that user's ratings over all movies}
for user, group in movies_with_ratings.groupby('userId'):
    user_vector[user] = np.zeros(num_movies)
    for i in range(len(group.movieId.values)):
        m = np.argwhere(uniques==group.movieId.values[i])[0][0]
        r = group.rating.values[i]
        user_vector[user][m] = r
dataset = pd.DataFrame({
    'uid': movies_with_ratings.userId,
    'iid': movies_with_ratings.title,
    'rating': movies_with_ratings.rating
}) # Build the dataset that will be fed to the surprise model
dataset.head()
reader = Reader(rating_scale=(0.5, 5.0)) # Rating scale: 0.5 is the minimum, 5.0 the maximum
data = Dataset.load_from_df(dataset, reader) # Convert the dataset into the format the surprise library expects
trainset, testset = train_test_split(data, test_size=.15, random_state=42) # Split into train and test sets
algo = SVDpp(n_factors=20, n_epochs=20) # The SVD++ model (https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp)
algo.fit(trainset) # Fit the model on the train set
test_pred = algo.test(testset) # Evaluate on the test set
accuracy.rmse(test_pred, verbose=True) # Root Mean Square Error (RMSE)
# Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors). \
# Residuals are a measure of how far from the regression line data points are; \
# RMSE is a measure of how spread out these residuals are. \
# In other words, it tells you how concentrated the data is around the line of best fit.
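# Aside (illustration only; toy numbers, not from this dataset):
# RMSE = sqrt(mean((y_true - y_pred)**2)); a quick numeric check of the formula:
rmse_demo_true = np.array([3.0, 4.5, 5.0])
rmse_demo_pred = np.array([2.5, 4.0, 5.0])
print(np.sqrt(np.mean((rmse_demo_true - rmse_demo_pred) ** 2)))  # ~0.408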
def recommendation(uid=2.0, neighbors=5, ratin=4.5, films=5, top=5):
    '''
    uid - ID of the user requesting recommendations
    neighbors - number of users similar to `uid` to look for
    ratin - minimum rating a similar user's movie must have to be considered
    films - number of movies per similar user to score and sort
    top - number of movies to recommend to user `uid`
    '''
    titles = [key for key in user_vector.keys() if key != uid] # skip uid itself so we don't recommend movies the user has already watched and rated
    distances = [jaccard(user_vector[uid], user_vector[key]) for key in user_vector.keys() if key != uid] # Jaccard distance (https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.jaccard.html#scipy.spatial.distance.jaccard)
    best_indexes = np.argsort(distances)[:neighbors] # Closest users first
    similar_users = np.array([(titles[i], distances[i]) for i in best_indexes])[:, 0]
    movies_with_ratings.sort_values('timestamp', inplace=True) # Sort by timestamp
    movies = np.array([])
    for user in similar_users:
        a = np.array(movies_with_ratings[movies_with_ratings.rating >= ratin][movies_with_ratings.userId == user][-films:].title)
        movies = np.concatenate([a, movies])
    user_movies = movies_with_ratings[movies_with_ratings.userId == uid].title.unique()
    titles_s = list(dict.fromkeys(movies)) # de-duplicate while preserving order, so titles and scores stay aligned
    scores = [algo.predict(uid=uid, iid=movie).est for movie in titles_s]
    best_indexes = np.argsort(scores)[-top:] # Highest predicted scores
    scores_r = [scores[i] for i in reversed(best_indexes)] # list(reversed([1, 2, 3, 4])) -> [4, 3, 2, 1]
    titles_r = [titles_s[i] for i in reversed(best_indexes)]
    # Merge into one dataframe to display the recommendations
    df1, df2 = pd.DataFrame(data=titles_r).reset_index(), pd.DataFrame(data=scores_r).reset_index()
    df1.columns, df2.columns = ['index','films'], ['index','scores']
    df = pd.merge(df1, df2, on='index')
    df['rank'] = df.scores.rank(ascending=False).astype('int')
    data = df[['rank', 'films', 'scores']]
    return data
''' User 2
10 similar users
From the similar users' movies rated at least 4.5
Taking each similar user's top 10 such movies
Recommend the top 10 movies to the user '''
data = recommendation(uid=2.0, neighbors=10, ratin=4.5, films=10, top=10)
data.head(10)
pass
```
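The `jaccard` call above measures how dissimilar two users' rating vectors are. A minimal sketch of the same measure in plain numpy (an illustration, not scipy's implementation):

```python
import numpy as np

def jaccard_dist(u, v):
    # Treat any nonzero entry as "rated"; distance = disagreeing positions
    # divided by positions where at least one user rated the movie
    u, v = np.asarray(u) != 0, np.asarray(v) != 0
    union = np.count_nonzero(u | v)
    return np.count_nonzero(u != v) / union if union else 0.0

print(jaccard_dist([1, 0, 1], [1, 1, 0]))  # 2/3: the users share 1 of 3 rated movies
```

A distance of 0 means the two users rated exactly the same movies; closer to 1 means almost no overlap, which is why the code sorts by this value to find neighbors.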
<a href="https://colab.research.google.com/github/ArpitaChatterjee/Comedian-transcript-Analysis/blob/main/Exploratory_Data_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Finding each comedian's patterns and what makes them likable
1. Most common words
2. Size of vocabulary
3. Amount of profanity used
##Most common words
```
#read the document-term matrix (dtm)
import pandas as pd
data=pd.read_pickle('/content/drive/MyDrive/Colab Notebooks/NLP/dtm.pkl')
data= data.transpose()
data.head()
#find the top 30 words said by each comedian
top_dict={}
for c in data.columns:
    top = data[c].sort_values(ascending=False).head(30)
    top_dict[c] = list(zip(top.index, top.values))
top_dict
#print the top 15 words said by each comedian
for comedian, top_words in top_dict.items():
    print(comedian)
    print(', '.join([word for word, count in top_words[:15]]))
    print('---')
```
**NOTE:** At this point, we could go on and create word clouds. However, by looking at these top words, you can see that some of them have very little meaning and could be added to a stop words list.
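Extending scikit-learn's built-in stop word list works as in this minimal, self-contained sketch (toy corpus and made-up extra stop words, not the notebook's data):

```python
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer

# Union scikit-learn's built-in English stop words with custom additions
extra_stop_words = ['like', 'know']  # hypothetical frequent-but-meaningless words
stop_words = text.ENGLISH_STOP_WORDS.union(extra_stop_words)

cv = CountVectorizer(stop_words=list(stop_words))
cv.fit_transform(["you know i like dogs", "dogs like you"])
print(sorted(cv.vocabulary_))  # only 'dogs' survives
```

Everything in the union is filtered out before the document-term matrix is built, which is exactly what the cell below does with the real transcripts.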
```
#look at the most common top words and add them to the stop word list
from collections import Counter
#pull out top 30 words
words=[]
for comedian in data.columns:
    top = [word for (word, count) in top_dict[comedian]]
    for t in top:
        words.append(t)
words
#aggregate the list and identify the most common words
Counter(words).most_common()
#if more than half of the comedians share a top word, remove it as a stop word
add_stop_words= [word for word, count in Counter(words).most_common() if count>6]
add_stop_words
#update the DTM with the new list of stop words
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
#read the clean data
data_clean= pd.read_pickle('/content/drive/MyDrive/Colab Notebooks/NLP/data_clean.pkl')
#add new stop words
stop_words= text.ENGLISH_STOP_WORDS.union(add_stop_words)
#recreate the dtm
cv= CountVectorizer(stop_words=stop_words)
data_cv= cv.fit_transform(data_clean.transcript)
data_stop =pd.DataFrame(data_cv.toarray(), columns=cv.get_feature_names())
data_stop.index = data_clean.index
#pickle for later use
import pickle
pickle.dump(cv, open("/content/drive/MyDrive/Colab Notebooks/NLP/cv.pkl", "wb"))
data_stop.to_pickle("/content/drive/MyDrive/Colab Notebooks/NLP/dtm_stop.pkl")
!pip install wordcloud
from wordcloud import WordCloud
wc= WordCloud(stopwords=stop_words, background_color='white', colormap='Dark2', max_font_size=150, random_state=42 )
#reset output dimension
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=[16, 6]
full_names=['Ali Wong', 'Anthony Jeselnik', 'Bill Burr', 'Bo Burnham', 'Dave Chappelle', 'Hasan Minhaj',
'Jim Jefferies', 'Joe Rogan', 'John Mulaney', 'Louis C.K.', 'Mike Birbiglia', 'Ricky Gervais']
#create subplots for each comedian
for index, comedian in enumerate(data.columns):
    wc.generate(data_clean.transcript[comedian])
    plt.subplot(3, 4, index+1)
    plt.imshow(wc, interpolation="bilinear")
    plt.axis("off")
    plt.title(full_names[index])
plt.show()
```
###**Finding**
* Ali Wong says the s-word a lot and talks about being Asian. I guess that's funny to me.
* A lot of people use the F-word. Let's dig into that later.
# **Number of words**
```
#find the number of unique words each comedian used
#identify the nonzero items in the dtm, meaning the word appears at least once
unique_list=[]
for comedian in data.columns:
    uniques = data[comedian].to_numpy().nonzero()[0].size
    unique_list.append(uniques)
#create a new dataframe that contains this unique word count
data_words = pd.DataFrame(list(zip(full_names, unique_list)),columns=['comedian', 'unique_words'])
data_unique_sort= data_words.sort_values(by='unique_words')
data_unique_sort
#calculate the words per minute of each comedian
#total number of words each comedian uses
total_list=[]
for comedian in data.columns:
    totals = sum(data[comedian])
    total_list.append(totals)
#comedy special runtimes from IMDb, in minutes
run_times= [60, 59, 80, 60, 67, 73, 77, 63, 62, 58, 76, 79]
#add more columns to the dataframe
data_words['total_words'] = total_list
data_words['run_times']= run_times
data_words['words_per_min']= data_words['total_words']/ data_words['run_times']
#sort the df to check the slowest and fastest
data_wpm_sort= data_words.sort_values(by='words_per_min')
data_wpm_sort
#plot the findings
import numpy as np
y_pos= np.arange(len(data_words))
plt.subplot(1, 2, 1)
plt.barh(y_pos, data_unique_sort.unique_words, align='center')
plt.yticks(y_pos, data_unique_sort.comedian)
plt.title('Number of Unique Words', fontsize=20)
plt.subplot(1, 2, 2)
plt.barh(y_pos, data_wpm_sort.words_per_min, align='center')
plt.yticks(y_pos, data_wpm_sort.comedian)
plt.title('Number of Words Per Minute', fontsize=20)
plt.tight_layout()
plt.show()
```
##**Finding**
* **Vocabulary**
* Ricky Gervais (British comedy) and Bill Burr (podcast host) use a lot of words in their comedy
    * Louis C.K. (self-deprecating comedy) and Anthony Jeselnik (dark humor) have a smaller vocabulary
* **Talking Speed**
* Joe Rogan (blue comedy) and Bill Burr (podcast host) talk fast
* Bo Burnham (musical comedy) and Anthony Jeselnik (dark humor) talk slow
Ali Wong is somewhere in the middle in both cases. Nothing too interesting here.
## **Amt of Profanity**
```
Counter(words).most_common()
#isolate just these bad words
data_bad_words = data.transpose()[['fucking', 'fuck', 'shit']]
data_profanity = pd.concat([data_bad_words.fucking+ data_bad_words.fuck, data_bad_words.shit], axis=1)
data_profanity.columns = ['f_words', 's_words']
data_profanity
#let's create a scatter plot of our findings
plt.rcParams['figure.figsize']=[10, 8]
for i, comedian in enumerate(data_profanity.index):
    x = data_profanity.f_words.loc[comedian]
    y = data_profanity.s_words.loc[comedian]
    plt.scatter(x, y, color='blue')
    plt.text(x+1.5, y+0.5, full_names[i], fontsize=10)
plt.xlim(-5, 155)
plt.title('No. of Bad-words used in Routine', fontsize=20)
plt.xlabel('No of F words', fontsize=15)
plt.ylabel('No of S words', fontsize=15)
plt.show()
```
## **Finding**
* **Averaging 2 F-Bombs Per Minute!** - I don't like too much swearing, especially the f-word, which is probably why I've never heard of Bill Bur, Joe Rogan and Jim Jefferies.
* **Clean Humor** - It looks like profanity might be a good predictor of the type of comedy I like. Besides Ali Wong, my two other favorite comedians in this group are John Mulaney and Mike Birbiglia.
My conclusion: yes, it does, for a first pass. There are definitely some things that could be cleaned up further, such as adding more stop words or including bi-grams, but we can save that for another day. The results, especially the profanity findings, are interesting and make general sense.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
deforestation_df = pd.read_excel('data/Brazil_research/raw_data/savedrecs 1_1000.xls')
for i in range(1000, 43000, 1000):
temp_df = pd.read_excel(f'data/Brazil_research/raw_data/savedrecs {i+1}_{i+1000}.xls')
deforestation_df = pd.concat([deforestation_df, temp_df])
temp_df = pd.read_excel('data/Brazil_research/raw_data/savedrecs 43001_43248.xls')
deforestation_df = pd.concat([deforestation_df, temp_df])
len(deforestation_df)
deforestation_df.head(2)
deforestation_df.iloc[4]['Addresses']
deforestation_df.iloc[5]['Article Title']
deforestation_df = deforestation_df.drop(columns=['Hot Paper Status', 'Date of Export', 'Pubmed Id', 'Highly Cited Status', 'Special Issue'])
# deforestation_df = deforestation_df.drop(columns=['Unnamed: 69', 'Hot Paper Status', 'Date of Export', 'Pubmed Id', 'Highly Cited Status', 'Special Issue'])
# drop rows without a title, year, authors, locations, citations
# deforestation_df = deforestation_df.dropna(subset=['Authors', 'Addresses', 'Publication Year', 'Article Title', 'Times Cited, All Databases'])
deforestation_df = deforestation_df.dropna(subset=['Addresses', 'Publication Year', 'Times Cited, WoS Core'])
len(deforestation_df)
deforestation_df.to_csv('data/Brazil_research/cleaned_data/Brazil_focused_research.csv')
# research location assignment
country_name_df = pd.read_csv('data/country_names.csv')
country_names = np.array(country_name_df['name'].unique())
country_names = np.append(country_names, ['USA', 'England', 'Ireland', 'Korea', 'Moldova', 'Micronesia',
'Saint Martin', 'Sint Maarten', 'Tanzania', 'United Kingdom', 'UK',
'United States', 'Virgin Islands'])
country_brazil = np.array(['Brazil'])
country_names_no_brazil = np.setdiff1d(country_names, country_brazil)
deforestation_df = pd.read_csv('data/Brazil_research/cleaned_data/Brazil_focused_research.csv')
domestic_brazil_df = deforestation_df[deforestation_df['Addresses'].str.contains('|'.join(country_names_no_brazil), case=False) == False]
domestic_brazil_df = domestic_brazil_df[domestic_brazil_df['Addresses'].str.contains('Brazil', case=False) == True]
domestic_brazil_df = domestic_brazil_df.reset_index(drop=True)
international_brazil_df = deforestation_df[deforestation_df['Addresses'].str.contains('Brazil', case=False) == False]
# make sure that addresses at least contain some country
international_brazil_df = international_brazil_df[international_brazil_df['Addresses'].str.contains('|'.join(country_names_no_brazil), case=False) == True]
international_brazil_df = international_brazil_df.reset_index(drop=True)
collaboration_brazil_df = pd.concat([deforestation_df, domestic_brazil_df, international_brazil_df]).drop_duplicates(keep=False)
# make sure that addresses at least contain some country
collaboration_brazil_df = collaboration_brazil_df[collaboration_brazil_df['Addresses'].str.contains('|'.join(country_names), case=False) == True]
collaboration_brazil_df = collaboration_brazil_df.reset_index(drop=True)
collaboration_brazil_df['Publication Year'].min()
domestic_brazil_df.to_csv('data/Brazil_research/cleaned_data/domestic_brazil_research.csv')
international_brazil_df.to_csv('data/Brazil_research/cleaned_data/international_brazil_research.csv')
collaboration_brazil_df.to_csv('data/Brazil_research/cleaned_data/collaboration_brazil_research.csv')
```
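One subtlety in the address filtering above: the `'|'.join(...)` pattern handed to `str.contains` is interpreted as a regular expression, so any country name containing metacharacters (parentheses, dots) could match unintended text. A small sketch of escaping each name with `re.escape` first, using a made-up country list and addresses rather than the real `country_names.csv`:

```python
import re

import pandas as pd

# Hypothetical country list; the real one comes from country_names.csv
countries = ["Brazil", "United States", "Congo (DRC)"]

# Escape each name so parentheses etc. are matched literally
pattern = "|".join(re.escape(c) for c in countries)

addresses = pd.Series([
    "Univ Sao Paulo, Brazil",
    "MIT, United States",
    "Univ Kinshasa, Congo (DRC)",
    "Unknown Institute",
])
has_country = addresses.str.contains(pattern, case=False)
print(has_country.tolist())  # → [True, True, True, False]
```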
# Going deeper with Tensorflow
In this seminar we will start learning [Tensorflow](https://www.tensorflow.org/) for building deep learning models.
To install TF on your own machine:
* `pip install tensorflow` — the **cpu-only** build for Linux & Mac OS
* for automagical GPU support, see the [TF install page](https://www.tensorflow.org/install/) documentation
```
import tensorflow as tf
gpu_options = tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=0.1)
s = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
```
# Getting started
To begin with, let's implement a simple function in plain numpy, just for comparison. Write the computation of the sum of squares of the numbers from 0 to N-1.
**Hint:**
* The array of numbers from 0 to N-1 inclusive is `numpy.arange(N)`
```
import numpy as np
def sum_squares(N):
return <student.Implement_me()>
%%time
sum_squares(10**8)
```
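Whatever implementation you write can be sanity-checked against the closed form 0² + 1² + … + (N-1)² = (N-1)·N·(2N-1)/6. A plain-numpy sketch of both sides (the numpy one-liner here is a reference point, not the TF exercise):

```python
import numpy as np

def sum_squares_np(N):
    # reference numpy implementation: sum of squares of 0..N-1
    return int(np.sum(np.arange(N, dtype=np.int64) ** 2))

def sum_squares_closed(N):
    # closed form: (N-1) * N * (2N-1) / 6
    return (N - 1) * N * (2 * N - 1) // 6

for n in [1, 10, 1000, 10**6]:
    assert sum_squares_np(n) == sum_squares_closed(n)
print(sum_squares_np(10))  # → 285
```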
# Tensorflow teaser
Doing the very same thing
```
# I am going to be your function parameter
N = tf.placeholder('int64', name="input_to_your_function")
# I am a recipe for producing the sum of squares of arange(N), given N
result = tf.reduce_sum(tf.range(N)**2)
%%time
#example of computing the same as sum_squares
print(result.eval({N:10**8}))
```
# How does it work?
1. define placeholders where you'll send inputs;
2. make symbolic graph: a recipe for mathematical transformation of those placeholders;
3. compute outputs of your graph with particular values for each placeholder
* output.eval({placeholder:value})
* s.run(output, {placeholder:value})
* So far there are two main entities: "placeholder" and "transformation"
* Both can be numbers, vectors, matrices, tensors, etc.
* Both can be int32/64, floats or booleans (uint8) of various sizes.
* You can define new transformations as an arbitrary operation on placeholders and other transformations
  * `tf.reduce_sum(tf.range(N)**2)` is 3 sequential transformations of placeholder N
* There's a tensorflow symbolic version for every numpy function
* `a+b, a/b, a**b, ...` behave just like in numpy
* np.mean -> tf.reduce_mean
* np.arange -> tf.range
* np.cumsum -> tf.cumsum
* If you can't find the op you need, see the [docs](https://www.tensorflow.org/api_docs/python).
Still confused? We're gonna fix that.
```
#Default placeholder that can be an arbitrary float32 scalar, vector, matrix, etc.
arbitrary_input = tf.placeholder('float32')
#Input vector of arbitrary length
input_vector = tf.placeholder('float32',shape=(None,))
#Input vector that _must_ have 10 elements and integer type
fixed_vector = tf.placeholder('int32',shape=(10,))
#Matrix of arbitrary n_rows and 15 columns (e.g. a minibatch of your data table)
input_matrix = tf.placeholder('float32',shape=(None,15))
#You can generally use None whenever you don't need a specific shape
input1 = tf.placeholder('float64',shape=(None,100,None))
input2 = tf.placeholder('int32',shape=(None,None,3,224,224))
#elementwise multiplication
double_the_vector = input_vector*2
#elementwise cosine
elementwise_cosine = tf.cos(input_vector)
#difference between squared vector and vector itself
vector_squares = input_vector**2 - input_vector
#Practice time: create two vectors of type float32
my_vector = <student.init_float32_vector()>
my_vector2 = <student.init_one_more_such_vector()>
#Write a transformation(recipe):
#(vec1)*(vec2) / (sin(vec1) +1)
my_transformation = <student.implementwhatwaswrittenabove()>
print(my_transformation)
#it's okay, it's a symbolic graph
#
dummy = np.arange(5).astype('float32')
my_transformation.eval({my_vector:dummy,my_vector2:dummy[::-1]})
```
### Visualizing graphs
It's often useful to visualize the computation graph when debugging or optimizing.
Interactive visualization is where tensorflow really shines as compared to other frameworks.
There's a special instrument for that, called Tensorboard. You can launch it from console:
```tensorboard --logdir=/tmp/tboard --port=7007```
If you're pathologically afraid of consoles, try this:
```os.system("tensorboard --logdir=/tmp/tboard --port=7007 &")```
_(but don't tell anyone we taught you that)_
```
# launch tensorflow the ugly way, uncomment if you need that
import os
port = 6000 + os.getuid()
print("Port: %d" % port)
#!killall tensorboard
os.system("tensorboard --logdir=./tboard --port=%d &" % port)
# show graph to tensorboard
writer = tf.summary.FileWriter("./tboard", graph=tf.get_default_graph())
writer.close()
```
One basic functionality of tensorboard is drawing graphs. Once you've run the cell above, go to `localhost:7007` in your browser and switch to the _graphs_ tab in the topbar.
Here's what you should see:
<img src="https://s12.postimg.org/a374bmffx/tensorboard.png" width=480>
Tensorboard also allows you to draw plots (e.g. learning curves), record images & audio ~~and play flash games~~. This is useful when monitoring learning progress and catching some training issues.
One researcher said:
```
If you spent last four hours of your worktime watching as your algorithm prints numbers and draws figures, you're probably doing deep learning wrong.
```
You can read more on tensorboard usage [here](https://www.tensorflow.org/get_started/graph_viz)
# Do It Yourself
__[2 points max]__
```
# Quest #1 - implement a function that computes a mean squared error of two input vectors
# Your function has to take 2 vectors and return a single number
<student.define_inputs_and_transformations()>
mse =<student.define_transformation()>
compute_mse = lambda vector1, vector2: <how to run your graph?>
# Tests
from sklearn.metrics import mean_squared_error
for n in [1,5,10,10**3]:
elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),
np.ones(n),np.random.random(n),np.random.randint(100,size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el,el_2))
my_mse = compute_mse(el,el_2)
if not np.allclose(true_mse,my_mse):
print('Wrong result:')
print('mse(%s,%s)' % (el,el_2))
print("should be: %f, but your function returned %f" % (true_mse,my_mse))
raise ValueError("Something is wrong")
print("All tests passed")
```
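If it helps, here is a plain-numpy reference of the quantity the graph should compute — a sketch for cross-checking only, not a substitute for the TF solution:

```python
import numpy as np

def mse_np(v1, v2):
    # mean of elementwise squared differences
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return float(np.mean((v1 - v2) ** 2))

print(mse_np([1, 2, 3], [1, 2, 5]))  # → 1.3333333333333333
```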
# variables
The inputs and transformations have no value outside a function call. That is inconvenient if you want your model to have parameters (e.g. network weights) that are always present but can change their value over time.
Tensorflow solves this with `tf.Variable` objects.
* You can assign variable a value at any time in your graph
* Unlike placeholders, there's no need to explicitly pass values to variables when `s.run(...)`-ing
* You can use variables the same way you use transformations
```
#creating shared variable
shared_vector_1 = tf.Variable(initial_value=np.ones(5))
#initialize variable(s) with initial values
s.run(tf.global_variables_initializer())
#evaluating shared variable (outside the symbolic graph)
print("initial value", s.run(shared_vector_1))
# within the symbolic graph you use them just like any other input or transformation; no "get value" needed
#setting new value
s.run(shared_vector_1.assign(np.arange(5)))
#getting that new value
print("new value", s.run(shared_vector_1))
```
# tf.gradients - why graphs matter
* Tensorflow can compute derivatives and gradients automatically using the computation graph
* Gradients are computed as a product of elementary derivatives via chain rule:
$$ {\partial f(g(x)) \over \partial x} = {\partial f(g(x)) \over \partial g(x)}\cdot {\partial g(x) \over \partial x} $$
It can get you the derivative of any graph as long as it knows how to differentiate elementary operations
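As a quick illustration, independent of tensorflow: any automatically computed gradient can be cross-checked against a central finite-difference approximation:

```python
import numpy as np

def f(x):
    return x ** 2

def numeric_grad(g, x, eps=1e-5):
    # central finite difference approximation of dg/dx
    return (g(x + eps) - g(x - eps)) / (2 * eps)

x = 3.0
analytic = 2 * x                  # d(x^2)/dx = 2x by the power rule
numeric = numeric_grad(f, x)
assert abs(analytic - numeric) < 1e-6
print(analytic, round(numeric, 6))  # → 6.0 6.0
```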
```
my_scalar = tf.placeholder('float32')
scalar_squared = my_scalar**2
#a derivative of scalar_squared by my_scalar
derivative = tf.gradients(scalar_squared, my_scalar)[0]
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3,3)
x_squared, x_squared_der = s.run([scalar_squared,derivative],
{my_scalar:x})
plt.plot(x, x_squared,label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend();
```
# Why that rocks
```
my_vector = tf.placeholder('float32',[None])
#Compute the gradient of the next weird function over my_scalar and my_vector
#warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = tf.reduce_mean((my_vector+my_scalar)**(1+tf.nn.moments(my_vector,[0])[1]) + 1./ tf.atan(my_scalar))/(my_scalar**2 + 1) + 0.01*tf.sin(2*my_scalar**1.5)*(tf.reduce_sum(my_vector)* my_scalar**2)*tf.exp((my_scalar-4)**2)/(1+tf.exp((my_scalar-4)**2))*(1.-(tf.exp(-(my_scalar-4)**2))/(1+tf.exp(-(my_scalar-4)**2)))**2
der_by_scalar = <student.compute_grad_over_scalar()>
der_by_vector = <student.compute_grad_over_vector()>
#Plotting your derivative
scalar_space = np.linspace(1, 7, 100)
y = [s.run(weird_psychotic_function, {my_scalar:x, my_vector:[1, 2, 3]})
for x in scalar_space]
plt.plot(scalar_space, y, label='function')
y_der_by_scalar = [s.run(der_by_scalar, {my_scalar:x, my_vector:[1, 2, 3]})
for x in scalar_space]
plt.plot(scalar_space, y_der_by_scalar, label='derivative')
plt.grid()
plt.legend();
```
# Almost done - optimizers
While you can perform gradient descent by hand with automatic grads from above, tensorflow also has some optimization methods implemented for you. Recall momentum & rmsprop?
```
y_guess = tf.Variable(np.zeros(2,dtype='float32'))
y_true = tf.range(1,3,dtype='float32')
loss = tf.reduce_mean((y_guess - y_true + tf.random_normal([2]))**2)
optimizer = tf.train.MomentumOptimizer(0.01,0.9).minimize(loss,var_list=y_guess)
#same, but more detailed:
#updates = [[tf.gradients(loss,y_guess)[0], y_guess]]
#optimizer = tf.train.MomentumOptimizer(0.01,0.9).apply_gradients(updates)
from IPython.display import clear_output
s.run(tf.global_variables_initializer())
guesses = [s.run(y_guess)]
for _ in range(100):
s.run(optimizer)
guesses.append(s.run(y_guess))
clear_output(True)
plt.plot(*zip(*guesses),marker='.')
plt.scatter(*s.run(y_true),c='red')
plt.show()
```
# Logistic regression example
Implement the regular logistic regression training algorithm
Tips:
* Use a shared variable for weights
* X and y are potential inputs
* Compile 2 functions:
* `train_function(X, y)` - returns error and computes weights' new values __(through updates)__
 * `predict_function(X)` - just computes probabilities ("y") given data
We shall train on a two-class MNIST dataset
* please note that target `y` are `{0,1}` and not `{-1,1}` as in some formulae
```
from sklearn.datasets import load_digits
mnist = load_digits(n_class=2)
X,y = mnist.data, mnist.target
print("y [shape - %s]:" % (str(y.shape)), y[:10])
print("X [shape - %s]:" % (str(X.shape)))
print('X:\n',X[:3,:10])
print('y:\n',y[:10])
plt.imshow(X[0].reshape([8,8]))
# inputs and shareds
weights = <student.code_variable()>
input_X = <student.code_placeholder()>
input_y = <student.code_placeholder()>
predicted_y = <predicted probabilities for input_X>
loss = <logistic loss (scalar, mean over sample)>
optimizer = <optimizer that minimizes loss>
train_function = <compile function that takes X and y, returns log loss and updates weights>
predict_function = <compile function that takes X and computes probabilities of y>
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.metrics import roc_auc_score
for i in range(5):
<run optimizer operation>
loss_i = <compute loss at iteration i>
print("loss at iter %i:%.4f" % (i, loss_i))
print("train auc:",roc_auc_score(y_train, predict_function(X_train)))
print("test auc:",roc_auc_score(y_test, predict_function(X_test)))
print ("resulting weights:")
plt.imshow(s.run(weights).reshape(8, -1))
plt.colorbar();
```
# Bonus: my1stNN
Your ultimate task for this week is to build your first neural network [almost] from scratch and pure tensorflow.
This time you will solve the same digit recognition problem, but at a larger scale
* images are now 28x28
* 10 different digits
* 50k samples
Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.
__[bonus score]__
If you've already beaten logistic regression with a two-layer net, but your enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on the test set.
__SPOILER!__
At the end of the notebook you will find a few tips and frequently made mistakes. If you feel mighty enough to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us.
```
from mnist import load_dataset
#[down]loading the original MNIST dataset.
#Please note that you should only train your NN on _train sample,
# _val can be used to evaluate out-of-sample error, compare models or perform early-stopping
# _test should be hidden under a rock until final evaluation...
# But we both know it is near impossible to catch you evaluating on it.
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print (X_train.shape,y_train.shape)
plt.imshow(X_train[0,0])
<here you could just as well create computation graph>
<this may or may not be a good place to define the loss and optimizer>
<this may be a perfect cell to write a training&evaluation loop in>
<predict & evaluate on test here, right? No cheating pls.>
```
# SPOILERS!
Recommended pipeline
* Adapt logistic regression from previous assignment to classify some number against others (e.g. zero vs nonzero)
* Generalize it to multiclass logistic regression.
- Either try to remember lecture 0 or google it.
- Instead of weight vector you'll have to use matrix (feature_id x class_id)
 - softmax (exp over sum of exps) can be implemented manually or via tf.nn.softmax (numerically stable)
- probably better to use STOCHASTIC gradient descent (minibatch)
- in which case sample should probably be shuffled (or use random subsamples on each iteration)
* Add a hidden layer. Now your logistic regression uses hidden neurons instead of inputs.
- Hidden layer uses the same math as output layer (ex-logistic regression), but uses some nonlinearity (sigmoid) instead of softmax
- You need to train both layers, not just output layer :)
 - Do not initialize layers with zeros (due to symmetry effects). Gaussian noise with a small sigma will do.
- 50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve.
 - In the ideal case this totals to 2 `dot`s, 1 softmax and 1 sigmoid
- __make sure this neural network works better than logistic regression__
* Now's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now.
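The "stable" softmax mentioned above can be sketched in plain numpy as follows (illustration only — inside the graph you would use the tensorflow equivalent):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax: subtract the max before exponentiating,
    # which leaves the result unchanged but avoids overflow in exp
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

p = softmax(np.array([1.0, 2.0, 1000.0]))  # naive exp(1000.0) would overflow
print(np.argmax(p), round(float(p.sum()), 6))  # → 2 1.0
```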
<a href="https://colab.research.google.com/github/iued-uni-heidelberg/DAAD-Training-2021/blob/main/cord19download2text.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Downloading and reading CORD19 corpus
This notebook downloads and reads the free cord19 corpus into one file. The notebook is hosted at IÜD, Heidelberg University github repository https://github.com/iued-uni-heidelberg/cord19
CORD19 (covid-19) open-source corpus is available from https://www.semanticscholar.org/cord19/download.
Documentation is available at https://github.com/allenai/cord19
The original files are in json format. The output file is in plain text format; documents are separated (by default) by \<doc id="doc1000001"> ... \</doc> tags
The purpose of the plain text file is for further processing, e.g., generating linguistic annotation using the TreeTagger or the Standford parser for part-of-speech annotation or dependency / constituency parsing.
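For downstream tools that need the documents separated again, the tag format can be split back out with a short regular expression. A sketch over a made-up two-document sample:

```python
import re

# Made-up two-document sample in the output format described above
corpus = '''<doc id="doc1000001">
First article text.
</doc>

<doc id="doc1000002">
Second article text.
</doc>'''

# Non-greedy match pulls out each (id, text) pair
docs = re.findall(r'<doc id="([^"]+)">\n(.*?)\n</doc>', corpus, re.DOTALL)
print([doc_id for doc_id, _ in docs])  # → ['doc1000001', 'doc1000002']
```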
## Downloading CORD19 corpus
The corpus is downloaded and extracted from https://www.semanticscholar.org/cord19/download
Please check the link above: if you need the latest release of the corpus or if you would like to choose another release. Currently the 2021-08-30 release is downloaded.
File size is ~11GB
expected download time ~5 min
```
!wget https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases/cord-19_2021-08-30.tar.gz
```
Extracting cord-19 corpus, approximate time ~ 4 min
```
!tar -xvzf cord-19_2021-08-30.tar.gz
```
Removing initial archive to free some disk space
```
!rm cord-19_2021-08-30.tar.gz
```
Extracting document parsers, which contain individual articles in separate json files. This is expected to take ~ 12 min.
```
!tar -xvzf 2021-08-30/document_parses.tar.gz
```
Removing more files to save space: ~ 9 seconds
```
# removing more files to save space
!rm --recursive 2021-08-30
!rm --recursive document_parses/pdf_json
```
## Reading json directory and merging into text file(s)
Run this cell to create the class; then run the next cell to execute on the directory "document_parses/pmc_json"
This is a class for reading a directory of json files and writing them to a single text file, or splitting them into several text files with `split_by_docs=N` documents in each file.
```
# -*- coding: utf-8 -*-
# Python script to open each file, read json input and copy to one text file for subsequent processing
import os, re, sys
import json
class clJsonDir2txt(object):
'''
@author Bogdan Babych, IÜD, Heidelberg University
@email bogdan [dot] babych [at] iued [dot] uni-heidelberg [dot] de
a script for processing covid-19 corpus:
@url https://www.semanticscholar.org/cord19 @url https://www.semanticscholar.org/cord19/download
recursively reads files from a directory, and glues them together into a single corpus file
@todo:
working with sections - collect titles of all sections; frequent sections; select argumentative sections (e.g., discussion, analysis...)
- to compare descriptive and argumentative parts of the corpus
experimenting with different annotations (pos, parsing... ); MT quality evaluation...
'''
def __init__(self, SDirName, output_file = 'corpus_out.txt', textfilter=None, include_title = True, include_refs = True, include_authors = True, tag='doc', id=1000000, split_by_docs = 0): # initialising by openning the directories
self.SOutput_file = output_file
self.STextFilter = textfilter
self.RFilter = re.compile(textfilter, re.IGNORECASE | re.MULTILINE)
self.BInclTitle = include_title # implemented
self.BInclRefs = include_refs # not implemented yet
self.BInclAuth = include_authors # not implemented yet
self.STag = tag
self.ID = id
self.ISplitByDocs = int(split_by_docs)
# print(self.ISplitByDocs)
self.openDir(SDirName)
return
def openDir(self, path): # recursively walks directories from a given root directory and reads each file into a string
i = 0
if self.ISplitByDocs:
SPartFile = "part1000000" + self.SOutput_file
FOut = open(SPartFile, 'w')
else:
FOut = open(self.SOutput_file, 'w')
for root,d_names,f_names in os.walk(path):
for f in f_names:
i+=1
if i%1000==0: print(str(i) + '. Processing: ' + f)
fullpath = os.path.join(root, f)
# print(fullpath)
try:
FIn = open(fullpath,'r')
SIn = FIn.read()
# apply text filter, if not None
if self.STextFilter and (re.search(self.RFilter, SIn) == None): continue
SText2Write = self.procFile(SIn,f,i)
if SText2Write: FOut.write(SText2Write) # if the string is not empty then write to file
FIn.close()
except:
print(f'file {f} cannot be read or processed')
finally:
# splitting output into chunks of "split_by_docs" size
if self.ISplitByDocs and (i % self.ISplitByDocs == 0): # if split_by_docs == 0 everything goes into one file; if > 0, a new part file is started every split_by_docs documents
SPartFile = "part" + str(1000000 + i) + self.SOutput_file # generate new file name
FOut.flush()
FOut.close()
FOut = open(SPartFile, 'w')
FOut.flush()
FOut.close()
return
def procFile(self, SIn,SFNameIn,i): # sends each json string for text extraction and attaches the correct tags to each output string
STagOpen = '<' + self.STag + ' id="' + self.STag + str(self.ID + i) + '">\n'
STagClose = '\n</' + self.STag + '>\n\n'
SText4Corpus = self.getJson(SIn, SFNameIn)
if SText4Corpus:
return STagOpen + SText4Corpus + STagClose
else:
print('\tNo data read from: ' + SFNameIn)
return None
def getJson(self, SIn, SFNameIn): # for each file-level string read from a file: managing internal structure of the covid-19 json file
LOut = [] # collecting a list of strings
try:
DDoc = json.loads(SIn)
except:
print('\t\t' + SFNameIn + ' => error reading json2dictionary')
return None
# metadata:
try:
DMetaData = DDoc['metadata']
if DMetaData:
SMetaData = self.getJson_Metadata(DMetaData)
if SMetaData: LOut.append(SMetaData)
except:
print('\t\t\t' + SFNameIn + ' ====> no metadata')
DMetaData = None
# body text
try:
LBodyText = DDoc['body_text']
if LBodyText:
SBodyText = self.getJson_BodyText(LBodyText)
LOut.append(SBodyText)
except:
print('\t\t\t' + SFNameIn + ' ====> no body_text')
LBodyText = None
# further: to implement references
SText = '\n\n'.join(LOut)
return SText
def getJson_Metadata(self, DIn): # converts interesting parts of metadata into a string
SMetadata = ''
LMetadata = []
try: STitle = DIn["title"]
except: STitle = None
if STitle and self.BInclTitle:
LMetadata.append(STitle)
# to implement reading of authors' names
if LMetadata: SMetadata = '\n\n'.join(LMetadata)
return SMetadata
def getJson_BodyText(self, LIn): # converts interesting parts of the body texts into a string
SBodyText = ''
LBodyText = []
for DParagraph in LIn:
try:
## DParagraphs[section] ## -- later on >> distinction between different sections....
SParagraph = DParagraph["text"]
LBodyText.append(SParagraph)
except:
print('!',)
continue
SBodyText = '\n\n'.join(LBodyText)
return SBodyText
# arguments:
'''
sys.argv[1], # obligatory: input directory name;
other arguments optional:
output_file = 'covid19corpus.txt',
textfilter = None, # if this is string, only texts containing it are collected, e.g., covid
include_title = True, # include or exclude title
include_refs = False, # not implemented yet: include or exclude references
split_by_docs=0 # split by groups of n documents; if 0 then write to one file
'''
'''if __name__ == '__main__':
OJsonDir2txt = clJsonDir2txt(sys.argv[1], output_file = 'covid19corpus.txt', textfilter=None, include_title = True, include_refs = False, split_by_docs=0)
'''
```
This cell will execute reading of the json files into a single file (or multiple files).
Change the value of `split_by_docs=0` to `split_by_docs=10000` (or any number of documents per file you wish); this will create several corpus files with that many documents each.
Approximate execution time ~10 min
File size to download ~4.3 GB
It contains ~198.000 documents,
~ 671.578.587 words
~ 19.381.647 paragraphs (including empty lines, i.e., ~10M real paragraphs)
~ 4.619.100.883 characters
Download time can take up to 1 hour depending on your connection speed.
To split into ~BNC-size chunks (100MW), split into groups of ~40000 documents (in the following cell set "split_by_docs=40000")
```
# remove parameter textfilter='covid', to return all documents
OJsonDir2txt = clJsonDir2txt("document_parses/pmc_json", output_file = 'covid19corpusFilterCOVID.txt', textfilter='covid', include_title = True, include_refs = False, split_by_docs=40000)
```
To see the number of words, paragraphs in your corpus you can use this command:
```
!wc covid19corpus.txt
```
If you have split the text into parts, you can see the number of words in each part using this command:
```
!wc part*
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
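As a tiny illustration of the masking idea behind `cv2.bitwise_and()`, here is the same operation in plain numpy on a made-up 4×4 grayscale "image" (the real helper below builds the mask with `cv2.fillPoly()`):

```python
import numpy as np

# Toy 4x4 grayscale image with pixel values 0..15
img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Mask keeping only the bottom-right 2x2 quadrant (255 = keep, 0 = discard)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[2:, 2:] = 255

# Same effect as cv2.bitwise_and(img, mask): pixels survive only where mask is 255
masked = np.bitwise_and(img, mask)
print(masked[3, 3], masked[0, 0])  # → 15 0
```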
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
# images here are read with mpimg.imread(), which returns RGB, so use RGB2GRAY
# (use cv2.COLOR_BGR2GRAY instead if you read the image with cv2.imread())
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
# -------------- MY CODE -------------------#
# Convert an image from RGB to HSV
def to_hsv(img):
return cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
# Takes as input an image, a lower HSV threshold and a higher HSV threshold
# and return a mask used to filter out all the colors that are not in range
def mask_color(hsv, low, high):
mask = cv2.inRange(hsv, low, high)
return mask
# Utility function to check if a value is not +/-inf
def check_inf(value):
return abs(value) < math.inf
# Utility function used to filter out all the slopes that are not in the
# range [MIN_SLOPE, MAX_SLOPE]
def slope_in_range(slope):
MIN_SLOPE = 0.3
MAX_SLOPE = 1.0
return MIN_SLOPE <= abs(slope) <= MAX_SLOPE
def draw_lines(img, lines, color=[255, 0, 0], thickness=12):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
First, I tried to follow https://knowledge.udacity.com/questions/18578
    but the result obtained wasn't satisfying. So, I decided to take the slope
and coefficient of the line and average them instead of the "lower_x_values" as suggested
in point 4 of https://knowledge.udacity.com/questions/18578.
I also filtered the slope in order to have them in a range so as to avoid strange behaviours.
Following point 1 of https://knowledge.udacity.com/questions/18578, I used the sign of the slope
to divide them in left_line_slopes and right_line_slopes.
Then, I proceeded with averaging slopes and coefficient and use the means to find
a lower_x and an upper_x. In order to find the x, I used two y points:
- the y max dimension of the image
- the minimum y point found in all the lines that are in slope_range
    Finally, I use all this information to draw the lines.
Some maths:
if we have a line defined as y = slope*x + coeff, we have that:
- slope = ((y2-y1)/(x2-x1))
- coeff = y1 - slope*x1 (could also be y2 and x2)
"""
# In order to find the minimum, start with the max value and
# update it accordingly
min_y = img.shape[0]
max_y = img.shape[0]
left_line_slopes = []
left_line_coeff = []
right_line_slopes = []
right_line_coeff = []
for line in lines:
for x1,y1,x2,y2 in line:
slope = ((y2-y1)/(x2-x1))
coeff = y1 - slope*x1
if slope_in_range(slope) and check_inf(slope):
if slope > 0:
# Right line case (y axis is inverted in the image)
right_line_slopes.append(slope)
right_line_coeff.append(coeff)
elif slope < 0:
# Left line case (y axis is inverted in the image)
left_line_slopes.append(slope)
left_line_coeff.append(coeff)
min_y = min(y1, y2, min_y)
# if len(right_line_slopes) > 0 I also have a coeff, so
# it's no use to check len(right_line_coeff)
if len(right_line_slopes) > 0:
right_line_slope_mean = np.mean(right_line_slopes, dtype=np.float64)
right_line_coeff_mean = np.mean(right_line_coeff, dtype=np.float64)
# due to precision, np.mean can return inf or -inf: check if values are correct before using them
if check_inf(right_line_slope_mean) and check_inf(right_line_coeff_mean):
max_x = int((max_y - right_line_coeff_mean)/right_line_slope_mean)
min_x = int((min_y - right_line_coeff_mean)/right_line_slope_mean)
cv2.line(img, (min_x, min_y), (max_x, max_y), color, thickness)
# if len(left_line_slopes) > 0 I also have a coeff, so
# it's no use to check len(left_line_coeff)
if len(left_line_slopes) > 0:
left_line_slope_mean = np.mean(left_line_slopes, dtype=np.float64)
left_line_coeff_mean = np.mean(left_line_coeff, dtype=np.float64)
# due to precision, np.mean can return inf or -inf: check if values are correct before using them
if check_inf(left_line_slope_mean) and check_inf(left_line_coeff_mean):
max_x = int((max_y - left_line_coeff_mean)/left_line_slope_mean)
min_x = int((min_y - left_line_coeff_mean)/left_line_slope_mean)
cv2.line(img, (min_x, min_y), (max_x, max_y), color, thickness)
# ------------- END OF MY CODE -------------#
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
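The averaging/extrapolation math used inside `draw_lines()` can be checked in isolation. Below is a minimal sketch with two made-up right-lane segments: average their slopes and intercepts, then solve `x = (y - coeff)/slope` at the image bottom and at the horizon:

```python
import numpy as np

# Two hypothetical right-lane segments (x1, y1, x2, y2).
segments = [(500, 320, 700, 440), (520, 330, 720, 450)]

slopes, coeffs = [], []
for x1, y1, x2, y2 in segments:
    slope = (y2 - y1) / (x2 - x1)        # slope = (y2-y1)/(x2-x1)
    coeffs.append(y1 - slope * x1)       # coeff = y1 - slope*x1
    slopes.append(slope)

slope_mean = np.mean(slopes)             # 0.6 for both segments
coeff_mean = np.mean(coeffs)             # mean of 20 and 18 -> 19
# For a line y = slope*x + coeff, x = (y - coeff)/slope.
max_y, min_y = 540, 320                  # image bottom and horizon
max_x = int((max_y - coeff_mean) / slope_mean)
min_x = int((min_y - coeff_mean) / slope_mean)
print((min_x, min_y), (max_x, max_y))    # endpoints of the single drawn line
```

The real `draw_lines()` does exactly this per side, after filtering segments by slope sign and range.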
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
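To build intuition for the two Canny thresholds before tuning them, here is a simplified, NumPy-only sketch of the hysteresis step on a made-up 1-D gradient (the real `cv2.Canny` works on 2-D gradients and also performs non-maximum suppression):

```python
import numpy as np

def hysteresis_1d(grad, low, high):
    """Keep values >= high ("strong"), plus values >= low that connect
    to a strong value through a chain of neighbours (simplified Canny)."""
    strong = grad >= high
    weak = grad >= low
    keep = strong.copy()
    changed = True
    while changed:                       # propagate along weak neighbours
        changed = False
        for i in range(len(grad)):
            if weak[i] and not keep[i] and (
                    (i > 0 and keep[i - 1]) or
                    (i < len(grad) - 1 and keep[i + 1])):
                keep[i] = True
                changed = True
    return keep

grad = np.array([10, 80, 160, 90, 20, 85, 30])
# 160 is above the high threshold; 80 and 90 survive because they touch it.
# 85 exceeds the low threshold but is isolated, so it is dropped.
print(hysteresis_1d(grad, low=75, high=150))
```

Raising the high threshold demands stronger seed edges; raising the low threshold breaks weak connecting chains sooner.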
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# Define Gaussian Filter parameters
GAUSSIAN_KERNEL_SIZE = 3
# End Gaussian Filter parameters
# Define Canny parameters
CANNY_LOW_THRESHOLD = 75
CANNY_HIGH_THRESHOLD = 150
# End Canny parameters
# Do not use with the challenge video (its frame size is different)
# Define polygon
# imshape = image.shape
# LEFT_BOTTOM = (100, imshape[0])
# RIGHT_BOTTOM = (930, imshape[0])
# Y_HORIZON = 320
# LEFT_UP = (400, Y_HORIZON)
# RIGHT_UP = (590, Y_HORIZON)
# VERTICES = np.array([[LEFT_BOTTOM , LEFT_UP, RIGHT_UP, RIGHT_BOTTOM]], dtype=np.int32)
# The challenge frames have a different size, so I define the polygon using percentages
# computed from the fixed polygon found above
LEFT_BOTTOM_PERC = 100.0/960.0
RIGHT_BOTTOM_PERC = 930.0/960.0
Y_HORIZON_PERC = 320.0/540.0
LEFT_UP_PERC = 400.0/960.0
RIGHT_UP_PERC = 590.0/960.0
def get_vertices(img):
left_bottom = (img.shape[1] * LEFT_BOTTOM_PERC, img.shape[0])
right_bottom = (img.shape[1] * RIGHT_BOTTOM_PERC, img.shape[0])
left_up = (img.shape[1] * LEFT_UP_PERC, img.shape[0] * Y_HORIZON_PERC)
right_up = (img.shape[1] * RIGHT_UP_PERC, img.shape[0] * Y_HORIZON_PERC)
vertices = np.array([[left_bottom , left_up, right_up, right_bottom]], dtype=np.int32)
return vertices
# End polygon
# Define Hough transform parameters
RHO = 3.5 # distance resolution in pixels of the Hough grid
THETA = np.pi/180 # angular resolution in radians of the Hough grid
HOUGH_THRESHOLD = 30 # minimum number of votes (intersections in Hough grid cell)
HOUGH_MIN_LINE_LENGTH = 5 #minimum number of pixels making up a line
HOUGH_MAX_LINE_GAP = 25 # maximum gap in pixels between connectable line segments
# End Hough transform parameters
# Define Hough transform parameters suggested by reviewer (these override the values above)
HOUGH_THRESHOLD = 50 # minimum number of votes (intersections in Hough grid cell)
HOUGH_MIN_LINE_LENGTH = 100 #minimum number of pixels making up a line
HOUGH_MAX_LINE_GAP = 160 # maximum gap in pixels between connectable line segments
# End Hough transform parameters suggested by reviewer
# In order to improve the quality of filtering, I switched from grayscale to HSV, which is more robust
# in situations where illumination changes. Aside from converting to HSV and filtering by color,
# the pipeline resembles the one seen in the course material.
def my_pipeline(image):
hsv = to_hsv(image) # Convert the image to HSV
blur_gray = gaussian_blur(hsv, GAUSSIAN_KERNEL_SIZE)
# Define color ranges and apply color mask, so as to filter per color
yellow_hsv_low = np.array([ 0, 100, 100])
yellow_hsv_high = np.array([ 50, 255, 255])
white_hsv_low = np.array([ 20, 0, 180])
white_hsv_high = np.array([ 255, 80, 255])
mask_yellow = mask_color(blur_gray, yellow_hsv_low, yellow_hsv_high)
mask_white = mask_color(blur_gray, white_hsv_low, white_hsv_high)
color_masked_img = cv2.bitwise_or(mask_yellow, mask_white)
edges = canny(color_masked_img, CANNY_LOW_THRESHOLD, CANNY_HIGH_THRESHOLD)
vertices = get_vertices(image)
masked_edges = region_of_interest(edges, vertices)
line_image = hough_lines(masked_edges, RHO, THETA, HOUGH_THRESHOLD, HOUGH_MIN_LINE_LENGTH, HOUGH_MAX_LINE_GAP)
lines_edges = weighted_img(image, line_image)
return lines_edges
# Try the pipeline with the test_images provided
for file in os.listdir("test_images/"):
image = mpimg.imread("test_images/" + file)
lines_edges = my_pipeline(image)
vertices = get_vertices(image)
x = [vertices[0][0][0], vertices[0][1][0], vertices[0][2][0], vertices[0][3][0]]
y = [vertices[0][0][1], vertices[0][1][1], vertices[0][2][1], vertices[0][3][1]]
plt.plot(x, y, 'w--', lw=2)
plt.imshow(lines_edges)
plt.show()
    # cv2.imwrite expects BGR channel order, while mpimg.imread returned RGB, so convert before saving
    cv2.imwrite("test_images_output/" + file, cv2.cvtColor(lines_edges, cv2.COLOR_RGB2BGR))
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Utility function used to find out that the challenge video has a different frame size
def print_image_param(image):
print('This image is:', type(image), 'with dimensions:', image.shape)
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# print_image_param(image)
result = my_pipeline(image)
# you should return the final output (image where lines are drawn on lanes)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
|
github_jupyter
|
<link rel="stylesheet" href="../../styles/theme_style.css">
<!--link rel="stylesheet" href="../../styles/header_style.css"-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<table width="100%">
<tr>
<td id="image_td" width="15%" class="header_image_color_7"><div id="image_img"
class="header_image_7"></div></td>
<td class="header_text"> Rock, Paper or Scissor Game - Train and Classify [Volume 2] </td>
</tr>
</table>
<div id="flex-container">
<div id="diff_level" class="flex-item">
<strong>Difficulty Level:</strong> <span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star checked"></span>
<span class="fa fa-star"></span>
</div>
<div id="tag" class="flex-item-tag">
<span id="tag_list">
<table id="tag_list_table">
<tr>
<td class="shield_left">Tags</td>
<td class="shield_right" id="tags">train_and_classify☁machine-learning☁features☁extraction</td>
</tr>
</table>
</span>
<!-- [OR] Visit https://img.shields.io in order to create a tag badge-->
</div>
</div>
<span class="color4"><strong>Previous Notebooks that are part of "Rock, Paper or Scissor Game - Train and Classify" module</strong></span>
<ul>
<li><a href="classification_game_volume_1.ipynb"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 1] | Experimental Setup <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a></li>
</ul>
<span class="color7"><strong>Following Notebooks that are part of "Rock, Paper or Scissor Game - Train and Classify" module</strong></span>
<ul>
<li><a href="classification_game_volume_3.ipynb"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 3] | Training a Classifier <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a></li>
<li><a href="../Evaluate/classification_game_volume_4.ipynb"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 4] | Performance Evaluation <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a></li>
</ul>
<table width="100%">
<tr>
<td style="text-align:left;font-size:12pt;border-top:dotted 2px #62C3EE">
<span class="color1">☌</span> After the presentation of data acquisition conditions on the previous <a href="classification_game_volume_1.ipynb">Jupyter Notebook <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>, we will follow our Machine Learning Journey by specifying which features will be extracted.
<br>
"Features" are numerical parameters extracted from the training data (in our case, physiological signals acquired while executing gestures of the "Rock, Paper or Scissor" game) that objectively characterize each training example.
A good feature is a parameter with the ability to separate the different classes of our classification system, i.e., a parameter with a characteristic range of values for each available class.
</td>
</tr>
</table>
<hr>
<p style="font-size:20pt;color:#62C3EE;padding-bottom:5pt">Starting Point (Setup)</p>
<strong>List of Available Classes:</strong>
<br>
<ol start="0">
<li><span class="color1"><strong>"No Action"</strong></span> [When the hand is relaxed]</li>
<li><span class="color4"><strong>"Paper"</strong></span> [All fingers are extended]</li>
<li><span class="color7"><strong>"Rock"</strong></span> [All fingers are flexed]</li>
<li><span class="color13"><strong>"Scissor"</strong></span> [Forefinger and middle finger are extended and the remaining ones are flexed]</li>
</ol>
<table align="center">
<tr>
<td height="200px">
<img src="../../images/train_and_classify/classification_game_volume_2/classification_game_paper.png" style="display:block;height:100%">
</td>
<td height="200px">
<img src="../../images/train_and_classify/classification_game_volume_2/classification_game_stone.png" style="display:block;height:100%">
</td>
<td height="200px">
<img src="../../images/train_and_classify/classification_game_volume_2/classification_game_scissor.png" style="display:block;height:100%">
</td>
</tr>
<tr>
<td style="text-align:center">
<strong>Paper</strong>
</td>
<td style="text-align:center">
<strong>Rock</strong>
</td>
<td style="text-align:center">
<strong>Scissor</strong>
</td>
</tr>
</table>
<strong>Acquired Data:</strong>
<br>
<ul>
<li>Electromyography (EMG) | 2 muscles | Adductor pollicis and Flexor digitorum superficialis</li>
<li>Accelerometer (ACC) | 1 axis | Sensor parallel to the thumb nail (Axis perpendicular)</li>
</ul>
<p style="font-size:20pt;color:#62C3EE;padding-bottom:5pt">Protocol/Feature Extraction</p>
<strong>Extracted Features</strong>
<ul>
<li><span style="color:#E84D0E"><strong>[From] EMG signal</strong></span></li>
<ul>
<li>Standard Deviation ☆</li>
<li>Maximum sampled value ☝</li>
<li><a href="https://en.wikipedia.org/wiki/Zero-crossing_rate">Zero-Crossing Rate</a> ☌</li>
<li>Standard Deviation of the absolute signal ☇</li>
</ul>
<li><span style="color:#FDC400"><strong>[From] ACC signal</strong></span></li>
<ul>
<li>Average Value ☉</li>
<li>Standard Deviation ☆</li>
<li>Maximum sampled value ☝</li>
<li><a href="https://en.wikipedia.org/wiki/Zero-crossing_rate">Zero-Crossing Rate</a> ☌</li>
<li><a href="https://en.wikipedia.org/wiki/Slope">Slope of the regression curve</a> ☍</li>
</ul>
</ul>
<strong>Formal definition of parameters</strong>
<br>
☝ | Maximum Sample Value of a set of elements is equal to the last element of the sorted set
☉ | $\mu = \frac{1}{N}\sum_{i=1}^N (sample_i)$
☆ | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N(sample_i - \mu_{signal})^2}$
☌ | $zcr = \frac{1}{N - 1}\sum_{i=1}^{N-1}bin(i)$
☇ | $\sigma_{abs} = \sqrt{\frac{1}{N}\sum_{i=1}^N(|sample_i| - \mu_{signal_{abs}})^2}$
☍ | $m = \frac{\Delta signal}{\Delta t}$
... where $N$ is the number of acquired samples (that form the signal), $sample_i$ the value of sample number $i$, $signal_{abs}$ the absolute signal, $\Delta signal$ the difference between the y coordinates of two points of the regression curve and $\Delta t$ the difference between the x (time) coordinates of the same two points.
... and
$bin(i)$ a binary function defined as:
$bin(i) = \begin{cases} 1, & \mbox{if } signal_i \times signal_{i-1} \leq 0 \\ 0, & \mbox{if } signal_i \times signal_{i-1}>0 \end{cases}$
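The formal definitions above map directly onto NumPy. A minimal sketch on a made-up signal (using `numpy.polyfit` for the regression slope here, whereas the notebook itself uses `scipy.stats.linregress`):

```python
import numpy as np

signal = np.array([0.5, -0.2, 0.8, -0.1, 0.3, 0.9])
n = len(signal)

mu = np.mean(signal)                  # average value
sigma = np.std(signal)                # standard deviation
max_value = np.max(signal)            # maximum sampled value
sigma_abs = np.std(np.abs(signal))    # std of the absolute signal
# Zero-crossing rate: fraction of consecutive sample pairs that change sign.
zcr = sum(1 for i in range(1, n) if signal[i] * signal[i - 1] <= 0) / (n - 1)
# Slope of the regression line (degree-1 polynomial fit over sample index).
slope = np.polyfit(range(n), signal, 1)[0]

print(zcr)   # 4 sign changes over 5 pairs -> 0.8
```

Each of these scalars becomes one entry of the training array built in the next step.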
<hr>
<p class="steps">0 - Import of the needed packages for a correct execution of the current <span class="color4">Jupyter Notebook</span></p>
```
# Package that enables programmatic interaction with the operating system folder hierarchy.
from os import listdir
# Package used to clone a dictionary.
from copy import deepcopy
# Functions intended to extract some statistical parameters.
from numpy import max, std, average, sum, absolute
# With the following import we will be able to extract the linear regression parameters after
# fitting experimental points to the model.
from scipy.stats import linregress
# biosignalsnotebooks own package that supports some functionalities used on the Jupyter Notebooks.
import biosignalsnotebooks as bsnb
```
<p class="steps">1 - Loading of all signals that form our training samples (storing them inside a dictionary)</p>
The acquired signals are stored inside a folder that can be accessed through the relative path <span class="color7">"../../signal_samples/classification_game/data"</span>
<p class="steps">1.1 - Identification of the list of files/examples</p>
```
# Transposition of data from signal files to a Python dictionary.
relative_path = "../../signal_samples/classification_game"
data_folder = "data"
# List of files (each file is a training example).
list_examples = listdir(relative_path + "/" + data_folder)
print(list_examples)
```
The first digit of the filename identifies the class to which the training example belongs and the second digit is the trial number <span class="color1">(<i>&lt;class&gt;_&lt;trial&gt;.txt</i>)</span>
<p class="steps">1.2 - Access the content of each file and store it on the respective dictionary entry</p>
```
# Initialization of dictionary.
signal_dict = {}
# Scrolling through each entry in the list.
for example in list_examples:
if ".txt" in example: # Read only .txt files.
# Get the class to which the training example under analysis belong.
example_class = example.split("_")[0]
# Get the trial number of the training example under analysis.
example_trial = example.split("_")[1].split(".")[0]
# Creation of a new "class" entry if it does not exist.
if example_class not in signal_dict.keys():
signal_dict[example_class] = {}
# Load data.
complete_data = bsnb.load(relative_path + "/" + data_folder + "/" + example)
# Store data in the dictionary.
signal_dict[example_class][example_trial] = complete_data
```
<p class="steps">1.3 - Definition of the content of each channel</p>
```
# Channels (CH1 Flexor digitorum superficialis | CH2 Adductor pollicis | CH3 Accelerometer axis Z).
emg_flexor = "CH1"
emg_adductor = "CH2"
acc_z = "CH3"
```
<p class="steps">2 - Extraction of features according to the signal under analysis</p>
The extracted values of each feature will be stored in a dictionary with the same hierarchical structure as "signal_dict"
```
# Clone "signal_dict".
features_dict = deepcopy(signal_dict)
# Navigate through "signal_dict" hierarchy.
list_classes = signal_dict.keys()
for class_i in list_classes:
list_trials = signal_dict[class_i].keys()
for trial in list_trials:
# Initialise "features_dict" entry content.
features_dict[class_i][trial] = []
for chn in [emg_flexor, emg_adductor, acc_z]:
# Temporary storage of signal inside a reusable variable.
signal = signal_dict[class_i][trial][chn]
            # Start the feature extraction procedure according to the channel under analysis.
if chn == emg_flexor or chn == emg_adductor: # EMG Features.
# Converted signal (taking into consideration that our device is a "biosignalsplux", the resolution is
# equal to 16 bits and the output unit should be in "mV").
signal = bsnb.raw_to_phy("EMG", device="biosignalsplux", raw_signal=signal, resolution=16, option="mV")
# Standard Deviation.
features_dict[class_i][trial] += [std(signal)]
# Maximum Value.
features_dict[class_i][trial] += [max(signal)]
# Zero-Crossing Rate.
features_dict[class_i][trial] += [sum([1 for i in range(1, len(signal))
if signal[i]*signal[i-1] <= 0]) / (len(signal) - 1)]
# Standard Deviation of the absolute signal.
features_dict[class_i][trial] += [std(absolute(signal))]
else: # ACC Features.
# Converted signal (taking into consideration that our device is a "biosignalsplux", the resolution is
# equal to 16 bits and the output unit should be in "g").
signal = bsnb.raw_to_phy("ACC", device="biosignalsplux", raw_signal=signal, resolution=16, option="g")
# Average value.
features_dict[class_i][trial] += [average(signal)]
# Standard Deviation.
features_dict[class_i][trial] += [std(signal)]
# Maximum Value.
features_dict[class_i][trial] += [max(signal)]
# Zero-Crossing Rate.
features_dict[class_i][trial] += [sum([1 for i in range(1, len(signal))
if signal[i]*signal[i-1] <= 0]) / (len(signal) - 1)]
# Slope of the regression curve.
x_axis = range(0, len(signal))
features_dict[class_i][trial] += [linregress(x_axis, signal)[0]]
```
Each training array has the following structure/content:
<br>
\[$\sigma_{emg\,flexor}$, $max_{emg\,flexor}$, $zcr_{emg\,flexor}$, $\sigma_{emg\,flexor}^{abs}$, $\sigma_{emg\,adductor}$, $max_{emg\,adductor}$, $zcr_{emg\,adductor}$, $\sigma_{emg\,adductor}^{abs}$, $\mu_{acc\,z}$, $\sigma_{acc\,z}$, $max_{acc\,z}$, $zcr_{acc\,z}$, $m_{acc\,z}$\]
<p class="steps">3 - Storage of the content inside the filled "features_dict" to an external file (<a href="https://fileinfo.com/extension/json">.json <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>)</p>
With this procedure it is possible to keep a "permanent" record of the results produced during feature extraction, reusable in the future by simply reading the file (without the need to reprocess the data).
```
# Package dedicated to the manipulation of json files.
from json import dump
filename = "classification_game_features.json"
# Generation of .json file in our previously mentioned "relative_path".
# [Generation of new file]
with open(relative_path + "/features/" + filename, 'w') as file:
dump(features_dict, file)
```
We have reached the end of the second volume of the "Classification Game". Now all the features of the training examples are in our possession.
If you are feeling your interest increasing, please jump to the next <a href="../Train_and_Classify/classification_game_volume_3.ipynb">volume <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>
<strong><span class="color7">We hope that you have enjoyed this guide. </span><span class="color2">biosignalsnotebooks</span><span class="color4"> is an environment in continuous expansion, so don't stop your journey and learn more with the remaining <a href="../MainFiles/biosignalsnotebooks.ipynb">Notebooks <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a></span></strong> !
<span class="color6">**Auxiliary Code Segment (should not be replicated by
the user)**</span>
```
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()
%%html
<script>
// AUTORUN ALL CELLS ON NOTEBOOK-LOAD!
require(
['base/js/namespace', 'jquery'],
function(jupyter, $) {
$(jupyter.events).on("kernel_ready.Kernel", function () {
console.log("Auto-running all cells-below...");
jupyter.actions.call('jupyter-notebook:run-all-cells-below');
jupyter.actions.call('jupyter-notebook:save-notebook');
});
}
);
</script>
```
```
import pandas as pd
import os
import time
import re
import numpy as np
import json
from urllib.parse import urlparse, urljoin
run_root = "/home/icejm/Code/OpenWPM/stockdp/page_ana/"
# gather all potent/black links
count = 0
for root, dirs, files in os.walk(os.path.abspath('.')):
if len(dirs)==0:
for i in files:
if i.endswith(".json") and i.startswith("potent"):
count += 1
file = ((root+'/'+i).split(run_root))[1]
web_name = root.split('/')[-1]
with open(file,"r") as f:
text = f.read()
if i.startswith("potent"):
tmp_data = json.loads(text)
for each_page in tmp_data:
with open("potentlist.csv", "a+") as potent_f:
for j in tmp_data[each_page]:
j = j.replace(",", "/").replace("http://", "").replace("https://", "")
write_data = j+','+web_name+'\n'
potent_f.writelines(write_data)
print(count)
potent_dp_links = pd.read_csv("potentlist.csv", names=["url", "website"])
print(potent_dp_links.shape)
potent_dp_links.head()
def getlistnum(li):
li = list(li)
set1 = set(li)
dict1 = {}
for item in set1:
dict1.update({item:li.count(item)})
return dict1
getlistnum(potent_dp_links['website'])
x = "data.eastmoney.com/report/zw_stock.jshtml?encodeUrl=zXF5Zl6XRyYdSx1spWVTCqDhUpdvWCPeqRcR2Jjm0qE="
path = urlparse(x).path + urlparse(x).params + urlparse(x).query + urlparse(x).fragment
print(path)
print(len(re.findall("([a-z])",path)))
print(len(re.findall("([A-Z])",path)))
print(len(re.findall("([/_\.\%&#\-\?])",x)))
```
# Build Features
## 1. Basic Features
Length of the URL path, plus counts of signs, uppercase characters, lowercase characters, and digits.
```
def build_features(df):
processed_features = df[["url"]].copy()
processed_features["path"] = processed_features["url"].map(
lambda x: urlparse(x).path + urlparse(x).params + urlparse(x).query + urlparse(x).fragment)
processed_features["path_len"] = processed_features["path"].map(
lambda x: len(x))
    processed_features["num_sign"] = processed_features["url"].map(
        lambda x: len(re.findall(r"([/_\.\%&#\-\?])", x)))
processed_features["num_upper_char"] = processed_features["path"].map(
lambda x: len(re.findall("([A-Z])",x)))
processed_features["num_lower_char"] = processed_features["path"].map(
lambda x: len(re.findall("([a-z])",x)))
    processed_features["num_number"] = processed_features["path"].map(
        lambda x: len(re.findall(r"(\d)", x)))
processed_features.drop(['url', 'path'], axis=1, inplace=True)
return processed_features
feature = build_features(potent_dp_links)
data = pd.concat([potent_dp_links, feature], axis = 1, ignore_index = False)
data.head()
```
## 2. Levenshtein
Build a series of distances between each URL and its website's URL:
1. Edit Distance
2. Levenshtein Ratio
3. Jaro/Jaro-Winkler Distance (the results turn out to be the same)
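Before applying these to the DataFrame, it may help to see what plain edit distance actually computes. A pure-Python sketch of the classic dynamic-programming recurrence (the `Levenshtein` package computes the same quantity in C):

```python
def edit_distance(a, b):
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (free on match)
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```

The ratio and Jaro-Winkler variants normalize this kind of distance to [0, 1], which is why they are useful as features alongside the raw count.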
```
import Levenshtein
def build_leven_features(df):
processed_features = []
for index, row in df.iterrows():
str1 = row['url']
str2 = row['website']
row['edit-dis'] = Levenshtein.distance(str1, str2)
row['leven-ratio'] = Levenshtein.ratio(str1, str2)
row['jw-dis'] = Levenshtein.jaro_winkler(str1, str2)
processed_features.append(row)
back_data = pd.DataFrame(processed_features).drop(['url', 'website'], axis=1)
return back_data
leven_features = build_leven_features(potent_dp_links)
data = pd.concat([data, leven_features], axis = 1, ignore_index = False)
data.to_csv("featured_data.csv", index=False)
data_features = data.drop(['url', 'website'], axis=1)
data = pd.read_csv("featured_data.csv")
potent_dp_links = pd.read_csv("potentlist.csv", names=["url", "website"])
data_features = data.drop(['url', 'website'], axis=1)
import seaborn as sns
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from hdbscan import HDBSCAN
%matplotlib inline
data_features.describe()
data_features = data_features[['path_len','num_sign','num_upper_char','num_lower_char','num_number','edit-dis','leven-ratio','jw-dis']]
dfData = abs(pd.DataFrame(data_features).corr())
plt.subplots(figsize=(12, 9))  # set the figure size
sns.heatmap(dfData, annot=True, vmax=1, square=True, cmap="Blues")
data_features = normalize(data_features, axis=1)
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
# pca = PCA(tol=10)
pca_data = pca.fit_transform(data_features)
print('Matrix of PCs: %s' % str(pca_data.shape))
print('Data matrix: %s' % str(data_features.shape))
print('%d singular values: %s' % (pca.singular_values_.shape[0], str(pca.singular_values_)))
```
# Clustering
## DBSCAN
```
# test dbscan
from sklearn.cluster import DBSCAN
from sklearn.utils import parallel_backend
with parallel_backend('threading'):
clusterer = DBSCAN(eps=0.005, min_samples=5, n_jobs=10, metric='euclidean')
cluster_labels = clusterer.fit(data_features)
potent_dp_links['cluster_dbscan'] = pd.Series(cluster_labels.labels_).values
print('Number of clusters: %d' % len(set(cluster_labels.labels_)))
with parallel_backend('threading'):
clusterer = DBSCAN(eps=0.005, min_samples=5, n_jobs=10, metric='euclidean')
cluster_labels = clusterer.fit(pca_data)
potent_dp_links['pca_cluster_dbscan'] = pd.Series(cluster_labels.labels_).values
print('Number of clusters: %d' % len(set(cluster_labels.labels_)))
```
## HDBSCAN
```
clusterer = HDBSCAN(min_cluster_size=5, metric='euclidean')
cluster_labels = clusterer.fit_predict(data_features)
pca_cluster_labels = clusterer.fit_predict(pca_data)
potent_dp_links['cluster_hdbscan'] = pd.Series(cluster_labels).values
potent_dp_links['pca_cluster_hdbscan'] = pd.Series(pca_cluster_labels).values
print('HDBSCAN without PCA: \n Number of clusters: %s' % len(potent_dp_links['cluster_hdbscan'].value_counts()))
# print('cluster_hdbscan.value_counts(): \n %s' % potent_dp_links['cluster_hdbscan'].value_counts().to_string())
print('HDBSCAN with PCA: \n Number of clusters: %s' % len(potent_dp_links['pca_cluster_hdbscan'].value_counts()))
# print('cluster_hdbscan.value_counts(): \n %s' % potent_dp_links['cluster_hdbscan'].value_counts().to_string())
hdb = HDBSCAN(min_cluster_size=5, min_samples=4, cluster_selection_epsilon=0.001, metric='euclidean')  # avoid shadowing the hdbscan module name
db = DBSCAN(eps=0.001, min_samples=4, metric='euclidean')
hdbscan_labels = hdb.fit_predict(data_features)
pca_hdbscan_labels = hdb.fit_predict(pca_data)
dbscan_labels = db.fit_predict(data_features)
pca_dbscan_labels = db.fit_predict(pca_data)
potent_dp_links['cluster_hdbscan'] = pd.Series(hdbscan_labels).values
potent_dp_links['pca_cluster_hdbscan'] = pd.Series(pca_hdbscan_labels).values
potent_dp_links['cluster_dbscan'] = pd.Series(dbscan_labels).values
potent_dp_links['pca_cluster_dbscan'] = pd.Series(pca_dbscan_labels).values
print('HDBSCAN without PCA: \n Number of clusters: %s' % len(potent_dp_links['cluster_hdbscan'].value_counts()))
print('HDBSCAN with PCA: \n Number of clusters: %s' % len(potent_dp_links['pca_cluster_hdbscan'].value_counts()))
print('DBSCAN without PCA: \n Number of clusters: %s' % len(potent_dp_links['cluster_dbscan'].value_counts()))
print('DBSCAN with PCA: \n Number of clusters: %s' % len(potent_dp_links['pca_cluster_dbscan'].value_counts()))
potent_dp_links.head()
# Silhouette Coefficient
from sklearn import metrics
s1 = metrics.silhouette_score(data_features, potent_dp_links['cluster_hdbscan'], metric='euclidean')
s2 = metrics.silhouette_score(pca_data, potent_dp_links['pca_cluster_hdbscan'], metric='euclidean')
s3 = metrics.silhouette_score(data_features, potent_dp_links['cluster_dbscan'], metric='euclidean')
s4 = metrics.silhouette_score(pca_data, potent_dp_links['pca_cluster_dbscan'], metric='euclidean')
print('Silhouette score (HDBSCAN): %.5f' % s1)
print('Silhouette score (HDBSCAN + PCA): %.5f' % s2)
print('Silhouette score (DBSCAN): %.5f' % s3)
print('Silhouette score (DBSCAN + PCA): %.5f' % s4)
# Calinski-Harabasz Index
from sklearn import metrics
chi1 = metrics.calinski_harabasz_score(data_features, potent_dp_links['cluster_hdbscan'])
chi2 = metrics.calinski_harabasz_score(pca_data, potent_dp_links['pca_cluster_hdbscan'])
chi3 = metrics.calinski_harabasz_score(data_features, potent_dp_links['cluster_dbscan'])
chi4 = metrics.calinski_harabasz_score(pca_data, potent_dp_links['pca_cluster_dbscan'])
print('Calinski-Harabasz Index (HDBSCAN): %.3f' % chi1)
print('Calinski-Harabasz Index (HDBSCAN + PCA): %.3f' % chi2)
print('Calinski-Harabasz Index (DBSCAN): %.3f' % chi3)
print('Calinski-Harabasz Index (DBSCAN + PCA): %.3f' % chi4)
# Davies-Bouldin Index
from sklearn.metrics import davies_bouldin_score
dbi1 = davies_bouldin_score(data_features, potent_dp_links['cluster_hdbscan'])
print('Davies-Bouldin Index (HDBSCAN): %.5f' % dbi1)
dbi2 = davies_bouldin_score(pca_data, potent_dp_links['pca_cluster_hdbscan'])
print('Davies-Bouldin Index (HDBSCAN + PCA): %.5f' % dbi2)
dbi3 = davies_bouldin_score(data_features, potent_dp_links['cluster_dbscan'])
print('Davies-Bouldin Index (DBSCAN): %.5f' % dbi3)
dbi4 = davies_bouldin_score(pca_data, potent_dp_links['pca_cluster_dbscan'])
print('Davies-Bouldin Index (DBSCAN + PCA): %.5f' % dbi4)
para_min_cluster_size = [2,3,4,5,6,7,8,9,10]
para_min_samples = [3,4,5]
cluster_selection_epsilon = [0.1,0.01,0.001, 0.0001, 0]
for i in cluster_selection_epsilon:
    # sweep the epsilon value (the original hardcoded 0.001 and referenced the shadowed hdbscan name)
    clusterer = HDBSCAN(min_cluster_size=5, min_samples=4, cluster_selection_epsilon=i, metric='euclidean')
pca_cluster_labels = clusterer.fit_predict(pca_data)
s4 = metrics.silhouette_score(pca_data, pca_cluster_labels, metric='euclidean')
print('Number of clusters: %s, Silhouette score: %.5f' %
(len(pd.DataFrame(pca_cluster_labels).value_counts()), s4))
potent_dp_links.to_csv("potent_dp_links_cluster.csv")
```
# 3D Tic-Tac-Toe
## Objective and Prerequisites
Try this logic programming example to learn how to solve the problem of arranging X’s and O’s on a three-dimensional Tic-Tac-Toe board so as to minimize the number of completed lines or diagonals. This example will show you how a binary programming model can be used to capture simple logical constraints.
This is example 17 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 272 and 327 – 328.
This modeling example is at the beginning level. We assume that you have some familiarity with Python and the Gurobi Python API, but you can hopefully pick up any missing concepts from the example.
**Download the Repository** <br />
You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip).
**Gurobi License** <br />
In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an
[evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_3D-Tic-Tac-Toe_COM_EVAL_GITHUB_&utm_term=logic-programing&utm_content=C_JPM)
as a *commercial user*, or download a
[free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_3D-Tic-Tac-Toe_ACADEMIC_EVAL_GITHUB_&utm_term=logic-programing&utm_content=C_JPM)
as an *academic user*.
---
## Problem Description
Given a 3-D tic-tac-toe board, where players take turns placing $X$'s and $O$'s, the game typically ends when one player completes a line or diagonal; that is, when they manage to place their symbols in three cells that form a line or diagonal in the grid. The twist that is tackled here is that the game continues until every cell contains a symbol, and the goal is to arrange the symbols to minimize the number of completed lines or diagonals.
---
## Model Formulation
### Decision Variables
$\text{isX}_{ijk} \in \{0,1\}$: Does cell $(i,j,k)$ contain an $X$ ($\text{isX}=1$) or an $O$ ($\text{isX}=0$)?
$\text{isLine}_{l} \in \{0,1\}$: Does line/diagonal $l$ contain 3 of the same symbol?
### Objective Function
- **Lines**: Minimize the number of completed lines or diagonals
\begin{equation}
\text{Minimize} \quad Z = \sum_{l \in \text{Lines}}\text{isLine}_l
\end{equation}
### Constraints
- **Take turns**: The board must contain 14 $X$'s and 13 $O$'s ($X$ goes first).
\begin{equation}
\sum_{ijk} \text{isX}_{ijk} = 14
\end{equation}
- **Lines**: For a line to not be complete, one cell must have a different value. The simple observation here is that the sum of the corresponding 3 binary variables would be 3 if they are all $X$ and 0 if they were all $O$. We need to forbid those outcomes whenever $isLine_l == 0$. Note that $l_0$ is the first cell in line $l$, $l_1$ is the second, and $l_2$ is the third.
\begin{equation}
\text{isLine}_l = 0 \implies \text{isX}[l_0] + \text{isX}[l_1] + \text{isX}[l_2] \geq 1 \quad \forall l \in \text{Lines}
\end{equation}
\begin{equation}
\text{isLine}_l = 0 \implies \text{isX}[l_0] + \text{isX}[l_1] + \text{isX}[l_2] \leq 2 \quad \forall l \in \text{Lines}
\end{equation}
---
## Python Implementation
We import the Gurobi Python Module.
```
import gurobipy as gp
from gurobipy import GRB
# tested with Python 3.7.0 & Gurobi 9.0
```
## Model Deployment
We first create a list of all possible lines and diagonals in a 3-D tic-tac-toe board. Each is represented as a Python tuple with 3 entries, where each entry gives the (i,j,k) position of the corresponding cell. There are 49 in total.
```
lines = []
size = 3
for i in range(size):
for j in range(size):
for k in range(size):
if i == 0:
lines.append(((0,j,k), (1,j,k), (2,j,k)))
if j == 0:
lines.append(((i,0,k), (i,1,k), (i,2,k)))
if k == 0:
lines.append(((i,j,0), (i,j,1), (i,j,2)))
if i == 0 and j == 0:
lines.append(((0,0,k), (1,1,k), (2,2,k)))
if i == 0 and j == 2:
lines.append(((0,2,k), (1,1,k), (2,0,k)))
if i == 0 and k == 0:
lines.append(((0,j,0), (1,j,1), (2,j,2)))
if i == 0 and k == 2:
lines.append(((0,j,2), (1,j,1), (2,j,0)))
if j == 0 and k == 0:
lines.append(((i,0,0), (i,1,1), (i,2,2)))
if j == 0 and k == 2:
lines.append(((i,0,2), (i,1,1), (i,2,0)))
lines.append(((0,0,0), (1,1,1), (2,2,2)))
lines.append(((2,0,0), (1,1,1), (0,2,2)))
lines.append(((0,2,0), (1,1,1), (2,0,2)))
lines.append(((0,0,2), (1,1,1), (2,2,0)))
```
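A quick sanity check on that count of 49: a standard counting argument gives $((n+2)^d - n^d)/2$ winning lines on an $n^d$ board, which for $n = 3$, $d = 3$ is:

```python
n, d = 3, 3
# each line extends to a pair of "border" cells of the padded (n+2)^d grid,
# and each line is counted once per endpoint, hence the division by 2
num_lines = ((n + 2) ** d - n ** d) // 2
print(num_lines)  # 49
```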
Next we create our model and our decision variables.
```
model = gp.Model('Tic_Tac_Toe')
isX = model.addVars(size, size, size, vtype=GRB.BINARY, name="isX")
isLine = model.addVars(lines, vtype=GRB.BINARY, name="isLine")
```
Now we create the constraints. The first states the board will contain 14 X's (and 13 O's):
```
x14 = model.addConstr(isX.sum() == 14)
```
The remaining constraints establish the relationship between the $isLine[]$ and $isX[]$ variables. A line is complete if all three cells contain the same symbol. In our model, this would correspond to three associated $isX[]$ variables summing to either 3 (all $X$) or 0 (all $O$). For our purposes, it is enough to enforce the condition that if $isLine[] = 0$, the sum must be strictly between these two values.
```
for line in lines:
model.addGenConstrIndicator(isLine[line], False, isX[line[0]] + isX[line[1]] + isX[line[2]] >= 1)
model.addGenConstrIndicator(isLine[line], False, isX[line[0]] + isX[line[1]] + isX[line[2]] <= 2)
```
Finally, we set the optimization objective, which is to minimize the number of completed lines.
```
model.setObjective(isLine.sum())
```
Now we perform the optimization.
```
model.optimize()
```
---
## Result
The optimal solution completes only 4 lines or diagonals. We can visualize the result using matplotlib (we've peeled off the third dimension of the 3-D tic-tac-toe board).
```
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1, 3, figsize=(10,5))
for i in range(3):
ax[i].grid()
ax[i].set_xticks(range(4))
ax[i].set_yticks(range(4))
ax[i].tick_params(labelleft=False, labelbottom=False)
for cell in isX.keys():
if isX[cell].x > 0.5:
ax[cell[0]].add_patch(plt.Rectangle((cell[1],cell[2]), 1, 1))
plt.show()
```
---
## References
H. Paul Williams, Model Building in Mathematical Programming, fifth edition.
Copyright © 2020 Gurobi Optimization, LLC
# Trust Scores applied to MNIST
It is important to know when a machine learning classifier's predictions can be trusted. Relying on the classifier's (uncalibrated) prediction probabilities is not optimal and can be improved upon. *Trust scores* measure the agreement between the classifier and a modified nearest neighbor classifier on the test set. The trust score is the ratio between the distance of the test instance to the nearest class different from the predicted class and the distance to the predicted class. Higher scores correspond to more trustworthy predictions. A score of 1 would mean that the distance to the predicted class is the same as to another class.
The original paper on which the algorithm is based is called [To Trust Or Not To Trust A Classifier](https://arxiv.org/abs/1805.11783). Our implementation borrows heavily from https://github.com/google/TrustScore, as does the example notebook.
Trust scores work best for low to medium dimensional feature spaces. This notebook illustrates how you can **apply trust scores to high dimensional** data like images by adding an additional pre-processing step in the form of an [auto-encoder](https://en.wikipedia.org/wiki/Autoencoder) to reduce the dimensionality. Other dimension reduction techniques like PCA can be used as well.
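As a toy illustration of the ratio described above, here is the computation on made-up 2-D points (illustration only, not the alibi implementation):

```python
import numpy as np

# Hypothetical training points for the predicted class and the closest other class
pred_class = np.array([[0.0, 0.0], [0.5, 0.0]])
other_class = np.array([[5.0, 5.0], [5.5, 5.0]])
x = np.array([0.1, 0.1])  # test instance, predicted to belong to pred_class

d_pred = np.linalg.norm(pred_class - x, axis=1).min()    # distance to predicted class
d_other = np.linalg.norm(other_class - x, axis=1).min()  # distance to closest other class
trust_score = d_other / d_pred
print(trust_score)  # far above 1: the instance sits well inside the predicted class
```

A score near (or below) 1 would instead signal that the instance is at least as close to another class as to the one the model predicted.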
```
import keras
from keras import backend as K
from keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D
from keras.models import Model
from keras.utils import to_categorical
import matplotlib
%matplotlib inline
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from alibi.confidence import TrustScore
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
print('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)
plt.gray()
plt.imshow(x_test[0])
```
Prepare data: scale, reshape and categorize
```
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train = np.reshape(x_train, x_train.shape + (1,))
x_test = np.reshape(x_test, x_test.shape + (1,))
print('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
print('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)
xmin, xmax = -.5, .5
x_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin
x_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin
```
## Define and train model
For this example we are not interested in optimizing model performance so a simple softmax classifier will do:
```
def sc_model():
x_in = Input(shape=(28, 28, 1))
x = Flatten()(x_in)
x_out = Dense(10, activation='softmax')(x)
sc = Model(inputs=x_in, outputs=x_out)
sc.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
return sc
sc = sc_model()
sc.summary()
sc.fit(x_train, y_train, batch_size=128, epochs=5, verbose=0)
```
Evaluate the model on the test set:
```
score = sc.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: ', score[1])
```
## Define and train auto-encoder
```
def ae_model():
# encoder
x_in = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(4, (3, 3), activation=None, padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
encoder = Model(x_in, encoded)
# decoder
dec_in = Input(shape=(4, 4, 4))
x = Conv2D(4, (3, 3), activation='relu', padding='same')(dec_in)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)
decoder = Model(dec_in, decoded)
# autoencoder = encoder + decoder
x_out = decoder(encoder(x_in))
autoencoder = Model(x_in, x_out)
autoencoder.compile(optimizer='adam', loss='mse')
return autoencoder, encoder, decoder
ae, enc, dec = ae_model()
ae.summary()
ae.fit(x_train, x_train, batch_size=128, epochs=8, validation_data=(x_test, x_test), verbose=0)
```
## Calculate Trust Scores
Initialize trust scores:
```
ts = TrustScore()
```
The key is to **fit and calculate the trust scores on the encoded instances**. The encoded data still needs to be reshaped from (60000, 4, 4, 4) to (60000, 64) to comply with the k-d tree format. This is handled internally:
```
x_train_enc = enc.predict(x_train)
ts.fit(x_train_enc, y_train, classes=10) # 10 classes present in MNIST
```
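The flattening mentioned above is just a reshape; a minimal sketch of what happens internally, with the shapes taken from the text:

```python
import numpy as np

x_enc = np.zeros((60000, 4, 4, 4))      # encoded training set (assumed shape)
x_flat = x_enc.reshape(len(x_enc), -1)  # collapse all trailing dims into one
print(x_flat.shape)                     # (60000, 64)
```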
We can now calculate the trust scores and closest not predicted classes of the predictions on the test set, using the distance to the 5th nearest neighbor in each class:
```
x_test_enc = enc.predict(x_test)
y_pred = sc.predict(x_test)
score, closest_class = ts.score(x_test_enc, y_pred, k=5)
```
Let's inspect which predictions have low and high trust scores:
```
n = 5
idx_min, idx_max = np.argsort(score)[:n], np.argsort(score)[-n:]
score_min, score_max = score[idx_min], score[idx_max]
closest_min, closest_max = closest_class[idx_min], closest_class[idx_max]
pred_min, pred_max = np.argmax(y_pred[idx_min], axis=1), np.argmax(y_pred[idx_max], axis=1)
imgs_min, imgs_max = x_test[idx_min], x_test[idx_max]
label_min, label_max = np.argmax(y_test[idx_min], axis=1), np.argmax(y_test[idx_max], axis=1)
```
### Low Trust Scores
The image below makes clear that the low trust scores correspond to misclassified images. Because the trust scores are significantly below 1, they correctly indicate that each image is closer to a class other than the predicted one, and they identify that class.
```
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(imgs_min[i].reshape(28, 28))
plt.title('Model prediction: {} \n Label: {} \n Trust score: {:.3f}' \
'\n Closest other class: {}'.format(pred_min[i], label_min[i], score_min[i], closest_min[i]))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
### High Trust Scores
The high trust scores, on the other hand, are all very clear 1's:
```
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(imgs_max[i].reshape(28, 28))
plt.title('Model prediction: {} \n Label: {} \n Trust score: {:.3f}'.format(pred_max[i], label_max[i], score_max[i]))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
## Comparison of Trust Scores with model prediction probabilities
Let’s compare the prediction probabilities from the classifier with the trust scores for each prediction by checking whether trust scores are better than the model’s prediction probabilities at identifying correctly classified examples.
First we need to set up a couple of helper functions.
* Define a function that handles model training and predictions:
```
def run_sc(X_train, y_train, X_test):
clf = sc_model()
clf.fit(X_train, y_train, batch_size=128, epochs=5, verbose=0)
y_pred_proba = clf.predict(X_test)
y_pred = np.argmax(y_pred_proba, axis=1)
probas = y_pred_proba[range(len(y_pred)), y_pred] # probabilities of predicted class
return y_pred, probas
```
* Define the function that generates the precision plots:
```
def plot_precision_curve(plot_title,
percentiles,
labels,
final_tp,
final_stderr,
final_misclassification,
colors = ['blue', 'darkorange', 'brown', 'red', 'purple']):
plt.title(plot_title, fontsize=18)
colors = colors + list(cm.rainbow(np.linspace(0, 1, len(final_tp))))
plt.xlabel("Percentile", fontsize=14)
plt.ylabel("Precision", fontsize=14)
for i, label in enumerate(labels):
ls = "--" if ("Model" in label) else "-"
plt.plot(percentiles, final_tp[i], ls, c=colors[i], label=label)
plt.fill_between(percentiles,
final_tp[i] - final_stderr[i],
final_tp[i] + final_stderr[i],
color=colors[i],
alpha=.1)
if 0. in percentiles:
plt.legend(loc="lower right", fontsize=14)
else:
plt.legend(loc="upper left", fontsize=14)
model_acc = 100 * (1 - final_misclassification)
plt.axvline(x=model_acc, linestyle="dotted", color="black")
plt.show()
```
* The function below trains the model on a number of folds, makes predictions, calculates the trust scores, and generates the precision curves to compare the trust scores with the model prediction probabilities:
```
def run_precision_plt(X, y, nfolds, percentiles, run_model, test_size=.2,
plt_title="", plt_names=[], predict_correct=True, classes=10):
def stderr(L):
return np.std(L) / np.sqrt(len(L))
all_tp = [[[] for p in percentiles] for _ in plt_names]
misclassifications = []
mult = 1 if predict_correct else -1
folds = StratifiedShuffleSplit(n_splits=nfolds, test_size=test_size, random_state=0)
for train_idx, test_idx in folds.split(X, y):
# create train and test folds, train model and make predictions
X_train, y_train = X[train_idx, :], y[train_idx, :]
X_test, y_test = X[test_idx, :], y[test_idx, :]
y_pred, probas = run_sc(X_train, y_train, X_test)
# target points are the correctly classified points
y_test_class = np.argmax(y_test, axis=1)
target_points = (np.where(y_pred == y_test_class)[0] if predict_correct else
np.where(y_pred != y_test_class)[0])
final_curves = [probas]
# calculate trust scores
ts = TrustScore()
ts.fit(enc.predict(X_train), y_train, classes=classes)
scores, _ = ts.score(enc.predict(X_test), y_pred, k=5)
final_curves.append(scores) # contains prediction probabilities and trust scores
# check where prediction probabilities and trust scores are above a certain percentage level
for p, perc in enumerate(percentiles):
high_proba = [np.where(mult * curve >= np.percentile(mult * curve, perc))[0] for curve in final_curves]
if 0 in map(len, high_proba):
continue
# calculate fraction of values above percentage level that are correctly (or incorrectly) classified
tp = [len(np.intersect1d(hp, target_points)) / (1. * len(hp)) for hp in high_proba]
for i in range(len(plt_names)):
all_tp[i][p].append(tp[i]) # for each percentile, store fraction of values above cutoff value
misclassifications.append(len(target_points) / (1. * len(X_test)))
# average over folds for each percentile
final_tp = [[] for _ in plt_names]
final_stderr = [[] for _ in plt_names]
for p, perc in enumerate(percentiles):
for i in range(len(plt_names)):
final_tp[i].append(np.mean(all_tp[i][p]))
final_stderr[i].append(stderr(all_tp[i][p]))
for i in range(len(all_tp)):
final_tp[i] = np.array(final_tp[i])
final_stderr[i] = np.array(final_stderr[i])
final_misclassification = np.mean(misclassifications)
# create plot
plot_precision_curve(plt_title, percentiles, plt_names, final_tp, final_stderr, final_misclassification)
```
## Detect correctly classified examples
The x-axis on the plot below shows the percentiles for the model prediction probabilities of the predicted class for each instance and for the trust scores. The y-axis represents the precision for each percentile. For each percentile level, we take the test examples whose trust score is above that percentile level and plot the percentage of those points that were correctly classified by the classifier. We do the same with the classifier’s own model confidence (i.e. softmax probabilities). For example, at percentile level 80, we take the top 20% scoring test examples based on the trust score and plot the percentage of those points that were correctly classified. We also plot the top 20% scoring test examples based on model probabilities and plot the percentage of those that were correctly classified. The vertical dotted line is the error of the classifier. The plots are an average over 2 folds of the dataset with 20% of the data kept for the test set.
The *Trust Score* and *Model Confidence* curves then show that the model precision is typically higher when using the trust scores to rank the predictions compared to the model prediction probabilities.
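For a single fold and a single percentile level, the computation described above boils down to something like this (made-up scores and correctness flags, mirroring the idea rather than the notebook's `run_precision_plt` internals):

```python
import numpy as np

scores = np.array([0.1, 0.4, 0.35, 0.8, 0.95, 0.6, 0.2, 0.7, 0.9, 0.5])
correct = np.array([0, 0, 1, 1, 1, 1, 0, 1, 1, 0], dtype=bool)

perc = 80                         # keep the top 20% of examples by score
cutoff = np.percentile(scores, perc)
top = scores >= cutoff
precision = correct[top].mean()   # fraction of kept examples classified correctly
print(precision)
```

Sweeping `perc` over many levels and averaging over folds yields the curves in the plot.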
```
X = x_train
y = y_train
percentiles = [0 + 0.5 * i for i in range(200)]
nfolds = 2
plt_names = ['Model Confidence', 'Trust Score']
plt_title = 'MNIST -- Softmax Classifier -- Predict Correct'
run_precision_plt(X, y, nfolds, percentiles, run_sc, plt_title=plt_title,
plt_names=plt_names, predict_correct=True)
```
# CSAILVision semantic segmention models
This is a semantic segmentation notebook using an [ADE20K](http://groups.csail.mit.edu/vision/datasets/ADE20K/) pretrained model from the open source project [CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch).
For other deep-learning Colab notebooks, visit [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks).
## Clone repo and install dependencies
```
import os
from os.path import exists, join, basename, splitext
git_repo_url = 'https://github.com/CSAILVision/semantic-segmentation-pytorch.git'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
# clone and install dependencies
!git clone -q $git_repo_url
#!cd $project_name && pip install -q -r requirement.txt
import sys
sys.path.append(project_name)
import time
import matplotlib
import matplotlib.pylab as plt
plt.rcParams["axes.grid"] = False
```
## Download a pretrained model
According to [https://github.com/CSAILVision/semantic-segmentation-pytorch#performance](https://github.com/CSAILVision/semantic-segmentation-pytorch#performance), **UperNet101** was the best performing model. We will use it as the pretrained model:
```
ENCODER_NAME = 'resnet101'
DECODER_NAME = 'upernet'
PRETRAINED_ENCODER_MODEL_URL = 'http://sceneparsing.csail.mit.edu/model/pytorch/baseline-%s-%s/encoder_epoch_50.pth' % (ENCODER_NAME, DECODER_NAME)
PRETRAINED_DECODER_MODEL_URL = 'http://sceneparsing.csail.mit.edu/model/pytorch/baseline-%s-%s/decoder_epoch_50.pth' % (ENCODER_NAME, DECODER_NAME)
pretrained_encoder_file = basename(PRETRAINED_ENCODER_MODEL_URL)
if not exists(pretrained_encoder_file):
!wget -q $PRETRAINED_ENCODER_MODEL_URL
pretrained_decoder_file = basename(PRETRAINED_DECODER_MODEL_URL)
if not exists(pretrained_decoder_file):
!wget -q $PRETRAINED_DECODER_MODEL_URL
```
## Prepare model
Load the pretrained model:
```
from types import SimpleNamespace
import torch
from models import ModelBuilder, SegmentationModule
from dataset import TestDataset
from utils import colorEncode
from scipy.io import loadmat
# options
options = SimpleNamespace(fc_dim=2048,
num_class=150,
imgSize = [300, 400, 500, 600],
imgMaxSize=1000,
padding_constant=8,
segm_downsampling_rate=8)
# create model
builder = ModelBuilder()
net_encoder = builder.build_encoder(arch=ENCODER_NAME, weights=pretrained_encoder_file,
fc_dim=options.fc_dim)
net_decoder = builder.build_decoder(arch=DECODER_NAME, weights=pretrained_decoder_file,
fc_dim=options.fc_dim, num_class=options.num_class, use_softmax=True)
segmentation_module = SegmentationModule(net_encoder, net_decoder, torch.nn.NLLLoss(ignore_index=-1))
segmentation_module = segmentation_module.eval()
torch.set_grad_enabled(False)
if torch.cuda.is_available():
segmentation_module = segmentation_module.cuda()
# test on a given image
def test(test_image_name):
dataset_test = TestDataset([{'fpath_img': test_image_name}], options, max_sample=-1)
batch_data = dataset_test[0]
segSize = (batch_data['img_ori'].shape[0], batch_data['img_ori'].shape[1])
img_resized_list = batch_data['img_data']
scores = torch.zeros(1, options.num_class, segSize[0], segSize[1])
if torch.cuda.is_available():
scores = scores.cuda()
for img in img_resized_list:
feed_dict = batch_data.copy()
feed_dict['img_data'] = img
del feed_dict['img_ori']
del feed_dict['info']
if torch.cuda.is_available():
feed_dict = {k: o.cuda() for k, o in feed_dict.items()}
# forward pass
pred_tmp = segmentation_module(feed_dict, segSize=segSize)
scores = scores + pred_tmp / len(options.imgSize)
_, pred = torch.max(scores, dim=1)
return pred.squeeze(0).cpu().numpy()
```
## Evaluate on a test image
First, download a test image from the internet:
```
IMAGE_URL = 'https://raw.githubusercontent.com/tugstugi/dl-colab-notebooks/master/resources/lidl.jpg'
image_file = basename(IMAGE_URL)
!wget -q -O $image_file $IMAGE_URL
plt.figure(figsize=(10, 5))
plt.imshow(matplotlib.image.imread(image_file))
```
Now, test on the downloaded image:
```
t = time.time()
pred = test(image_file)
print("executed in %.3fs" % (time.time()-t))
pred_color = colorEncode(pred, loadmat(os.path.join(project_name, 'data/color150.mat'))['colors'])
plt.imshow(pred_color)
```
# In-Class Coding Lab: Functions
The goals of this lab are to help you to understand:
- How to use Python's built-in functions in the standard library.
- How to write user-defined functions
- The benefits of user-defined functions to code reuse and simplicity.
- How to create a program to use functions to solve a complex idea
We will demonstrate these through the following example:
## The Credit Card Problem
If you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?
**Example:** Is `5300023581452982` a valid credit card number? If so, is it Visa, MasterCard, Discover, or American Express?
While eventually the card number is validated when you attempt to post a transaction, there are a lot of reasons why you might want to know it's valid before the transaction takes place, the most common being catching an honest key-entry mistake made by your site visitor.
So there are two things we'd like to figure out, for any "potential" card number:
- Who is the issuing network? Visa, MasterCard, Discover or American Express.
- Is the number potentially valid (as opposed to a made-up series of digits)?
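As a preview of where this is headed: validity is commonly tested with the Luhn checksum, and the issuing network can often be guessed from the leading digits (e.g. 51-55 suggests MasterCard). A hedged sketch, not the lab's eventual solution:

```python
def luhn_valid(card_number):
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any double above 9, and check the sum mod 10."""
    total = 0
    for i, ch in enumerate(reversed(card_number)):
        d = int(ch)
        if i % 2 == 1:      # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("5300023581452982"))  # True -- and the 53 prefix suggests MasterCard
```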
### What does the have to do with functions?
If we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name.
**Example:** in `n = int("50")`, the function `int()` takes the string `"50"` as input and converts it to the `int` value `50`, which is then stored in the variable `n`.
When you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command.
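To make the module idea concrete, here is a minimal sketch (the file name `mymodule.py` and the function in it are hypothetical, just for illustration): you save a function into a file, and any other program can then import and reuse it by name.

```python
# mymodule.py -- a hypothetical module: a file holding related functions
def area_of_rectangle(length, width):
    '''Return the area of a length x width rectangle.'''
    return length * width

# Another program could then reuse it with:
#   import mymodule
#   mymodule.area_of_rectangle(3, 4)
print(area_of_rectangle(3, 4))  # 12
```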
## Built-In Functions
Let's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:
```
import math
dir(math)
```
If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:
```
help(math.factorial)
```
It says it's a built-in function, and requires an integer value (which it refers to as x, though that name is arbitrary) as an argument. Let's call the function and see if it works:
```
math.factorial(5) #this is an example of "calling" the function with input 5. The output should be 120
math.factorial(0) # here we call the same function with input 0. The output should be 1.
## Call the factorial function with an input argument of 4. What is the output?
#TODO write code here.
math.factorial(4) # The output should be 24
```
## Using functions to print awesome things in Jupyter
Up until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.
For example this prints Hello in Heading 1.
```
from IPython.display import display, HTML
print("Exciting:")
display(HTML("<h1>Hello</h1>"))
print("Boring:")
print("Hello")
```
Let's keep the example going by writing two of our own functions to print a title and print text as normal, respectively.
Execute this code:
```
def print_title(text):
'''
This prints text to IPython.display as H1
'''
return display(HTML("<H1>" + text + "</H1>"))
def print_normal(text):
'''
this prints text to IPython.display as normal text
'''
return display(HTML(text))
```
Now let's use these two functions in a familiar program!
```
print_title("Area of a Rectangle")
length = float(input("Enter length: "))
width = float(input("Enter width: "))
area = length * width
print_normal("The area is %.2f" % area)
```
## Let's get back to credit cards....
Now that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:
- Who is the issuing network? Visa, MasterCard, Discover or American Express.
This problem can be solved by looking at the first digit of the card number:
- "4" ==> "Visa"
- "5" ==> "MasterCard"
- "6" ==> "Discover"
- "3" ==> "American Express"
So for card number `5300023581452982` the issuer is "MasterCard".
It should be easy to write a program to solve this problem. Here's the algorithm:
```
input credit card number into variable card
get the first digit of the card number (e.g. digit = card[0])
if digit equals "4"
    the card issuer is "Visa"
elif digit equals "5"
    the card issuer is "MasterCard"
elif digit equals "6"
    the card issuer is "Discover"
elif digit equals "3"
    the card issuer is "American Express"
else
the issuer is "Invalid"
print issuer
```
### Now You Try It
Turn the algorithm into python code
```
## TODO: Write your code here
ccard = input("Please enter your credit card number: ")
digit = ccard[0]
if digit == '4':
card_issuer = "Visa"
elif digit == '5':
card_issuer = "MasterCard"
elif digit == '6':
card_issuer = "Discover"
elif digit == '3':
card_issuer = "American Express"
else:
card_issuer = "Invalid"
print(card_issuer)
```
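As a design note: the same first-digit branching can be expressed more compactly with a dictionary lookup. This is a sketch of an alternative, not part of the lab's required solution (the name `issuer_of` is made up here):

```python
# Map the first digit of the card number to its issuing network
ISSUERS = {'4': 'Visa', '5': 'MasterCard', '6': 'Discover', '3': 'American Express'}

def issuer_of(card):
    '''Return the issuer name, or "Invalid" for an unrecognized first digit.'''
    return ISSUERS.get(card[0], 'Invalid')

print(issuer_of('5300023581452982'))  # MasterCard
```

The `dict.get(key, default)` call replaces the whole if/elif chain, and adding a new network becomes a one-line change to the dictionary.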
**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the "Invalid" case.
## Introducing the Write - Refactor - Test - Rewrite approach
It would be nice to re-write this code to use a function. This can seem daunting or confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking "functions first" it's the best way to modularize your code with functions. Here's the approach:
1. Write the code
2. Refactor (change the code around) to use a function
3. Test the function by calling it
4. Rewrite the original code to use the new function.
We already did step 1: Write so let's move on to:
### Step 2: refactor
Let's strip the logic out of the above code to accomplish the task of the function:
- Send into the function as input a credit card number as a `str`
- Return back from the function as output the issuer of the card as a `str`
To help you out we've written the function stub for you; all you need to do is write the function body code.
```
def CardIssuer(card):
'''This function takes a card number (card) as input, and returns the issuer name as output'''
## TODO write code here they should be the same as lines 3-13 from the code above
digit = card[0]
if digit == '4':
card_issuer = "Visa"
elif digit == '5':
card_issuer = "MasterCard"
elif digit == '6':
card_issuer = "Discover"
elif digit == '3':
card_issuer = "American Express"
else:
card_issuer = "Invalid"
# the last line in the function should return the output
return card_issuer
```
### Step 3: Test
You wrote the function, but how do you know it works? The short answer is: unless you test it, you're guessing.
Testing our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!
Here's some examples:
```
WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa
WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard
WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover
WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express
WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid
```
### Now you Try it!
Write the tests based on the examples:
```
# Testing the CardIssuer() function
print("WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL", CardIssuer("40123456789"))
print("WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL", CardIssuer("50123456789"))
## TODO: You write the remaining 3 tests, you can copy the lines and edit the values accordingly
print("WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL:", CardIssuer("60123456789"))
print("WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL:", CardIssuer("30123456789"))
print("WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL:", CardIssuer("90123456789"))
```
### Step 4: Rewrite
The final step is to re-write the original program, but use the function instead. The algorithm becomes
```
input credit card number into variable card
call the CardIssuer function with card as input, issuer as output
print issuer
```
### Now You Try It!
```
# TODO Re-write the program here, calling our function.
ccard = input("Please enter your credit card number: ")
card_issuer = CardIssuer(ccard)
print(card_issuer)
```
## Functions are abstractions. Abstractions are good.
Step on the accelerator and the car goes. How does it work? Who cares; it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm
This nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a random sequence of numbers). It uses a verification approach called a **checksum**: a formula over the digits determines whether the number is valid.
Here's the function which given a card will let you know if it passes the Luhn check:
```
# Todo: execute this code
def checkLuhn(card):
''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''
total = 0
length = len(card)
parity = length % 2
for i in range(length):
digit = int(card[i])
if i%2 == parity:
digit = digit * 2
if digit > 9:
digit = digit -9
total = total + digit
return total % 10 == 0
```
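To see the checksum at work, here is a re-statement of the same doubling rule written right-to-left (the function name `luhn_valid` is ours, not the lab's `checkLuhn`; the first card number is the standard worked example from the Wikipedia article linked above):

```python
def luhn_valid(card):
    '''Luhn check, walking the digits from the right for clarity.'''
    total = 0
    for i, ch in enumerate(reversed(card)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:         # a two-digit result contributes its digit sum
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid('79927398713'))  # True: the weighted digit sum is 70, a multiple of 10
print(luhn_valid('79927398710'))  # False
```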
### Is that a credit card number or the ramblings of a madman?
In order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website: http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating "test" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.
Grab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:
```
WHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True
WHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False
```
```
#TODO Write your two tests here
print("When card='4716511919678261' We EXPECT checkLuhn(card) to return True. ACTUAL: %s" % checkLuhn('4716511919678261'))
print("When card='4222222222222222' We EXPECT checkLuhn(card) to return False. ACTUAL: %s" % checkLuhn('4222222222222222'))
```
## Putting it all together
Finally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number it will output the issuer if the card is valid, or a message that it is invalid.
Here's the Algorithm:
```
loop
input a credit card number
if card = 'quit' stop loop
if card passes luhn check
get issuer
print issuer
else
print invalid card
```
### Now You Try It
```
## TODO Write code here
while True:
    ccard = input("Enter your credit card number or 'quit' to end the program: ")
    if ccard == 'quit':
        break
    try:
        if checkLuhn(ccard):
            print(CardIssuer(ccard))
        else:
            print("Invalid card. Try again.")
    except ValueError:
        print("Please enter a number. Try again.")
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark= False
m = 250
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,3,500)
idx= []
for i in range(3):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((500,))
np.random.seed(12)
x[idx[0]] = np.random.uniform(low =-1,high =0,size= sum(idx[0]))
x[idx[1]] = np.random.uniform(low =0,high =1,size= sum(idx[1]))
x[idx[2]] = np.random.uniform(low =2,high =3,size= sum(idx[2]))
x[idx[0]][0], x[idx[2]][5]
print(x.shape,y.shape)
idx= []
for i in range(3):
idx.append(y==i)
for i in range(3):
y= np.zeros(x[idx[i]].shape[0])
plt.scatter(x[idx[i]],y,label="class_"+str(i))
plt.legend()
bg_idx = [ np.where(idx[2] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(3):
y= np.zeros(x[idx[i]].shape[0])
plt.scatter(x[idx[i]],y,label="class_"+str(i))
plt.legend()
foreground_classes = {'class_0','class_1' }
background_classes = {'class_2'}
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(m,1))
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(m):
print(mosaic_list_of_images[0][j])
mosaic_list_of_images[0:2], mosaic_list_of_images[1000:1002]
np.zeros(5)
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):
"""
mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point
labels : mosaic_dataset labels
foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average
dataset_number : will help us to tell what ratio of foreground image to be taken. for eg: if it is "j" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9
"""
avg_image_dataset = []
cnt = 0
counter = np.zeros(m)
for i in range(len(mosaic_dataset)):
img = torch.zeros([1], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(m):
if j == give_pref:
img = img + mosaic_dataset[i][j]*dataset_number/m #2 is data dim
else :
img = img + mosaic_dataset[i][j]*(m-dataset_number)/((m-1)*m)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# mean = torch.mean(avg_image_dataset_1, keepdims= True, axis = 0)
# std = torch.std(avg_image_dataset_1, keepdims= True, axis = 0)
# avg_image_dataset_1 = (avg_image_dataset_1 - mean) / std
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
# print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# mean = torch.mean(test_dataset, keepdims= True, axis = 0)
# std = torch.std(test_dataset, keepdims= True, axis = 0)
# test_dataset = (test_dataset - mean) / std
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
# print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
# idx1 = []
# for i in range(3):
# idx1.append(y1 == i)
# for i in range(3):
# z = np.zeros(x1[idx1[i]].shape[0])
# plt.scatter(x1[idx1[i]],z,label="class_"+str(i))
# plt.legend()
plt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')
plt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')
# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')
plt.legend()
plt.title("dataset1 CIN with alpha = 1/"+str(m))
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
idx_1 = y1==0
idx_2 = np.where(idx_1==True)[0]
idx_3 = np.where(idx_1==False)[0]
color = ['#1F77B4','orange', 'brown']
true_point = len(idx_2)
plt.scatter(x1[idx_2[:25]], y1[idx_2[:25]]*0, label='class 0', c= color[0], marker='o')
plt.scatter(x1[idx_3[:25]], y1[idx_3[:25]]*0, label='class 1', c= color[1], marker='o')
plt.scatter(x1[idx_3[50:75]], y1[idx_3[50:75]]*0, c= color[1], marker='o')
plt.scatter(x1[idx_2[50:75]], y1[idx_2[50:75]]*0, c= color[0], marker='o')
plt.legend()
plt.xticks( fontsize=14, fontweight = 'bold')
plt.yticks( fontsize=14, fontweight = 'bold')
plt.xlabel("X", fontsize=14, fontweight = 'bold')
# plt.savefig(fp_cin+"ds1_alpha_04.png", bbox_inches="tight")
# plt.savefig(fp_cin+"ds1_alpha_04.pdf", bbox_inches="tight")
avg_image_dataset_1[0:10]
x1 = (test_dataset).numpy()/m
y1 = np.array(labels)
# idx1 = []
# for i in range(3):
# idx1.append(y1 == i)
# for i in range(3):
# z = np.zeros(x1[idx1[i]].shape[0])
# plt.scatter(x1[idx1[i]],z,label="class_"+str(i))
# plt.legend()
plt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')
plt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')
# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')
plt.legend()
plt.title("test dataset1 ")
test_dataset.numpy()[0:10]/m
test_dataset = test_dataset/m
test_dataset.numpy()[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape, avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(test_dataset, labels )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(1,50)
self.linear2 = nn.Linear(50,10)
self.linear3 = nn.Linear(10,2)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
torch.nn.init.xavier_normal_(self.linear3.weight)
torch.nn.init.zeros_(self.linear3.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = (self.linear3(x))
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
    return r_loss/(i+1)  # average over the (i+1) batches seen
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the 1000 test dataset %d: %.2f %%' % (number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.0001 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1500
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %.2f %%' % ( 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi, net
train_loss_all=[]
testloader_list= [ testloader_1 ]
loss, net = train_all(trainloader_1, 1, testloader_list)
train_loss_all.append(loss)
net.linear1.weight, net.linear1.bias
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
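The averaging in `create_avg_image_from_mosaic_dataset` gives the foreground element weight `dataset_number/m` and each of the remaining `m-1` background elements weight `(m-dataset_number)/((m-1)*m)`. A quick standalone check (plain Python, no dependency on the tensors above) confirms these weights always sum to 1:

```python
m = 250  # same mosaic length as above
for dataset_number in (1, 10, m):
    fg_w = dataset_number / m                        # foreground weight
    bg_w = (m - dataset_number) / ((m - 1) * m)      # each background weight
    total = fg_w + (m - 1) * bg_w                    # algebraically j/m + (m-j)/m = 1
    assert abs(total - 1.0) < 1e-12
    print(dataset_number, round(total, 6))
```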
|
github_jupyter
|
# Bayesian Camera Calibration
> Let's apply Bayesian analysis to calibrate a camera
- toc: true
- badges: true
- comments: true
- categories: [Bayesian, Computer Vision]
- image: images/2020-03-28-Bayesian-Camera-Calibration/header.jpg
```
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
plt.rcParams['figure.figsize'] = [10,10]
def x_rot(theta,x,y,z):
theta *= np.pi/180
x_rot = x
y_rot = np.cos(theta)*y - np.sin(theta)*z
z_rot = np.sin(theta)*y + np.cos(theta)*z
return(x_rot,y_rot,z_rot)
def y_rot(theta,x,y,z):
theta *= np.pi/180
x_rot = np.cos(theta)*x + np.sin(theta)*z
y_rot = y
z_rot = -np.sin(theta)*x + np.cos(theta)*z
return(x_rot,y_rot,z_rot)
def z_rot(theta,x,y,z):
theta *= np.pi/180
x_rot = np.cos(theta)*x - np.sin(theta)*y
y_rot = np.sin(theta)*x + np.cos(theta)*y
z_rot = z
return(x_rot,y_rot,z_rot)
points = np.loadtxt("data/2020-02-23-An-Adventure-In-Camera-Calibration/points.csv")
points_2d = points[:,0:2]
points_3d = points[:,2:5]
number_points = points.shape[0]
px = points_2d[:,0]
py = points_2d[:,1]
X_input = points_3d[:,0]
Y_input = points_3d[:,1]
Z_input = points_3d[:,2]
def rotate(theta_Z_est,theta_Y_est,theta_X_est, X_est, Y_est, Z_est):
X_est, Y_est, Z_est = z_rot(theta_Z_est, X_est, Y_est, Z_est)
X_est, Y_est, Z_est = y_rot(theta_Y_est, X_est, Y_est, Z_est)
X_est, Y_est, Z_est = x_rot(theta_X_est, X_est, Y_est, Z_est)
return(X_est, Y_est, Z_est)
# Define priors, projection model, and likelihood inside a PyMC3 model context
# (pm.Normal and pm.sample require an enclosing pm.Model)
with pm.Model() as model:
    X_translate_est = pm.Normal('X_translate', mu = -7, sigma = 1)
    Y_translate_est = pm.Normal('Y_translate', mu = -13, sigma = 1)
    Z_translate_est = pm.Normal('Z_translate', mu = 3, sigma = 1)
    focal_length_est = pm.Normal('focal_length', mu = 1000, sigma = 100)
    theta_Z_est = pm.Normal('theta_Z', mu = -45, sigma = 30)
    theta_Y_est = pm.Normal('theta_Y', mu = 0, sigma = 15)
    theta_X_est = pm.Normal('theta_X', mu = 90, sigma = 30)
    c_x_est = pm.Normal('c_x', mu = 1038.42, sigma = 100)
    c_y_est = pm.Normal('c_y', mu = 2666.56, sigma = 100)
    # Fixed radial distortion coefficients and error scale
    k1 = -0.351113
    k2 = 0.185768
    k3 = -0.032289
    error_scale = 2
    # Translate, then rotate the 3D points into the camera frame
    X_est = X_input + X_translate_est
    Y_est = Y_input + Y_translate_est
    Z_est = Z_input + Z_translate_est
    X_est, Y_est, Z_est = rotate(theta_Z_est, theta_Y_est, theta_X_est, X_est, Y_est, Z_est)
    # Perspective projection with radial distortion
    px_est = X_est / Z_est
    py_est = Y_est / Z_est
    r = np.sqrt(px_est**2 + py_est**2)
    px_est *= (1 + k1 * r + k2 * r**2 + k3 * r**3)
    py_est *= (1 + k1 * r + k2 * r**2 + k3 * r**3)
    px_est *= focal_length_est
    py_est *= focal_length_est
    px_est += c_x_est
    py_est += c_y_est
    delta = np.sqrt((px - px_est)**2 + (py - py_est)**2)
    # Define likelihood: reprojection error should be near zero
    likelihood = pm.Normal('error', mu = delta, sigma = error_scale, observed=np.zeros(number_points))
    # Inference!
    trace = pm.sample(2_000, cores=4, tune=5000)
plt.figure(figsize=(7, 7))
pm.traceplot(trace[1000:])
plt.tight_layout();
pm.plot_posterior(trace);
pm.summary(trace)
```
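As a quick sanity check on the rotation helpers (the function is reproduced here so the snippet is standalone): a 90° rotation about the x-axis should send the y-axis onto the z-axis.

```python
import numpy as np

def x_rot(theta, x, y, z):
    # same convention as above: theta in degrees, right-handed rotation about x
    theta *= np.pi / 180
    return (x,
            np.cos(theta) * y - np.sin(theta) * z,
            np.sin(theta) * y + np.cos(theta) * z)

xr, yr, zr = x_rot(90, 0.0, 1.0, 0.0)
print(np.allclose([xr, yr, zr], [0.0, 0.0, 1.0]))  # True
```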
|
github_jupyter
|
# Machine Learning Project: End-to-End Classification Test
## 1. Loading the Data
```
# import the required modules
import pyspark
from pyspark.sql import SparkSession
import joblib
# create a Spark session
mon_spark = SparkSession.builder.master("local").appName("MLproject").getOrCreate()
ccdefault = mon_spark.read.format("csv").options(header=True, inferSchema=True).load("Doc_Evaluation/session pratique/data/ccdefault.csv")
# Display the first 3 rows
ccdefault.show(3)
```
## 2. Exploratory Analysis
```
# Display the attributes (schema)
ccdefault.printSchema()
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
# Convert to a pandas DataFrame for easier manipulation
ccdefault.pd = ccdefault.toPandas()
# Rename: the column "PAY_0" should really be "PAY_1"
ccdefault.pd.rename(columns={'PAY_0':'PAY_1'}, inplace=True)
ccdefault.pd.columns
## Some summary statistics of the data
ccdefault.pd.describe()
```
### 2.1 Distribution of the Continuous Variables
```
## quantitative variables
var_quant=ccdefault.pd[['LIMIT_BAL','AGE','BILL_AMT4','BILL_AMT5','BILL_AMT6','PAY_AMT1','PAY_AMT2','PAY_AMT3','PAY_AMT4','PAY_AMT5','PAY_AMT6']]
var_quant
for col in var_quant:
plt.figure()
sns.distplot(var_quant[col])
```
### 2.2 Distribution of the Categorical Variables
```
## categorical variables
var_qual=ccdefault.pd[['SEX','MARRIAGE','EDUCATION','PAY_1','PAY_2','PAY_3','PAY_4','PAY_5','PAY_6']]
var_qual
## List the distinct categories of each categorical variable
for col in var_qual:
print(f'{col:-<50} {var_qual[col].unique()}')
## Plot the category distribution of each variable
for col in var_qual:
plt.figure()
var_qual[col].value_counts().plot.pie()
```
### 2.3 Relationship Between the Explanatory Variables and the Target Variable
```
# Recode 'SEX' from (1, 2) to 'M' and 'F'
ccdefault.pd["SEX"]=ccdefault.pd["SEX"].map({1:'M',2:'F'}).astype('category')
ccdefault.pd["SEX"].dtypes
# Create a new column "RET_PAY" flagging clients with at least one late payment across PAY_1 to PAY_6
# 0: no late payment; 1: late payment
condition = (ccdefault.pd.PAY_1 >1) | (ccdefault.pd.PAY_2 >1) | (ccdefault.pd.PAY_3 >1) | (ccdefault.pd.PAY_4 >1) | (ccdefault.pd.PAY_5 >1) | (ccdefault.pd.PAY_6 >1)
ccdefault.pd.loc[condition, "RET_PAY"] = 1
ccdefault.pd.loc[ccdefault.pd.RET_PAY.isna(),"RET_PAY"] = 0
ccdefault.pd
# function to plot the relationship between an attribute and the target variable
def relation_var(nom_colonne):
# Get the percentage of default by each group
status_rembourse_group = pd.crosstab(index=ccdefault.pd['RET_PAY'],columns = ccdefault.pd[nom_colonne], normalize = 'columns')
# Round up to 2 decimal
status_rembourse_group = status_rembourse_group.apply(lambda x: round(x,2))
labels = status_rembourse_group.columns
list1 = status_rembourse_group.iloc[0].to_list()
list2 = status_rembourse_group.iloc[1].to_list()
list1_name = "Remboursé"
list2_name = "Non Remboursé"
title = f"Default by {nom_colonne}"
xlabel = nom_colonne
ylabel = "Pourcentage de non remboursement"
fig, ax = plt.subplots(figsize=(10, 5))
bar_width = 0.5
ax1 = ax.bar(labels,list1, bar_width, label = list1_name)
ax2 = ax.bar(labels,list2, bar_width, bottom = list1, label = list2_name)
ax.set_title(title, fontweight = "bold")
ax.set_xlabel(xlabel, fontweight = "bold")
ax.set_ylabel(ylabel, fontweight = "bold")
ax.legend(loc="best")
plt.xticks(list(range(len(labels))), labels,rotation=90)
plt.yticks(fontsize=9)
for r1, r2 in zip(ax1, ax2):
h1 = r1.get_height()
h2 = r2.get_height()
plt.text(r1.get_x() + r1.get_width() / 2., h1 / 2., f"{h1:.0%}", ha="center", va="center", color="white", fontsize=9, fontweight="bold")
plt.text(r2.get_x() + r2.get_width() / 2., h1 + h2 / 2., f"{h2:.0%}", ha="center", va="center", color="white", fontsize=9, fontweight="bold")
plt.show()
```
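The key step in `relation_var` is `pd.crosstab(..., normalize='columns')`, which turns raw counts into per-category proportions so each stacked bar sums to 100%. A minimal standalone illustration with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'RET_PAY': [0, 0, 1, 0, 1, 1],
                   'SEX':     ['M', 'F', 'M', 'F', 'F', 'M']})
# normalize='columns' divides each count by its column total,
# so every SEX column sums to 1
tab = pd.crosstab(index=df['RET_PAY'], columns=df['SEX'], normalize='columns')
print(tab)
```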
#### Relationship between the target variable and clients' sex, marital status, and education level
```
var_qual1=["SEX","MARRIAGE","EDUCATION"]
for col in var_qual1:
relation_var(col)
```
#### Relationship between the target variable and clients' age
```
bornes= [21,30,40,50,60,70,80]
tranches_ages = ['20-30','30-40','40-50','50-60','60-70','70-80']
ccdefault.pd['AGE'] = pd.cut(ccdefault.pd['AGE'],bins=bornes, labels=tranches_ages ,right=False)
relation_var('AGE')
```
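`pd.cut` with `right=False` makes each bin closed on the left and open on the right, so an age of exactly 30 falls in the '30-40' band rather than '20-30'. A small standalone check of that behavior:

```python
import pandas as pd

bins = [21, 30, 40, 50, 60, 70, 80]
labels = ['20-30', '30-40', '40-50', '50-60', '60-70', '70-80']
ages = pd.Series([21, 29, 30, 45, 79])
# right=False -> intervals [21, 30), [30, 40), ..., [70, 80)
banded = pd.cut(ages, bins=bins, labels=labels, right=False)
print(list(banded))  # ['20-30', '20-30', '30-40', '40-50', '70-80']
```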
### 2.4 Payment Trends Between April 2005 and September 2005
```
# Extract clients with at least one late payment between April and September
retard= ccdefault.pd[ccdefault.pd['RET_PAY']== 1]
tendance= retard[['PAY_6','PAY_5','PAY_4','PAY_3','PAY_2','PAY_1']].sum(axis=0)
fig,ax = plt.subplots()
ax.plot(tendance)
plt.xticks(['PAY_6','PAY_5','PAY_4','PAY_3','PAY_2','PAY_1'],['Apr','May','Jun','Jul','Aug','Sep'])
plt.xlabel('Mois',fontweight='bold')
plt.ylabel('Total mois payés en retard',fontweight='bold')
plt.title('tendance paiements retardés',fontweight='bold')
plt.show()
# Relationship between the bill amount and the payment delay
from matplotlib.pyplot import figure
retard= ccdefault.pd[ccdefault.pd['RET_PAY']== 1]
paiements = [ f"PAY_{i}" for i in range(1, 7) ]
consommation= [ f"BILL_AMT{i}" for i in range(1, 7) ]
fig, ax = plt.subplots(3,2, figsize=(10, 10))
for paie, conso, m in zip(paiements, consommation, ax.flatten()):
data = []
for i in sorted(retard[paie].unique()):
temp = retard.loc[retard[paie] == i, conso]
data.append(temp)
m.boxplot(data, showfliers=False,)
m.set_xticklabels(sorted(retard[paie].unique()))
plt.show()
# Relationship between the allocated credit limit and the target variable
# 1: not repaid (default); 0: repaid
def0 = ccdefault.pd.loc[ccdefault.pd['DEFAULT'] == 0,'LIMIT_BAL']
def1 = ccdefault.pd.loc[ccdefault.pd['DEFAULT'] == 1,'LIMIT_BAL']
fig, ax = plt.subplots()
ax.boxplot([def0, def1], showfliers=False)
ax.set_xticklabels(['Remboursé',"Non remboursé"],fontweight ='bold')
ax.set_ylabel('Crédit alloué',fontweight ='bold')
ax.set_title('Crédit alloué & Etat du remboursement',fontweight ='bold')
plt.show()
```
## 3. Comparative Study of Machine Learning Models for Predicting the Target Variable
```
## Missing values
from pyspark.sql.functions import col
for c in ccdefault.columns:
    # count NULL entries in each column
    count = ccdefault.filter(col(c).isNull()).count()
    print(str(count) + " missing values in column " + c)
from pyspark.ml.feature import VectorAssembler, StringIndexer, VectorIndexer, MinMaxScaler
from pyspark.ml import Pipeline
from pyspark.sql.functions import *
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
# Drop ID column
ccdefault = ccdefault.select(ccdefault.schema.names[1:])
# Split data into training and test sample
splits = ccdefault.randomSplit([0.75, 0.25])
ccdefault_train = splits[0]
ccdefault_test = splits[1]
# Get and convert categorical features (SEX, EDUCATION, MARRIAGE)
categorical_features = ccdefault.schema.names[1:4]
catVect = VectorAssembler(inputCols = categorical_features, outputCol = "catFeatures")
catIdx = VectorIndexer(inputCol = catVect.getOutputCol(), outputCol = "idxCatFeatures")
# Get and normalize numerical features (exclude the DEFAULT label, which is the last column)
numerical_features = ccdefault.schema.names[0:1] + ccdefault.schema.names[4:-1]
numVect = VectorAssembler(inputCols = numerical_features, outputCol = "numFeatures")
minMax = MinMaxScaler(inputCol = numVect.getOutputCol(), outputCol = "normFeatures")
# Define pipeline
featVect = VectorAssembler(inputCols=["idxCatFeatures", "normFeatures"], outputCol = "features")
pipeline = Pipeline(stages = [catVect, catIdx, numVect, minMax, featVect])
pipeline_object = pipeline.fit(ccdefault_train)
# Run training and test data through the pipeline
ccdefault_train = pipeline_object.transform(ccdefault_train).select("features", col("DEFAULT").alias("label"))
ccdefault_test = pipeline_object.transform(ccdefault_test).select("features", col("DEFAULT").alias("label"))
accuracy = MulticlassClassificationEvaluator(
labelCol = "label", predictionCol = "prediction", metricName = "accuracy")
precision = MulticlassClassificationEvaluator(
labelCol = "label", predictionCol = "prediction", metricName = "weightedPrecision")
recall = MulticlassClassificationEvaluator(
labelCol = "label", predictionCol = "prediction", metricName = "weightedRecall")
```
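Before comparing models, it helps to be explicit about what the three evaluators above compute. A minimal pure-Python sketch (toy labels and predictions, no Spark required, names made up) of accuracy and support-weighted precision/recall:

```python
from collections import Counter

def weighted_metrics(labels, preds):
    # Accuracy: fraction of correct predictions
    acc = sum(l == p for l, p in zip(labels, preds)) / len(labels)
    classes = sorted(set(labels))
    support = Counter(labels)
    w_prec = w_rec = 0.0
    for c in classes:
        tp = sum(1 for l, p in zip(labels, preds) if l == c and p == c)
        pred_c = sum(1 for p in preds if p == c)
        prec = tp / pred_c if pred_c else 0.0
        rec = tp / support[c]
        w = support[c] / len(labels)  # class weight = relative support
        w_prec += w * prec
        w_rec += w * rec
    return acc, w_prec, w_rec

print(weighted_metrics([0, 0, 1, 1, 1], [0, 1, 1, 1, 0]))
```

The "weighted" in `weightedPrecision`/`weightedRecall` refers to exactly this averaging by class support.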
### 3.1 Logistic regression
```
logit = LogisticRegression(labelCol = "label", featuresCol = "features", maxIter = 20, regParam = 0.2)
model = logit.fit(ccdefault_train)
predictions = model.transform(ccdefault_test)
predictions.select("prediction", "label", "features").show(5)
# select (prediction, true label) and compute test error
evaluator = BinaryClassificationEvaluator()  # default metric: areaUnderROC
auc = evaluator.evaluate(predictions)
print("AUC = "+ str(auc))
```
### 3.2 Decision tree
```
dt = DecisionTreeClassifier().setLabelCol("label").setFeaturesCol("features")
# train the model
dtModel = dt.fit(ccdefault_train)
# make predictions on the test data
predictions = dtModel.transform(ccdefault_test)
predictions.select("prediction", "label", "features").show(5)
# select (prediction, true label) and compute test error
evaluator = BinaryClassificationEvaluator()  # default metric: areaUnderROC
auc = evaluator.evaluate(predictions)
print("AUC = "+ str(auc))
```
### 3.3 Random forest
```
rf = RandomForestClassifier().setLabelCol("label").setFeaturesCol("features")
# train the model
rfModel = rf.fit(ccdefault_train)
# make predictions on the test data
predictions = rfModel.transform(ccdefault_test)
predictions.select("prediction", "label", "features").show(5)
# select (prediction, true label) and compute test error
evaluator = BinaryClassificationEvaluator()  # default metric: areaUnderROC
auc = evaluator.evaluate(predictions)
print("AUC = "+ str(auc))
```
---
# How to read data from various file formats
Some of the most basic things no one ever teaches you is how to actually access your data in various formats. This notebook shows a couple of examples of how to read data from a number of sources. Feel free to edit this notebook with more methods that you have worked with.
```
#import relevant packages
from urllib.request import urlretrieve
import matplotlib.pyplot as plt
import pandas as pd
from sqlalchemy import create_engine
import numpy as np
from astropy.io import fits
import urllib
import h5py
import pickle
%matplotlib inline
```
# importing files from the internet
```
# Assign url of file: url
url ='https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv'
# Save file locally
from urllib.request import urlretrieve
urlretrieve(url, "winequality-red.csv")
# or use wget
#file_name = wget.download(url)
# Read file into a DataFrame and print its head
df = pd.read_csv('winequality-red.csv', sep=';')
print(df.head())
df.iloc[:, 0:1].hist()  # .ix was removed from pandas; use iloc
plt.xlabel('fixed acidity (g(tartaric acid)/dm$^3$)')
plt.ylabel('count')
plt.show()
```
# same thing with csv or txt file
```
example_sheet='cereal.csv'
example_file='cereal.txt'
xl = pd.read_csv(example_sheet)
# alternatively you can use read_table for tab-separated text files
x2 = pd.read_table(example_file)
print(xl.keys())
# pandas lets you specify separators as well as number of columns and how to fill NaNs
#pd.read_csv(file, sep='\t', comment='#', na_values='Nothing')
# textfiles
data = np.loadtxt(example_file, delimiter='\t', skiprows=1, usecols=[4,5])
```
# Chunks
```
chunksize = 10 ** 6
for chunk in pd.read_csv(example_file,sep='\t', chunksize=chunksize):
    print(len(chunk)) # len can be replaced with any process that you would want to use
#similarly using read_table
for chunk in pd.read_table(example_file,sep='\t', chunksize=chunksize):
len(chunk)
```
# reading fits files
```
filename= 'example.fits'
hdulist = fits.open(filename)
final_data = hdulist[1].data
final_data.columns  # columns is an attribute, not a method
final_data[1]
```
# writing and reading HDF5 files
```
data_matrix = np.random.uniform(-1, 1, size=(10, 3))
# Write data to HDF5
data_file = h5py.File('file.hdf5', 'w')
data_file.create_dataset('group_name', data=data_matrix)
data_file.close()
filename_hdf = 'file.hdf5'
f = h5py.File(filename_hdf, 'r')
# List all groups
print("Keys: %s" % f.keys())
a_group_key = list(f.keys())[0]
# Get the data
data = list(f[a_group_key])
```
# SQL databases
assuming you want to read them into Python
also have a look at the databases talk Sarah gave (27/04/18)
```
# make sql database with pandas
engine = create_engine('PATH')
df.to_sql('new_database', engine)  # to_sql is a DataFrame method, not a pandas function
pd.read_sql("SELECT * FROM new_database", engine)
```
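With a real connection string the pattern above works end-to-end. A self-contained sketch using the standard-library `sqlite3` in-memory database (the table and column names here are made up for illustration):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')  # throwaway in-memory database
frame = pd.DataFrame({'name': ['a', 'b'], 'value': [1, 2]})
frame.to_sql('measurements', conn, index=False)  # write the frame as a table
result = pd.read_sql('SELECT * FROM measurements WHERE value > 1', conn)
print(result)
```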
# Reading pickled files
I didn't have a pickled file ready so we will make a mock file to start with
```
your_data = {'foo': 'bar'} #makes dictionary
#alternatively use pandas to make and read pickled files
# Store data (serialize)
with open('filename.pickle', 'wb') as handle:
pickle.dump(your_data, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Load data (deserialize)
with open('filename.pickle', 'rb') as handle:
unserialized_data = pickle.load(handle)
print(unserialized_data)
```
# Reading JSON files
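JSON files can be read either with the standard library or straight into pandas; a minimal self-contained sketch (the file name `example.json` is made up):

```python
import json
import pandas as pd

records = [{"name": "a", "value": 1}, {"name": "b", "value": 2}]
with open('example.json', 'w') as f:
    json.dump(records, f)

# standard library: gives back plain dicts/lists
with open('example.json') as f:
    data = json.load(f)

# pandas: a list of records becomes a DataFrame directly
df = pd.read_json('example.json')
print(df.head())
```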
---
# Filtering Rows
```
# import pandas
import pandas as pd
# read movie data
movies = pd.read_csv("http://bit.ly/imdbratings")
# examine first few rows
movies.head()
```
## Filtering Movies with `for` Loop
```
booleans = []
for length in movies.duration:
if length >= 200:
booleans.append(True)
else:
booleans.append(False)
# Check length of booleans
len(booleans)
# Inspect booleans elements
booleans[0:5]
# create a pandas series
is_long = pd.Series(booleans)
# Inspect few values
is_long.head()
# show all columns for movies with duration >= 200 minutes
movies[is_long]
```
## Filtering by Condition
```
# filtering by conditions
is_long = movies.duration >= 200
is_long.head()
# show the rows with duration >= 200
movies[is_long]
```
## Filtering in DataFrame
```
# filtering by columns
movies[movies.duration >= 200]
# select only genre
movies[movies.duration >= 200].genre
# same as above
movies[movies.duration >= 200]['genre']
# select columns by label
movies.loc[movies.duration >= 200, 'genre']
```
## Multiple Filtering Criteria
```
# True and True == True
# True and False == False
# True or True == True
# True or False == True
# False or False == False
True and True
True and False
True or True
True or False
# multiple criteria
movies[(movies.duration >= 200) & (movies.genre == 'Drama')]
# multiple criteria
movies[(movies.duration >= 200) | (movies.genre == 'Drama')]
# multiple or conditions
movies[(movies.genre == "Crime") | (movies.genre == 'Drama') | (movies.genre == "Action")]
# multiple or using isin() method
movies.genre.isin(["Drama", "Action", "Crime"])
# pass the series in DataFrame
movies[movies.genre.isin(["Drama", "Action", "Crime"])]
```
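An alternative, often more readable way to express the same multi-criteria filters is `DataFrame.query`; a small self-contained sketch on toy data (not the movies dataset):

```python
import pandas as pd

df = pd.DataFrame({'duration': [210, 150, 250],
                   'genre': ['Drama', 'Action', 'Crime']})
# same boolean logic as the & / | expressions above
long_dramas = df.query("duration >= 200 and genre == 'Drama'")
long_or_crime = df.query("duration >= 200 or genre == 'Crime'")
print(long_dramas)
print(long_or_crime)
```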
<h3>About the Author</h3>
This repo was created by <a href="https://www.linkedin.com/in/jubayer28/" target="_blank">Jubayer Hossain</a> <br>
<a href="https://www.linkedin.com/in/jubayer28/" target="_blank">Jubayer Hossain</a> is a student of Microbiology at Jagannath University and the founder of <a href="https://github.com/hdro" target="_blank">Health Data Research Organization</a>. He is also a team member of a bioinformatics research group known as Bio-Bio-1.
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.m
---
```
import numpy as np
from scipy import linalg
from scipy import optimize
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
import ipywidgets as widgets
```
The following Model Project is based on the classic Solow model as we know it from Macroeconomics. First, we will model the Solow model in its simplest form. Next we will build on this simple Solow model by adding Total Factor Productivity (TFP).
1) Our model is defined by the framework below. Note that TFP is not part of the model yet and that we are assuming that we are in a small closed economy:
A small closed economy can be described by the following equations:
\\[ Y_t = BK_t^\alpha L_t^{1-\alpha},\alpha\in(0,1) \\]
\\[ S_{t+1} = sY_t, s\in(0,1) \\]
\\[ L_{t+1} = (1+n)L_t, n>-1 \\]
where $Y_t = F(K_t,L_t)$ is GDP; $K_t$ is capital; $L_t$ is labor (growing at a constant rate $n$); $S_t$ is total savings; $s$ is the savings rate and $k_t = K_t/L_t$. Note also that $B$, $\alpha$, $s$ and $n$ are exogenous parameters.
The transition equation, which shows how capital is accumulated, then becomes
\\[ k_{t+1} = \frac{1}{1+n}(sBk_t^{\alpha}+(1-\delta)k_t), \quad 0<\delta\leq 1\\]
where, in addition to above defined parameters, $\delta$ is depreciation of capital.
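Setting $k_{t+1}=k_t=k^*$ in the transition equation gives the closed-form steady state
\\[ k^*(n+\delta) = sB(k^*)^{\alpha} \quad\Leftrightarrow\quad k^* = \left(\frac{sB}{n+\delta}\right)^{\frac{1}{1-\alpha}} \\]
which is a useful sanity check on the numerical solutions below.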
**Steady state** for $k_t$ and $y_t$ is derived below
```
#Below is run to get LaTex format
sm.init_printing(use_unicode=True)
# Define variables
k = sm.symbols('k')
y = sm.symbols('y')
K = sm.symbols('K')
B = sm.symbols('B')
L = sm.symbols('L')
Y = sm.symbols('Y')
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
s = sm.symbols('s')
n = sm.symbols('n')
# Define transition equation and solve steady state for capital per capital
f=B*k**alpha
transition1=sm.Eq(k,((1)/(1+n)*(s*B*(k**alpha)+(1-delta)*k)))
steadystate_k1=sm.solve(transition1,k)
print('Steady state of k is')
steadystate_k1
y=B*k**alpha
print('Steady state for y is')
steadystate_y1=y.subs({'k':steadystate_k1[0]})
steadystate_y1
```
Above we have found the expressions for steady state in capital per capita and income per capita. We test our steady-state expressions by plugging in arbitrary values for each parameter: $B=10$, $s=0.20$, $n=2\%$, $\alpha=\frac{1}{3}$ and $\delta=1$. See below.
We notice that this value seems plausible and proceed by solving for optimal steady states using more sophisticated functions. Again we start by choosing arbitrary parameter values.
```
s=0.2
n=0.02
delta=1
alpha=1/3
B=10
opt_steadystate_k1= lambda steadystate_k1: steadystate_k1 - ( ((1)/(1+n)*(s*B*(steadystate_k1)**alpha+(1-delta)*steadystate_k1)) )
result1 = optimize.root_scalar(opt_steadystate_k1,bracket=[0.1,10],method='brentq')
print('the steady state for k is', result1.root)
```
Given the specified parameter values, steady state of capital per capita is 2.75.
We now wish to investigate how the steady state value of capital per capita changes when the savings rate changes by an arbitrary amount.
```
n=0.02
delta=1
alpha=1/3
B=10
savings = [0.05,0.1,0.2,0.25,0.4,0.5,0.75,0.9]
for s1 in savings:
obj_kss = lambda steadystate_k1: steadystate_k1 - ((1)/(1+n)*(s1*B*(steadystate_k1)**alpha+(1-delta)*steadystate_k1))
result1 = optimize.root_scalar(obj_kss,bracket=[0.1,1000],method='brentq')
print(f'for savings = {s1:.3f} the steady state for k is',result1.root)
```
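The numerical roots above can be checked against the closed form $k^* = \left(\frac{sB}{n+\delta}\right)^{1/(1-\alpha)}$, obtained by setting $k_{t+1}=k_t$ in the transition equation; a quick sketch with the same parameter values:

```python
n, delta, alpha, B = 0.02, 1, 1/3, 10
for s1 in [0.05, 0.1, 0.2, 0.25, 0.4, 0.5, 0.75, 0.9]:
    # analytic steady state from k(n + delta) = s*B*k**alpha
    k_star = (s1 * B / (n + delta)) ** (1 / (1 - alpha))
    print(f'for savings = {s1:.3f} the closed-form steady state for k is {k_star:.3f}')
```

For $s=0.2$ this gives $k^*\approx 2.75$, matching the Brent-root result above.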
From the results above it is instantly clear that the savings rate, s, has a substantial impact on capital accumulation in the society. Below is a visualisation of how the parameters impact the accumulation of capital as well as income.
In order to create this visualisation, our next step is to create an interactive graph plotting the transition curve and the 45 degree line, including sliders for each of the parameters in the model. The purpose is to see the effect on the steady state point of changing one or more parameters. In order to do so, we first define the range of each slider, then set up the interactive figure, and finally set up the appropriate sliders.
```
# Only k needs a numeric grid here; the parameter values are supplied by the sliders below
k = np.arange(100)
def interactive_transition1(B,s,alpha,n,delta,k):
k0 = ((1)/(1+n)*(s*B*(k**alpha)+(1-delta)*k))
m0 = k
#plt.plot(m)
fig = plt.figure(dpi=90)
ax = fig.add_subplot(1,1,1)
ax.plot(k0, label = 'Capital')
ax.plot(m0, label = '45 degrees line')
ax.set_xlim([0,15]) # fixed x range
ax.set_ylim([0,15]) # fixed y range
plt.xlabel('k in period t')
plt.ylabel('k in period t+1')
plt.title('Transition diagram')
ax.grid(True)
ax.legend(loc='upper left')
y0 = B*((1)/(1+n)*(s*B*(k**alpha)+(1-delta)*k))**alpha
fig = plt.figure(dpi=90)
ax = fig.add_subplot(1,1,1)
ax.plot(y0, label = 'Income per capita')
ax.set_xlim([0,15]) # fixed x range
ax.set_ylim([0,100]) # fixed y range
plt.xlabel('k')
plt.ylabel('y')
plt.title('Income per capita')
ax.grid(True)
ax.legend(loc='upper left')
widgets.interact(interactive_transition1,
#k1=widgets.fixed(k1),
alpha=widgets.FloatSlider(description="$alpha$", min=0, max=1, step=0.005, value=0.33),
s=widgets.FloatSlider(description="$s$", min=0, max=0.7, step=0.005, value=0.2),
B=widgets.FloatSlider(description="$B$", min=0, max=50, step=1, value=10),
n=widgets.FloatSlider(description="$n$", min=0, max=0.2, step=0.01, value=0.02),
delta=widgets.FloatSlider(description="$delta$", min=0, max=1, step=0.01, value=0.4),
k=widgets.fixed(k)
);
```
In the figure we notice that as B, s and alpha increase, so does the steady state for capital accumulation. Conversely, when n and delta increase, the steady state for capital accumulation decreases.
**2) We now consider the Solow-model with a productive externality per worker where:**
A small closed economy can be described by the following equations:
\\[ Y_t = A_tK_t^\alpha L_t^{1-\alpha},\alpha\in(0,1) \\]
\\[ A_t = Bk_t^{\phi(1-\alpha)}, B>0,\phi\in(0,1) \\]
\\[ K_{t+1} = sY_t, s\in(0,1) \\]
\\[ L_{t+1} = (1+n)L_t, n>-1 \\]
where $K_t$ is capital; $L_t$ is labor (growing with a constant rate of $n$); $A_t$ is total factor productivity; $Y_t = F(K_t,A_t,L_t)$ is GDP; $k_t = K_t/L_t$ and s is the savings rate.
Note that TFP depends on capital accumulation, so an increase in capital affects income via two channels: (i) directly through an increase in k and (ii) indirectly through an increase in productivity. Hence any increase in capital has a larger effect on income in this model than in the simple Solow model of part 1.
The transition equation then becomes
\\[ k_{t+1} = \frac{sB}{1+n}k_t^{\alpha+\phi(1-\alpha)}\\]
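Setting $k_{t+1}=k_t=k^*$ and using $\alpha+\phi(1-\alpha) = 1-(1-\alpha)(1-\phi)$ gives the closed-form steady state
\\[ k^* = \left(\frac{sB}{1+n}\right)^{\frac{1}{(1-\alpha)(1-\phi)}} \\]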
**Steady state** for $k_t$ and $y_t$ is derived below
```
#Below is run to get LaTex format
sm.init_printing(use_unicode=True)
# Define variables
k = sm.symbols('k')
y = sm.symbols('y')
K = sm.symbols('K')
B = sm.symbols('B')
L = sm.symbols('L')
Y = sm.symbols('Y')
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
s = sm.symbols('s')
n = sm.symbols('n')
A = sm.symbols('A')
phi = sm.symbols('phi')
# Define income equation and transition equation and solve steady state for capital per capita
y=B*k**(alpha + phi*(1-alpha))
transition=sm.Eq(k,((s*B)/(1+n)*k**(alpha+phi*(1-alpha))))
steadystate_k=sm.solve(transition,k)
print('Steady state of k is')
steadystate_k
print('Steady state for y is')
steadystate_y=y.subs({'k':steadystate_k[0]})
steadystate_y
```
Above we have found the expressions for steady state in capital per capita and income per capita. We test our steady-state expressions by plugging in arbitrary values for each parameter: $B=10$, $s=0.20$, $n=2\%$, $\alpha=\frac{1}{3}$ and $\phi=0.4$. We use a simple function, Solution(x), to retrieve a specific steady-state value for each. See below.
We notice that this value seems plausible and proceed by solving for optimal steady states using more sophisticated functions. Again we start by choosing arbitrary parameter values.
```
print('Inserting the values gives the steadystate value of:')
Solution=sm.lambdify((B,s,n,alpha,phi),steadystate_k)
Solution(10, 0.2, 0.02, 1/3, 0.4)
s=0.2
n=0.02
phi=0.4
alpha=1/3
B=10
f = lambda k: A*k**alpha
opt_steadystate_k= lambda steadystate_k: steadystate_k - ( ((s*B)/(1+n))*steadystate_k**(alpha+phi*(1-alpha)) )
result = optimize.root_scalar(opt_steadystate_k,bracket=[0.1,10],method='brentq')
print('the steady state for k is', result.root)
```
Given the specified parameter values, steady state of capital per capita is 5.38.
We now wish to investigate how the steady state value of capital per capita changes when the savings rate changes by an arbitrary amount.
```
n=0.02
phi=0.4
alpha=1/3
B=10
savings_rate = [0.05,0.1,0.2,0.25,0.4,0.5,0.75,0.9]
for s1 in savings_rate:
f = lambda k: A*k**alpha
opt_steadystate_k= lambda steadystate_k: steadystate_k - ( ((s1*B)/(1+n))*steadystate_k**(alpha+phi*(1-alpha)) )
result = optimize.root_scalar(opt_steadystate_k,bracket=[0.1,1000],method='brentq')
print(f'for s = {s1:.3f} the steady state for k is',result.root)
```
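As in part 1, the roots can be checked against the closed form $k^* = \left(\frac{sB}{1+n}\right)^{1/((1-\alpha)(1-\phi))}$; a quick sketch with the same parameter values:

```python
n, alpha, phi, B = 0.02, 1/3, 0.4, 10
for s1 in [0.05, 0.1, 0.2, 0.25, 0.4, 0.5, 0.75, 0.9]:
    # analytic steady state from k = (s*B/(1+n)) * k**(alpha + phi*(1-alpha))
    k_star = (s1 * B / (1 + n)) ** (1 / ((1 - alpha) * (1 - phi)))
    print(f'for s = {s1:.3f} the closed-form steady state for k is {k_star:.3f}')
```

For $s=0.2$ this gives $k^*\approx 5.38$, matching the numerical result above.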
Similar to part 1, it is instantly clear that the savings rate, s, has a substantial impact on capital accumulation in the society. Below is a visualisation of how the parameters impact the accumulation of capital as well as income.
Our next step is to create an interactive graph plotting the transition curve and the 45 degree line, including sliders for each of the parameters in the model. The purpose is to see the effect on the steady state point of changing one or more parameters. In order to do so, we first define the range of each slider, then set up the interactive figure, and finally set up the appropriate sliders.
```
# Only k needs a numeric grid here; the parameter values are supplied by the sliders below
k = np.arange(100)
def interactive_transition(B,s,alpha,n,phi,k):
k1 = ( ((s*B)/(1+n))*k**(alpha+phi*(1-alpha)) )
m = k
#plt.plot(m)
fig = plt.figure(dpi=90)
ax = fig.add_subplot(1,1,1)
ax.plot(k1, label = 'Capital')
ax.plot(m, label = '45 degrees line')
ax.set_xlim([0,15]) # fixed x range
ax.set_ylim([0,15]) # fixed y range
plt.xlabel('k in period t')
plt.ylabel('k in period t+1')
plt.title('Transition diagram')
ax.grid(True)
ax.legend(loc='upper left')
y1=B*( ((s*B)/(1+n))*k**(alpha+phi*(1-alpha)) )**(alpha + phi*(1-alpha))
fig = plt.figure(dpi=90)
ax = fig.add_subplot(1,1,1)
ax.plot(y1, label='Income per capita')
ax.set_xlim([0,15]) # fixed x range
ax.set_ylim([0,100]) # fixed y range
plt.xlabel('k ')
plt.ylabel('y ')
plt.title('Income per capita')
ax.grid(True)
ax.legend(loc='upper left')
widgets.interact(interactive_transition,
#k1=widgets.fixed(k1),
alpha=widgets.FloatSlider(description="$alpha$", min=0, max=1, step=0.005, value=0.33),
s=widgets.FloatSlider(description="$s$", min=0, max=0.7, step=0.005, value=0.2),
B=widgets.FloatSlider(description="$B$", min=0, max=50, step=1, value=10),
n=widgets.FloatSlider(description="$n$", min=0, max=0.2, step=0.01, value=0.02),
phi=widgets.FloatSlider(description="$phi$", min=0, max=1, step=0.01, value=0.56),
k=widgets.fixed(k)
);
```
Similar to part 1, we notice that as B, s, alpha and phi increase, so does the steady state for capital accumulation. Conversely, when n increases, the steady state for capital accumulation decreases. Importantly, when total factor productivity depends on k, changes in either k or the productivity factor B have a significantly larger effect on capital accumulation and thereby income.
We can therefore conclude that an economy with a productive externality per capita will accumulate more capital.
---
```
from itertools import product
import qiskit
import numpy as np
import tqix
import sys
def generate_u_pauli(num_qubits):
    # Tensor products of single-qubit Paulis: all 3**num_qubits combinations of X, Y, Z
    sigma = [tqix.sigmax(), tqix.sigmay(), tqix.sigmaz()]
    Us = []
    for com in product(range(3), repeat=num_qubits):
        U = sigma[com[0]]
        for i in range(1, num_qubits):
            U = np.kron(U, sigma[com[i]])
        Us.append(U)
    return Us
def create_basic_vector(num_qubits: int):
"""Generate list of basic vectors
Args:
num_qubits (int): number of qubits
Returns:
np.ndarray: |00...0>, |00...1>, ..., |11...1>
"""
bs = []
for i in range(0, 2**num_qubits):
b = np.zeros((2**num_qubits, 1))
b[i] = 1
bs.append(b)
return bs
def calculate_sigma(U: np.ndarray, b: np.ndarray):
"""Calculate measurement values
Args:
U (np.ndarray): operator
b (np.ndarray): basic vector
Returns:
np.ndarray: sigma operator
"""
return (np.conjugate(np.transpose(U)) @ b @ np.conjugate(np.transpose(b)) @ U)
# def calculate_mu(density_matrix):
# M = np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)
# for i in range(0, num_observers):
# for j in range(0, 2**num_qubits):
# k = sigmass[i][j]
# M += np.trace(k @ density_matrix) * k
# M /= num_observers
# return M
def calculate_mu_inverse(density_matrix, num_qubits):
k = 3*density_matrix - \
np.trace(density_matrix) * np.identity(2 **
num_qubits, dtype=np.complex128)
# M = k.copy()
# for i in range(1, num_qubits):
# M = np.kron(M, k)
return k
def self_tensor(matrix, n):
product = matrix
for i in range(1, n):
product = np.kron(product, matrix)
return product
num_qubits = 4
psi = 2*np.random.rand(2**num_qubits)
psi = psi / np.linalg.norm(psi)
rho = qiskit.quantum_info.DensityMatrix(psi).data
def shadow(num_experiments):
num_observers = 3**num_qubits
Us, bs = [], []
bs = create_basic_vector(num_qubits)
Us = generate_u_pauli(num_qubits)
count_i = [0] * (num_observers)
    sum_b_s = [np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)
               for _ in range(num_observers)]  # fresh array per observer; [arr] * n would share one array
for i in range(0, num_experiments):
r = np.random.randint(0, num_observers)
count_i[r] += 1
U = Us[r]
sum_b = np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)
for j in range(0, 2**num_qubits):
k = calculate_sigma(U, bs[j])
sum_b_s[r] += np.trace(k @ rho)*calculate_mu_inverse(k, num_qubits)
temp = sum_b_s[r].copy()
sum_b_s[r] = (np.conjugate(np.transpose(
temp)) @ temp) / (np.trace(np.conjugate(np.transpose(temp)) @ temp))
ps = np.zeros(num_observers)
rho_hat = np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)
rho_hat_variant = 0
for i in range(0, num_observers):
ps[i] = count_i[i] / num_experiments
traceA = np.trace(self_tensor(tqix.sigmaz(), num_qubits) @ sum_b_s[i])
traceB = np.trace(self_tensor(tqix.sigmaz(), num_qubits) @ rho)
rho_hat_variant += ps[i] * (traceA - traceB)**2
rho_hat += ps[i] * sum_b_s[i]
return rho_hat_variant, rho_hat
# new_rho_hat = (np.conjugate(np.transpose(
# rho_hat)) @ rho_hat) / (np.trace(np.conjugate(np.transpose(rho_hat)) @ rho_hat))
# fidelity = qtm.base.trace_fidelity(rho, new_rho_hat)
# trace = qtm.base.trace_distance(rho, new_rho_hat)
# return trace, fidelity, rho, new_rho_hat
# traces = []
# fidelities = []
# rho_hats = []
# for i in range(0, 1):
# trace, fidelity, rho, new_rho_hat = shadow_tomo()
# traces.append(trace)
# fidelities.append(fidelity)
# rho_hats.append(new_rho_hat.copy())
# print(np.mean(traces))
# print(np.mean(fidelities))
# print(np.std(traces))
# print(np.std(fidelities))
# min_rho_hat = (rho_hats[np.argmin(traces)])
rho_hat_variantss = []
noe_large = [10**2, 10**3, 10**4, 10**5]
for noe in noe_large:
rho_hat_variants = []
for i in range(0, 10):
rho_hat_variant, rho_hat = shadow(noe)
rho_hat_variants.append(rho_hat_variant)
rho_hat_variantss.append(rho_hat_variants)
np.savetxt("./rho_hat_variantss" + str(num_qubits) + ".csv",
rho_hat_variantss,
delimiter=",")
averages_var = [0]*4
averages_std = [0]*4
for i in range(len(noe_large)):
averages_var[i] = np.mean(rho_hat_variantss[i])
averages_std[i] = np.std(rho_hat_variantss[i])
print(averages_var)
print(averages_std)
import matplotlib.pyplot as plt
plt.plot(noe_large, averages_var)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('NoE')
plt.ylabel('Var')
plt.show()
```
L = 2, W_chain, Adam
Calculate $var(Z\otimes Z) = (\langle\tilde{\psi}|ZZ|\tilde{\psi}\rangle^2 - \langle\psi|ZZ|\psi\rangle^2)$
```
import sys
sys.path.insert(1, '../')
import qtm.fubini_study
import qtm.nqubit
import qtm.base
num_layers = 2
thetas = np.ones(num_layers*num_qubits*4)
qc = qiskit.QuantumCircuit(num_qubits, num_qubits)
qc.initialize(psi, range(0, num_qubits))
loss_values = []
thetass = []
for i in range(0, 400):
if i % 20 == 0:
print('W_chain: (' + str(num_layers) +
',' + str(num_qubits) + '): ' + str(i))
grad_loss = qtm.base.grad_loss(
qc,
qtm.nqubit.create_Wchain_layerd_state,
thetas, r=1/2, s=np.pi/2, num_layers=num_layers)
if i == 0:
m, v = list(np.zeros(thetas.shape[0])), list(
np.zeros(thetas.shape[0]))
thetas = qtm.base.adam(thetas, m, v, i, grad_loss)
thetass.append(thetas.copy())
qc_copy = qtm.nqubit.create_Wchain_layerd_state(
qc.copy(), thetas, num_layers)
loss = qtm.base.loss_basis(qtm.base.measure(
qc_copy, list(range(qc_copy.num_qubits))))
loss_values.append(loss)
variances = []
for thetas in thetass:
qc = qiskit.QuantumCircuit(num_qubits, num_qubits)
qc = qtm.nqubit.create_Wchain_layerd_state(
qc, thetas, num_layers=num_layers).inverse()
psi_hat = qiskit.quantum_info.Statevector.from_instruction(qc).data
variances.append((np.conjugate(np.transpose(psi_hat)) @ self_tensor(tqix.sigmaz(), num_qubits) @ psi_hat)
** 2 - (np.conjugate(np.transpose(psi)) @ self_tensor(tqix.sigmaz(), num_qubits) @ psi)**2)
plt.plot(variances)
np.savetxt("./thetass"+ str(num_qubits) + ".csv",
thetass,
delimiter=",")
np.savetxt("./variances" + str(num_qubits) + ".csv",
variances,
delimiter=",")
min((abs(x), x) for x in variances)[1]
variances[-1]
```
---
# Building a docker container for training/deploying our classifier
In this exercise we'll create a Docker image that will have the required code for training and deploying a ML model. In this particular example, we'll use scikit-learn (https://scikit-learn.org/) and the **Random Forest** implementation of that library to train a flower classifier. The dataset used in this experiment is a toy dataset called Iris (http://archive.ics.uci.edu/ml/datasets/iris). The challenge itself is very basic, so you can focus on the mechanics and the features of this automated environment.
A first pipeline will be executed automatically at the end of this exercise. It will get the assets you'll push to a Git repo, build this image and push it to ECR, a Docker image repository used by SageMaker.
> **Question**: Why would I create a scikit-learn container from scratch if SageMaker already offers one (https://docs.aws.amazon.com/sagemaker/latest/dg/sklearn.html)?
> **Answer**: This is an exercise and the idea here is also to show you how you can create your own container. In a real-life scenario, the best approach is to use the native container offered by SageMaker.
## Why do I have to do this?
If you're asking yourself this question, you probably don't need to create a custom container. In that case, you can skip this section by clicking on the link below and use the built-in container with XGBoost to run the automated pipeline.
> [Skip this section](../02_TrainYourModel/01_Training%20our%20model.ipynb) and start training your ML model
## PART 1 - Creating the assets required to build/test a docker image
### 1.1 Let's start by creating the training script!
As you can see, this is a very basic example of Scikit-Learn. Nothing fancy.
```
%%writefile train.py
import os
import pandas as pd
import re
import joblib
import json
import traceback
import sys
from sklearn.ensemble import RandomForestClassifier
def load_dataset(path):
# Take the set of files and read them all into a single pandas dataframe
files = [ os.path.join(path, file) for file in os.listdir(path) ]
print(files)
if len(files) == 0:
raise ValueError("Invalid # of files in dir: {}".format(path))
raw_data = [ pd.read_csv(file, sep=",", header=None ) for file in files ]
data = pd.concat(raw_data)
print(data.head(10))
# labels are in the first column
y = data.iloc[:,0]
X = data.iloc[:,1:]
return X,y
def start(args):
print("Training mode")
try:
X_train, y_train = load_dataset(args.train)
X_test, y_test = load_dataset(args.validation)
hyperparameters = {
"max_depth": args.max_depth,
"verbose": 1, # show all logs
"n_jobs": args.n_jobs,
"n_estimators": args.n_estimators
}
print("Training the classifier")
model = RandomForestClassifier()
model.set_params(**hyperparameters)
model.fit(X_train, y_train)
print("Score: {}".format( model.score(X_test, y_test)) )
joblib.dump(model, open(os.path.join(args.model_dir, "iris_model.pkl"), "wb"))
except Exception as e:
# Write out an error file. This will be returned as the failureReason in the
# DescribeTrainingJob result.
trc = traceback.format_exc()
output_path="/tmp/"
with open(os.path.join(output_path, "failure"), "w") as s:
s.write("Exception during training: " + str(e) + "\\n" + trc)
# Printing this causes the exception to be in the training job logs, as well.
print("Exception during training: " + str(e) + "\\n" + trc, file=sys.stderr)
# A non-zero exit code causes the training job to be marked as Failed.
sys.exit(255)
```
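The loader above expects every file in a channel directory to be a header-less CSV with the label in the first column. A small self-contained sketch of that layout (file and directory names are made up for illustration):

```python
import os
import tempfile
import pandas as pd

# build a tiny header-less CSV: label first, features after
path = tempfile.mkdtemp()
pd.DataFrame([[0, 5.1, 3.5], [1, 6.2, 2.9]]).to_csv(
    os.path.join(path, 'iris_part0.csv'), header=False, index=False)

# same split logic as load_dataset in train.py
files = [os.path.join(path, f) for f in os.listdir(path)]
data = pd.concat(pd.read_csv(f, sep=',', header=None) for f in files)
y = data.iloc[:, 0]   # labels
X = data.iloc[:, 1:]  # features
print(X.shape, y.tolist())
```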
### 1.2 OK, let's now create the handler. The **Inference Handler** is how we use the SageMaker Inference Toolkit to encapsulate our code and expose it as a SageMaker container.
SageMaker Inference Toolkit: https://github.com/aws/sagemaker-inference-toolkit
```
%%writefile handler.py
import os
import sys
import joblib
from sagemaker_inference.default_inference_handler import DefaultInferenceHandler
from sagemaker_inference.default_handler_service import DefaultHandlerService
from sagemaker_inference import content_types, errors, transformer, encoder, decoder
class HandlerService(DefaultHandlerService, DefaultInferenceHandler):
def __init__(self):
op = transformer.Transformer(default_inference_handler=self)
super(HandlerService, self).__init__(transformer=op)
## Loads the model from the disk
def default_model_fn(self, model_dir):
model_filename = os.path.join(model_dir, "iris_model.pkl")
return joblib.load(open(model_filename, "rb"))
## Parse and check the format of the input data
def default_input_fn(self, input_data, content_type):
if content_type != "text/csv":
raise Exception("Invalid content-type: %s" % content_type)
return decoder.decode(input_data, content_type).reshape(1,-1)
## Run our model and do the prediction
def default_predict_fn(self, payload, model):
return model.predict( payload ).tolist()
## Gets the prediction output and format it to be returned to the user
def default_output_fn(self, prediction, accept):
if accept != "text/csv":
raise Exception("Invalid accept: %s" % accept)
return encoder.encode(prediction, accept)
```
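Conceptually, the toolkit's `Transformer` calls these hooks in a fixed order: `default_model_fn` once at startup, then `default_input_fn` → `default_predict_fn` → `default_output_fn` for every request. Here is a dependency-free sketch of that chain using plain-Python stand-ins (the lambda model and CSV helpers below are illustrative, not the sagemaker_inference API):

```python
# Stand-ins for the three per-request handler hooks, chained the way the
# toolkit's Transformer invokes them: input_fn -> predict_fn -> output_fn.
def input_fn(body, content_type):
    if content_type != "text/csv":
        raise ValueError("Invalid content-type: %s" % content_type)
    return [[float(v) for v in body.split(",")]]  # shape (1, n_features)

def predict_fn(payload, model):
    return [model(row) for row in payload]

def output_fn(prediction, accept):
    if accept != "text/csv":
        raise ValueError("Invalid accept: %s" % accept)
    return ",".join(str(p) for p in prediction)

model = lambda row: round(sum(row))  # hypothetical stand-in for the RandomForest
payload = input_fn("5.1,3.5,1.4,0.2", "text/csv")
response = output_fn(predict_fn(payload, model), "text/csv")
```

The real handler above differs only in the heavy lifting: `decoder.decode`/`encoder.encode` do the CSV parsing, and the model is the pickled RandomForest loaded by `default_model_fn`.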
### 1.3 Now we need to create the entrypoint of our container. The main function
We'll use **SageMaker Training Toolkit** (https://github.com/aws/sagemaker-training-toolkit) to work with the arguments and environment variables defined by SageMaker. This library will make our code simpler.
```
%%writefile main.py
import train
import argparse
import sys
import os
import traceback
from sagemaker_inference import model_server
from sagemaker_training import environment
if __name__ == "__main__":
if len(sys.argv) < 2 or ( not sys.argv[1] in [ "serve", "train" ] ):
raise Exception("Invalid argument: you must pass 'train' for training mode or 'serve' for serving mode")
if sys.argv[1] == "train":
env = environment.Environment()
parser = argparse.ArgumentParser()
# https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md
parser.add_argument("--max-depth", type=int, default=10)
parser.add_argument("--n-jobs", type=int, default=env.num_cpus)
parser.add_argument("--n-estimators", type=int, default=120)
# reads input channels training and testing from the environment variables
parser.add_argument("--train", type=str, default=env.channel_input_dirs["train"])
parser.add_argument("--validation", type=str, default=env.channel_input_dirs["validation"])
parser.add_argument("--model-dir", type=str, default=env.model_dir)
args,unknown = parser.parse_known_args()
train.start(args)
else:
model_server.start_model_server(handler_service="serving.handler")
```
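Note the `args,unknown = parser.parse_known_args()` line: unlike `parse_args()`, it tolerates hyperparameters the script does not declare, returning them as a leftover list instead of exiting. A small standalone illustration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--max-depth", type=int, default=10)
parser.add_argument("--n-estimators", type=int, default=120)

# parse_known_args() returns (namespace, leftovers) instead of exiting
# when it meets a flag it does not recognize.
args, unknown = parser.parse_known_args(["--max-depth", "20", "--future-flag", "1"])
```

This is why SageMaker can pass extra hyperparameters to the container without crashing the argument parser.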
### 1.4 Then, we can create the Dockerfile
Just pay attention to the packages we'll install in our container. Here, we'll use **SageMaker Inference Toolkit** (https://github.com/aws/sagemaker-inference-toolkit) and **SageMaker Training Toolkit** (https://github.com/aws/sagemaker-training-toolkit) to prepare the container for training/serving our model. By **serving** we mean exposing our model as a web service that can be called through an API call.
```
%%writefile Dockerfile
FROM python:3.7-buster
# Set a docker label to advertise multi-model support on the container
LABEL com.amazonaws.sagemaker.capabilities.multi-models=false
# Set a docker label to enable container to use SAGEMAKER_BIND_TO_PORT environment variable if present
LABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true
RUN apt-get update -y && apt-get -y install --no-install-recommends default-jdk
RUN rm -rf /var/lib/apt/lists/*
RUN pip --no-cache-dir install multi-model-server sagemaker-inference sagemaker-training
RUN pip --no-cache-dir install pandas numpy scipy scikit-learn
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PYTHONPATH="/opt/ml/code:${PYTHONPATH}"
COPY main.py /opt/ml/code/main.py
COPY train.py /opt/ml/code/train.py
COPY handler.py /opt/ml/code/serving/handler.py
ENTRYPOINT ["python3", "/opt/ml/code/main.py"]
```
### 1.5 Finally, let's create the buildspec
This file will be used by CodeBuild for creating our Container image.
With this file, CodeBuild will run the "docker build" command, using the assets we created above, and deploy the image to the Registry.
As you can see, each command is a bash command that will be executed from inside a Linux Container.
```
%%writefile buildspec.yml
version: 0.2
phases:
install:
runtime-versions:
docker: 18
pre_build:
commands:
- echo Logging in to Amazon ECR...
- $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
- docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker image...
- echo docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
- echo $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > image.url
- echo Done
artifacts:
files:
- image.url
name: image_url
discard-paths: yes
```
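The `docker tag`/`docker push` lines all assemble the same fully qualified image URI from the four environment variables. A tiny helper (hypothetical, for illustration only) makes the naming scheme explicit:

```python
def ecr_image_uri(account_id, region, repo_name, image_tag):
    """Fully qualified ECR URI, mirroring the buildspec's docker tag line."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo_name}:{image_tag}"

# Example with placeholder values for the four buildspec variables:
uri = ecr_image_uri("123456789012", "us-east-1", "iris-model", "test")
```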
## PART 2 - Local Test: Let's build the image locally and do some tests
### 2.1 Building the image locally, first
Each SageMaker Jupyter notebook instance already has a **Docker** environment pre-installed, so we can work with Docker containers directly from this environment.
```
!docker build -f Dockerfile -t iris_model:1.0 .
```
### 2.2 Now that we have the algorithm image we can run it to train/deploy a model
### Then, we need to prepare the dataset
You'll see that we're splitting the dataset into training and validation subsets and saving each to a CSV file. These files will then be uploaded to an S3 bucket and shared with SageMaker.
```
!rm -rf input
!mkdir -p input/data/train
!mkdir -p input/data/validation
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
dataset = np.insert(iris.data, 0, iris.target,axis=1)
df = pd.DataFrame(data=dataset, columns=["iris_id"] + iris.feature_names)
X = df.iloc[:,1:]
y = df.iloc[:,0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
train_df = X_train.copy()
train_df.insert(0, "iris_id", y_train)
train_df.to_csv("input/data/train/training.csv", sep=",", header=None, index=None)
test_df = X_test.copy()
test_df.insert(0, "iris_id", y_test)
test_df.to_csv("input/data/validation/testing.csv", sep=",", header=None, index=None)
df.head()
```
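A quick sanity check on the split sizes: with `test_size=0.33` and 150 iris rows, scikit-learn rounds the test fraction up, so the validation file gets 50 rows and the training file 100. The arithmetic, assuming that ceiling behavior:

```python
import math

# 150 iris rows, test_size=0.33; scikit-learn rounds the test split up.
n_samples, test_size = 150, 0.33
n_test = math.ceil(n_samples * test_size)
n_train = n_samples - n_test
```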
### 2.3 Just a basic local test, using the local Docker daemon
Here we will simulate SageMaker calling our docker container for training and serving. We'll do that using the built-in Docker Daemon of the Jupyter Notebook Instance.
```
!rm -rf input/config && mkdir -p input/config
%%writefile input/config/hyperparameters.json
{"max_depth": 20, "n_jobs": 4, "n_estimators": 120}
%%writefile input/config/resourceconfig.json
{"current_host": "localhost", "hosts": ["algo-1-kipw9"]}
%%writefile input/config/inputdataconfig.json
{"train": {"TrainingInputMode": "File"}, "validation": {"TrainingInputMode": "File"}}
%%time
!rm -rf model/
!mkdir -p model
print( "Training...")
!docker run --rm --name "my_model" \
-v "$PWD/model:/opt/ml/model" \
-v "$PWD/input:/opt/ml/input" iris_model:1.0 train
```
### 2.4 This is the serving test. It simulates an Endpoint exposed by SageMaker
After you execute the next cell, this Jupyter notebook will freeze. A webservice will be exposed on port 8080.
```
!docker run --rm --name "my_model" \
-p 8080:8080 \
-v "$PWD/model:/opt/ml/model" \
-v "$PWD/input:/opt/ml/input" iris_model:1.0 serve
```
> While the above cell is running, click here [TEST NOTEBOOK](02_Testing%20our%20local%20model%20server.ipynb) to run some tests.
> After you finish the tests, press **STOP**
## PART 3 - Integrated Test: Everything seems OK, now it's time to put it all together
We'll start by running a local **CodeBuild** test, to check the buildspec and also deploy this image into the container registry. Remember that SageMaker will only see images published to ECR.
```
import boto3
sts_client = boto3.client("sts")
session = boto3.session.Session()
account_id = sts_client.get_caller_identity()["Account"]
region = session.region_name
credentials = session.get_credentials()
credentials = credentials.get_frozen_credentials()
repo_name="iris-model"
image_tag="test"
!sudo rm -rf tests && mkdir -p tests
!cp handler.py main.py train.py Dockerfile buildspec.yml tests/
with open("tests/vars.env", "w") as f:
f.write("AWS_ACCOUNT_ID=%s\n" % account_id)
f.write("IMAGE_TAG=%s\n" % image_tag)
f.write("IMAGE_REPO_NAME=%s\n" % repo_name)
f.write("AWS_DEFAULT_REGION=%s\n" % region)
f.write("AWS_ACCESS_KEY_ID=%s\n" % credentials.access_key)
f.write("AWS_SECRET_ACCESS_KEY=%s\n" % credentials.secret_key)
f.write("AWS_SESSION_TOKEN=%s\n" % credentials.token )
f.close()
!cat tests/vars.env
%%time
!/tmp/aws-codebuild/local_builds/codebuild_build.sh \
-a "$PWD/tests/output" \
-s "$PWD/tests" \
-i "samirsouza/aws-codebuild-standard:3.0" \
-e "$PWD/tests/vars.env" \
-c
```
> Now that we have an image deployed in the ECR repo we can also run some local tests using the SageMaker Estimator.
> Click on this [TEST NOTEBOOK](03_Testing%20the%20container%20using%20SageMaker%20Estimator.ipynb) to run some tests.
> After you finish the tests, come back to **this notebook** to push the assets to the Git Repo
## PART 4 - Let's push all the assets to the Git Repo connected to the Build pipeline
There is a CodePipeline configured to listen to this Git repo and start a new build with CodeBuild on each push.
```
%%bash
cd ../../../mlops
git branch iris_model
git checkout iris_model
cp $OLDPWD/buildspec.yml $OLDPWD/handler.py $OLDPWD/train.py $OLDPWD/main.py $OLDPWD/Dockerfile .
git add --all
git commit -a -m " - files for building an iris model image"
git push --set-upstream origin iris_model
```
> Alright, now open the AWS console and go to the **CodePipeline** dashboard. Look for a pipeline called **mlops-iris-model**. This pipeline will deploy the final image to an ECR repo. When this process finishes, open the **Elastic Container Registry (ECR)** dashboard in the AWS console and check if you have an image called **iris-model:latest**. If yes, you can go to the next exercise. If not, wait a little longer.
```
import cv2
record = False
cap = cv2.VideoCapture(0)
if not cap.isOpened():
print("Unable to read camera feed")
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
out = cv2.VideoWriter('output.avi',cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width,frame_height))
while(True):
ret, frame = cap.read()
k = cv2.waitKey(1)
if ret == True:
cv2.imshow('frame',frame)
# press space key to start recording
if k%256 == 32:
record = True
if record:
out.write(frame)
# press q key to close the program
if k & 0xFF == ord('q'):
break
else:
break
cap.release()
out.release()
cv2.destroyAllWindows()
%matplotlib inline
import numpy as np
# Load the networks inputs
# Useful Constants
# Output classes to learn how to classify
LABELS = [
"JUMPING",
"JUMPING_JACKS",
"BOXING",
"WAVING_2HANDS",
"WAVING_1HAND",
"CLAPPING_HANDS"
]
n_steps = 32 # 32 timesteps per series
def load_X(X_path):
file = open(X_path, 'r')
X_ = np.array(
[elem for elem in [
row.split(',') for row in file
]],
dtype=np.float32
)
file.close()
blocks = int(len(X_) / n_steps)
X_ = np.array(np.split(X_,blocks))
return X_
# Load the networks outputs
def load_y(y_path):
file = open(y_path, 'r')
y_ = np.array(
[elem for elem in [
row.replace('  ', ' ').strip().split(' ') for row in file
]],
dtype=np.int32
)
file.close()
# for 0-based indexing
return y_ - 1
import torch
import torch.nn as nn
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
import torch
import torch.nn as nn
import numpy as np
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
class LSTM(nn.Module):
def __init__(self,input_dim,hidden_dim,output_dim,layer_num):
super(LSTM,self).__init__()
self.hidden_dim = hidden_dim
self.output_dim = output_dim
self.lstm = torch.nn.LSTM(input_dim,hidden_dim,layer_num,batch_first=True)
self.fc = torch.nn.Linear(hidden_dim,output_dim)
self.bn = nn.BatchNorm1d(32)
def forward(self,inputs):
x = self.bn(inputs)
lstm_out,(hn,cn) = self.lstm(x)
out = self.fc(lstm_out[:,-1,:])
return out
arr = np.array([[307.589,162.976,319.364,205.944,293.267,204.68,285.434,250.281,277.616,286.729,349.276,208.539,357.095,255.523,349.362,297.17,297.183,290.662,307.599,351.938,308.905,401.454,329.758,291.958,333.701,357.132,337.546,411.953,304.994,160.291,315.459,160.269,0,0,329.752,161.651]])
np.r_[arr,[[307.567,162.979,319.362,205.947,293.257,204.695,285.428,250.28,277.616,285.504,349.282,208.541,357.101,255.503,349.361,297.149,297.182,290.671,307.603,351.914,307.689,401.429,329.757,291.969,333.698,357.127,337.545,411.937,304.969,160.295,315.434,160.268,0,0,328.527,161.655]]]
print(arr)
inputs = load_X("demofile3.txt")
inputs=torch.from_numpy(inputs)
inputs
n_hidden = 128
n_joints = 18*2
n_categories = 6
n_layer = 3
#model = torch.load('lstm_6_bn.pkl')
model = LSTM(n_joints,n_hidden,n_categories,n_layer)
model.load_state_dict(torch.load('lstm_6_bn.pkl'))
model.eval()
#category_tensor, inputs = randomTrainingExampleBatch(1,'test',0)
#category = LABELS[int(category_tensor[0])]
inputs = inputs.to(device)
model.cuda()
output = model(inputs)
top_n, top_i = output.topk(1)
category_i = top_i[0].item()
category = LABELS[category_i]
category_ii = LABELS.index(category)
category ,category_ii,inputs
```
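The key transformation in `load_X` above is the windowing step: with `n_steps = 32`, every 32 consecutive frames (each a vector of 18 joints × 2 coordinates = 36 values) become one LSTM input sequence. A dependency-free sketch of that reshaping:

```python
n_steps = 32                     # timesteps per series, as in the notebook
n_features = 18 * 2              # 18 joints x 2 coordinates
frames = [[0.0] * n_features for _ in range(64)]  # 64 fake frames

blocks = len(frames) // n_steps  # -> 2 complete windows
X = [frames[i * n_steps:(i + 1) * n_steps] for i in range(blocks)]
```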
# PoseNet
```
import torch
import cv2
import time
#import argparse
import posenet
#parser = argparse.ArgumentParser()
#parser.add_argument('--model', type=int, default=101)
#parser.add_argument('--cam_id', type=int, default=0)
#parser.add_argument('--cam_width', type=int, default=1280)
#parser.add_argument('--cam_height', type=int, default=720)
#parser.add_argument('--scale_factor', type=float, default=0.7125)
#args = parser.parse_args()
model = posenet.load_model(int(101))#args.model
model = model.cuda()
output_stride = model.output_stride
cap = cv2.VideoCapture(0)#args.cam_id = 0
cap.set(3, 1280)#args.cam_width=1280
cap.set(4, 720)#args.cam_height=720
start = time.time()
frame_count = 0
while True:
input_image, display_image, output_scale = posenet.read_cap(
cap, scale_factor= 0.7125, output_stride=output_stride)#args.scale_factor=0.7125
with torch.no_grad():
input_image = torch.Tensor(input_image).cuda()
heatmaps_result, offsets_result, displacement_fwd_result, displacement_bwd_result = model(input_image)
pose_scores, keypoint_scores, keypoint_coords = posenet.decode_multiple_poses(
heatmaps_result.squeeze(0),
offsets_result.squeeze(0),
displacement_fwd_result.squeeze(0),
displacement_bwd_result.squeeze(0),
output_stride=output_stride,
max_pose_detections=10,
min_pose_score=0.15)
keypoint_coords *= output_scale
# TODO this isn't particularly fast, use GL for drawing and display someday...
overlay_image = posenet.draw_skel_and_kp(
display_image, pose_scores, keypoint_scores, keypoint_coords,
min_pose_score=0.15, min_part_score=0.1)
cv2.imshow('posenet', overlay_image)
frame_count += 1
if cv2.waitKey(1) & 0xFF == ord('q'):
break
print('Average FPS: ', frame_count / (time.time() - start))
```
# Convert to TensorRT
```
import torch
from torch2trt import torch2trt
model = LSTM(n_joints,n_hidden,n_categories,n_layer)
model.load_state_dict(torch.load('lstm_6_bn.pkl'))
model.cuda().eval()
x = torch.ones((1,32,36)).half().cuda()
model(x)
import pdb
pdb.pm()
class LSTM(nn.Module):
def __init__(self):
super(LSTM,self).__init__()
self.fc = torch.nn.Linear(12,1)
def forward(self,inputs):
out = self.fc(inputs)
return out
a = LSTM()
a.cuda().eval()
x = torch.ones((1,12)).cuda()
model_trt = torch2trt(a, [x])
!wget https://github.com/xieyulai/LSTM-for-Human-Activity-Recognition-using-2D-Pose_Pytorch/blob/master/lstm.ipynb
```
# Simulate Artificial Physiological Signals
NeuroKit's core signal processing functions cover electrocardiogram (ECG), respiratory (RSP), electrodermal activity (EDA), and electromyography (EMG) data. This example shows how to use NeuroKit to simulate these physiological signals with customized parametric control.
```
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
```
## Cardiac Activity (ECG)
With `ecg_simulate()`, you can generate an artificial ECG signal of a desired length (here, `duration=10`), noise level, and heart rate. As you can see in the plot below, *ecg50* has about half as many heartbeats as *ecg100*, and it also has more noise in the signal.
```
# Alternate heart rate and noise levels
ecg50 = nk.ecg_simulate(duration=10, noise=0.05, heart_rate=50)
ecg100 = nk.ecg_simulate(duration=10, noise=0.01, heart_rate=100)
# Visualize
pd.DataFrame({"ECG_100": ecg100,
"ECG_50": ecg50}).plot()
```
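A back-of-the-envelope check on the plot (an illustrative sketch, not NeuroKit code): the expected beat count of a strip is simply duration × rate / 60, which is why *ecg100* shows roughly twice as many complexes as *ecg50*:

```python
def expected_beats(duration_s, heart_rate_bpm):
    """Rough beat count for a simulated strip: rate (beats/min) x minutes."""
    return duration_s * heart_rate_bpm / 60.0

beats_50 = expected_beats(10, 50)    # ~8.3 beats in ecg50
beats_100 = expected_beats(10, 100)  # ~16.7 beats in ecg100
```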
You can also choose to generate the default, simple simulation based on Daubechies wavelets, which roughly approximates one cardiac cycle, or a more complex one by specifying `method="ecgsyn"`.
```
# Alternate methods
ecg_sim = nk.ecg_simulate(duration=10, method="simple")
ecg_com = nk.ecg_simulate(duration=10, method="ecgsyn")
# Visualize
pd.DataFrame({"ECG_Simple": ecg_sim,
"ECG_Complex": ecg_com}).plot(subplots=True)
```
## Respiration (RSP)
To simulate a synthetic respiratory signal, you can use `rsp_simulate()` and choose a specific duration and breathing rate. In the example below, you can see that *rsp7* has a lower breathing rate than the two *rsp15* signals. You can also decide which model you want to use to generate the signal. The simple *rsp15_sim* signal uses `method="sinusoidal"`, which approximates a respiratory cycle with a trigonometric sine wave. The complex *rsp15_com* signal specifies `method="breathmetrics"`, which uses a more advanced model that interpolates inhalation and exhalation pauses between respiratory cycles.
```
# Simulate
rsp15_sim = nk.rsp_simulate(duration=20, respiratory_rate=15, method="sinusoidal")
rsp15_com = nk.rsp_simulate(duration=20, respiratory_rate=15, method="breathmetrics")
rsp7 = nk.rsp_simulate(duration=20, respiratory_rate=7, method="breathmetrics")
# Visualize respiration rate
pd.DataFrame({"RSP7": rsp7,
"RSP15_simple": rsp15_sim,
"RSP15_complex": rsp15_com}).plot(subplots=True)
```
## Electromyography (EMG)
Now we come to generating an artificial EMG signal using `emg_simulate()`. Here, you can specify the number of bursts of muscular activity in the signal (`burst_number`) as well as the duration of the bursts (`burst_duration`). As you can see, the active muscle periods in *EMG2_Longer* last longer than those in *EMG2*, and *EMG5* contains more bursts than the former two.
```
# Simulate
emg2 = nk.emg_simulate(duration=10, burst_number=2, burst_duration=1.0)
emg2_long = nk.emg_simulate(duration=10, burst_number=2, burst_duration=1.5)
emg5 = nk.emg_simulate(duration=10, burst_number=5, burst_duration=1.0)
# Visualize
pd.DataFrame({"EMG2": emg2,
"EMG2_Longer": emg2_long,
"EMG5": emg5}).plot(subplots=True)
```
## Electrodermal Activity (EDA)
Finally, `eda_simulate()` can be used to generate a synthetic EDA signal of a given duration, specifying the number of skin conductance responses or activity 'peaks' (`scr_number`) and the `drift` of the signal. You can also modify the noise level of the signal.
```
# Simulate
eda1 = nk.eda_simulate(duration=10, scr_number=1, drift=-0.01, noise=0.05)
eda3 = nk.eda_simulate(duration=10, scr_number=3, drift=-0.01, noise=0.01)
eda3_long = nk.eda_simulate(duration=10, scr_number=3, drift=-0.1, noise=0.01)
# Visualize
pd.DataFrame({"EDA1": eda1,
"EDA3": eda3,
"EDA3_Longer": eda3_long}).plot(subplots=True)
```
```
# All necessary imports here
import oci
import os.path
import sys
import json
import logging
import pprint
import re
from collections import Counter
import ipaddr #pip3 install ipaddr
# Read config and create clients (identity,network,etc.)
config = oci.config.from_file()
identity_client = oci.identity.IdentityClient(config)
virtual_network_client = oci.core.VirtualNetworkClient(config)
virtual_network_composite_operations = oci.core.VirtualNetworkClientCompositeOperations(virtual_network_client)
# local logger
def local_logger(value):
print(value)
def print_decorator(message):
print ("====================")
print (message)
print ("====================")
# Helper methods for extracting and json_lookup
def extract_value_by_field(obj, key):
"""Pull all values of specified key from nested JSON."""
arr = []
def extract(obj, arr, key):
"""Recursively search for values of key in JSON tree."""
if isinstance(obj, dict):
for k, v in obj.items():
if isinstance(v, (dict, list)):
extract(v, arr, key)
elif k == key:
arr.append(v)
elif isinstance(obj, list):
for item in obj:
extract(item, arr, key)
return arr
results = extract(obj, arr, key)
return results
# helper method to convert response to dictionary
def convert_response_to_dict(oci_response):
return oci.util.to_dict(oci_response.data)
# Special Characters Regex validation
def special_char_regex_validation(value):
special_chars_regex = re.compile('[@!#$%^&*()<>?/\|}{~:`]')
special_char_check = special_chars_regex.search(value)
if special_char_check is None:
return True
else:
return False
# IPV4 CIDR Notation Regex Validation
def ipv4_regex_validation(value):
ipv4_cidr_regex = re.compile('^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/(1[0-9]|2[0-9]|3[0-1])$')
ipv4_cidr_check = ipv4_cidr_regex.search(value)
if ipv4_cidr_check is None:
return False
else:
return True
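# Quick sanity check of the CIDR validator above (an illustrative sketch
# added here, not part of the original notebook). Note the prefix-length
# group of the regex only accepts /10 through /31, so /8 and /32 are
# rejected by design.
import re
_cidr_check = re.compile(r'^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/(1[0-9]|2[0-9]|3[0-1])$')
assert _cidr_check.search("10.0.0.0/16") is not None
assert _cidr_check.search("192.168.1.0/24") is not None
assert _cidr_check.search("10.0.0.0/8") is None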
# JSON file existence validation
def validate_file_exist():
    if os.path.isfile('./hub_spokes.json'):
        local_logger("====================")
        local_logger("Input file hub_spokes.json exists !")
        local_logger("====================")
        return True
    else:
        local_logger("====================")
        local_logger("Input file hub_spokes.json does not exist !")
        local_logger("====================")
        return False
# Json file complete payload validation
def validate_json(payload):
# Name sanitization check i.e. all vcn name and dns-name and drg-name
hub_check = False
spoke_check = False
hub_vcn = payload["hub"]["vcn"]
hub_drg = payload["hub"]["drg"]
spokes = payload["spokes"]
spoke_vcns = []
for spoke in spokes:
spoke_vcns.append(spoke["vcn"])
hub_vcn_name = special_char_regex_validation(hub_vcn["name"])
hub_vcn_dns_name = special_char_regex_validation(hub_vcn["dns_name"])
hub_drg_name = special_char_regex_validation(hub_drg["name"])
if hub_vcn_name and hub_vcn_dns_name and hub_drg_name:
hub_check = True
else:
hub_check = False
for spoke in spoke_vcns:
spoke_vcn_name = special_char_regex_validation(spoke["name"])
spoke_vcn_dns_name = special_char_regex_validation(spoke["dns_name"])
if spoke_vcn_name and spoke_vcn_dns_name:
spoke_check = True
else:
spoke_check = False
break
if hub_check and spoke_check:
local_logger("============SUCCESS=======")
local_logger("HUB AND SPOKE CHECK FOR SPECIAL CHARACTER PASSED !")
local_logger("==========================")
else:
local_logger("===========FAILED=========")
local_logger("HUB AND SPOKE CHECK FOR SPECIAL CHARACTER FAILED !")
local_logger("==========================")
# CIDR overlap check VCN level
hub_cidr_blocks = hub_vcn["cidr"]
cidr_blocks = []
cidr_blocks.append(ipaddr.IPNetwork(hub_cidr_blocks))
for spoke_vcn in spoke_vcns:
cidr_blocks.append(ipaddr.IPNetwork(spoke_vcn["cidr"]))
overlap_found = False
for index in range(len(cidr_blocks)):
    for ind in range(index + 1, len(cidr_blocks)):
        if cidr_blocks[index].overlaps(cidr_blocks[ind]):
            overlap_found = True
if overlap_found:
    hub_check = False
    spoke_check = False
    local_logger("=========FAILED===========")
    local_logger("OVERLAPPING CIDR BLOCKS FOUND IN THE INPUT HUB/SPOKE !")
    local_logger("==========================")
else:
    local_logger("==========SUCCESS=========")
    local_logger("CIDR BLOCKS CHECK IN THE INPUT HUB/SPOKE SUCCESSFULLY VALIDATED !")
    local_logger("==========================")
return True if hub_check and spoke_check else False
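# Stdlib illustration of the pairwise overlap test above (an added sketch;
# the notebook itself uses the third-party 'ipaddr' package, but the
# built-in 'ipaddress' module exposes the same overlaps() idea):
import ipaddress
_hub_net = ipaddress.ip_network("10.0.0.0/16")
_spoke_overlapping = ipaddress.ip_network("10.0.128.0/17")  # inside the hub range
_spoke_disjoint = ipaddress.ip_network("10.1.0.0/16")
assert _hub_net.overlaps(_spoke_overlapping)
assert not _hub_net.overlaps(_spoke_disjoint)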
# Compartment validation
def fetch_all_compartments_in_tenancy(client):
"""Fetch all Compartments in Tenancy , and look across all subtrees."""
compartmentResponse = oci.pagination.list_call_get_all_results(
client.list_compartments,
compartment_id=tenancy_ocid,
limit=200,
access_level="ACCESSIBLE",
compartment_id_in_subtree=True,
retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,
)
return convert_response_to_dict(compartmentResponse)
def checkByOCID_if_compartment_active(compartment_id, client):
compartment_response = client.get_compartment(
compartment_id=compartment_id, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,
)
compartment = convert_response_to_dict(compartment_response)
return True if compartment["lifecycle_state"] == "ACTIVE" else False
def filter_compartments_by_state(compartmentList=[], compartmentState="ACTIVE"):
"""Filter Compartments by their lifecycle state, ACTIVE | DELETING | DELETED | CREATING"""
filteredCompartments = [
compartment
for compartment in compartmentList
if compartment["lifecycle_state"] == compartmentState
]
return filteredCompartments
# VCN Checks
# Check if VCN created matches by Name
def checkVcnNameMatch(client, compartment_id, vcn_name):
listVCNReponse = client.list_vcns(compartment_id = compartment_id)
vcns = convert_response_to_dict(listVCNReponse)
vcn_names = extract_value_by_field(vcns, "display_name")
if vcn_name in vcn_names:
print_decorator("VCN NAME ALREADY EXIST. SKIPPING VCN CREATION")
return True
else:
return False
# Get VCN OCID for matched VCN Name
def checkVCNNameGetOCID(client, compartment_id, vcn_name):
    listVCNResponse = client.list_vcns(compartment_id=compartment_id)
    vcns = convert_response_to_dict(listVCNResponse)
    for vcn in vcns:
        if vcn["display_name"] == vcn_name:
            return vcn["id"]
    return None
# Get VCN OCID for matched VCN CIDR
def checkVCNCidrGetOCID(client, compartment_id, cidr):
    listVCNResponse = client.list_vcns(compartment_id=compartment_id)
    vcns = convert_response_to_dict(listVCNResponse)
    for vcn in vcns:
        if vcn["cidr_block"] == cidr:
            return vcn["id"]
    return None
# Check if VCN is in available state
def checkVCNStateAvailable(client, compartment_id, vcn_id):
getVCNResponse = client.get_vcn(vcn_id = vcn_id)
vcn = convert_response_to_dict(getVCNResponse)
if vcn["lifecycle_state"] == "AVAILABLE":
return True
else:
return False
# Check if VCN created matches by CIDR
def checkVcnCidrMatch(client, compartment_id, vcn_cidr):
    listVCNResponse = client.list_vcns(compartment_id=compartment_id)
    vcns = convert_response_to_dict(listVCNResponse)
    for vcn in vcns:
        if vcn["cidr_block"] == vcn_cidr:
            return True
    return False
# Get VCN by VCN OCID
def getVCNbyOCID(client, compartment_id, vcn_ocid):
try:
getVCN = client.get_vcn(vcn_id = vcn_ocid)
vcn = getVCN.data
return vcn.id
except Exception as e:
return None
getVCNbyOCID(virtual_network_client, config["compartment"], "ocid1.vcn.oc1.phx.amaaaaaa43cggciamaezwaggscjbvqgbwzbo4zlakrisgc4xrnkh2k6kbzga")
# Create VCN
def create_new_vcn(client,composite_client, vcn_object):
vcn_details = oci.core.models.CreateVcnDetails(cidr_block=vcn_object["cidr"],
display_name=vcn_object["name"],
compartment_id=vcn_object["compartment_id"],
dns_label= vcn_object["dns_name"])
VCNResponse = composite_client.create_vcn_and_wait_for_state(
vcn_details,
wait_for_states=[oci.core.models.Vcn.LIFECYCLE_STATE_AVAILABLE]
)
vcn = convert_response_to_dict(VCNResponse)
return vcn["id"]
def check_before_vcn_create(client,composite_client, compartment_id, vcn_object):
vcn_ocid = None
checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client)
print_decorator("COMPARTMENT EXISTS AND ACTIVE !") if checkCompartmentActive else print_decorator("COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !")
checkVCNNameMatch = checkVcnNameMatch(client, compartment_id, vcn_object["name"])
if checkVCNNameMatch:
vcn_ocid = checkVCNNameGetOCID(client, compartment_id, vcn_object["name"])
checkVCNCIDRMatch = checkVcnCidrMatch(client, compartment_id, vcn_object["cidr"])
if checkVCNCIDRMatch:
vcn_ocid = checkVCNCidrGetOCID(client, compartment_id, vcn_object["cidr"])
if vcn_ocid is None:
return create_new_vcn(client,composite_client, vcn_object)
else:
print_decorator("VCN ALREADY EXIST ! SKIPPING VCN CREATION !")
return vcn_ocid
# DRG Checks
# Check if DRG exist by name
def is_drg_exist_by_name(client, compartment_ocid, drg_name):
listDRGs = client.list_drgs(compartment_id = compartment_ocid)
drgs = convert_response_to_dict(listDRGs)
drg_name_extract = extract_value_by_field(drgs, "display_name")
return True if drg_name in drg_name_extract else False
# Get DRG OCID from DRG Name
def get_drg_ocid_by_name(client, compartment_ocid, drg_name):
    listDRGs = client.list_drgs(compartment_id=compartment_ocid)
    drgs = convert_response_to_dict(listDRGs)
    for drg in drgs:
        if drg["display_name"] == drg_name:
            return drg["id"]
    return None
# Check if DRG exist already by OCID and State in Compartment
def is_drg_already_exist_in_compartment(client, compartment_ocid, drg_ocid):
drg = client.get_drg(drg_id = drg_ocid, retry_strategy= oci.retry.DEFAULT_RETRY_STRATEGY)
drg_dict = convert_response_to_dict(drg)
if drg_dict is not None and drg_dict["lifecycle_state"] == "AVAILABLE":
return True
else:
return False
# Check DRG attachment status
def get_drg_attachment_status(client, drg_attachment_ocid):
drg_attachment = client.get_drg_attachment(drg_attachment_id = drg_attachment_ocid)
drg_attachment_dict = convert_response_to_dict(drg_attachment)
if drg_attachment_dict is not None and drg_attachment_dict["lifecycle_state"]=="ATTACHED":
return True
else:
return False
def get_drg_attachment_ocid(client, drg_attachment_ocid):
drg_attachment = client.get_drg_attachment(drg_attachment_id = drg_attachment_ocid)
drg_attach = convert_response_to_dict(drg_attachment)
return drg_attach["id"]
def filter_drg_attachment_id(client, compartment_ocid, drg_id):
    drg_attachments = client.list_drg_attachments(compartment_id=compartment_ocid)
    drg_attachments_dict = convert_response_to_dict(drg_attachments)
    for drg_attachment in drg_attachments_dict:
        if drg_attachment["drg_id"] == drg_id:
            return drg_attachment["id"]
    return None
# Create DRG
def create_new_drg(client, compartment_id, hub_drg):
drg_result = client.create_drg(
oci.core.models.CreateDrgDetails(
compartment_id=compartment_id,
display_name= hub_drg["name"]
)
)
drg = oci.wait_until(
client,
client.get_drg(drg_result.data.id),
'lifecycle_state',
'AVAILABLE'
)
local_logger('Created DRG')
local_logger('===============')
local_logger('\n')
drg_new = convert_response_to_dict(drg)
return drg_new["id"]
# Create DRG if drg does not exist already
def check_before_drg_create(client, compartment_id, hub_drg):
drg_ocid = None
checkDRGStatusCompartment = False
checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client)
print_decorator("COMPARTMENT EXISTS AND ACTIVE !") if checkCompartmentActive else print_decorator("COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !")
checkDRGNameMatch = is_drg_exist_by_name(client, compartment_id, hub_drg["name"])
if checkDRGNameMatch:
drg_ocid = get_drg_ocid_by_name(client, compartment_id, hub_drg["name"])
checkDRGStatusCompartment = is_drg_already_exist_in_compartment(client, compartment_id, drg_ocid)
if drg_ocid is None and checkDRGStatusCompartment is False:
return create_new_drg(client,compartment_id,hub_drg)
else:
print_decorator("DRG ALREADY EXIST ! SKIPPING DRG CREATION !")
return drg_ocid
# Attach newly created DRG to VCN using VCN OCID
def drg_attach(client, vcn_ocid, drg_ocid, drg_name):
drg_attach_result = client.create_drg_attachment(
oci.core.models.CreateDrgAttachmentDetails(
display_name= drg_name,
vcn_id=vcn_ocid,
drg_id=drg_ocid
)
)
drg_attachment = oci.wait_until(
client,
client.get_drg_attachment(drg_attach_result.data.id),
'lifecycle_state',
'ATTACHED'
)
local_logger('Created DRG Attachment')
local_logger('=========================')
local_logger('\n')
drg_attach = convert_response_to_dict(drg_attachment)
return drg_attach["id"]
def check_before_drg_attach(client, compartment_id, vcn_ocid, drg_ocid, drg_name):
drg_attachment_ocid = None
checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client)
print_decorator("COMPARTMENT EXISTS AND ACTIVE !") if checkCompartmentActive else print_decorator("COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !")
checkIfVCNIsAvailable = checkVCNStateAvailable(client, compartment_id, vcn_ocid)
checkIfDRGIsAvailable = is_drg_already_exist_in_compartment(client, compartment_id, drg_ocid)
drg_attachment_ocid = filter_drg_attachment_id(client, compartment_id, drg_ocid)
if drg_attachment_ocid is not None:
print_decorator("DRG ATTACHMENT ALREADY EXIST ! SKIPPING DRG ATTACHMENT !")
return drg_attachment_ocid
if checkCompartmentActive and checkIfVCNIsAvailable and checkIfDRGIsAvailable:
return drg_attach(client, vcn_ocid, drg_ocid, drg_name)
# LPG METHODS
# Check for name match in LPG
def match_lpg_by_name(client, compartment_ocid,vcn_ocid, lpg_name):
listLPGs = client.list_local_peering_gateways(compartment_id = compartment_ocid, vcn_id= vcn_ocid)
lpgs = convert_response_to_dict(listLPGs)
lpg_names = extract_value_by_field(lpgs, "display_name")
if lpg_name in lpg_names:
return True
else:
return False
# Get LPG OCID from matching name
def get_lpg_ocid (client, compartment_ocid,vcn_ocid, lpg_name):
listlpgs = client.list_local_peering_gateways(compartment_id = compartment_ocid, vcn_id= vcn_ocid)
lpgs = convert_response_to_dict(listlpgs)
lpg_id= None
for lpg in lpgs:
if lpg["display_name"] == lpg_name:
    lpg_id = lpg["id"]
    break  # keep the first match; the old else branch reset lpg_id on later non-matches
return lpg_id
# Check for LPG state
def is_lpg_available_status(client, lpg_ocid, compartment_ocid):
lpg = client.get_local_peering_gateway(local_peering_gateway_id = lpg_ocid)
lpg_dict = convert_response_to_dict(lpg)
if lpg_dict["lifecycle_state"] == "AVAILABLE":
return True
else:
return False
# Create Local Peering Gateway been Hub and Spokes VCN
def create_local_peering_gateway(composite_client, lpg_name, compartment_id, vcn_ocid):
create_lpg_details = oci.core.models.CreateLocalPeeringGatewayDetails(compartment_id = compartment_id, display_name = lpg_name, vcn_id = vcn_ocid)
create_lpg_response = composite_client.create_local_peering_gateway_and_wait_for_state(
create_lpg_details,
wait_for_states=[oci.core.models.LocalPeeringGateway.LIFECYCLE_STATE_AVAILABLE]
)
lpg = create_lpg_response
lpg_dict = convert_response_to_dict(lpg)
local_logger('Created LPG')
local_logger('=========================')
local_logger('\n')
return lpg_dict["id"]
def check_before_lpg_create(client,composite_client, compartment_id, lpg_name, vcn_ocid):
checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client)
print_decorator("COMPARTMENT EXISTS AND ACTIVE !") if checkCompartmentActive else print_decorator("COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !")
vcn_exist = getVCNbyOCID(client, compartment_id, vcn_ocid)
if vcn_exist is not None:
vcn_state = checkVCNStateAvailable(client, compartment_id, vcn_ocid)
checkLpgNameMatch = match_lpg_by_name(client, compartment_id, vcn_ocid, lpg_name)
getLpgocid = get_lpg_ocid(client, compartment_id,vcn_ocid, lpg_name)
if getLpgocid is not None:
getLpgStatus = is_lpg_available_status(client, getLpgocid, compartment_id)
if checkLpgNameMatch and getLpgStatus:
print_decorator("LPG EXIST ALREADY FOR THIS VCN")
return getLpgocid
else:
return create_local_peering_gateway(composite_client, lpg_name, compartment_id, vcn_ocid)
# Quickstart code
def quick_start():
with open('./hub_spokes.json') as hub_spokes:
data = json.load(hub_spokes)
# TODO: JSON Validation to be captured
validate_json(data)
vcn_ocid = check_before_vcn_create(virtual_network_client, virtual_network_composite_operations, config["compartment"], data["hub"]["vcn"])
drg_ocid = check_before_drg_create(virtual_network_client, config["compartment"], data["hub"]["drg"])
drg_attachment_ocid = check_before_drg_attach(virtual_network_client, config["compartment"], vcn_ocid, drg_ocid, data["hub"]["drg"]["name"])
lpg_ocid = check_before_lpg_create(virtual_network_client, virtual_network_composite_operations, config["compartment"], "HubLPG", vcn_ocid)
for spoke in data["spokes"]:
vcn_ocid = check_before_vcn_create(virtual_network_client,virtual_network_composite_operations, config["compartment"], spoke["vcn"])
lpg_ocid = check_before_lpg_create(virtual_network_client, virtual_network_composite_operations, config["compartment"], "SpokeLPG", vcn_ocid)
quick_start()
```
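For reference, here is a minimal sketch of the `hub_spokes.json` layout that `quick_start()` appears to expect, inferred from the keys the script accesses (`data["hub"]["vcn"]`, `data["hub"]["drg"]["name"]`, `data["spokes"][i]["vcn"]`). The exact fields inside each `vcn`/`drg` object (names, CIDR blocks) are assumptions, not taken from the source.

```python
# Hypothetical hub_spokes.json contents, shown as a Python dict.
# Field names inside "vcn" and "drg" are illustrative assumptions;
# only the hub/spokes/vcn/drg nesting is inferred from quick_start().
hub_spokes = {
    "hub": {
        "vcn": {"name": "HubVCN", "cidr_block": "10.0.0.0/16"},
        "drg": {"name": "HubDRG"},
    },
    "spokes": [
        {"vcn": {"name": "SpokeVCN1", "cidr_block": "10.1.0.0/16"}},
        {"vcn": {"name": "SpokeVCN2", "cidr_block": "10.2.0.0/16"}},
    ],
}

# Same access pattern as quick_start():
print(hub_spokes["hub"]["drg"]["name"])
print([spoke["vcn"]["name"] for spoke in hub_spokes["spokes"]])
```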
<a href="https://colab.research.google.com/github/cstorm125/abtestoo/blob/master/notebooks/frequentist_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# A/B Testing from Scratch: Frequentist Approach
Frequentist A/B testing is one of the most used and abused statistical methods in the world. This article starts with a simple problem of comparing two online ads campaigns (or treatments, user interfaces, or slot machines). It outlines several useful statistical concepts and how we exploit them to solve our problem. At the end, it acknowledges some common pitfalls we face when doing a frequentist A/B test and proposes some possible solutions for more robust A/B testing. Readers are encouraged to tinker with the widgets provided in order to explore the impacts of each parameter.
Thanks to [korakot](https://github.com/korakot) for notebook conversion to Colab.
```
# #dependencies for colab
# %%capture
# !pip install plotnine
import numpy as np
import pandas as pd
from typing import Collection, Tuple
#widgets removed; using colab forms instead
#from ipywidgets import interact, interactive, fixed, interact_manual
#import ipywidgets as widgets
# from IPython.display import display
#plots
import matplotlib.pyplot as plt
from plotnine import *
#stats
import scipy as sp
#suppress annoying warning prints
import warnings
warnings.filterwarnings('ignore')
```
## Start with A Problem
A typical situation marketers (research physicians, UX researchers, or gamblers) find themselves in is that they have two variations of ads (treatments, user interfaces, or slot machines) and want to find out which one has the better performance in the long run.
Practitioners know this as A/B testing and statisticians as **hypothesis testing**. Consider the following problem. We have been running an online ads campaign `A` for a period of time, but now we think a new ads variation might work better, so we run an experiment by dividing our audience in half: one sees the existing campaign `A` whereas the other sees a new campaign `B`. Our performance metric is conversion (sales) per click (ignore the [ads attribution problem](https://support.google.com/analytics/answer/1662518) for now). After the experiment has run for two months, we obtain daily clicks and conversions of each campaign and determine which campaign has the better performance.
We simulate the aforementioned problem with both campaigns getting roughly a thousand clicks per day. The secret we will pretend not to know is that the hypothetical campaign `B` has a slightly better conversion rate than `A` in the long run. With this synthetic data, we will explore some useful statistical concepts and exploit them for our frequentist A/B testing.
```
def gen_bernoulli_campaign(p1: float, p2: float,
lmh: Collection = [500, 1000, 1500],
timesteps: int = 60,
scaler: float = 300, seed: int = 1412) -> pd.DataFrame:
'''
:meth: generate fake impression-conversion campaign based on specified parameters
:param float p1: true conversion rate of group 1
:param float p2: true conversion rate of group 2
:param Collection lmh: low-, mid-, and high-points for the triangular distribution of clicks
:param int timesteps: number of timesteps the campaigns run for
:param float scaler: scaler for Gaussian noise
:param int seed: seed for Gaussian noise
:return: dataframe containing campaign results
'''
np.random.seed(seed)
ns = np.random.triangular(*lmh, size=timesteps * 2).astype(int)
np.random.seed(seed)
es = np.random.randn(timesteps * 2) / scaler
n1 = ns[:timesteps]
c1 = ((p1 + es[:timesteps]) * n1).astype(int)
n2 = ns[timesteps:]
c2 = ((p2 + es[timesteps:]) * n2).astype(int)
result = pd.DataFrame({'timesteps': range(timesteps), 'impression_a': n1, 'conv_a': c1, 'impression_b': n2, 'conv_b': c2})
result = result[['timesteps', 'impression_a', 'impression_b', 'conv_a', 'conv_b']]
result['cumu_impression_a'] = result.impression_a.cumsum()
result['cumu_impression_b'] = result.impression_b.cumsum()
result['cumu_conv_a'] = result.conv_a.cumsum()
result['cumu_conv_b'] = result.conv_b.cumsum()
result['cumu_rate_a'] = result.cumu_conv_a / result.cumu_impression_a
result['cumu_rate_b'] = result.cumu_conv_b / result.cumu_impression_b
return result
conv_days = gen_bernoulli_campaign(p1 = 0.10,
p2 = 0.105,
timesteps = 60,
scaler=300,
seed = 1412) #god-mode
conv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks
conv_days.head()
rates_df = conv_days[['timesteps','cumu_rate_a','cumu_rate_b']].melt(id_vars='timesteps')
g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() +
xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))
g
#sum after 2 months
conv_df = pd.DataFrame({'campaign_id':['A','B'], 'clicks':[conv_days.click_a.sum(),conv_days.click_b.sum()],
'conv_cnt':[conv_days.conv_a.sum(),conv_days.conv_b.sum()]})
conv_df['conv_per'] = conv_df['conv_cnt'] / conv_df['clicks']
conv_df
```
## Random Variables and Probability Distributions
Take a step back and think about the numbers we consider in our daily routines, whether it is conversion rate of an ads campaign, the relative risk of a patient group, or sales and revenues of a shop during a given period of time. From our perspective, they have one thing in common: **we do not know exactly how they come to be**. In fact, we would not need an A/B test if we do. For instance, if we know for certain that conversion rate of an ads campaign will be `0.05 + 0.001 * number of letters in the ads`, we can tell exactly which ads to run: the one with the highest number of letters in it.
With our lack of knowledge, we do the next best thing and assume that our numbers are generated by some mathematical formula, calling them **random variables**. For instance, we might think of the probability of a click converting the same way as a coin-flip event, with the probability of converting as $p$ (say 0.1) and not converting as $1-p$ (thus 0.9). With this, we can simulate the event aka click conversion for as many times as we want:
```
def bernoulli(n,p):
flips = np.random.choice([0,1], size=n, p=[1-p,p])
flips_df = pd.DataFrame(flips)
flips_df.columns = ['conv_flag']
g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) +
theme_minimal() + xlab('Conversion Flag') + ylab('Percentage of Occurrence') +
geom_hline(yintercept=p, colour='red') + ggtitle(f'Distribution after {n} Trials'))
g.draw()
print(f'Expectation: {p}\nVariance: {p*(1-p)}')
print(f'Sample Mean: {np.mean(flips)}\nSample Variance: {np.var(flips)}')
# use colab form instead of interact
#interact(bernoulli, n=widgets.IntSlider(min=1,max=500,step=1,value=20),
# p=widgets.FloatSlider(min=0.1,max=0.9))
#@title {run: "auto"}
n = 20 #@param {type:"slider", min:1, max:500, step:1}
p = 0.1 #@param {type:"slider", min:0.1, max:0.9, step:0.1}
bernoulli(n, p)
```
**Probability distribution** is represented with the values of a random variable we are interested in the X-axis, and the chance of them appearing after a number of trials in the Y-axis. The distribution above is called [Bernoulli Distribution](http://mathworld.wolfram.com/BernoulliDistribution.html), usually used to model hypothetical coin flips and online advertisements. [Other distributions](https://en.wikipedia.org/wiki/List_of_probability_distributions) are used in the same manner for other types of random variables. [Cloudera](https://www.cloudera.com/) provided a [quick review](https://blog.cloudera.com/blog/2015/12/common-probability-distributions-the-data-scientists-crib-sheet/) on a few of them you might find useful.
<img src='https://github.com/cstorm125/abtestoo/blob/master/images/distribution.png?raw=1' alt='Common Probability Distributions; Cloudera'/>
## Law of Large Numbers
There are two sets of indicators of a distribution that are especially relevant to our problem: one derived theoretically and another derived from the data we observed. **Law of Large Numbers (LLN)** describes the relationship between them.
Theoretically, we can derive these values about any distribution:
* **Expectation** of a random variable $X_i$ is its long-run average derived from repetitively sampling $X_i$ from the same distribution. Each distribution requires its own way to obtain the expectation. For our example, it is the weighted average of outcomes $X_i$ ($X_i=1$ converted; $X_i=0$ not converted) and their respective probabilities ($p$ converted; $1-p$ not converted):
\begin{align}
E[X_i] &= \mu = \sum_{i=1}^{k} p_i * X_i \\
&= (1-p)*0 + p*1 \\
&= p
\end{align}
where $k$ is number of patterns of outcomes
* **Variance** of a random variable $X_i$ represents the expectation of how much $X_i$ deviates from its expectation, for our example formulated as:
\begin{align}
Var(X_i) &= \sigma^2 = E[(X_i-E(X_i))^2] \\
&= E[X_i^2] - E[X_i]^2 \\
&= \{(1-p)*0^2 + p*1^2\} - p^2 \\
&= p(1-p)
\end{align}
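The two theoretical quantities can be checked numerically; this is a quick simulation sketch (not from the original notebook) showing the sample mean and variance of Bernoulli draws approaching $p$ and $p(1-p)$:

```python
import numpy as np

p = 0.3
rng = np.random.default_rng(1412)
# simulate 200,000 Bernoulli(p) trials: 1 = converted, 0 = not converted
x = rng.choice([0, 1], size=200_000, p=[1 - p, p])
print(x.mean())  # close to the expectation p = 0.3
print(x.var())   # close to the variance p*(1-p) = 0.21
```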
Empirically, we can also calculate their counterparts with any amount of data we have on hand:
* **Sample Mean** is simply an average of all $X_i$ we currently have in our sample of size $n$:
\begin{align}
\bar{X} &= \frac{1}{n} \sum_{i=1}^{n} X_i
\end{align}
* **Sample Variance** is the variance based on deviation from sample mean; the $n-1$ is due to [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction#Source_of_bias) (See Appendix):
\begin{align}
s^2 &= \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2
\end{align}
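As a minimal illustration of Bessel's correction (not from the original notebook), numpy's `ddof` argument switches between the $1/n$ and $1/(n-1)$ denominators:

```python
import numpy as np

x = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
n = len(x)
biased = ((x - x.mean()) ** 2).sum() / n          # divides by n
unbiased = ((x - x.mean()) ** 2).sum() / (n - 1)  # divides by n-1 (Bessel's correction)
print(biased, np.var(x))             # np.var defaults to ddof=0 (biased)
print(unbiased, np.var(x, ddof=1))   # ddof=1 gives the unbiased sample variance
```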
LLN posits that when we have a large enough sample size $n$, the sample mean will converge to the expectation. This can be shown with a simple simulation:
```
def lln(n_max,p):
mean_flips = []
var_flips = []
ns = []
for n in range(1,n_max):
flips = np.random.choice([0,1], size=n, p=[1-p,p])
ns.append(n)
mean_flips.append(flips.mean())
var_flips.append(flips.var())
flips_df = pd.DataFrame({'n':ns,'mean_flips':mean_flips,'var_flips':var_flips}).melt(id_vars='n')
g = (ggplot(flips_df,aes(x='n',y='value',colour='variable')) + geom_line() +
facet_wrap('~variable', ncol=1, scales='free') + theme_minimal() +
ggtitle(f'Expectation={p:2f}; Variance={p*(1-p):2f}') + xlab('Number of Samples') +
ylab('Value'))
g.draw()
# interact(lln, n_max=widgets.IntSlider(min=2,max=10000,step=1,value=1000),
# p=widgets.FloatSlider(min=0.1,max=0.9))
#@title {run: "auto"}
n = 1000 #@param {type:"slider", min:2, max:10000, step:1}
p = 0.1 #@param {type:"slider", min:0.1, max:0.9, step:0.1}
lln(n, p)
```
Notice that even though LLN does not say that the sample variance will also converge to the variance as $n$ grows large, it is also the case. Mathematically, it can be derived as follows:
\begin{align}
s^2 &= \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2 \\
&= \frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2 \text{; as }n\rightarrow\infty\text{ }\bar{X}\rightarrow\mu\\
&=\frac{1}{n}(\sum_{i=1}^{n}{X_i}^2 - 2\mu\sum_{i=1}^{n}X_i + n\mu^2) \\
&=\frac{\sum_{i=1}^{n}{X_i}^2}{n} - \frac{2\mu\sum_{i=1}^{n}X_i}{n} + \mu^2 \\
&= \frac{\sum_{i=1}^{n}{X_i}^2}{n} - 2\mu\bar{X} + \mu^2\text{; as }\frac{\sum_{i=1}^{n}X_i}{n} = \bar{X}\\
&= \frac{\sum_{i=1}^{n}{X_i}^2}{n} - 2\mu^2 + \mu^2 = \frac{\sum_{i=1}^{n}{X_i}^2}{n} - \mu^2 \text{; as }n\rightarrow\infty\text{ }\bar{X}\rightarrow\mu\\
&= E[{X_i}^2] - E[X_i]^2 = Var(X_i) = \sigma^2
\end{align}
## Central Limit Theorem
Assuming some probability distribution for our random variable also lets us exploit another extremely powerful statistical concept: **Central Limit Theorem (CLT)**. To see CLT in action, let us simplify our problem a bit and say we are only trying to find out if a hypothetical ads campaign `C` has a conversion rate of more than 10% or not, assuming data collected so far say that `C` has 1,000 clicks and 107 conversions.
```
c_df = pd.DataFrame({'campaign_id':'C','clicks':1000,'conv_cnt':107,'conv_per':0.107},index=[0])
c_df
```
CLT goes as follows:
> If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\mu$ and variance $\sigma^2$ and $\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then when $n$ is large enough, $\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with expectation $\mu$ and variance $\frac{\sigma^2}{n}$
It is a mouthful to say and full of weird symbols, so let us break it down line by line.
**If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\mu$ and variance $\sigma^2$** <br/>In our case, $X_i$ is whether click $i$ is converted ($X_i=1$) or not converted ($X_i=0$), with $\mu$ as some probability that represents how likely a click will convert on average. *Independent* means that the probability of each click converting depends only on itself and not on other clicks. *Identically distributed* means that the true probability of each click converting is more or less the same. We need to rely on domain knowledge to verify these assumptions; for example, in online advertisement we would expect, at least when working with a reputable ads network such as Criteo, that each click comes from independent users, as opposed to, say, a click farm where we would see a lot of clicks behaving the same way by design. Identical distribution is harder to assume, since different demographics the ads are shown to will likely react differently and thus may not share the same expectation.
```
ind_df = pd.DataFrame({'iid':[False]*100+[True]*100,
'order': list(range(100)) + list(range(100)),
'conv_flag':[1]*50+ [0]*50+ list(np.random.choice([0,1], size=100))})
g = (ggplot(ind_df,aes(x='order',y='conv_flag',color='iid')) + geom_point() +
facet_wrap('~iid') + theme_minimal() + xlab('i-th Click') + ylab('Conversion') +
ggtitle('Both plots have a conversion rate of 50% but only one is i.i.d.'))
g
```
**and $\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then**<br/>
For campaign `C`, we can think of all the clicks we observed as one sample group, which exists in parallel with an infinite number of sample groups that we have not seen yet but can be drawn from the distribution by additional data collection. This way, we calculate the sample mean as total conversions divided by total number of clicks observed during the campaign.
<img src='https://github.com/cstorm125/abtestoo/blob/master/images/sample_group.png?raw=1' alt='Sample Group in Universe'>
**when $n$ is large enough, $\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with expectation $\mu$ and variance $\frac{\sigma^2}{n}$**</br>
Here's the kicker: regardless of what distribution each $X_i$ of sample group $j$ is drawn from, as long as you have a large enough sample size $n$, the sample mean of that sample group $\bar{X_j}$ will converge to a normal distribution. Try increasing $n$ in the plot below and see what happens.
```
def clt(n, dist):
n_total = n * 10000
if dist == 'discrete uniform':
    r = np.random.randint(0, 10, size=n_total)  # np.random.uniform is continuous; randint draws a discrete uniform
elif dist =='bernoulli':
r = np.random.choice([0,1],size=n_total,p=[0.9,0.1])
elif dist =='poisson':
r = np.random.poisson(size=n_total)
else:
raise ValueError('Choose distributions that are available')
#generate base distribution plot
r_df = pd.DataFrame({'r':r})
g1 = (ggplot(r_df, aes(x='r')) + geom_histogram(bins=30) + theme_minimal() +
xlab('Values') + ylab('Number of Samples') +
ggtitle(f'{dist} distribution where sample groups are drawn from'))
g1.draw()
#generate sample mean distribution plot
normal_distribution = np.random.normal(loc=np.mean(r), scale=np.std(r) / np.sqrt(n), size=10000)
sm_df = pd.DataFrame({'sample_means':r.reshape(-1,n).mean(1),
'normal_distribution': normal_distribution}).melt()
g2 = (ggplot(sm_df, aes(x='value',fill='variable')) +
geom_histogram(bins=30,position='nudge',alpha=0.5) +
theme_minimal() + xlab('Sample Means') + ylab('Number of Sample Means') +
ggtitle(f'Distribution of 10,000 sample means with size {n}'))
g2.draw()
dists = ['bernoulli','discrete uniform','poisson']
# interact(clt, n=widgets.IntSlider(min=1,max=100,value=1),
# dist = widgets.Dropdown(
# options=dists,
# value='bernoulli')
# )
#@title {run: "auto"}
n = 30 #@param {type:"slider", min:1, max:100, step:1}
dist = 'bernoulli' #@param ["discrete uniform", "bernoulli", "poisson"] {type:"string"}
clt(n, dist)
```
The expectation and variance of the sample mean distribution can be derived as follows:
\begin{align}
E[\bar{X_j}] &= E[\frac{\sum_{i=1}^{n} X_i}{n}] \\
&= \frac{1}{n} \sum_{i=1}^{n} E[X_i] = \frac{1}{n} \sum_{i=1}^{n} \mu\\
&= \frac{n\mu}{n} = \mu \\
Var(\bar{X_j}) &= Var(\frac{\sum_{i=1}^{n} X_i}{n}) \\
&= \frac{1}{n^2} \sum_{i=1}^{n} Var(X_i) = \frac{1}{n^2} \sum_{i=1}^{n} \sigma^2\\
&= \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n} \\
\end{align}
The fact that we know this specific normal distribution of sample means has expectation $\mu$ and variance $\frac{\sigma^2}{n}$ is especially useful. Remember we want to find out whether campaign `C` **in general, not just in any sample group,** has better conversion rate than 10%. Below is that exact normal distribution based on information from our sample group (1,000 clicks) and the assumption that conversion rate is 10%:
\begin{align}
E[\bar{X_j}] &= \mu = p\\
&= 0.1 \text{; by our assumption}\\
Var(\bar{X_j}) &= \frac{\sigma^2}{n} = \frac{p*(1-p)}{n}\\
&= \frac{0.1 * (1-0.1)}{1000}\\
&= 0.00009\\
\end{align}
```
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1
mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
# mu = 0; variance = 1; sigma = (variance)**(0.5)
x = np.arange(0.05, 0.15, 1e-3)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y, 'crit':[False if i>x_bar else True for i in x]})
g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Sample mean distribution under our assumption'))
g
```
As long as we know the expectation (which we usually do as part of the assumption) and variance (which is more tricky) of the base distribution, we can use this normal distribution to model random variables from *any* distribution. That is, we can model *any* data as long as we can assume their expectation and variance.
## Think Like A ~~Detective~~ Frequentist
In a frequentist perspective, we treat a problem like a criminal prosecution. First, we assume the innocence of the defendant, often called the **null hypothesis** (in our case, that the conversion rate is *less than or equal to* 10%). Then, we collect the evidence (all clicks and conversions from campaign `C`). After that, we review how *unlikely* it is that we have this evidence assuming the defendant is innocent (by looking at where our sample mean lands on the sample mean distribution). Most frequentist tests are simply saying:
>If we assume that [conversion rate]() of [ads campaign C]() has the long-run [conversion rate]() of less than or equal to [10%](), our results with sample mean [0.107]() or more extreme ones are so unlikely that they happen only [23%]() of the time, calculated by the area of the distribution with higher value than our sample mean.
Note that you can substitute the highlighted parts with any other numbers and statistics you are comparing; for instance, medical trials instead of ads campaigns and relative risks instead of conversion rates.
```
g = (ggplot(sm_df, aes(x='x', y='y', group='crit')) + geom_area(aes(fill='crit')) +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Sample mean distribution under our assumption') +
guides(fill=guide_legend(title="Conversion Rate < 0.1")))
g
```
Whether 23% is unlikely *beyond reasonable doubt* depends on how much we are willing to tolerate the false positive rate (the percentage of innocent people we are willing to execute). By convention, many practitioners set this to 1-5% depending on their problems; for instance, an experiment in physics may use 1% or less because physical phenomena are highly reproducible, whereas social science may use 5% because human behavior is more variable. This is not to be confused with the **false discovery rate**, which is the probability of our positive predictions turning out to be wrong. The excellent book [Statistics Done Wrong](https://www.statisticsdonewrong.com/p-value.html) gives this topic extensive coverage that you definitely should check out (Reinhart, 2015).
This degree of acceptable unlikeliness is called **alpha** and the probability we observe is called **p-value**. We must set alpha as part of the assumption before looking at the data (the law must first state how bad an action is for a person to be executed).
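The 23% figure quoted above can be reproduced directly from the normal approximation; this is a quick sketch using the counts of the hypothetical campaign `C` (1,000 clicks, observed rate 0.107, null rate 0.10):

```python
import numpy as np
from scipy import stats

n, x_bar, p0 = 1000, 0.107, 0.10           # clicks, observed rate, rate under H0
sigma = np.sqrt(p0 * (1 - p0) / n)         # sd of the sample mean under H0
z = (x_bar - p0) / sigma                   # standardized sample mean
p_value = 1 - stats.norm.cdf(z)            # one-tailed: area to the right of z
print(z, p_value)  # z ≈ 0.74, p ≈ 0.23
```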
## Transforming A Distribution
In the previous example of `C`, we are only interested when the conversion rate is *more than* 10% so we look only beyond the right-hand side of our sample mean (thus called **one-tailed tests**). If we were testing whether the conversion rate is *equal to* 10% or not we would be interested in both sides (thus called **two-tailed tests**). However, it is not straightforward since we have to know the equivalent position of our sample mean on the left-hand side of the distribution.
One way to remedy this is to convert the sample mean distribution to a distribution that is symmetrical around zero and has a fixed variance so the value on one side is equivalent to minus that value of the other side. **Standard normal distribution** is the normal distribution with expectation $\mu=0$ and variance $\sigma^2=1$. We convert any normal distribution to a standard normal distribution by:
1. Shift its expectation to zero. This can be done by subtracting the expectation from every value of the distribution:
\begin{align}
E[\bar{X_j}-\mu] &= E[\bar{X_j}]-\mu \\
&= \mu-\mu \\
&= 0 \\
\end{align}
2. Scale its variance to 1. This can be done by dividing all values by square root of its variance called **standard deviation**:
\begin{align}
Var(\frac{\bar{X_j}}{\sqrt{\sigma^2/n}}) &= \frac{1}{\sigma^2/n}Var(\bar{X_j})\\
&= \frac{\sigma^2/n}{\sigma^2/n}\\
&=1
\end{align}
Try shifting and scaling the distribution below with different $m$ and $v$.
```
def shift_normal(m,v):
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1
mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
x = np.arange(0.05, 0.15, 1e-3)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y})
#normalize process
sm_df['x'] = (sm_df.x - m) / np.sqrt(v)
sm_df['y'] = np.array([sp.stats.norm.pdf(i, loc=mu-m, scale=sigma/np.sqrt(v)) for i in sm_df.x])
print(f'Expectation of sample mean: {mu-m}; Variance of sample mean: {variance/v}')
g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Shifted Normal Distribution of Sample Mean'))
g.draw()
# interact(shift_normal,
# m=widgets.FloatSlider(min=-1e-1,max=1e-1,value=1e-1,step=1e-2),
# v=widgets.FloatSlider(min=9e-5,max=9e-3,value=9e-5,step=1e-4, readout_format='.5f'))
#@title {run: "auto"}
m = 0.1 #@param {type:"slider", min:-1e-1, max:1e-1, step:1e-2}
v = 9e-5 #@param {type:"slider", min:9e-5, max:9e-3, step:1e-4}
shift_normal(m,v)
```
By shifting and scaling, we can find out where `C`'s sample mean of 0.107 lands on the X-axis of a standard normal distribution:
\begin{align}
\bar{Z_j} &= \frac{\bar{X_j} - \mu}{\sigma / \sqrt{n}} \\
&= \frac{0.107 - 0.1}{0.3 / \sqrt{1000}} \approx 0.7378648\\
\end{align}
With $\bar{Z_j}$ and $-\bar{Z_j}$, we can calculate the probability of falsely rejecting the null hypothesis, or p-value, as the area in red, summing up to approximately 46%. This is most likely too high a false positive rate for anyone to be comfortable with (no one believes a pregnancy test that turns out positive for 46% of the people who are not pregnant), so we fail to reject the null hypothesis that the conversion rate of `C` is equal to 10%.
If someone asks a frequentist for an opinion, they would probably say that they cannot disprove `C` has conversion rate of 10% in the long run. If they were asked to choose an action, they would probably go with the course of action that assumes `C` has a conversion rate of 10%.
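The roughly 46% two-tailed p-value can be checked with the standard normal CDF; a sketch using the same numbers as the derivation above:

```python
import numpy as np
from scipy import stats

# standardize the sample mean: sigma = sqrt(0.1*0.9) = 0.3, n = 1000
z = (0.107 - 0.1) / (0.3 / np.sqrt(1000))
# two-tailed p-value: area beyond |z| on both sides of the standard normal
p_two_sided = 2 * (1 - stats.norm.cdf(abs(z)))
print(z, p_two_sided)  # ≈ 0.738, ≈ 0.46
```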
```
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1; mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
x_bar_norm = (x_bar - mu) / sigma
def standard_normal(x_bar_norm, legend_title):
x_bar_norm = abs(x_bar_norm)
x = np.arange(-3, 3, 1e-2)
y = np.array([sp.stats.norm.pdf(i, loc=0, scale=1) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y})
#normalize process
sm_df['crit'] = sm_df.x.map(lambda x: False if ((x<-x_bar_norm)|(x>x_bar_norm)) else True)
g = (ggplot(sm_df, aes(x='x', y='y',group='crit')) + geom_area(aes(fill='crit')) +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Standard Normal Distribution of Sample Mean') +
guides(fill=guide_legend(title=legend_title)))
g.draw()
standard_normal(x_bar_norm, "Conversion Rate = 0.1")
```
## Z-test and More
With CLT and standard normal distribution (sometimes called **Z-distribution**), we now have all the tools for one of the most popular and useful statistical hypothesis test, the **Z-test**. In fact we have already done it with the hypothetical campaign `C`. But let us go back to our original problem of comparing the long-run conversion rates of `A` and `B`. Let our null hypothesis be that they are equal to each other and alpha be 0.05 (we are comfortable with false positive rate of 5%).
```
conv_df
```
We already know how to compare a random variable to a fixed value, but now we have two random variables from two ads campaigns. We get around this by comparing **the difference of their sample means** $\bar{X_\Delta} = \bar{X_{A}} - \bar{X_{B}}$ to 0. This way, our null hypothesis states that there is no difference between the long-run conversion rates of these campaigns. Through another useful statistical concept, we also know that the variance of $\bar{X_\Delta}$ is the sum of the sample mean variances of $\bar{X_\text{A}}$ and $\bar{X_\text{B}}$ (Normal Sum Theorem; [Lemon, 2002](https://www.goodreads.com/book/show/3415974-an-introduction-to-stochastic-processes-in-physics)).
Thus, we can calculate the **test statistic** or, specifically for Z-test, **Z-value** as follows:
\begin{align}
\bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\frac{\sigma^2_\text{A}}{n_\text{A}} + \frac{\sigma^2_\text{B}}{n_\text{B}}}} \\
&= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}}
\end{align}
Since we are assuming that `A` and `B` have the same conversion rate, their variance is also assumed to be the same:
$$\sigma^2_{A} = \sigma^2_{B} = \sigma^2_\text{pooled} = p * (1-p)$$
where $p$ is the total conversions of both campaigns divided by their clicks (**pooled probability**).
In light of the Z-value calculated from our data, we found that p-value of rejecting the null hypothesis that conversion rates of `A` and `B` are equal to each other is less than 3%, lower than our acceptable false positive rate of 5%, so we reject the null hypothesis that they perform equally well. The result of the test is **statistically significant**; that is, it is unlikely enough for us given the null hypothesis.
```
def proportion_test(c1: int, c2: int,
n1: int, n2: int,
mode: str = 'one_sided') -> Tuple[float, float]:
'''
:meth: Z-test for difference in proportion
:param int c1: conversions for group 1
:param int c2: conversions for group 2
:param int n1: impressions for group 1
:param int n2: impressions for group 2
:param str mode: mode of test; `one_sided` or `two_sided`
:return: Z-score, p-value
'''
p = (c1 + c2) / (n1 + n2)
p1 = c1 / n1
p2 = c2 / n2
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
if mode == 'two_sided':
p = 2 * (1 - sp.stats.norm.cdf(abs(z)))
elif mode == 'one_sided':
p = 1 - sp.stats.norm.cdf(abs(z))
else:
raise ValueError('Available modes are `one_sided` and `two_sided`')
return z, p
z_value, p_value = proportion_test(c1=conv_df.conv_cnt[0], c2=conv_df.conv_cnt[1],
n1=conv_df.clicks[0], n2=conv_df.clicks[1], mode='two_sided')
print(f'Z-value: {z_value}; p-value: {p_value}')
standard_normal(z_value, "No Difference in Conversion Rates")
```
This rationale extends beyond comparing proportions such as conversion rates. For instance, we can also compare revenues of two different stores, assuming they are i.i.d. However, in this case we do not know the variance of the base distribution $\sigma^2$, as it cannot be derived from our assumptions (the variance of a Bernoulli distribution is $p*(1-p)$, but store revenues are not modelled after a coin flip). The test statistic is then built with the sample variance $s^2$ of our sample group and follows a slightly modified version of the standard normal distribution (see [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test)). Your test statistics and sample mean distributions may change, but the bottom line of a frequentist A/B test is exploiting the CLT and frequentist reasoning.
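As an illustration (hypothetical numbers, using scipy's `ttest_ind` rather than the helper functions in this notebook), Welch's t-test compares two groups of revenues without assuming a known variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical daily revenues for two stores; the true means differ
# slightly and the variances are unknown to the test
revenue_a = rng.normal(loc=1000, scale=120, size=60)
revenue_b = rng.normal(loc=1040, scale=120, size=60)

# Welch's t-test estimates the unknown variances from the samples,
# which is why the test statistic follows a t-distribution rather
# than the standard normal
t_stat, p_value = stats.ttest_ind(revenue_a, revenue_b, equal_var=False)
print(f't-statistic: {t_stat:.3f}; p-value: {p_value:.4f}')
```

`equal_var=False` selects Welch's variant, which does not require the two groups to share a variance.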
## Confidence Intervals
Notice that we can calculate the p-value from the Z-value and vice versa. This gives us another canny way to look at the problem; that is, we can calculate the interval into which the sample mean of `A` or `B` will fall with an arbitrary probability, say 95%. We call it a **confidence interval**. You can see that despite us rejecting the null hypothesis that their difference is zero, the confidence intervals of both campaigns can still overlap.
Try changing the number of conversions and clicks of each group, as well as the alpha, to see how the p-value of the Z-test and the confidence intervals change. You will see that the sample mean distribution gets "wider" as we have fewer samples in a group. Intuitively, this makes sense: the fewer clicks you have collected, the less information you have about the true performance of an ads campaign, and the less confident you are about where it should be. So when designing an A/B test, you should plan for a similar number of samples in both sample groups in order to have similarly distributed sample means.
```
def proportion_plot(c1: int, c2: int,
n1: int, n2: int, alpha: float = 0.05,
mode: str = 'one_sided') -> None:
'''
:meth: plot Z-test for difference in proportion and confidence intervals for each campaign
:param int c1: conversions for group 1
:param int c2: conversions for group 2
:param int n1: impressions for group 1
:param int n2: impressions for group 2
:param float alpha: alpha
:param str mode: mode of test; `one_sided` or `two_sided`
:return: None
'''
p = (c1 + c2) / (n1 + n2)
p1 = c1 / n1
p2 = c2 / n2
se1 = np.sqrt(p1 * (1 - p1) / n1)
se2 = np.sqrt(p2 * (1 - p2) / n2)
z = sp.stats.norm.ppf(1 - alpha / 2)
x1 = np.arange(p1 - 3 * se1, p1 + 3 * se1, 1e-4)
x2 = np.arange(p2 - 3 * se2, p2 + 3 * se2, 1e-4)
y1 = np.array([sp.stats.norm.pdf(i, loc=p1, scale=np.sqrt(p1 * (1 - p1) / n1)) for i in x1])
y2 = np.array([sp.stats.norm.pdf(i, loc=p2, scale=np.sqrt(p2 * (1 - p2) / n2)) for i in x2])
sm_df = pd.DataFrame({'campaign_id': ['Campaign A'] * len(x1) + ['Campaign B'] * len(x2),
'x': np.concatenate([x1, x2]), 'y': np.concatenate([y1, y2])})
z_value, p_value = proportion_test(c1, c2, n1, n2, mode)
print(f'Z-value: {z_value}; p-value: {p_value}')
g = (ggplot(sm_df, aes(x='x', y='y', fill='campaign_id')) +
geom_area(alpha=0.5)
+ theme_minimal() + xlab('Sample Mean Distribution of Each Campaign')
+ ylab('Probability Density Function')
+ geom_vline(xintercept=[p1 + se1 * z, p1 - se1 * z], colour='red')
+ geom_vline(xintercept=[p2+se2*z, p2-se2*z], colour='blue')
               + ggtitle(f'Confidence Intervals at alpha={alpha}'))
g.draw()
# interact(ci_plot,
# p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0],
# step=1e-3,readout_format='.5f'),
# p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1],
# step=1e-3,readout_format='.5f'),
# n1 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[0]),
# n2 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[1]),
# alpha = widgets.FloatSlider(min=0,max=1,value=0.05))
conv_df.clicks[0], conv_df.clicks[1]
#@title {run: "auto"}
c1 = 5950 #@param {type:"slider", min:0, max:70000}
c2 = 6189 #@param {type:"slider", min:0, max:70000}
n1 = 59504 #@param {type:"slider", min:10, max:70000, step:10}
n2 = 58944 #@param {type:"slider", min:10, max:70000, step:10}
alpha = 0.05 #@param {type:"slider", min:0, max:1, step:1e-3}
proportion_plot(c1,c2,n1,n2,alpha)
```
## Any Hypothesis Test Is Statistically Significant with Enough Samples
Because we generated the data, we know that the conversion rate of campaign `A` (10%) is about 95% that of campaign `B` (10.5%). If we went with our gut feeling, most of us would say that they are practically the same; yet, our Z-test told us that they are different. The reason for this becomes apparent graphically when we decrease the number of clicks for both campaigns in the plot above. The Z-test stops being significant once both campaigns drop to about 50,000 clicks each, even though their conversion rates are exactly the same as before. The culprit is our Z-value, calculated as:
\begin{align}
\bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}}
\end{align}
Notice the number of clicks $n_\text{A}$ and $n_\text{B}$ hiding in the denominator. Our test statistic $\bar{Z_\Delta}$ will grow without bound as long as we keep collecting more clicks. If campaigns `A` and `B` have one million clicks each, a difference as small as 0.1% will be detected as statistically significant. Try adjusting the probabilities $p1$ and $p2$ in the plot below and see whether the area of statistical significance expands or contracts as the difference between the two numbers changes.
```
def significance_plot(p1,p2):
n1s = pd.DataFrame({'n1':[10**i for i in range(1,7)],'k':0})
n2s = pd.DataFrame({'n2':[10**i for i in range(1,7)],'k':0})
ns = pd.merge(n1s,n2s,how='outer').drop('k',1)
ns['p_value'] = ns.apply(lambda row: proportion_test(p1*row['n1'], p2*row['n2'],row['n1'],row['n2'])[1], 1)
g = (ggplot(ns,aes(x='factor(n1)',y='factor(n2)',fill='p_value')) + geom_tile(aes(width=.95, height=.95)) +
geom_text(aes(label='round(p_value,3)'), size=10)+ theme_minimal() +
xlab('Number of Samples in A') + ylab('Number of Samples in B') +
guides(fill=guide_legend(title="p-value")))
g.draw()
# interact(significance_plot,
# p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0],
# step=1e-3,readout_format='.5f'),
# p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1],
# step=1e-3,readout_format='.5f'))
#@title {run: "auto"}
p1 = 0.09898494218876042 #@param {type:"slider", min:0, max:1, step:1e-3}
p2 = 0.10367467426710097 #@param {type:"slider", min:0, max:1, step:1e-3}
significance_plot(p1,p2)
```
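The heatmap above varies both sample sizes at once; the same effect can be seen directly by holding the difference in conversion rates fixed at 0.5% and letting $n$ grow. A minimal sketch (our own helper, parameterized by rates rather than counts):

```python
import numpy as np
from scipy import stats

def proportion_z(p1, p2, n1, n2):
    # same pooled two-proportion Z-test as proportion_test above,
    # but taking conversion rates instead of conversion counts
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, 2 * (1 - stats.norm.cdf(abs(z)))

# hold the difference fixed at 0.5% and let the sample size grow
for n in [1_000, 10_000, 100_000, 1_000_000]:
    z, p_value = proportion_z(0.100, 0.105, n, n)
    print(f'n={n:>9,}: z={z:6.2f}, p-value={p_value:.4f}')
```

The difference between the two rates never changes, yet the test flips from clearly non-significant at a thousand clicks to overwhelmingly significant at a million.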
More practically, look at the cumulative conversion rates and Z-values of `A` and `B` on a daily basis. Every day that we check the results based on cumulative clicks and conversions, we come up with a different test statistic and p-value. The difference in conversion rates seems to stabilize after 20 days; however, notice that if you stop the test at day 25 or so, you would say it is NOT statistically significant, whereas if you wait a little longer, you will get the opposite result. The only thing that changes as time goes on is that we have more samples.
```
g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() +
xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))
g
#test
conv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[0],1)
conv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[1],1)
#plot
g = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() +
xlab('Days of Campaign') + ylab('Z-value Calculated By Cumulative Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) +
annotate("text", label = "Above this line A is better than B", x = 20, y = 2, color = 'red') +
annotate("text", label = "Below this line B is better than A", x = 20, y = -2, color = 'green'))
g
```
## Minimum Detectable Effect, Power and Required Sample Size
We argue that this too-big-to-fail phenomenon among sample groups is especially dangerous in the context of today's "big data" society. Gone are the days when statistical tests were done on two control groups of 100 people each using paper survey forms. Now companies perform A/B testing between ad variations that can have tens of thousands or more samples (impressions or clicks), and potentially all of them will be "statistically significant".
One way to remedy this is to do what frequentists do best: make more assumptions, more specifically **two** more.
First, if we want to find out whether `B` has a *better* conversion rate than `A`, we do not only make an assumption about the mean of the null hypothesis but also about **minimally by how much the means differ**, aka the mean of the alternative hypothesis. We can set the **minimum detectable effect** as the smallest possible difference that would be worth investing the time and money in one campaign over the other; let us say that from experience we think it is 1%. We then ask:
> What is the minimum number of samples in a sample group (clicks in a campaign) we should have in order to reject the null hypothesis at a **significance level ($\alpha$)** and **power ($1-\beta$)** when the difference in sample means is 1%?
The **significance level ($\alpha$)** takes care of the false positive rate promise, for example to be lower than 5% (95% specificity), whereas **power ($1-\beta$)** indicates the desired recall, for example 80% (20% false negative rate).
```
def power_plot(mean_h0: float,
mean_h1: float,
critical: float) -> None:
'''
:meth: plot Z-test for difference in proportion with power and alpha highlighted
    :param float mean_h0: mean for null hypothesis
    :param float mean_h1: mean for alternative hypothesis
:param float critical: critical value selected
:return: None
'''
x = np.arange(-4,6,0.1)
dat = pd.DataFrame({'x':x,
'y1':sp.stats.norm.pdf(x,mean_h0,1),
'y2':sp.stats.norm.pdf(x,mean_h1,1)})
dat['x1'] = dat.x.map(lambda x: np.where(x>critical,x,None))
dat['x2'] = dat.x.map(lambda x: np.where(x>critical,x,None))
g = (
ggplot(dat, aes(x = 'x')) +
geom_line(aes(y = 'y1'), color='red', size = 1.2) +
geom_line(aes(y = 'y2'), color='blue',size = 1.2) +
geom_vline(xintercept=mean_h0,linetype='dashed',color='red')+
geom_vline(xintercept=mean_h1,linetype='dashed',color='blue')+
geom_area(aes(y = 'y1', x = 'x1'), fill='red') +
geom_area(aes(y = 'y2', x = 'x2'), fill = 'blue', alpha = 0.3) +
ylab('Probability Density Function') + xlab('Z value')+
        ggtitle(f'significance level = {1-sp.stats.norm.cdf(critical,mean_h0,1):.2f}; power = {1-sp.stats.norm.cdf(critical,mean_h1,1):.2f}')+
theme_minimal()
)
g.draw()
#@title {run: "auto"}
mean_h0 = 0 #@param {type:"slider", min:0, max:6, step:1e-3}
mean_h1 = 3.18 #@param {type:"slider", min:0, max:6, step:1e-3}
critical = 2 #@param {type:"slider", min:0, max:3, step:1e-1}
power_plot(mean_h0, mean_h1, critical)
```
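The picture above can also be summarized numerically. Once the critical value is fixed by $\alpha$, the power is the tail probability of the alternative distribution beyond it: $1-\beta = 1 - \Phi(Z_\alpha - \frac{\text{MDE}}{SE})$. A small sketch with hypothetical numbers (10,000 clicks per campaign, a 10% baseline conversion rate, MDE of 1%, one-sided $\alpha$ of 5%):

```python
import numpy as np
from scipy import stats

n = 10_000            # hypothetical clicks per campaign
sigma2 = 0.1 * 0.9    # assumed variance from a 10% baseline conversion rate
mde = 0.01            # minimum detectable effect of 1%
alpha = 0.05          # one-sided significance level

# standard error of the difference in sample means (equal group sizes)
se = np.sqrt(sigma2 * (1 / n + 1 / n))
z_alpha = stats.norm.ppf(1 - alpha)  # one-sided critical value

# P(reject H0 | true difference = MDE)
power = 1 - stats.norm.cdf(z_alpha - mde / se)
print(f'power = {power:.3f}')
```

With these numbers the test would catch a true 1% difference only about three times out of four, which is below the conventional 80% target.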
Given a minimum detectable effect $\text{MDE}$, significance level $\alpha$ and power $1-\beta$, we can calculate the critical Z-value $Z_{critical}$ that satisfies these conditions, where the required numbers of samples in the two groups are $n$ and $mn$ ($m$ being a multiplier):
\begin{align}
Z_{critical} &= \mu_{H0} + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{critical} &= 0 + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{critical} &= \mu_{H1}-\mu_{H0} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{critical} &= \text{MDE} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
0 + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})} &= \text{MDE} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{\alpha} + Z_{\beta} &= \frac{\text{MDE}}{\sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}} \\
\frac{(m+1)\sigma^2}{mn} &= (\frac{\text{MDE}}{Z_{\alpha} + Z_{\beta}})^2 \\
n &= \frac{m+1}{m}(\frac{(Z_{\alpha} + Z_{\beta}) \sigma}{\text{MDE}})^2 \\
n &= 2(\frac{(Z_{\alpha} + Z_{\beta}) \sigma}{\text{MDE}})^2; m=1
\end{align}
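To get a sense of scale, we can plug hypothetical numbers into the final formula (one-sided $\alpha = 5\%$, power $= 80\%$, a 10% baseline conversion rate, MDE of 1%, $m = 1$):

```python
import numpy as np
from scipy import stats

alpha, beta, mde, m = 0.05, 0.8, 0.01, 1   # one-sided alpha, power, MDE, multiplier
sigma2 = 0.1 * 0.9                         # assumed variance from a 10% baseline rate

z_a = stats.norm.ppf(1 - alpha)
z_b = stats.norm.ppf(beta)

# n = ((m+1)/m) * sigma^2 * ((Z_alpha + Z_beta) / MDE)^2, from the derivation above
n = ((m + 1) / m) * sigma2 * ((z_a + z_b) / mde) ** 2
print(f'minimum samples per group: {int(np.ceil(n)):,}')
```

That is on the order of eleven thousand clicks per campaign; since $n \propto \frac{1}{\text{MDE}^2}$, halving the MDE would roughly quadruple it.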
Second, we make yet another crucial assumption: **the variance $\sigma^2$ we expect**. Remember that we used to estimate the variance using the pooled probability of our sample groups, but here we have not even started the experiment. In a conventional A/B testing scenario, we are testing whether an experimental variation is better than the existing one, so one choice is **using the sample variance of a campaign you are currently running**; for instance, if `A` is our current ad and we want to know if we should change to `B`, then we will use the conversion rate of `A` from a past time period to calculate the variance, say 10%.
Let us go back in time to before we even started our 2-month-long test between campaigns `A` and `B`. Now we assume not only an acceptable false positive rate alpha of 0.05 but also a minimum detectable effect of 1% and an expected variance of $\sigma^2 = 0.1 * (1-0.1) = 0.09$, then calculate the minimum number of samples we should collect for each campaign. You can see that had we done that, we would not have been able to reject the null hypothesis, and would have stuck with campaign `A` going forward.
The upside is that now we only have to run the test for about 5 days instead of 60 days assuming every day is the same for the campaigns (no peak traffic on weekends, for instance). The downside is that our null hypothesis gets much more specific with not only one but three assumptions:
* Long-run conversion rate of `B` is no better than `A`'s
* The difference that will matter to us is at least 1%
* The expected variance of conversion rates is $\sigma^2 = 0.1 * (1-0.1) = 0.09$
This fits many A/B testing scenarios: we might not want to change to a new variation even if it is better, unless it is better by enough to justify investing our time and money in changing our current setup. Try adjusting $\text{MDE}$ and $\sigma$ in the plot below and see how the number of required samples changes.
```
def proportion_samples(mde: float, p: float, m: float = 1,
alpha: float = 0.05,
beta: float = 0.8,
mode: str = 'one_sided') -> float:
'''
:meth: get number of required sample based on minimum detectable difference (in absolute terms)
:param float mde: minimum detectable difference
:param float p: pooled probability of both groups
:param float m: multiplier of number of samples; groups are n and nm
:param float alpha: alpha
:param float beta: beta
:param str mode: mode of test; `one_sided` or `two_sided`
:return: estimated number of samples to get significance
'''
variance = p * (1 - p)
z_b = sp.stats.norm.ppf(beta)
if mode == 'two_sided':
z_a = sp.stats.norm.ppf(1 - alpha / 2)
elif mode == 'one_sided':
z_a = sp.stats.norm.ppf(1 - alpha)
else:
raise ValueError('Available modes are `one_sided` and `two_sided`')
return ((m + 1) / m) * variance * ((z_a+z_b) / mde)**2
def plot_proportion_samples(mde, p, m=1, alpha=0.05,beta=0.8, mode='one_sided'):
minimum_samples = proportion_samples(mde, p,m, alpha,beta, mode)
g = (ggplot(conv_days, aes(x='cumu_click_a',y='cumu_z_value',color='cumu_p_value')) + geom_line() +
theme_minimal() +
xlab('Number of Samples per Campaign') + ylab('Z-value Calculated By Cumulative Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) +
annotate("text", label = "Above this line A is better than B", x = 30000, y = 2, color = 'red') +
annotate("text", label = "Below this line B is better than A", x = 30000, y = -2, color = 'green') +
annotate("text", label = f'Minimum required samples at MDE {mde}={int(minimum_samples)}', x = 30000, y = 0,) +
geom_vline(xintercept=minimum_samples))
g.draw()
#@title {run: "auto"}
mde = 0.01 #@param {type:"slider", min:0.001, max:0.01, step:1e-3}
p = 0.1 #@param {type:"slider", min:0, max:1, step:1e-3}
m = 1 #@param {type:"slider", min:0, max:1, step:1e-1}
alpha = 0.05 #@param {type:"slider", min:0.01, max:0.1, step:1e-3}
mode = 'one_sided' #@param ['one_sided','two_sided'] {type:"string"}
plot_proportion_samples(mde, p, m, alpha, mode=mode)
```
## You Will Get A Statistically Significant Result If You Try Enough Times
What the p-value represents is the false positive rate of our test: how unlikely it is to observe our sample groups given that they do not have different conversion rates in the long run. Let us re-simulate our campaigns `A` and `B` to both have an expected conversion rate of 10%. If we apply our current method, we can be comfortably sure that we will not get statistical significance (unless we have an extremely large number of samples).
```
conv_days = gen_bernoulli_campaign(p1 = 0.10,
p2 = 0.10,
timesteps = 60,
scaler=100,
seed = 1412) #god-mode
conv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks
conv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[0],1)
conv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[1],1)
conv_days['z_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'],
row['conv_b'],row['click_a'],
row['click_b'], mode='two_sided')[0],1)
conv_days['p_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'],
row['conv_b'],row['click_a'],
row['click_b'], mode='two_sided')[1],1)
g = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() +
xlab('Days in Campaign') + ylab('Z-value Calculated By Cumulative Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']))
g
```
Another approach is, instead of doing the test only once, to **do it every day using clicks and conversions of that day alone**. We will have 60 tests, 3 of which give statistically significant results that `A` and `B` have different conversion rates in the long run. The fact that exactly 5% of the tests turn positive, despite our knowing that none of them should, is not a coincidence. The critical Z-value is calculated based on an alpha of 5%, which means that even if there is no difference, 5% of the time we perform this test with this specific set of assumptions we will still get a positive result ([obligatory relevant xkcd strip](https://xkcd.com/882/); Munroe, n.d.).
```
g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() +
xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']) +
ggtitle(f'We Have {(conv_days.p_value<0.05).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.05).sum()/conv_days.shape[0]}%)'))
g
```
Not many people will test online ads campaigns based on daily data, but many researchers perform repeated experiments and, by necessity, repeated A/B tests as shown above. If you have a reason to believe that sample groups from different experiments have the same distribution, you might consider grouping them together and performing one large test as usual. Otherwise, you can tinker with the assumption of how many false positives you can tolerate. One such approach, among [others](https://en.wikipedia.org/wiki/Multiple_comparisons_problem), is the [Bonferroni correction](http://mathworld.wolfram.com/BonferroniCorrection.html). It scales your alpha down by the number of tests you perform to make sure that your overall false positive rate stays at most your original alpha. In our case, if we scale our alpha to $\alpha_{\text{new}}=\frac{0.05}{60} \approx 0.0008$, we will have the following statistically non-significant results.
```
g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() +
xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(1-0.0008/2),sp.stats.norm.ppf(0.0008/2)], color=['red','red']) +
     ggtitle(f'We Have {(conv_days.p_value<0.0008).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.0008).sum()/conv_days.shape[0]}%)'))
g
```
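Since a well-calibrated p-value is uniformly distributed under the null hypothesis, we can also sketch the effect of the correction with a quick simulation that is independent of our campaign data:

```python
import numpy as np

rng = np.random.default_rng(1412)
n_tests = 60
# under a true null hypothesis, a well-calibrated p-value is Uniform(0, 1)
p_values = rng.uniform(0, 1, size=n_tests)

naive_fp = (p_values < 0.05).sum()
bonferroni_fp = (p_values < 0.05 / n_tests).sum()
print(f'false positives at alpha=0.05:    {naive_fp}')
print(f'false positives at alpha=0.05/60: {bonferroni_fp}')
```

The corrected threshold rarely admits any false positive across the 60 tests, at the cost of making true effects harder to detect.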
## Best Practices
To the best of our knowledge, the most reasonable and practical way to perform a frequentist A/B test is to know your assumptions, including but not limited to:
* What distribution should your data be assumed to be drawn from? In many cases, we use Bernoulli distribution for proportions, Poisson distribution for counts and normal distribution for real numbers.
* Are you comparing your sample group to a fixed value or another sample group?
* Do you want to know if the expectation of the sample group is equal to, more than or less than its counterpart?
* What is the minimum detectable effect and how many samples should you collect? What is a reasonable variance to assume in order to calculate the required sample size?
* What is the highest false positive rate $\alpha$ that you can accept?
With these assumptions cleared up, you can most likely create a test statistic; then, with frequentist reasoning, you can determine whether the sample group you collected is unlikely enough given the null hypothesis that you would reject it.
## References
* Lemons, D. S. (2002). An introduction to stochastic processes in physics (Normal Sum Theorem, p. 34). Baltimore: Johns Hopkins University Press.
* Munroe, R. (n.d.). Significant [xkcd #882]. Retrieved from https://xkcd.com/882/
* Reinhart, A. (2015, March 1). The p value and the base rate fallacy. Retrieved from https://www.statisticsdonewrong.com/p-value.html
* [whuber](https://stats.stackexchange.com/users/919/whuber) (2017). Can a probability distribution value exceeding 1 be OK?. Retrieved from https://stats.stackexchange.com/q/4223
## Appendix
### Bessel's Correction for Sample Variance
Sample statistics can be thought of as estimators of true values; for instance, sample variance is an estimator of the variance of the "true" distribution. An estimator is said to be **biased** when its expectation is not equal to the true value (not to be confused with the LLN, where the estimator itself approaches the true value as the number of samples grows).
We can repeat the experiment we did for the LLN with sample mean and true mean, but this time we compare how the biased ($\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$) and unbiased ($\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$) versions of sample variance approach the true variance as the number of sample groups grows. Clearly, we can see that the biased sample variance normally underestimates the true variance.
```
def var(x, dof=0):
    # sample variance; dof=1 applies Bessel's correction
    n = x.shape[0]
    mu = np.sum(x)/n
    return np.sum((x - mu)**2) / (n-dof)
n_total = 10000 #total number of stuff
n_sample = 100 #number of samples per sample group
sg_range = range(1,100) #number of sample groups to take average of sample variances from
r = np.random.normal(loc=0,scale=1,size=n_total) #generate random variables based on Z distribution
pop_var = var(r) #true variance of the population
mean_s_bs = []
mean_s_us = []
for n_sg in sg_range:
s_bs = []
s_us =[]
for i in range(n_sg):
sg = np.random.choice(r,size=n_sample,replace=False)
s_bs.append(var(sg)) #biased sample variance
s_us.append(var(sg,1)) #unbiased sample variance
mean_s_bs.append(np.mean(s_bs))
mean_s_us.append(np.mean(s_us))
s_df = pd.DataFrame({'nb_var':sg_range,'biased_var':mean_s_bs,
'unbiased_var':mean_s_us}).melt(id_vars='nb_var')
g = (ggplot(s_df,aes(x='nb_var',y='value',color='variable',group='variable')) + geom_line() +
geom_hline(yintercept=pop_var) + theme_minimal() +
xlab('Number of Sample Groups') + ylab('Sample Mean of Sample Variance in Each Group'))
g
```
We derive exactly how much the bias is as follows:
$$B[s_{biased}^2] = E[s_{biased}^2] - \sigma^2 = E[s_{biased}^2 - \sigma^2]$$
where $B[s^2]$ is the bias of estimator (biased sample variance) $s_{biased}^2$ of variance $\sigma^2$. Then we can calculate the bias as:
\begin{align}
E[s_{biased}^2 - \sigma^2] &= E[\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2 - \frac{1}{n} \sum_{i=1}^n(X_i - \mu)^2] \\
&= \frac{1}{n}E[(\sum_{i=1}^n X_i^2 -2\bar{X}\sum_{i=1}^n X_i + n\bar{X}^2) - (\sum_{i=1}^n X_i^2 -2\mu\sum_{i=1}^n X_i + n\mu^2)] \\
&= E[\bar{X}^2 - \mu^2 - 2\bar{X}^2 + 2\mu\bar{X}] \\
&= -E[\bar{X}^2 -2\mu\bar{X} +\mu^2] \\
&= -E[(\bar{X} - \mu)^2] \\
&= -\frac{\sigma^2}{n} \text{; variance of sample mean}\\
E[s_{biased}^2] &= \sigma^2 - \frac{\sigma^2}{n} \\
&= (1-\frac{1}{n})\sigma^2
\end{align}
Therefore if we divide biased estimator $s_{biased}^2$ by $1-\frac{1}{n}$, we will get an unbiased estimator of variance $s_{unbiased}^2$,
\begin{align}
s_{unbiased}^2 &= \frac{s_{biased}^2}{1-\frac{1}{n}} \\
&= \frac{\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2}{1-\frac{1}{n}}\\
&= \frac{1}{n-1} \sum_{i=1}^n(X_i - \bar{X})^2
\end{align}
This is why the sample variance we usually use, $s^2$, has $n-1$ instead of $n$. Also, this is not to be confused with the variance of sample means, which is $\frac{\sigma^2}{n}$ when the variance of the base distribution is known or assumed, and $\frac{s^2}{n}$ when it is not.
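NumPy exposes both estimators through the `ddof` argument of `np.var`: the default `ddof=0` divides by $n$ (biased), while `ddof=1` divides by $n-1$ (Bessel's correction). A quick sketch averaging each estimator over many small sample groups:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100,000 sample groups of size 10 drawn from a standard normal (true variance 1)
samples = rng.normal(0, 1, size=(100_000, 10))

biased = samples.var(axis=1).mean()            # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1

print(f'biased:   {biased:.3f}')
print(f'unbiased: {unbiased:.3f}')
```

The biased estimator hovers around $(1-\frac{1}{10}) \times 1 = 0.9$, exactly the factor derived above, while the corrected one stays close to 1.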
### Mass vs Density
You might wonder why the sample mean distribution has a Y-axis that exceeds 1 even though it seemingly should represent the probability of each value of the sample mean. The short answer is that it does not represent probability but rather the **probability density function**. The long answer is that there are two ways of representing probability distributions, depending on whether they describe **discrete** or **continuous** data. See also this excellent [answer on Stack Exchange](https://stats.stackexchange.com/questions/4220/can-a-probability-distribution-value-exceeding-1-be-ok) (whuber, 2017).
**Discrete probability distributions** contain values that are finite in number (for instance, $1, 2, 3, ...$) or countably infinite (for instance, $\frac{1}{2^i}$ where $i=1, 2, 3, ...$). They include, but are not limited to, the distributions we have used to demonstrate the CLT, namely the uniform, Bernoulli and Poisson distributions. In all these distributions, the Y-axis, now called the **probability mass function**, represents the exact probability each value on the X-axis will take, such as in the Bernoulli distribution we have shown before:
```
flips = np.random.choice([0,1], size=n, p=[1-p,p])
flips_df = pd.DataFrame(flips)
flips_df.columns = ['conv_flag']
g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) +
theme_minimal() + xlab('Value') + ylab('Probability Mass Function') +
ggtitle(f'Bernoulli Distribution'))
g
```
A **continuous probability distribution** contains uncountably many values (for instance, all real numbers between 0 and 1). Since there are infinitely many values, the probability of each individual value is essentially zero (what are the chances of winning a lottery that has an infinite number of digits?). Therefore, instead of the exact probability of each value (probability mass function), the Y-axis represents the **probability density function**. This can be thought of as the total probability within an immeasurably small interval around the value, divided by the width of that interval. Take the example of a normal distribution with expectation $\mu=0$ and variance $\sigma^2=0.01$. The probability density function at the value 0 is:
\begin{align}
f(x) &= \frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-(x-\mu)^2}{2\sigma^2}}\\
&= \frac{1}{\sqrt{2\pi(0.01)}} e^{\frac{-(x-0)^2}{2(0.01)}} \text{; }\mu=0;\sigma^2=0.01 \\
&\approx 3.989 \text{; when } x=0
\end{align}
This of course does not mean that there is a 398.9% chance that we will draw the value 0; rather, it is the density of probability around that value. The actual probability of a tiny interval around 0 is 3.989 times the (immeasurably small) width of the interval, which will be between 0 and 1.
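We can verify both numbers with scipy: the density at 0 exceeds 1, yet the probability of any actual interval around 0 stays well below 1.

```python
from scipy import stats

dist = stats.norm(loc=0, scale=0.1)  # normal with variance 0.01

density_at_zero = dist.pdf(0)                        # ~3.989, greater than 1
prob_near_zero = dist.cdf(0.005) - dist.cdf(-0.005)  # probability of a small interval

print(f'density at 0:          {density_at_zero:.3f}')
print(f'P(-0.005 < X < 0.005): {prob_near_zero:.4f}')
```

The interval probability is roughly density times width ($3.989 \times 0.01 \approx 0.04$), which is a perfectly valid probability.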
Intuitively, we can think of these intervals as starting from relatively large widths such as 0.1 and gradually decreasing to smaller ones such as 0.005. As you can see from the plot below, the plot becomes more fine-grained and looks more "normal" as the intervals get smaller.
```
def prob_density(step,mu=0,sigma=0.1):
x = np.arange(-0.5, 0.5, step)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y})
g = (ggplot(sm_df, aes(x='x', y='y')) + geom_bar(stat='identity') +
theme_minimal() + xlab('Value') + ylab('Probability Density Function') +
         ggtitle(f'Normal Distribution with Expectation={mu} and Variance={sigma**2:.2f}'))
g.draw()
# interact(prob_density, step=widgets.FloatSlider(min=5e-3,max=1e-1,value=1e-1,step=1e-3,readout_format='.3f'))
#@title {run: "auto"}
step = 0.1 #@param {type:"slider", min:5e-3, max:0.1, step:1e-3}
prob_density(step)
```