# Part 9 - Intro to Encrypted Programs
Believe it or not, it is possible to compute with encrypted data. In other words, it's possible to run a program where **all of the variables in the program** are **encrypted**!
In this tutorial, we're going to walk through the very basic tools of encrypted computation. In particular, we're going to focus on one popular approach called Secure Multi-Party Computation. In this lesson, we'll learn how to build an encrypted calculator which can perform calculations on encrypted numbers.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Théo Ryffel - GitHub: [@LaRiffle](https://github.com/LaRiffle)
References:
- Morten Dahl - [Blog](https://mortendahl.github.io) - Twitter: [@mortendahlcs](https://twitter.com/mortendahlcs)
Translated using nbTranslate
Editor:
- Urvashi Raheja - Github: [@raheja](https://github.com/raheja)
# Step 1: Encryption Using Secure Multi-Party Computation
SMPC is at first glance a rather strange form of "encryption". Instead of using a public/private key to encrypt a variable, each value is split into multiple `shares`, each of which operates like a private key. Typically, these `shares` will be distributed amongst 2 or more _owners_. Thus, in order to decrypt the variable, all owners must agree to allow the decryption. In essence, everyone has a private key.
### Encrypt()
So, let's say we wanted to "encrypt" a variable `x`; we could do so in the following way.
> Encryption doesn't use floats or real numbers, but instead happens in a mathematical space called an [integer quotient ring](http://mathworld.wolfram.com/QuotientRing.html), which is basically the integers between `0` and `Q-1`, where `Q` is prime and "big enough" that the space can contain all the numbers we use in our experiments. In practice, given an integer value `x`, we do `x % Q` to fit into the ring. (That's why we avoid using numbers `x > Q`.)
```
import random

Q = 1234567891011
x = 25

def encrypt(x):
    share_a = random.randint(-Q, Q)
    share_b = random.randint(-Q, Q)
    share_c = (x - share_a - share_b) % Q
    return (share_a, share_b, share_c)

encrypt(x)
```
As you can see here, we've split our variable `x` into 3 different shares, which could be sent to 3 different owners.
### Decrypt()
If we wanted to decrypt these 3 shares, we could simply sum them together and take the modulus of the result (mod Q).
```
def decrypt(*shares):
    return sum(shares) % Q

a, b, c = encrypt(25)
decrypt(a, b, c)
```
Importantly, notice that if we try to decrypt with only two of the shares, the decryption does not work!
```
decrypt(a, b)
```
Thus, we need all of the owners in order to decrypt the value. It is in this way that the `shares` act like private keys, all of which must be present in order to decrypt a value.
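Why doesn't a subset of shares leak anything? Because for any two shares, every possible plaintext remains equally plausible: there is always some third share that would make the triple decrypt to it. The following standalone sketch (using the same ring `Q` as above) illustrates this; it is an extra check for intuition, not part of the original protocol.

```python
import random

Q = 1234567891011

# An attacker who holds only two of the three shares...
a = random.randint(-Q, Q)
b = random.randint(-Q, Q)

# ...cannot rule out ANY plaintext: for every candidate value x',
# there exists a third share c that makes (a, b, c) decrypt to x'.
for candidate in [0, 25, 42, Q - 1]:
    c = (candidate - a - b) % Q
    assert (a + b + c) % Q == candidate
```

In other words, two shares are statistically consistent with every value in the ring, which is exactly why all owners must cooperate to decrypt.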
# Step 2: Basic Arithmetic Using SMPC
However, the truly extraordinary property of Secure Multi-Party Computation is the ability to perform computation **while the variables are still encrypted**. Let's demonstrate simple addition below.
```
x = encrypt(25)
y = encrypt(5)

def add(x, y):
    z = list()
    # the first worker adds their shares together
    z.append((x[0] + y[0]) % Q)
    # the second worker adds their shares together
    z.append((x[1] + y[1]) % Q)
    # the third worker adds their shares together
    z.append((x[2] + y[2]) % Q)
    return z

decrypt(*add(x, y))
```
### Success!!!
And there you have it! If each worker (separately) adds their shares together, the resulting shares will decrypt to the correct value (25 + 5 == 30).
As it turns out, SMPC protocols exist which allow this kind of encrypted computation for the following operations:
- addition (which we've just seen)
- multiplication
- comparison
And using these basic underlying primitives, we can perform arbitrary computation!!!
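Addition works share-wise, but multiplication requires extra machinery. One classic construction (not covered in this tutorial, and not necessarily what PySyft uses internally) is Beaver multiplication triples: a dealer distributes shares of a random triple (a, b, c) with c = a·b mod Q, and this correlated randomness lets the parties multiply without revealing their inputs. A minimal two-party sketch, with names chosen here for illustration:

```python
import random

Q = 1234567891011  # same ring as above

def share(x):
    # split x into two additive shares mod Q
    a = random.randrange(Q)
    return a, (x - a) % Q

def reconstruct(s0, s1):
    return (s0 + s1) % Q

# The dealer (think "crypto provider") samples a random triple
# (a, b, c) with c = a * b mod Q and hands each party one share of each.
a, b = random.randrange(Q), random.randrange(Q)
c = (a * b) % Q
a_sh, b_sh, c_sh = share(a), share(b), share(c)

def beaver_mul(x_sh, y_sh):
    # Each party publishes its share of d = x - a and e = y - b;
    # d and e reveal nothing because a and b are uniform masks.
    d = reconstruct((x_sh[0] - a_sh[0]) % Q, (x_sh[1] - a_sh[1]) % Q)
    e = reconstruct((y_sh[0] - b_sh[0]) % Q, (y_sh[1] - b_sh[1]) % Q)
    # x*y = c + d*b + e*a + d*e; only party 0 adds the public d*e term
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % Q
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % Q
    return z0, z1

z = beaver_mul(share(25), share(5))
print(reconstruct(*z))  # → 125
```

The identity behind the last step is (d + a)(e + b) = de + db + ea + ab, which is why summing those terms share-wise reconstructs x·y. (In a real protocol each triple is used for exactly one multiplication.)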
In the next section, we're going to learn how to use the PySyft library to perform these operations!
# Step 3: SMPC Using PySyft
In the previous sections, we outlined some of the basic intuitions around how SMPC is supposed to work. However, in practice we don't want to hand-write all of the primitive operations ourselves when writing our encrypted programs. So, in this section we're going to walk through the basics of how to perform encrypted computation using PySyft. In particular, we're going to focus on how to perform the 3 primitives mentioned previously: addition, multiplication, and comparison.
First, we need to create a few Virtual Workers (which hopefully you're now familiar with from our previous tutorials).
```
import torch
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
bill = sy.VirtualWorker(hook, id="bill")
```
### Basic Encryption/Decryption
Encryption is as simple as taking any PySyft tensor and calling `.share()`. Decryption is as simple as calling `.get()` on the shared variable.
```
x = torch.tensor([25])
x
encrypted_x = x.share(bob, alice, bill)
encrypted_x.get()
```
### Introspecting the Encrypted Values
If we look closer at Bob's, Alice's, and Bill's workers, we can see the shares that get created!
```
bob._objects
x = torch.tensor([25]).share(bob, alice, bill)
# Bob's share
bobs_share = list(bob._objects.values())[0]
bobs_share
# Alice's share
alices_share = list(alice._objects.values())[0]
alices_share
# Bill's share
bills_share = list(bill._objects.values())[0]
bills_share
```
And if we wanted to, we could decrypt these values using the same approach we talked about earlier!!!
```
(bobs_share + alices_share + bills_share)
```
As you can see, when we called `.share()` it simply split the value into 3 shares and sent one share to each of the parties!
# Encrypted Arithmetic
And now you see that we can perform arithmetic on the underlying values! The API is constructed so that we can perform arithmetic just like we would with normal PyTorch tensors.
```
x = torch.tensor([25]).share(bob,alice)
y = torch.tensor([5]).share(bob,alice)
z = x + y
z.get()
z = x - y
z.get()
```
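Under the hood, subtraction works just like the addition we implemented by hand earlier: each worker operates on its own shares locally. Here is a standalone plain-Python sketch mirroring the `x - y` call above (same ring `Q`; the share values are picked arbitrarily for illustration):

```python
Q = 1234567891011

def sub(x, y):
    # each worker subtracts its own shares locally, mod Q
    return [(xi - yi) % Q for xi, yi in zip(x, y)]

def decrypt(shares):
    return sum(shares) % Q

x = (10, 7, (25 - 10 - 7) % Q)  # three shares of 25
y = (3, 1, (5 - 3 - 1) % Q)     # three shares of 5
print(decrypt(sub(x, y)))  # → 20
```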
# Encrypted Multiplication
For multiplication we need an additional party who is responsible for consistently generating random numbers (and not colluding with any of the other parties). We call this party a "crypto provider". For all intents and purposes, the crypto provider is just an additional VirtualWorker, but it's important to acknowledge that the crypto provider is not an "owner" in that he/she does not own any shares, but is someone who must be trusted not to collude with any of the existing shareholders.
```
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
# multiplication
z = x * y
z.get()
```
You can also perform matrix multiplication.
```
x = torch.tensor([[1, 2],[3,4]]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([[2, 0],[0,2]]).share(bob,alice, crypto_provider=crypto_provider)
# matrix multiplication
z = x.mm(y)
z.get()
```
# Encrypted Comparison
It is also possible to perform private comparisons between private values. Here we rely on the SecureNN protocol, the details of which can be found [here](https://eprint.iacr.org/2018/442.pdf). The result of the comparison is also a private shared tensor.
```
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
z = x > y
z.get()
z = x <= y
z.get()
z = x == y
z.get()
z = x == y + 20
z.get()
```
You can also perform max operations.
```
x = torch.tensor([2, 3, 4, 1]).share(bob,alice, crypto_provider=crypto_provider)
x.max().get()
x = torch.tensor([[2, 3], [4, 1]]).share(bob,alice, crypto_provider=crypto_provider)
max_values, max_ids = x.max(dim=0)
max_values.get()
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy-preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub issues page and filter for "Projects". This will show you all the top-level tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for "good first issue" mini-projects by searching the GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
<a href="https://colab.research.google.com/github/bs3537/dengueAI/blob/master/V5_San_Juan_XGB_all_environmental_features.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#https://www.drivendata.org/competitions/44/dengai-predicting-disease-spread/page/80/
#Your goal is to predict the total_cases label for each (city, year, weekofyear) in the test set.
#Performance metric = mean absolute error
```
## LIST OF FEATURES:
You are provided the following set of information on a (year, weekofyear) timescale:
(Where appropriate, units are provided as a _unit suffix on the feature name.)
### City and date indicators
1. city – City abbreviations: sj for San Juan and iq for Iquitos
2. week_start_date – Date given in yyyy-mm-dd format
### NOAA's GHCN daily climate data weather station measurements
1. station_max_temp_c – Maximum temperature
2. station_min_temp_c – Minimum temperature
3. station_avg_temp_c – Average temperature
4. station_precip_mm – Total precipitation
5. station_diur_temp_rng_c – Diurnal temperature range
### PERSIANN satellite precipitation measurements (0.25x0.25 degree scale)
6. precipitation_amt_mm – Total precipitation
### NOAA's NCEP Climate Forecast System Reanalysis measurements (0.5x0.5 degree scale)
7. reanalysis_sat_precip_amt_mm – Total precipitation
8. reanalysis_dew_point_temp_k – Mean dew point temperature
9. reanalysis_air_temp_k – Mean air temperature
10. reanalysis_relative_humidity_percent – Mean relative humidity
11. reanalysis_specific_humidity_g_per_kg – Mean specific humidity
12. reanalysis_precip_amt_kg_per_m2 – Total precipitation
13. reanalysis_max_air_temp_k – Maximum air temperature
14. reanalysis_min_air_temp_k – Minimum air temperature
15. reanalysis_avg_temp_k – Average air temperature
16. reanalysis_tdtr_k – Diurnal temperature range
### Satellite vegetation - Normalized difference vegetation index (NDVI) - NOAA's CDR Normalized Difference Vegetation Index (0.5x0.5 degree scale) measurements
17. ndvi_se – Pixel southeast of city centroid
18. ndvi_sw – Pixel southwest of city centroid
19. ndvi_ne – Pixel northeast of city centroid
20. ndvi_nw – Pixel northwest of city centroid
#### TARGET VARIABLE = total_cases label for each (city, year, weekofyear)
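Since the same measurements come from several sources, it can help to organize the column names above by their source prefix before modeling. The helper below is a small organizational sketch (the column list is abbreviated to a sample of the names above):

```python
# Group feature names by data source, using their prefixes.
feature_groups = {
    "station": [],      # GHCN weather-station measurements
    "reanalysis": [],   # NCEP Climate Forecast System Reanalysis
    "ndvi": [],         # satellite vegetation indices
    "satellite": [],    # PERSIANN precipitation
}

columns = [
    "station_max_temp_c", "station_min_temp_c", "station_avg_temp_c",
    "station_precip_mm", "station_diur_temp_rng_c", "precipitation_amt_mm",
    "reanalysis_sat_precip_amt_mm", "reanalysis_dew_point_temp_k",
    "ndvi_se", "ndvi_sw", "ndvi_ne", "ndvi_nw",
]

for col in columns:
    if col.startswith("station_"):
        feature_groups["station"].append(col)
    elif col.startswith("reanalysis_"):
        feature_groups["reanalysis"].append(col)
    elif col.startswith("ndvi_"):
        feature_groups["ndvi"].append(col)
    else:
        feature_groups["satellite"].append(col)

print({k: len(v) for k, v in feature_groups.items()})
# → {'station': 5, 'reanalysis': 2, 'ndvi': 4, 'satellite': 1}
```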
```
import sys
#Load train features and labels datasets
train_features = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_features_train.csv')
train_features.head()
train_features.shape
train_labels = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_labels_train.csv')
train_labels.head()
train_labels.shape
#Merge train features and labels datasets
train = pd.merge(train_features, train_labels)
train.head()
train.shape
#city, year and week of year columns are duplicate in train_features and train_labels datasets so the total_cases column is added to the features dataset
train.dtypes
#Data rows for San Juan
train.city.value_counts()
#San Juan has 936 rows which we can isolate and analyze separately
train = train[train['city'].str.match('sj')]
train.head(5)
train.shape
#Thus, we have isolated the train dataset with only city data for San Juan
#Distribution of the target
import seaborn as sns
sns.distplot(train['total_cases'])
#The target distribution is skewed
#Find outliers
train['total_cases'].describe()
#Remove outliers
train = train[(train['total_cases'] >= np.percentile(train['total_cases'], 0.5)) &
(train['total_cases'] <= np.percentile(train['total_cases'], 99.5))]
train.shape
sns.distplot(train['total_cases'])
#Do train, val split
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
random_state=42)
train.shape, val.shape
#Load test features dataset (for the competition)
test = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_features_test.csv')
#Pandas Profiling
```
##### Baseline statistics (mean and MAE) for the target variable total_cases in train dataset and baseline validation MAE
```
train['total_cases'].describe()
#Baseline mean and mean absolute error
guess = train['total_cases'].mean()
print(f'At the baseline, the mean total number of dengue cases in a year is: {guess:.2f}')
#If we had just guessed that the total number of dengue cases was 31.58 for a city in a particular year, we would be off by how much?
from sklearn.metrics import mean_absolute_error
# Arrange y target vectors
target = 'total_cases'
y_train = train[target]
y_val = val[target]
# Get mean baseline
print('Mean Baseline (using 0 features)')
guess = y_train.mean()
# Train Error
y_pred = [guess] * len(y_train)
mae = mean_absolute_error(y_train, y_pred)
print(f'Train mean absolute error: {mae:.2f} dengue cases per year')
# Test Error
y_pred = [guess] * len(y_val)
mae = mean_absolute_error(y_val, y_pred)
print(f'Validation mean absolute error: {mae:.2f} dengue cases per year')
#we need to convert week_start_date to numeric form using the pd.to_datetime function
#wrangle function
def wrangle(X):
    X = X.copy()
    # Convert week_start_date to numeric form
    X['week_start_date'] = pd.to_datetime(X['week_start_date'], infer_datetime_format=True)
    # Extract components from week_start_date, then drop the original column
    X['year_recorded'] = X['week_start_date'].dt.year
    X['month_recorded'] = X['week_start_date'].dt.month
    #X['day_recorded'] = X['week_start_date'].dt.day
    X = X.drop(columns='week_start_date')
    X = X.drop(columns='year')
    X = X.drop(columns='station_precip_mm')
    #I engineered a few features which represent standing water, a high-risk factor for mosquitoes
    #1. X['standing water feature 1'] = X['station_precip_mm'] / X['station_max_temp_c']
    #Total vegetation, summed over all 4 parts of the city
    X['total satellite vegetation index of city'] = X['ndvi_se'] + X['ndvi_sw'] + X['ndvi_ne'] + X['ndvi_nw']
    #Standing water feature 1 = reanalyzed precipitation (kg/m2) * total vegetation
    X['standing water feature 1'] = X['reanalysis_precip_amt_kg_per_m2'] * X['total satellite vegetation index of city']
    #Standing water feature 2 = reanalyzed precipitation (kg/m2) * mean relative humidity (%)
    X['standing water feature 2'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent']
    #Standing water feature 3 = reanalyzed precipitation (kg/m2) * mean relative humidity (%) * total vegetation
    X['standing water feature 3'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent'] * X['total satellite vegetation index of city']
    #Standing water feature 4 = reanalyzed precipitation (kg/m2) / max air temperature (K)
    X['standing water feature 4'] = X['reanalysis_precip_amt_kg_per_m2'] / X['reanalysis_max_air_temp_k']
    #Standing water feature 5 = feature 3 / max air temperature (K)
    X['standing water feature 5'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent'] * X['total satellite vegetation index of city'] / X['reanalysis_max_air_temp_k']
    #Rename columns to human-readable names
    X = X.rename(columns={
        'reanalysis_air_temp_k': 'Mean air temperature in K',
        'reanalysis_min_air_temp_k': 'Minimum air temperature in K',
        'weekofyear': 'Week of Year',
        'station_diur_temp_rng_c': 'Diurnal temperature range in C',
        'reanalysis_precip_amt_kg_per_m2': 'Total precipitation kg/m2',
        'reanalysis_tdtr_k': 'Diurnal temperature range in K',
        'reanalysis_max_air_temp_k': 'Maximum air temperature in K',
        'year_recorded': 'Year recorded',
        'reanalysis_relative_humidity_percent': 'Mean relative humidity',
        'month_recorded': 'Month recorded',
        'reanalysis_dew_point_temp_k': 'Mean dew point temp in K',
        'precipitation_amt_mm': 'Total precipitation in mm',
        'station_min_temp_c': 'Minimum temp in C',
        'ndvi_se': 'Southeast vegetation index',
        'ndvi_ne': 'Northeast vegetation index',
        'ndvi_nw': 'Northwest vegetation index',
        'ndvi_sw': 'Southwest vegetation index',
        'reanalysis_avg_temp_k': 'Average air temperature in K',
        'reanalysis_sat_precip_amt_mm': 'Total precipitation in mm (2)',
        'reanalysis_specific_humidity_g_per_kg': 'Mean specific humidity',
        'station_avg_temp_c': 'Average temp in C',
        'station_max_temp_c': 'Maximum temp in C',
        'total_cases': 'Total dengue cases in the week',
    })
    #Drop columns not used for modeling
    X = X.drop(columns=[
        'Year recorded', 'Week of Year', 'Month recorded',
        'Total precipitation in mm (2)', 'Average temp in C',
        'Maximum temp in C', 'Minimum temp in C',
        'Diurnal temperature range in C', 'Average air temperature in K',
        'city',
    ])
    # return the wrangled dataframe
    return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
train.head().T
#Before we build the model to train on train dataset, log transform target variable due to skew
import numpy as np
target_log = np.log1p(train['Total dengue cases in the week'])
sns.distplot(target_log)
plt.title('Log-transformed target');
target_log_series = pd.Series(target_log)
train = train.assign(log_total_cases = target_log_series)
#drop total_cases target column while training the model
train = train.drop(columns='Total dengue cases in the week')
#Do the same log transformation with validation dataset
target_log_val = np.log1p(val['Total dengue cases in the week'])
target_log_val_series = pd.Series(target_log_val)
val = val.assign(log_total_cases = target_log_val_series)
val = val.drop(columns='Total dengue cases in the week')
#Fitting XGBoost Regresser model
#Define target and features
# The status_group column is the target
target = 'log_total_cases'
# Get a dataframe with all train columns except the target
train_features = train.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Combine the lists
features = numeric_features
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
!pip install category_encoders
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
import xgboost as xgb
from xgboost import XGBRegressor
from sklearn import model_selection, preprocessing
processor = make_pipeline(
SimpleImputer(strategy='mean')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
model = XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='mae',
early_stopping_rounds=10)
results = model.evals_result()
train_error = results['validation_0']['mae']
val_error = results['validation_1']['mae']
iterations = range(1, len(train_error) + 1)
plt.figure(figsize=(10,7))
plt.plot(iterations, train_error, label='Train')
plt.plot(iterations, val_error, label='Validation')
plt.title('XGBoost Validation Curve')
plt.ylabel('Mean Absolute Error (log transformed)')
plt.xlabel('Model Complexity (n_estimators)')
plt.legend();
#predict on X_val
y_pred = model.predict(X_val_processed)
print('XGBoost Validation Mean Absolute Error (log transformed)', mean_absolute_error(y_val, y_pred))
#Transform y_pred back to original units from log transformed
y_pred_original = np.expm1(y_pred)
y_val_original = np.expm1(y_val)
print('XGBoost Validation Mean Absolute Error (non-log transformed)', mean_absolute_error(y_val_original, y_pred_original))
```
# Introduction
This notebook demonstrates basic usage of BioThings Explorer, an engine for autonomously querying a distributed knowledge graph. BioThings Explorer can answer two classes of queries -- "PREDICT" and "EXPLAIN". PREDICT queries are described in [PREDICT_demo.ipynb](PREDICT_demo.ipynb). Here, we describe EXPLAIN queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer system is provided in [these slides](https://docs.google.com/presentation/d/1QWQqqQhPD_pzKryh6Wijm4YQswv8pAjleVORCPyJyDE/edit?usp=sharing).
EXPLAIN queries are designed to **identify plausible reasoning chains to explain the relationship between two entities**. For example, in this notebook, we explore the question:
"*Why does hydroxychloroquine have an effect on ACE2?*"
**To experiment with an executable version of this notebook, [load it in Google Colaboratory](https://colab.research.google.com/github/biothings/biothings_explorer/blob/master/jupyter%20notebooks/EXPLAIN_ACE2_hydroxychloroquine_demo.ipynb).**
## Step 0: Load BioThings Explorer modules
First, install the `biothings_explorer` and `biothings_schema` packages, as described in this [README](https://github.com/biothings/biothings_explorer/blob/master/jupyter%20notebooks/README.md#prerequisite). This only needs to be done once (but we include it here for compatibility with [colab](https://colab.research.google.com/)).
```
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
```
Next, import the relevant modules:
* **Hint**: Find corresponding bio-entity representation used in BioThings Explorer based on user input (could be any database IDs, symbols, names)
* **FindConnection**: Find intermediate bio-entities which connects user specified input and output
```
# import modules from biothings_explorer
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection
```
## Step 1: Find representation of "ACE2" and "hydroxychloroquine" in BTE
In this step, BioThings Explorer translates our query strings "ACE2" and "hydroxychloroquine" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the `Hint` module will be the correct item, but you should confirm that using the identifiers shown.
Search terms can correspond to any child of [BiologicalEntity](https://biolink.github.io/biolink-model/docs/BiologicalEntity.html) from the [Biolink Model](https://biolink.github.io/biolink-model/docs/), including `DiseaseOrPhenotypicFeature` (e.g., "lupus"), `ChemicalSubstance` (e.g., "acetaminophen"), `Gene` (e.g., "CDK2"), `BiologicalProcess` (e.g., "T cell differentiation"), and `Pathway` (e.g., "Citric acid cycle").
```
ht = Hint()
# find all potential representations of ACE2
ace2_hint = ht.query("ACE2")
# select the correct representation of ACE2
ace2 = ace2_hint['Gene'][0]
ace2
# find all potential representations of hydroxychloroquine
hydroxychloroquine_hint = ht.query("hydroxychloroquine")
# select the correct representation of hydroxychloroquine
hydroxychloroquine = hydroxychloroquine_hint['ChemicalSubstance'][0]
hydroxychloroquine
```
## Step 2: Find intermediate nodes connecting ACE2 and hydroxychloroquine
In this section, we find all paths in the knowledge graph that connect ACE2 and hydroxychloroquine. To do that, we will use `FindConnection`. This class is a convenient wrapper around two advanced functions for **query path planning** and **query path execution**. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.
The parameters for `FindConnection` are described below:
```
help(FindConnection.__init__)
```
Here, we formulate a `FindConnection` query with ACE2 as the `input_obj` and hydroxychloroquine as the `output_obj`. We further specify with the `intermediate_nodes` parameter that we are looking for paths joining ACE2 and hydroxychloroquine with *one* intermediate node that is a `BiologicalEntity`. (The ability to search for longer reasoning paths that include additional intermediate nodes will be added shortly.)
```
fc = FindConnection(input_obj=ace2, output_obj=hydroxychloroquine, intermediate_nodes=['BiologicalEntity'])
```
We next execute the `connect` method, which performs the **query path planning** and **query path execution** process. In short, BioThings Explorer is deconstructing the query into individual API calls, executing those API calls, then assembling the results.
A verbose log of this process is displayed below:
```
# set verbose=True will display all steps which BTE takes to find the connection
fc.connect(verbose=True)
```
## Step 3: Display and Filter results
This section demonstrates post-query filtering done in Python. Later, more advanced filtering functions will be added to the **query path execution** module for interleaved filtering, thereby enabling longer query paths. More details to come...
First, all matching paths can be exported to a data frame. Let's examine a sample of those results.
```
df = fc.display_table_view()
df.head()
```
While most results are based on edges from [semmed](https://skr3.nlm.nih.gov/SemMed/), edges from [DGIdb](http://www.dgidb.org/), [biolink](https://monarchinitiative.org/), [disgenet](http://www.disgenet.org/), [mydisease.info](https://mydisease.info) and [drugcentral](http://drugcentral.org/) were also retrieved from their respective APIs.
Next, let's look at which types of intermediate nodes connect the two entities.
```
df.node1_type.unique()
```
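To count gene mentions specifically, one could filter the table to gene-type intermediate nodes and tally their names. The snippet below is a sketch using a toy stand-in DataFrame, since the exact column names (`node1_type`, `node1_name`) in the output of `fc.display_table_view()` are assumptions here:

```python
import pandas as pd

# Toy stand-in for fc.display_table_view(); column names are assumed.
df = pd.DataFrame({
    "node1_type": ["Gene", "Gene", "ChemicalSubstance", "Gene"],
    "node1_name": ["TLR9", "ACE2", "chloroquine", "TLR9"],
})

# Keep only intermediate nodes that are genes, then count mentions.
gene_counts = df[df["node1_type"] == "Gene"]["node1_name"].value_counts()
print(gene_counts.head())
```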
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
scores = pd.read_csv('numerical_data.csv')
scores
scores.drop('Unnamed: 0', axis=1, inplace=True)
scores
a_scores = scores[['num_features', 'a_fitness', 'a_accuracy', 'a_sensitivity', 'a_specificity']]
a_scores
kin_df = pd.read_csv('reported_kinases.csv', header=None, names=range(32))
kin_df
kin = []
with open('reported_kinases.csv', 'r') as f:
    for line in f:
        kin.append(line.strip().split(','))
kin
k_cnts = {}
for l in kin:
    for k in l:
        if k in k_cnts:
            k_cnts[k] += 1
        else:
            k_cnts[k] = 1
kin_cnt_ser = pd.Series(list(k_cnts.values()), index=k_cnts.keys())
kin_cnt_ser.sort_values(ascending=False, inplace=True)
kin_cnt_ser
groups = []
with open('linked_groups.csv', 'r') as f:
    for line in f:
        groups.append(line.strip().split(','))
for i in range(len(groups)):
    groups[i] = set(groups[i])
groups
new_col = [-2 for i in range(len(kin_cnt_ser.index))]
for i in range(len(kin_cnt_ser.index)):
    for j in range(len(groups)):
        if kin_cnt_ser.index[i] in groups[j]:
            new_col[i] = j
            continue
    if new_col[i] == -2:
        new_col[i] = -1
-2 in new_col
kin_cnt_df = pd.DataFrame({'Kinase': kin_cnt_ser.index, 'Count': kin_cnt_ser, 'Group': new_col})
kin_cnt_df
kin_cnt_df['Group'].value_counts()
fig1 = plt.figure(figsize=(8, 6))
ax1_1 = fig1.add_subplot(111)
ax1_1.bar(kin_cnt_df['Group'].value_counts().index, kin_cnt_df['Group'].value_counts())
ax1_1.set_xticks(list(range(-1, 15)))
ax1_1.set_title('Number of Kinases Reported From Linked Groups')
ax1_1.set_xlabel('Linked Group (-1 for None of the Listed Groups)')
ax1_1.set_ylabel('Reported Kinases')
plt.show()
grp_freq = kin_cnt_df['Group'].value_counts()
total = 0  # avoid shadowing the built-in sum()
for g in groups:
    total += len(g)
total
for i in range(grp_freq.size):
    if grp_freq.index[i] == -1:
        grp_freq.iloc[i] /= (190-46)
    else:
        grp_freq.iloc[i] /= len(groups[grp_freq.index[i]])
grp_freq.sort_values(ascending=False, inplace=True)
grp_freq
a_scores
import seaborn as sns
sns.distplot(a_scores['num_features'])
sns.distplot(a_scores['a_fitness'])
sns.distplot(a_scores['a_accuracy'])
sns.distplot(a_scores['a_sensitivity'])
sns.distplot(a_scores['a_specificity'])
print('{:20} Mean | StdDev'.format(' '))
print('{:20} --------|----------'.format(' '))
for col in a_scores:
    print('{:20} : {:7.4} | {:7.4}'.format(col, a_scores[col].mean(), a_scores[col].std()))
kin_cnt_df[(kin_cnt_df['Group'] == -1) & (kin_cnt_df['Count'] > 2)]
kin_cnt_df[(kin_cnt_df['Group'] != -1) & (kin_cnt_df['Count'] > 2)]
```
```
import __init__
from __init__ import DATA_PATH
from __init__ import PACKAGE_PATH
from dce import DCE
from cluster import Cluster
import utilities
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
from keras import Model
%matplotlib inline
from rdkit.Chem import MACCSkeys
df = pd.read_csv(os.path.join(DATA_PATH,'BOD_RDKit_Descriptors_1063.csv'))
df['cleaned_bod'] = utilities.clean_out_of_bound(df['value (% BOD)'])
df['bi_class_partition'] = utilities.divide_classes(df['cleaned_bod'], [60])
true_label_biclass = df['bi_class_partition'].values
plt.hist(true_label_biclass)
from descriptor import rdkitDescriptors as rDesc
fps = rDesc.batch_compute_MACCSkeys(df['SMILES'])
sns.set(style='white', font_scale=3)
```
## 1. Baseline Model: Kmeans
```
from cluster import KMeans
kmeans_cluster = KMeans(n_clusters=2)
kmeans_cluster.build_model()
kmeans_cluster.train_model(fps, true_labels=true_label_biclass)
fig, ax=plt.subplots(1,1, figsize=(16,9))
utilities.tsne_2d_visulization(input_feat=fps,
plot_labels=true_label_biclass,
labels=['Non-Biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend()
plt.savefig('1.baseline_model.pdf', bbox_inches='tight')
```
## 2. Two-step Training: Autoencoder + KMeans
```
from dimreducer import DeepAutoEncoder as DAE
dims = [167, 120, 60]
autoencoder = DAE(dims, act='relu')
autoencoder.build_model(norm=False)
history = autoencoder.train_model(fps, loss="binary_crossentropy", verbose=0, epochs=60)
plt.plot(history.history['loss'])
encoder = Model(inputs=autoencoder.model.input,
outputs=autoencoder.model.get_layer(name='embedding_layer').output)
hidden_feat = encoder.predict(fps)
kmeans_cluster = KMeans(n_clusters=2)
kmeans_cluster.build_model()
kmeans_cluster.train_model(hidden_feat, true_labels=true_label_biclass)
fig, ax=plt.subplots(1,1, figsize=(16,9))
utilities.tsne_2d_visulization(input_feat=hidden_feat,
plot_labels=true_label_biclass,
labels=['Non-Biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend()
plt.savefig('2.two_step_train.pdf', bbox_inches='tight')
```
## 3. Simultaneous training: Autoencoder + Clustering
```
from dce import DCE
autoencoder_dims = [167, 120, 60]
cl_weight = 0.5
dce = DCE(autoencoder_dims, n_clusters=2, update_interval=20)
dce.build_model(norm=False)
loss = dce.train_model(
data_train=fps,
clustering_loss='kld', decoder_loss='binary_crossentropy',
verbose=0,clustering_loss_weight=cl_weight)
q, _ = dce.model.predict(fps)
y_pred = q.argmax(1)
encoder = Model(inputs=dce.model.input,
outputs=dce.model.get_layer(name='embedding_layer').output)
hidden_feats = encoder.predict(fps)
plt.plot(loss[0],label='Total test loss')
plt.plot(loss[1],label='Clustering test loss')
plt.plot(loss[2],label='Decoder test loss')
plt.legend()
plt.title('clustering weight: ' + str(cl_weight))
Cluster.true_label_metrics(true_label_biclass,y_pred,print_metric=False)
fig, ax=plt.subplots(1,1, figsize=(16,9))
utilities.tsne_2d_visulization(input_feat=hidden_feats,
plot_labels=true_label_biclass,
labels=['Non-Biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend()
plt.savefig('3.co_train_clustering.pdf', bbox_inches='tight')
```
## 4. Simultaneous training: Autoencoder + Classification
```
class_names = np.array(["Non-biodegradable", "Biodegradable"])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
fps, true_label_biclass, test_size=0.25, random_state=42)
cl_weight = 0.5
autoencoder_dims = [167, 120, 60]
dce = DCE(autoencoder_dims, n_clusters=2, update_interval=20)
dce.build_model(norm=False)
train_loss, test_loss = dce.train_model(data_train=X_train, labels_train=y_train,
data_test=X_test, labels_test=y_test,
clustering_loss='kld', decoder_loss='binary_crossentropy',
verbose=0,clustering_loss_weight=cl_weight)
q, _ = dce.model.predict(X_train)
train_y_pred = q.argmax(1)
q, _ = dce.model.predict(X_test)
test_y_pred = q.argmax(1)
encoder = Model(inputs=dce.model.input,
outputs=dce.model.get_layer(name='embedding_layer').output)
train_hidden_feats = encoder.predict(X_train)
test_hidden_feats = encoder.predict(X_test)
plt.plot(train_loss[0],label='Total train loss')
plt.plot(train_loss[1],label='Clustering train loss')
plt.plot(train_loss[2],label='Decoder train loss')
plt.plot(test_loss[0],label='Total test loss')
plt.plot(test_loss[1],label='Clustering test loss')
plt.plot(test_loss[2],label='Decoder test loss')
plt.legend()
plt.title('clustering weight: ' + str(cl_weight))
print('Train Score:')
Cluster.true_label_metrics(y_train,train_y_pred,print_metric=True)
print('Test Score:')
Cluster.true_label_metrics(y_test,test_y_pred,print_metric=True)
fig, ax =plt.subplots(figsize=(16,9))
utilities.tsne_2d_visulization_test_and_train(
train_feat=train_hidden_feats,
train_labels=y_train,
test_feat=test_hidden_feats,
test_labels=y_test,
labels=['Non-biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend(fontsize=20)
plt.savefig('4.co_train_classifying.pdf', bbox_inches='tight')
```
<a href="https://colab.research.google.com/github/Leonas2000/lil-Beethoven/blob/main/Lil'_Beethoven.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
#@title Imports
import numpy as np
import sys
import os
from scipy.io import wavfile
!pip install python_speech_features
from python_speech_features import mfcc
import librosa
import matplotlib.pyplot as plt
from scipy.io import wavfile,savemat
import os.path
# Import keras main libraries
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Activation
from keras.regularizers import l2
from keras import callbacks
from keras.callbacks import History, ModelCheckpoint, EarlyStopping
#@title Preprocessing / CreateList
# Read args
Label_text_source = "/content/drive/MyDrive/Lil_Beethoven/Input/Txt/";
Output_dir = "/content/drive/MyDrive/Lil_Beethoven/Output/";
f = open(Output_dir + 'train.lst','w')
for filename in os.listdir(Label_text_source):
f.write(filename + '\n')
f.close()
```
Split the `train.lst` file into `train_tr.lst`, `train_va.lst` and `test.lst`, then run the cell below once for each split, pointing `source_List` at the corresponding file (`_tr`, `_va` and `test`).
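The split itself isn't shown in the notebook; below is a minimal sketch of one way to do it. The 80/10/10 ratios and the fixed shuffle seed are assumptions, not values taken from the notebook.

```python
import random

def split_list_file(src, tr_ratio=0.8, va_ratio=0.1, seed=0):
    """Split a .lst file of filenames into train/validation/test lists.

    The 80/10/10 ratios and the seed are assumptions for illustration.
    """
    with open(src) as f:
        names = [line.strip() for line in f if line.strip()]
    random.Random(seed).shuffle(names)
    n_tr = int(len(names) * tr_ratio)
    n_va = int(len(names) * va_ratio)
    splits = {
        src.replace('.lst', '_tr.lst'): names[:n_tr],
        src.replace('.lst', '_va.lst'): names[n_tr:n_tr + n_va],
        src.replace('train.lst', 'test.lst'): names[n_tr + n_va:],
    }
    for path, subset in splits.items():
        with open(path, 'w') as f:
            f.write('\n'.join(subset) + '\n')
    return splits
```

After running this on `Output/train.lst`, the three resulting paths can be plugged into `source_List` one at a time.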
```
#@title Preprocessing / WAV2mat_batch
# Parameters
hop_length_in = 512
n_bins_in = 252
bins_octaves_in = 36
win_step = 0.01
number_notes = 88
num_cep_def = 40
num_filt_def = 40
length_per_file = 4000000
# Read args
source_List = "/content/drive/MyDrive/Lil_Beethoven/Output/test.lst";
source_WAV = "/content/drive/MyDrive/Lil_Beethoven/Input/Wav/";
source_Txt = "/content/drive/MyDrive/Lil_Beethoven/Input/Txt/";
out_mat = "/content/drive/MyDrive/Lil_Beethoven/Output/";
# Output .npz
train2mat = []
labels2mat = []
contador = 0
# Get the name of the list
source_list_split = source_List.split('.')
source_list_split = source_list_split[0].split('/')
list_name = source_list_split[-1]
# Open the list
file_List = open( source_List , "r")
# Iterate on every file
for filename in file_List:
filename_split = filename.split('.')
#### CQT feature extraction ####
# Read the raw samples from the wav file and get the sampling rate
sampling_freq, stereo_vector = wavfile.read(source_WAV + filename_split[0] + '.wav')
win_len = 512/float(sampling_freq)
#plt.imshow( np.array(np.absolute(cqt_feat)))
#plt.show()
# Transform to mono
mono_vector = np.mean(stereo_vector, axis = 1)
# Extract mfcc_features
cqt_feat = np.absolute(librosa.cqt(mono_vector, sampling_freq, hop_length=hop_length_in,n_bins=n_bins_in,bins_per_octave=bins_octaves_in)).transpose()
#### LABELING ####
# Number of frames in the file
number_Frames = np.max( cqt_feat.shape[0])
# Aux_Vector of times
vector_aux = np.arange(1, number_Frames + 1)*win_len
# Binary labels - we need multiple labels at the same time to represent the chords
labels = np.zeros((number_Frames, number_notes))
# Open the align txt labels
file = open( source_Txt + filename_split[0] + '.txt' , "r")
#f = open(out_mat + filename_split[0] + 'label.lst','w')
# Loop over all the lines
for line in file:
line_split = line.split()
if line_split[0] == "OnsetTime":
print ("Preprocessing operations . . .")
else:
# Get the values from the text
init_range, fin_range, pitch = float(line_split[0]), float(line_split[1]), int(line_split[2])
# Pitch move to 0-87 range
pitch = pitch - 21;
# Get the range indexes
index_min = np.where(vector_aux >= init_range)
index_max = np.where(vector_aux - 0.01 > int((fin_range)*100)/float(100))
labels[index_min[0][0]:index_max[0][0],pitch] = 1
#If you want to save the labels to a txt file
"""for i in range( number_Frames):
for j in range( 88 ):
if labels[i][j] == 1:
f.write('%f' %vector_aux[i] + ' - ' + '%d\n' %j)
f.close()
"""
file.close()
"""
plt.figure()
plt.imshow( np.array(labels.transpose()),aspect='auto')
plt.figure()
plt.imshow( np.array(np.absolute(cqt_feat)), aspect='auto')
plt.show()
"""
while (len(train2mat) + len(cqt_feat)) >= length_per_file:
size_to_add = length_per_file - len(train2mat)
# Append to add to npz
train2mat.extend(cqt_feat[0:size_to_add,:])
# Append the labels
labels2mat.extend(labels[0:size_to_add,:])
train2mat = np.array(train2mat)
labels2mat = np.array(labels2mat)
# Plotting stuff
print (" Shape of MFCC is " + str(train2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
print (" Shape of Labels is " + str(labels2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
contador = contador + 1;
train2mat = []
labels2mat = []
cqt_feat = cqt_feat[size_to_add:,:]
labels = labels[size_to_add:,:]
if len(cqt_feat) == length_per_file:
# Append to add to npz
train2mat.extend(cqt_feat)
# Append the labels
labels2mat.extend(labels)
train2mat = np.array(train2mat)
labels2mat = np.array(labels2mat)
# Plotting stuff
print (" Shape of MFCC is " + str(train2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
print (" Shape of Labels is " + str(labels2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
contador = contador + 1;
train2mat = []
labels2mat = []
elif len(cqt_feat) > 0:
# Append to add to npz
train2mat.extend(cqt_feat)
# Append the labels
labels2mat.extend(labels)
train2mat = np.array(train2mat)
labels2mat = np.array(labels2mat)
"""
plt.figure()
plt.imshow( np.array(labels2mat.transpose()),aspect='auto')
plt.colorbar()
plt.figure()
plt.imshow( np.array(train2mat.transpose()), aspect='auto')
plt.colorbar()
plt.show()
"""
# Plotting stuff
print (" Shape of MFCC is " + str(train2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
print (" Shape of Labels is " + str(labels2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
#out_mat = "/content/drive/MyDrive/Lil_Beethoven/Output/";
#np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
#np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
#@title Preprocessing / mat2norm_batch
# Read args
source = "/content/drive/MyDrive/Lil_Beethoven/Output/"
train_folder = "train_tr/"
val_folder = "train_va/"
test_folder = "test/"
mean_X = []
min_X = []
max_X = []
print ("Get max - min ")
# Iterate on every file
for filename in os.listdir(source + train_folder):
if "tr_X" in filename:
X_train = np.load(source + train_folder + filename)
max_X.append(X_train.max())
min_X.append(X_train.min())
max_train = max(max_X)
min_train = min(min_X)
print ("Get mean")
total_length = 0
# Iterate on every file
for filename in os.listdir(source + train_folder):
if "tr_X" in filename:
X_train = np.load(source + train_folder + filename)
X_train_norm = (X_train - min_train)/(max_train - min_train)
# Compute the mean
mean_X.append(np.sum(X_train_norm, axis = 0))
total_length = total_length + len(X_train_norm)
train_mean = np.sum(mean_X, axis = 0)/float(total_length)
print ("Normalize ")
# Iterate on every file
for filename in os.listdir(source + train_folder):
filename_split = filename.split('.')
if "tr_X" in filename:
X_train = np.load(source + train_folder + filename)
X_train_norm = (X_train - min_train)/(max_train - min_train)
X_train_norm = X_train_norm - train_mean
print ("X_train file : " + filename)
np.save('{}'.format(source + train_folder + filename_split[0] ), X_train_norm)
for filename in os.listdir(source + val_folder):
filename_split = filename.split('.')
if "va_X" in filename:
X_val = np.load(source + val_folder+ filename)
X_val_norm = (X_val - min_train)/(max_train - min_train)
X_val_norm = X_val_norm - train_mean
print ("X_val file : " + filename)
np.save('{}'.format(source + val_folder + filename_split[0]), X_val_norm)
for filename in os.listdir(source + test_folder):
filename_split = filename.split('.')
if "_X" in filename:
X_test = np.load(source + test_folder + filename)
X_test_norm = (X_test - min_train)/(max_train - min_train)
X_test_norm = X_test_norm - train_mean
print ("X_test file : " + filename)
np.save('{}'.format(source + test_folder + filename_split[0] ), X_test_norm)
print (train_mean)
print (min_train)
print (max_train)
!cp /content/drive/MyDrive/Lil_Beethoven/Output/test/*.npy /content/drive/MyDrive/Lil_Beethoven/Output2/
!cp /content/drive/MyDrive/Lil_Beethoven/Output/train_tr/*.npy /content/drive/MyDrive/Lil_Beethoven/Output2/
!cp /content/drive/MyDrive/Lil_Beethoven/Output/train_va/*.npy /content/drive/MyDrive/Lil_Beethoven/Output2/
#@title Preprocessing / minidataset
# Read args
source = "/content/drive/MyDrive/Lil_Beethoven/Output2/";
# Iterate on every file
for filename in os.listdir(source):
if "tr_X" in filename:
X_train = np.load(source + filename)
print ("X_train file : " + filename)
elif "va_X" in filename:
X_val = np.load(source + filename)
print ("X_val file : " + filename)
elif "_X" in filename:
X_test = np.load(source + filename)
print ("X_test file : " + filename)
elif "tr_y" in filename:
y_tr = np.load(source + filename)
print ("X_val file : " + filename)
elif "va_y" in filename:
y_va = np.load(source + filename)
print ("X_test file : " + filename)
X_train = X_train[1:5000,:]
X_val = X_val[1:5000,:]
y_tr = y_tr[1:5000,:]
y_va = y_va[1:5000,:]
# Normalization
max_train = X_train.max()
min_train = X_train.min()
max_val = X_val.max()
min_val = X_val.min()
max_test = X_test.max()
min_test = X_test.min()
"""max_Global = max(max_train, max_val, max_test)
min_Global = min(min_train, min_val, min_test)
X_val_norm = (X_val - min_Global)/(max_Global - min_Global)
X_test_norm = (X_test - min_Global)/(max_Global - min_Global)
X_train_norm = (X_train - min_Global)/(max_Global - min_Global)"""
X_val_norm = (X_val - min_train)/(max_train - min_train)
X_test_norm = (X_test - min_train)/(max_train - min_train)
X_train_norm = (X_train - min_train)/(max_train - min_train)
# Compute the mean
train_mean = np.mean(X_train_norm, axis = 0)
# Substract it
X_train_norm = X_train_norm - train_mean
X_val_norm = X_val_norm - train_mean
X_test_norm = X_test_norm - train_mean
# Get the name
np.save('{}X_train_norm'.format(source + 'normalized/' ), X_train_norm)
np.save('{}X_val_norm'.format(source + 'normalized/' ), X_val_norm)
np.save('{}y_train_norm'.format(source + 'normalized/' ), y_tr)
np.save('{}y_val_norm'.format(source + 'normalized/' ), y_va)
#@title Train 1
'''###### TRAIN 1: DNN - 3 layers - 256 units per layer ######'''
# We need to set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 100
input_size = 252
number_units = 256
number_layers = 3
number_classes = 88
best_accuracy = 0
contador_bad = 0
#Arg inputs
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
print ('Build model...')
model = Sequential()
history = History()
print ('Load validation data...')
X_val = np.load(data_directory + "train_va/" + str(0) + "train_va_X.npy" )
y_val = np.load(data_directory + "train_va/" + str(0) + "train_va_y.npy" )
# Count the number of files in the training folder
num_tr_batches = len([name for name in os.listdir(data_directory + "train_tr/")])/2
num_tr_batches = int(num_tr_batches)
print ('Loading all data')
for i in range(num_tr_batches):
print ("Batching..." + str(i) + "train_tr_X.npy")
X_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_X.npy" ))
y_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_y.npy" ))
if i == 0:
X = X_train
y = y_train
else:
X = np.concatenate((X,X_train), axis = 0)
y = np.concatenate((y,y_train), axis = 0)
print (X.shape)
print ("Adding 1st layer of {} units".format(number_units) )
model.add(Dense(number_units, input_shape=(input_size,), kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.2))
for i in range(number_layers-1):
print ("Adding %d" % (i+2) + "th layer of %d" % number_units + " units")
model.add(Dense(number_units, kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.2))
print (" Adding classification layer")
model.add(Dense(number_classes, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error'])
checkpointer = ModelCheckpoint(filepath= weights_dir + "weights.hdf5", verbose=1, save_best_only=False)
early = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto')
training_log = open(weights_dir + "Training.log", "w")
print ('Train . . .')
# Fit the model on the full training set
save = model.fit(X, y,batch_size=mini_batch_size,epochs = num_epochs,validation_data=(X_val, y_val),verbose=1,callbacks=[checkpointer,early])
training_log.write(str(save.history) + "\n")
training_log.close()
#@title train load (don't need now)
'''###### TRAIN 1: DNN - 3 layers - 256 units per layer ######'''
# We need to set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 100
input_size = 40
number_units = 256
number_layers = 3
number_classes = 88
best_accuracy = 0
#Arg inputs
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
print ('Load model...' )
model = load_model(weights_dir + "weights.hdf5")
starting_epoch = 13
print ('Load validation data...')
X_val = np.load(data_directory + "train_va/" + str(0) + "train_va_X.npy" )
y_val = np.load(data_directory + "train_va/" + str(0) + "train_va_y.npy" )
# Count the number of files in the training folder
num_tr_batches = len([name for name in os.listdir(data_directory + "train_tr/")])/2
num_tr_batches = int(num_tr_batches)
print ('Loading all data')
for i in range(num_tr_batches):
print ("Batching..." + str(i) + "train_tr_X.npy")
X_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_X.npy" ))
y_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_y.npy" ))
if i == 0:
X = X_train
y = y_train
else:
X = np.concatenate((X,X_train), axis = 0)
y = np.concatenate((y,y_train), axis = 0)
checkpointer = ModelCheckpoint(filepath= weights_dir + "weights.hdf5", verbose=1, save_best_only=False)
early = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto')
training_log = open(weights_dir + "Training.log", "w")
print ('Train . . .')
# Fit the model on the full training set
save = model.fit(X, y,batch_size=mini_batch_size,epochs = num_epochs,validation_data=(X_val, y_val),verbose=1,callbacks=[checkpointer,early])
training_log.write(str(save.history) + "\n")
training_log.close()
#@title Text 2 text
'''###### TRAIN 1: DNN - 3 layers - 256 units per layer ######'''
# We need to set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 50
input_size = 252
number_units = 256
number_layers = 3
number_classes = 88
size_samples = 100
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/test/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
X = []
y = []
num_test_batches = len([name for name in os.listdir(data_directory )])/2
num_test_batches = int(num_test_batches)
print ('Loading test data')
for i in range(num_test_batches):
print ("Batching..." + str(i) + "test_X.npy")
X_test = np.array(np.load(data_directory + str(i) + "test_X.npy" ))
y_test = np.array(np.load(data_directory + str(i) + "test_y.npy" ))
if i == 0:
X = X_test
y = y_test
else:
X = np.concatenate((X,X_test), axis = 0)
y = np.concatenate((y,y_test), axis = 0)
# Load the model
model = load_model(weights_dir + "weights.hdf5")
TP = 0
FP = 0
FN = 0
print ("Predicting model. . . ")
predictions = model.predict(X, batch_size=mini_batch_size, verbose = 1)
predictions = np.array(predictions).round()
predictions[predictions > 1] = 1
np.save('{}predictions'.format(weights_dir), predictions)
print ("\nCalculating accuracy. . .")
TP = np.count_nonzero(np.logical_and( predictions == 1, y == 1 ))
FN = np.count_nonzero(np.logical_and( predictions == 0, y == 1 ))
FP = np.count_nonzero(np.logical_and( predictions == 1, y == 0 ))
print("TP:" + str(TP), "FP:" + str(FP), "FN:" + str(FN))
if (TP + FN) > 0 and (TP +FP) > 0:
R = TP/float(TP + FN)
P = TP/float(TP + FP)
A = 100*TP/float(TP + FP + FN)
if P == 0 and R == 0:
F = 0
else:
F = 100*2*P*R/(P + R)
else:
A = 0
F = 0
R = 0
P = 0
print ('\n F-measure pre-processed: ')
print (F)
print ('\n Accuracy pre-processed: ')
print (A)
print ("\nCleaning model . . .")
for a in range(predictions.shape[1]):
for j in range(2,predictions.shape[0]-3):
if predictions[j-1,a] == 1 and predictions[j,a] == 0 and predictions[j+1,a] == 0 and predictions[j+2,a] == 1:
predictions[j,a] = 1
predictions[j+1,a] = 1
if predictions[j-2,a] == 0 and predictions[j-1,a] == 0 and predictions[j,a] == 1 and predictions[j+1,a] == 1 and predictions[j+2,a] == 0 and predictions[j+3,a] == 0:
predictions[j,a] = 0
predictions[j+1,a] = 0
if predictions[j-1,a] == 0 and predictions[j,a] == 1 and predictions[j+1,a] == 0 and predictions[j+2,a] == 0:
predictions[j,a] = 0
if predictions[j-1,a] == 1 and predictions[j,a] == 0 and predictions[j+1,a] == 1 and predictions[j+2,a] == 1:
predictions[j,a] = 1
print ("Calculating accuracy after cleaning. . .")
np.save('{}predictions_post'.format(weights_dir), predictions)
TP = np.count_nonzero(np.logical_and( predictions == 1, y == 1 ))
FN = np.count_nonzero(np.logical_and( predictions == 0, y == 1 ))
FP = np.count_nonzero(np.logical_and( predictions == 1, y == 0 ))
if (TP + FN) > 0 and (TP +FP) > 0:
R = TP/float(TP + FN)
P = TP/float(TP + FP)
A = 100*TP/float(TP + FP + FN)
if P == 0 and R == 0:
F = 0
else:
F = 100*2*P*R/(P + R)
else:
A = 0
F = 0
R = 0
P = 0
print ('\n F-measure post-processed: ')
print (F)
print ('\n Accuracy post-processed: ')
print (A)
# NOTE: R, P, A and F were recomputed after the cleaning step, so the "pre"
# lines below actually repeat the post-cleaning metrics.
main_data = open(weights_dir + "Accuracy.lst", "w")
main_data.write("R-pre = " + str("%.6f" % R) + "\n")
main_data.write("P-pre = " + str("%.6f" % P) + "\n")
main_data.write("A-pre = " + str("%.6f" % A) + "\n")
main_data.write("F-pre = " + str("%.6f" % F) + "\n")
main_data.write("R-post = " + str("%.6f" % R) + "\n")
main_data.write("P-post = " + str("%.6f" % P) + "\n")
main_data.write("A-post = " + str("%.6f" % A) + "\n")
main_data.write("F-post = " + str("%.6f" % F) + "\n")
main_data.close()
#@title Plot result
'''###### TRAIN 1: DNN - 3 layers - 256 units per layer ######'''
# We need to set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 50
input_size = 252
number_units = 256
number_layers = 3
number_classes = 88
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/test/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
predictions_draw = []
y_draw = []
print ('Predict . . . ')
num_test_batches = len([name for name in os.listdir(data_directory)])/2
num_test_batches = int(num_test_batches)
y = []
print ('Loading test data')
for i in range(num_test_batches):
print ("Batching..." + str(i) + "test_X.npy")
y_test = np.array(np.load(data_directory + str(i) + "test_y.npy" ))
if i == 0:
y = y_test
else:
y = np.concatenate((y,y_test), axis = 0)
predictions = np.load(weights_dir + "predictions_post.npy" )
plt.figure()
plt.subplot(211)
plt.imshow(predictions.transpose(),cmap='Greys',aspect='auto')
plt.subplot(212)
plt.imshow(y.transpose(),cmap='Greys',aspect='auto')
plt.show()
```
# Neural networks with PyTorch
Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general that's very cumbersome and difficult to implement. PyTorch has a module `nn` that provides an efficient way to build large neural networks.
```
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() only works in older PyTorch versions
print(type(images))
print(images.shape)
print(labels.shape)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
## Solution
def activation(x):
return 1/(1+torch.exp(-x))
# Flatten the input images
inputs = images.view(images.shape[0], -1)
# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)
h = activation(torch.mm(inputs, w1) + b1)
out = torch.mm(h, w2) + b2
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This represents an untrained network; it hasn't seen any data yet, so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum up to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
```
## Solution
def softmax(x):
return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)

    def forward(self, x):
        # Hidden layer with sigmoid activation
        x = F.sigmoid(self.hidden(x))
        # Output layer with softmax activation
        x = F.softmax(self.output(x), dim=1)

        return x
```
### Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
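For reference, the activation functions mentioned above can be written out directly. A minimal sketch in plain Python (illustrative only; in practice you'd use the `torch` versions, which operate elementwise on tensors):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))  # squashes input into (0, 1)

def tanh(x):
    return math.tanh(x)            # squashes input into (-1, 1)

def relu(x):
    return max(0.0, x)             # clamps negative inputs to 0

print([relu(v) for v in (-2.0, 0.0, 2.0)])  # [0.0, 0.0, 2.0]
print(sigmoid(0.0))                         # 0.5
```

Note that all three are non-linear, which is the property that lets stacked layers approximate non-linear functions.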
### Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
It's good practice to name your layers by their type, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.
```
## Solution

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Defining the layers, 128, 64, 10 units each
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        # Output layer, 10 units - one for each digit
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        ''' Forward pass through the network, returns the output logits '''
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.softmax(x, dim=1)

        return x

model = Network()
model
```
### Initializing weights and biases
The weights and biases are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined; you can get them with `model.fc1.weight` for instance.
```
print(model.fc1.weight)
print(model.fc1.bias)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. That's because we haven't trained it yet; all the weights are random!
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(input_size, hidden_sizes[0])),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
    ('relu2', nn.ReLU()),
    ('output', nn.Linear(hidden_sizes[1], output_size)),
    ('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer index or by name:
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
# Learning MNIST & Fashion
In this exercise you will design a classifier for the very simple but very popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/), a classic dataset in computer vision and one of the first real-world problems solved by neural networks.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import to_categorical
```
Keras provides access to a few simple datasets for convenience in the `keras.datasets` module. Here we will load MNIST, a standard benchmark dataset for image classification. This will download the dataset if you haven't run this code before.
```
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
```
MNIST is a simple dataset of grayscale hand-written digits, 28x28 pixels big. So there are 10 classes in the dataset corresponding to the digits 0-9. We can get a sense of what this dataset is like (always a good idea) by looking at some random samples from the training data:
```
plt.imshow(X_train[np.random.randint(len(X_train))], cmap='gray')
```
We need to do a little preprocessing of the dataset. Firstly, we will flatten the 28x28 images to a 784-dimensional vector. This is because our first model below does not care about the spatial dimensions, only the pixel values. The images are represented by numpy arrays of integers between 0 and 255. Since this is a fixed range, we should scale the values down to be from 0 to 1. This normalization simplifies things and is usually a good idea, especially since weights are usually initialized randomly near zero.
Read the code below and make sure you understand what we are doing to the data.
```
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
y_train_cat = to_categorical(y_train, 10)
y_test_cat = to_categorical(y_test, 10)
```
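`to_categorical` one-hot encodes the integer labels: each label becomes a length-10 vector that is all zeros except for a 1 at the label's index. The transformation it performs can be sketched by hand (purely illustrative; Keras does this for you):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Row i of the identity matrix has a 1 only in column i,
    # so indexing np.eye with the labels selects the right rows.
    return np.eye(num_classes)[labels]

y = np.array([3, 0, 9])
encoded = one_hot(y, 10)
print(encoded.shape)  # (3, 10)
```

This is why the model's softmax output, a probability per class, can be compared directly against `y_train_cat` during training.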
## Exercise 1 - design a fully connected network for MNIST
Build a fully connected network. It is up to you what the structure of the model will be, but keep in mind that this problem is much higher dimensional than previous problems we have worked on. This is your first chance to design a model on real data! See if you can get 90% accuracy or better.
Here are some of the things you will need to decide about your model:
* number of layers
* activation function
* number of dimensions in each layer
* batch size
* number of epochs
* learning rate
Suggestions:
* You can pass the argument `verbose=2` to the `model.fit` method to quiet the output a bit, which will speed up the training as well.
* You already divided the training and test data, but since you will be trying a series of experiments and changing your model, it is good practice to set aside a **validation** dataset for you to use to track your model improvements. You should only use the test data after you believe you have a good model to evaluate the final performance. Keras can create a validation set for you if you pass the `validation_split=0.1` argument to `model.fit` to tell Keras to hold out 10% of the training data to use as validation.
* You can use the `plot_loss` if you find it useful in setting your learning rate etc. during your experiments.
* You can refer to previous notebooks and the [documentation](http://keras.io/models/sequential/).
If you want to talk over design decisions, feel free to ask.
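If you'd rather manage the validation set yourself instead of relying on `validation_split`, a shuffled hold-out split can be sketched like this (the function name, fraction, and seed are just for illustration; the dummy arrays stand in for `X_train` and `y_train`):

```python
import numpy as np

def train_val_split(X, y, val_fraction=0.1, seed=0):
    # Shuffle the indices, then hold out the first val_fraction for validation.
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], y[train_idx], X[val_idx], y[val_idx]

X_tr, y_tr, X_val, y_val = train_val_split(np.zeros((100, 784)), np.zeros(100))
print(X_tr.shape, X_val.shape)  # (90, 784) (10, 784)
```

You would then pass `validation_data=(X_val, y_val)` to `model.fit` instead of `validation_split`.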
```
def plot_loss(hist):
    loss = hist.history['loss']
    plt.plot(range(len(loss)), loss)
    plt.title('loss')
    plt.xlabel('epochs')

# Final test evaluation
score = model.evaluate(X_test, y_test_cat, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
## Exercise 2: Fashion-MNIST
Repeat the classification exercise using the Fashion-MNIST dataset from Zalando Research:
https://github.com/zalandoresearch/fashion-mnist
This dataset has the same specs as MNIST but it's designed to be more indicative of a real image classification problem. It contains 10 classes of clothing items:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
Do you get to similar performance?
# Code Style
In this chapter, we'll discuss a number of important considerations to make when styling your code. If you think of writing code like writing an essay, considering code style improves your code the same way editing an essay improves your essay. Often, considering code style is referred to as making our code *pythonic*, meaning that it adheres to the foundational principles of the Python programming language.
Learning how to consider and improve your code style up front has a number of benefits. First, your code will be more user-friendly for anyone reading it. This includes you, as you come back to and edit your code over time. Second, while considering code style and being pythonic is a bit more work up front for developers (the people writing the code), it pays off in the long run by making your code easier to maintain. Third, by learning this now, early in your Python journey, you avoid falling into bad habits. It's much easier to learn something and implement it than it is to unlearn bad habits.
Note that what we're discussing here will not affect the functionality of your code. Unlike *programmatic errors* (i.e. errors and exceptions that require debugging for your code to execute properly), *stylistic errors* do not affect the functionality of your code. However, *stylistic errors* are considered bad style and are to be avoided, as they make your code harder to understand.
## Style Guides
Programming languages often have style guides, which include a set of conventions for how to write good code. While many of the concepts we'll cover here are applicable to other programming languages (e.g., being consistent), some of the specifics (e.g., variable naming conventions) are more specific to programming in Python.
<div class="alert alert-success">
Coding style refers to a set of conventions for how to write good code.
</div>
### The Zen of Python
To explain the programming philosophy in Python, we'll first introduce what's known as *The Zen of Python*, which lays out the design principles of the individuals who developed the Python programming language. *The Zen of Python* is included as an easter egg in Python, so if you `import this` you're able to read its contents:
```
import this
```
While we won't discuss each of these tenets, we'll highlight two that are particularly pertinent to the considerations in this chapter. Specifically, **beautiful is better than ugly** and **readability counts** together indicate that how one's code looks matters. Python prioritizes readability in its syntax (relative to other programming languages) and adheres to the idea that "code is more often read than it is written." As such, those who program in Python are encouraged to consider the beauty and readability of their code. To do so, we'll cover a handful of considerations here.
### Code Consistency
For very understandable and good reasons, beginner programmers often focus on getting their code to execute without throwing an error. In this process, however, they often forget about code style. While we'll discuss specific considerations to write well-styled python code in this chapter, the most important overarching concept is that **consistency is the goal**. Rules help us achieve consistency, and so we'll discuss a handful of rules and guidelines to help you write easy-to-read code with consistent code style. However, in doing so, we want you to keep the idea of consistency in mind, as programming is (at least partly) subjective. Since it's easier to recognize & read consistent style, do your best to follow the style guidelines presented in this chapter and once you pick a way to style your code, it's best to use that consistently across your code.
### PEP8
Python Enhancement Proposals (PEPs) are proposals for how something should be or how something should work in the Python programming language. These are written by the people responsible for writing and maintaining the Python programming language, and PEPs are voted on before incorporation. **[PEP8](https://www.python.org/dev/peps/pep-0008/)**, specifically, is an accepted proposal that outlines the style guidelines for the Python programming language.
<div class="alert alert-info">
<b><a href="https://www.python.org/dev/peps/pep-0008/">PEP8</a></b> is an accepted proposal that outlines the style guide for Python.
</div>
The general concepts laid out in PEP8 (and in *The Zen of Python*) are as follows:
- Be *explicit & clear*: prioritize readability over cleverness
- There should be a *specific, standard way to do things*: use them
- Coding style rules are *guidelines*: they are designed to help the coder, but are not laws
#### PEP8: Structure
Throughout this section, we'll highlight a PEP8 guideline, provide an example of what to avoid, and then demonstrate an improvement on the error. Note that for each "what to avoid" the code *will* execute without error. This is because we're discussing *stylistic* rather than *programmatic* errors here.
##### Blank Lines
- Use 2 blank lines between functions & classes and 1 between methods
- Use 1 blank line between segments to indicate logical structure
This allows you to, at a glance, identify what pieces of code are there. Using blank lines to separate out components in your code and your code's overall structure improves its readability.
**What to avoid**
In this example of what to avoid, there are no blank lines between segments within your code, making it more difficult to read. Note that if two functions were provided here, there would be 2 blank lines between the different function definitions.
```
def my_func():
    my_nums = '123'
    output = ''
    for num in my_nums:
        output += str(int(num) + 1)
    return output
```
**How to improve**
To improve the above example, we can use what you see here, with variable definition being separated out from the `for` loop, being separated from the `return` statement. This code helps separate out the logical structures within a function. Note that we do *not* add a blank line between each line of code, as that would *decrease* the readability of the code.
```
# Goodness
def my_func():
    my_nums = '123'
    output = ''

    for num in my_nums:
        output += str(int(num) + 1)

    return output
```
##### PEP8: Indentation
Use spaces to indicate indentation levels, with each level defined as 4 spaces. Programming languages differ on the specifics of what constitutes a "tab," but Python has settled on a tab being equivalent to 4 spaces. When you hit "tab" on your keyboard within a Jupyter notebook, for example, the 4 spaces convention is implemented for you automatically, so you may not have even realized this convention before now!
**What to avoid**
Here, you'll note that, while the `print()` statement is indented, only *two* spaces are used. Jupyter will alert you to this by making the word `print` red, rather than its typical green.
```
if True:
  print('Words.')
```
**How to improve**
Conversely, here we see the accepted four spaces for a tab/indentation being utilized. Again, remember that the functionality of the code in this example is equivalent to that above; only the style has changed.
```
if True:
    print('Words.')
```
##### PEP8: Spacing
- Put one (and only one) space between each element
- Index and assignment don't have a space between opening & closing '()' or '[]'
**What to avoid**
Building on the above, spacing within and surrounding your code should be considered. Here, we see that spaces are missing around operators in the first line of code, whereas the second line has too many spaces around the assignment operator. We also see that there are unnecessary spaces around the square brackets of the list in line two, and spaces after each comma are missing in that same line of code. Finally, in the third line of code there is an unnecessary space between `my_list` and the square bracket being used for indexing.
```
my_var=1+2==3
my_list  =  [ 1,2,3,4 ]
el = my_list [1]
```
**How to improve**
The above spacing issues have all been resolved below:
```
my_var = 1 + 2 == 3
my_list = [1, 2, 3, 4]
el = my_list[1]
```
##### PEP8: Line Length
- PEP8 recommends that each line be at most 79 characters long
Note that this specification is somewhat historical, as computers used to require this. As such, there are tools and development environments that will help ensure that no single line of code exceeds 79 characters. However, in Jupyter notebooks, the general guideline "avoid lengthy lines of code or comments" can be used, as super long lines are hard to read at a glance.
**Multi-line**
To achieve this, know that you can always separate lines of code easily after a comma. In Jupyter notebooks, if you hit return/enter on your keyboard after a comma, your code will be aligned appropriately. For example below you see that after the comma in the first line of code, the `6` is automatically aligned with the `1` from the line above. This visually makes it clear that all of the integers are part of the same list `my_long_list`. Using multiple lines to make your code easier to read is a great habit to get into.
```
my_long_list = [1, 2, 3, 4, 5,
                6, 7, 8, 9, 10]
```
Further, note that you can explicitly state that the code on the following line is a continuation of the first line of code with a backslash (`\`) at the end of a line, as exemplified here:
```
my_string = 'Python is ' + \
            'a pretty great language.'
```
**One Statement Per Line**
While on the topic of line length and readable code, note that while you *can* often condense multiple statements into one line of code, you usually shouldn't, as it makes it harder to read.
**What to avoid**
For example, for loops *can* syntactically be specified on a single line, as you see here:
```
for i in [1, 2, 3]: print(i**2 + i%2)
```
**How to Improve**
However, the code above is harder to read at a glance. Instead, what is being looped over should go on the first line, with the code being executed contained in an indented block on the lines underneath the `for` statement, as this is easier to read than the above example:
```
for i in [1, 2, 3]:
    print(i**2 + i%2)
```
##### PEP8: Imports
- Import one module per line
- Avoid `*` imports
- Use the import order: standard library; 3rd party packages; local/custom code
**What to avoid**
While you may still be learning which packages are part of the standard library and which are third party packages, this will become more second nature over time. And, we haven't yet discussed local or custom code, but this includes functions/classes/code you've written and stored in `.py` files. This should be imported last.
In this example here, there are a number of issues! First, `numpy` is a third-party package, while `os` and `sys` are part of the standard library, so the order should be flipped. Second, `*` imports are to be avoided, as it would be unclear in any resulting code which functionality came from the `numpy` package. Third, `os` and `sys` should be imported on separate lines to be most clear.
```
from numpy import *
import os, sys
```
**How to Improve**
The above issues have been resolved in this set of imports:
```
import os
import sys
import numpy as np
```
##### PEP8: Naming
- Use descriptive names for all modules, variables, functions and classes, that are longer than 1 character
**What to avoid**
Here, single character, non-descriptive names are used.
```
a = 12
b = 24
```
**How to Improve**
Instead, Python encourages object names that describe what is stored in the object or what the object is or does.
This is also important when you want to change an object name after the fact. If you were to "Find + Replace All" on the letter `a` that would change every single a in your code. However, if you "Find + Replace All" for `n_filters`, this would likely only change the places in your code you actually intended to replace.
```
n_filters = 12
n_freqs = 24
```
**Naming Style**
- CapWords (leading capitals, no separation) for Classes
- snake_case (all lowercase, underscore separator) for variables, functions, and modules
Note: snake_case is easier to read than CapWords, so we use snake_case for the things (variables, functions) that we name more frequently.
**What to avoid**
While we've been using this convention, it's important to state it explicitly here. Pythonistas (those who program in python) expect the above conventions to be used within their code. Thus, if they see a function `MyFunc`, there will be cognitive dissonance, as CapWords is to be used for classes, not functions. The same for `my_class`; this would require the reader of this code to work harder than necessary, as snake_case is to be used for functions, variables, and modules, not classes.
```
def MyFunc():
    pass
class my_class():
    def __init__():
        pass
```
**How to Improve**
Instead, follow the guidelines above. Also, note that we've added two blank lines between the function and class definitions (to follow the guideline earlier in this chapter).
```
def my_func():
    pass


class MyClass():
    def __init__():
        pass
```
##### String Quotes
In Python, single-quoted strings and double-quoted strings are the same. Note that *PEP8 does not make a recommendation for this*. Rather, you are encouraged to be consistent: **pick a rule and stick to it.** (The author of this book is *exceptionally* bad at following this advice.)
One place, however, to choose one approach over the other is when a string contains a single or double quote character. In this case, use the quote style not included in the string to avoid backslashes in the string, as this improves readability. For example...
**What to avoid**
As you see below, you *could* use a backslash to "escape" the apostrophe within the string; however, this makes the string harder to read.
```
my_string = 'Prof\'s Project'
```
**How to Improve**
Instead, using double quotes to specify the string with the apostrophe (single quote) inside leads to more readable code, and is thus preferable.
```
my_string = "Prof's Project"
```
#### PEP8: Documentation
While documentation (including how to write docstrings and when, how, and where to include code comments) will be covered more explicitly in the next chapter, we'll discuss the style considerations for including code comments and docstrings at this point.
##### PEP8: Comments
First, out-of-date comments are worse than no comments at all. Keep your comments up-to-date. While we encourage writing comments to explain your thinking as you're writing the code, you want to be sure to re-visit your code comments during your "editing" and "improving code style" sessions to ensure that what is stated in the comments matches what is done in your code to avoid confusion for any readers of your code.
**Block comments**
Block comments are comments that are on their own line and come before the code they intend to describe. They follow these conventions:
- apply to some (or all) code that follows them
- are indented to the same level as that code
- each line of a block comment starts with a # and a single space
**What to avoid**
In the function below, while the code comment does come before the code it describes (good!), it is not at the same level of indentation as the code it describes (not good!) *and* there is no space between the pound sign/hashtag and the comment text:
```
import random


def encourage():
#help try to destress students by picking one thing from the following list using random
    statements = ["You've totally got this!","You're so close!","You're going to do great!","Remember to take breaks!","Sleep, water, and food are really important!"]
    out = random.choice(statements)
    return out


encourage()
```
**How to Improve**
Instead, here, we see improved code comment style by 1) having the block comment at the same level of indentation as the code it describes, 2) having a space between the `#` and the comment, and 3) breaking the comment onto two separate lines to avoid a too-long comment.
The code style is also further improved by considering spacing within the `statements` list *and* considering line spacing throughout the function.
```
def encourage():
    # Randomly pick from list of de-stressing statements
    # to help students as they finish the quarter.
    statements = ["You've totally got this!",
                  "You're so close!",
                  "You're going to do great!",
                  "Remember to take breaks!",
                  "Sleep, water, and food are really important!"]
    out = random.choice(statements)
    return out


encourage()
```
**Inline comments**
Inline comments are those comments on the same line as the code they're describing. These are:
- to be used sparingly
- to be separated by at least two spaces from the statement
- start with a # and a single space
**What to avoid**
For example, we'll avoid inline comments that 1) are right up against the code they describe and 2) that fail to have a space after the `#`:
```
encourage()#words of encouragement
```
**How to Improve**
Instead, we'll have two spaces after the code, and a space after the `#`:
```
encourage() # words of encouragement
```
##### PEP8: Docstrings
We'll cover docstrings in the following chapter, so for now we'll just specify that PEP8 specifies that a descriptive docstring should be written and included for all functions & classes. We'll discuss how to approach this shortly!
## Exercises
Q1. **Considering code style, which of these is best - A, B, or C?**
A)
```python
def squared(input_number):
    val = input_number
    power = 2
    output = val ** power
    return output
```
B)
```python
def squared(input_number, power=2):
    output = input_number ** power
    return output
```
C)
```python
def squared(input_number):
    val = input_number
    power = 2
    output = val ** power
    return output
```
Q2. **Which of the following uses PEP-approved spacing?**
A) `my_list=[1,2,3,4,5]`
B) `my_list = [1,2,3,4,5]`
C) `my_list = [1, 2, 3, 4, 5]`
D) `my_list=[1, 2, 3, 4, 5]`
E) `my_list = [1, 2, 3, 4, 5]`
Q3. **If you were reading code and came across the following, which of the following would you expect to be a class?**
A) `Phillies_Game`
B) `PhilliesGame`
C) `phillies_game`
D) `philliesgame`
E) `PhIlLiEsGaMe`
Q4. **If you were reading code and came across the following, which of the following would you expect to be a function or variable name?**
A) `Phillies_Game`
B) `PhilliesGame`
C) `phillies_game`
D) `philliesgame`
E) `PhIlLiEsGaMe`
Q5. **Which of the following would not cause an error in Python and would store the string *You're so close!* ?**
A) `my_string = "You're so close!"`
B) `my_string = "You"re so close!"`
C) `my_string = 'You''re so close!'`
D) `my_string = "You\\'re so close"`
E) `my_string = 'You're so close!'`
Q6. **Identify and improve all of the PEP8/Code Style violations found in the following code**:
```python
def MyFunction(input_num):
    my_list = [0,1,2,3]
    if 1 in my_list: ind = 1
    else:
        ind = 0
    qq = []
    for i in my_list [ind:]:
        qq.append(input_num/i)
    return qq
```
Q7. **Identify and improve all of the PEP8/Code Style violations found in the following code**:
```python
def ff(jj):
    oo = list(); jj = list(jj)
    for ii in jj: oo.append(str(ord(ii)))
    return '+'.join(oo)
```
# NNCP Splitter
[](https://colab.research.google.com/github/byronknoll/tensorflow-compress/blob/master/nncp-splitter.ipynb)
Made by Byron Knoll. GitHub repository: https://github.com/byronknoll/tensorflow-compress
### Description
This notebook can be used to split files that have been preprocessed by NNCP. This is for compression using [tensorflow-compress](https://colab.research.google.com/github/byronknoll/tensorflow-compress/blob/master/tensorflow-compress.ipynb). The primary use-case is to get around Colab's session time limit by processing large files in smaller parts.
This file splitting does not use the naive method of dividing the file into consecutive parts. Instead, it takes into account the batch size used in tensorflow-compress so that the same sequence of symbols will be used for compressing the split parts as for the original file.
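The exact on-disk layout NNCP and tensorflow-compress use is internal to those tools, but the idea can be sketched abstractly: treat the file as `batch_size` parallel lanes and cut every lane at the same time step, so each part presents the same per-lane symbol order as the original. A toy illustration (function names and lane layout are assumptions for the sketch, not the real format):

```python
def split_parts(data, batch_size, num_parts):
    """Split `data` so each part preserves the per-lane symbol order.

    Toy sketch: assumes the file is laid out as `batch_size` contiguous
    lanes of equal length, with the lane length dividing evenly.
    """
    lane_len = len(data) // batch_size
    lanes = [data[i * lane_len:(i + 1) * lane_len] for i in range(batch_size)]
    step = lane_len // num_parts
    # Part p holds the same time-slice [p*step, (p+1)*step) of every lane.
    return [b"".join(lane[p * step:(p + 1) * step] for lane in lanes)
            for p in range(num_parts)]


def join_parts(parts, batch_size):
    """Inverse of split_parts: reassemble the original lane layout."""
    step = len(parts[0]) // batch_size
    lanes = [b"".join(part[i * step:(i + 1) * step] for part in parts)
             for i in range(batch_size)]
    return b"".join(lanes)


original = bytes(range(96))
parts = split_parts(original, batch_size=4, num_parts=2)
print(join_parts(parts, batch_size=4) == original)  # True
```

The key property is that each part sees the same time slice of every lane, so a model compressing the parts sequentially (with checkpointing) observes the same symbol sequence per batch position as it would on the whole file.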
### Instructions
1. In tensorflow-compress, using "preprocess_only" mode, choose "nncp" preprocessor and download the result.
2. Upload the preprocessed file (named "preprocessed.dat") to this notebook, and download the split parts.
3. In tensorflow-compress, compress each split part sequentially, enabling the checkpoint option. Choose "nncp-done" as the preprocessor.
4. In tensorflow-compress, decompress each split part sequentially, enabling the checkpoint option. Choose "nncp-done" as the preprocessor.
5. Upload the decompressed parts to this notebook to reproduce the original file. The files should be named: part.0, part.1, ..., part.N. Also upload the original NNCP dictionary file (named "dictionary.words").
## Parameters
```
batch_size = 96 #@param {type:"integer"}
#@markdown >_Set this to the same value that will be used in tensorflow-compress._
mode = 'split' #@param ["split", "join"]
num_parts = 4 #@param {type:"integer"}
#@markdown >_This is the number of parts the file should be split into._
http_path = '' #@param {type:"string"}
#@markdown >_The file from this URL will be downloaded. It is recommended to use Google Drive URLs to get fast transfer speed. Use this format for Google Drive files: https://drive.google.com/uc?id= and paste the file ID at the end of the URL. You can find the file ID from the "Get Link" URL in Google Drive. You can enter multiple URLs here, space separated._
local_upload = False #@param {type:"boolean"}
#@markdown >_If enabled, you will be prompted in the "Setup Files" section to select files to upload from your local computer. You can upload multiple files. Note: the upload speed can be quite slow (use "http_path" for better transfer speeds)._
download_option = "no_download" #@param ["no_download", "local", "google_drive"]
#@markdown >_If this is set to "local", the output files will be downloaded to your computer. If set to "google_drive", they will be copied to your Google Drive account (which is significantly faster than downloading locally)._
```
## Setup
```
#@title Imports
from google.colab import files
from google.colab import drive
import math
#@title Mount Google Drive
if download_option == "google_drive":
  drive.mount('/content/gdrive')
#@title Setup Files
!mkdir -p "data"
if local_upload:
  %cd data
  files.upload()
  %cd ..
if http_path:
  %cd data
  paths = http_path.split()
  for path in paths:
    !gdown $path
  %cd ..
if mode == "join":
  !gdown --id 1EzVPbRkBIIbgOzvEMeM0YpibDi2R4SHD
  !tar -xf nncp-2019-11-16.tar.gz
  %cd nncp-2019-11-16/
  !make preprocess
  %cd ..
```
## Run
```
#@title Split/Join
if mode == "split":
  input_path = "data/preprocessed.dat"
  orig = open(input_path, 'rb').read()
  int_list = []
  for i in range(0, len(orig), 2):
    int_list.append(orig[i] * 256 + orig[i+1])
  file_len = len(int_list)
  split = math.ceil(file_len / batch_size)
  part_split = math.ceil(file_len / (num_parts * batch_size))
  pos = 0
  for i in range(num_parts):
    output = []
    for j in range(batch_size):
      for k in range(part_split):
        if pos + k >= split:
          break
        index = pos + (j*split) + k
        if index >= file_len:
          break
        output.append(int_list[index])
    pos += part_split
    with open(("data/part." + str(i)), "wb") as out:
      for j in range(len(output)):
        out.write(bytes(((output[j] // 256),)))
        out.write(bytes(((output[j] % 256),)))

if mode == "join":
  file_len = 0
  for i in range(num_parts):
    part = open("data/part." + str(i), 'rb').read()
    file_len += len(part) / 2
  split = math.ceil(file_len / batch_size)
  part_split = math.ceil(file_len / (num_parts * batch_size))
  int_list = [0] * math.floor(file_len)
  pos = 0
  for i in range(num_parts):
    part = open("data/part." + str(i), 'rb').read()
    part_list = []
    for j in range(0, len(part), 2):
      part_list.append(part[j] * 256 + part[j+1])
    index2 = 0
    for j in range(batch_size):
      for k in range(part_split):
        if pos + k >= split:
          break
        index = pos + (j*split) + k
        if index >= file_len:
          break
        int_list[index] = part_list[index2]
        index2 += 1
    pos += part_split
  with open("data/output.dat", "wb") as out:
    for i in range(len(int_list)):
      out.write(bytes(((int_list[i] // 256),)))
      out.write(bytes(((int_list[i] % 256),)))
  !./nncp-2019-11-16/preprocess d data/dictionary.words ./data/output.dat ./data/final.dat

#@title File Sizes
!ls -l data

#@title MD5
!md5sum data/*

#@title Download Result
def download(path):
  """Downloads the file at the specified path."""
  if download_option == 'local':
    files.download(path)
  elif download_option == 'google_drive':
    !cp -f $path /content/gdrive/My\ Drive

if mode == "split":
  for i in range(num_parts):
    download("data/part." + str(i))
if mode == "join":
  download("data/final.dat")
```
# Point-based and Parallel Processing Water Observations from Space (WOfS) Product in Africa <img align="right" src="../Supplementary_data/DE_Africa_Logo_Stacked_RGB_small.jpg">
* **Products used:**
[ga_ls8c_wofs_2](https://explorer.digitalearth.africa/ga_ls8c_wofs_2)
## Description
The [Water Observations from Space (WOfS)](https://www.ga.gov.au/scientific-topics/community-safety/flood/wofs/about-wofs) is a derived product from Landsat 8 satellite observations as part of provisional Landsat 8 Collection 2 surface reflectance and shows surface water detected in Africa.
Individual water classified images are called Water Observation Feature Layers (WOFLs), and are created in a 1-to-1 relationship with the input satellite data.
Hence there is one WOFL for each satellite dataset processed for the occurrence of water.
The data in a WOFL is stored as a bit field. This is a binary number, where each digit of the number is independently set (1) or cleared (0) based on the presence or absence of a particular attribute (water, cloud, cloud shadow etc). In this way, the single decimal value associated with each pixel can provide information on a variety of features of that pixel.
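As a sketch of how such a bit field decodes (the flag names and bit positions here are purely illustrative, not the actual WOfS layout; see the linked notebooks for the authoritative flag definitions):

```python
# Illustrative only: decode a bit-field value into named attributes.
# The bit positions below are hypothetical; consult the WOfS
# documentation for the product's actual flag layout.
FLAGS = {"nodata": 0, "cloud_shadow": 5, "cloud": 6, "water": 7}

def decode(value):
    # Test each named bit of the integer independently.
    return {name: bool(value >> bit & 1) for name, bit in FLAGS.items()}

print(decode(128))  # only the "water" bit is set; everything else is clear
```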
For more information on the structure of WOFLs and how to interact with them, see [Water Observations from Space](../Datasets/Water_Observations_from_Space.ipynb) and [Applying WOfS bitmasking](../Frequently_used_code/Applying_WOfS_bitmasking.ipynb) notebooks.
This notebook explains how to query the WOfS product for each collected validation point in Africa using a point-based sampling approach.
The notebook demonstrates how to:
1. Load validation points for each partner institution following the cleaning stage described in
2. Query WOFL data for the validation points and capture the WOfS-defined class using point-based sampling and multiprocessing
3. Extract a LUT for each point that contains the validation-point information, the WOfS class, and the number of clear observations in each month
***
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load packages
Import Python packages that are used for the analysis.
```
%matplotlib inline
import datacube
from datacube.utils import masking, geometry
import sys
import os
import rasterio
import xarray
import glob
import numpy as np
import pandas as pd
import seaborn as sn
import geopandas as gpd
import matplotlib.pyplot as plt
import multiprocessing as mp
import scipy, scipy.ndimage
import warnings
warnings.filterwarnings("ignore") #this will suppress the warnings for multiple UTM zones in your AOI
sys.path.append("../Scripts")
from geopandas import GeoSeries, GeoDataFrame
from shapely.geometry import Point
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import plot_confusion_matrix, f1_score
from deafrica_plotting import map_shapefile,display_map, rgb
from deafrica_spatialtools import xr_rasterize
from deafrica_datahandling import wofs_fuser, mostcommon_crs,load_ard,deepcopy
from deafrica_dask import create_local_dask_cluster
from tqdm import tqdm
```
### Analysis parameters
To analyse validation points collected by each partner institution, we need to obtain WOfS surface water observation data that corresponds with the labelled input data locations.
- `path2csv`: the path to the CEO validation points labelled by each partner institution in Africa
- `ValPoints`: CEO validation points labelled by each partner institution in Africa, in ESRI shapefile format
- `path`: direct path to the ESRI shapefile, in case the shapefile is available
- `input_data`: geopandas dataframe of the CEO validation points labelled by each partner institution in Africa
**Note:** Run the following three cells in case you don't have an ESRI shapefile for the validation points.
```
path2csv = '../Data/Processed/AGRYHMET/AGRYHMET_ValidationPoints.csv'
df = pd.read_csv(path2csv,delimiter=",")
geometries = [Point(xy) for xy in zip(df.LON, df.LAT)]
crs = {'init': 'epsg:4326'}
ValPoints = GeoDataFrame(df, crs=crs, geometry=geometries)
ValPoints.to_file(filename='../Data/Processed/AGRYHMET/AGRYHMET_ValidationPoints.shp')
```
**Note:** If you already have an ESRI shapefile for the validation points, please continue from this point onward.
```
path = '../Data/Processed/AGRYHMET/AGRYHMET_ValidationPoints.shp'
#reading the table and converting CRS to metric
input_data = gpd.read_file(path).to_crs('epsg:6933')
input_data.columns
input_data= input_data.drop(['Unnamed_ 0'], axis=1)
#Checking the size of the input data
input_data.shape
```
### Sample WOfS at the ground truth coordinates
To load WOFL data, we first create a re-usable query that defines two parameters. The first, `group_by` solar day, ensures that data between scenes is combined correctly. The second, the `resampling` method, is set to nearest. This query is later updated in the script with the time period we are interested in, as well as the other parameters needed to correctly load the data.
We can convert the WOFL bit field into a binary array containing True and False values. This allows us to use the WOFL data as a mask that can be applied to other datasets. The `make_mask` function allows us to create a mask using the flag labels (e.g. "wet" or "dry") rather than the binary numbers we used above. For more details on how to do masking on WOfS, see the [Applying_WOfS_bit_masking](../Frequently_used_code/Applying_WOfS_bitmasking.ipynb) notebook in Africa sandbox.
```
#generate query object
query ={'group_by':'solar_day',
'resampling':'nearest'}
```
Define a function to query the WOfS database over a window spanning the five days before and after the start of each calendar month.
```
def get_wofs_for_point(index, row, input_data, query, results_wet, results_clear):
    dc = datacube.Datacube(app='WOfS_accuracy')
    # get the month value for each index
    month = input_data.loc[index]['MONTH']
    # get the value for time including year, month, start date and end date
    timeYM = '2018-' + f'{month:02d}'
    start_date = np.datetime64(timeYM) - np.timedelta64(5, 'D')
    end_date = np.datetime64(timeYM) + np.timedelta64(5, 'D')
    time = (str(start_date), str(end_date))
    plot_id = input_data.loc[index]['PLOT_ID']
    # keep the original query unchanged
    dc_query = deepcopy(query)
    geom = geometry.Geometry(input_data.geometry.values[index].__geo_interface__, geometry.CRS('EPSG:6933'))
    q = {"geopolygon": geom}
    t = {"time": time}
    # update the query
    dc_query.update(t)
    dc_query.update(q)
    # load the Landsat-8 WOfS product; x and y are set for a point-based query
    # (the commented lines show the 3x3-pixel window-based alternative)
    wofls = dc.load(product="ga_ls8c_wofs_2",
                    y=(input_data.geometry.y[index], input_data.geometry.y[index]),
                    x=(input_data.geometry.x[index], input_data.geometry.x[index]),
                    #y=(input_data.geometry.y[index] - 30.5, input_data.geometry.y[index] + 30.5),
                    #x=(input_data.geometry.x[index] - 30.5, input_data.geometry.x[index] + 30.5),
                    crs='EPSG:6933',
                    time=time,
                    output_crs='EPSG:6933',
                    resolution=(-30, 30))
    # exclude the records where the WOFL query returns no water measurement
    if not 'water' in wofls:
        pass
    else:
        # define a mask for wet and clear pixels
        wet_nocloud = {"water_observed": True, "cloud_shadow": False, "cloud": False, "nodata": False}
        # define a mask for dry and clear pixels
        dry_nocloud = {"water_observed": False, "cloud_shadow": False, "cloud": False, "nodata": False}
        wofl_wetnocloud = masking.make_mask(wofls, **wet_nocloud).astype(int)
        wofl_drynocloud = masking.make_mask(wofls, **dry_nocloud).astype(int)
        clear = (wofl_wetnocloud | wofl_drynocloud).water.all(dim=['x', 'y']).values
        # record the total number of clear observations for each point in each
        # month and use it to filter out months with no valid data
        n_clear = clear.sum()
        # identify whether WOfS saw water in a specific month at this location
        if n_clear > 0:
            wet = wofl_wetnocloud.isel(time=clear).water.max().values
        else:
            wet = 0
        # update results for both wet and clear observations
        results_wet.update({str(int(plot_id)) + "_" + str(month): int(wet)})
        results_clear.update({str(int(plot_id)) + "_" + str(month): int(n_clear)})
    return time
```
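The date-window arithmetic in this function can be sanity-checked in isolation: `np.datetime64('2018-04')` has month precision, so day arithmetic promotes it to the first day of the month, and the ±5-day offsets bracket the month boundary:

```python
import numpy as np

month = 4
timeYM = '2018-' + f'{month:02d}'  # month-precision timestamp
start_date = np.datetime64(timeYM) - np.timedelta64(5, 'D')
end_date = np.datetime64(timeYM) + np.timedelta64(5, 'D')
print(start_date, end_date)  # 2018-03-27 2018-04-06
```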
Define a function for parallel processing
```
def _parallel_fun(input_data, query, ncpus):
    manager = mp.Manager()
    results_wet = manager.dict()
    results_clear = manager.dict()
    # progress bar
    pbar = tqdm(total=len(input_data))

    def update(*a):
        pbar.update()

    with mp.Pool(ncpus) as pool:
        for index, row in input_data.iterrows():
            pool.apply_async(get_wofs_for_point,
                             [index,
                              row,
                              input_data,
                              query,
                              results_wet,
                              results_clear], callback=update)
        pool.close()
        pool.join()
    pbar.close()
    return results_wet, results_clear
```
Test the for loop
```
results_wet_test = dict()
results_clear_test = dict()
for index, row in input_data[0:14].iterrows():
    time = get_wofs_for_point(index, row, input_data, query, results_wet_test, results_clear_test)
    print(time)
```
Point-based query and parallel processing on WOfS
```
wet, clear = _parallel_fun(input_data, query, ncpus=15)
#extracting the final table with both CEO labels and WOfS class Wet and clear observations
wetdf = pd.DataFrame.from_dict(wet, orient = 'index')
cleardf = pd.DataFrame.from_dict(clear,orient='index')
df2 = wetdf.merge(cleardf, left_index=True, right_index=True)
df2 = df2.rename(columns={'0_x':'CLASS_WET','0_y':'CLEAR_OBS'})
#split the index (which is plotid + month) into separate columns
for index, row in df2.iterrows():
    df2.at[index,'PLOT_ID'] = index.split('_')[0] + '.0'
    df2.at[index,'MONTH'] = index.split('_')[1]
#reset the index
df2 = df2.reset_index(drop=True)
#convert plot id and month to str to help with matching
input_data['PLOT_ID'] = input_data.PLOT_ID.astype(str)
input_data['MONTH']= input_data.MONTH.astype(str)
# merge both dataframe at locations where plotid and month match
final_df = pd.merge(input_data, df2, on=['PLOT_ID','MONTH'], how='outer')
#Defining the shape of final table
final_df.shape
#Count the number of rows in the final table with NaN values in CLASS_WET and CLEAR_OBS (optional)
#This tests that the parallel processing function returns identical results each time it runs
countA = final_df["CLASS_WET"].isna().sum()
countB = final_df["CLEAR_OBS"].isna().sum()
countA, countB
final_df.to_csv(('../../Results/WOfS_Assessment/Point_Based/Institutions/AGRYHMET_PointBased_5D.csv'))
print(datacube.__version__)
```
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
**Last modified:** September 2020
**Compatible datacube version:**
## Tags
Browse all available tags on the DE Africa User Guide's [Tags Index](https://) (placeholder as this does not exist yet)
<a href="https://colab.research.google.com/github/Adminixtrator/gpt-2/blob/master/GPT_2_With_SQuAD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Cloning the Repository
```
!git clone https://github.com/adminixtrator/gpt-2.git
%cd gpt-2
%ls
```
# Using the gpt-2 model 345M
```
#Download the gpt-2 model 345M..
!python3 download_model.py 345M
#Encoding..
!export PYTHONIOENCODING=UTF-8
```
# Implementing GPT-2
```
#Changing directory..
import os
os.chdir('src')
!pip install regex #For OpenAI GPT
#Importing the necessary libraries..
import json
import numpy as np
import tensorflow as tf
import model, sample, encoder
#Function to use the interaction model..
def interact_model(model_name, seed, nsamples, batch_size, length, temperature, top_k, models_dir):
    models_dir = os.path.expanduser(os.path.expandvars(models_dir))
    if batch_size is None:
        batch_size = 1
    assert nsamples % batch_size == 0
    enc = encoder.get_encoder(model_name, models_dir)
    hparams = model.default_hparams()
    with open(os.path.join(models_dir, model_name, 'hparams.json')) as f:
        hparams.override_from_dict(json.load(f))
    if length is None:
        length = hparams.n_ctx // 2
    elif length > hparams.n_ctx:
        raise ValueError("Can't get samples longer than window size: %s" % hparams.n_ctx)
    with tf.Session(graph=tf.Graph()) as sess:
        context = tf.placeholder(tf.int32, [batch_size, None])
        np.random.seed(seed)
        tf.set_random_seed(seed)
        output = sample.sample_sequence(hparams=hparams, length=length, context=context, batch_size=batch_size, temperature=temperature, top_k=top_k)
        saver = tf.train.Saver(save_relative_paths=True)
        ckpt = tf.train.latest_checkpoint(os.path.join(models_dir, model_name))
        saver.restore(sess, ckpt)
        while True:
            raw_text = input("\nModel prompt >>> ")
            if raw_text == 'ADMIN_NIXTRATOR':
                raw_text = False
                break
            while not raw_text:
                print('\nPrompt should not be empty!')
                raw_text = input("\nModel prompt >>> ")
            context_tokens = enc.encode(raw_text)
            generated = 0
            for _ in range(nsamples // batch_size):
                out = sess.run(output, feed_dict={
                    context: [context_tokens for _ in range(batch_size)]
                })[:, len(context_tokens):]
                for i in range(batch_size):
                    generated += 1
                    text = enc.decode(out[i])
                    print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40)
                    print(text)
            print("=" * 80)
```
# **Code Explanation**
## **model_name**:
This indicates which model we are using. In our case, we are using the GPT-2 model with 345 million parameters or weights
## **seed**:
Integer seed for random number generators, fix seed to reproduce results
## **nsamples**:
This represents the number of sample texts generated in our output
## **batch_size**:
This only affects speed/memory. This must also divide nsamples
*Note: To generate more than one sample, you need to change the values of both nsamples and batch_size and also have to keep them equal.*
## **length**:
It represents the number of tokens in the generated text. If the length is None, then the number of tokens is decided by model hyperparameters
## **temperature**:
This controls randomness in Boltzmann distribution. Lower temperature results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Higher temperature results in more random completions
## **top_k**:
This parameter controls diversity. If the value of top_k is set to 1, this means that only 1 word is considered for each step (token). If top_k is set to 40, that means 40 words are considered at each step. 0 (default) is a special setting meaning no restrictions. top_k = 40 generally is a good value
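A rough sketch of how temperature and top_k interact when sampling a token from logits (an illustration of the idea only, not GPT-2's actual sampling code; the function name is hypothetical):

```python
import numpy as np

def sample_logits(logits, temperature=1.0, top_k=0):
    """Illustrative temperature + top_k sampling over a logit vector."""
    logits = np.asarray(logits, dtype=float) / temperature  # sharpen or flatten
    if top_k > 0:
        # mask everything outside the k highest-scoring tokens
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

token = sample_logits([2.0, 1.0, 0.1], temperature=0.7, top_k=2)
# With top_k=2, the lowest-scoring token (index 2) can never be drawn.
```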
## **models_dir**:
It represents the path to parent folder containing model subfolders (contains the <model_name> folder)
# Results
```
#Using the arguements above..
interact_model('345M', None, 1, 1, 20, 1, 0, '/content/gpt-2/models')
```
# Fine-tuning on SQuAD for question-answering
```
#Checking Directory..
os.chdir('/content/gpt-2/SQuAD/')
%ls
#Importing the necessary libraries..
import numpy as np, pandas as pd
import json
import ast
from textblob import TextBlob
import nltk
import torch
import pickle
from scipy import spatial
import warnings
warnings.filterwarnings('ignore')
import spacy
from nltk import Tree
en_nlp = spacy.load('en')
from nltk.stem.lancaster import LancasterStemmer
st = LancasterStemmer()
from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer
#Train set
train = pd.read_json("data/train-v2.0.json")
#Familiarizing with the dataset..
train.shape
```
## Loading Embedding dictionary
```
def get_target(x):
    idx = -1
    for i in range(len(x["sentences"])):
        if x["text"] in x["sentences"][i]: idx = i
    return idx
train.data
train.dropna(inplace=True)
train.shape
```
## Data Processing
```
def process_data(train):
    print("step 1")
    train['sentences'] = train['context'].apply(lambda x: [item.raw for item in TextBlob(x).sentences])
    print("step 2")
    train["target"] = train.apply(get_target, axis = 1)
    print("step 3")
    train['sent_emb'] = train['sentences'].apply(
        lambda x: [dict_emb[item][0] if item in dict_emb else np.zeros(4096) for item in x])
    print("step 4")
    train['quest_emb'] = train['question'].apply(lambda x: dict_emb[x] if x in dict_emb else np.zeros(4096))
    return train
train = process_data(train)
def cosine_sim(x):
    li = []
    for item in x["sent_emb"]:
        li.append(spatial.distance.cosine(item, x["quest_emb"][0]))
    return li

def pred_idx(distances):
    return np.argmin(distances)

#Function to make predictions..
def predictions(train):
    train["cosine_sim"] = train.apply(cosine_sim, axis = 1)
    train["diff"] = (train["quest_emb"] - train["sent_emb"])**2
    train["euclidean_dis"] = train["diff"].apply(lambda x: list(np.sum(x, axis = 1)))
    del train["diff"]
    print("cosine start")
    train["pred_idx_cos"] = train["cosine_sim"].apply(lambda x: pred_idx(x))
    train["pred_idx_euc"] = train["euclidean_dis"].apply(lambda x: pred_idx(x))
    return train

#Making predictions..
predicted = predictions(train)
```
## Accuracy
```
#Function to check accuracy..
def accuracy(target, predicted):
    acc = (target == predicted).sum() / len(target)
    return acc
print(accuracy(predicted["target"], predicted["pred_idx_euc"])) #Accuracy for euclidean Distance
print(accuracy(predicted["target"], predicted["pred_idx_cos"])) #Accuracy for Cosine Similarity
```
## Combined Accuracy
```
label = []
for i in range(predicted.shape[0]):
    if predicted.iloc[i,10] == predicted.iloc[i,11]:
        label.append(predicted.iloc[i,10])
    else:
        # keep both candidate indices when the two predictors disagree
        # (the original appended column 10 twice, which made the tuple redundant)
        label.append((predicted.iloc[i,10], predicted.iloc[i,11]))

ct = 0
for i in range(75206):
    item = predicted["target"][i]
    try:
        if label[i] == predicted["target"][i]: ct += 1
    except:
        if item in label[i]: ct += 1

ct/75206 #Accuracy..
```
##### Copyright 2021 The TF-Agents Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# DQN C51/Rainbow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/9_c51_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Introduction
This example shows how to train a [Categorical DQN (C51)](https://arxiv.org/pdf/1707.06887.pdf) agent on the Cartpole environment using the TF-Agents library.

Make sure you take a look through the [DQN tutorial](https://github.com/tensorflow/agents/blob/master/docs/tutorials/1_dqn_tutorial.ipynb) as a prerequisite. This tutorial will assume familiarity with the DQN tutorial; it will mainly focus on the differences between DQN and C51.
## Setup
If you haven't installed tf-agents yet, run:
```
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg
!pip install 'imageio==2.4.0'
!pip install pyvirtualdisplay
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import categorical_q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tf.compat.v1.enable_v2_behavior()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
```
## Hyperparameters
```
env_name = "CartPole-v1" # @param {type:"string"}
num_iterations = 15000 # @param {type:"integer"}
initial_collect_steps = 1000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_capacity = 100000 # @param {type:"integer"}
fc_layer_params = (100,)
batch_size = 64 # @param {type:"integer"}
learning_rate = 1e-3 # @param {type:"number"}
gamma = 0.99
log_interval = 200 # @param {type:"integer"}
num_atoms = 51 # @param {type:"integer"}
min_q_value = -20 # @param {type:"integer"}
max_q_value = 20 # @param {type:"integer"}
n_step_update = 2 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 1000 # @param {type:"integer"}
```
## Environment
Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (vs. CartPole-v0 in the DQN tutorial), which has a larger max reward of 500 rather than 200.
```
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
```
## Agent
C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space.
The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value:

By learning the distribution rather than simply the expected value, the algorithm is able to stay more stable during training, leading to improved final performance. This is particularly true in situations with bimodal or even multimodal value distributions, where a single average does not provide an accurate picture.
In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations in order to calculate its loss function. But don't worry, all of this is taken care of for you in TF-Agents!
To create a C51 Agent, we first need to create a `CategoricalQNetwork`. The API of the `CategoricalQNetwork` is the same as that of the `QNetwork`, except that there is an additional argument `num_atoms`. This represents the number of support points in our probability distribution estimates. (The above image includes 10 support points, each represented by a vertical blue bar.) As you can tell from the name, the default number of atoms is 51.
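For intuition, the support is simply an evenly spaced grid of `num_atoms` values between `min_q_value` and `max_q_value`, and the predicted Q-value is the expectation over that grid (a sketch of the standard C51 formulation, not TF-Agents internals):

```python
import numpy as np

num_atoms = 51
min_q_value, max_q_value = -20.0, 20.0
# Fixed support of the categorical distribution: 51 evenly spaced atoms.
support = np.linspace(min_q_value, max_q_value, num_atoms)
delta_z = (max_q_value - min_q_value) / (num_atoms - 1)  # atom spacing = 0.8
# The predicted Q-value is the expectation over this support:
# Q(s, a) = sum_i p_i(s, a) * support[i]
```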
```
categorical_q_net = categorical_q_network.CategoricalQNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    num_atoms=num_atoms,
    fc_layer_params=fc_layer_params)
```
We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.
Note that one other significant difference from vanilla `DqnAgent` is that we now need to specify `min_q_value` and `max_q_value` as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). Make sure to choose these appropriately for your particular environment. Here we use -20 and 20.
```
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.compat.v2.Variable(0)
agent = categorical_dqn_agent.CategoricalDqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    categorical_q_network=categorical_q_net,
    optimizer=optimizer,
    min_q_value=min_q_value,
    max_q_value=max_q_value,
    n_step_update=n_step_update,
    td_errors_loss_fn=common.element_wise_squared_loss,
    gamma=gamma,
    train_step_counter=train_step_counter)
agent.initialize()
```
One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as:
$G_t = R_{t + 1} + \gamma V(s_{t + 1})$
where we define $V(s) = \max_a{Q(s, a)}$.
N-step updates involve expanding the standard single-step return function $n$ times:
$G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$
N-step updates enable the agent to bootstrap from further in the future, and with the right value of $n$, this often leads to faster learning.
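The n-step return above can be sketched as a plain function (gamma = 0.5 is chosen here so the arithmetic stays round; this is an illustration of the formula, not TF-Agents code):

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """G_t^n = R_{t+1} + gamma*R_{t+2} + ... + gamma^n * V(s_{t+n})."""
    g = 0.0
    for i, r in enumerate(rewards):
        g += (gamma ** i) * r  # discounted reward terms
    # bootstrap from the value estimate n steps ahead
    return g + (gamma ** len(rewards)) * bootstrap_value

# With n = 2, two rewards are accumulated before bootstrapping:
g = n_step_return([1.0, 1.0], bootstrap_value=10.0, gamma=0.5)
# 1.0 + 0.5*1.0 + 0.25*10.0 = 4.0
```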
Although C51 and n-step updates are often combined with prioritized replay to form the core of the [Rainbow agent](https://arxiv.org/pdf/1710.02298.pdf), we saw no measurable improvement from implementing prioritized replay. Moreover, we find that when combining our C51 agent with n-step updates alone, our agent performs as well as other Rainbow agents on the sample of Atari environments we've tested.
## Metrics and Evaluation
The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.
```
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
  total_return = 0.0
  for _ in range(num_episodes):
    time_step = environment.reset()
    episode_return = 0.0
    while not time_step.is_last():
      action_step = policy.action(time_step)
      time_step = environment.step(action_step.action)
      episode_return += time_step.reward
    total_return += episode_return
  avg_return = total_return / num_episodes
  return avg_return.numpy()[0]

random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
                                                train_env.action_spec())

compute_avg_return(eval_env, random_policy, num_eval_episodes)

# Please also see the metrics module for standard implementations of different
# metrics.
```
## Data Collection
As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy.
```
#@test {"skip": true}
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=train_env.batch_size,
    max_length=replay_buffer_capacity)

def collect_step(environment, policy):
  time_step = environment.current_time_step()
  action_step = policy.action(time_step)
  next_time_step = environment.step(action_step.action)
  traj = trajectory.from_transition(time_step, action_step, next_time_step)

  # Add trajectory to the replay buffer
  replay_buffer.add_batch(traj)

for _ in range(initial_collect_steps):
  collect_step(train_env, random_policy)

# This loop is so common in RL, that we provide standard implementations of
# these. For more details see the drivers module.

# Dataset generates trajectories with shape [BxTx...] where
# T = n_step_update + 1.
dataset = replay_buffer.as_dataset(
    num_parallel_calls=3, sample_batch_size=batch_size,
    num_steps=n_step_update + 1).prefetch(3)

iterator = iter(dataset)
```
## Training the agent
The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
The following will take ~7 minutes to run.
```
#@test {"skip": true}
try:
  %%time
except:
  pass

# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)

# Reset the train step
agent.train_step_counter.assign(0)

# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]

for _ in range(num_iterations):

  # Collect a few steps using collect_policy and save to the replay buffer.
  for _ in range(collect_steps_per_iteration):
    collect_step(train_env, agent.collect_policy)

  # Sample a batch of data from the buffer and update the agent's network.
  experience, unused_info = next(iterator)
  train_loss = agent.train(experience)

  step = agent.train_step_counter.numpy()

  if step % log_interval == 0:
    print('step = {0}: loss = {1}'.format(step, train_loss.loss))

  if step % eval_interval == 0:
    avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
    print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return))
    returns.append(avg_return)
```
## Visualization
### Plots
We can plot return vs global steps to see the performance of our agent. In `CartPole-v1`, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500.
```
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=550)
```
### Videos
It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
```
def embed_mp4(filename):
  """Embeds an mp4 file in the notebook."""
  video = open(filename, 'rb').read()
  b64 = base64.b64encode(video)
  tag = '''
  <video width="640" height="480" controls>
    <source src="data:video/mp4;base64,{0}" type="video/mp4">
  Your browser does not support the video tag.
  </video>'''.format(b64.decode())
  return IPython.display.HTML(tag)
```
The following code visualizes the agent's policy for a few episodes:
```
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
  for _ in range(num_episodes):
    time_step = eval_env.reset()
    video.append_data(eval_py_env.render())
    while not time_step.is_last():
      action_step = agent.policy.action(time_step)
      time_step = eval_env.step(action_step.action)
      video.append_data(eval_py_env.render())

embed_mp4(video_filename)
```
C51 tends to do slightly better than DQN on CartPole-v1, but the difference between the two agents becomes more and more significant in increasingly complex environments. For example, on the full Atari 2600 benchmark, C51 demonstrates a mean score improvement of 126% over DQN after normalizing with respect to a random agent. Additional improvements can be gained by including n-step updates.
For a deeper dive into the C51 algorithm, see [A Distributional Perspective on Reinforcement Learning (2017)](https://arxiv.org/pdf/1707.06887.pdf).
## Churn Prediction using Logistic Regression
## Data Dictionary
There are multiple variables in the dataset which can be cleanly divided in 3 categories:
### Demographic information about customers
<b>customer_id</b> - Customer id
<b>vintage</b> - Vintage of the customer with the bank in number of days
<b>age</b> - Age of customer
<b>gender</b> - Gender of customer
<b>dependents</b> - Number of dependents
<b>occupation</b> - Occupation of the customer
<b>city</b> - City of customer (anonymised)
### Customer Bank Relationship
<b>customer_nw_category</b> - Net worth of customer (3:Low 2:Medium 1:High)
<b>branch_code</b> - Branch Code for customer account
<b>days_since_last_transaction</b> - No of Days Since Last Credit in Last 1 year
### Transactional Information
<b>current_balance</b> - Balance as of today
<b>previous_month_end_balance</b> - End of Month Balance of previous month
<b>average_monthly_balance_prevQ</b> - Average monthly balances (AMB) in Previous Quarter
<b>average_monthly_balance_prevQ2</b> - Average monthly balances (AMB) in previous to previous quarter
<b>current_month_credit</b> - Total Credit Amount current month
<b>previous_month_credit</b> - Total Credit Amount previous month
<b>current_month_debit</b> - Total Debit Amount current month
<b>previous_month_debit</b> - Total Debit Amount previous month
<b>current_month_balance</b> - Average Balance of current month
<b>previous_month_balance</b> - Average Balance of previous month
<b>churn</b> - Average balance of customer falls below minimum balance in the next quarter (1/0)
## Churn Prediction
* Load Data & Packages for model building & preprocessing
* Preprocessing & Missing value imputation
* Select features on the basis of EDA Conclusions & build baseline model
* Decide Evaluation Metric on the basis of business problem
* Build model using all features & compare with baseline
### Loading Packages
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix, roc_curve, precision_score, recall_score, precision_recall_curve
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=UserWarning)
```
### Loading Data
```
df = pd.read_csv('churn_prediction.csv')
```
### Missing Values
Before we go on to build the model, we must look for missing values within the dataset as treating the missing values is a necessary step before we fit a model on the dataset.
```
pd.isnull(df).sum()
```
The result of this function shows that there are quite a few missing values in columns gender, dependents, city, days since last transaction and Percentage change in credits. Let us go through each of them 1 by 1 to find the appropriate missing value imputation strategy for each of them.
#### Gender
Let us look at the categories within the gender column:
```
df['gender'].value_counts()
```
So there is a good mix of males and females, and arguably missing values cannot be filled with either one. We can create a separate category by assigning the value -1 to all missing values in this column.
Before that, first we will convert the gender into 0/1 and then replace missing values with -1
```
#Convert Gender
dict_gender = {'Male': 1, 'Female':0}
df.replace({'gender': dict_gender}, inplace = True)
df['gender'] = df['gender'].fillna(-1)
```
#### Dependents, occupation and city with mode
Next we will have a quick look at the dependents & occupation columns and impute missing values with the mode of each, as these are ordinal/categorical variables
```
df['dependents'].value_counts()
df['occupation'].value_counts()
df['dependents'] = df['dependents'].fillna(0)
df['occupation'] = df['occupation'].fillna('self_employed')
```
Similarly, city can be imputed with its most common category, 1020
```
df['city'] = df['city'].fillna(1020)
```
#### Days since Last Transaction
A fair assumption can be made for this column: since it records the number of days since the last transaction within 1 year, we can substitute missing values with a value greater than one year, say 999
```
df['days_since_last_transaction'] = df['days_since_last_transaction'].fillna(999)
```
### Preprocessing
Now, before applying a linear model such as logistic regression, we need to scale the data and keep all features strictly numeric.
### Dummies with Multiple Categories
```
# Convert occupation to one hot encoded features
df = pd.concat([df,pd.get_dummies(df['occupation'],prefix = str('occupation'),prefix_sep='_')],axis = 1)
```
### Scaling Numerical Features for Logistic Regression
Now, recall that there are a lot of outliers in the dataset, especially in the previous and current balance features, and that the distributions of these features are skewed. We take 2 steps to deal with that here:
* Log Transformation
* Standard Scaler
Standard scaling is in any case a necessity for linear models, and we apply it here after log-transforming all balance features.
```
num_cols = ['customer_nw_category', 'current_balance',
'previous_month_end_balance', 'average_monthly_balance_prevQ2', 'average_monthly_balance_prevQ',
'current_month_credit','previous_month_credit', 'current_month_debit',
'previous_month_debit','current_month_balance', 'previous_month_balance']
for i in num_cols:
    # Shift by a constant before taking the log so that negative balances remain valid inputs
    df[i] = np.log(df[i] + 17000)
std = StandardScaler()
scaled = std.fit_transform(df[num_cols])
scaled = pd.DataFrame(scaled,columns=num_cols)
df_df_og = df.copy()
df = df.drop(columns = num_cols,axis = 1)
df = df.merge(scaled,left_index=True,right_index=True,how = "left")
y_all = df.churn
df = df.drop(['churn','customer_id','occupation'],axis = 1)
```
## Model Building and Evaluation Metrics
Since this is a binary classification problem, we could use the following 2 popular metrics:
1. Recall
2. Area under the Receiver operating characteristic curve
We focus on the recall value here because a customer falsely marked as churn is far less costly than a churning customer who goes undetected, leaving the bank no chance to take appropriate measures to stop him/her from churning.
The ROC AUC is the area under the curve obtained by plotting the true positive rate (y-axis) against the false positive rate (x-axis).
Our main metric here will be recall, while the ROC AUC score captures how well the predicted probabilities are able to differentiate between the 2 classes.
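To make the trade-off concrete, here is a tiny plain-Python sketch (independent of the sklearn helpers used below, with made-up counts) of how recall and precision fall out of confusion-matrix counts:

```python
def recall_from_counts(tp, fn):
    # Recall = TP / (TP + FN): the share of actual churners that we caught.
    return tp / (tp + fn)

def precision_from_counts(tp, fp):
    # Precision = TP / (TP + FP): the share of flagged customers who truly churn.
    return tp / (tp + fp)

# Hypothetical counts: 80 churners flagged correctly, 20 missed, 40 false alarms.
print(recall_from_counts(80, 20))     # 0.8
print(precision_from_counts(80, 40))  # roughly 0.667
```

For the bank's problem, missing one of the 20 undetected churners (lower recall) hurts more than one of the 40 false alarms (lower precision), which is why recall leads.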
### Conclusions from EDA
* For debit values, we see that there is a significant difference in the distribution for churn and non churn and it might turn out to be an important feature
* For all the balance features the lower values have much higher proportion of churning customers
* For most frequent vintage values, the churning customers are slightly higher, while for higher values of vintage, we have mostly non churning customers which is in sync with the age variable
* We see significant difference for different occupations and certainly would be interesting to use as a feature for prediction of churn.
Now, we will first split our dataset into test and train and using the above conclusions select columns and build a baseline logistic regression model to check the ROC-AUC Score & the confusion matrix
### Baseline Columns
```
baseline_cols = ['current_month_debit', 'previous_month_debit','current_balance','previous_month_end_balance','vintage'
,'occupation_retired', 'occupation_salaried','occupation_self_employed', 'occupation_student']
df_baseline = df[baseline_cols]
```
### Train Test Split to create a validation set
```
# Splitting the data into Train and Validation set
xtrain, xtest, ytrain, ytest = train_test_split(df_baseline,y_all,test_size=1/3, random_state=11, stratify = y_all)
model = LogisticRegression()
model.fit(xtrain,ytrain)
pred = model.predict_proba(xtest)[:,1]
```
### AUC ROC Curve & Confusion Matrix
Now, let us quickly look at the AUC-ROC curve for our logistic regression model and also the confusion matrix to see where the logistic regression model is failing here.
```
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(ytest,pred)
auc = roc_auc_score(ytest, pred)
plt.figure(figsize=(12,8))
plt.plot(fpr,tpr,label="Validation AUC-ROC="+str(auc))
x = np.linspace(0, 1, 1000)
plt.plot(x, x, linestyle='-')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc=4)
plt.show()
# Confusion Matrix
pred_val = model.predict(xtest)
label_preds = pred_val
cm = confusion_matrix(ytest,label_preds)
def plot_confusion_matrix(cm, normalized=True, cmap='bone'):
    plt.figure(figsize=[7, 6])
    norm_cm = cm
    if normalized:
        norm_cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    sns.heatmap(norm_cm, annot=cm, fmt='g',
                xticklabels=['Predicted: No', 'Predicted: Yes'],
                yticklabels=['Actual: No', 'Actual: Yes'], cmap=cmap)

plot_confusion_matrix(cm)
# Recall Score
recall_score(ytest,pred_val)
```
### Cross validation
Cross Validation is one of the most important concepts in any type of data modelling. It simply says, try to leave a sample on which you do not train the model and test the model on this sample before finalizing the model.
We divide the entire population into k equal samples. Now we train models on k-1 samples and validate on 1 sample. Then, at the second iteration we train the model with a different sample held as validation.
In k iterations, we have basically built model on each sample and held each of them as validation. This is a way to reduce the selection bias and reduce the variance in prediction power.
Since it builds several models on different subsets of the dataset, we can be more sure of our model performance if we use CV for testing our models.
```
def cv_score(ml_model, rstate = 12, thres = 0.5, cols = df.columns):
    i = 1
    cv_scores = []
    df1 = df.copy()
    df1 = df[cols]

    # 5 Fold cross validation stratified on the basis of target
    kf = StratifiedKFold(n_splits=5, random_state=rstate, shuffle=True)
    for df_index, test_index in kf.split(df1, y_all):
        print('\n{} of kfold {}'.format(i, kf.n_splits))
        xtr, xvl = df1.loc[df_index], df1.loc[test_index]
        ytr, yvl = y_all.loc[df_index], y_all.loc[test_index]

        # Define model for fitting on the training set for each fold
        model = ml_model
        model.fit(xtr, ytr)
        pred_probs = model.predict_proba(xvl)
        pp = []

        # Use threshold to define the classes based on probability values
        for j in pred_probs[:, 1]:
            if j > thres:
                pp.append(1)
            else:
                pp.append(0)

        # Calculate scores for each fold and print
        pred_val = pp
        roc_score = roc_auc_score(yvl, pred_probs[:, 1])
        recall = recall_score(yvl, pred_val)
        precision = precision_score(yvl, pred_val)
        msg = ""
        msg += "ROC AUC Score: {}, Recall Score: {:.4f}, Precision Score: {:.4f} ".format(roc_score, recall, precision)
        print("{}".format(msg))

        # Save scores
        cv_scores.append(roc_score)
        i += 1
    return cv_scores

baseline_scores = cv_score(LogisticRegression(), cols = baseline_cols)
```
Now let us try using all columns available to check if we get significant improvement.
```
all_feat_scores = cv_score(LogisticRegression())
```
There is some improvement in both ROC AUC Scores and Precision/Recall Scores.
```
from sklearn.ensemble import RandomForestClassifier
rf_all_features = cv_score(RandomForestClassifier(n_estimators=100, max_depth=8))
```
## Comparison of Different model fold wise
Let us visualise the cross validation scores for each fold for the following 3 models and observe differences:
* Baseline Model
* Model based on all features
* Random forest model based on all features
```
results_df = pd.DataFrame({'baseline':baseline_scores, 'all_feats': all_feat_scores, 'random_forest': rf_all_features})
results_df.plot(y=["baseline", "all_feats", "random_forest"], kind="bar")
```
Here, we can see that the random forest model is giving the best result for each fold and students are encouraged to try and fine tune the model to get the best results.
# Sample notebook showcasing R on Jupyter
An overview of some plotting controls available in R for visualizing networks and visualizing tree models.
To execute a cell, select it and then use **[Shift] + [Enter]**.
```
# Default plot size is 7 inches x 7 inches; change to 7 x 3
options(repr.plot.height=3)
library(rpart) # CART tree models
library(rpart.plot) # Pretty plotting
library(vcd) # Spline plotting
titanic <- as.data.frame(Titanic)
head(titanic, n=5)
summary(titanic)
```
## Data visualization
Before making the tree models, try some visualization.
```
Survival.by.Sex <- xtabs(Freq~Sex+Survived, data=titanic)
Survival.by.Class <- xtabs(Freq~Class+Survived, data=titanic)
Survival.by.Age <- xtabs(Freq~Age+Survived, data=titanic)
oldpar <- par(mfrow=c(1,3))
options(repr.plot.width=7)
spineplot(Survival.by.Sex, col=c(rgb(0, 0, 0.5), rgb(0.3, 0.3, 1)))
spineplot(Survival.by.Class, col=c(rgb(0, 0, 0.5), rgb(0.3, 0.3, 1)))
spineplot(Survival.by.Age, col=c(rgb(0, 0, 0.5), rgb(0.3, 0.3, 1)))
par(oldpar)
cart.control <- rpart.control(minbucket=1, cp=0, maxdepth=5)
model.cart = rpart(
Survived ~ . ,
data=titanic[ , -5],
weights=titanic$Freq,
method="class",
#xval=10,
control=cart.control
)
print(model.cart)
printcp(model.cart)
# The standard Tree plot
plot(model.cart, margin=0.01)
text(model.cart, use.n=TRUE, cex=.8)
options(repr.plot.height=5)
# Better visualization using rpart.plot
prp(x=model.cart,
fallen.leaves=TRUE, branch=.5, faclen=0, trace=1,
extra=1, under=TRUE,
branch.lty=3,
split.box.col="whitesmoke", split.border.col="darkgray", split.round=0.4)
# Confusion Matrix given a cutoff
threshold = 0.8
cm <- table(titanic$Survived,
predict(model.cart, titanic[,-5], type="prob")[,2] > threshold)
print(cm)
```
# For fun, let's make a Caffeine molecule
This notebook also demonstrates importing an extra library, `igraph`.
The Docker container sets this up, so the student does not need to install anything
(grep the container setup for igraph).
We'll use an adjacency matrix to describe the network topology of Caffeine, and create the graph
using `graph.adjacency(<the-adjacency-matrix>)` to demonstrate some standard selection and
plotting functions using R's `igraph` library. The chemical formula below demonstrates use of inline LaTeX math markup, and the image inline image placement.
$$C_8H_{10}N_4O_2$$
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Koffein_-_Caffeine.svg/220px-Koffein_-_Caffeine.svg.png" alt="Caffeine molecule"></img>
```
library(igraph)
caffeine.adjacency <- as.matrix(read.table("caffeine.txt", sep=" "))
caffeine <- graph.adjacency(caffeine.adjacency, mode='undirected')
V(caffeine)$name <- strsplit('CHHHNCOCNCHHHCHNCNCHHHCO', '')[[1]]
V(caffeine)$color <- rgb(1, 1, 1)
V(caffeine)[name == 'C']$color <- rgb(0, 0, 0, 0.7)
V(caffeine)[name == 'O']$color <- rgb(1, 0, 0, 0.7)
V(caffeine)[name == 'N']$color <- rgb(0, 0, 1, 0.7)
plot(caffeine)
options(repr.plot.height=5, repr.plot.width=5)
```
```
# Install RAPIDS (takes ~10 min).
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!bash rapidsai-csp-utils/colab/rapids-colab.sh 0.18
import sys, os
dist_package_index = sys.path.index('/usr/local/lib/python3.7/dist-packages')
sys.path = sys.path[:dist_package_index] + ['/usr/local/lib/python3.7/site-packages'] + sys.path[dist_package_index:]
sys.path
exec(open('rapidsai-csp-utils/colab/update_modules.py').read(), globals())
# https://github.com/NVIDIA/NVTabular/blob/main/examples/winning-solution-recsys2020-twitter/01-02-04-Download-Convert-ETL-with-NVTabular-Training-with-XGBoost.ipynb
# Needed to fix conda and install nvtabular.
!conda install https://repo.anaconda.com/pkgs/main/linux-64/conda-4.9.2-py37h06a4308_0.tar.bz2
!pip install git+https://github.com/NVIDIA/NVTabular.git@main
# For rapidsai 0.19 ONLY, not working.
"""
!sudo add-apt-repository ppa:ubuntu-toolchain-r/test
!sudo apt-get update
!sudo apt-get install gcc-4.9
!sudo apt-get upgrade libstdc++6
!sudo apt-get dist-upgrade
!strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX"""
# External Dependencies
import time
import glob
import gc
import cupy as cp # CuPy is an implementation of NumPy-compatible multi-dimensional array on GPU
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
import rmm # library for pre-allocating memory on GPU
import dask # dask is an open-source library to natively scale Python on multiple workers/nodes
import dask_cudf # dask_cudf uses dask to scale cuDF dataframes on multiple workers/nodes
import numpy as np
# NVTabular is the core library, we will use here for feature engineering/preprocessing on GPU
import nvtabular as nvt
import xgboost as xgb
# More dask / dask_cluster related libraries to scale NVTabular
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.distributed import wait
from dask.utils import parse_bytes
from dask.delayed import delayed
from nvtabular.utils import device_mem_size
from nvtabular.column_group import ColumnGroup
!nvidia-smi
time_total_start = time.time()
# Assume dataset in MyDrive/RecSys2021
from google.colab import drive
drive.mount('/content/drive')
BASE_DIR = '/content/drive/MyDrive/RecSys2021/'
cluster = LocalCUDACluster(
protocol="tcp"
)
client = Client(cluster)
client
# Preparing our dataset
features = [
'text_tokens', ###############
'hashtags', #Tweet Features
'tweet_id', #
'media', #
'links', #
'domains', #
'tweet_type', #
'language', #
'timestamp', ###############
'a_user_id', ###########################
'a_follower_count', #Engaged With User Features
'a_following_count', #
'a_is_verified', #
'a_account_creation', ###########################
'b_user_id', #######################
'b_follower_count', #Engaging User Features
'b_following_count', #
'b_is_verified', #
'b_account_creation', #######################
'b_follows_a', #################### Engagement Features
'reply', #Target Reply
'retweet', #Target Retweet
'retweet_comment',#Target Retweet with comment
'like', #Target Like
####################
]
# Splits the entries in media by \t and keeps only the first two values (if available).
def splitmedia(col):
    if col.shape[0] == 0:
        return col
    # Join the first two media tokens with '_'; a missing second token becomes an empty string.
    split = col.str.split('\t', expand=True)
    return split[0].fillna('') + '_' + split[1].fillna('')

# Counts the number of tokens in a column (e.g. how many hashtags are in a tweet).
def count_token(col, token):
    not_null = col.isnull()==0
    return ((col.str.count(token)+1)*not_null).fillna(0)
# >> is an overloaded operator, it transforms columns in other columns applying functions to them.
count_features = (
nvt.ColumnGroup(['hashtags', 'domains', 'links']) >> (lambda col: count_token(col,'\t')) >> nvt.ops.Rename(postfix = '_count_t')
)
split_media = nvt.ColumnGroup(['media']) >> (lambda col: splitmedia(col))
# Encode categorical columns as a small, continuous integer to save memory.
# Before we can apply Categorify, we need to fill na/missing values in the columns hashtags, domains and links.
multihot_filled = ['hashtags', 'domains', 'links'] >> nvt.ops.FillMissing()
cat_features = (
split_media + multihot_filled + ['language', 'tweet_type', 'tweet_id', 'a_user_id', 'b_user_id'] >>
nvt.ops.Categorify()
)
label_name = ['reply', 'retweet', 'retweet_comment', 'like']
label_name_feature = label_name >> nvt.ops.FillMissing()
weekday = (
nvt.ColumnGroup(['timestamp']) >>
(lambda col: cudf.to_datetime(col, unit='s').dt.weekday) >>
nvt.ops.Rename(postfix = '_wd')
)
output = count_features+cat_features+label_name_feature+weekday
(output).graph
remaining_columns = [x for x in features if x not in (output.columns+['text_tokens'])]
remaining_columns
proc = nvt.Workflow(output+remaining_columns)
data_parts = []
for file in os.listdir(BASE_DIR):
    if file.endswith(".tsv"):
        data_parts.append(os.path.join(BASE_DIR, file))
trains_itrs = nvt.Dataset(data_parts,
header=None,
names=features,
engine='csv',
sep='\x01',
part_size='2GB')
client = Client(cluster) # Sample client connecting to `cluster` object
# client.run(cudf.set_allocator, "managed") # Uses managed memory instead of "default"
import torch, gc
gc.collect()
torch.cuda.empty_cache()
time_preproc_start = time.time()
proc.fit(trains_itrs)
time_preproc = time.time()-time_preproc_start
time_preproc
# We define the output datatypes for continuous columns to save memory. We can define the output datatypes as a dict and parse it to the to_parquet function.
dict_dtypes = {}
for col in label_name + ['media', 'language', 'tweet_type', 'tweet_id',
'a_user_id', 'b_user_id', 'hashtags', 'domains',
'links', 'timestamp', 'a_follower_count',
'a_following_count', 'a_account_creation',
'b_follower_count', 'b_following_count', 'b_account_creation']:
dict_dtypes[col] = np.uint32
time_preproc_start = time.time()
proc.transform(trains_itrs).to_parquet(output_path=BASE_DIR + 'preprocess/', dtypes=dict_dtypes)
time_preproc += time.time()-time_preproc_start
time_preproc
# Splitting dataset
# We split the training data by time into a train and validation set. The first 5 days are train and the last 2 days are for validation. We use the weekday for it.
# The first day of the dataset is a Thursday (weekday id = 3) and the last day is Wednesday (weekday id = 2) (Not sure for this year).
time_split_start = time.time()
import pandas as pd
df = dask_cudf.read_parquet(BASE_DIR + 'preprocess/*.parquet')
if 'text_tokens' in list(df.columns):
    df = df.drop('text_tokens', axis=1)
VALID_DOW = [1, 2]
# pd.set_option('display.max_rows', 100)
# df.head(100)
valid = df[df['timestamp_wd'].isin(VALID_DOW)].reset_index(drop=True)
train = df[~df['timestamp_wd'].isin(VALID_DOW)].reset_index(drop=True)
train = train.sort_values(["b_user_id", "timestamp"]).reset_index(drop=True)
valid = valid.sort_values(["b_user_id", "timestamp"]).reset_index(drop=True)
train.to_parquet(BASE_DIR + 'nv_train/')
valid.to_parquet(BASE_DIR + 'nv_valid/')
time_split = time.time()-time_split_start
time_split
del train; del valid
gc.collect()
# Feature Engineering
# We count encode the columns media, tweet_type, language, a_user_id, b_user_id.
# For counting encoding info see https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/03_4_CountEncoding.ipynb
count_encode = (
['media', 'tweet_type', 'language', 'a_user_id', 'b_user_id'] >>
nvt.ops.JoinGroupby(cont_cols=['reply'],stats=["count"], out_path='./')
)
# We transform timestamp to datetime type and extract hours, minutes, seconds.
datetime = nvt.ColumnGroup(['timestamp']) >> (lambda col: cudf.to_datetime(col.astype('int32'), unit='s'))
hour = datetime >> (lambda col: col.dt.hour) >> nvt.ops.Rename(postfix = '_hour')
minute = datetime >> (lambda col: col.dt.minute) >> nvt.ops.Rename(postfix = '_minute')
seconds = datetime >> (lambda col: col.dt.second) >> nvt.ops.Rename(postfix = '_second')
# We difference encode b_follower_count, b_following_count, language grouped by b_user_id. First, we need to transform the datatype to float32 to prevent overflow/underflow.
# After DifferenceEncoding, we want to fill NaN values with 0.
# For difference encoding info see https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/05_2_TimeSeries_Differences.ipynb
diff_lag = (
nvt.ColumnGroup(['b_follower_count','b_following_count','language']) >>
(lambda col: col.astype('float32')) >>
nvt.ops.DifferenceLag(partition_cols=['b_user_id'], shift = [1, -1]) >>
nvt.ops.FillMissing(fill_val=0)
)
# Transform targets in binary labels.
LABEL_COLUMNS = ['reply', 'retweet', 'retweet_comment', 'like']
labels = nvt.ColumnGroup(LABEL_COLUMNS) >> (lambda col: (col>0).astype('int8'))
# We apply TargetEncoding with kfold of 5 and smoothing of 20.
# For target encoding info see https://medium.com/rapids-ai/target-encoding-with-rapids-cuml-do-more-with-your-categorical-data-8c762c79e784
# and https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/03_3_TargetEncoding.ipynb
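# Illustrative smoothing scheme (our assumption of the general idea, not NVTabular's exact internals):
#   TE(category) = (target_sum_in_category + p_smooth * global_mean) / (category_count + p_smooth)
# so rare categories are pulled towards the global mean. With kfold=5, each row's encoding is
# computed from the other four folds, which limits target leakage into the training features.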
target_encode = (
['media', 'tweet_type', 'language', 'a_user_id', 'b_user_id',
['domains','language','b_follows_a','tweet_type','media','a_is_verified']] >>
nvt.ops.TargetEncoding(
labels,
kfold=5,
p_smooth=20,
out_dtype="float32",
)
)
output = count_encode+hour+minute+seconds+diff_lag+labels+target_encode
(output).graph
# We want to keep all columns of the input dataset. Therefore, we extract all column names from the first input parquet file.
df_tmp = cudf.read_parquet(BASE_DIR + '/nv_train/part.0.parquet')
all_input_columns = df_tmp.columns
del df_tmp
gc.collect()
remaining_columns = [x for x in all_input_columns if x not in (output.columns+['text_tokens'])]
remaining_columns
# We initialize our NVTabular workflow and add the "remaining" columns to it.
proc = nvt.Workflow(output+remaining_columns)
# We initialize the train and valid as NVTabular datasets.
train_dataset = nvt.Dataset(glob.glob(BASE_DIR + 'nv_train/*.parquet'),
engine='parquet',
part_size="2GB")
valid_dataset = nvt.Dataset(glob.glob(BASE_DIR + 'nv_valid/*.parquet'),
engine='parquet',
part_size="2GB")
time_fe_start = time.time()
proc.fit(train_dataset)
time_fe = time.time()-time_fe_start
time_fe
# The columns a_is_verified, b_is_verified and b_follows_a have the datatype boolean.
# XGBoost does not support boolean datatypes and we need convert them to int8. We can define the output datatypes as a dict and parse it to the .to_parquet function.
dict_dtypes = {}
for col in ['a_is_verified', 'b_is_verified', 'b_follows_a']:
    dict_dtypes[col] = np.int8
# We apply the transformation to the train and valid datasets.
time_fe_start = time.time()
proc.transform(train_dataset).to_parquet(output_path=BASE_DIR + 'nv_train_fe/', dtypes=dict_dtypes)
proc.transform(valid_dataset).to_parquet(output_path=BASE_DIR + 'nv_valid_fe/', dtypes=dict_dtypes)
time_fe += time.time()-time_fe_start
time_fe
# Training
train = dask_cudf.read_parquet(BASE_DIR + 'nv_train_fe/*.parquet')
valid = dask_cudf.read_parquet(BASE_DIR + 'nv_valid_fe/*.parquet')
train[['a_is_verified','b_is_verified','b_follows_a']].dtypes
# Some columns are only used for feature engineering. Therefore, we define the columns we want to ignore for training.
dont_use =[
'__null_dask_index__',
'text_tokens',
'timestamp',
'a_account_creation',
'b_account_creation',
'hashtags',
'tweet_id',
'links',
'domains',
'a_user_id',
'b_user_id',
'timestamp_wd',
'timestamp_to_datetime',
'a_following_count_a_ff_rate',
'b_following_count_b_ff_rate'
]
dont_use = [x for x in train.columns if x in dont_use]
label_names = ['reply', 'retweet', 'retweet_comment', 'like']
# Nvidia experiments show that we require only 10% of the training dataset. Our feature engineering, such as TargetEncoding,
# uses the training datasets and leverage the information of the full dataset.
# In the competition, Nvidia trained the models with higher ratio (20% and 50%), but could not observe an improvement in performance.
# Since I'm using only a small part of the dataset, I will use all of it.
SAMPLE_RATIO = 0.999 # 0.1
SEED = 1
if SAMPLE_RATIO < 1.0:
    train['sample'] = train['tweet_id'].map_partitions(lambda cudf_df: cudf_df.hash_encode(stop=10))
    print(len(train))
    train = train[train['sample']<10*SAMPLE_RATIO]
    train, = dask.persist(train)
    print(len(train))
Y_train = train[label_names]
Y_train, = dask.persist(Y_train)
train = train.drop(['sample']+label_names+dont_use,axis=1)
train, = dask.persist(train)
print('Using %i features'%(train.shape[1]))
# Similar to the training dataset, Nvidia experiments show that 35% of our validation dataset is enough to get a good estimate of the performance metric.
# 35% of the validation dataset has a similar size as the test set of the RecSys2020 competition.
SAMPLE_RATIO = 0.999 # 0.35
SEED = 1
if SAMPLE_RATIO < 1.0:
    print(len(valid))
    valid['sample'] = valid['tweet_id'].map_partitions(lambda cudf_df: cudf_df.hash_encode(stop=10))
    valid = valid[valid['sample']<10*SAMPLE_RATIO]
    valid, = dask.persist(valid)
    print(len(valid))
Y_valid = valid[label_names]
Y_valid, = dask.persist(Y_valid)
valid = valid.drop(['sample']+label_names+dont_use,axis=1)
valid, = dask.persist(valid)
# We initialize our XGBoost parameter.
print('XGB Version',xgb.__version__)
xgb_parms = {
'max_depth':8,
'learning_rate':0.1,
'subsample':0.8,
'colsample_bytree':0.3,
'eval_metric':'logloss',
'objective':'binary:logistic',
'tree_method':'gpu_hist',
'predictor' : 'gpu_predictor'
}
train,valid = dask.persist(train,valid)
# We train our XGBoost models. The challenge requires predicting 4 targets: does a user
# - like a tweet
# - reply to a tweet
# - retweet a tweet
# - retweet a tweet with a comment
# We train 4 XGBoost models for 300 rounds each on a GPU.
time_train_start = time.time()
NROUND = 300
VERBOSE_EVAL = 50
preds = []
for i in range(4):
name = label_names[i]
print('#'*25);print('###',name);print('#'*25)
start = time.time(); print('Creating DMatrix...')
dtrain = xgb.dask.DaskDMatrix(client,data=train,label=Y_train.iloc[:, i])
print('Took %.1f seconds'%(time.time()-start))
start = time.time(); print('Training...')
model = xgb.dask.train(client, xgb_parms,
dtrain=dtrain,
num_boost_round=NROUND,
verbose_eval=VERBOSE_EVAL)
print('Took %.1f seconds'%(time.time()-start))
start = time.time(); print('Predicting...')
preds.append(xgb.dask.predict(client,model,valid))
print('Took %.1f seconds'%(time.time()-start))
del model, dtrain
time_train = time.time()-time_train_start
time_train
yvalid = Y_valid[label_names].values.compute()
oof = cp.array([i.values.compute() for i in preds]).T
yvalid.shape
# The hosts of the RecSys2020 competition provide code for calculating the performance metric PRAUC and RCE.
# Nvidia optimized the code to speed up the calculation, as well. Using cuDF / cupy, we can calculate the performance metric on the GPU.
from sklearn.metrics import auc
def precision_recall_curve(y_true,y_pred):
y_true = y_true.astype('float32')
ids = cp.argsort(-y_pred)
y_true = y_true[ids]
y_pred = y_pred[ids]
y_pred = cp.flip(y_pred,axis=0)
acc_one = cp.cumsum(y_true)
sum_one = cp.sum(y_true)
precision = cp.flip(acc_one/cp.cumsum(cp.ones(len(y_true))),axis=0)
precision[:-1] = precision[1:]
precision[-1] = 1.
recall = cp.flip(acc_one/sum_one,axis=0)
recall[:-1] = recall[1:]
recall[-1] = 0
n = (recall==1).sum()
return precision[n-1:],recall[n-1:],y_pred[n:]
def compute_prauc(pred, gt):
prec, recall, thresh = precision_recall_curve(gt, pred)
recall, prec = cp.asnumpy(recall), cp.asnumpy(prec)
prauc = auc(recall, prec)
return prauc
def log_loss(y_true,y_pred,eps=1e-7, normalize=True, sample_weight=None):
y_true = y_true.astype('int32')
y_pred = cp.clip(y_pred, eps, 1 - eps)
if y_pred.ndim == 1:
y_pred = cp.expand_dims(y_pred, axis=1)
if y_pred.shape[1] == 1:
y_pred = cp.hstack([1 - y_pred, y_pred])
y_pred /= cp.sum(y_pred, axis=1, keepdims=True)
loss = -cp.log(y_pred)[cp.arange(y_pred.shape[0]), y_true]
return _weighted_sum(loss, sample_weight, normalize).item()
def _weighted_sum(sample_score, sample_weight, normalize):
if normalize:
return cp.average(sample_score, weights=sample_weight)
elif sample_weight is not None:
return cp.dot(sample_score, sample_weight)
else:
return sample_score.sum()
def compute_rce_fast(pred, gt):
cross_entropy = log_loss(gt, pred)
yt = cp.mean(gt).item()
# cross_entropy and yt are single numbers (no arrays) and using CPU is fast.
strawman_cross_entropy = -(yt*np.log(yt) + (1 - yt)*np.log(1 - yt))
return (1.0 - cross_entropy/strawman_cross_entropy)*100.0
# Finally, we calculate the performance metrics PRAUC and RCE for each target.
txt = ''
for i in range(4):
prauc = compute_prauc(oof[:,i], yvalid[:, i])
rce = compute_rce_fast(oof[:,i], yvalid[:, i]).item()
txt_ = f"{label_names[i]:20} PRAUC:{prauc:.5f} RCE:{rce:.5f}"
print(txt_)
txt += txt_ + '\n'
# Performance metrics for RecSys Challenge 2021.
from sklearn.metrics import average_precision_score, log_loss
def calculate_ctr(gt):
positive = len([x for x in gt if x == 1])
ctr = positive/float(len(gt))
return ctr
def compute_rce(pred, gt):
cross_entropy = log_loss(gt, pred)
data_ctr = calculate_ctr(gt)
strawman_cross_entropy = log_loss(gt, [data_ctr for _ in range(len(gt))])
return (1.0 - cross_entropy/strawman_cross_entropy)*100.0
# ground_truth = read_predictions("gt.csv") # will return data in the form (tweet_id, user_id, label (1 or 0))
# predictions = read_predictions("predictions.csv") # will return data in the form (tweet_id, user_id, prediction)
# Finally, we calculate the performance metrics AP and RCE for each target.
txt = ''
for i in range(4):
ap = average_precision_score(yvalid[:, i].get(), oof[:,i].get())
rce = compute_rce(oof[:,i].get(), yvalid[:, i].get())
txt_ = f"{label_names[i]:20} AP:{ap:.5f} RCE:{rce:.5f}"
print(txt_)
txt += txt_ + '\n'
# Timings
time_total = time.time()-time_total_start
print('Total time: {:.2f}s'.format(time_total))
print()
print('1. Preprocessing: {:.2f}s'.format(time_preproc))
print('2. Splitting: {:.2f}s'.format(time_split))
print('3. Feature engineering: {:.2f}s'.format(time_fe))
print('4. Training: {:.2f}s'.format(time_train))
```
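As a quick sanity check of the RCE (relative cross entropy) metric defined above, here is a dependency-free sketch: a predictor that always outputs the positive rate (CTR) of the data scores exactly 0, and anything better than that strawman scores positive.

```python
import math

# Plain-Python sketch of the RCE metric used above: RCE compares a model's
# cross entropy against a "strawman" that always predicts the positive rate.
def binary_log_loss(gt, pred, eps=1e-15):
    # mean binary cross entropy
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(gt, pred)) / len(gt)

def compute_rce(pred, gt):
    ctr = sum(gt) / len(gt)
    strawman = binary_log_loss(gt, [ctr] * len(gt))
    return (1.0 - binary_log_loss(gt, pred) / strawman) * 100.0

gt = [1, 0, 1, 0, 1]
ctr = sum(gt) / len(gt)                            # 0.6
print(round(compute_rce([ctr] * len(gt), gt), 6))  # 0.0 — no lift over the strawman
print(compute_rce([0.9, 0.1, 0.9, 0.1, 0.9], gt))  # ≈ 84.3 — clearly better than the strawman
```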
# Network Visualization
This notebook demonstrates how to view **MASSpy** models on network maps using the [Escher](https://escher.github.io/#/) visualization tool <cite data-cite="KDragerE+15">(King et al., 2015)</cite>.
The **Escher** package must already be installed into the environment. To install **Escher**:
```python
pip install escher
```
## Viewing Models with Escher
The **MASSpy** package also comes with some maps for testing purposes.
```
from os.path import join
import numpy as np
import mass
import mass.test
# Load the glycolysis and hemoglobin models, then merge them
glycolysis = mass.test.create_test_model("Glycolysis")
hemoglobin = mass.test.create_test_model("Hemoglobin")
model = glycolysis.merge(hemoglobin, inplace=False)
# Set the path to the map file
map_filepath = join(mass.test.MAPS_DIR, "RBC.glycolysis.map.json")
# To view the list of available maps, remove the semicolon
mass.test.view_test_maps();
```
The primary object for viewing **Escher** maps is the `escher.Builder`, a Jupyter widget that can be viewed in a Jupyter notebook.
```
import escher
from escher import Builder
# Turns off the warning message when leaving or refreshing this page.
# The default setting is False to help avoid losing work.
escher.rc['never_ask_before_quit'] = True
```
To load an existing map, the path to the JSON file of the **Escher** map is provided to the `map_json` argument of the `Builder`. The `MassModel` can be loaded using the `model` argument.
```
escher_builder = Builder(
model=model,
map_json=map_filepath)
escher_builder
```
## Mapping Data onto Escher
### Viewing Reaction Data
Reaction data can be displayed on the **Escher** map using a dictionary that contains reaction identifiers, and values to map onto reaction arrows. The `dict` can be provided to the `reaction_data` argument upon initialization of the builder.
For example, to display the steady state fluxes on the map:
```
initial_flux_data = {
reaction.id: flux
for reaction, flux in model.steady_state_fluxes.items()}
# New instance to prevent modifications to the existing maps
escher_builder = Builder(
model=model,
map_json=map_filepath,
reaction_data=initial_flux_data)
# Display map in notebook
escher_builder
```
The color and size of the data scale can be altered by providing a tuple of at least two dictionaries. Each dictionary is considered a "stop" that defines the color and size at or near that particular value in the data set. The `type` key defines the type for the stop, the `color` key defines the color of the arrow, and the `size` key defines the thickness of the arrow.
```
# New instance to prevent modifications to the existing maps
escher_builder = Builder(
model=model,
map_json=map_filepath,
reaction_data=initial_flux_data,
reaction_scale=(
{"type": 'min', "color": 'green', "size": 5 },
{"type": 'value', "value": 1.12, "color": 'purple', "size": 10},
{"type": 'max', "color": 'blue', "size": 15 }),
)
# Display map in notebook
escher_builder
```
### Viewing Metabolite Data
Metabolite data also can be displayed on an **Escher** map by using a dictionary containing metabolite identifiers, and values to map onto metabolite nodes. In addition to setting the attributes to apply upon initializing the builder, the attributes also can be set for a map after initialization.
For example, to display metabolite concentrations on the map:
```
initial_conc_data = {
metabolite.id: round(conc, 8)
for metabolite, conc in model.initial_conditions.items()}
# New instance to prevent modifications to the existing maps
escher_builder = Builder(
model=model,
map_json=map_filepath,
metabolite_data=initial_conc_data)
# Display map in notebook
escher_builder
```
The secondary metabolites can be removed by setting `hide_secondary_metabolites` as `True` to provide a cleaner visualization of the primary metabolites in the network.
```
escher_builder.hide_secondary_metabolites = True
```
Note that changes made affect the already displayed map. Here, a preset scale is applied to the metabolite concentrations.
```
escher_builder.metabolite_scale_preset = "RdYlBu"
```
### Visualizing SBML models with Escher in Python
Suppose that we would like to visualize our SBML model on a network map as follows:
1. We would like to create this map with the **Escher** web-based API.
2. We would like to view the model on the network map within a Jupyter notebook using the **Escher** Python-based API.
3. We would like to display the value of forward rate constants for each reaction on the network map.
The JSON format is the preferred format for **Escher** to load models onto network maps ([read more here](https://escher.readthedocs.io/en/latest/escher_and_cobrapy.html#what-is-json-and-why-do-we-use-it)). Therefore, we must convert models between SBML and JSON formats to achieve our goal.
**Note:** The models and maps used in the following example are also available in the testing data.
```
import mass.io
```
Fortunately, the [mass.io](../autoapi/mass/io/index.rst) submodule is capable of exporting such models.
First, the SBML model is loaded using the ``mass.io.sbml`` submodule. The model is then exported to JSON format using the ``mass.io.json`` submodule for use in the [Escher web-based API](https://escher.github.io/#/).
```
# Define path to SBML model
path_to_sbml_model = join(mass.test.MODELS_DIR, "Simple_Toy.xml")
# Load SBML model
model = mass.io.sbml.read_sbml_model(path_to_sbml_model)
# Export as JSON
path_to_json_model = "./Simple_Toy.json"
mass.io.json.save_json_model(model, filename=path_to_json_model)
```
Suppose that we have now created our map using the **Escher** web-based API and saved it as the file "simple_toy_map.json". To display the map with the model:
```
# Define path to Escher map
path_to_map = join(mass.test.MAPS_DIR, "simple_toy_map.json")
escher_builder = Builder(
model_json=path_to_json_model,
map_json=path_to_map)
escher_builder
```
Finally the forward rate constant data from the ``MassModel`` object is added to the map:
```
escher_builder.reaction_data = dict(zip(
model.reactions.list_attr("id"),
model.reactions.list_attr("forward_rate_constant")
))
```
## Additional Examples
For additional information and examples on how to visualize networks and **MASSpy** models using **Escher**, see the following:
* [Animating Simulations with Escher](../gallery/visualization/animating_simulations.ipynb)
========================================
__Contents__
* Search usage
1. Import module & Load data
2. Defining parameter search space
3. Defining feature search space (optional)
4. Run search
* Log usage
1. Extract parameter & feature setting
2. Make meta feature for stacking
* Sample: run all backend search
========================================
# Search usage
## 1. Import module & Load data
Here, the breast cancer Wisconsin dataset is used for modeling.
This is a binary classification dataset.
First, this dataset is split into two datasets (train and test).
```
import os ,sys
import numpy as np, pandas as pd, scipy as sp
from sklearn import datasets
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from cvopt.model_selection import SimpleoptCV
from cvopt.search_setting import search_category, search_numeric
dataset = datasets.load_breast_cancer()
Xtrain, Xtest, ytrain, ytest = train_test_split(dataset.data, dataset.target, test_size=0.3, random_state=0)
print("Train features shape:", Xtrain.shape)
print("Test features shape:", Xtest.shape)
from bokeh.io import output_notebook
output_notebook() # When you need search visualization, need run output_notebook()
```
## 2. Defining parameter search space
A common style can be used across all CV classes.
```
param_distributions = {
"penalty": search_category(['none', 'l2']),
"C": search_numeric(0.01, 3.0, "float"),
"tol" : search_numeric(0.0001, 0.001, "float"),
"class_weight" : search_category([None, "balanced", {0:0.5, 1:0.1}]),
}
```
### 2.A Other styles
Other styles can be used, depending on the base module.
### for HyperoptCV (base module: Hyperopt)
```python
param_distributions = {
"penalty": hp.choice("penalty", ['none', 'l2']),
"C": hp.loguniform("C", 0.01, 3.0),
"tol" : hp.loguniform("tol", 0.0001, 0.001),
"class_weight" : hp.choice("class_weight", [None, "balanced", {0:0.5, 1:0.1}]),
}
```
### for BayesoptCV (base module: GpyOpt)
__NOTE:__
* In GpyOpt, the search space is a list of dicts, but in cvopt it must be a dict of dicts (key: parameter name, value: dict).
* If `type` is `categorical`, the search space dict must have a `categories` key whose value is the list of category names.
```python
param_distributions = {
"penalty" : {"name": "penalty", "type":"categorical", "domain":(0,1), "categories":['none', 'l2']},
"C": {"name": "C", "type":"continuous", "domain":(0.01, 3.0)},
"tol" : {"name": "tol", "type":"continuous", "domain":(0.0001, 0.001)},
"class_weight" : {"name": "class_weight", "type":"categorical", "domain":(0,1), "categories":[None, "balanced", {0:0.5, 1:0.1}]},
}
```
### for GAoptCV, RandomoptCV
__NOTE:__
* Supports `search_setting.search_numeric`, `search_setting.search_category`, and `scipy.stats` classes.
```python
param_distributions = {
    "penalty" : search_category(['none', 'l2']),
    "C": sp.stats.uniform(loc=0.01, scale=2.99),
    "tol" : sp.stats.uniform(loc=0.0001, scale=0.0009),
    "class_weight" : search_category([None, "balanced", {0:0.5, 1:0.1}]),
}
```
## 3. Defining feature search space (optional)
Features are selected per `feature_group`.
__If `feature_group` is set to "-1", that group's features are always used.__
Groups can be separated by any criterion, for example randomly, by feature engineering method, or by data source.
When you don't set `feature_group`, the optimizer uses all input features.
------------------------------------
### Example.
When the data has 5 features (5 columns) and `feature_group` is set as shown below.
| feature index(data col index) | feature group |
|:------------:|:------------:|
| 0 | 0 |
| 1 | 0 |
| 2 | 0 |
| 3 | 1 |
| 4 | 1 |
Define it as the following Python list.
```
feature_groups = [0, 0, 0, 1, 1]
```
As a search result, you may get a flag per `feature_group`.
```
feature_groups0: True
feature_groups1: False
```
This result means that the optimizer recommends using group 0's features (column indices 0, 1, 2) and not using group 1's features (column indices 3, 4).
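The mapping from per-group flags back to per-column selection flags can be sketched as follows (the `group_flags` values are a hypothetical search result, not the output of any cvopt API):

```python
feature_groups = [0, 0, 0, 1, 1]       # group id per data column, as in the table above
group_flags = {0: True, 1: False}      # hypothetical search result

# Expand the per-group decision into a per-column boolean mask
feature_select_flag = [group_flags[g] for g in feature_groups]
print(feature_select_flag)  # [True, True, True, False, False]
```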
------------------------------------
```
feature_groups = np.random.randint(0, 5, Xtrain.shape[1])
```
## 4. Run search
cvopt has an API similar to scikit-learn's cross-validation classes.
If you have used scikit-learn before, you can use cvopt very easily.
For each optimizer class's detail, please see [API reference](https://genfifth.github.io/cvopt/).
```
estimator = LogisticRegression()
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
opt = SimpleoptCV(estimator, param_distributions,
scoring="roc_auc", # Objective of the search
cv=cv, # Cross-validation setting
max_iter=32, # Number of search iterations
n_jobs=3, # Number of jobs to run in parallel
verbose=2, # 0: don't display status, 1: display status via stdout, 2: display status via graph
logdir="./search_usage", # If this path is specified, the log is saved
model_id="search_usage", # Used as the estimator's directory and file name when saving
save_estimator=2, # Estimator save setting
backend="hyperopt", # hyperopt, bayesopt, gaopt or randomopt
)
opt.fit(Xtrain, ytrain, validation_data=(Xtest, ytest),
# validation_data is optional.
# This data is only used to compute the validation score (it is not fitted on).
# When this data is provided and save_estimator > 0, the estimator fitted on the whole Xtrain is saved.
feature_groups=feature_groups,
)
ytest_pred = opt.predict(Xtest)
pd.DataFrame(opt.cv_results_).head() # Search results
```
# Log usage
## 1. Extract parameter & feature setting
cvopt includes helper functions to handle log files easily.
Extracting settings from a log file can be implemented as follows.
```
from cvopt.utils import extract_params
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
estimator_params, feature_params, feature_select_flag = extract_params(logdir="./search_usage",
model_id="search_usage",
target_index=target_index,
feature_groups=feature_groups)
estimator.set_params(**estimator_params) # Set estimator parameters
Xtrain_selected = Xtrain[:, feature_select_flag] # Extract selected feature columns
print(estimator)
print("Train features shape:", Xtrain.shape)
print("Train selected features shape:",Xtrain_selected.shape)
```
## 2. Make meta features for stacking
Making meta features for [stacking](https://mlwave.com/kaggle-ensembling-guide/) can be implemented as follows.
When running the search, you need to set `save_estimator` > 0 to make meta features.
In addition, you need to set `save_estimator` > 1 to make meta features from data that was not fitted.
```
from cvopt.utils import mk_metafeature
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
Xtrain_meta, Xtest_meta = mk_metafeature(Xtrain, ytrain,
logdir="./search_usage",
model_id="search_usage",
target_index=target_index,
cv=cv,
validation_data=(Xtest, ytest),
feature_groups=feature_groups,
estimator_method="predict_proba")
print("Train features shape:", Xtrain.shape)
print("Train meta features shape:", Xtrain_meta.shape)
print("Test features shape:", Xtest.shape)
print("Test meta features shape:", Xtest_meta.shape)
```
# Sample: run all backend search
```
for bk in ["hyperopt", "gaopt", "bayesopt", "randomopt"]:
estimator = LogisticRegression()
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
opt = SimpleoptCV(estimator, param_distributions,
scoring="roc_auc", # Objective of the search
cv=cv, # Cross-validation setting
max_iter=32, # Number of search iterations
n_jobs=3, # Number of jobs to run in parallel
verbose=2, # 0: don't display status, 1: display status via stdout, 2: display status via graph
logdir="./search_usage", # If this path is specified, the log is saved
model_id=bk, # Used as the estimator's directory and file name when saving
save_estimator=2, # Estimator save setting
backend=bk, # hyperopt, bayesopt, gaopt or randomopt
)
opt.fit(Xtrain, ytrain, validation_data=(Xtest, ytest),
# validation_data is optional.
# This data is only used to compute the validation score (it is not fitted on).
# When this data is provided and save_estimator > 0, the estimator fitted on the whole Xtrain is saved.
feature_groups=feature_groups,
)
ytest_pred = opt.predict(Xtest)
from cvopt.utils import extract_params
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
estimator_params, feature_params, feature_select_flag = extract_params(logdir="./search_usage",
model_id=bk,
target_index=target_index,
feature_groups=feature_groups)
estimator.set_params(**estimator_params) # Set estimator parameters
Xtrain_selected = Xtrain[:, feature_select_flag] # Extract selected feature columns
print(estimator)
print("Train features shape:", Xtrain.shape)
print("Train selected features shape:",Xtrain_selected.shape)
from cvopt.utils import mk_metafeature
Xtrain_meta, Xtest_meta = mk_metafeature(Xtrain, ytrain,
logdir="./search_usage",
model_id=bk,
target_index=target_index,
cv=cv,
validation_data=(Xtest, ytest),
feature_groups=feature_groups,
estimator_method="predict_proba")
print("Train features shape:", Xtrain.shape)
print("Train meta features shape:", Xtrain_meta.shape)
print("Test features shape:", Xtest.shape)
print("Test meta features shape:", Xtest_meta.shape)
```
```
import torch
import torch.utils.data
from torch import nn
from torch.nn import functional as F
from ignite.engine import Events, Engine
from ignite.metrics import Accuracy, Loss
import numpy as np
import sklearn.datasets
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
class Model_bilinear(nn.Module):
def __init__(self, features, num_embeddings):
super().__init__()
self.gamma = 0.99
self.sigma = 0.3
embedding_size = 10
self.fc1 = nn.Linear(2, features)
self.fc2 = nn.Linear(features, features)
self.fc3 = nn.Linear(features, features)
self.W = nn.Parameter(torch.normal(torch.zeros(embedding_size, num_embeddings, features), 1))
self.register_buffer('N', torch.ones(num_embeddings) * 20)
self.register_buffer('m', torch.normal(torch.zeros(embedding_size, num_embeddings), 1))
self.m = self.m * self.N.unsqueeze(0)
def embed(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
# i is batch, m is embedding_size, n is num_embeddings (classes)
x = torch.einsum('ij,mnj->imn', x, self.W)
return x
def bilinear(self, z):
embeddings = self.m / self.N.unsqueeze(0)
diff = z - embeddings.unsqueeze(0)
y_pred = (- diff**2).mean(1).div(2 * self.sigma**2).exp()
return y_pred
def forward(self, x):
z = self.embed(x)
y_pred = self.bilinear(z)
return z, y_pred
def update_embeddings(self, x, y):
z = self.embed(x)
# normalizing value per class, assumes y is one_hot encoded
self.N = torch.max(self.gamma * self.N + (1 - self.gamma) * y.sum(0), torch.ones_like(self.N))
# compute sum of embeddings on class by class basis
features_sum = torch.einsum('ijk,ik->jk', z, y)
self.m = self.gamma * self.m + (1 - self.gamma) * features_sum
np.random.seed(0)
torch.manual_seed(0)
l_gradient_penalty = 1.0
# Moons
noise = 0.1
X_train, y_train = sklearn.datasets.make_moons(n_samples=1500, noise=noise)
X_test, y_test = sklearn.datasets.make_moons(n_samples=200, noise=noise)
num_classes = 2
batch_size = 64
model = Model_bilinear(20, num_classes)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
def calc_gradient_penalty(x, y_pred):
gradients = torch.autograd.grad(
outputs=y_pred,
inputs=x,
grad_outputs=torch.ones_like(y_pred),
create_graph=True,
)[0]
gradients = gradients.flatten(start_dim=1)
# L2 norm
grad_norm = gradients.norm(2, dim=1)
# Two sided penalty
gradient_penalty = ((grad_norm - 1) ** 2).mean()
# One sided penalty - down
# gradient_penalty = F.relu(grad_norm - 1).mean()
return gradient_penalty
def output_transform_acc(output):
y_pred, y, x, z = output
y = torch.argmax(y, dim=1)
return y_pred, y
def output_transform_bce(output):
y_pred, y, x, z = output
return y_pred, y
def output_transform_gp(output):
y_pred, y, x, z = output
return x, y_pred
def step(engine, batch):
model.train()
optimizer.zero_grad()
x, y = batch
x.requires_grad_(True)
z, y_pred = model(x)
loss1 = F.binary_cross_entropy(y_pred, y)
loss2 = l_gradient_penalty * calc_gradient_penalty(x, y_pred)
loss = loss1 + loss2
loss.backward()
optimizer.step()
with torch.no_grad():
model.update_embeddings(x, y)
return loss.item()
def eval_step(engine, batch):
model.eval()
x, y = batch
x.requires_grad_(True)
z, y_pred = model(x)
return y_pred, y, x, z
trainer = Engine(step)
evaluator = Engine(eval_step)
metric = Accuracy(output_transform=output_transform_acc)
metric.attach(evaluator, "accuracy")
metric = Loss(F.binary_cross_entropy, output_transform=output_transform_bce)
metric.attach(evaluator, "bce")
metric = Loss(calc_gradient_penalty, output_transform=output_transform_gp)
metric.attach(evaluator, "gp")
ds_train = torch.utils.data.TensorDataset(torch.from_numpy(X_train).float(), F.one_hot(torch.from_numpy(y_train)).float())
dl_train = torch.utils.data.DataLoader(ds_train, batch_size=batch_size, shuffle=True, drop_last=True)
ds_test = torch.utils.data.TensorDataset(torch.from_numpy(X_test).float(), F.one_hot(torch.from_numpy(y_test)).float())
dl_test = torch.utils.data.DataLoader(ds_test, batch_size=200, shuffle=False)
@trainer.on(Events.EPOCH_COMPLETED)
def log_results(trainer):
evaluator.run(dl_test)
metrics = evaluator.state.metrics
print("Test Results - Epoch: {} Acc: {:.4f} BCE: {:.2f} GP {:.2f}"
.format(trainer.state.epoch, metrics['accuracy'], metrics['bce'], metrics['gp']))
trainer.run(dl_train, max_epochs=30)
domain = 3
x_lin = np.linspace(-domain+0.5, domain+0.5, 100)
y_lin = np.linspace(-domain, domain, 100)
xx, yy = np.meshgrid(x_lin, y_lin)
X_grid = np.column_stack([xx.flatten(), yy.flatten()])
X_vis, y_vis = sklearn.datasets.make_moons(n_samples=1000, noise=noise)
mask = y_vis.astype(bool)
with torch.no_grad():
output = model(torch.from_numpy(X_grid).float())[1]
confidence = output.max(1)[0].numpy()
z = confidence.reshape(xx.shape)
plt.figure()
plt.contourf(x_lin, y_lin, z, cmap='cividis')
plt.scatter(X_vis[mask,0], X_vis[mask,1])
plt.scatter(X_vis[~mask,0], X_vis[~mask,1])
```
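The exponential-moving-average bookkeeping in `update_embeddings` above can be checked in isolation. This is a scalar, plain-Python sketch (no torch) of the same update rule; the class centroid used by the RBF-style kernel is the ratio `m / N`:

```python
gamma = 0.99  # same smoothing factor as Model_bilinear.gamma

def ema_update(N, m, batch_count, batch_feature_sum):
    # N tracks a smoothed per-class sample count (floored at 1),
    # m a smoothed per-class feature sum; the class centroid is m / N.
    N = max(gamma * N + (1 - gamma) * batch_count, 1.0)
    m = gamma * m + (1 - gamma) * batch_feature_sum
    return N, m

N, m = 20.0, 0.0  # initial values mirror the registered buffers above
for _ in range(1000):
    # each batch contributes 10 samples whose features sum to 50 (mean 5.0)
    N, m = ema_update(N, m, 10.0, 50.0)
print(round(m / N, 3))  # the centroid converges to the batch mean, 5.0
```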
```
## By Saina Srivastava
## Based on
# - https://towardsdatascience.com/machine-learning-part-19-time-series-and-autoregressive-integrated-moving-average-model-arima-c1005347b0d7
# - https://machinelearningmastery.com/arima-for-time-series-forecasting-with-python/
# - https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMA.html?highlight=arima#statsmodels.tsa.arima.model.ARIMA
import numpy as np
import pandas as pd
import datetime
from matplotlib import pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima_model import ARIMA
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
file_name = 'data/shampoo.csv'
df = pd.read_csv(file_name, parse_dates = ['Month'], index_col = ['Month'])
df.head()
# Reading the file and plotting the basics
plt.xlabel('Date')
plt.ylabel('Sales')
plt.xticks(rotation=90)
plt.plot(df)
# Panda provides basic time operations
window_size = 3 # Size of window
rolling_mean = df.rolling(window = window_size).mean()
rolling_std = df.rolling(window = window_size).std() # Used for standard deviation
plt.plot(df, color = 'blue', label = 'Original')
plt.plot(rolling_mean, color = 'red', label = 'Rolling Mean')
plt.plot(rolling_std, color = 'black', label = 'Rolling Std')
plt.legend(loc = 'best')
plt.title('Rolling Mean & Rolling Standard Deviation for Shampoo Sales')
plt.xticks(rotation=90)
#for i, t in enumerate(plt.xticklabels()):
# if (i % 5) != 0:
# t.set_visible(False)
plt.show()
# Showing on a log scale
df_log = np.log(df)
plt.xticks(rotation=90)
plt.plot(df_log)
# Checking whether a time series is stationary or not
# Stationary: the mean of the series should not grow over time,
# the variance of the series should not spread over time, and the
# covariance of the i-th and (i + m)-th terms should not be a function of time
def get_stationarity(timeseries):
# rolling statistics
rolling_mean = timeseries.rolling(window=12).mean()
rolling_std = timeseries.rolling(window=12).std()
# rolling statistics plot
original = plt.plot(timeseries, color='blue', label='Original')
mean = plt.plot(rolling_mean, color='red', label='Rolling Mean')
std = plt.plot(rolling_std, color='black', label='Rolling Std')
plt.xticks(rotation=90)
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
# Dickey–Fuller test:
result = adfuller(timeseries['Sales'])
print('ADF Statistic: {}'.format(result[0]))
print('p-value: {}'.format(result[1]))
print('Critical Values:')
for key, value in result[4].items():
print('\t{}: {}'.format(key, value))
# To see how it behaves over time
rolling_mean = df_log.rolling(window=12).mean()
df_log_minus_mean = df_log - rolling_mean
df_log_minus_mean.dropna(inplace=True)
get_stationarity(df_log_minus_mean)
# Exponentially weighted smoothing, used to turn the time series into a stationary one
rolling_mean_exp_decay = df_log.ewm(halflife=12, min_periods=0, adjust=True).mean()
df_log_exp_decay = df_log - rolling_mean_exp_decay
df_log_exp_decay.dropna(inplace=True)
get_stationarity(df_log_exp_decay)
# Another method to turn a non-stationary time series into a stationary one:
# differencing (subtracting the shifted series)
df_log_shift = df_log - df_log.shift()
df_log_shift.dropna(inplace=True)
get_stationarity(df_log_shift)
## Fit model using ARIMA
# p: The number of lag observations included in the model, also called the lag order.
# d: The number of times that the raw observations are differenced, also called the degree of differencing.
# q: The size of the moving average window, also called the order of moving average.
model = ARIMA(df_log, order=(5,1,0))
model_fit = model.fit()
# summary of fit model
print(model_fit.summary())
# Building ARIMA model
# p: The number of lag observations included in the model, also called the lag order.
# d: The number of times that the raw observations are differenced, also called the degree of differencing.
# q: The size of the moving average window, also called the order of moving average.
model = ARIMA(df_log, order=(0,1,2))
# Fit the model.
# Result form is here: https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMAResults.html
results = model.fit(disp=-1)
# Show
plt.xticks(rotation=90)
plt.plot(df_log_shift)
plt.plot(results.fittedvalues, color='red')
# Plot the in-sample fit together with forecasts of future values
results.plot_predict(1,50)
```
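The differencing step applied above via `df_log - df_log.shift()` (the `d` in ARIMA(p, d, q)) reduces a trending series to its step-to-step changes. A dependency-free sketch:

```python
# First-order differencing: replace each value by its change from the
# previous value. One round of this removes a linear trend.
series = [10.0, 12.0, 15.0, 19.0, 24.0]   # trending upwards
diff = [b - a for a, b in zip(series, series[1:])]
print(diff)  # [2.0, 3.0, 4.0, 5.0]
```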
# Exporting and Archiving
Most of the other user guides show you how to use HoloViews for interactive, exploratory visualization of your data, while the [Applying Customizations](03-Applying_Customizations.ipynb) user guide shows how to use HoloViews completely non-interactively, generating and rendering images directly to disk using `hv.save`. In this notebook, we show how HoloViews works together with the Jupyter Notebook to establish a fully interactive yet *also* fully reproducible scientific or engineering workflow for generating reports or publications. That is, as you interactively explore your data and build visualizations in the notebook, you can automatically generate and export them as figures that will feed directly into your papers or web pages, along with records of how those figures were generated and even storing the actual data involved so that it can be re-analyzed later.
```
import holoviews as hv
from holoviews import opts
from holoviews.operation import contours
hv.extension('matplotlib')
```
## Exporting specific files
During interactive exploration in the Jupyter Notebook, your results are always visible within the notebook itself, but you can explicitly request that any visualization is also exported to an external file on disk:
```
penguins = hv.RGB.load_image('../assets/penguins.png')
hv.save(penguins, 'penguin_plot.png', fmt='png')
penguins
```
This mechanism can be used to provide a clear link between the steps for generating the figure, and the file on disk. You can now load the exported PNG image back into HoloViews, if you like, using ``hv.RGB.load_image``, although the result would be a bit confusing due to the nested axes.
The ``fmt='png'`` part of the ``hv.save`` function call above specified that the file should be saved in PNG format, which is useful for posting on web pages or editing in raster-based graphics programs. Note that `hv.save` also accepts `HoloMap`s which can be saved to formats such as ``'scrubber'``, ``'widgets'`` or even ``'gif'`` or ``'mp4'`` (if the necessary matplotlib dependencies are available).
If the file extension is part of the filename, that will automatically be used to set the format. Conversely, if the format is explicitly specified, then the extension does not have to be part of the filename (and any filename extension that is provided will be ignored). Sometimes the two pieces of information are independent: for instance, a filename ending in `.html` can support either the `'widgets'` or `'scrubber'` formats.
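The precedence just described can be sketched as a tiny helper function (a hypothetical illustration, not the actual `hv.save` internals):

```python
import os

def resolve_format(filename, fmt=None):
    """Mimic the rule described above: an explicit format wins;
    otherwise the filename extension decides. (Illustrative sketch,
    not HoloViews' actual implementation.)"""
    if fmt is not None:
        return fmt  # explicit format overrides any filename extension
    ext = os.path.splitext(filename)[1].lstrip('.')
    return ext or None

assert resolve_format('plot.png') == 'png'             # extension sets the format
assert resolve_format('plot.png', fmt='svg') == 'svg'  # explicit fmt is ignored-extension wins for fmt
```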
For a publication, you will usually want to select SVG format because this vector format preserves the full resolution of all text and drawing elements. SVG files can be be used in some document preparation programs directly (e.g. [LibreOffice](http://www.libreoffice.org/)), and can easily be converted and manipulated in vector graphics editors such as [Inkscape](https://inkscape.org).
## Exporting notebooks
The ``hv.save`` function is useful when you want specific plots saved into specific files. Often, however, a notebook will contain an entire suite of results contained in multiple different cells, and manually specifying these cells and their filenames is error-prone, with a high likelihood of accidentally creating multiple files with the same name or using different names in different notebooks for the same objects.
To make the exporting process easier for large numbers of outputs, as well as more predictable, HoloViews also offers a powerful automatic notebook exporting facility, creating an archive of all your results. Automatic export is very useful in the common case of having a notebook that contains a series of figures to be used in a report or publication, particularly if you are repeatedly re-running the notebook as you finalize your results, and want the full set of current outputs to be available to an external document preparation system.
The advantage of using this archival system over simply converting the notebook to a static HTML file with nbconvert is that you can generate a collection of individual file assets in one or more desired file formats.
To turn on automatic adding of your files to the export archive, run ``hv.archive.auto()``:
```
hv.archive.auto()
```
This object's behavior can be customized extensively; try pressing tab within the parentheses for a list of options, which are described more fully below.
By default, the output will go into a directory with the same name as your notebook, and the names for each object will be generated from the groups and labels used by HoloViews. Objects that contain HoloMaps are not exported by default, since those are usually rendered as animations that are not suitable for inclusion in publications, but you can change it to ``.auto(holomap='gif')`` if you want those as well.
### Adding files to an archive
To see how the auto-exporting works, let's define a few HoloViews objects:
```
penguins[:,:,'R'].relabel("Red") + penguins[:,:,'G'].relabel("Green") + penguins[:,:,'B'].relabel("Blue")
penguins * hv.Arrow(0.15, 0.3, 'Penguin', '>')
cs = contours(penguins[:,:,'R'], levels=[0.10,0.80])
overlay = penguins[:, :, 'R'] * cs
overlay.opts(
    opts.Contours(linewidth=1.3, cmap='Autumn'),
    opts.Image(cmap="gray"))
```
We can now list what has been captured, along with the names that have been generated:
```
hv.archive.contents()
```
Here each object has resulted in two files, one in SVG format and one in Python "pickle" format (which appears as a ``zip`` file with extension ``.hvz`` in the listing). We'll ignore the pickle files for now, focusing on the SVG images.
The name generation code for these files is heavily customizable, but by default it consists of a list of dimension values and objects:
``{dimension},{dimension},...{group}-{label},{group}-{label},...``.
The ``{dimension}`` shows what dimension values are included anywhere in this object, if it contains any high-level ``Dimensioned`` objects like ``HoloMap``, ``NdOverlay``, and ``GridSpace``. Of course, nearly all HoloViews objects have dimensions, such as ``x`` and ``y`` in this case, but those dimensions are not used in the filenames because they are explicitly shown in the plots; only the top-level dimensions are used (those that determine which plot this is, not those that are shown in the plot itself.)
The ``{group}-{label}`` information lists the names HoloViews uses for default titles and for attribute access for the various objects that make up a given displayed object. E.g. the first SVG image in the list is a ``Layout`` of the three given ``Image`` objects, and the second one is an ``Overlay`` of an ``RGB`` object and an ``Arrow`` object. This information usually helps distinguish one plot from another, because they will typically be plots of objects that have different labels.
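The naming scheme sketched above can be illustrated with a small stand-in (hypothetical code, not HoloViews' actual name generator):

```python
def default_name(dimensions, group_labels):
    """Sketch of the "{dimension},...,{group}-{label},..." scheme described
    above. Illustrative only; HoloViews' real generator is more elaborate."""
    parts = list(dimensions)
    parts += ['{0}-{1}'.format(group, label) for group, label in group_labels]
    return ','.join(parts)

# A Layout of labelled Images, with no top-level dimensions:
print(default_name([], [('Image', 'Red'), ('Image', 'Green'), ('Image', 'Blue')]))
# → Image-Red,Image-Green,Image-Blue
```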
If the generated names are not unique, a numerical suffix will be added to make them unique. A maximum filename length is enforced, which can be set with ``hv.archive.max_filename=``_num_.
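The numerical-suffix rule can be sketched like this (illustrative only; the exact suffix format HoloViews uses may differ):

```python
def uniquify(name, existing):
    """Append an increasing numerical suffix until the name is unique
    among the already-used names. (Sketch of the idea only.)"""
    candidate, n = name, 1
    while candidate in existing:
        candidate = '{0}-{1}'.format(name, n)
        n += 1
    return candidate

print(uniquify('Image-Red', {'Image-Red'}))  # → Image-Red-1
```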
If you prefer a fixed-width filename, you can use a hash for each name instead (or in addition), where ``:.8`` specifies how many characters to keep from the hash:
```
hv.archive.filename_formatter="{SHA:.8}"
cs
hv.archive.contents()
```
You can see that the newest files added have the shorter, fixed-width format, though the names are no longer meaningful. If the ``filename_formatter`` had been set from the start, all filenames would have been of this type, which has both practical advantages (short names, all the same length) and disadvantages (no semantic clue about the contents).
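Conceptually, the fixed-width names come from truncating a hash of the object, which we can mimic with the standard library (HoloViews computes its own hash internally; this only illustrates the idea behind ``{SHA:.8}``):

```python
import hashlib

def short_hash(name, length=8):
    # Fixed-width, deterministic name fragment, analogous to "{SHA:.8}":
    # hash the input and keep only the first `length` hex characters.
    return hashlib.sha256(name.encode('utf-8')).hexdigest()[:length]

print(short_hash('Overlay.Contours'))  # always 8 hex characters, same every run
```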
### Generated indexes
In addition to the files that were added to the archive for each of the cell outputs above, the archive exporter will also add an ``index.html`` file with a static copy of the notebook, with each cell labelled with the filename used to save it once `hv.archive.export()` is called (you can verify this for yourself after this call is executed below). This HTML file acts as a definitive index to your results, showing how they were generated and where they were exported on disk.
The exporter will also add a cleared, runnable copy of the notebook ``index.ipynb`` (with output deleted), so that you can later regenerate all of the output, with changes if necessary.
The exported archive will thus be a complete set of your results, along with a record of how they were generated, plus a recipe for regenerating them -- i.e., fully reproducible research! This HTML file and .ipynb file can then be submitted as supplemental materials for a paper, allowing any reader to build on your results, or it can just be kept privately so that future collaborators can start where this research left off.
### Adding your own data to the archive
Of course, your results may depend on a lot of external packages, libraries, code files, and so on, which will not automatically be included or listed in the exported archive.
Luckily, the archive support is very general, and you can add any object to it that you want to be exported along with your output. For instance, you can store arbitrary metadata of your choosing, such as version control information, here as a JSON-format text file:
```
import json
hv.archive.add(filename='metadata.json',
               data=json.dumps({'repository': 'git@github.com:ioam/holoviews.git',
                                'commit': '437e8d69'}),
               info={'mime_type': 'text/json'})
```
The new file can now be seen in the contents listing:
```
hv.archive.contents()
```
You can get a more direct list of filenames using the ``listing`` method:
```
listing = hv.archive.listing()
listing
```
In this way, you should be able to automatically generate output files, with customizable filenames, storing any data or metadata you like along with them so that you can keep track of all the important information for reproducing these results later.
### Controlling the behavior of ``hv.archive``
The ``hv.archive`` object provides numerous parameters that can be changed. You can e.g.:
- output the whole directory to a single compressed ZIP or tar archive file (e.g. ``hv.archive.set_param(pack=True, archive_format='zip')`` or ``archive_format='tar'``)
- generate a new directory or archive every time the notebook is run (``hv.archive.uniq_name=True``); otherwise the old output directory is erased each time
- choose your own name for the output directory or archive (e.g. ``hv.archive.export_name="{timestamp}"``)
- change the format of the optional timestamp (e.g. to retain snapshots hourly, ``archive.set_param(export_name="{timestamp}", timestamp_format="%Y_%m_%d-%H")``)
- select PNG output, at a specified rendering resolution: ``hv.archive.exporters=[hv.renderer('matplotlib').instance(size=50)]``
These options and any others listed above can all be set in the ``hv.archive.auto()`` call at the start, for convenience and to ensure that they apply to all of the files that are added.
### Writing the archive to disk
To actually write the files you have stored in the archive to disk, you need to call ``export()`` after any cell that might contain computation-intensive code. Usually it's best to do so as the last or nearly last cell in your notebook, though here we do it earlier because we wanted to show how to use the exported files.
```
hv.archive.export()
```
Shortly after the ``export()`` command has been executed, the output should be available as a directory on disk, by default in the same directory as the notebook file, named with the name of the notebook:
```
import os
os.getcwd()
if os.path.exists(hv.archive.notebook_name):
    print('\n'.join(sorted(os.listdir(hv.archive.notebook_name))))
```
For technical reasons to do with how the IPython Notebook interacts with JavaScript, if you use the Jupyter Notebook command ``Run all``, the ``hv.archive.export()`` command is not actually executed when the cell with that call is encountered during the run. Instead, the ``export()`` is queued until after the final cell in the notebook has been executed. This asynchronous execution has several awkward but not serious consequences:
- It is not possible for the ``export()`` cell to show whether any errors were encountered during exporting, because these will not occur until after the notebook has completed processing. To see any errors, you can run ``hv.archive.last_export_status()`` separately, *after* the ``Run all`` has completed. E.g. just press shift-[Enter] in the following cell, which will tell you whether the previous export was successful.
- If you use ``Run all``, the directory listing ``os.listdir()`` above will show the results from the *previous* time this notebook was run, since it executes before the export. Again, you can use shift-[Enter] to update the data once complete.
- The ``Export name:`` in the output of ``hv.archive.export()`` will not always show the actual name of the directory or archive that will be created. In particular, it may say ``{notebook}``, which when saving will actually expand to the name of your Jupyter Notebook.
```
hv.archive.last_export_status()
```
### Accessing your saved data
By default, HoloViews saves not only your rendered plots (PNG, SVG, etc.), but also the actual HoloViews objects that the plots visualize, which contain all your actual data. The objects are stored in compressed Python pickle files (``.hvz``), which are visible in the directory listings above but have been ignored until now. The plots are what you need for writing a document, but the raw data is a crucial record to keep as well. For instance, you now can load in the HoloViews object, and manipulate it just as you could when it was originally defined. E.g. we can re-load our ``Levels`` ``Overlay`` file, which has the contours overlaid on top of the image, and easily pull out the underlying ``Image`` object:
```
import os
from holoviews.core.io import Unpickler
c, a = None,None
hvz_file = [f for f in listing if f.endswith('hvz')][0]
path = os.path.join(hv.archive.notebook_name, hvz_file)
if os.path.isfile(path):
    print('Unpickling {filename}'.format(filename=hvz_file))
    obj = Unpickler.load(open(path, "rb"))
    print(obj)
else:
    print('Could not find file {path}'.format(path=path))
    print('Current directory is {cwd}'.format(cwd=os.getcwd()))
    print('Containing files and directories: {listing}'.format(listing=os.listdir(os.getcwd())))
```
Given the ``Image``, you can also access the underlying array data, because HoloViews objects are simply containers for your data and associated metadata. This means that years from now, as long as you can still run HoloViews, you can now easily re-load and explore your data, plotting it entirely different ways or running different analyses, even if you no longer have any of the original code you used to generate the data. All you need is HoloViews, which is permanently archived on GitHub and is fully open source and thus should always remain available. Because the data is stored conveniently in the archive alongside the figure that was published, you can see immediately which file corresponds to the data underlying any given plot in your paper, and immediately start working with the data, rather than laboriously trying to reconstruct the data from a saved figure.
If you do not want the pickle files, you can of course turn them off if you prefer, by changing ``hv.archive.auto()`` to:
```python
hv.archive.auto(exporters=[hv.renderer('matplotlib').instance(holomap=None)])
```
Here, the exporters list has been updated to include the usual default exporters *without* the `Pickler` exporter that would usually be included.
## Using HoloViews to do reproducible research
The export options from HoloViews help you establish a feasible workflow for doing reproducible research: starting from interactive exploration, either export specific files with ``hv.save``, or enable ``hv.archive.auto()``, which will store a copy of your notebook and its output ready for inclusion in a document but retaining the complete recipe for reproducing the results later.
### Why reproducible research matters
To understand why these capabilities are important, let's consider the process by which scientific results are typically generated and published without HoloViews. Scientists and engineers use a wide variety of data-analysis tools, ranging from GUI-based programs like Excel spreadsheets, mixed GUI/command-line programs like Matlab, or purely scriptable tools like matplotlib or bokeh. The process by which figures are created in any of these tools typically involves copying data from its original source, selecting it, transforming it, choosing portions of it to put into a figure, choosing the various plot options for a subfigure, combining different subfigures into a complete figure, generating a publishable figure file with the full figure, and then inserting that into a report or publication.
If using GUI tools, often the final figure is the only record of that process, and even just a few weeks or months later a researcher will often be completely unable to say precisely how a given figure was generated. Moreover, this process needs to be repeated whenever new data is collected, which is an error-prone and time-consuming process. The lack of records is a serious problem for building on past work and revisiting the assumptions involved, which greatly slows progress both for individual researchers and for the field as a whole. Graphical environments for capturing and replaying a user's GUI-based workflow have been developed, but these have greatly restricted the process of exploration, because they only support a few of the many analyses required, and thus they have rarely been successful in practice. With GUI tools it is also very difficult to "curate" the sequence of steps involved, i.e., eliminating dead ends, speculative work, and unnecessary steps, with a goal of showing the clear path from incoming data to a final figure.
In principle, using scriptable or command-line tools offers the promise of capturing the steps involved, in a form that can be curated. In practice, however, the situation is often no better than with GUI tools, because the data is typically taken through many manual steps that culminate in a published figure, and without a laboriously manually created record of what steps are involved, the provenance of a given figure remains unknown. Where reproducible workflows are created in this way, they tend to be "after the fact", as an explicit exercise to accompany a publication, and thus (a) they are rarely done, (b) they are very difficult to do if any of the steps were not recorded originally.
A Jupyter notebook helps significantly to make the scriptable-tools approach viable, by recording both code and the resulting output, and can thus in principle act as a record for establishing the full provenance of a figure. But because typical plotting libraries require so much plotting-specific code before any plot is visible, the notebook quickly becomes unreadable. To make notebooks readable, researchers then typically move the plotting code for a specific figure to some external file, which then drifts out of sync with the notebook so that the notebook no longer acts as a record of the link between the original data and the resulting figure.
HoloViews provides the final missing piece in this approach, by allowing researchers to work directly with their data interactively in a notebook, using small amounts of code that focus on the data and analyses rather than plotting code, yet showing the results directly alongside the specification for generating them. This user guide describes how to use a Jupyter notebook with HoloViews to export your results in a way that preserves the information about how those results were generated, providing a clear chain of provenance and making reproducible research practical at last.
For more information on how HoloViews can help build a reproducible workflow, see our [2015 paper on using HoloViews for reproducible research](http://conference.scipy.org/proceedings/scipy2015/pdfs/jean-luc_stevens.pdf).
Homework 2
=====
Daphne Ippolito
```
import xor_network
```
What issues did you have?
-----
The first issue that I had was that I was trying to output a single scalar whose value could be thresholded to determine whether the network should return TRUE or FALSE. It turns out loss functions for this are much more complicated than if I had instead treated the XOR problem as a classification task with one output per possible label ('TRUE', 'FALSE'). This is the approach I have implemented here.
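To illustrate why the classification framing works smoothly, here is a minimal two-class softmax cross-entropy in plain Python (illustrative only, not the actual loss code in `xor_network`):

```python
import math

def softmax_xent(logits, label):
    """Two-class softmax cross-entropy for ('TRUE', 'FALSE') outputs."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[label])        # penalize low probability on the true label

# A confident correct prediction incurs far less loss than a confident wrong one:
assert softmax_xent([5.0, -5.0], 0) < softmax_xent([5.0, -5.0], 1)
```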
Another issue I encountered at first was that I was using too few hidden nodes. I originally thought that such a simple problem would only need a couple nodes in a single hidden layer to implement. However, such small networks were extremely slow to converge. This is exemplified in the Architectures section.
Lastly, when I was using small batch sizes (<= 5 examples), and randomly populating the batches, the network would sometimes fail to converge, probably because the batches didn't contain all the possible examples.
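One way to avoid that failure mode is to seed every batch with all four XOR cases and fill the remainder at random; a sketch (not the code actually used in `xor_network`):

```python
import itertools
import random

def xor_batch(batch_size):
    """Build a batch guaranteed to contain every XOR input/label pair,
    topping up the remaining slots with random repeats."""
    examples = [((a, b), a ^ b) for a, b in itertools.product((0, 1), repeat=2)]
    batch = list(examples)  # all four cases, at least once each
    batch += random.choices(examples, k=batch_size - len(examples))
    random.shuffle(batch)
    return batch

batch = xor_batch(8)
assert {inp for inp, _ in batch} == {(0, 0), (0, 1), (1, 0), (1, 1)}
```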
Which activation functions did you try? Which loss functions?
-----
I tried ReLU, sigmoid, and tanh activation functions. I only successfully used a softmax cross-entropy loss function.
The results for the different activation functions can be seen by running the block below. The sigmoid function consistently takes the longest to converge. I'm unsure why tanh does significantly better than sigmoid.
```
batch_size = 100
num_steps = 10000
num_hidden = 7
num_hidden_layers = 2
learning_rate = 0.2
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'sigmoid')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'tanh')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'relu')
```
What architectures did you try? What were the different results? How long did it take?
-----
The results for several different architectures can be seen by running the code below. Since there is no reading from disk, each iteration takes almost exactly the same amount of time. Therefore, I will report "how long it takes" in number of iterations rather than in time.
```
# Network with 2 hidden layers of 5 nodes
xor_network.run_network(batch_size, num_steps, 5, 2, learning_rate, False, 'relu')
# Network with 5 hidden layers of 2 nodes each
num_steps = 3000 # (so it doesn't go on forever)
xor_network.run_network(batch_size, num_steps, 2, 5, learning_rate, False, 'relu')
```
**Conclusion from the above:** With the number of parameters held constant, a deeper network does not necessarily perform better than a shallower one. I am guessing this is because fewer nodes in a layer means that the network can keep around less information from layer to layer.
```
xor_network.run_network(batch_size, num_steps, 3, 5, learning_rate, False, 'relu')
```
**Conclusion from the above:** Indeed, the problem is not the number of layers, but the number of nodes in each layer.
```
# This is the minimum number of nodes I can use to consistently get convergence with Gradient Descent.
xor_network.run_network(batch_size, num_steps, 5, 1, learning_rate, False, 'relu')
# If I switch to using Adam Optimizer, I can get down to 2 hidden nodes and consistently have convergence.
xor_network.run_network(batch_size, num_steps, 2, 1, learning_rate, True, 'relu')
```
```
%matplotlib inline
```
# Classifier comparison
A comparison of several classifiers in scikit-learn on synthetic datasets.
The point of this example is to illustrate the nature of decision boundaries
of different classifiers.
This should be taken with a grain of salt, as the intuition conveyed by
these examples does not necessarily carry over to real datasets.
Particularly in high-dimensional spaces, data can more easily be separated
linearly and the simplicity of classifiers such as naive Bayes and linear SVMs
might lead to better generalization than is achieved by other classifiers.
The plots show training points in solid colors and testing points
semi-transparent. The lower right shows the classification accuracy on the test
set.
```
print(__doc__)
# Code source: Gaël Varoquaux
# Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
         "Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
         "Naive Bayes", "QDA"]

classifiers = [
    KNeighborsClassifier(3),
    SVC(kernel="linear", C=0.025),
    SVC(gamma=2, C=1),
    GaussianProcessClassifier(1.0 * RBF(1.0)),
    DecisionTreeClassifier(max_depth=5),
    RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
    MLPClassifier(alpha=1, max_iter=1000),
    AdaBoostClassifier(),
    GaussianNB(),
    QuadraticDiscriminantAnalysis()]

X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)

datasets = [make_moons(noise=0.3, random_state=0),
            make_circles(noise=0.2, factor=0.5, random_state=1),
            linearly_separable
            ]
figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
    # preprocess dataset, split into training and test part
    X, y = ds
    X = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = \
        train_test_split(X, y, test_size=.4, random_state=42)

    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # just plot the dataset first
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#FF0000', '#0000FF'])
    ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
    if ds_cnt == 0:
        ax.set_title("Input data")
    # Plot the training points
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
               edgecolors='k')
    # Plot the testing points
    ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6,
               edgecolors='k')
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1

    # iterate over classifiers
    for name, clf in zip(names, classifiers):
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test)

        # Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max]x[y_min, y_max].
        if hasattr(clf, "decision_function"):
            Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        else:
            Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]

        # Put the result into a color plot
        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)

        # Plot the training points
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
                   edgecolors='k')
        # Plot the testing points
        ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
                   edgecolors='k', alpha=0.6)
        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        if ds_cnt == 0:
            ax.set_title(name)
        ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
                size=15, horizontalalignment='right')
        i += 1
plt.tight_layout()
plt.show()
```
# Using Fuzzingbook Code in your own Programs
This notebook has instructions on how to use the `fuzzingbook` code in your own programs.
In short, there are three ways:
1. Simply run the notebooks in your browser, using the "mybinder" environment. Choose "Resources→Edit as Notebook" in any of the `fuzzingbook.org` pages; this will lead you to a preconfigured Jupyter Notebook environment where you can toy around at your leisure.
2. Import the code for your own Python programs. Using `pip install fuzzingbook`, you can install all code and start using it from your own code. See "Can I import the code for my own Python projects?", below.
3. Download or check out the code and/or the notebooks from the project site. This allows you to edit and run all things locally. However, be sure to also install the required packages; see below for details.
```
import bookutils
from bookutils import YouTubeVideo
YouTubeVideo("b4HitpWsJL4")
```
## Can I import the code for my own Python projects?
Yes, you can! (If you like Python, that is.) We provide a `fuzzingbook` Python package that you can install using the `pip` package manager:
```shell
$ pip install fuzzingbook
```
As of `fuzzingbook 1.0`, this is set up such that almost all additional required packages are also installed. For a full installation, also follow the steps in "Which other Packages do I need to use the Python Modules?" below.
Once the `pip` installation is complete, you can import individual classes, constants, or functions from each notebook using
```python
>>> from fuzzingbook.<notebook> import <identifier>
```
where `<identifier>` is the name of the class, constant, or function to use, and `<notebook>` is the name of the respective notebook. (If you read this at fuzzingbook.org, then the notebook name is the identifier preceding `".html"` in the URL).
Here is an example importing `RandomFuzzer` from [the chapter on fuzzers](Fuzzer.ipynb), whose notebook name is `Fuzzer`:
```python
>>> from fuzzingbook.Fuzzer import RandomFuzzer
>>> f = RandomFuzzer()
>>> f.fuzz()
'!7#%"*#0=)$;%6*;>638:*>80"=</>(/*:-(2<4 !:5*6856&?""11<7+%<%7,4.8,*+&,,$,."5%<%76< -5'
```
The "Synopsis" section at the beginning of a chapter gives a short survey on useful code features you can use.
## Which OS and Python versions are required?
As of `fuzzingbook 1.0`, Python 3.9 and later is required. Specifically, we use Python 3.9.7 for development and testing. This is also the version to be used if you check out the code from git, and the version you get if you use the book within the "mybinder" environment.
To use the `fuzzingbook` code with earlier Python versions, use
```shell
$ pip install 'fuzzingbook==0.95'
```
Our notebooks generally assume a Unix-like environment; the code is tested on Linux and macOS. System-independent code may also run on Windows.
## Can I use the code from within a Jupyter notebook?
Yes, you can! You would first install the `fuzzingbook` package (as above); you can then access all code right from your notebook.
Another way to use the code is to _import the notebooks directly_. Download the notebooks from the menu. Then, add your own notebooks into the same folder. After importing `bookutils`, you can then simply import the code from other notebooks, just as our own notebooks do.
Here is again the above example, importing `RandomFuzzer` from [the chapter on fuzzers](Fuzzer.ipynb) – but now from a notebook:
```
import bookutils
from Fuzzer import RandomFuzzer
f = RandomFuzzer()
f.fuzz()
```
If you'd like to share your notebook, let us know; we can integrate it in the repository or even in the book.
## Can I check out the code from git and get the latest and greatest?
Yes, you can! We have a few continuous integration (CI) workflows running which do exactly that. After cloning the repository from [the project page](https://github.com/uds-se/fuzzingbook/) and installing the additional packages (see below), you can `cd` into `notebooks` and start `jupyter` right away!
There also is a `Makefile` provided with literally hundreds of targets; most important are the ones we also use in continuous integration:
* `make check-imports` checks whether your code is free of syntax errors
* `make check-style` checks whether your code is free of type errors
* `make check-code` runs all derived code, testing it
* `make check-notebooks` runs all notebooks, testing them
If you want to contribute to the project, ensure that the above tests run through.
The `Makefile` has many more, often experimental, targets. `make markdown` creates a `.md` variant in `markdown/`, and there's also `make word` and `make epub`, which are set to create Word and EPUB variants (with mixed results). Try `make help` for commonly used targets.
## Can I just run the Python code? I mean, without notebooks?
Yes, you can! You can download the code as Python programs; simply select "Resources $\rightarrow$ Download Code" for one chapter or "Resources $\rightarrow$ All Code" for all chapters. These code files can be executed, yielding (hopefully) the same results as the notebooks.
The code files can also be edited if you wish, but (a) they are very obviously generated from notebooks, (b) therefore not much fun to work with, and (c) if you fix any errors, you'll have to back-propagate them to the notebook before you can make a pull request. Use code files only under severely constrained circumstances.
If you only want to **use** the Python code, install the code package (see above).
## Which other Packages do I need to use the Python Modules?
After downloading the `fuzzingbook` code, installing the `fuzzingbook` package, or checking out `fuzzingbook` from the repository, here's what to do to obtain a complete set of packages.
### Step 1: Install Required Python Packages
The [`requirements.txt` file within the project root folder](https://github.com/uds-se/fuzzingbook/tree/master/) lists all _Python packages required_.
You can do
```sh
$ pip install -r requirements.txt
```
to install all required packages (but using `pipenv` is preferred; see below).
### Step 2: Install Additional Non-Python Packages
The [`apt.txt` file in the `binder/` folder](https://github.com/uds-se/fuzzingbook/tree/master/binder) lists all _Linux_ packages required.
In most cases, however, it suffices to install the `dot` graph drawing program (part of the `graphviz` package). Here are some instructions:
#### Installing Graphviz on Linux
On Linux, run
```sh
$ sudo apt-get install graphviz
```
to install it.
#### Installing Graphviz on macOS
On macOS, if you use `conda`, run
```sh
$ conda install graphviz
```
If you use HomeBrew, run
```sh
$ brew install graphviz
```
## Installing Fuzzingbook Code in an Isolated Environment
If you wish to install the `fuzzingbook` code in an environment that is isolated from your system interpreter,
we recommend using [Pipenv](https://pipenv.pypa.io/), which can automatically create a so called *virtual environment* hosting all required packages.
To accomplish this, please follow these steps:
### Step 1: Install PyEnv
Optionally install `pyenv` following the [official instructions](https://github.com/pyenv/pyenv#installation) if you are on a Unix operating system.
If you are on Windows, consider using [pyenv-win](https://github.com/pyenv-win/pyenv-win) instead.
This will allow you to seamlessly install any version of Python.
### Step 2: Install PipEnv
Install Pipenv following the official [installation instructions](https://pypi.org/project/pipenv/).
If you have `pyenv` installed, Pipenv can automatically download and install the appropriate version of the Python distribution.
Otherwise, Pipenv will use your system interpreter, which may or may not be the right version.
### Step 3: Install Python Packages
Run
```sh
$ pipenv install -r requirements.txt
```
in the `fuzzingbook` root directory.
### Step 4: Install Additional Non-Python Packages
See above for instructions on how to install additional non-Python packages.
### Step 5: Enter the Environment
Enter the environment with
```sh
$ pipenv shell
```
where you can now execute
```sh
$ make -k check-code
```
to run the tests.
# Introduction to obspy
The obspy package is very useful for downloading seismic data and doing signal processing on them. Most of its signal processing methods are based on the routines in the Python package scipy.
First we import useful packages.
```
import obspy
import obspy.signal.filter
import obspy.clients.earthworm.client as earthworm
import obspy.clients.fdsn.client as fdsn
from obspy import read
from obspy import read_inventory
from obspy import UTCDateTime
from obspy.core.stream import Stream
from obspy.signal.cross_correlation import correlate
import matplotlib.pyplot as plt
import numpy as np
import os
import urllib.request
%matplotlib inline
```
We are going to download data from an array of seismic stations.
```
network = 'XU'
arrayName = 'BS'
staNames = ['BS01', 'BS02', 'BS03', 'BS04', 'BS05', 'BS06', 'BS11', 'BS20', 'BS21', 'BS22', 'BS23', 'BS24', 'BS25', \
'BS26', 'BS27']
chaNames = ['SHE', 'SHN', 'SHZ']
staCodes = 'BS01,BS02,BS03,BS04,BS05,BS06,BS11,BS20,BS21,BS22,BS23,BS24,BS25,BS26,BS27'
chans = 'SHE,SHN,SHZ'
```
We also need to define the time period for which we want to download data.
```
myYear = 2010
myMonth = 8
myDay = 17
myHour = 6
TDUR = 2 * 3600.0
Tstart = UTCDateTime(year=myYear, month=myMonth, day=myDay, hour=myHour)
Tend = Tstart + TDUR
```
We start by defining the client for downloading the data
```
fdsn_client = fdsn.Client('IRIS')
```
Download the seismic data for all the stations in the array.
```
Dtmp = fdsn_client.get_waveforms(network=network, station=staCodes, location='--', channel=chans, starttime=Tstart, \
endtime=Tend, attach_response=True)
```
Some stations did not record the entire two hours. We delete these and keep only stations with a complete two-hour recording.
```
ntmp = []
for ksta in range(0, len(Dtmp)):
ntmp.append(len(Dtmp[ksta]))
ntmp = max(set(ntmp), key=ntmp.count)
D = Dtmp.select(npts=ntmp)
```
This is a function for plotting after each operation on the data.
```
def plot_2hour(D, channel, offset, title):
""" Plot seismograms
D = Stream
channel = 'E', 'N', or 'Z'
offset = Offset between two stations
title = Title of the figure
"""
fig, ax = plt.subplots(figsize=(15, 10))
Dplot = D.select(component=channel)
t = (1.0 / Dplot[0].stats.sampling_rate) * np.arange(0, Dplot[0].stats.npts)
for ksta in range(0, len(Dplot)):
plt.plot(t, ksta * offset + Dplot[ksta].data, 'k')
plt.xlim(np.min(t), np.max(t))
plt.ylim(- offset, len(Dplot) * offset)
plt.title(title, fontsize=24)
plt.xlabel('Time (s)', fontsize=24)
ax.set_yticklabels([])
ax.tick_params(labelsize=20)
plot_2hour(D, 'E', 1200.0, 'Downloaded data')
```
We start by detrending the data.
```
D
D.detrend(type='linear')
plot_2hour(D, 'E', 1200.0, 'Detrended data')
```
We then taper the data.
```
D.taper(type='hann', max_percentage=None, max_length=5.0)
plot_2hour(D, 'E', 1200.0, 'Tapered data')
```
And we remove the instrument response.
```
D.remove_response(output='VEL', pre_filt=(0.2, 0.5, 10.0, 15.0), water_level=80.0)
plot_2hour(D, 'E', 1.0e-6, 'Deconvolving the instrument response')
```
Then we filter the data.
```
D.filter('bandpass', freqmin=2.0, freqmax=8.0, zerophase=True)
plot_2hour(D, 'E', 1.0e-6, 'Filtered data')
```
And we resample the data.
```
D.interpolate(100.0, method='lanczos', a=10)
D.decimate(5, no_filter=True)
plot_2hour(D, 'E', 1.0e-6, 'Resampled data')
```
We can also compute the envelope of the signal.
```
for index in range(0, len(D)):
D[index].data = obspy.signal.filter.envelope(D[index].data)
plot_2hour(D, 'E', 1.0e-6, 'Envelope')
```
You can also download the instrument response separately:
```
network = 'XQ'
station = 'ME12'
channels = 'BHE,BHN,BHZ'
location = '01'
```
This is to download the instrument response.
```
fdsn_client = fdsn.Client('IRIS')
inventory = fdsn_client.get_stations(network=network, station=station, level='response')
inventory.write('response/' + network + '_' + station + '.xml', format='STATIONXML')
```
We then read the data and start processing the signal as we did above.
```
fdsn_client = fdsn.Client('IRIS')
Tstart = UTCDateTime(year=2008, month=4, day=1, hour=4, minute=49)
Tend = UTCDateTime(year=2008, month=4, day=1, hour=4, minute=50)
D = fdsn_client.get_waveforms(network=network, station=station, location=location, channel=channels, starttime=Tstart, endtime=Tend, attach_response=False)
D.detrend(type='linear')
D.taper(type='hann', max_percentage=None, max_length=5.0)
```
But we now use the xml file that contains the instrument response to remove it from the signal.
```
filename = 'response/' + network + '_' + station + '.xml'
inventory = read_inventory(filename, format='STATIONXML')
D.attach_response(inventory)
D.remove_response(output='VEL', pre_filt=(0.2, 0.5, 10.0, 15.0), water_level=80.0)
```
We resume signal processing.
```
D.filter('bandpass', freqmin=2.0, freqmax=8.0, zerophase=True)
D.interpolate(100.0, method='lanczos', a=10)
D.decimate(5, no_filter=True)
```
And we plot.
```
t = (1.0 / D[0].stats.sampling_rate) * np.arange(0, D[0].stats.npts)
plt.plot(t, D[0].data, 'k')
plt.xlim(np.min(t), np.max(t))
plt.title('Single waveform', fontsize=18)
plt.xlabel('Time (s)', fontsize=18)
```
Not all seismic data are stored on IRIS. This is an example of how to download data from the Northern California Earthquake Data Center (NCEDC).
```
network = 'BK'
station = 'WDC'
channels = 'BHE,BHN,BHZ'
location = '--'
```
This is to download the instrument response.
```
url = 'http://service.ncedc.org/fdsnws/station/1/query?net=' + network + '&sta=' + station + '&level=response&format=xml&includeavailability=true'
s = urllib.request.urlopen(url)
contents = s.read()
file = open('response/' + network + '_' + station + '.xml', 'wb')
file.write(contents)
file.close()
```
And this is to download the data.
```
Tstart = UTCDateTime(year=2007, month=2, day=12, hour=1, minute=11, second=54)
Tend = UTCDateTime(year=2007, month=2, day=12, hour=1, minute=12, second=54)
request = 'waveform_' + station + '.request'
file = open(request, 'w')
message = '{} {} {} {} '.format(network, station, location, channels) + \
'{:04d}-{:02d}-{:02d}T{:02d}:{:02d}:{:02d} '.format( \
Tstart.year, Tstart.month, Tstart.day, Tstart.hour, Tstart.minute, Tstart.second) + \
'{:04d}-{:02d}-{:02d}T{:02d}:{:02d}:{:02d}\n'.format( \
Tend.year, Tend.month, Tend.day, Tend.hour, Tend.minute, Tend.second)
file.write(message)
file.close()
miniseed = 'station_' + station + '.miniseed'
request = 'curl -s --data-binary @waveform_' + station + '.request -o ' + miniseed + ' http://service.ncedc.org/fdsnws/dataselect/1/query'
os.system(request)
D = read(miniseed)
D.detrend(type='linear')
D.taper(type='hann', max_percentage=None, max_length=5.0)
filename = 'response/' + network + '_' + station + '.xml'
inventory = read_inventory(filename, format='STATIONXML')
D.attach_response(inventory)
D.remove_response(output='VEL', pre_filt=(0.2, 0.5, 10.0, 15.0), water_level=80.0)
D.filter('bandpass', freqmin=2.0, freqmax=8.0, zerophase=True)
D.interpolate(100.0, method='lanczos', a=10)
D.decimate(5, no_filter=True)
t = (1.0 / D[0].stats.sampling_rate) * np.arange(0, D[0].stats.npts)
plt.plot(t, D[0].data, 'k')
plt.xlim(np.min(t), np.max(t))
plt.title('Single waveform', fontsize=18)
plt.xlabel('Time (s)', fontsize=18)
```
# train.py: What it does step by step
This tutorial will break down what train.py does when it is run, and illustrate the functionality of some of the custom 'utils' functions that are called during a training run, in a way that is easy to understand and follow.
Note that parts of the functionality of train.py depend on the config.json file you are using. This tutorial is self-contained, and doesn't use a config file, but for more information on working with this file when using ProLoaF, see [this explainer](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/). Before proceeding to any of the sections below, please run the following code block:
```
import os
import sys
sys.path.append("../")
import pandas as pd
import utils.datahandler as dh
import matplotlib.pyplot as plt
import numpy as np
```
## Table of contents:
[1. Dealing with missing values in the data](#1.-Dealing-with-missing-values-in-the-data)
[2. Selecting and scaling features](#2.-Selecting-and-scaling-features)
[3. Creating a dataframe to log training results](#3.-Creating-a-dataframe-to-log-training-results)
[4. Exploration](#4.-Exploration)
[5. Main run - creating the training model](#5.-Main-run---creating-the-training-model)
[6. Main run - training the model](#6.-Main-run---training-the-model)
[7. Updating the config, Saving the model & logs](#7.-Updating-the-config,-saving-the-model-&-logs)
## 1. Dealing with missing values in the data
The first thing train.py does after loading the dataset that was specified in your config file, is to check for any missing values, and fill them in as necessary. It does this using the function 'utils.datahandler.fill_if_missing'. In the following example, we will load some data that has missing values and examine what the 'fill_if_missing' function does. Please run the code block below to get started.
```
#Load the data sample and prep for use with datahandler functions
df = pd.read_csv("../data/fill_missing.csv", sep=";")
df['Time'] = pd.to_datetime(df['Time'])
df = df.set_index('Time')
df = df.astype(float)
df_missing_range = df.copy()
#Plot the data
df.iloc[0:194].plot(kind='line',y='DE_load_actual_entsoe_transparency', figsize = (12, 6), xlabel='Hours', use_index = False)
```
As should be clearly visible in the plot above, the data has some missing values. There is a missing range (a range refers to multiple adjacent values), from around 96-121, as well as two individual values that are missing, at 160 and 192. Please run the code block below to see how 'fill_if_missing' deals with these problems.
```
#Use fill_if_missing and plot the results
df=dh.fill_if_missing(df, periodicity=24)
df.iloc[0:192].plot(kind='line',y='DE_load_actual_entsoe_transparency', figsize = (12, 6), use_index = False)
#TODO: Test this again once interpolation is working
```
As we can see by the printed console messages, fill_if_missing first checks whether there are any missing values. If there are, it checks whether they are individual values or ranges, and handles these cases differently:
### Single missing values:
These are simply replaced by the average of the values on either side.
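The single-value rule above can be sketched in a few lines of numpy. This is a hypothetical standalone version, not the actual `fill_if_missing` code, and `fill_single` is a name chosen here for illustration:

```python
import numpy as np

def fill_single(values):
    """Replace isolated NaNs with the mean of their two neighbours
    (sketch of the behaviour described above, not the ProLoaF code)."""
    v = values.copy()
    for t in range(1, len(v) - 1):
        if np.isnan(v[t]) and not np.isnan(v[t - 1]) and not np.isnan(v[t + 1]):
            v[t] = (v[t - 1] + v[t + 1]) / 2
    return v

print(fill_single(np.array([1.0, np.nan, 3.0])))  # [1. 2. 3.]
```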
### Missing range:
If a range of values is missing, fill_if_missing will use the specified periodicity of the data to provide an estimate of the missing values, by averaging the ranges on either side of the missing range and then adapting the new values to fit the trend. If not specified, the periodicity has a default value of 1, but since we are using hourly data, we will use a periodicity of p = 24.
For each missing value at a given position t in the range, fill_if_missing first searches backwards through the data at intervals equal to the periodicity of the data (i.e. t1 = t - 24\*n, n = 1, 2,...) until it finds an existing value. It then does the same thing searching forwards through the data (i.e. t2 = t + 24\*n, n = 1, 2,...), and then it sets the value at t equal to the average of t1 and t2. Run the code block below to see the result for the missing range at 95-121:
```
start = 95
end = 121
p = 24
seas = np.zeros(len(df_missing_range))
#fill the missing values
for t in range(start, end + 1):
p1 = p
p2 = p
while np.isnan(df_missing_range.iloc[t - p1, 0]):
p1 += p
while np.isnan(df_missing_range.iloc[t + p2, 0]):
p2 += p
seas[t] = (df_missing_range.iloc[t - p1, 0] + df_missing_range.iloc[t + p2, 0]) / 2
#plot the result
ax = plt.gca()
df_missing_range["Interpolated"] = pd.Series(len(seas))
for t in range(start, end + 1):
df_missing_range.iloc[t, 1] = seas[t]
df_missing_range.iloc[0:192].plot(kind='line',y='DE_load_actual_entsoe_transparency', figsize = (12, 6), use_index = False, ax = ax)
df_missing_range.iloc[0:192].plot(kind='line',y='Interpolated', figsize = (12, 6), use_index = False, ax = ax)
```
The missing values in the range between 95 and 121 have now been filled in, but the end points aren't continuous with the original data, and the new values don't take into account the trend in the data. To deal with this, the function uses the difference in slope between the start and end points of the missing data range, and the start and end points of the newly interpolated values, to offset the new values so that they line up with the original data:
```
print("Create two straight lines that connect the interpolated start and end points, and the original start and end points.\nThese capture the 'trend' in each case over the missing section")
trend1 = np.poly1d(
np.polyfit([start, end], [seas[start], seas[end]], 1)
)
trend2 = np.poly1d(
np.polyfit(
[start - 1, end + 1],
[df_missing_range.iloc[start - 1, 0], df_missing_range.iloc[end + 1, 0]],
1,
)
)
#by subtracting the trend of the interpolated data, then adding the trend of the original data, we match the filled in
#values to what we had before
for t in range(start, end + 1):
df_missing_range.iloc[t, 1] = seas[t] - trend1(t) + trend2(t)
#plot the result
ax = plt.gca()
df_missing_range.iloc[0:192].plot(kind='line',y='DE_load_actual_entsoe_transparency', figsize = (12, 6), use_index = False, ax = ax)
df_missing_range.iloc[0:192].plot(kind='line',y='Interpolated', figsize = (12, 6), use_index = False, ax = ax)
```
**Please note:**
- Missing data ranges at the beginning or end of the data are handled differently (TODO: Explain how)
- Though the examples shown here use a single column for simplicity's sake, fill_if_missing automatically works on every column (feature) of your original dataframe.
## 2. Selecting and scaling features
The next thing train.py does is to select and scale features in the data as specified in the relevant config file, using the function 'utils.datahandler.scale_all'.
Consider the following dataset:
```
#Load and then plot the new dataset
df_to_scale = pd.read_csv("../data/opsd.csv", sep=";", index_col=0)
df_to_scale.plot(kind='line',y='AT_load_actual_entsoe_transparency', figsize = (8, 4), use_index = False)
df_to_scale.plot(kind='line',y='AT_temperature', figsize = (8, 4), use_index = False)
df_to_scale.head()
```
The above dataset has 55 features (columns), some of which are at totally different scales, as is clearly visible when looking at the y-axes of the above graphs for load and temperature data from Austria.
Depending on our dataset, we may not want to use all of the available features for training.
If we wanted to select only the two features highlighted above for training, we could do so by editing the value at the "feature_groups" key in the config.json, which takes the form of a list of dicts like the one below:
```
two_features = [
{
"name": "main",
"scaler": [
"minmax",
-1.0,
1.0
],
"features": [
"AT_load_actual_entsoe_transparency",
"AT_temperature"
]
}
]
```
Each dict in the list represents a feature group, and should have the following keys:
- "name" - the name of the feature group
- "scaler" - the scaler used by this feature group (value: a list with entries for scaler name and scaler specific attributes.) Valid scaler names include 'standard', 'robust' or 'minmax'. For more information on these scalers and their use, please see the [scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing) or [the documentation for scale_all](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/reference/proloaf/proloaf/utils/datahandler.html#scale_all)
- "features" - which features are to be included in the group (value: a list containing the feature names)
The 'scale_all' function will only return the selected features, scaled using the scaler assigned to their feature group.
Here we only have one group, 'main', which uses the 'minmax' scaler:
```
#Select, scale and plot the features as specified by the two_features list (see above)
selected_features, scalers = dh.scale_all(df_to_scale, two_features)
selected_features.plot(figsize = (12, 6), use_index = False)
print("Currently used scalers:")
print(scalers)
```
As you can see, both of our features (load and temperature for Austria) have now been scaled to fit within the same range (between -1 and 1).
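Under the hood, min-max scaling maps each feature linearly onto the target range. A minimal numpy sketch of the transformation (the actual `scale_all` delegates to scikit-learn's `MinMaxScaler`; `minmax` here is just an illustrative name):

```python
import numpy as np

def minmax(x, lo=-1.0, hi=1.0):
    """Linearly rescale x so its minimum maps to lo and its maximum to hi."""
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

x = np.array([10.0, 20.0, 30.0])
print(minmax(x))  # [-1.  0.  1.]
```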
Let's say we also wanted to include the weekday data from the data set in our training. Let us first take a look at what the weekday features look like. Here are the first 500 hours (approx. 3 weeks) of weekday_0:
```
df_to_scale[:500].plot(kind='line',y='weekday_0', figsize = (12, 4), use_index = False)
```
As we can see, these features are already within the range [0,1] and thus don't need to be scaled, so we can include them in a second feature group called 'aux'. Note: features that we deliberately leave unscaled should go in a group with this name.
The value of the "feature_groups" key in the config.json could then look like this:
```
feature_groups = [
{
"name": "main",
"scaler": [
"minmax",
0.0,
1.0
],
"features": [
"AT_load_actual_entsoe_transparency",
"AT_temperature"
]
},
{
"name": "aux",
"scaler": None,
"features": [
"weekday_0",
"weekday_1",
"weekday_2",
"weekday_3",
"weekday_4",
"weekday_5",
"weekday_6"
]
}
]
```
We now have two feature groups, 'main' (which uses the 'minmax' scaler, this time with a range between 0 and 1) and 'aux' (which uses no scaler):
```
#Select, scale and plot the features as specified by feature_groups (see above)
selected_features, scalers = dh.scale_all(df_to_scale,feature_groups)
selected_features[23000:28000].plot(figsize = (12, 6), use_index = False)
print("Currently used scalers:")
print(scalers)
```
We can see that all of our selected features now fit between 0 and 1. From this point onward, train.py will only work with our selected, scaled features.
```
print("Currently selected and scaled features: ")
print(selected_features.columns)
```
### Selecting scalers
When selecting which scalers to use, it is important that whichever one we choose does not adversely affect the shape of the distribution of our data, as this would distort our results. For example, this is the distribution of the feature "AT_load_actual_entsoe_transparency" before scaling:
```
df_unscaled = pd.read_csv("../data/opsd.csv", sep=";", index_col=0)
df_unscaled["AT_load_actual_entsoe_transparency"].plot.kde()
```
And this is the distribution after scaling using the minmax scaler, as we did above:
```
selected_features["AT_load_actual_entsoe_transparency"].plot.kde()
```
It is clear that in both cases, the distribution functions have a similar shape. The axes are scaled differently, but both graphs have maxima to the right of zero. On the other hand, this is what the distribution looks like if we use the robust scaler on this data:
```
feature_robust = [
{
"name": "main",
"scaler": [
"robust",
0.25,
0.75
],
"features": [
"AT_load_actual_entsoe_transparency",
]
}
]
selected_feat_robust, scalers_robust = dh.scale_all(df_to_scale,feature_robust)
selected_feat_robust["AT_load_actual_entsoe_transparency"].plot.kde()
```
Not only have the axes been scaled, but the data has also been shifted so that the maxima are centered around zero. The same problem can be observed with the "standard" scaler:
```
feature_std = [
{
"name": "main",
"scaler": [
"standard"
],
"features": [
"AT_load_actual_entsoe_transparency",
]
}
]
selected_feat_std, scalers_std = dh.scale_all(df_to_scale,feature_std)
selected_feat_std["AT_load_actual_entsoe_transparency"].plot.kde()
```
As a result, the minmax scaler is the best option for this feature.
## 3. Creating a dataframe to log training results
Having already filled any missing values in our data, and scaled and selected the features we want to use for training,
at this point we use the function 'utils.loghandler.create_log' to create a dataframe which will log the results of our training.
This dataframe is saved as a .csv file at the end of the main training run. This allows different training runs, e.g. using new data or different parameters, to be compared with one another, so that we can monitor any changes in performance and compare the most recent run to the best performance achieved so far.
'create_log' creates the dataframe by getting which features we'll be logging from the [log.json file](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/log/), and then:
- loading an existing log file (from MAIN_PATH/\<log_path\>/\<model_name\>/\<model_name\>+"_training.csv" - see the [config.json explainer](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/) for more information)
- or creating a new dataframe from scratch, with the required features.
The newly created dataframe, log_df, is used at various later points in train.py (see sections [4](#4.-Exploration), [6](#6.-Main-run---training-the-model) and [7](#7.-Updating-the-config,-saving-the-model-&-logs) for more info).
## 4. Exploration
From this point on, it is assumed that we are working with prepared and scaled data (see earlier sections for more details).
The exploration phase is optional, and will only be carried out if the 'exploration' key in the config.json file is set to 'true'. The purpose of exploration is to tune our hyperparameters before the main training run. This is done by using [Optuna](https://optuna.org/) to optimize our [objective function](#Objective-function). Optuna iterates through a number of trials - either a fixed number, or until timeout (as specified in the tuning.json file, [see below](#tuning.json) for more info) - with the purpose of finding the hyperparameter settings that result in the smallest validation loss. This is an indicator of the quality of the prediction - the validation loss is the discrepancy between the predicted values and the actual values (targets) from the validation dataset, as determined by one of the metrics from [utils.metrics](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/reference/proloaf/proloaf/utils/eval_metrics.html). The metric used is specified when train.py is called.
Once Optuna is done iterating, a summary of the trials is printed (number of trials, details of the best trial), and if the new best trial represents an improvement over the previously logged best, you will be prompted about whether you would like to overwrite the config.json with the newly found hyperparameters, so that they can be used for future training.
Optuna also has built-in parallelization, which we can opt to use by setting the 'parallel_jobs' key in the config.json to 'true'.
### Objective function
The previously mentioned objective function is a callable which Optuna uses for its optimization. In our case, it is the function 'mh.tuning_objective', which does the following per trial:
- Suggests values for each hyperparameter as per the [tuning.json](#tuning.json)
- Creates (see [section 5](#5.-Main-run---creating-the-training-model)) and trains (see [section 6](#6.-Main-run---training-the-model)) a model using these hyperparameters and our selected features and scalers
- Returns a score for the model in the form of the validation loss after training.
### tuning.json
The tuning.json file ([see explainer](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/#tuning-config)) is located in the same folder as the other configs, under proloaf/targets/\<model name\>, and contains information about the hyperparameters to be tuned, as well as settings that limit how long Optuna should run. It can look like this, for example:
```json
{
"number_of_tests": 100,
"settings":
{
"learning_rate": {
"function": "suggest_loguniform",
"kwargs": {
"name": "learning_rate",
"low": 0.000001,
"high": 0.0001
}
},
"batch_size": {
"function": "suggest_int",
"kwargs": {
"name": "batch_size",
"low": 32,
"high": 120
}
}
}
}
```
- "number_of_tests": 100 here means that Optuna will stop after 100 trials. Alternatively, "timeout": \<number in seconds\> would limit the maximum duration for which Optuna would run. If both are specified, Optuna will stop whenever the first criterion is met. If neither are specified, Optuna will wait for a termination signal (Ctrl+C or SIGTERM).
- The "settings" keyword has a dictionary as its value. This dictionary contains keywords for each of the hyperparameters that Optuna is to optimize, e.g. "learning_rate", "batch_size".
- Each hyperparameter keyword takes yet another dictionary as a value, with the keywords "function" and "kwargs"
- "function" takes as its value a function name beginning with "suggest_..." as described in the [Optuna docs](https://optuna.readthedocs.io/en/v1.4.0/reference/trial.html#optuna.trial.Trial.suggest_loguniform). These "suggest" functions are used to suggest hyperparameter values by sampling from a range with the relevant distribution.
- "kwargs" has keywords for the arguments required by the "suggest" functions, typically "low" and "high" for the start and endpoints of the desired ranges as well as a "name" keyword which stores the hyperparameter name.
### Notes
- When running train.py using Gitlab CI, the prompt about whether to overwrite the config.json with the newly found values is disabled (as well as all other similar prompts).
- Though the objective function takes log_df (from the previous section) as a parameter, as it is required by the 'train' function, none of the training runs from this exploration phase are actually logged. Only the main run is logged. (See the next section for more)
## 5. Main run - creating the training model
Now that the data is free of missing values, we've selected which features we'll use, and the (optional) hyperparameter exploration is done, we are almost ready to create the model that will be used for the training.
The last step we need to take before we can do that is to split our data into training, validation and testing sets. During this process, we also transition from using the familiar Dataframe structure we've been using up until now, to using a new custom structure called CustomTensorData.
### Splitting the data
We do this using the function 'utils.datahandler.transform', which splits our 'selected_features' dataframe into three new dataframes for the three sets outlined above. It does this according to the 'train_split' and 'validation_split' parameters in the config file, as follows:
Each of the 'split' parameters are multiplied with the length of the 'selected_features' dataframe, with the result converted to an integer. These new integer values are the indices for the split.
e.g. For 'train_split' = 0.6 and 'validation_split'= 0.8, the first 60% of the data entries would be stored in the new training dataframe, with the next 20% going to the validation dataframe, and the final 20% used for the testing dataframe.
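The index arithmetic described above can be sketched as follows. This is a simplified standalone version of the split logic, not the actual `utils.datahandler.transform` code, and `split_frame` is a name chosen here for illustration:

```python
import pandas as pd

def split_frame(df, train_split=0.6, validation_split=0.8):
    """Split a dataframe into train/validation/test parts by row index,
    as described above: the split fractions are multiplied with the
    dataframe length and cast to integers."""
    i_train = int(len(df) * train_split)
    i_val = int(len(df) * validation_split)
    return df.iloc[:i_train], df.iloc[i_train:i_val], df.iloc[i_val:]

df = pd.DataFrame({"load": range(100)})
train, val, test = split_frame(df)
print(len(train), len(val), len(test))  # 60 20 20
```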
### Transformation into Tensors
Each of the three new dataframes is then transformed into the new CustomTensorData structure. This is done because the new structure is better suited for use with our RNN, and it is accomplished using 'utils.tensorloader.make_dataloader'.
A single CustomTensorData structure also has three components, each of which is comprised of a different set of features as defined in the config file:
- inputs1 - Encoder features: Features used as input for the encoder RNN. They provide information from a certain number of timesteps leading up to the period we wish to forecast i.e. from historical data.
- inputs2 - Decoder features: Features used as input for the decoder RNN. They provide information from the same time steps as the period we wish to forecast i.e. from data about the future that we know in advance, e.g. weather forecasts, day of the week, etc.
- targets - The features we are trying to forecast.
These three components contain the features described above, but reorganized from the familiar tabular Dataframe format into a series of samples of a given length ([horizon](#Horizons)).
To understand this change, please consider the image below, which focuses on a single feature that is only 7 time steps long. (Feature 1 - with data that is merely illustrative)

The final format, as seen in the two examples on the right, consists of a number of samples (rows) of a given length (horizon), such that the first sample begins at the first time step of the range given to 'make_dataloader', and the last sample ends at the last time step in the aforementioned range. Each consecutive sample also begins one time step later than the previous sample (the one above it).
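The sample construction just described can be sketched with a simple sliding window in numpy (a toy single-feature example mirroring the 7-step illustration; `make_samples` is an illustrative name, not ProLoaF's API):

```python
import numpy as np

def make_samples(feature, horizon):
    """Slice a 1-D feature into overlapping samples of length `horizon`,
    each starting one time step later than the previous one."""
    return np.stack([feature[t:t + horizon]
                     for t in range(len(feature) - horizon + 1)])

feature = np.arange(1, 8)           # 7 time steps, as in the illustration
samples = make_samples(feature, 3)  # horizon of 3
print(samples.shape)                # (5, 3): 5 samples of 3 steps each
```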
When we consider a set of multiple features (with many more than 7 timesteps), each feature is transformed individually and then all features are combined into a 3D Tensor, as depicted in the following image:

### Horizons
So far, we have been using the term 'horizon' to refer to how many time steps are contained in one sample in our Custom Tensor structure. However, the three components (inputs1, inputs2 and targets) do not all share the same horizon length. In fact, there are two different parameters for horizon length, namely 'history_horizon', and 'forecast_horizon', and the different components use them as illustrated in the following diagram:

As we can see, sample 1 of inputs1 contains 'history_horizon' timesteps up to a certain point, while sample 1 of inputs2 and targets contain the 'forecast_horizon' timesteps that follow from that point onwards. This is because, as previously mentioned, inputs1 provides historical data, while inputs2 and targets both provide "future data" - data from the future that we know ahead of making our forecast.
**NB:** our "future data" can include uncertainty, for example, if we use existing forecasts as input.
With increasing sample number, we have a kind of moving window which shifts towards the right in the image above.
So that all three components contain the same number of samples, the inputs1 tensor does not include the final 'forecast_horizon' timesteps, while inputs2 and targets do not contain the first 'history_horizon' timesteps.
**Note:** for illustration purposes, the above image shows which timesteps are in a given sample of the three components, in relation to the original dataframe. The components are nevertheless at this point already stored in the custom tensor structure described earlier in this subsection.
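The moving-window construction described above can be sketched as follows. This is a hedged illustration only: the function name and signature are assumptions, not the project's actual API; only the parameter names history_horizon and forecast_horizon mirror the guide.

```python
import numpy as np

# Hypothetical sketch of the moving-window split: each consecutive sample
# begins one time step later than the previous one, inputs1 holds
# history_horizon steps and targets holds the forecast_horizon steps after.
def split_windows(series, history_horizon, forecast_horizon):
    n_samples = len(series) - history_horizon - forecast_horizon + 1
    inputs1 = np.stack([series[i:i + history_horizon]
                        for i in range(n_samples)])
    targets = np.stack([series[i + history_horizon:
                               i + history_horizon + forecast_horizon]
                        for i in range(n_samples)])
    return inputs1, targets

# For a 7-step feature with history_horizon=3 and forecast_horizon=2,
# the first sample begins at the first time step of the range and the
# last sample ends at its last time step.
inputs1, targets = split_windows(np.arange(7), 3, 2)
```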
### Model Creation
The final step in this stage is to create the model which we will be training. We do this using 'utils.modelhandler.make_model'. This function takes the number of features in component 1 and in component 2 (inputs1 and inputs2) of the training tensor, as well as our scalers from [section 2](#2.-Selecting-and-scaling-features) of this guide and other parameters from the config file, and returns an EncoderDecoder model as defined in 'utils.models'.
## 6. Main run - training the model
At this point, we have everything we need to train our model: our training, validation and testing datasets, stored as CustomTensorData structures, and the instantiated EncoderDecoder model we are going to be training. We give these along with the hyperparameters in our config file (e.g. learning rate, batch size etc.) to the function 'utils.modelhandler.train'. This function returns:
- a trained model
- our logging dataframe (see [section 3](#3.-Creating-a-dataframe-to-log-training-results)) updated with the results of the training
- the minimum validation loss after training
- the model's score as calculated by the function 'performance_test', in our case using the metric 'mis' (Mean Interval Score) as defined in 'utils.metrics'
What follows is a short breakdown of what 'utils.modelhandler.train' does.
**Reminder:** 'loss' generally refers to a measure of how far off our predictions are from the actual values (targets) we are trying to predict.
### How training works
Training lasts for a number of epochs, given by the parameter 'max_epochs' in the config file.
During each epoch, we perform the training step, followed by the validation step.
#### Training step:
Use .train() on the model to set it in training mode, then loop through every sample in our training data tensor, and in each iteration of the loop:
- get the model's prediction using the current sample of inputs1 and inputs2
- zero the gradients of our optimizer
- calculate the loss of the prediction we just made for this sample (using whichever metric is specified in the config)
- compute new gradients using loss.backward()
- update the optimizer's parameters (by taking an optimization step) using the new gradients
- update the epoch loss (the loss for the current sample is divided by the total number of samples in the training data and added to the current epoch loss)
This step iteratively teaches the model to produce better predictions.
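The per-sample loop above can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions: the toy linear model, optimizer, loss function, and tensors are all stand-ins, not the project's actual EncoderDecoder model or config-driven metric.

```python
import torch

# Hedged sketch of the training step: toy model and data, not the real ones.
torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()
inputs1 = torch.randn(8, 4)   # stand-in for samples of historical data
targets = torch.randn(8, 1)

model.train()                                 # set training mode
epoch_loss = 0.0
n_samples = inputs1.shape[0]
for i in range(n_samples):
    pred = model(inputs1[i:i + 1])            # prediction for current sample
    optimizer.zero_grad()                     # zero the gradients
    loss = criterion(pred, targets[i:i + 1])  # loss for this sample
    loss.backward()                           # compute new gradients
    optimizer.step()                          # take an optimization step
    epoch_loss += loss.item() / n_samples     # update the epoch loss
```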
#### Validation step:
Use .eval() on the model to set it in validation mode, then loop through every sample in our validation data tensor, and in each iteration of the loop:
- get the model's prediction using the current sample of inputs1 and inputs2
- update the validation loss (the loss for the current sample is calculated using whichever metric is specified in the config, then divided by the total number of samples in the validation data and added to the current validation loss)
This step gives us a way to track whether our model is improving, by validating the training using new data, to ensure that our model doesn't only work on our specific training data, but that it is actually being trained to predict the behaviour of our target feature.
The training uses early stopping, which basically means that it stops before 'max_epochs' iterations have been reached, if and when a certain number of epochs go by without any improvement to the model.
**Note:** If you are using Tensorboard's SummaryWriter to log training performance, this is when it gets logged.
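The early-stopping rule can be sketched with a hypothetical helper. Here 'patience' stands in for the unnamed "certain number of epochs" from the text; the function and its name are illustrative assumptions, not the project's code.

```python
# Hypothetical sketch: stop once `patience` consecutive epochs pass without
# any improvement in validation loss, possibly before max_epochs is reached.
def epochs_until_stop(val_losses, patience):
    best = float('inf')
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch  # stopped early
    return len(val_losses) - 1

# Validation loss stalls after epoch 1, so with patience=2 we stop at epoch 3.
stopped_at = epochs_until_stop([5.0, 4.0, 4.0, 4.0, 4.0], patience=2)
```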
#### Testing step:
Here, the function 'performance_test' is used to calculate the model's score using the testing data set. As mentioned at the top of this section, this function calculates the Mean Interval Score along the horizon. This score is saved along with the model, see [section 7](#7.-Updating-the-config,-saving-the-model-&-logs).
## 7. Updating the config, saving the model & logs
The last thing we need to do is save our current model and the relevant scores and logs, so that we can use it in the future and monitor changes in performance.
First, the model is saved using 'modelhandler.save'. By default, this function only saves the model if the most recently achieved score is better than the previous best. In this case, and only if running in exploration mode, the config is updated with the most recent parameters before saving. <br>
If no improvement was achieved, the model can still be saved by opting to do so when prompted. In this case, the config parameters will not be updated before saving.
Saving the model entails calling torch.save() to store the model at the location given by the config parameter "output_path", and then saving the config to "config_path" using the function 'utils.confighandler.write_config'.
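The save-on-improvement logic described above can be sketched as follows. The function and variable names are hypothetical, and we assume a lower score is better, which is consistent with a loss-like metric such as the Mean Interval Score.

```python
# Hedged sketch of the conditional-save behaviour: only save when the new
# score improves on the previous best (assuming lower is better).
def maybe_save(new_score, best_score, save_fn):
    if new_score < best_score:
        save_fn()            # e.g. the torch.save(...) + write_config(...) step
        return new_score     # new best score
    return best_score        # no improvement: keep old best, nothing saved

saved = []
best = maybe_save(0.8, 1.2, lambda: saved.append('model'))   # improvement
best = maybe_save(0.9, best, lambda: saved.append('model'))  # no improvement
```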
Lastly, the logs are saved using the function 'utils.loghandler.end_logging', which writes the logs to a .csv stored at:<br>
MAIN_PATH/\<log_path\>/\<model_name\>/\<model_name\>+"_training.csv", (the same location as where the log file was read in from)
**Note:** Things work a little differently if using Tensorboard logging (TODO: expand) <br>
In this case, when the 'modelhandler.train' function is called during the exploration phase (see [section 4](#4.-Exploration)) the config is automatically updated with the hyperparameter values from the latest trial, right after that trial ends.
```
! nvidia-smi
```
# Install
Install the Transformers library from HuggingFace.
```
! pip install transformers -q
! pip install fastai2 -q
```
# Import
We will import the GPT2 model and tokenizer classes.
```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
```
# Download Pre-trained Model
Download the weights of the pre-trained model named GPT2.
```
pretrained_weights = 'gpt2'
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
```
Use the tokenizer to tokenize the text. With HuggingFace's tokenizer, encode tokenizes and numericalizes (converts tokens to numbers) in a single step.
```
ids = tokenizer.encode("A lab at Florida Atlantic University is simulating a human cough")
ids
```
Alternatively, we can split this into 2 steps:
```
# toks = tokenizer.tokenize("A lab at Florida Atlantic University is simulating a human cough")
# toks, tokenizer.convert_tokens_to_ids(toks)
```
Decode back to the original text:
```
tokenizer.decode(ids)
```
# Generate text
```
import torch
t = torch.LongTensor(ids)[None]
preds = model.generate(t)
preds.shape
preds[0]
tokenizer.decode(preds[0].numpy())
```
# Fastai
```
from fastai2.text.all import *
path = untar_data(URLs.WIKITEXT_TINY)
path.ls()
df_train = pd.read_csv(path/"train.csv", header=None)
df_valid = pd.read_csv(path/"test.csv", header=None)
df_train.head()
all_texts = np.concatenate([df_train[0].values, df_valid[0].values])
len(all_texts)
```
# Creating TransformersTokenizer Transform
We will take the Transformers tokenizer and build a fastai Transform from it by defining encodes, decodes, and setups.
```
class TransformersTokenizer(Transform):
def __init__(self, tokenizer): self.tokenizer = tokenizer
def encodes(self, x):
toks = self.tokenizer.tokenize(x)
return tensor(self.tokenizer.convert_tokens_to_ids(toks))
def decodes(self, x):
return TitledStr(self.tokenizer.decode(x.cpu().numpy()))
```
In encodes we don't use tokenizer.encode, because internally it performs preprocessing beyond tokenization and numericalization that we don't want at this point. decodes returns a TitledStr instead of a plain string so that it supports the show method.
```
# list(range_of(df_train))
# list(range(len(df_train), len(all_texts)))
```
We will put the Transform created above into a TfmdLists, with splits following the order in which we concatenated the data, and set dl_type (the DataLoader type) to LMDataLoader for use in a language model task.
```
splits = [list(range_of(df_train)), list(range(len(df_train), len(all_texts)))]
tls = TfmdLists(all_texts, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader)
# tls
```
Look at the first record of the training set:
```
tls.train[0].shape, tls.train[0]
```
View the same data after decoding:
```
# show_at(tls.train, 0)
```
Look at the first record of the validation set:
```
tls.valid[0].shape, tls.valid[0]
```
View the same data after decoding:
```
# show_at(tls.valid, 0)
```
# DataLoaders
Create DataLoaders to feed the model, with a batch size of 4 and a sequence length of 1024, as used by GPT2.
```
bs, sl = 4, 1024
dls = tls.dataloaders(bs=bs, seq_len=sl)
dls
dls.show_batch(max_n=5)
```
This gives us a DataLoader for a language model whose input and label are offset by 1 token, so the model learns to predict the next word of a sentence.
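The 1-token offset can be illustrated with a toy list of made-up token ids (the ids here are arbitrary assumptions, not GPT2 vocabulary):

```python
# The input is everything except the last token; the label is everything
# except the first, so at each position the label is the next token.
tokens = [101, 7, 42, 13, 9]
model_input = tokens[:-1]   # [101, 7, 42, 13]
label = tokens[1:]          # [7, 42, 13, 9] (the "next token" at each step)
```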
# Preprocessing everything up front
Another approach is to preprocess all of the data in advance.
```
# def tokenize(text):
# toks = tokenizer.tokenize(text)
# return tensor(tokenizer.convert_tokens_to_ids(toks))
# tokenized = [tokenize(t) for t in progress_bar(all_texts)]
# len(tokenized), tokenized[0]
```
We will declare a new TransformersTokenizer in which encodes does nothing (but if the input is not already a Tensor, it tokenizes it).
```
# class TransformersTokenizer(Transform):
# def __init__(self, tokenizer): self.tokenizer = tokenizer
# def encodes(self, x):
# return x if isinstance(x, Tensor) else tokenize(x)
# def decodes(self, x):
# return TitledStr(self.tokenizer.decode(x.cpu().numpy()))
```
Then we create the TfmdLists by passing in tokenized (all of the data, already tokenized).
```
# tls = TfmdLists(tokenized, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader)
# dls = tls.dataloaders(bs=bs, seq_len=sl)
# dls.show_batch(max_n=5)
```
# Fine-tune Model
Because HuggingFace models return their output as a tuple containing the prediction plus additional activations for use in other tasks, which we don't need here, we create an after_pred callback that keeps only the prediction, so it can be passed to the loss function as usual.
```
class DropOutput(Callback):
def after_pred(self): self.learn.pred = self.pred[0]
```
In a callback we can refer to the model's prediction with self.pred, but this is read-only; to write to it we must use the full self.learn.pred.
Now we can create the learner to train the model.
```
learn = None
torch.cuda.empty_cache()
Perplexity??
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), cbs=[DropOutput], metrics=Perplexity()).to_fp16()
learn
```
Check the model's performance before fine-tuning. The first number is the validation loss; the second is the metric, in this case Perplexity.
```
learn.validate()
```
We get a perplexity of 25.6, which is not bad at all.
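As a quick check on what that number means: perplexity is the exponential of the mean cross-entropy loss. A hedged sketch with an illustrative loss value (not the notebook's actual output):

```python
import math

# A validation loss of about 3.243 (assumed for illustration) corresponds
# to a perplexity of about 25.6, since perplexity = exp(loss).
validation_loss = 3.2426
perplexity = math.exp(validation_loss)
```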
# Training
Before starting training, we call lr_find to find a learning rate.
```
learn.lr_find()
```
Then train for just 1 epoch.
```
learn.fit_one_cycle(1, 3e-5)
```
We trained for just 1 epoch without tuning anything, so the model didn't improve much, because it was already very good. Next, let's use the model to generate text, following the style of an example from the validation set.
```
df_valid.head(1)
prompt = "\n = Modern economy = \n \n The modern economy is driven by data, and that trend is being accelerated by"
prompt_ids = tokenizer.encode(prompt)
# prompt_ids
inp = torch.LongTensor(prompt_ids)[None].cuda()
inp.shape
preds = learn.model.generate(inp, max_length=50, num_beams=5, temperature=1.6)
preds.shape
preds[0]
tokenizer.decode(preds[0].cpu().numpy())
```
# Credit
* https://dev.fast.ai/tutorial.transformers
* https://github.com/huggingface/transformers
# An Introduction to SageMaker Random Cut Forests
***Unsupervised anomaly detection on time series data using the Random Cut Forest algorithm.***
---
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Training](#Training)
1. [Inference](#Inference)
1. [Epilogue](#Epilogue)
# Introduction
***
Amazon SageMaker Random Cut Forest (RCF) is an algorithm designed to detect anomalous data points within a dataset. Examples of when anomalies are important to detect include when website activity uncharacteristically spikes, when temperature data diverges from a periodic behavior, or when changes to public transit ridership reflect the occurrence of a special event.
In this notebook, we will use the SageMaker RCF algorithm to train an RCF model on the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset, which records the amount of New York City taxi ridership over the course of six months. We will then use this model to predict anomalous events by emitting an "anomaly score" for each data point. The main goals of this notebook are,
* to learn how to obtain, transform, and store data for use in Amazon SageMaker;
* to create an AWS SageMaker training job on a data set to produce an RCF model,
* to use the RCF model to perform inference with an Amazon SageMaker endpoint.
The following are ***not*** goals of this notebook:
* deeply understand the RCF model,
* understand how the Amazon SageMaker RCF algorithm works.
If you would like to know more please check out the [SageMaker RCF Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html).
# Setup
***
*This notebook was created and tested on an ml.m4.xlarge notebook instance.*
Our first step is to setup our AWS credentials so that AWS SageMaker can store and access training data and model artifacts. We also need some data to inspect and to train upon.
## Select Amazon S3 Bucket
We first need to specify the locations where we will store our training data and trained model artifacts. ***This is the only cell of this notebook that you will need to edit.*** In particular, we need the following data:
* `bucket` - An S3 bucket accessible by this account.
* `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)
```
import boto3
import botocore
import sagemaker
import sys
bucket = '' # <--- specify a bucket you have access to
prefix = 'sagemaker/rcf-benchmarks'
execution_role = sagemaker.get_execution_role()
# check if the bucket exists
try:
boto3.Session().client('s3').head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print('Hey! You either forgot to specify your S3 bucket'
' or you gave your bucket an invalid name!')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == '403':
print("Hey! You don't have permission to access the bucket, {}.".format(bucket))
elif e.response['Error']['Code'] == '404':
print("Hey! Your bucket, {}, doesn't exist!".format(bucket))
else:
raise
else:
print('Training input/output will be stored in: s3://{}/{}'.format(bucket, prefix))
```
## Obtain and Inspect Example Data
Our data comes from the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset [[1](https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv)]. These data consist of the number of New York City taxi passengers over the course of six months aggregated into 30-minute buckets. We know, a priori, that there are anomalous events occurring during the NYC marathon, Thanksgiving, Christmas, New Year's day, and on the day of a snow storm.
> [1] https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv
```
%%time
import pandas as pd
import urllib.request
data_filename = 'nyc_taxi.csv'
data_source = 'https://raw.githubusercontent.com/numenta/NAB/master/data/realKnownCause/nyc_taxi.csv'
urllib.request.urlretrieve(data_source, data_filename)
taxi_data = pd.read_csv(data_filename, delimiter=',')
```
Before training any models it is important to inspect our data, first. Perhaps there are some underlying patterns or structures that we could provide as "hints" to the model or maybe there is some noise that we could pre-process away. The raw data looks like this:
```
taxi_data.head()
```
Human beings are visual creatures so let's take a look at a plot of the data.
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.dpi'] = 100
taxi_data.plot()
```
Human beings are also extraordinarily good at perceiving patterns. Note, for example, that something uncharacteristic occurs at around datapoint number 6000. Additionally, as we might expect with taxi ridership, the passenger count appears more or less periodic. Let's zoom in to not only examine this anomaly but also to get a better picture of what the "normal" data looks like.
```
taxi_data[5500:6500].plot()
```
Here we see that the number of taxi trips taken is mostly periodic with one mode of length approximately 50 data points. In fact, the mode is length 48 since each datapoint represents a 30-minute bin of ridership count. Therefore, we expect another mode of length $336 = 48 \times 7$, the length of a week. Smaller frequencies over the course of the day occur, as well.
For example, here is the data across the day containing the above anomaly:
```
taxi_data[5952:6000]
```
# Training
***
Next, we configure a SageMaker training job to train the Random Cut Forest (RCF) algorithm on the taxi cab data.
## Hyperparameters
Particular to a SageMaker RCF training job are the following hyperparameters:
* **`num_samples_per_tree`** - the number of randomly sampled data points sent to each tree. As a general rule, `1/num_samples_per_tree` should approximate the estimated ratio of anomalies to normal points in the dataset.
* **`num_trees`** - the number of trees to create in the forest. Each tree learns a separate model from different samples of data. The full forest model uses the mean predicted anomaly score from each constituent tree.
* **`feature_dim`** - the dimension of each data point.
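As a rough illustration of the `num_samples_per_tree` rule of thumb above (the 0.2% anomaly fraction here is an assumption chosen purely for illustration):

```python
# If we expect roughly 0.2% of points to be anomalous, then
# 1/num_samples_per_tree should be about 0.002.
expected_anomaly_fraction = 0.002
num_samples_per_tree = round(1 / expected_anomaly_fraction)  # 500
```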
In addition to these RCF model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,
* Recommended instance type: `ml.m4`, `ml.c4`, or `ml.c5`
* Current limitations:
* The RCF algorithm does not take advantage of GPU hardware.
```
from sagemaker import RandomCutForest
session = sagemaker.Session()
# specify general training job information
rcf = RandomCutForest(role=execution_role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
data_location='s3://{}/{}/'.format(bucket, prefix),
output_path='s3://{}/{}/output'.format(bucket, prefix),
num_samples_per_tree=512,
num_trees=50)
# automatically upload the training data to S3 and run the training job
rcf.fit(rcf.record_set(taxi_data.value.to_numpy().reshape(-1,1)))  # .as_matrix() is removed in modern pandas
```
If you see the message
> `===== Job Complete =====`
at the bottom of the output logs then that means training successfully completed and the output RCF model was stored in the specified output path. You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab and select training job matching the training job name, below:
```
print('Training job name: {}'.format(rcf.latest_training_job.job_name))
```
# Inference
***
A trained Random Cut Forest model does nothing on its own. We now want to use the model we computed to perform inference on data. In this case, it means computing anomaly scores from input time series data points.
We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up. We recommend using the `ml.c5` instance type as it provides the fastest inference time at the lowest cost.
```
rcf_inference = rcf.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
)
```
Congratulations! You now have a functioning SageMaker RCF inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
```
print('Endpoint name: {}'.format(rcf_inference.endpoint))
```
## Data Serialization/Deserialization
We can pass data in a variety of formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `csv_serializer` and `json_deserializer` when configuring the inference endpoint.
```
from sagemaker.predictor import csv_serializer, json_deserializer
rcf_inference.content_type = 'text/csv'
rcf_inference.serializer = csv_serializer
rcf_inference.accept = 'application/json'
rcf_inference.deserializer = json_deserializer
```
Let's pass the training dataset, in CSV format, to the inference endpoint so we can automatically detect the anomalies we saw with our eyes in the plots, above. Note that the serializer and deserializer will automatically take care of the datatype conversion from Numpy NDArrays.
For starters, let's only pass in the first six datapoints so we can see what the output looks like.
```
taxi_data_numpy = taxi_data.value.to_numpy().reshape(-1,1)  # .as_matrix() is removed in modern pandas
print(taxi_data_numpy[:6])
results = rcf_inference.predict(taxi_data_numpy[:6])
```
## Computing Anomaly Scores
Now, let's compute and plot the anomaly scores from the entire taxi dataset.
```
results = rcf_inference.predict(taxi_data_numpy)
scores = [datum['score'] for datum in results['scores']]
# add scores to taxi data frame and print first few values
taxi_data['score'] = pd.Series(scores, index=taxi_data.index)
taxi_data.head()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
#
# *Try this out* - change `start` and `end` to zoom in on the
# anomaly found earlier in this notebook
#
start, end = 0, len(taxi_data)
#start, end = 5500, 6500
taxi_data_subset = taxi_data[start:end]
ax1.plot(taxi_data_subset['value'], color='C0', alpha=0.8)
ax2.plot(taxi_data_subset['score'], color='C1')
ax1.grid(which='major', axis='both')
ax1.set_ylabel('Taxi Ridership', color='C0')
ax2.set_ylabel('Anomaly Score', color='C1')
ax1.tick_params('y', colors='C0')
ax2.tick_params('y', colors='C1')
ax1.set_ylim(0, 40000)
ax2.set_ylim(min(scores), 1.4*max(scores))
fig.set_figwidth(10)
```
Note that the anomaly score spikes where our eyeball-norm method suggests there is an anomalous data point as well as in some places where our eyeballs are not as accurate.
Below we print and plot any data points with scores greater than 3 standard deviations (approx 99.9th percentile) from the mean score.
```
score_mean = taxi_data['score'].mean()
score_std = taxi_data['score'].std()
score_cutoff = score_mean + 3*score_std
anomalies = taxi_data_subset[taxi_data_subset['score'] > score_cutoff]
anomalies
```
The following is a list of known anomalous events which occurred in New York City within this timeframe:
* `2014-11-02` - NYC Marathon
* `2015-01-01` - New Year's Day
* `2015-01-27` - Snowstorm
Note that our algorithm managed to capture these events along with quite a few others. Below we add these anomalies to the score plot.
```
ax2.plot(anomalies.index, anomalies.score, 'ko')
fig
```
With the current hyperparameter choices we see that the three-standard-deviation threshold, while able to capture the known anomalies as well as the ones apparent in the ridership plot, is rather sensitive to fine-grained perturbations and anomalous behavior. Adding trees to the SageMaker RCF model could smooth out the results, as could using a larger data set.
## Stop and Delete the Endpoint
Finally, we should delete the endpoint before we close the notebook.
To do so execute the cell below. Alternately, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu.
```
sagemaker.Session().delete_endpoint(rcf_inference.endpoint)
```
# Epilogue
---
We used Amazon SageMaker Random Cut Forest to detect anomalous datapoints in a taxi ridership dataset. In these data the anomalies occurred when ridership was uncharacteristically high or low. However, the RCF algorithm is also capable of detecting when, for example, data breaks periodicity or uncharacteristically changes global behavior.
Depending on the kind of data you have there are several ways to improve algorithm performance. One method, for example, is to use an appropriate training set. If you know that a particular set of data is characteristic of "normal" behavior then training on said set of data will more accurately characterize "abnormal" data.
Another improvement is make use of a windowing technique called "shingling". This is especially useful when working with periodic data with known period, such as the NYC taxi dataset used above. The idea is to treat a period of $P$ datapoints as a single datapoint of feature length $P$ and then run the RCF algorithm on these feature vectors. That is, if our original data consists of points $x_1, x_2, \ldots, x_N \in \mathbb{R}$ then we perform the transformation,
```
data = [[x_1], shingled_data = [[x_1, x_2, ..., x_{P}],
[x_2], ---> [x_2, x_3, ..., x_{P+1}],
... ...
[x_N]] [x_{N-P}, ..., x_{N}]]
```
```
import numpy as np
def shingle(data, shingle_size):
num_data = len(data)
shingled_data = np.zeros((num_data-shingle_size, shingle_size))
for n in range(num_data - shingle_size):
shingled_data[n] = data[n:(n+shingle_size)]
return shingled_data
# shingle data with shingle_size=48 (one day)
shingle_size = 48
prefix_shingled = 'sagemaker/randomcutforest_shingled'
taxi_data_shingled = shingle(taxi_data.values[:,1], shingle_size)
print(taxi_data_shingled)
```
We create a new training job and an inference endpoint. (Note that we cannot re-use the endpoint created above because it was trained with one-dimensional data.)
```
session = sagemaker.Session()
# specify general training job information
rcf = RandomCutForest(role=execution_role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
data_location='s3://{}/{}/'.format(bucket, prefix_shingled),
output_path='s3://{}/{}/output'.format(bucket, prefix_shingled),
num_samples_per_tree=512,
num_trees=50)
# automatically upload the training data to S3 and run the training job
rcf.fit(rcf.record_set(taxi_data_shingled))
from sagemaker.predictor import csv_serializer, json_deserializer
rcf_inference = rcf.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
)
rcf_inference.content_type = 'text/csv'
rcf_inference.serializer = csv_serializer
rcf_inference.accept = 'application/json'
rcf_inference.deserializer = json_deserializer
```
Using the above inference endpoint we compute the anomaly scores associated with the shingled data.
```
# Score the shingled datapoints
results = rcf_inference.predict(taxi_data_shingled)
scores = np.array([datum['score'] for datum in results['scores']])
# compute the shingled score distribution and cutoff and determine anomalous scores
score_mean = scores.mean()
score_std = scores.std()
score_cutoff = score_mean + 3*score_std
anomalies = scores[scores > score_cutoff]
anomaly_indices = np.arange(len(scores))[scores > score_cutoff]
print(anomalies)
```
Finally, we plot the scores from the shingled data on top of the original dataset and mark the score lying above the anomaly score threshold.
```
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
#
# *Try this out* - change `start` and `end` to zoom in on the
# anomaly found earlier in this notebook
#
start, end = 0, len(taxi_data)
taxi_data_subset = taxi_data[start:end]
ax1.plot(taxi_data['value'], color='C0', alpha=0.8)
ax2.plot(scores, color='C1')
ax2.scatter(anomaly_indices, anomalies, color='k')
ax1.grid(which='major', axis='both')
ax1.set_ylabel('Taxi Ridership', color='C0')
ax2.set_ylabel('Anomaly Score', color='C1')
ax1.tick_params('y', colors='C0')
ax2.tick_params('y', colors='C1')
ax1.set_ylim(0, 40000)
ax2.set_ylim(min(scores), 1.4*max(scores))
fig.set_figwidth(10)
```
We see that with this particular shingle size, hyperparameter selection, and anomaly cutoff threshold, the shingled approach more clearly captures the major anomalous events: the spike at around t=6000 and the dips at around t=9000 and t=10000. In general, the number of trees, sample size, and anomaly score cutoff are all parameters that a data scientist may need to experiment with in order to achieve desired results. The use of a labeled test dataset allows the user to obtain common accuracy metrics for anomaly detection algorithms. For more information about Amazon SageMaker Random Cut Forest see the [AWS Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html).
```
sagemaker.Session().delete_endpoint(rcf_inference.endpoint)
```
# This is the Saildrone and GOES collocation code.
Trying to get open_mfdataset to work with OPeNDAP data.
```
import os
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
import xarray as xr
import requests
def get_sat_filename(date):
dir_sat='https://opendap.jpl.nasa.gov/opendap/OceanTemperature/ghrsst/data/GDS2/L3C/AMERICAS/GOES16/OSISAF/v1/'
syr, smon, sdym = str(date.dt.year.data), str(date.dt.month.data).zfill(2), str(date.dt.day.data).zfill(2)
sjdy, shr = str(date.dt.dayofyear.data).zfill(2),str(date.dt.hour.data).zfill(2)
if date.dt.hour.data==0:
datetem = date - np.timedelta64(1,'D')
sjdy = str(datetem.dt.dayofyear.data).zfill(2)
# syr, smon, sdym = str(datetem.dt.year.data), str(datetem.dt.month.data).zfill(2), str(datetem.dt.day.data).zfill(2)
fgoes='0000-OSISAF-L3C_GHRSST-SSTsubskin-GOES16-ssteqc_goes16_'
dstr=syr+smon+sdym+shr
dstr2=syr+smon+sdym+'_'+shr
sat_filename=dir_sat+syr+'/'+sjdy+'/'+ dstr + fgoes +dstr2+'0000-v02.0-fv01.0.nc'
r = requests.get(sat_filename)
if r.status_code != requests.codes.ok:
exists = False
else:
exists = True
print(exists,sat_filename)
return sat_filename, exists
```
# Read in USV data
Read in the Saildrone USV file either from a local disc or using OpenDAP.
There are 6 NaN values in the lat/lon data arrays; we interpolate across these.
We want to collocate with wind vectors for this example, but the wind vectors are only available every 10 minutes rather than every minute, so we use .dropna to remove values from all data arrays in the dataset whenever wind vectors aren't available.
```
filename_collocation_data = 'F:/data/cruise_data/saildrone/baja-2018/ccmp_collocation_data.nc'
#filename_usv = 'https://podaac-opendap.jpl.nasa.gov/opendap/hyrax/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
filename_usv='f:/data/cruise_data/saildrone/baja-2018/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
ds_usv = xr.open_dataset(filename_usv)
ds_usv.close()
ds_usv = ds_usv.isel(trajectory=0).swap_dims({'obs':'time'}).rename({'longitude':'lon','latitude':'lat'})
ds_usv = ds_usv.sel(time=slice('2018-04-11T18:30',ds_usv.time[-1].data)) #first part of data is when the USV was being towed; eliminate it
ds_usv['lon'] = ds_usv.lon.interpolate_na(dim='time',method='linear') #there are 6 nan values
ds_usv['lat'] = ds_usv.lat.interpolate_na(dim='time',method='linear')
ds_usv['wind_speed']=np.sqrt(ds_usv.UWND_MEAN**2+ds_usv.VWND_MEAN**2)
ds_usv['wind_dir']=np.arctan2(ds_usv.VWND_MEAN,ds_usv.UWND_MEAN)*180/np.pi
ds_usv_subset = ds_usv.copy(deep=True)
#ds_usv_subset = ds_usv.dropna(dim='time',subset={'UWND_MEAN'}) #get rid of all the nan
#print(ds_usv_subset.UWND_MEAN[2000:2010].values)
```
In order to use open_mfdataset you need to either provide a path or a list of filenames to input
Here we use the USV cruise start and end date to read in all data for that period
```
read_date,end_date = ds_usv_subset.time.min(),ds_usv_subset.time.max()
filelist = []
while read_date<=(end_date+np.timedelta64(1,'h')):
#while read_date<=(ds_usv_subset.time.min()+np.timedelta64(10,'h')):
tem_filename,exists = get_sat_filename(read_date)
if exists:
filelist.append(tem_filename)
read_date=read_date+np.timedelta64(1,'h')
print(filelist[0])
```
# Read in GOES data
Read in data using open_mfdataset with the option coords='minimal'
The dataset is printed out and you can see that, rather than a plain xarray data array for each of the data variables, open_mfdataset uses Dask arrays.
```
ds_sat = xr.open_mfdataset(filelist,coords='minimal')
ds_sat
```
# Xarray interpolation won't run on chunked dimensions.
1. First let's subset the data to make it smaller to deal with by using the cruise lat/lons
1. Now load the data into memory (de-Dask-ify) it
```
#Step 1 from above
subset = ds_sat.sel(lon=slice(ds_usv_subset.lon.min().data,ds_usv_subset.lon.max().data),
lat=slice(ds_usv_subset.lat.min().data,ds_usv_subset.lat.max().data))
#Step 2 from above
subset.load()
#now collocate with usv lat and lons
ds_collocated = subset.interp(lat=ds_usv_subset.lat,lon=ds_usv_subset.lon,time=ds_usv_subset.time,method='linear')
ds_collocated_nearest = subset.interp(lat=ds_usv_subset.lat,lon=ds_usv_subset.lon,time=ds_usv_subset.time,method='nearest')
```
# A larger STD that isn't reflective of uncertainty in the observation
The collocation above will result in multiple USV data points matched with a single satellite
observation. The USV samples every 1 min and roughly every few meters, while the satellite
is an average over a footprint that is interpolated onto a daily mean map. Calculating the mean would still give a valid mean, but the STD would be inflated: it would combine a component that reflects the uncertainty of the USV and the satellite with a component that reflects the natural variability of the region sampled by the USV.
Below we use the 'nearest' collocation results to identify when multiple USV data points are collocated to
a single satellite observation.
This code goes through the data and creates averages of the USV data that match the single collocated satellite value.
```
ds_tem = ds_collocated_nearest.copy(deep=True)
ds_tem.dims['time']
index=302
ds_tem_subset = ds_tem.analysed_sst[index:index+1000]
cond = ((ds_tem_subset==ds_collocated_nearest.analysed_sst[index]))
notcond = np.logical_not(cond)
#cond = np.append(np.full(index,True),cond)
#cond = np.append(cond,np.full(ilen-index-1000,True))
#cond.shape
print(cond[0:5].data)
print(ds_tem.analysed_sst[index:index+5].data)
ds_tem.analysed_sst[index:index+1000]=ds_tem.analysed_sst.where(notcond)
print(ds_tem.analysed_sst[index:index+5].data)
print(ds_collocated_nearest.analysed_sst[300:310].data)
print(ds_collocated_nearest.time.dt.day[300:310].data)
index=302
ilen = ds_tem.dims['time']
#cond = ((ds_tem.analysed_sst[index:index+1000]==ds_collocated_nearest.analysed_sst[index])
# & (ds_tem.time.dt.day[index:index+1000]==ds_collocated_nearest.time.dt.day[index])
# & (ds_tem.time.dt.hour[index:index+1000]==ds_collocated_nearest.time.dt.hour[index]))
cond = ((ds_tem.analysed_sst[index:index+1000]==ds_collocated_nearest.analysed_sst[index]))
#cond = np.append(np.full(index,True),cond)
#cond = np.append(cond,np.full(ilen-index-1000,True))
print(cond[index:index+10].data)
print(np.logical_not(cond[index+10]).data)
masked_usv = ds_usv_subset.where(cond,drop=True)
#ds_collocated_nearest
#print(ds_collocated_nearest.uwnd[244:315].data)
#print(masked_usv.UWND_MEAN[244:315].data)
#print(masked_usv.UWND_MEAN[244:315].mean().data)
#print(masked_usv.time.min().data)
#print(masked_usv.time.max().data)
#print(masked_usv.lon.min().data)
#print(masked_usv.lon.max().data)
#print(masked_usv.time[0].data,masked_usv.time[-1].data)
ilen,index = ds_collocated_nearest.dims['time'],0
ds_tem = ds_collocated_nearest.copy(deep=True)
duu, duv1, duv2, dlat, dlon, dut = [],[],[],[],[],np.empty((),dtype='datetime64')
while index <= ilen-2:
index += 1
if np.isnan(ds_collocated_nearest.analysed_sst[index]):
continue
if np.isnan(ds_tem.analysed_sst[index]):
continue
# print(index, ilen)
iend = index + 1000
if iend > ilen-1:
iend = ilen-1
ds_tem_subset = ds_tem.analysed_sst[index:iend]
ds_usv_subset2sst = ds_usv_subset.TEMP_CTD_MEAN[index:iend]
ds_usv_subset2uwnd = ds_usv_subset.UWND_MEAN[index:iend]
ds_usv_subset2vwnd = ds_usv_subset.VWND_MEAN[index:iend]
ds_usv_subset2lat = ds_usv_subset.lat[index:iend]
ds_usv_subset2lon = ds_usv_subset.lon[index:iend]
ds_usv_subset2time = ds_usv_subset.time[index:iend]
cond = ((ds_tem_subset==ds_collocated_nearest.analysed_sst[index]))
notcond = np.logical_not(cond)
#cond = ((ds_tem.analysed_sst==ds_collocated_nearest.analysed_sst[index]))
#notcond = np.logical_not(cond)
masked = ds_tem_subset.where(cond)
if masked.sum().data==0: #don't do if data not found
continue
masked_usvsst = ds_usv_subset2sst.where(cond,drop=True)
masked_usvuwnd = ds_usv_subset2uwnd.where(cond,drop=True)
masked_usvvwnd = ds_usv_subset2vwnd.where(cond,drop=True)
masked_usvlat = ds_usv_subset2lat.where(cond,drop=True)
masked_usvlon = ds_usv_subset2lon.where(cond,drop=True)
masked_usvtime = ds_usv_subset2time.where(cond,drop=True)
duu=np.append(duu,masked_usvsst.mean().data)
duv1=np.append(duv1,masked_usvuwnd.mean().data)
duv2=np.append(duv2,masked_usvvwnd.mean().data)
dlat=np.append(dlat,masked_usvlat.mean().data)
dlon=np.append(dlon,masked_usvlon.mean().data)
tdif = masked_usvtime[-1].data-masked_usvtime[0].data
mtime=masked_usvtime[0].data+np.timedelta64(tdif/2,'ns')
dut=np.append(dut,mtime)
ds_tem.analysed_sst[index:iend]=ds_tem.analysed_sst.where(notcond)
# ds_tem=ds_tem.where(notcond,np.nan) #masked used values by setting to nan
dut2 = dut[1:] #remove first data point which is a repeat from what array defined
ds_new=xr.Dataset(data_vars={'sst_usv': ('time',duu),'uwnd_usv': ('time',duv1),'vwnd_usv': ('time',duv2),
'lon': ('time',dlon),
'lat': ('time',dlat)},
coords={'time':dut2})
ds_new.to_netcdf('F:/data/cruise_data/saildrone/baja-2018/goes_downsampled_usv_data2.nc')
```
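The averaging loop above is essentially a group-by: collect all USV samples that matched the same nearest satellite value and take their mean. A compact pandas sketch of the same idea on synthetic arrays (the column names here are illustrative, not the notebook's variables):

```python
import numpy as np
import pandas as pd

# Three USV samples per satellite cell: the nearest-neighbor matched value repeats
matched_sat = np.repeat([290.1, 290.4, 291.0], 3)
usv_sst = np.array([17.0, 17.2, 17.1, 17.8, 18.0, 17.9, 18.5, 18.4, 18.6])

df = pd.DataFrame({"sat_sst": matched_sat, "usv_sst": usv_sst})

# One averaged USV value per unique satellite observation
downsampled = df.groupby("sat_sst", as_index=False).mean()
print(downsampled)
```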
# Redo the collocation
Now redo the collocation using 'linear' interpolation on the averaged data. This interpolates the satellite data temporally onto the USV sampling, which has been averaged to the satellite data grid points.
```
ds_collocated_averaged = subset.interp(lat=ds_new.lat,lon=ds_new.lon,time=ds_new.time,method='linear')
ds_collocated_averaged
ds_collocated_averaged.to_netcdf('F:/data/cruise_data/saildrone/baja-2018/mur_downsampled_collocated_usv_data2.nc')
sat_sst = ds_collocated_averaged.analysed_sst[:-19]-273.15
usv_sst = ds_new.sst_usv[:-19]
ds_new['spd']=np.sqrt(ds_new.uwnd_usv**2+ds_new.vwnd_usv**2)
usv_spd = ds_new.spd[:-19]
dif_sst = sat_sst - usv_sst
print('mean,std dif ',[dif_sst.mean().data,dif_sst.std().data,dif_sst.shape[0]])
plt.plot(usv_spd,dif_sst,'.')
sat_sst = ds_collocated_averaged.analysed_sst[:-19]-273.15
usv_sst = ds_new.sst_usv[:-19]
dif_sst = sat_sst - usv_sst
cond = usv_spd>2
dif_sst = dif_sst.where(cond)
print('no low wind mean,std dif ',[dif_sst.mean().data,dif_sst.std().data,sum(cond).data])
plt.plot(usv_spd,dif_sst,'.')
fig, ax = plt.subplots(figsize=(5,4))
ax.plot(sat_sst,sat_sst-usv_sst,'.')
ax.set_xlabel('Satellite SST ($^\circ$C)')
ax.set_ylabel('Sat - USV SST ($^\circ$C)')
fig_fname='F:/data/cruise_data/saildrone/baja-2018/figs/sat_sst_both_bias.png'
fig.savefig(fig_fname, transparent=False, format='png')
plt.plot(dif_sst[:-19],'.')
#faster not sure why
ilen,index = ds_collocated_nearest.dims['time'],0
ds_tem = ds_collocated_nearest.copy(deep=True)
duu,dvu, dlat, dlon, dut = [],[],[],[],np.empty((),dtype='datetime64')
while index <= ilen-2:
index += 1
if np.isnan(ds_collocated_nearest.uwnd[index]):
continue
test = ds_collocated_nearest.where((ds_tem.uwnd==ds_collocated_nearest.uwnd[index])&(ds_tem.vwnd==ds_collocated_nearest.vwnd[index]))
test = test/test
if test.uwnd.sum()>0:
duu=np.append(duu,(ds_usv_subset.UWND_MEAN*test.uwnd).mean().data)
dvu=np.append(dvu,(ds_usv_subset.VWND_MEAN*test.vwnd).mean().data)
dlat=np.append(dlat,(ds_usv_subset.lat*test.lat).mean().data)
dlon=np.append(dlon,(ds_usv_subset.lon*test.lon).mean().data)
tdif = ds_usv_subset.time.where(test.vwnd==1).max().data-ds_usv_subset.time.where(test.vwnd==1).min().data
mtime=ds_usv_subset.time.where(test.vwnd==1).min().data+np.timedelta64(tdif/2,'ns')
dut=np.append(dut,mtime)
ds_tem=ds_tem.where(np.isnan(test),np.nan) #you have used values, so set to nan
dut2 = dut[1:] #remove first data point which is a repeat from what array defined
ds_new2=xr.Dataset(data_vars={'u_usv': ('time',duu),
'v_usv': ('time',dvu),
'lon': ('time',dlon),
'lat': ('time',dlat)},
coords={'time':dut2})
#testing code above
ds_tem = ds_collocated_nearest.copy(deep=True)
print(ds_collocated_nearest.uwnd[1055].data)
print(ds_collocated_nearest.uwnd[1050:1150].data)
test = ds_collocated_nearest.where((ds_collocated_nearest.uwnd==ds_collocated_nearest.uwnd[1055])&(ds_collocated_nearest.vwnd==ds_collocated_nearest.vwnd[1055]))
test = test/test
print(test.uwnd[1050:1150].data)
ds_tem=ds_tem.where(np.isnan(test),np.nan)
print(ds_tem.uwnd[1050:1150].data)
print((ds_usv_subset.UWND_MEAN*test.uwnd).mean())
print((ds_usv_subset.VWND_MEAN*test.vwnd).mean())
from scipy.interpolate import griddata
# interpolate
points = (ds_usv_subset.lon.data,ds_usv_subset.lat.data)
grid_in_lon,grid_in_lat = np.meshgrid(subset.lon.data,subset.lat.data)
grid_in = (grid_in_lon,grid_in_lat)
values = ds_usv_subset.UWND_MEAN.data
#print(points.size)
zi = griddata(points,values,grid_in,method='linear',fill_value=np.nan)
zi2 = griddata(points,values/values,grid_in,method='linear',fill_value=np.nan)
print(np.isfinite(zi).sum())
plt.pcolormesh(subset.lon,subset.lat,zi,vmin=-5,vmax=5)
plt.plot(ds_usv_subset.lon,ds_usv_subset.lat,'.')
#plt.contourf(subset.uwnd[0,:,:])
len(points[0])
from scipy.interpolate.interpnd import _ndim_coords_from_arrays
from scipy.spatial import cKDTree
THRESHOLD=1
# Construct kd-tree, functionality copied from scipy.interpolate
tree = cKDTree(points)
xi = _ndim_coords_from_arrays(grid_in, ndim=2)  # the grid is 2-D (lon, lat)
dists, indexes = tree.query(xi)
# Copy the gridded result but mask cells far from any observation with NaNs
result3 = zi.copy()  # zi is the griddata result computed above
result3[dists > THRESHOLD] = np.nan
# Show
plt.figimage(result3)
plt.show()
#testing
index=300
ds_tem = ds_collocated_nearest.copy(deep=True)
cond = ((ds_tem.uwnd==ds_collocated_nearest.uwnd[index]) & (ds_tem.vwnd==ds_collocated_nearest.vwnd[index]))
notcond = ((ds_tem.uwnd!=ds_collocated_nearest.uwnd[index]) & (ds_tem.vwnd!=ds_collocated_nearest.vwnd[index]))
masked = ds_tem.where(cond)
masked_usv = ds_usv_subset.where(cond,drop=True)
print(masked.uwnd.sum().data)
#print(masked.nobs[290:310].data)
print((masked_usv.UWND_MEAN).mean().data)
print(ds_tem.uwnd[243:316])
ds_tem=ds_tem.where(notcond,np.nan) #you have used values, so set to nan
print(ds_tem.uwnd[243:316])
ilen,index = ds_collocated_nearest.dims['time'],0
ds_tem = ds_collocated_nearest.copy(deep=True)
duu, duv1, duv2, dlat, dlon, dut = [],[],[],[],[],np.empty((),dtype='datetime64')
while index <= ilen-2:
index += 1
if np.isnan(ds_collocated_nearest.analysed_sst[index]):
continue
if np.isnan(ds_tem.analysed_sst[index]):
continue
print(index, ilen)
cond = ((ds_tem.analysed_sst==ds_collocated_nearest.analysed_sst[index])
& (ds_tem.time.dt.day==ds_collocated_nearest.time.dt.day[index])
& (ds_tem.time.dt.hour==ds_collocated_nearest.time.dt.hour[index]))
notcond = np.logical_not(cond)
masked = ds_tem.where(cond)
masked_usv = ds_usv_subset.where(cond,drop=True)
if masked.analysed_sst.sum().data==0: #don't do if data not found
continue
duu=np.append(duu,masked_usv.TEMP_CTD_MEAN.mean().data)
    duv1=np.append(duv1,masked_usv.UWND_MEAN.mean().data)
    duv2=np.append(duv2,masked_usv.VWND_MEAN.mean().data)
dlat=np.append(dlat,masked_usv.lat.mean().data)
dlon=np.append(dlon,masked_usv.lon.mean().data)
tdif = masked_usv.time[-1].data-masked_usv.time[0].data
mtime=masked_usv.time[0].data+np.timedelta64(tdif/2,'ns')
dut=np.append(dut,mtime)
ds_tem=ds_tem.where(notcond,np.nan) #masked used values by setting to nan
dut2 = dut[1:] #remove first data point which is a repeat from what array defined
ds_new=xr.Dataset(data_vars={'sst_usv': ('time',duu),'uwnd_usv': ('time',duv1),'vwnd_usv': ('time',duv2),
'lon': ('time',dlon),
'lat': ('time',dlat)},
coords={'time':dut2})
ds_new.to_netcdf('F:/data/cruise_data/saildrone/baja-2018/mur_downsampled_usv_data.nc')
```
**Course Announcements**
Due Friday (11:59 PM):
- D8
- Q8
- A4
- weekly project survey (*optional*)
# Geospatial Analysis
- Analysis:
- Exploratory Spatial Data Analysis
- K-Nearest Neighbors
- Tools:
- `shapely` - create and manipulate shape objects
- `geopandas` - shapely + dataframe + visualization
Today's notes are adapted from the [Scipy 2018 Tutorial - Introduction to Geospatial Data Analysis with Python](https://github.com/geopandas/scipy2018-geospatial-data).
To get all notes and examples from this workshop, do the following:
```
git clone https://github.com/geopandas/scipy2018-geospatial-data # get materials
conda env create -f environment.yml # download packages
python check_environment.py # check environment
```
Additional resource for mapping data with `geopandas`: http://darribas.org/gds15/content/labs/lab_03.html
```
# uncomment below if not yet installed
# !pip install --user geopandas
# !pip install --user descartes
%matplotlib inline
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (17, 5)
plt.rcParams.update({'font.size': 16})
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import shapely.geometry as shp
import sklearn.neighbors as skn
import sklearn.metrics as skm
import warnings
warnings.filterwarnings('ignore')
pd.options.display.max_rows = 10
#improve resolution
#comment this line if erroring on your machine/screen
%config InlineBackend.figure_format ='retina'
```
# `geopandas` basics
Examples here are from `geopandas` documentation: http://geopandas.org/mapping.html
## The Data
```
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
cities = gpd.read_file(gpd.datasets.get_path('naturalearth_cities'))
world
cities
```
## Population Estimates
```
# Plot population estimates with an accurate legend
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
divider = make_axes_locatable(ax)
world.plot(column='pop_est', ax=ax, legend=True);
# Plot population estimates with a different color scale
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
divider = make_axes_locatable(ax)
world.plot(column='pop_est', ax=ax, cmap='GnBu', legend=True);
```
## GDP per capita
```
# Plot by GDP per capita
# specify data
world = world[(world.pop_est>0) & (world.name!="Antarctica")]
world['gdp_per_cap'] = world.gdp_md_est / world.pop_est
# plot choropleth
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
divider = make_axes_locatable(ax)
world.plot(column='gdp_per_cap', ax = ax, figsize=(17, 6), cmap='GnBu', legend = True);
world[world['gdp_per_cap'] > 0.08]
# combining maps
base = world.plot(column='pop_est', cmap='GnBu')
cities.plot(ax=base, marker='o', color='red', markersize=5);
```
## Geospatial Analysis
- Data
- EDA (Visualization)
- Analysis
### District data: Berlin
```
# berlin districts
df = gpd.read_file('https://raw.githubusercontent.com/geopandas/scipy2018-geospatial-data/master/data/berlin-districts.geojson')
df.shape
df.head()
```
### Exploratory Spatial Data Analysis
```
sns.distplot(df['median_price']);
```
We get an idea of what the median price for listings in this area of Berlin is, but we don't know how this information is spatially related.
```
df.plot(column='median_price', figsize=(18, 12), cmap='GnBu', legend=True);
```
Unless you happen to know something about this area of Germany, interpreting what's going on in this choropleth is likely a little tricky, but we can see there is some variation in median prices across this region.
### Spatial Autocorrelation
Note that if prices were distributed randomly, there would be no clustering of similar values.
To visualize the existence of global spatial autocorrelation, let's take it to the extreme. Let's look at the 68 districts with the highest Airbnb prices and those with the lowest prices.
```
# get data to dichotomize
y = df['median_price']
yb = y > y.median()
labels = ["0 Low", "1 High"]
yb = [labels[i] for i in 1*yb]
df['yb'] = yb
# take a look
fig = plt.figure(figsize=(12,10))
ax = plt.gca()
df.plot(column='yb', cmap='binary',
edgecolor='grey', legend=True, ax=ax);
```
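A standard way to quantify the clustering hinted at above is Moran's I, $I = \frac{n}{S_0}\frac{\sum_{ij} w_{ij} z_i z_j}{\sum_i z_i^2}$ with $z = y - \bar{y}$. As a hedged illustration (this is not the notebook's code; packages such as `esda` provide a full implementation), a tiny numpy version on a 4-site chain:

```python
import numpy as np

def morans_i(y, w):
    """Global Moran's I for values y and spatial weight matrix w."""
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    s0 = w.sum()            # total weight S0
    num = z @ w @ z         # sum_ij w_ij * z_i * z_j
    den = (z ** 2).sum()
    return len(y) / s0 * num / den

# 4 sites on a line; neighbors share an edge (rook-style contiguity)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

y = np.array([1.0, 2.0, 3.0, 4.0])  # smoothly increasing -> positive autocorrelation
print(morans_i(y, w))
```

Positive values indicate that neighboring sites carry similar values, which is exactly the pattern the dichotomized map above is meant to reveal.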
### Airbnb Listings: Berlin
- kernel regressions
- "borrow strength" from nearby observations
A reminder that in geospatial data, there are *two simultaneous senses of what is near*:
- things that are similar in attribute (classical kernel regression)
- things that are similar in spatial position (spatial kernel regression)
### Question
What features would you consider including in a model to predict an Airbnb's nightly price?
First, though, let's try to predict the log of an **Airbnb's nightly price** based on a few factors:
- `accommodates`: the number of people the airbnb can accommodate
- `review_scores_rating`: the aggregate rating of the listing
- `bedrooms`: the number of bedrooms the airbnb has
- `bathrooms`: the number of bathrooms the airbnb has
- `beds`: the number of beds the airbnb offers
### Airbnb Listings: The Data
```
listings = pd.read_csv('https://raw.githubusercontent.com/geopandas/scipy2018-geospatial-data/master/data/berlin-listings.csv.gz')
listings['geometry'] = listings[['longitude', 'latitude']].apply(shp.Point, axis=1)
listings = gpd.GeoDataFrame(listings)
listings.crs = {'init':'epsg:4269'} # coordinate reference system
listings = listings.to_crs(epsg=3857)
listings.shape
listings.head()
```
### Airbnb Listings: Outcome Variable
```
fig, ax = plt.subplots(1, 1, figsize=(11, 7))
divider = make_axes_locatable(ax)
listings.sort_values('price').plot('price', cmap='plasma',
figsize=(10, 18), ax=ax, legend=True);
# distribution of price
sns.distplot(listings['price']);
listings['price_log'] = np.log(listings['price'])
fig, ax = plt.subplots(1, 1, figsize=(11, 7))
divider = make_axes_locatable(ax)
listings.sort_values('price_log').plot('price_log', cmap='plasma',
figsize=(10, 18), ax=ax, legend=True);
# distribution of log price
sns.distplot(listings['price_log'], bins=10);
```
### The Models
```
# get data for attributes model
model_data = listings[['accommodates', 'review_scores_rating',
'bedrooms', 'bathrooms', 'beds',
'price', 'geometry']].dropna()
# specify predictors (X) and outcome (y)
Xnames = ['accommodates', 'review_scores_rating',
'bedrooms', 'bathrooms', 'beds' ]
X = model_data[Xnames].values
X = X.astype(float)
y = np.log(model_data[['price']].values)
```
We'll need the spatial coordinates for each listing...
```
# get spatial coordinates
coordinates = np.vstack(model_data.geometry.apply(lambda p: np.hstack(p.xy)).values)
```
`scikit-learn`'s neighbor regressions are contained in the sklearn.neighbors module, and there are two main types:
- **KNeighborsRegressor** - uses a k-nearest neighborhood of observations around each focal site
- **RadiusNeighborsRegressor** - considers all observations within a fixed radius around each focal site.
Further, these methods can use inverse distance weighting to rank the relative importance of sites around each focal site; in this way, near things are given more weight than far things, even when there are a lot of near things.
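The inverse-distance weighting just described can be sketched directly in numpy, independent of scikit-learn: predict at a focal point by averaging the k nearest training targets with weights $1/d$ (plain Euclidean distances are an assumption here; this mirrors the behavior of `KNeighborsRegressor(weights='distance')`).

```python
import numpy as np

def idw_knn_predict(train_xy, train_y, query_xy, k=3):
    """Inverse-distance-weighted k-nearest-neighbor prediction."""
    preds = []
    for q in np.atleast_2d(query_xy):
        d = np.linalg.norm(train_xy - q, axis=1)
        nn = np.argsort(d)[:k]
        if np.any(d[nn] == 0):  # exact coordinate match: return that target
            preds.append(train_y[nn][d[nn] == 0][0])
            continue
        w = 1.0 / d[nn]
        preds.append(np.sum(w * train_y[nn]) / w.sum())
    return np.array(preds)

train_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
train_y = np.array([1.0, 2.0, 3.0, 10.0])
print(idw_knn_predict(train_xy, train_y, [[0.0, 0.5]], k=2))  # -> [2.0]
```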
#### Training & Test
```
# specify training and test set
shuffle = np.random.permutation(len(y))
num = int(0.8*len(shuffle))
train, test = shuffle[:num],shuffle[num:]
```
#### Three Models
So, let's fit three models:
- `spatial`: using inverse distance weighting on the nearest 100 neighbors geographical space
- `attribute`: using inverse distance weighting on the nearest 100 neighbors in attribute space
- `both`: using inverse distance weighting in both geographical and attribute space.
```
# spatial
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=100)
spatial = KNNR.fit(coordinates[train,:],
y[train,:])
# attribute
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=100)
attribute = KNNR.fit(X[train,:],
y[train,])
# both
KNNR = skn.KNeighborsRegressor(weights='distance', n_neighbors=100)
both = KNNR.fit(np.hstack((coordinates,X))[train,:],
y[train,:])
```
### Performance
To score them, I'm going to look at the scatterplot and get their % explained variance:
#### Training Data
```
# generate predictions in the training set
sp_ypred_train = spatial.predict(coordinates[train,:]) # spatial
att_ypred_train = attribute.predict(X[train,:]) # attribute
both_ypred_train = both.predict(np.hstack((X,coordinates))[train,:]) # combo
# variance explained in training data
(skm.explained_variance_score(y[train,], sp_ypred_train),
skm.explained_variance_score(y[train,], att_ypred_train),
skm.explained_variance_score(y[train,], both_ypred_train))
# take a look at predictions
plt.plot(y[train,], sp_ypred_train, '.')
plt.xlabel('reported')
plt.ylabel('predicted');
```
#### Test Data
```
# generate predictions in the test set
sp_ypred = spatial.predict(coordinates[test,:])
att_ypred = attribute.predict(X[test,:])
both_ypred = both.predict(np.hstack((X,coordinates))[test,:])
(skm.explained_variance_score(y[test,], sp_ypred),
skm.explained_variance_score(y[test,], att_ypred),
skm.explained_variance_score(y[test,], both_ypred))
# take a look at predictions
plt.plot(y[test,], both_ypred, '.')
plt.xlabel('reported')
plt.ylabel('predicted');
```
### Model Improvement
None of these models is performing particularly well...
Considerations for improvement:
- features included in attribute model
- model tuning (i.e. number of nearest neighbors)
- model selected
- etc...
One method that can exploit the fact that local data may be more informative in predicting $y$ at site $i$ than distant data is **Geographically Weighted Regression** (GWR), a type of generalized additive spatial model. Much like a kernel regression, GWR conducts a separate regression at each training site, considering only data near that site. This means it works like the kernel regressions above, but uses *both* the coordinates *and* the data in $X$ to predict $y$ at each site. It optimizes its sense of "local" according to an information criterion or fit score.
You can find this in the `gwr` package, and significant development is ongoing on this at `https://github.com/pysal/gwr`.
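As a toy illustration of the idea (this is not the `gwr` package's API, which should be consulted directly), a geographically weighted regression can be sketched as a distance-weighted least-squares fit at each site, with a Gaussian kernel whose bandwidth is an assumed free parameter:

```python
import numpy as np

def gwr_predict(coords, X, y, site, bandwidth=1.0):
    """Fit one locally weighted regression centered on `site` and predict there."""
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)           # Gaussian spatial kernel
    Xd = np.column_stack([np.ones(len(X)), X])  # add intercept column
    W = np.diag(w)
    # Solve the weighted normal equations (X'WX) beta = X'Wy
    beta, *_ = np.linalg.lstsq(Xd.T @ W @ Xd, Xd.T @ W @ y, rcond=None)
    x_site = Xd[np.argmin(d)]                   # attributes at (or nearest to) the site
    return x_site @ beta

# Synthetic data where the slope of y on x drifts with longitude
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
x = rng.normal(size=200)
slope = 1.0 + 0.3 * coords[:, 0]                # spatially varying coefficient
y = slope * x + rng.normal(scale=0.1, size=200)

pred = gwr_predict(coords, x.reshape(-1, 1), y, coords[0], bandwidth=2.0)
print(pred)
```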
```
'''
First let's import the necessary libraries.
We import re, which stands for regular expressions, because we want to use it to remove the currency symbol
from the price.
'''
from bs4 import BeautifulSoup as bs4
import requests
import pandas as pd
import re
'''
Next we initialize the lists of columns we want to fetch.
'''
pages = []
prices = []
stars = []
titles = []
stock_availibility =[]
urlss = []
'''
This variable holds the number of pages we want to fetch information from.
It could be made dynamic, e.g. by prompting the user for the number of pages they want.
'''
no_pages =5
'''
This loop iterates over the number of pages we want to fetch data from
and appends each URL to the list of pages.
http://books.toscrape.com/catalogue/page-2.html
Looking at the URL above, we can see that the part before .html contains the figure 2,
so it's important that we are able to identify the pattern of the URL.
'''
for i in range (1, no_pages + 1):
url = ('http://books.toscrape.com/catalogue/page-{}.html').format(i)
pages.append(url)
'''
Now that we have the pages we want, we iterate through each page and use the BeautifulSoup library
to easily iterate through all the html tags.
'''
for item in pages:
page = requests.get(item)
soup = bs4(page.text, 'html.parser')
    '''
    Each of these loops appends the information collected to one of the lists we declared above.
    The first two for loops are straightforward, so there is nothing much to explain.
    '''
for i in soup.findAll('h3'):
ttl = i.getText()
titles.append(ttl)
for n in soup.findAll('p', class_='instock availability'):
stk = n.getText().strip()
stock_availibility.append(stk)
    '''
    The first loop below has another loop within it. This is because, looking at the inspected
    elements of the website, the rating is embedded in the string of the class definition; see the sample
    (<p class="star-rating One">), so we need to apply a small trick to get only the value we want.
    class ['star-rating', 'Three']
    class ['star-rating', 'One']
    class ['star-rating', 'One']
    '''
for s in soup.findAll('p', class_='star-rating'):
for k,v in s.attrs.items():
star = v[1]
stars.append(star)
    '''
    In this loop we can see why we imported the regular expression library:
    it is used to remove the currency symbol from the price tag.
    '''
for j in soup.findAll('p',class_='price_color'):
price = j.getText()
trim = re.compile(r'[^\d.,]+')
price = trim.sub('',price)
prices.append(price)
    '''
    Here too we need a little Pythonic work so that we can get the image thumbnail URL.
    '''
divs = soup.findAll('div', class_='image_container')
for thumbs in divs:
tags = thumbs.find('img', class_='thumbnail')
urls = 'http://books.toscrape.com/'+str(tags['src'])
newurls = urls.replace("../","")
urlss.append(newurls)
'''
Finally we save all the lists of data as a dictionary, because it is easier to convert to a DataFrame when the data
is in dictionary format. We could also save our data as CSV, JSON, or even insert it into a database.
'''
dic = {'TITLE': titles, 'PRICE': prices, 'RATING': stars, 'STOCK': stock_availibility, 'URLs': urlss}
df = pd.DataFrame(data = dic)
df.head()
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Evaluation
Evaluation with offline metrics is pivotal to assess the quality of a recommender before it goes into production. Usually, evaluation metrics are carefully chosen based on the actual application scenario of a recommendation system. It is hence important for data scientists and AI developers who build recommendation systems to understand how each evaluation metric is calculated and what it is for.
This notebook deep dives into several commonly used evaluation metrics, and illustrates how these metrics are used in practice. The metrics covered in this notebook are merely for off-line evaluations.
## 0 Global settings
Most of the functions used in the notebook can be found in the `reco_utils` directory.
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import pandas as pd
import pyspark
from sklearn.preprocessing import minmax_scale
from reco_utils.common.spark_utils import start_or_get_spark
from reco_utils.evaluation.spark_evaluation import SparkRankingEvaluation, SparkRatingEvaluation
from reco_utils.evaluation.python_evaluation import auc, logloss
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("PySpark version: {}".format(pyspark.__version__))
```
Note to successfully run Spark codes with the Jupyter kernel, one needs to correctly set the environment variables of `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` that point to Python executables with the desired version. Detailed information can be found in the setup instruction document [SETUP.md](../../SETUP.md).
```
COL_USER = "UserId"
COL_ITEM = "MovieId"
COL_RATING = "Rating"
COL_PREDICTION = "Rating"
HEADER = {
"col_user": COL_USER,
"col_item": COL_ITEM,
"col_rating": COL_RATING,
"col_prediction": COL_PREDICTION,
}
```
## 1 Prepare data
### 1.1 Prepare dummy data
For illustration purposes, a dummy data set is created to demonstrate how different evaluation metrics work.
The data has the schema frequently found in a recommendation problem: each row in the dataset is a (user, item, rating) tuple, where "rating" can be an ordinal rating score (e.g., discrete integers of 1, 2, 3, etc.) or a numerical float that quantitatively indicates the preference of the user towards that item.
For simplicity, the rating column in the dummy dataset we use in the example represents ordinal ratings.
```
df_true = pd.DataFrame(
{
COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
COL_ITEM: [1, 2, 3, 1, 4, 5, 6, 7, 2, 5, 6, 8, 9, 10, 11, 12, 13, 14],
COL_RATING: [5, 4, 3, 5, 5, 3, 3, 1, 5, 5, 5, 4, 4, 3, 3, 3, 2, 1],
}
)
df_pred = pd.DataFrame(
{
COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
COL_ITEM: [3, 10, 12, 10, 3, 5, 11, 13, 4, 10, 7, 13, 1, 3, 5, 2, 11, 14],
COL_PREDICTION: [14, 13, 12, 14, 13, 12, 11, 10, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5]
}
)
```
Take a look at ratings of the user with ID "1" in the dummy dataset.
```
df_true[df_true[COL_USER] == 1]
df_pred[df_pred[COL_USER] == 1]
```
### 1.2 Prepare Spark data
Spark framework is sometimes used to evaluate metrics given datasets that are hard to fit into memory. In our example, Spark DataFrames can be created from the Python dummy dataset.
```
spark = start_or_get_spark("EvaluationTesting", "local")
dfs_true = spark.createDataFrame(df_true)
dfs_pred = spark.createDataFrame(df_pred)
dfs_true.filter(dfs_true[COL_USER] == 1).show()
dfs_pred.filter(dfs_pred[COL_USER] == 1).show()
```
## 2 Evaluation metrics
### 2.1 Rating metrics
Rating metrics are similar to regression metrics used for evaluating a regression model that predicts numerical values given input observations. In the context of a recommendation system, rating metrics evaluate how accurately a recommender predicts the ratings that users may give to items. Therefore, the metrics are **calculated exactly on the same group of (user, item) pairs that exist in both the ground-truth dataset and the prediction dataset** and **averaged by the total number of users**.
#### 2.1.1 Use cases
Rating metrics are effective in measuring the model accuracy. However, in some cases, the rating metrics are limited if
* **the recommender is to predict ranking instead of explicit rating**. For example, if the consumer of the recommender cares about the ranked recommended items, rating metrics do not apply directly. Usually a relevancy function such as top-k will be applied to generate the ranked list from predicted ratings in order to evaluate the recommender with other metrics.
* **the recommender is to generate recommendation scores that have different scales with the original ratings (e.g., the SAR algorithm)**. In this case, the difference between the generated scores and the original scores (or, ratings) is not valid for measuring accuracy of the model.
#### 2.1.2 How-to with the evaluation utilities
A few notes about the interface of the Rating evaluator class:
1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame).
2. There should be no duplicates of (user, item) pairs in the ground-truth and the prediction DataFrames, otherwise there may be unexpected behavior in calculating certain metrics.
3. Default column names for user, item, rating, and prediction are "UserId", "ItemId", "Rating", and "Prediction", respectively.
In our examples below, to calculate rating metrics for input data frames in Spark, a Spark object, `SparkRatingEvaluation` is initialized. The input data schemas for the ground-truth dataset and the prediction dataset are
* Ground-truth dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Rating or numerical value of user preference.|
* Prediction dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Predicted rating or numerical value of user preference.|
```
spark_rate_eval = SparkRatingEvaluation(dfs_true, dfs_pred, **HEADER)
```
#### 2.1.3 Root Mean Square Error (RMSE)
RMSE is for evaluating the accuracy of prediction on ratings. RMSE is the most widely used metric to evaluate a recommendation algorithm that predicts missing ratings. The benefit is that RMSE is easy to explain and calculate.
```
print("The RMSE is {}".format(spark_rate_eval.rmse()))
```
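As a plain-NumPy illustration of the definition (toy ratings, not taken from the dataset above):

```python
import numpy as np

# Toy ground-truth and predicted ratings for the same (user, item) pairs.
y_true = np.array([5.0, 3.0, 4.0, 1.0])
y_pred = np.array([4.5, 3.5, 4.0, 2.0])

# RMSE = sqrt(mean((y_true - y_pred)^2))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)  # ~0.612
```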
#### 2.1.4 R Squared (R2)
R2 is also called the "coefficient of determination" in some contexts. It is a metric that evaluates how well a regression model performs, based on the proportion of the total variation of the observed results that the model explains.
```
print("The R2 is {}".format(spark_rate_eval.rsquared()))
```
#### 2.1.5 Mean Absolute Error (MAE)
MAE evaluates accuracy of prediction. It computes the metric value from ground truths and prediction in the same scale. Compared to RMSE, MAE is more explainable.
```
print("The MAE is {}".format(spark_rate_eval.mae()))
```
#### 2.1.6 Explained Variance
Explained variance is usually used to measure how well a model performs with regard to the impact from the variation of the dataset.
```
print("The explained variance is {}".format(spark_rate_eval.exp_var()))
```
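Both R2 and explained variance can be computed by hand from their definitions; a minimal NumPy sketch with toy values:

```python
import numpy as np

y_true = np.array([5.0, 3.0, 4.0, 1.0])
y_pred = np.array([4.5, 3.5, 4.0, 2.0])

# R2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Explained variance = 1 - Var(residuals) / Var(y_true)
exp_var = 1 - np.var(y_true - y_pred) / np.var(y_true)
print(r2, exp_var)
```

When the residuals have zero mean, the two metrics coincide; here the residuals are slightly biased, so explained variance is a bit higher than R2.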
#### 2.1.7 Summary
|Metric|Range|Selection criteria|Limitation|Reference|
|------|-------------------------------|---------|----------|---------|
|RMSE|$\geq 0$|The smaller the better.|May be biased, and less explainable than MAE.|[link](https://en.wikipedia.org/wiki/Root-mean-square_deviation)|
|R2|$\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://en.wikipedia.org/wiki/Coefficient_of_determination)|
|MAE|$\geq 0$|The smaller the better.|Dependent on variable scale.|[link](https://en.wikipedia.org/wiki/Mean_absolute_error)|
|Explained variance|$\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://en.wikipedia.org/wiki/Explained_variation)|
### 2.2 Ranking metrics
"Beyond-accuracy" evaluation was proposed to assess how relevant recommendations are for users. In this case, a recommendation system is treated as a ranking system. Given a relevancy definition, the recommendation system outputs a list of recommended items to each user, ordered by relevance. The evaluation takes the ground-truth data (the actual items that users interacted with, e.g., liked or purchased) and the recommendation data as inputs to calculate ranking evaluation metrics.
#### 2.2.1 Use cases
Ranking metrics are often used when hit and/or ranking of the items are considered:
* **Hit** - defined by relevancy, a hit usually means that the recommended top-"k" items include items that are "relevant" to the user. For example, a user may have clicked, viewed, or purchased an item many times, and a hit on that item in the recommended list indicates that the recommender performs well. Metrics like "precision" and "recall" measure such hit accuracy.
* **Ranking** - ranking metrics additionally assess whether the hit items are ranked in the order preferred by the users they are recommended to. Metrics like "mean average precision" and "NDCG" evaluate whether the relevant items are ranked higher than less-relevant or irrelevant items.
#### 2.2.2 How-to with evaluation utilities
A few notes about the interface of the Rating evaluator class:
1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame). The timestamp column is optional, but it is required if a time-based relevancy function is used. For example, timestamps are needed if the most recent items are defined as the relevant ones.
2. There should be no duplicates of (user, item) pairs in the ground-truth and the prediction DataFrames, otherwise there may be unexpected behavior in calculating certain metrics.
3. Default column names for user, item, rating, and prediction are "UserId", "ItemId", "Rating", and "Prediction", respectively.
#### 2.2.3 Relevancy of recommendation
Relevancy of recommendation can be measured in different ways:
* **By ranking** - In this case, relevant items in the recommendations are defined as the top ranked items, i.e., top k items, which are taken from the list of the recommended items that is ordered by the predicted ratings (or other numerical scores that indicate preference of a user to an item).
* **By timestamp** - Relevant items are defined as the most recently viewed k items, which are obtained from the recommended items ranked by timestamps.
* **By rating** - Relevant items are defined as items with ratings (or other numerical scores that indicate preference of a user to an item) that are above a given threshold.
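For instance, the "by ranking" definition can be sketched with pandas (hypothetical data; the column names mirror the defaults above):

```python
import pandas as pd

# Hypothetical predictions: one row per (user, item) with a predicted score.
df_pred = pd.DataFrame({
    "UserId": [1, 1, 1, 2, 2, 2],
    "ItemId": [10, 11, 12, 10, 11, 12],
    "Prediction": [4.5, 3.0, 5.0, 2.0, 4.0, 1.0],
})

# "By ranking": the relevant items are the top-k per user by predicted score.
k = 2
top_k = (
    df_pred.sort_values("Prediction", ascending=False)
    .groupby("UserId")
    .head(k)
)
print(top_k)
```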
Similarly, a ranking metric object can be initialized as below. The input data schema is
* Ground-truth dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Rating or numerical value of user preference.|
|`COL_TIMESTAMP`|<string\>|Timestamps.|
* Prediction dataset.
|Column|Data type|Description|
|-------------|------------|-------------|
|`COL_USER`|<int\>|User ID|
|`COL_ITEM`|<int\>|Item ID|
|`COL_RATING`|<float\>|Predicted rating or numerical value of user preference.|
|`COL_TIMESTAMP`|<string\>|Timestamps.|
In this case, in addition to the input datasets, there are also other arguments used for calculating the ranking metrics:
|Argument|Data type|Description|
|------------|------------|--------------|
|`k`|<int\>|Number of items recommended to user.|
|`relevancy_method`|<string\>|Method that extracts relevant items from the recommendation list.|
For example, the following code initializes a ranking metric object that calculates the metrics.
```
spark_rank_eval = SparkRankingEvaluation(dfs_true, dfs_pred, k=3, relevancy_method="top_k", **HEADER)
```
A few ranking metrics can then be calculated.
#### 2.2.4 Precision
Precision@k is a metric that evaluates how many items in the recommendation list are relevant (hit) in the ground-truth data. For each user the precision score is normalized by `k`, and then the precision scores are averaged over all users.
Note that precision@k typically decreases as `k` grows, since a longer recommendation list tends to include more items that are not relevant.
```
print("The precision at k is {}".format(spark_rank_eval.precision_at_k()))
```
#### 2.2.5 Recall
Recall@k is a metric that evaluates how many relevant items in the ground-truth data appear in the recommendation list. For each user the recall score is normalized by the total number of ground-truth items, and then the recall scores are averaged over all users.
```
print("The recall at k is {}".format(spark_rank_eval.recall_at_k()))
```
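The two normalizations described above can be traced in a single-user toy example (hypothetical item IDs):

```python
# Items the user actually interacted with, and the top-k recommendation list.
ground_truth = {10, 11, 12, 13}
recommended = [12, 99, 10]  # k = 3
k = len(recommended)

hits = len(set(recommended) & ground_truth)  # items 12 and 10 are hits
precision_at_k = hits / k                    # normalized by k
recall_at_k = hits / len(ground_truth)       # normalized by #ground-truth items
print(precision_at_k, recall_at_k)
```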
#### 2.2.6 Normalized Discounted Cumulative Gain (NDCG)
NDCG is a metric that evaluates how well the recommender ranks the recommended items. Therefore both hitting relevant items and ranking them correctly matter to the NDCG evaluation. The total NDCG score is normalized by the total number of users.
```
print("The ndcg at k is {}".format(spark_rank_eval.ndcg_at_k()))
```
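The rank discounting can be made concrete with a small sketch (toy binary relevance; the evaluation utility may use a graded-relevance variant):

```python
import numpy as np

# Relevance of the top-k recommended items in ranked order (1 = hit, 0 = miss).
relevance = np.array([1.0, 0.0, 1.0])
ranks = np.arange(1, len(relevance) + 1)

# DCG discounts each hit by the log of its rank; IDCG is the DCG of the
# ideal ordering (all hits first). NDCG = DCG / IDCG.
dcg = np.sum(relevance / np.log2(ranks + 1))
idcg = np.sum(np.sort(relevance)[::-1] / np.log2(ranks + 1))
ndcg = dcg / idcg
print(ndcg)  # ~0.92
```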
#### 2.2.7 Mean Average Precision (MAP)
MAP is a metric that evaluates the average precision for each user in the dataset. It also penalizes incorrect ranking of the recommended items. The overall MAP score is normalized by the total number of users.
```
print("The map at k is {}".format(spark_rank_eval.map_at_k()))
```
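For a single user, average precision evaluates the precision at each rank where a relevant item occurs; MAP then averages this over all users. A toy sketch (hypothetical IDs):

```python
ground_truth = {10, 12}
recommended = [12, 99, 10]

hits = 0
precisions = []
for rank, item in enumerate(recommended, start=1):
    if item in ground_truth:
        hits += 1
        precisions.append(hits / rank)  # precision at this rank

# Normalize by the number of relevant items.
average_precision = sum(precisions) / len(ground_truth)
print(average_precision)  # (1/1 + 2/3) / 2 ≈ 0.833
```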
#### 2.2.8 ROC and AUC
ROC, together with AUC, is a well-known metric for evaluating binary classification problems. It applies similarly to recommendation algorithms with binary ratings, where the "hit" accuracy on the relevant items measures the recommender's performance.
To demonstrate the evaluation method, the original testing data is manipulated so that the ratings in the testing data become binary labels, while the predictions are scaled to the range [0, 1].
```
# Convert the original rating to 0 and 1.
df_true_bin = df_true.copy()
df_true_bin[COL_RATING] = df_true_bin[COL_RATING].apply(lambda x: 1 if x > 3 else 0)
df_true_bin
# Convert the predicted ratings into a [0, 1] scale.
df_pred_bin = df_pred.copy()
df_pred_bin[COL_PREDICTION] = minmax_scale(df_pred_bin[COL_PREDICTION].astype(float))
df_pred_bin
# Calculate the AUC metric
auc_score = auc(
df_true_bin,
df_pred_bin,
col_user = COL_USER,
col_item = COL_ITEM,
col_rating = COL_RATING,
col_prediction = COL_PREDICTION
)
print("The auc score is {}".format(auc_score))
```
It is worth mentioning that in some literature there are variants of the original AUC metric, that considers the effect of **the number of the recommended items (k)**, **grouping effect of users (compute AUC for each user group, and take the average across different groups)**. These variants are applicable to various different scenarios, and choosing an appropriate one depends on the context of the use case itself.
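The per-user-group variant mentioned above can be sketched as follows (hypothetical labels and scores; `roc_auc_score` is from scikit-learn):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-user binary labels and prediction scores.
users = np.array([1, 1, 1, 1, 2, 2, 2, 2])
labels = np.array([1, 0, 1, 0, 0, 1, 0, 1])
scores = np.array([0.9, 0.2, 0.6, 0.4, 0.6, 0.4, 0.5, 0.7])

# Compute AUC per user group, then average across the groups.
user_aucs = [
    roc_auc_score(labels[users == u], scores[users == u])
    for u in np.unique(users)
]
grouped_auc = float(np.mean(user_aucs))
print(grouped_auc)
```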
#### 2.2.9 Logistic loss
Logistic loss (often simply called logloss, or cross-entropy loss) is another useful metric for evaluating hit accuracy. It is defined as the negative log-likelihood of the true labels given a classifier's predictions.
```
# Calculate the logloss metric
logloss_score = logloss(
df_true_bin,
df_pred_bin,
col_user = COL_USER,
col_item = COL_ITEM,
col_rating = COL_RATING,
col_prediction = COL_PREDICTION
)
print("The logloss score is {}".format(logloss_score))
```
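The definition can also be computed directly (toy labels and predicted probabilities; the variable is named `logloss_val` to avoid shadowing the `logloss` utility):

```python
import numpy as np

# Hypothetical binary labels and predicted probabilities of label 1.
y_true = np.array([1, 0, 1, 1])
p_pred = np.array([0.9, 0.2, 0.7, 0.6])

# Logloss = -mean(y * log(p) + (1 - y) * log(1 - p))
logloss_val = -np.mean(
    y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred)
)
print(logloss_val)  # ~0.299
```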
It is worth noting that logloss may be sensitive to the class balance of datasets, as it heavily penalizes classifiers that are confident about incorrect classifications. To demonstrate, the ground-truth test data is deliberately unbalanced: the following binarizes the original ratings with a lower threshold, i.e., 2, to create more positive feedback from the user.
```
df_true_bin_pos = df_true.copy()
df_true_bin_pos[COL_RATING] = df_true_bin_pos[COL_RATING].apply(lambda x: 1 if x > 2 else 0)
df_true_bin_pos
```
By using a threshold of 2, the labels in the ground-truth data are not balanced; the ratio of label 1 to label 0 is
```
one_zero_ratio = df_true_bin_pos[COL_RATING].sum() / (df_true_bin_pos.shape[0] - df_true_bin_pos[COL_RATING].sum())
print('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))
```
Another prediction dataset is also created, where the probabilities for label 1 and label 0 are fixed. Without loss of generality, the probability of predicting 1 is 0.6. The dataset is deliberately constructed so that precision is 100% under a presumed cut-off of 0.5.
```
prob_true = 0.6
df_pred_bin_pos = df_true_bin_pos.copy()
df_pred_bin_pos[COL_PREDICTION] = df_pred_bin_pos[COL_RATING].apply(lambda x: prob_true if x==1 else 1-prob_true)
df_pred_bin_pos
```
Then the logloss is calculated as follows.
```
# Calculate the logloss metric
logloss_score_pos = logloss(
df_true_bin_pos,
df_pred_bin_pos,
col_user = COL_USER,
col_item = COL_ITEM,
col_rating = COL_RATING,
col_prediction = COL_PREDICTION
)
print("The logloss score is {}".format(logloss_score_pos))
```
For comparison, a similar process is used with a threshold value of 3 to create a more balanced dataset, and another prediction dataset is created from it. Again, the probabilities of predicting label 1 and label 0 are fixed at 0.6 and 0.4, respectively. **NOTE**: as above, this prediction also gives 100% precision; the only difference is the proportion of binary labels.
```
prob_true = 0.6
df_pred_bin_balanced = df_true_bin.copy()
df_pred_bin_balanced[COL_PREDICTION] = df_pred_bin_balanced[COL_RATING].apply(lambda x: prob_true if x==1 else 1-prob_true)
df_pred_bin_balanced
```
The ratio of label 1 to label 0 is
```
one_zero_ratio = df_true_bin[COL_RATING].sum() / (df_true_bin.shape[0] - df_true_bin[COL_RATING].sum())
print('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))
```
It is perfectly balanced.
Applying the logloss function to calculate the metric gives us a more promising result, as shown below.
```
# Calculate the logloss metric
logloss_score = logloss(
df_true_bin,
df_pred_bin_balanced,
col_user = COL_USER,
col_item = COL_ITEM,
col_rating = COL_RATING,
col_prediction = COL_PREDICTION
)
print("The logloss score is {}".format(logloss_score))
```
It can be seen that the score is closer to 0, which, by definition, means that these predictions are better than those in the previous case, where the binary labels were more imbalanced.
#### 2.2.10 Summary
|Metric|Range|Selection criteria|Limitation|Reference|
|------|-------------------------------|---------|----------|---------|
|Precision|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Only for hits in recommendations.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|Recall|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Only for hits in the ground truth.|[link](https://en.wikipedia.org/wiki/Precision_and_recall)|
|NDCG|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Does not penalize for bad/missing items, and does not distinguish between several equally good items.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|MAP|$\geq 0$ and $\leq 1$|The closer to $1$ the better.|Depends on variable distributions.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|
|AUC|$\geq 0$ and $\leq 1$|The closer to $1$ the better. $0.5$ indicates an uninformative classifier.|Depends on the number of recommended items (k).|[link](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)|
|Logloss|$0$ to $\infty$|The closer to $0$ the better.|Logloss can be sensitive to imbalanced datasets.|[link](https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood)|
```
# cleanup spark instance
spark.stop()
```
## References
1. Guy Shani and Asela Gunawardana, "Evaluating Recommendation Systems", Recommender Systems Handbook, Springer, 2015.
2. PySpark MLlib evaluation metrics, url: https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html.
3. Dimitris Paraschakis et al, "Comparative Evaluation of Top-N Recommenders in e-Commerce: An Industrial Perspective", IEEE ICMLA, 2015, Miami, FL, USA.
4. Yehuda Koren and Robert Bell, "Advances in Collaborative Filtering", Recommender Systems Handbook, Springer, 2015.
5. Chris Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
# Advanced Seq2Seq Modeling
# Problem
Build a model to help pronounce English words. We'll convert English words into [ARPAbet](https://en.wikipedia.org/wiki/Arpabet) phonemes.
@sunilmallya: for live instructions, see https://www.twitch.tv/videos/171226133
## Dataset
http://svn.code.sf.net/p/cmusphinx/code/trunk/cmudict/
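Each line of the dictionary maps a word to its space-separated ARPAbet phonemes, with two spaces between the word and the pronunciation. A minimal parsing sketch (example entry):

```python
# One cmudict-0.7b entry: word, two spaces, space-separated ARPAbet phonemes.
line = "HELLO  HH AH0 L OW1"

word, pronunciation = line.split("  ", 1)
phonemes = pronunciation.split(" ")
print(word, phonemes)
```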
```
# Load data
data = open('cmudict-0.7b', 'r').readlines()
phone_defs = open('cmudict-0.7b.phones', 'r').readlines()  # phone inventory (unused below)
symbols = open('cmudict-0.7b.symbols', 'r').readlines()  # symbol list (unused below)
words = []
phones = []
def f_char(word):
for c in ["(", ".", "'", ")", "-", "_", "\xc0", "\xc9"]:
#print c in word, type(word)
if c in word:
return True
return False
for d in data:
parts = d.strip('\n').split('  ')  # cmudict separates word and phones with two spaces
if not f_char(parts[0]):
words.append(parts[0])
phones.append(parts[1])
words[:5], phones[:5]
len(words), len(phones)
all_chars = set()
for word, phone in zip(words, phones):
for c in word:
all_chars.add(c)
for p in phone.split(" "):
all_chars.add(p)
print all_chars
# Create a map of symbols to numbers
symbol_set = list(all_chars)
symbol_set.append("+") # add space for padding
# word to symbol index
def word_to_symbol_index(word):
return [symbol_set.index(char) for char in word]
# list of symbol index to word
def symbol_index_to_word(indices):
return [symbol_set[idx] for idx in indices]
# phone to symbol index
def phone_to_symbol_index(phone):
return [symbol_set.index(p) for p in phone.split(" ")]
# list of symbol index to word
def psymbol_index_to_word(indices):
return [symbol_set[idx] for idx in indices]
print symbol_set
# sample
indices = word_to_symbol_index("ARDBERG")
print indices, symbol_index_to_word(indices)
indices = phone_to_symbol_index("AA1 B ER0 G")
print indices, symbol_index_to_word(indices)
# Pad input and output data
input_sequence_length = max([len(w) for w in words])
output_sequence_length = max([len(p.split(' ')) for p in phones])
input_sequence_length, output_sequence_length
# input data
trainX = []
labels = []
def pad_string(word, max_len, pad_char = "+"):
out = ''
for _ in range(max_len - len(word)):
out += pad_char
return out + word
#for word in words:
# padded_strng = "%*s" % (input_sequence_length, word)
# trainX.append(word_to_symbol_index(padded_strng))
# output data
#for p in phones:
# padded_strng = "%*s" % (output_sequence_length, p)
# print phone_to_symbol_index(padded_strng)
pad_string('EY2 EY1', output_sequence_length)
for word in words:
padded_strng = pad_string(word, input_sequence_length)
trainX.append(word_to_symbol_index(padded_strng))
# output labels
# TODO: Fix padding logic
labels =[]
for p in phones:
label = []
for _ in range(output_sequence_length - len(p.split(' '))):
label.append(phone_to_symbol_index('+')[0])
label.extend(phone_to_symbol_index(p))
labels.append(label)
len(labels), len(trainX)
trainX[0], labels[0]
print "INP: ", symbol_index_to_word(trainX[2])
print "LBL: ", symbol_index_to_word(labels[2])
import mxnet as mx
import numpy as np
def shuffle_together(a, b):
assert len(a) == len(b)
p = np.random.permutation(len(a))
return a[p], b[p]
batch_size = 128
trainX, labels = np.array(trainX), np.array(labels)
trainX, labels = shuffle_together(trainX, labels)
N = int(len(trainX) * 0.9) # 90%
dataX = np.array(trainX)[:N]
dataY = np.array(labels)[:N]
testX = np.array(trainX)[N:]
testY = np.array(labels)[N:]
print dataX.shape, dataY.shape
print testX.shape, testY.shape
# Let's define the train and test iterators
train_iter = mx.io.NDArrayIter(data=dataX, label=dataY,
data_name="data", label_name="target",
batch_size=batch_size,
shuffle=True)
test_iter = mx.io.NDArrayIter(data=testX, label=testY,
data_name="data", label_name="target",
batch_size=batch_size,
shuffle=True)
print train_iter.provide_data, train_iter.provide_label
data_dim = len(symbol_set)
data = mx.sym.var('data') # Shape: (N, T)
target = mx.sym.var('target') # Shape: (N, T)
# 2 Layer LSTM
# get_next_state = return the states that can be used as starting states next time
lstm1 = mx.rnn.FusedRNNCell(num_hidden=128, prefix="lstm1_", get_next_state=True)
lstm2 = mx.rnn.FusedRNNCell(num_hidden=128, prefix="lstm2_", get_next_state=False)
# In the layout, 'N' represents batch size, 'T' represents sequence length,
# and 'C' represents the number of dimensions in hidden states.
# one hot encode
data_one_hot = mx.sym.one_hot(data, depth=data_dim) # Shape: (N, T, C)
data_one_hot = mx.sym.transpose(data_one_hot, axes=(1, 0, 2)) # Shape: (T, N, C)
# Note that when unrolling, if 'merge_outputs'== True, the 'outputs' is merged into a single symbol
# encoder (with repeat vector)
_, encode_state = lstm1.unroll(length=input_sequence_length, inputs=data_one_hot, layout="TNC")
encode_state_h = mx.sym.broadcast_to(encode_state[0], shape=(output_sequence_length, 0, 0)) #Shape: (T,N,C); use ouput seq shape
# decoder
decode_out, _ = lstm2.unroll(length=output_sequence_length, inputs=encode_state_h, layout="TNC")
decode_out = mx.sym.reshape(decode_out, shape=(-1, 128))  # flatten (T, N, C) to (T*N, C); C = num_hidden = 128
# logits out
logits = mx.sym.FullyConnected(decode_out, num_hidden=data_dim, name="logits")
logits = mx.sym.reshape(logits, shape=(output_sequence_length, -1, data_dim))
logits = mx.sym.transpose(logits, axes=(1, 0, 2))
# Lets define a loss function: Convert Logits to softmax probabilities
loss = mx.sym.mean(-mx.sym.pick(mx.sym.log_softmax(logits), target, axis=-1))
loss = mx.sym.make_loss(loss)
# visualize
#shape = {"data" : (batch_size, dataX[0].shape[0])}
#mx.viz.plot_network(loss, shape=shape)
net = mx.mod.Module(symbol=loss,
data_names=['data'],
label_names=['target'],
context=mx.gpu())
net.bind(data_shapes=train_iter.provide_data,
label_shapes=train_iter.provide_label)
net.init_params(initializer=mx.init.Xavier())
net.init_optimizer(optimizer="adam",
optimizer_params={'learning_rate': 1E-3,
'rescale_grad': 1.0},
kvstore=None)
# lets keep a test network to see how we do
predict_net = mx.mod.Module(symbol=logits,
data_names=['data'],
label_names=None,
context=mx.gpu())
data_desc = train_iter.provide_data[0]
# shared_module = True: shares the same parameters and memory of the training network
predict_net.bind(data_shapes=[data_desc],
label_shapes=None,
for_training=False,
grad_req='null',
shared_module=net)
def predict(data_iter):
data_iter.reset()
corr = 0
for i, data_batch in enumerate(data_iter):
#print data_batch.label[0]
predict_net.forward(data_batch=data_batch)
predictions = predict_net.get_outputs()[0].asnumpy()
indices = np.argmax(predictions, axis=2)
lbls = data_batch.label[0].asnumpy()
results = (indices == lbls)
for r in results:
# Exact match
if np.sum(r) == output_sequence_length:
corr += 1.0
# total % match per sample
#corr += (1.0 *np.sum(r)/ output_sequence_length)
return corr/data_iter.num_data
#test_iter.__dict__
predict(test_iter)
epochs = 200
total_batches = len(dataX) // batch_size
for epoch in range(epochs):
avg_loss = 0
train_iter.reset()
for i, data_batch in enumerate(train_iter):
net.forward_backward(data_batch=data_batch)
loss = net.get_outputs()[0].asscalar()
avg_loss += loss /total_batches
net.update()
# evaluate on the held-out test set each epoch
test_acc = predict(test_iter)
print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.9f}'.format(avg_loss))
print('Epoch:', '%04d' % (epoch + 1), 'test acc =', '{:.9f}'.format(test_acc))
# Save the model
prefix = 'pronounce128'
net.save_checkpoint(prefix, epochs)
#pred_model = mx.mod.Module.load(prefix, num_epoch)
# Test module
test_net = mx.mod.Module(symbol=logits,
data_names=['data'],
label_names=None,
context=mx.gpu())
data_desc = train_iter.provide_data[0]
# shared_module = True: shares the same parameters and memory of the training network
test_net.bind(data_shapes=[data_desc],
label_shapes=None,
for_training=False,
grad_req='null',
shared_module=net)
def print_word(arr):
word_indices = symbol_index_to_word(arr)
out = filter(lambda x: x != symbol_set[-1], word_indices)
return "".join(out)
def print_phone(arr):
word_indices = psymbol_index_to_word(arr)
out = filter(lambda x: x != symbol_set[-1], word_indices)
return " ".join(out)
testX, testY = trainX[0:10], labels[0:10]
#print testX
testX = [word_to_symbol_index(pad_string("SUNIL", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("JOSEPH", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("RANDALL", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("SAUSALITO", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("EMBARCADERO", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("AMULYA", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("TWITCH", input_sequence_length))]
testX += [word_to_symbol_index(pad_string("ALUMINUM", input_sequence_length))]
testX = np.array(testX, dtype=np.int)
test_net.reshape(data_shapes=[mx.io.DataDesc('data', (1, input_sequence_length))])
predictions = test_net.predict(mx.io.NDArrayIter(testX, batch_size=1)).asnumpy()
print "expression", "predicted", "actual"
for i, prediction in enumerate(predictions):
#x_str = symbol_index_to_word(testX[i])
word = print_word(testX[i])
index = np.argmax(prediction, axis=1)
result = print_phone(index)
#result = [symbol_set[j] for j in index]
print "%10s" % word, result
#label = [alphabet[j] for j in testY[i]]
#print "".join(x_str), "".join(result), " ", "".join(label)
```
```
# mahalanobis_discriminative model
from collections import OrderedDict
import numpy as np
import torch as th
from torch import nn
import seaborn as sns
from pathlib import Path
import matplotlib.pyplot as plt
import cv2
import pandas as pd
import math
from scipy.spatial import distance as mahal_distance
from skimage.util import random_noise
classes = np.array(['uCry', 'sCry', 'cCry', 'hCast', 'nhCast', 'sEC', 'nsEC', 'WBC', 'RBC'])
outlier_classes1 = np.array(['Artifact', 'Dirt', 'LD'])
outlier_classes2 = np.array(['blankurine', 'bubbles', 'cathair', 'condensation', 'dust', 'feces', 'fingerprint', 'humanhair',
'Lipids', 'Lotion', 'pollen', 'semifilled', 'void', 'wetslide', 'yeast'])
# Loading the pre-trained classifier
def conv_bn_relu(
in_channels, out_channels,
kernel_size=3, padding=None, stride=1,
depthwise=False, normalization=True,
activation=True, init_bn_zero=False):
"""
Make a depthwise or normal convolution layer,
followed by batch normalization and an activation.
"""
layers = []
padding = kernel_size // 2 if padding is None else padding
if depthwise and in_channels > 1:
layers += [
nn.Conv2d(in_channels, in_channels, bias=False,
kernel_size=kernel_size, stride=stride,
padding=padding, groups=in_channels),
nn.Conv2d(in_channels, out_channels,
bias=not normalization, kernel_size=1)
]
else:
layers.append(
nn.Conv2d(in_channels, out_channels, bias=not normalization,
kernel_size=kernel_size, stride=stride,
padding=padding)
)
if normalization:
bn = nn.BatchNorm2d(out_channels)
if init_bn_zero:
nn.init.zeros_(bn.weight)
layers.append(bn)
if activation:
# TODO: parametrize activation
layers.append(nn.ReLU())
return nn.Sequential(*layers)
def depthwise_cnn_classifier(
channels=[],
strides=None,
img_width=32,
img_height=32,
c_in=None,
c_out=None,
):
channels = channels[:]
if c_in is not None:
channels.insert(0, c_in)
if c_out is not None:
channels.append(c_out)
if len(channels) < 2:
raise ValueError("Not enough channels")
layers = OrderedDict()
number_convolutions = len(channels) - 2
if strides is None:
strides = [2] * number_convolutions
out_width = img_width
out_height = img_height
for layer_index in range(number_convolutions):
in_channels = channels[layer_index]
out_channels = channels[layer_index + 1]
layers["conv1" + str(layer_index)] = conv_bn_relu(
in_channels,
out_channels,
kernel_size=3,
stride=strides[layer_index],
depthwise=layer_index > 0,
normalization=True,
activation=True,
)
layers["conv2" + str(layer_index)] = conv_bn_relu(
out_channels,
out_channels,
kernel_size=3,
stride=1,
depthwise=True,
normalization=True,
activation=True,
)
out_width = out_width // strides[layer_index]
out_height = out_height // strides[layer_index]
layers["drop"] = nn.Dropout(p=0.2)
layers["flatten"] = nn.Flatten()
layers["final"] = nn.Linear(out_width * out_height * channels[-2], channels[-1])
#layers["softmax"] = nn.Softmax(-1)
return nn.Sequential(layers)
# load model
cnn = depthwise_cnn_classifier([32, 64, 128], c_in=1, c_out=9, img_width=32, img_height=32)
cnn.load_state_dict(th.load("/home/erdem/pickle/thomas_classifier/urine_classifier_uniform_32x32.pt"))
cnn.eval() # IMPORTANT
cnn
from ood_metrics import calc_metrics, plot_roc, plot_pr, plot_barcode
# Mahalanobis
# get empirical class means and covariances
def get_mean_covariance(f):
means= []
observations = []
for cl in classes:
print("in class", cl)
cl_path = "/home/thomas/tmp/patches_urine_32_scaled/"+cl+"/"
counter = 0
temp_array = None
for img_path in Path(cl_path).glob("*.png"):
counter += 1
image = th.from_numpy(plt.imread(img_path)).float()
if counter == 1:
temp_array = f(image[None, None, :, :] - 1).detach().view(-1).numpy()
observations.append(f(image[None, None, :, :] - 1).detach().view(-1).numpy())
else:
temp_array += f(image[None, None, :, :] - 1).detach().view(-1).numpy()
observations.append(f(image[None, None, :, :] - 1).detach().view(-1).numpy())
means.append(temp_array/counter)
V = np.cov(observations, rowvar=False)
VM = np.matrix(V)
return means, VM
# Returns the Mahalanobis distance to each class mean; smaller means closer to that class
def Mahal_distance(f, x, means, cov):
np_output = f(x[None, None, :, :] - 1).detach().view(-1).numpy()
mahal_distance_per_C = []
for i in range(len(classes)):
maha = mahal_distance.mahalanobis(np_output, means[i], cov)
mahal_distance_per_C.append(maha)
return mahal_distance_per_C
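# Sanity check (toy numbers, not from the dataset): scipy's mahalanobis(u, v, VI)
# computes sqrt((u - v)^T VI (u - v)), the quantity used in Mahal_distance above.
# With an identity inverse covariance it reduces to the Euclidean distance.
_u = np.array([1.0, 2.0])
_v = np.array([0.0, 0.0])
assert abs(mahal_distance.mahalanobis(_u, _v, np.eye(2)) - np.sqrt(5.0)) < 1e-12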
def test_mahala(f, means, covs, outlier_class, outlier_temp, perturb):
covi = covs.I
inlier_scores = []
inlier_labels = []
for cl in classes:
print(cl)
cl_path = "/home/thomas/tmp/patches_urine_32_scaled/"+cl+"/"
for img_path in Path(cl_path).glob("*.png"):
inlier_labels.append(1)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True))
mahal_dist_per_c = Mahal_distance(f, image, means, covi)
temp_score = np.amax(mahal_dist_per_c)
inlier_scores.append(temp_score)
sns.scatterplot(data=inlier_scores)
outlier_scores = []
outlier_labels = []
for cl in outlier_class:
print(cl)
cl_path = outlier_temp+cl+"/"
for img_path in Path(cl_path).glob("*.png"):
outlier_labels.append(0)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
mahal_dist_per_c = Mahal_distance(f, image, means, covi)
temp_score = np.amax(mahal_dist_per_c)
outlier_scores.append(temp_score)
sns.scatterplot(data=outlier_scores)
score_array = inlier_scores+outlier_scores
label_array = inlier_labels+outlier_labels
print(calc_metrics(score_array, label_array))
plot_roc(score_array, label_array)
# plot_pr(score_array, label_array)
# plot_barcode(score_array, label_array)
def test_mahala_final(f, means, covs, perturb):
covi = covs.I
inlier_scores = []
inlier_labels = []
outlier_scores = []
outlier_labels = []
inlier_path = "/home/erdem/dataset/urine_test_32/inliers"
outlier_path = "/home/erdem/dataset/urine_test_32/outliers"
# Inliers
for img_path in Path(inlier_path).glob("*.png"):
inlier_labels.append(0)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True))
mahal_dist_per_c = Mahal_distance(f, image, means, covi)
temp_score = np.amax(mahal_dist_per_c)
inlier_scores.append(temp_score)
# Outliers
for img_path in Path(outlier_path).glob("*.png"):
outlier_labels.append(1)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True))
mahal_dist_per_c = Mahal_distance(f, image, means, covi)
temp_score = np.amax(mahal_dist_per_c)
outlier_scores.append(temp_score)
d_outliers = {"Mahalanobis Distance": outlier_scores, "outlier_labels": outlier_labels, "Index of Image Patches": np.linspace(1, 636, num=636)}
d_inliers = {"Mahalanobis Distance": inlier_scores, "inlier_labels": inlier_labels, "Index of Image Patches": np.linspace(1, 636, num=636)}
df1 = pd.DataFrame(data=d_inliers)
df2 = pd.DataFrame(data=d_outliers)
sns.scatterplot(data=df1, x="Index of Image Patches", y="Mahalanobis Distance")
sns.scatterplot(data=df2, x="Index of Image Patches", y="Mahalanobis Distance")
score_array = inlier_scores+outlier_scores
label_array = inlier_labels+outlier_labels
print(calc_metrics(score_array, label_array))
plot_roc(score_array, label_array)
plot_pr(score_array, label_array)
# plot_barcode(score_array, label_array)
from copy import deepcopy
cnn_flattened = deepcopy(cnn)
del cnn_flattened[-1] # remove linear
image = th.from_numpy(plt.imread("/home/thomas/tmp/patches_contaminants_32_scaled/bubbles/Anvajo_bubbles1_100um_161_385_201_426.png")).float()
cnn_dropout = deepcopy(cnn_flattened)
del cnn_dropout[-1] # remove flatten
seq6 = deepcopy(cnn_dropout)
del seq6[-1] # remove dropout
del seq6[-1][-1] # remove last relu
seq5 = deepcopy(seq6)
del seq5[-1] # remove dropout
del seq5[-1][-1] # remove last relu
seq4 = deepcopy(seq5)
del seq4[-1] # remove dropout
del seq4[-1][-1] # remove last relu
seq3 = deepcopy(seq4)
del seq3[-1] # remove dropout
del seq3[-1][-1] # remove last relu
seq2 = deepcopy(seq3)
del seq2[-1] # remove dropout
del seq2[-1][-1] # remove last relu
seq1 = deepcopy(seq2)
del seq1[-1] # remove dropout
del seq1[-1][-1] # remove last relu
means, COV= get_mean_covariance(cnn_flattened)
COVI = COV.I
print(COVI)
# cnn without the linear last layer
test_mahala_final(cnn_flattened, means, COV, perturb = None)
# cnn without the linear last layer
test_mahala_final(cnn_flattened, means, COV, perturb = 'gaussian')
# cnn without the linear last layer
test_mahala_final(cnn_flattened, means, COV, perturb = 's&p')
means, COV= get_mean_covariance(cnn_flattened)
# cnn without the linear last layer
test_mahala(cnn_flattened, means, COV, outlier_classes1, "/home/thomas/tmp/patches_urine_32_scaled/", perturb = None)
import pandas as pd
COVI = COV.I
outlier_temp = "/home/thomas/tmp/patches_urine_32_scaled/"
perturb = None
outlier_labels = []
outlier_scores = []
outlier_path = []
for cl in outlier_classes1:
print(cl)
cl_path = outlier_temp+cl+"/"
for img_path in Path(cl_path).glob("*.png"):
outlier_path.append(img_path)
outlier_labels.append(cl)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI)
temp_score = np.amax(mahal_dist_per_c)
outlier_scores.append(temp_score)
d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path}
df = pd.DataFrame(data=d)
sns.scatterplot(data=df, x = "outlier_labels", y="outlier_scores")
outlier_temp = "/home/thomas/tmp/patches_urine_32_scaled/"
perturb = None
outlier_labels = []
outlier_scores = []
outlier_path = []
for cl in classes:
print(cl)
cl_path = outlier_temp+cl+"/"
for img_path in Path(cl_path).glob("*.png"):
outlier_path.append(img_path)
outlier_labels.append(cl)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI)
temp_score = np.amax(mahal_dist_per_c)
outlier_scores.append(temp_score)
d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path}
df2 = pd.DataFrame(data=d)
sns.scatterplot(data=df2, x = "outlier_labels", y="outlier_scores")
outlier_temp = "/home/thomas/tmp/patches_contaminants_32_scaled/"
perturb = None
outlier_labels = []
outlier_scores = []
outlier_path = []
for cl in outlier_classes2:
print(cl)
cl_path = outlier_temp+cl+"/"
for img_path in Path(cl_path).glob("*.png"):
outlier_path.append(img_path)
outlier_labels.append(cl)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI)
temp_score = np.amax(mahal_dist_per_c)
outlier_scores.append(temp_score)
d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path}
df4 = pd.DataFrame(data=d)
sns.scatterplot(data=df4, x = "outlier_labels", y="outlier_scores")
cl_path = "/home/thomas/tmp/patches_urine_32_scaled/Unclassified"
perturb = None
outlier_labels = []
outlier_scores = []
outlier_path = []
cl = "Unclassified"
COVI = COV.I
for img_path in Path(cl_path).glob("*.png"):
outlier_path.append(img_path)
outlier_labels.append(cl)
image = th.from_numpy(plt.imread(img_path)).float()
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
mahal_dist_per_c = Mahal_distance(cnn_flattened, image, means, COVI)
temp_score = np.amax(mahal_dist_per_c)
outlier_scores.append(temp_score)
d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path}
df3 = pd.DataFrame(data=d)
sns.scatterplot(data=df3, x = "outlier_labels", y="outlier_scores")
# sorted_outliers1 = df.sort_values(by=['outlier_scores'])
# sorted_outliers2 = df4.sort_values(by=['outlier_scores'])
# sorted_inliers = df2.sort_values(by=['outlier_scores'])
sorted_unclassified = df3.sort_values(by=['outlier_scores'])
index = 0
# index: 717 and after is inlier
for a in sorted_unclassified['outlier_scores']:
print(index, a)
index += 1
from torchvision.utils import make_grid
from torchvision.io import read_image
import torchvision.transforms.functional as F
%matplotlib inline
def show(imgs):
if not isinstance(imgs, list):
imgs = [imgs]
fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
for i, img in enumerate(imgs):
img = img.detach()
img = F.to_pil_image(img)
axs[0, i].imshow(np.asarray(img))
axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
unclassified_imgs = []
for path in sorted_unclassified["outlier_path"]:
unclassified_imgs.append(read_image(str(path)))
```
## Coding Exercise #0703
### 1. Softmax regression (multi-class logistic regression):
```
# import tensorflow as tf
import tensorflow.compat.v1 as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
tf.disable_v2_behavior()
```
#### 1.1. Read in the data:
```
# We will use Iris data.
# 4 explanatory variables.
# 3 classes for the response variable.
data_raw = load_iris()
data_raw.keys()
# Print out the description.
# print(data_raw['DESCR'])
X = data_raw['data']
y = data_raw['target']
# Check the shape.
print(X.shape)
print(y.shape)
```
#### 1.2. Data pre-processing:
```
# One-Hot-Encoding.
y = np.array(pd.get_dummies(y, drop_first=False)) # drop_first = False for one-hot-encoding.
y.shape
# Scaling
X = scale(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=3)
n_train_size = y_train.shape[0]
```
#### 1.3. Do the necessary definitions:
```
batch_size = 100 # Size of each (mini) batch.
n_epochs = 30000 # Number of training steps (each step uses one mini batch).
learn_rate = 0.05
W = tf.Variable(tf.ones([4,3])) # Initial value of the weights = 1.
b = tf.Variable(tf.ones([3])) # Initial value of the bias = 1.
X_ph = tf.placeholder(tf.float32, shape=(None, 4)) # Number of rows not specified. Number of columns = number of X variables = 4.
y_ph = tf.placeholder(tf.float32, shape=(None,3)) # Number of rows not specified. Number of columns = number of classes of the y variable = 3.
# Model.
# Not strictly necessary to apply the softmax activation: in the end we apply the argmax() function to predict the label, and argmax is unchanged by softmax.
# y_model = tf.nn.softmax(tf.matmul(X_ph, W) + b)
# The following will work just fine.
y_model = tf.matmul(X_ph, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_ph, logits=y_model)) # Loss = cross entropy.
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learn_rate)
train = optimizer.minimize(loss) # Define training.
init = tf.global_variables_initializer() # Define Variable initialization.
```
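For intuition, the quantity that `tf.nn.softmax_cross_entropy_with_logits_v2` computes can be sketched in plain numpy (a simplified sketch for illustration, not TensorFlow's exact implementation; the helper name `softmax_cross_entropy` is ours):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Shift logits per row for numerical stability before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Cross entropy per example, then the mean over the batch (tf.reduce_mean).
    return -(labels * log_softmax).sum(axis=1).mean()

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])   # one-hot: true class is 0
print(softmax_cross_entropy(logits, labels))
```

A quick sanity check: with equal logits over K classes the loss is log(K).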
#### 1.4. Training and Testing:
```
with tf.Session() as sess:
# Variables initialization.
sess.run(init)
# Training.
for i in range(n_epochs):
idx_rnd = np.random.choice(range(n_train_size),batch_size,replace=False) # Random sampling w/o replacement for the batch indices.
batch_X, batch_y = [X_train[idx_rnd,:], y_train[idx_rnd,:]] # Get a batch.
my_feed = {X_ph:batch_X, y_ph:batch_y} # Prepare the feed data as a dictionary.
sess.run(train, feed_dict = my_feed)
if (i + 1) % 2000 == 0: print("Step : {}".format(i + 1)) # Print the step number at every multiple of 2000.
# Testing.
correct_predictions = tf.equal(tf.argmax(y_ph, axis=1), tf.argmax(y_model, axis=1)) # In argmax(), axis=1 means horizontal direction.
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32)) # Recast the Boolean as float32 first. Then calculate the mean.
accuracy_value = sess.run(accuracy, feed_dict={X_ph:X_test, y_ph:y_test}) # Actually run the test with the test data.
```
Print the testing result.
```
print("Accuracy = {:5.3f}".format(accuracy_value))
```
# TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
## 1 - Exploring the Tensorflow Library
To start, you will import the library:
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
```
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
```
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
```
Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to define `init = tf.global_variables_initializer()` and run `session.run(init)`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print it.
Now let us look at an easy example. Run the cell below:
```
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
```
As expected, you will not see 20! You got back a tensor object of type "int32" with no value yet. All you did was add the operation to the 'computation graph'; you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
```
sess = tf.Session()
print(sess.run(c))
```
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
```
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
```
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
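The graph-then-run idea can be mimicked in a few lines of plain Python, where building the graph is just composing callables and "running the session" is calling the result with a feed dict (a conceptual sketch only — this is not how TensorFlow is implemented):

```python
# Each node is a function from a feed dict to a value.
placeholder = lambda name: (lambda feed: feed[name])
constant = lambda v: (lambda feed: v)
multiply = lambda a, b: (lambda feed: a(feed) * b(feed))

x = placeholder('x')
c = multiply(constant(2), x)   # builds the "graph"; nothing is computed yet
print(c({'x': 3}))             # "running" it with a feed dict prints 6
```

Nothing is evaluated until the final call — the same deferred-execution pattern TensorFlow's sessions and placeholders implement.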
### 1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.
**Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
```
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "weights")
b = tf.constant(np.random.randn(4,1), name = "bias")
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
```
*** Expected Output ***:
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
### 1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise lets compute the sigmoid function of an input.
You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session.
**Exercise**: Implement the sigmoid function below. You should use the following:
- `tf.placeholder(tf.float32, name = "...")`
- `tf.sigmoid(...)`
- `sess.run(..., feed_dict = {x: z})`
Note that there are two typical ways to create and use sessions in tensorflow:
**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name = "x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict = {x : z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
```
*** Expected Output ***:
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
**To summarize, you now know how to**:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
### 1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
**Exercise**: Implement the cross entropy loss. The function you will use is:
- `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`
Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
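For reference, the numerically stable form that `tf.nn.sigmoid_cross_entropy_with_logits` documents — $\max(z, 0) - zy + \log(1 + e^{-|z|})$ — can be checked with a short numpy sketch (an illustration; the helper name is ours):

```python
import numpy as np

def sigmoid_cross_entropy(z, y):
    # Stable per-element sigmoid cross entropy: max(z, 0) - z*y + log(1 + exp(-|z|)).
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

z = np.array([0.2, 0.4, 0.7, 0.9])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(sigmoid_cross_entropy(z, y))
```

Note that the graded exercise below first passes its inputs through `sigmoid`, so the printed costs there are computed on squashed values rather than on raw logits.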
```
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, shape = logits.shape, name = "logits")
y = tf.placeholder(tf.float32, shape = labels.shape, name = "label")
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(labels = y, logits = z)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict = {z: logits, y: labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
```
** Expected Output** :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
### 1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
- tf.one_hot(labels, depth, axis)
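Doing the same conversion by hand in numpy takes only a few lines, which also shows the layout that `tf.one_hot(labels, depth, axis=0)` produces (the helper name `one_hot_numpy` is ours, for illustration):

```python
import numpy as np

def one_hot_numpy(labels, C):
    # Row i is class i, column j is example j (the axis=0 layout used below).
    one_hot = np.zeros((C, len(labels)))
    one_hot[labels, np.arange(len(labels))] = 1.0
    return one_hot

print(one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), 4))
```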
**Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
```
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name = "C")
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis = 0, name = "one_hot")
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
```
**Expected Output**:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
### 1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
**Exercise:** Implement the function below to take in a shape and return an array of ones with that shape.
- tf.ones(shape)
```
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
```
**Expected Output:**
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
# 2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
- Create the computation graph
- Run the graph
Let's delve into the problem you'd like to solve!
### 2.0 - Problem statement: SIGNS Dataset
One afternoon, some friends and I decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
```
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
```
Change the index below and run the cell to visualize some examples in the dataset.
```
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
```
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
```
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
**Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
**The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
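To see in what sense SOFTMAX generalizes SIGMOID, note that a two-class softmax over logits $[z, 0]$ assigns the first class exactly $\sigma(z)$ — a quick numpy check (helper names are ours):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 1.5
p = softmax(np.array([z, 0.0]))
print(p[0], sigmoid(z))            # the two values are equal
```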
### 2.1 - Create placeholders
Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session.
**Exercise:** Implement the function below to create the placeholders in tensorflow.
```
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it lets us be flexible on the number of examples fed to the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, shape = [n_x, None], name = "X")
Y = tf.placeholder(tf.float32, shape = [n_y, None], name = "Y")
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
```
**Expected Output**:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
### 2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use `seed = 1` to make sure your results match ours.
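Xavier (Glorot) initialization scales the weight variance by the layer's fan-in and fan-out so activations keep a stable scale across layers. A numpy sketch of the uniform variant (an illustration under our own conventions, not `tf.contrib.layers.xavier_initializer`'s exact implementation):

```python
import numpy as np

def xavier_uniform(shape, seed=None):
    # Uniform Glorot init: limit = sqrt(6 / (fan_in + fan_out)).
    fan_out, fan_in = shape                  # weights stored as (n_out, n_in) here
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=shape)

W1 = xavier_uniform((25, 12288), seed=1)
print(W1.shape)   # → (25, 12288)
```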
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", shape = [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", shape = [25, 1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", shape = [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", shape = [12, 1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", shape = [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", shape = [6, 1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
### 2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
- `tf.add(...,...)` to do an addition
- `tf.matmul(...,...)` to do a matrix multiplication
- `tf.nn.relu(...)` to apply the ReLU activation
**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, A1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
```
**Expected Output**:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
### 2.4 Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
**Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, `tf.reduce_mean` takes the mean over the examples.
```
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
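As a sanity check on what `tf.nn.softmax_cross_entropy_with_logits` followed by `tf.reduce_mean` computes, here is a NumPy sketch of the same quantity (with the log-sum-exp done stably). The input values are illustrative; with uniform logits the mean cost comes out to exactly log(num_classes).

```python
import numpy as np

def softmax_cross_entropy_mean(logits, labels):
    """Mean over examples of -sum(labels * log(softmax(logits))).
    logits, labels: shape (number of examples, num_classes)."""
    shifted = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_example = -(labels * log_probs).sum(axis=1)
    return per_example.mean()                                  # the tf.reduce_mean step

logits = np.zeros((4, 6))                  # 4 examples, 6 classes, uniform logits
labels = np.eye(6)[[0, 2, 3, 5]]           # one-hot labels
cost = softmax_cross_entropy_mean(logits, labels)
print(cost)  # log(6) ≈ 1.7918
```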
### 2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in one line of code. It is very easy to incorporate this line into the model.
After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
```
To make the optimization you would do:
```python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
```
This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.
**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable).
### 2.6 - Building the model
Now, you will bring it all together!
**Exercise:** Implement the model. You will be calling the functions you had previously implemented.
```
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict = {X : minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
```
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
```
parameters = model(X_train, Y_train, X_test, Y_test)
```
**Expected Output**:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
**Insights**:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
### 2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
```
You indeed deserved a "thumbs-up", although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up" images, so the model doesn't know how to deal with it! We call that a "mismatched data distribution", and it is one of the topics of the next course, "Structuring Machine Learning Projects".
<font color='blue'>
**What you should remember**:
- Tensorflow is a programming framework used in deep learning
- The two main object classes in tensorflow are Tensors and Operators.
- When you code in tensorflow you have to take the following steps:
- Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session
- Initialize the session
- Run the session to execute the graph
- You can execute the graph multiple times as you've seen in model()
- The backpropagation and optimization is automatically done when running the session on the "optimizer" object.
<img src="qiskit-heading.gif" width="500 px" align="center">
# _*Qiskit Aqua: Experimenting with Traveling Salesman problem with variational quantum eigensolver*_
This notebook is based on an official notebook by Qiskit team, available at https://github.com/qiskit/qiskit-tutorial under the [Apache License 2.0](https://github.com/Qiskit/qiskit-tutorial/blob/master/LICENSE) license.
The original notebook was developed by Antonio Mezzacapo<sup>[1]</sup>, Jay Gambetta<sup>[1]</sup>, Kristan Temme<sup>[1]</sup>, Ramis Movassagh<sup>[1]</sup>, Albert Frisch<sup>[1]</sup>, Takashi Imamichi<sup>[1]</sup>, Giacomo Nannicini<sup>[1]</sup>, Richard Chen<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>(<sup>[1]</sup>IBMQ)
Your **TASK** is to execute every step of this notebook, learning to use qiskit-aqua and how to reduce a general problem model to known problems that qiskit-aqua can solve, namely the [Travelling salesman problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem).
## Introduction
Many problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and definition of strategies.
Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called cost function or objective function.
**Typical optimization problems**
Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objects
Maximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects
We consider here the Max-Cut problem, which is of practical interest in many fields, and show how it can be mapped onto quantum computers.
### Weighted Max-Cut
Max-Cut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given Max-Cut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strong they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues.
The formal definition of this problem is the following:
Consider an $n$-node undirected graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1)
$$\tilde{C}(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$
In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$, corresponding to the case where the individual $j$ will buy more than one product. Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes
$$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$
In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that
$$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i \frac{w_i}{2} (1-Z_i) = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_i Z_j +\sum_i w_i Z_i\right)+\mathrm{const},$$
where const = $\sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted Max-Cut problem is equivalent to minimizing the Ising Hamiltonian
$$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$
Aqua can generate the Ising Hamiltonian for the first profit function $\tilde{C}$.
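The substitution $x_i \rightarrow (1-Z_i)/2$ can be checked numerically: for every assignment, the cut value $\tilde{C}(\mathbf{x})$ equals $-\frac{1}{2}\sum_{i<j} w_{ij} z_i z_j + \sum_{i<j} w_{ij}/2$ with $z_i = 1 - 2x_i$. Here is a small brute-force NumPy check (random symmetric weights, chosen purely for illustration):

```python
import itertools
import numpy as np

rng = np.random.RandomState(42)
n = 4
w = rng.rand(n, n)
w = (w + w.T) / 2                      # symmetric weights, w_ij = w_ji
np.fill_diagonal(w, 0)                 # no self-loops

const = sum(w[i, j] for i in range(n) for j in range(i + 1, n)) / 2  # sum_{i<j} w_ij / 2

max_gap = 0.0
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    cut = sum(w[i, j] * x[i] * (1 - x[j]) for i in range(n) for j in range(n))
    z = 1 - 2 * x                      # the substitution x_i = (1 - z_i) / 2
    ising = sum(w[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    max_gap = max(max_gap, abs(cut - (-0.5 * ising + const)))
print("largest discrepancy over all", 2 ** n, "assignments:", max_gap)
```

Maximizing the cut is therefore the same as minimizing the Ising energy, up to the constant shift.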
### Approximate Universal Quantum Computing for Optimization Problems
There has been a considerable amount of interest in recent times about the use of quantum computers to find a solution to combinatorial problems. It is important to say that, given the classical nature of combinatorial problems, exponential speedup in using quantum computers compared to the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. Here we demonstrate an approach that is based on the Quantum Approximate Optimization Algorithm by Farhi, Goldstone, and Gutmann (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature.
The Algorithm works as follows:
1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed.
2. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.
3. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$.
4. Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen.
5. Use a classical optimizer to choose a new set of controls.
6. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.
7. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer.
It is our belief that the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. In this tutorial, we will consider a simple trial function of the form
$$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$
where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that for these classical problems this choice allows us to search over the space of quantum states that have only real coefficients, still exploiting the entanglement to potentially converge faster to the solution.
One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, allowing this algorithm not to be limited to the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with.
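To make steps 3–6 concrete without any quantum hardware, here is a toy 2-qubit NumPy simulation of this trial function with depth $m=1$, a CZ entangler, and a crude grid search standing in for the classical optimizer, minimizing the illustrative Hamiltonian $H = Z_1 Z_2$ (the Hamiltonian, grid resolution, and system size are our own choices, not part of the original notebook):

```python
import numpy as np

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CZ = np.diag([1.0, 1.0, 1.0, -1.0])     # fully entangling C-Phase gate
plus_plus = np.full(4, 0.5)             # the |+>|+> starting state
ZZ = np.diag([1.0, -1.0, -1.0, 1.0])    # target Hamiltonian H = Z1 Z2

def energy(t1, t2):
    # trial state [U_single(theta) U_entangler] |+>, depth m = 1
    psi = np.kron(ry(t1), ry(t2)) @ (CZ @ plus_plus)
    return float(psi @ ZZ @ psi)        # <psi|H|psi>

thetas = np.linspace(0, 2 * np.pi, 17)  # grid search as a stand-in for the optimizer
best = min(energy(a, b) for a in thetas for b in thetas)
print(best)  # the ground-state energy of Z1 Z2 is -1
```

Even this tiny real-amplitude ansatz reaches the ground state, illustrating why restricting to real coefficients is enough for these classical cost functions.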
References:
- A. Lucas, Frontiers in Physics 2, 5 (2014)
- E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014)
- D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016)
- E. Farhi, J. Goldstone, S. Gutmann, H. Neven e-print arXiv 1703.06199 (2017)
```
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx
from qiskit.tools.visualization import plot_histogram
from qiskit.aqua import Operator, run_algorithm, get_algorithm_instance
from qiskit.aqua.input import get_input_instance
from qiskit.aqua.translators.ising import max_cut, tsp
# setup aqua logging
import logging
from qiskit.aqua._logging import set_logging_config, build_logging_config
# set_logging_config(build_logging_config(logging.DEBUG)) # choose INFO, DEBUG to see the log
# ignoring deprecation errors on matplotlib
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
```
### [Optional] Setup token to run the experiment on a real device
If you would like to run the experiment on a real device, you need to set up your account first.
Note: If you have not stored your token yet, use `IBMQ.save_accounts()` to store it first.
```
from qiskit import IBMQ
IBMQ.load_accounts()
```
## Traveling Salesman Problem
In addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for over two centuries, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person that goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to his hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time.
The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice.
The mathematical formulation with some early analysis was proposed by W.R. Hamilton in the early 19th century. Mathematically the problem is, as in the case of Max-Cut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamiltonian cycle is a closed path that uses every vertex of a graph once. The general solution is unknown and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist.
Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $n=|V|$ nodes and distances $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $N^2$ variables $x_{i,p}$, where $i$ represents the node and $p$ represents its order in a prospective cycle. The decision variable takes the value 1 if the solution occurs at node $i$ at time order $p$. We require that every node appear exactly once in the cycle, and that each time step be occupied by exactly one node. This amounts to the two constraints (here and in the following, whenever not specified, the summands run over 0,1,...N-1)
$$\sum_{i} x_{i,p} = 1 ~~\forall p$$
$$\sum_{p} x_{i,p} = 1 ~~\forall i.$$
For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is
$$\sum_{i,j\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$
where the boundary condition of the Hamiltonian cycle $(p=N)\equiv (p=0)$ is assumed. However, here we assume a fully connected graph and do not include this term. The distance that needs to be minimized is
$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$
Putting this all together in a single objective function to be minimized, we get the following:
$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$
where $A$ is a free parameter. One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$.
Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing an Ising Hamiltonian.
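The objective above can be checked directly in NumPy: for a feasible assignment (a permutation matrix $x_{i,p}$) both penalty terms vanish and $C(\mathbf{x})$ reduces to the tour length, while an infeasible assignment pays at least $A$. A small illustrative check, with weights, tour, and $A$ chosen arbitrarily:

```python
import numpy as np

rng = np.random.RandomState(1)
N = 4
w = rng.rand(N, N); w = (w + w.T) / 2; np.fill_diagonal(w, 0)
A = w.max() + 1.0                          # penalty weight, A > max(w_ij)

def cost(x):
    """C(x) = sum_{i,j} w_ij sum_p x_{i,p} x_{j,p+1} + penalty terms; x has shape (N, N)."""
    tour = sum(w[i, j] * x[i, p] * x[j, (p + 1) % N]
               for i in range(N) for j in range(N) for p in range(N))
    pen_time = A * sum((1 - x[:, p].sum()) ** 2 for p in range(N))   # one node per time step
    pen_node = A * sum((1 - x[i, :].sum()) ** 2 for i in range(N))   # each node used once
    return tour + pen_time + pen_node

order = [0, 2, 1, 3]                       # a candidate tour
x = np.zeros((N, N))
for p, i in enumerate(order):
    x[i, p] = 1                            # node i visited at time p
tour_length = sum(w[order[p], order[(p + 1) % N]] for p in range(N))
print(np.isclose(cost(x), tour_length))    # feasible: penalties vanish

x_bad = x.copy(); x_bad[order[0], 0] = 0   # drop a visit -> infeasible
print(cost(x_bad) >= A)                    # pays at least one penalty unit
```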
```
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n)
G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
colors = ['r' for node in G.nodes()]
pos = {k: v for k, v in enumerate(ins.coord)}
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
print('distance\n', ins.w)
```
### Brute force approach
```
from itertools import permutations
def brute_force_tsp(w, N):
a=list(permutations(range(1,N)))
last_best_distance = 1e10
for i in a:
distance = 0
pre_j = 0
for j in i:
distance = distance + w[j,pre_j]
pre_j = j
distance = distance + w[pre_j,0]
order = (0,) + i
if distance < last_best_distance:
best_order = order
last_best_distance = distance
print('order = ' + str(order) + ' Distance = ' + str(distance))
return last_best_distance, best_order
best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))
def draw_tsp_solution(G, order, colors, pos):
G2 = G.copy()
n = len(order)
for i in range(n):
j = (i + 1) % n
G2.add_edge(order[i], order[j])
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G2, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
draw_tsp_solution(G, best_order, colors, pos)
```
### Mapping to the Ising problem
```
qubitOp, offset = tsp.get_tsp_qubitops(ins)
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
```
### Checking that the full Hamiltonian gives the right cost
```
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'problem': {'name': 'ising'},
'algorithm': algorithm_cfg
}
result = run_algorithm(params,algo_input)
print('energy:', result['energy'])
#print('tsp objective:', result['energy'] + offset)
x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
```
### Running it on quantum computer
We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
```
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'SPSA',
'max_trials': 300
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg,
'backend': {'name': 'statevector_simulator'}
}
result = run_algorithm(params,algo_input)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)
x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
# run quantum algorithm with shots
params['algorithm']['operator_mode'] = 'grouped_paulis'
params['backend']['name'] = 'qasm_simulator'
params['backend']['shots'] = 1024
result = run_algorithm(params,algo_input)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)
x = tsp.sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
plot_histogram(result['eigvecs'][0])
draw_tsp_solution(G, z, colors, pos)
```
### Solve the global sequence alignment problem using the Needleman-Wunsch algorithm
```
import numpy as np
equal_score = 1
unequal_score = -1
space_score = -2
# the Needleman-Wunsch algorithm may produce negative scores
def createScoreMatrix(list1, list2, debug=False):
lenList1, lenList2 = len(list1), len(list2)
#initialize matrix
scoreMatrix = np.zeros((lenList1+1, lenList2+1), dtype=int)
for i in range(1, lenList1+1):
scoreMatrix[i][0] = i * space_score
for j in range(1, lenList2+1):
scoreMatrix[0][j] = j * space_score
#populate the matrix
for i, x in enumerate(list1):
for j, y in enumerate(list2):
if x == y:
scoreMatrix[i+1][j+1] = scoreMatrix[i][j]+equal_score
else:
scoreMatrix[i+1][j+1] = max(scoreMatrix[i][j+1]+space_score, scoreMatrix[i+1][j]+space_score, scoreMatrix[i][j]+unequal_score)
if debug:
print("score Matrix:")
print(scoreMatrix)
return scoreMatrix
list1=[1, 2, 4, 6,7,8,0]
list2=[4,5,7,1,2,0]
print(createScoreMatrix(list1, list2))
list1=list("GCCCTAGCG")
list2=list("GCGCAATG")
print(createScoreMatrix(list1, list2))
def traceBack(list1, list2, scoreMatrix):
'''
Return:
alignedList1, alignedList2, commonSub
'''
commonSub = []
alignedList1 = []
alignedList2 = []
i, j = scoreMatrix.shape[0]-1, scoreMatrix.shape[1]-1
if i == 0 or j == 0:
return list1, list2, commonSub
else:
while i != 0 and j != 0: # order of preference: diagonal, up, left
if list1[i-1] == list2[j-1]:
commonSub.append(list1[i-1])
alignedList1.append(list1[i-1])
alignedList2.append(list2[j-1])
i -= 1
j -= 1
elif scoreMatrix[i][j] == scoreMatrix[i-1][j-1] + unequal_score:
alignedList1.append(list1[i-1])
alignedList2.append(list2[j-1])
i -= 1
j -= 1
elif scoreMatrix[i][j] == scoreMatrix[i-1][j] + space_score:
alignedList1.append(list1[i-1])
alignedList2.append('_')
i -= 1
else:#scoreMatrix[i][j] == scoreMatrix[i][j-1] + space_score:
alignedList1.append('_')
alignedList2.append(list2[j-1])
j -= 1
# reached the leftmost column or the top row, but not yet at position (0, 0)
while i > 0:
alignedList1.append(list1[i-1])
alignedList2.append('_')
i -= 1
while j > 0:
alignedList2.append(list2[j-1])
alignedList1.append('_')
j -= 1
alignedList1.reverse()
alignedList2.reverse()
commonSub.reverse()
return alignedList1, alignedList2, commonSub
list1=[1, 2, 4, 6,7,8,0]
list2=[4,5,7,1,2,0]
alignedList1, alignedList2, commonSub= traceBack(list1, list2, createScoreMatrix(list1, list2))
print(alignedList1)
print(alignedList2)
print(commonSub)
def needleman_wunsch(list1, list2, debug=False):
return traceBack(list1, list2, createScoreMatrix(list1, list2, debug))
list1 = list("GCCCTAGCG")
list2 = list("GCGCAATG")
alignedList1, alignedList2, commonSub = needleman_wunsch(list1, list2, True)
print(alignedList1)
print(alignedList2)
print(commonSub)
text1 = "this is a test for text alignment from xxxx"
text2 = "Hi, try A test for alignment , Heirish"
list1 = text1.lower().split(" ")
list2 = text2.lower().split(" ")
alignedList1, alignedList2, commonSub = needleman_wunsch(list1, list2)
print(alignedList1)
print(alignedList2)
print(commonSub)
```
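The recurrence behind `createScoreMatrix` is, in its standard form, `S[i][j] = max(S[i-1][j-1] + match/mismatch, S[i-1][j] + gap, S[i][j-1] + gap)` (note that the implementation above takes the diagonal move unconditionally whenever the characters match, a simplification of this). A minimal standalone sketch of the standard recurrence, using the same scores as above:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score via the standard Needleman-Wunsch recurrence."""
    S = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        S[i][0] = i * gap                      # align a's prefix against gaps
    for j in range(1, len(b) + 1):
        S[0][j] = j * gap                      # align b's prefix against gaps
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S[i][j] = max(diag, S[i - 1][j] + gap, S[i][j - 1] + gap)
    return S[len(a)][len(b)]

print(nw_score("GATTACA", "GATTACA"))  # 7: all matches
print(nw_score("GATTACA", "GATTA"))    # 1: five matches plus two gaps
```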
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
def build_dataset(words, n_words, atleast=1):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 3)  # 3 is the index of 'UNK'
if index == 3:
unk_count += 1
data.append(index)
count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\n')
conv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\n')
id2line = {}
for line in lines:
_line = line.split(' +++$+++ ')
if len(_line) == 5:
id2line[_line[0]] = _line[4]
convs = [ ]
for line in conv_lines[:-1]:
_line = line.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","")
convs.append(_line.split(','))
questions = []
answers = []
for conv in convs:
for i in range(len(conv)-1):
questions.append(id2line[conv[i]])
answers.append(id2line[conv[i+1]])
def clean_text(text):
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
text = re.sub(r"what's", "what is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return ' '.join([i.strip() for i in filter(None, text.split())])
clean_questions = []
for question in questions:
clean_questions.append(clean_text(question))
clean_answers = []
for answer in answers:
clean_answers.append(clean_text(answer))
min_line_length = 2
max_line_length = 5
short_questions_temp = []
short_answers_temp = []
i = 0
for question in clean_questions:
if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:
short_questions_temp.append(question)
short_answers_temp.append(clean_answers[i])
i += 1
short_questions = []
short_answers = []
i = 0
for answer in short_answers_temp:
if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:
short_answers.append(answer)
short_questions.append(short_questions_temp[i])
i += 1
question_test = short_questions[500:550]
answer_test = short_answers[500:550]
short_questions = short_questions[:500]
short_answers = short_answers[:500]
concat_from = ' '.join(short_questions+question_test).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = ' '.join(short_answers+answer_test).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab to size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(short_answers)):
short_answers[i] += ' EOS'
class Chatbot:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate, batch_size):
def cells(reuse=False):
return tf.nn.rnn_cell.LSTMCell(size_layer,initializer=tf.orthogonal_initializer(),
reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.placeholder(tf.int32, [None])
self.Y_seq_len = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
main = tf.strided_slice(self.X, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
decoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, decoder_input)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size_layer,
memory = encoder_embedded)
rnn_cells = tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
_, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded,
dtype = tf.float32)
last_state = tuple(last_state[0][-1] for _ in range(num_layers))
with tf.variable_scope("decoder"):
rnn_cells_dec = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
outputs, _ = tf.nn.dynamic_rnn(rnn_cells_dec, decoder_embedded,
initial_state = last_state,
dtype = tf.float32)
self.logits = tf.layers.dense(outputs,to_dict_size)
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
size_layer = 128
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 16
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate,batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
X = str_idx(short_questions, dictionary_from)
Y = str_idx(short_answers, dictionary_to)
X_test = str_idx(question_test, dictionary_from)
Y_test = str_idx(answer_test, dictionary_to)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = 10
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(max_sentence_len)
return padded_seqs, seq_lens
def check_accuracy(logits, Y):
acc = 0
for i in range(logits.shape[0]):
internal_acc = 0
count = 0
for k in range(len(Y[i])):
try:
if Y[i][k] == logits[i][k]:
internal_acc += 1
count += 1
if Y[i][k] == EOS:
break
except:
break
acc += (internal_acc / count)
return acc / logits.shape[0]
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, (len(short_questions) // batch_size) * batch_size, batch_size):
batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD)
predicted, loss, _ = sess.run([tf.argmax(model.logits,2), model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y,
model.X_seq_len:seq_x,
model.Y_seq_len:seq_y})
total_loss += loss
total_accuracy += check_accuracy(predicted,batch_y)
total_loss /= (len(short_questions) // batch_size)
total_accuracy /= (len(short_questions) // batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD)
predicted = sess.run(tf.argmax(model.logits,2), feed_dict={model.X:batch_x,model.X_seq_len:seq_x})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
## Regression Analysis : First Machine Learning Algorithm !!
### Machine learning
- is an application of artificial intelligence (AI) that provides systems the __ability to automatically learn and improve from experience without being explicitly programmed__.
<img style="float: left;" src = "./img/ml_definition.png" width="600" height="600">
<img style="float: left;" src = "./img/traditionalVsml.png" width="600" height="600">
### Types of Machine Learning
<img style="float: left;" src = "./img/types-ml.png" width="700" height="600">
<br>
<br>
<img style="float: left;" src = "./img/ml-ex.png" width="800" height="700">
__Why use linear regression?__
1. Easy to use
2. Easy to interpret
3. Basis for many methods
4. Runs fast
5. Most people have heard about it :-)
### Libraries in Python for Linear Regression
The two most popular ones are
1. `scikit-learn`
2. `statsmodels`
We highly recommend learning `scikit-learn`, since it is also the general-purpose machine learning package in Python.
### Linear regression
Let's use `scikit-learn` for this example.
Linear regression is of the form:
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$
- $y$ is what we have to predict — the dependent/response variable
- $\beta_0$ is the intercept
- $\beta_1$ is the coefficient for $x_1$ (the first feature/independent variable)
- $\beta_n$ is the coefficient for $x_n$ (the nth feature/independent variable)
The $\beta$ values are called *model coefficients*.
The model coefficients are estimated when the model is fit (in machine learning parlance, the weights are learned by the algorithm); the objective function is the least squares criterion.
<br>
**Least Squares Method** : To identify the weights so that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. [Wiki](https://en.wikipedia.org/wiki/Least_squares)
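As a quick illustration of the least squares objective (a minimal sketch with made-up toy data, separate from the mall-customers dataset used below), the best-fit coefficients can be computed directly with NumPy:

```python
# Minimal least-squares sketch: choose beta to minimize the sum of
# squared errors ||y - X @ beta||^2. Toy data, assumed for illustration.
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])     # first column of ones -> intercept term
y = np.array([2.0, 3.0, 4.0])  # here exactly y = 1 + 1*x

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # approximately [1. 1.]: intercept 1, slope 1
```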
<img style="float: left;" src = "./img/lin_reg.jpg" width="600" height="600">
<h2> Model Building & Testing Methodology </h2>
<img src="./img/train_test.png" alt="Train & Test Methodology" width="700" height="600">
<br>
<br>
<br>
### Must read blog:
Interpretable Machine Learning by Christoph Molnar
https://christophm.github.io/interpretable-ml-book/intro.html
```
# Step1: Import packages
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
sns.set(color_codes = True)
%matplotlib inline
# Step2: Load our data
df = pd.read_csv('Mall_Customers.csv')
df.rename(columns={'CustomerID':'id','Spending Score (1-100)':'score','Annual Income (k$)':'income'},inplace=True)
df.head() # Visualize first 5 rows of data
df.tail()
# Step3: Feature Engineering - transforming variables as appropriate for inputs to Machine Learning Algorithm
# transforming the categorical variable Gender using one-hot encoding
gender_onhot = pd.get_dummies(df['Gender'])
gender_onhot.tail()
# Create input dataset aka X
X = pd.merge(df[['Age','score']], gender_onhot, left_index=True, right_index=True)
X.head()
sns.pairplot(X[['Age','score']])
print("Correlation between variables.........")
X.iloc[:,:4].corr()
# Create target variable
Y = df['income']
Y.head()
# Step3: Split data in train & test set
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=0.10,random_state = 35)
print('Shape of Training Xs:{}'.format(X_train.shape))
print('Shape of Test Xs:{}'.format(X_test.shape))
# Step4: Build Linear Regression Analysis Model
learner = LinearRegression(); #initializing linear regression model
learner.fit(X_train,y_train); #training the linear regression model
y_predicted = learner.predict(X_test)
score=learner.score(X_test,y_test);#testing the linear regression model
```
### Interpretation
__Score__: R^2 (pronounced "R squared"), also known as the __coefficient of determination__ of the prediction.
__Range of Score values__: typically 0 to 1; 0 means the predictions explain none of the variance in y, and 1 is the best case, where the predicted values equal the actual values. (The score can even be negative for a model that performs worse than always predicting the mean.)
__Formula for Score__: R^2 = (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum()
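The score formula above can be checked by hand (the toy numbers here are assumed, purely for illustration):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9])

u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
r2 = 1 - u / v                              # R^2 = 1 - u/v
print(round(r2, 4))  # 0.995
```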
```
print(score)
print(y_predicted)
sns.boxplot(x = df['score'])
sns.distplot(df['score'])
# Step5: Check Accuracy of Model
df_new = pd.DataFrame({"true_income":y_test,"predicted_income":y_predicted})
df_new
# Step6: Diagnostic analysis
from sklearn.metrics import mean_squared_error, r2_score
print("Intercept is at: %.2f"%(learner.intercept_))
# The coefficients
print('Coefficients: \n', learner.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(y_test, y_predicted))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.4f' % r2_score(y_test, y_predicted))
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#export
from fastai.basics import *
from fastai.text.core import *
from fastai.text.data import *
from fastai.text.models.core import *
from fastai.text.models.awdlstm import *
from fastai.callback.rnn import *
from fastai.callback.progress import *
#hide
from nbdev.showdoc import *
#default_exp text.learner
```
# Learner for the text application
> All the functions necessary to build `Learner` suitable for transfer learning in NLP
The most important functions of this module are `language_model_learner` and `text_classifier_learner`. They will help you define a `Learner` using a pretrained model. See the [text tutorial](http://docs.fast.ai/tutorial.text) for examples of use.
## Loading a pretrained model
In text, to load a pretrained model, we need to adapt the embeddings of the vocabulary used for the pre-training to the vocabulary of our current corpus.
```
#export
def match_embeds(old_wgts, old_vocab, new_vocab):
"Convert the embedding in `old_wgts` to go from `old_vocab` to `new_vocab`."
bias, wgts = old_wgts.get('1.decoder.bias', None), old_wgts['0.encoder.weight']
wgts_m = wgts.mean(0)
new_wgts = wgts.new_zeros((len(new_vocab),wgts.size(1)))
if bias is not None:
bias_m = bias.mean(0)
new_bias = bias.new_zeros((len(new_vocab),))
old_o2i = old_vocab.o2i if hasattr(old_vocab, 'o2i') else {w:i for i,w in enumerate(old_vocab)}
for i,w in enumerate(new_vocab):
idx = old_o2i.get(w, -1)
new_wgts[i] = wgts[idx] if idx>=0 else wgts_m
if bias is not None: new_bias[i] = bias[idx] if idx>=0 else bias_m
old_wgts['0.encoder.weight'] = new_wgts
if '0.encoder_dp.emb.weight' in old_wgts: old_wgts['0.encoder_dp.emb.weight'] = new_wgts.clone()
old_wgts['1.decoder.weight'] = new_wgts.clone()
if bias is not None: old_wgts['1.decoder.bias'] = new_bias
return old_wgts
```
For words in `new_vocab` that don't have a corresponding match in `old_vocab`, we use the mean of all pretrained embeddings.
```
wgts = {'0.encoder.weight': torch.randn(5,3)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
test_eq(new[0], old[0])
test_eq(new[1], old[2])
test_eq(new[2], old.mean(0))
test_eq(new[3], old[1])
#hide
#With bias
wgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
old_b,new_b = wgts['1.decoder.bias'], new_wgts['1.decoder.bias']
test_eq(new_w[0], old_w[0])
test_eq(new_w[1], old_w[2])
test_eq(new_w[2], old_w.mean(0))
test_eq(new_w[3], old_w[1])
test_eq(new_b[0], old_b[0])
test_eq(new_b[1], old_b[2])
test_eq(new_b[2], old_b.mean(0))
test_eq(new_b[3], old_b[1])
#export
def _get_text_vocab(dls):
vocab = dls.vocab
if isinstance(vocab, L): vocab = vocab[0]
return vocab
#export
def load_ignore_keys(model, wgts):
"Load `wgts` in `model` ignoring the names of the keys, just taking parameters in order"
sd = model.state_dict()
for k1,k2 in zip(sd.keys(), wgts.keys()): sd[k1].data = wgts[k2].data.clone()
return model.load_state_dict(sd)
#export
def _rm_module(n):
t = n.split('.')
for i in range(len(t)-1, -1, -1):
if t[i] == 'module':
t.pop(i)
break
return '.'.join(t)
#export
#For previous versions compatibility, remove for release
def clean_raw_keys(wgts):
keys = list(wgts.keys())
for k in keys:
t = k.split('.module')
if f'{_rm_module(k)}_raw' in keys: del wgts[k]
return wgts
#export
#For previous versions compatibility, remove for release
def load_model_text(file, model, opt, with_opt=None, device=None, strict=True):
"Load `model` from `file` along with `opt` (if available, and if `with_opt`)"
distrib_barrier()
if isinstance(device, int): device = torch.device('cuda', device)
elif device is None: device = 'cpu'
state = torch.load(file, map_location=device)
hasopt = set(state)=={'model', 'opt'}
model_state = state['model'] if hasopt else state
get_model(model).load_state_dict(clean_raw_keys(model_state), strict=strict)
if hasopt and ifnone(with_opt,True):
try: opt.load_state_dict(state['opt'])
except:
if with_opt: warn("Could not load the optimizer state.")
    elif with_opt: warn("Saved file doesn't contain an optimizer state.")
#export
@log_args(but_as=Learner.__init__)
@delegates(Learner.__init__)
class TextLearner(Learner):
"Basic class for a `Learner` in NLP."
def __init__(self, dls, model, alpha=2., beta=1., moms=(0.8,0.7,0.8), **kwargs):
super().__init__(dls, model, moms=moms, **kwargs)
self.add_cbs([ModelResetter(), RNNRegularizer(alpha=alpha, beta=beta)])
def save_encoder(self, file):
"Save the encoder to `file` in the model directory"
if rank_distrib(): return # don't save if child proc
encoder = get_model(self.model)[0]
if hasattr(encoder, 'module'): encoder = encoder.module
torch.save(encoder.state_dict(), join_path_file(file, self.path/self.model_dir, ext='.pth'))
def load_encoder(self, file, device=None):
"Load the encoder `file` from the model directory, optionally ensuring it's on `device`"
encoder = get_model(self.model)[0]
if device is None: device = self.dls.device
if hasattr(encoder, 'module'): encoder = encoder.module
distrib_barrier()
wgts = torch.load(join_path_file(file,self.path/self.model_dir, ext='.pth'), map_location=device)
encoder.load_state_dict(clean_raw_keys(wgts))
self.freeze()
return self
def load_pretrained(self, wgts_fname, vocab_fname, model=None):
"Load a pretrained model and adapt it to the data vocabulary."
old_vocab = load_pickle(vocab_fname)
new_vocab = _get_text_vocab(self.dls)
distrib_barrier()
wgts = torch.load(wgts_fname, map_location = lambda storage,loc: storage)
if 'model' in wgts: wgts = wgts['model'] #Just in case the pretrained model was saved with an optimizer
wgts = match_embeds(wgts, old_vocab, new_vocab)
load_ignore_keys(self.model if model is None else model, clean_raw_keys(wgts))
self.freeze()
return self
#For previous versions compatibility. Remove at release
@delegates(load_model_text)
def load(self, file, with_opt=None, device=None, **kwargs):
if device is None: device = self.dls.device
if self.opt is None: self.create_opt()
file = join_path_file(file, self.path/self.model_dir, ext='.pth')
load_model_text(file, self.model, self.opt, device=device, **kwargs)
return self
```
Adds a `ModelResetter` and an `RNNRegularizer` with `alpha` and `beta` to the callbacks, the rest is the same as `Learner` init.
This `Learner` adds functionality to the base class:
```
show_doc(TextLearner.load_pretrained)
```
`wgts_fname` should point to the weights of the pretrained model and `vocab_fname` to the vocabulary used to pretrain it.
```
show_doc(TextLearner.save_encoder)
```
The model directory is `Learner.path/Learner.model_dir`.
```
show_doc(TextLearner.load_encoder)
```
## Language modeling predictions
For language modeling, the predict method is quite different from the other applications, which is why it needs its own subclass.
```
#export
def decode_spec_tokens(tokens):
"Decode the special tokens in `tokens`"
new_toks,rule,arg = [],None,None
for t in tokens:
if t in [TK_MAJ, TK_UP, TK_REP, TK_WREP]: rule = t
elif rule is None: new_toks.append(t)
elif rule == TK_MAJ:
new_toks.append(t[:1].upper() + t[1:].lower())
rule = None
elif rule == TK_UP:
new_toks.append(t.upper())
rule = None
elif arg is None:
try: arg = int(t)
except: rule = None
else:
if rule == TK_REP: new_toks.append(t * arg)
else: new_toks += [t] * arg
return new_toks
test_eq(decode_spec_tokens(['xxmaj', 'text']), ['Text'])
test_eq(decode_spec_tokens(['xxup', 'text']), ['TEXT'])
test_eq(decode_spec_tokens(['xxrep', '3', 'a']), ['aaa'])
test_eq(decode_spec_tokens(['xxwrep', '3', 'word']), ['word', 'word', 'word'])
#export
@log_args(but_as=TextLearner.__init__)
class LMLearner(TextLearner):
"Add functionality to `TextLearner` when dealing with a language model"
def predict(self, text, n_words=1, no_unk=True, temperature=1., min_p=None, no_bar=False,
decoder=decode_spec_tokens, only_last_word=False):
"Return `text` and the `n_words` that come after"
self.model.reset()
idxs = idxs_all = self.dls.test_dl([text]).items[0].to(self.dls.device)
if no_unk: unk_idx = self.dls.vocab.index(UNK)
for _ in (range(n_words) if no_bar else progress_bar(range(n_words), leave=False)):
with self.no_bar(): preds,_ = self.get_preds(dl=[(idxs[None],)])
res = preds[0][-1]
if no_unk: res[unk_idx] = 0.
if min_p is not None:
if (res >= min_p).float().sum() == 0:
warn(f"There is no item with probability >= {min_p}, try a lower value.")
else: res[res < min_p] = 0.
if temperature != 1.: res.pow_(1 / temperature)
idx = torch.multinomial(res, 1).item()
idxs = idxs_all = torch.cat([idxs_all, idxs.new([idx])])
if only_last_word: idxs = idxs[-1][None]
num = self.dls.train_ds.numericalize
tokens = [num.vocab[i] for i in idxs_all if num.vocab[i] not in [BOS, PAD]]
sep = self.dls.train_ds.tokenizer.sep
return sep.join(decoder(tokens))
@delegates(Learner.get_preds)
def get_preds(self, concat_dim=1, **kwargs): return super().get_preds(concat_dim=1, **kwargs)
show_doc(LMLearner, title_level=3)
show_doc(LMLearner.predict)
```
The words are picked randomly among the predictions, depending on the probability of each index. `no_unk` means we never pick the `UNK` token, `temperature` is applied to the predictions, and if `min_p` is passed, indices with a probability lower than it are not considered. Set `no_bar` to `True` if you don't want any progress bar, and you can pass along a custom `decoder` to process the predicted tokens.
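A rough NumPy sketch of the sampling step described above (illustrative only — not fastai's exact implementation, and the toy probabilities are assumed):

```python
import numpy as np

probs = np.array([0.6, 0.3, 0.08, 0.02])  # model's next-token probabilities
temperature, min_p = 0.5, 0.05

res = probs.copy()
res[res < min_p] = 0.0          # drop indices below the min_p threshold
res = res ** (1 / temperature)  # temperature < 1 sharpens the distribution
res /= res.sum()                # renormalize before sampling
idx = np.random.choice(len(res), p=res)  # pick randomly by probability
print(idx)
```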
## `Learner` convenience functions
```
#export
from fastai.text.models.core import _model_meta
#export
def _get_text_vocab(dls):
vocab = dls.vocab
if isinstance(vocab, L): vocab = vocab[0]
return vocab
#export
@log_args(to_return=True, but_as=Learner.__init__)
@delegates(Learner.__init__)
def language_model_learner(dls, arch, config=None, drop_mult=1., backwards=False, pretrained=True, pretrained_fnames=None, **kwargs):
"Create a `Learner` with a language model from `dls` and `arch`."
vocab = _get_text_vocab(dls)
model = get_language_model(arch, len(vocab), config=config, drop_mult=drop_mult)
meta = _model_meta[arch]
learn = LMLearner(dls, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_lm'], **kwargs)
url = 'url_bwd' if backwards else 'url'
if pretrained or pretrained_fnames:
if pretrained_fnames is not None:
fnames = [learn.path/learn.model_dir/f'{fn}.{ext}' for fn,ext in zip(pretrained_fnames, ['pth', 'pkl'])]
else:
if url not in meta:
warn("There are no pretrained weights for that architecture yet!")
return learn
model_path = untar_data(meta[url] , c_key='model')
try: fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
except IndexError: print(f'The model in {model_path} is incomplete, download again'); raise
learn = learn.load_pretrained(*fnames)
return learn
```
You can use the `config` to customize the architecture used (change the values from `awd_lstm_lm_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available) or you can pass specific `pretrained_fnames` containing your own pretrained model and the corresponding vocabulary. All other arguments are passed to `Learner`.
```
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid')
learn = language_model_learner(dls, AWD_LSTM)
```
You can then use the `.predict` method to generate new text.
```
learn.predict('This movie is about', n_words=20)
```
By default the entire sequence is fed back to the model after each predicted word; this little trick improves the quality of the generated text. If you want to feed only the last word, pass `only_last_word=True`.
```
learn.predict('This movie is about', n_words=20, only_last_word=True)
#export
@log_args(to_return=True, but_as=Learner.__init__)
@delegates(Learner.__init__)
def text_classifier_learner(dls, arch, seq_len=72, config=None, backwards=False, pretrained=True, drop_mult=0.5, n_out=None,
lin_ftrs=None, ps=None, max_len=72*20, y_range=None, **kwargs):
"Create a `Learner` with a text classifier from `dls` and `arch`."
vocab = _get_text_vocab(dls)
if n_out is None: n_out = get_c(dls)
assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`"
model = get_text_classifier(arch, len(vocab), n_out, seq_len=seq_len, config=config, y_range=y_range,
drop_mult=drop_mult, lin_ftrs=lin_ftrs, ps=ps, max_len=max_len)
meta = _model_meta[arch]
learn = TextLearner(dls, model, splitter=meta['split_clas'], **kwargs)
url = 'url_bwd' if backwards else 'url'
if pretrained:
if url not in meta:
warn("There are no pretrained weights for that architecture yet!")
return learn
model_path = untar_data(meta[url], c_key='model')
try: fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
except IndexError: print(f'The model in {model_path} is incomplete, download again'); raise
learn = learn.load_pretrained(*fnames, model=learn.model[0])
learn.freeze()
return learn
```
You can use the `config` to customize the architecture used (change the values from `awd_lstm_clas_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available). `drop_mult` is a global multiplier applied to control all dropouts. `n_out` is usually inferred from the `dls` but you may pass it.
The model uses a `SentenceEncoder`, which means the texts are passed `seq_len` tokens at a time, and will only compute the gradients on the last `max_len` steps. `lin_ftrs` and `ps` are passed to `get_text_classifier`.
All other arguments are passed to `Learner`.
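The chunked reading described above can be pictured with a tiny helper (hypothetical, not fastai's actual `SentenceEncoder`):

```python
def chunks(tokens, seq_len):
    # Split a long token sequence into pieces of at most seq_len tokens,
    # the way the encoder consumes the text seq_len tokens at a time.
    return [tokens[i:i + seq_len] for i in range(0, len(tokens), seq_len)]

pieces = chunks(list(range(10)), 4)
print(pieces)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```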
```
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
dls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid')
learn = text_classifier_learner(dls, AWD_LSTM)
```
## Show methods -
```
#export
@typedispatch
def show_results(x: LMTensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs):
if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))
for i,l in enumerate(['input', 'target']):
ctxs = [b.show(ctx=c, label=l, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
ctxs = [b.show(ctx=c, label='pred', **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))]
display_df(pd.DataFrame(ctxs))
return ctxs
#export
@typedispatch
def show_results(x: TensorText, y, samples, outs, ctxs=None, max_n=10, trunc_at=150, **kwargs):
if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))
samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)
ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs)
display_df(pd.DataFrame(ctxs))
return ctxs
#export
@typedispatch
def plot_top_losses(x: TensorText, y:TensorCategory, samples, outs, raws, losses, trunc_at=150, **kwargs):
rows = get_empty_df(len(samples))
samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)
for i,l in enumerate(['input', 'target']):
rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(samples.itemgot(i),rows)]
outs = L(o + (TitledFloat(r.max().item()), TitledFloat(l.item())) for o,r,l in zip(outs, raws, losses))
for i,l in enumerate(['predicted', 'probability', 'loss']):
rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(outs.itemgot(i),rows)]
display_df(pd.DataFrame(rows))
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
from IPython.display import Image
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
model_builder = keras.applications.xception.Xception
img_size = (299, 299)
preprocess_input = keras.applications.xception.preprocess_input
decode_predictions = keras.applications.xception.decode_predictions
last_conv_layer_name = "block14_sepconv2_act"
classifier_layer_names = [
"avg_pool",
"predictions",
]
img_path = './dog.jpeg'
display(Image(img_path))
def get_img_array(img_path, size):
img = keras.preprocessing.image.load_img(img_path, target_size=size)
array = keras.preprocessing.image.img_to_array(img)
array = np.expand_dims(array, axis=0)
return array
def make_gradcam_heatmap(
img_array, model, last_conv_layer_name, classifier_layer_names
):
last_conv_layer = model.get_layer(last_conv_layer_name)
last_conv_layer_model = keras.Model(model.inputs, last_conv_layer.output)
classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:])
x = classifier_input
for layer_name in classifier_layer_names:
x = model.get_layer(layer_name)(x)
classifier_model = keras.Model(classifier_input, x)
with tf.GradientTape() as tape:
last_conv_layer_output = last_conv_layer_model(img_array)
tape.watch(last_conv_layer_output)
preds = classifier_model(last_conv_layer_output)
top_pred_index = tf.argmax(preds[0])
top_class_channel = preds[:, top_pred_index]
grads = tape.gradient(top_class_channel, last_conv_layer_output)
pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
last_conv_layer_output = last_conv_layer_output.numpy()[0]
pooled_grads = pooled_grads.numpy()
for i in range(pooled_grads.shape[-1]):
last_conv_layer_output[:, :, i] *= pooled_grads[i]
heatmap = np.mean(last_conv_layer_output, axis=-1)
heatmap = np.maximum(heatmap, 0) / np.max(heatmap)
return heatmap
img_array = preprocess_input(get_img_array(img_path, size=img_size))
model = model_builder(weights="imagenet")
preds = model.predict(img_array)
heatmap = make_gradcam_heatmap(
img_array, model, last_conv_layer_name, classifier_layer_names
)
plt.imshow(heatmap)
plt.title("Predicted: {}".format(decode_predictions(preds, top=1)[0][0][1].upper().replace('_', ' ')))
plt.show()
img = keras.preprocessing.image.load_img(img_path)
img = keras.preprocessing.image.img_to_array(img)
heatmap = np.uint8(255 * heatmap)
jet = cm.get_cmap("jet")
jet_colors = jet(np.arange(256))[:, :3]
jet_heatmap = jet_colors[heatmap]
jet_heatmap = keras.preprocessing.image.array_to_img(jet_heatmap)
jet_heatmap = jet_heatmap.resize((img.shape[1], img.shape[0]))
jet_heatmap = keras.preprocessing.image.img_to_array(jet_heatmap)
superimposed_img = jet_heatmap * 0.6 + img
superimposed_img = keras.preprocessing.image.array_to_img(superimposed_img)
save_path = "." + img_path.split('.')[-2] + '-cam.jpg'
superimposed_img.save(save_path)
display(Image(save_path))
```
# References
[1] [Grad-CAM class activation visualization](https://keras.io/examples/vision/grad_cam/)
[2] [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/abs/1610.02391)
**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/booleans-and-conditionals).**
---
In this exercise, you'll put to work what you have learned about booleans and conditionals.
To get started, **run the setup code below** before writing your own code (and if you leave this notebook and come back later, don't forget to run the setup code again).
```
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex3 import *
print('Setup complete.')
```
# 1.
Many programming languages have [`sign`](https://en.wikipedia.org/wiki/Sign_function) available as a built-in function. Python doesn't, but we can define our own!
In the cell below, define a function called `sign` which takes a numerical argument and returns -1 if it's negative, 1 if it's positive, and 0 if it's 0.
```
# Your code goes here. Define a function called 'sign'
def sign(num):
    if num > 0:
        return 1
    elif num < 0:
        return -1
    else:
        return 0
# Check your answer
q1.check()
#q1.solution()
```
# 2.
We've decided to add "logging" to our `to_smash` function from the previous exercise.
```
def to_smash(total_candies):
"""Return the number of leftover candies that must be smashed after distributing
the given number of candies evenly between 3 friends.
>>> to_smash(91)
1
"""
print("Splitting", total_candies, "candies")
return total_candies % 3
to_smash(91)
```
What happens if we call it with `total_candies = 1`?
```
to_smash(1)
```
That isn't great grammar!
Modify the definition in the cell below to correct the grammar of our print statement. (If there's only one candy, we should use the singular "candy" instead of the plural "candies")
```
def to_smash(total_candies):
"""Return the number of leftover candies that must be smashed after distributing
the given number of candies evenly between 3 friends.
>>> to_smash(91)
1
"""
if total_candies == 1:
print("Splitting", total_candies, "candy")
else:
print("Splitting", total_candies, "candies")
return total_candies % 3
to_smash(91)
to_smash(1)
```
To get credit for completing this problem, and to see the official answer, run the code cell below.
```
# Check your answer (Run this code cell to receive credit!)
q2.solution()
```
# 3. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>
In the tutorial, we talked about deciding whether we're prepared for the weather. I said that I'm safe from today's weather if...
- I have an umbrella...
- or if the rain isn't too heavy and I have a hood...
- otherwise, I'm still fine unless it's raining *and* it's a workday
The function below uses our first attempt at turning this logic into a Python expression. I claimed that there was a bug in that code. Can you find it?
To prove that `prepared_for_weather` is buggy, come up with a set of inputs where either:
- the function returns `False` (but should have returned `True`), or
- the function returns `True` (but should have returned `False`).
To get credit for completing this question, your code should return a <font color='#33cc33'>Correct</font> result.
```
def prepared_for_weather(have_umbrella, rain_level, have_hood, is_workday):
# Don't change this code. Our goal is just to find the bug, not fix it!
return have_umbrella or rain_level < 5 and have_hood or not rain_level > 0 and is_workday
# Change the values of these inputs so they represent a case where prepared_for_weather
# returns the wrong answer.
have_umbrella = False
rain_level = 5.5
have_hood = True
is_workday = False
# Check what the function returns given the current values of the variables above
actual = prepared_for_weather(have_umbrella, rain_level, have_hood, is_workday)
print(actual)
# Check your answer
q3.check()
#q3.hint()
#q3.solution()
```
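For reference, the bug hinges on Python's operator precedence: `and` binds more tightly than `or`, so the expression groups differently from the English description. The cell below (not the official solution) contrasts the original grouping with a parenthesized reading, using the counterexample inputs above:

```python
have_umbrella, rain_level, have_hood, is_workday = False, 5.5, True, False

# The original expression groups as:
# have_umbrella or (rain_level < 5 and have_hood) or ((not rain_level > 0) and is_workday)
buggy = have_umbrella or rain_level < 5 and have_hood or not rain_level > 0 and is_workday

# Intended reading: "otherwise I'm still fine unless it's raining AND it's a workday"
fixed = have_umbrella or (rain_level < 5 and have_hood) or not (rain_level > 0 and is_workday)

print(buggy, fixed)  # False True
```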
# 4.
The function `is_negative` below is implemented correctly - it returns True if the given number is negative and False otherwise.
However, it's more verbose than it needs to be. We can actually reduce the number of lines of code in this function by *75%* while keeping the same behaviour.
See if you can come up with an equivalent body that uses just **one line** of code, and put it in the function `concise_is_negative`. (HINT: you don't even need Python's ternary syntax)
```
def is_negative(number):
if number < 0:
return True
else:
return False
def concise_is_negative(number):
    return number < 0
# Check your answer
q4.check()
#q4.hint()
#q4.solution()
```
# 5a.
The boolean variables `ketchup`, `mustard` and `onion` represent whether a customer wants a particular topping on their hot dog. We want to implement a number of boolean functions that correspond to some yes-or-no questions about the customer's order. For example:
```
def onionless(ketchup, mustard, onion):
"""Return whether the customer doesn't want onions.
"""
return not onion
def wants_all_toppings(ketchup, mustard, onion):
"""Return whether the customer wants "the works" (all 3 toppings)
"""
return all([ketchup, mustard, onion])
# Check your answer
q5.a.check()
#q5.a.hint()
#q5.a.solution()
```
# 5b.
For the next function, fill in the body to match the English description in the docstring.
```
def wants_plain_hotdog(ketchup, mustard, onion):
"""Return whether the customer wants a plain hot dog with no toppings.
"""
return not any([ketchup, mustard, onion])
# Check your answer
q5.b.check()
#q5.b.hint()
#q5.b.solution()
```
# 5c.
You know what to do: for the next function, fill in the body to match the English description in the docstring.
```
def exactly_one_sauce(ketchup, mustard, onion):
"""Return whether the customer wants either ketchup or mustard, but not both.
(You may be familiar with this operation under the name "exclusive or")
"""
    return (ketchup and not mustard) or (mustard and not ketchup)
# Check your answer
q5.c.check()
#q5.c.hint()
#q5.c.solution()
```
# 6. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>
We’ve seen that calling `bool()` on an integer returns `False` if it’s equal to 0 and `True` otherwise. What happens if we call `int()` on a bool? Try it out in the notebook cell below.
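A quick demonstration cell (just the behaviour described above):

```python
# bool -> int: True maps to 1, False maps to 0
print(int(True))    # 1
print(int(False))   # 0

# Because bools behave as ints in arithmetic, sum() counts the True values
print(True + True + False)       # 2
print(sum([True, False, True]))  # 2
```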
Can you take advantage of this to write a succinct function that corresponds to the English sentence "does the customer want exactly one topping?"?
```
def exactly_one_topping(ketchup, mustard, onion):
"""Return whether the customer wants exactly one of the three available toppings
on their hot dog.
"""
return sum([ketchup, mustard, onion]) == 1
# Check your answer
q6.check()
#q6.hint()
#q6.solution()
```
# 7. <span title="A bit spicy" style="color: darkgreen ">🌶️</span> (Optional)
In this problem we'll be working with a simplified version of [blackjack](https://en.wikipedia.org/wiki/Blackjack) (aka twenty-one). In this version there is one player (who you'll control) and a dealer. Play proceeds as follows:
- The player is dealt two face-up cards. The dealer is dealt one face-up card.
- The player may ask to be dealt another card ('hit') as many times as they wish. If the sum of their cards exceeds 21, they lose the round immediately.
- The dealer then deals additional cards to himself until either:
- the sum of the dealer's cards exceeds 21, in which case the player wins the round
- the sum of the dealer's cards is greater than or equal to 17. If the player's total is greater than the dealer's, the player wins. Otherwise, the dealer wins (even in case of a tie).
When calculating the sum of cards, Jack, Queen, and King count for 10. Aces can count as 1 or 11 (when referring to a player's "total" above, we mean the largest total that can be made without exceeding 21. So e.g. A+8 = 19, A+8+8 = 17)
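The ace rule above can be sketched in code; this is only an illustration of the rule (the exercise itself does not ask for it): count every ace as 11 first, then demote aces to 1 while the total exceeds 21.

```python
def hand_total(cards):
    """Largest total not exceeding 21 where possible.
    cards: list with face cards as the strings 'J', 'Q', 'K',
    aces as 'A', and number cards as ints."""
    total = sum(10 if c in ('J', 'Q', 'K') else 11 if c == 'A' else c
                for c in cards)
    high_aces = sum(1 for c in cards if c == 'A')
    while total > 21 and high_aces > 0:
        total -= 10   # count one ace as 1 instead of 11
        high_aces -= 1
    return total

print(hand_total(['A', 8]))     # 19
print(hand_total(['A', 8, 8]))  # 17
```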
For this problem, you'll write a function representing the player's decision-making strategy in this game. We've provided a very unintelligent implementation below:
```
def should_hit(dealer_total, player_total, player_low_aces, player_high_aces):
"""Return True if the player should hit (request another card) given the current game
state, or False if the player should stay.
When calculating a hand's total value, we count aces as "high" (with value 11) if doing so
doesn't bring the total above 21, otherwise we count them as low (with value 1).
For example, if the player's hand is {A, A, A, 7}, we will count it as 11 + 1 + 1 + 7,
and therefore set player_total=20, player_low_aces=2, player_high_aces=1.
"""
    return False
```
This very conservative agent *always* sticks with the hand of two cards that they're dealt.
We'll be simulating games between your player agent and our own dealer agent by calling your function.
Try running the function below to see an example of a simulated game:
```
q7.simulate_one_game()
```
The real test of your agent's mettle is their average win rate over many games. Try calling the function below to simulate 50000 games of blackjack (it may take a couple seconds):
```
q7.simulate(n_games=50000)
```
Our dumb agent that completely ignores the game state still manages to win shockingly often!
Try adding some more smarts to the `should_hit` function and see how it affects the results.
```
def should_hit(dealer_total, player_total, player_low_aces, player_high_aces):
"""Return True if the player should hit (request another card) given the current game
state, or False if the player should stay.
When calculating a hand's total value, we count aces as "high" (with value 11) if doing so
doesn't bring the total above 21, otherwise we count them as low (with value 1).
For example, if the player's hand is {A, A, A, 7}, we will count it as 11 + 1 + 1 + 7,
and therefore set player_total=20, player_low_aces=2, player_high_aces=1.
"""
return False
q7.simulate(n_games=50000)
```
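One hypothetical direction for "more smarts" (the thresholds and function name here are our own, not the course solution): hit on low totals, stay willing to hit while holding a high ace (it can still be demoted to 1), and otherwise stand around the classic 17 threshold.

```python
def should_hit_smarter(dealer_total, player_total, player_low_aces, player_high_aces):
    """A sketch of a threshold-based strategy; plug the body into should_hit
    above and re-run q7.simulate to compare win rates."""
    if player_total <= 11:
        return True   # one more card can never bust a total of 11 or less
    if player_high_aces > 0 and player_total <= 17:
        return True   # a high ace can absorb a bust by dropping to 1
    return player_total < 17 and dealer_total >= 7

print(should_hit_smarter(10, 5, 0, 0))   # True
print(should_hit_smarter(10, 20, 0, 0))  # False
```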
# Keep Going
Learn about **[lists and tuples](https://www.kaggle.com/colinmorris/lists)** to handle multiple items of data in a systematic way.
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161283) to chat with other Learners.*
<img align="right" src="images/tf.png" width="128"/>
<img align="right" src="images/etcbc.png" width="128"/>
<img align="right" src="images/syrnt.png" width="128"/>
<img align="right" src="images/peshitta.png" width="128"/>
# Use lectionaries in the Peshitta (OT and NT)
This notebook shows just one way to use the Syriac Lectionary data by Geert Jan Veldman
together with the Peshitta texts, OT and NT.
It has been used in the Syriac Bootcamp at the ETCBC, VU Amsterdam, on 2019-01-18.
## Provenance
The lectionary data can be downloaded from the
[DANS archive](https://dans.knaw.nl/en/front-page?set_language=en)
through this DOI:
[10.17026/dans-26t-hhv7](https://doi.org/10.17026/dans-26t-hhv7).
The Peshitta (OT) and (NT) text sources in text-fabric format are on GitHub:
* OT: [etcbc/peshitta](https://github.com/ETCBC/peshitta)
* NT: [etcbc/syrnt](https://github.com/ETCBC/syrnt)
The program that generated the text-fabric features linking the lectionaries with the text is in
a Jupyter notebook:
* [makeLectio](https://nbviewer.jupyter.org/github/etcbc/linksyr/blob/master/programs/lectionaries/makeLectio.ipynb)
## Run it yourself!
Make sure you have installed
* Python (3.6.3 or higher)
* Jupyter
```pip3 install jupyter```
* Text-Fabric
```pip3 install text-fabric```
If you have already installed text-fabric before, make sure to do
```pip3 install --upgrade text-fabric```
because Text-Fabric is under active development.
```
%load_ext autoreload
%autoreload 2
import os
import re
from tf.app import use
```
# Context
We will be working with two TF data sources,
* the `peshitta`, (OT Peshitta) which name we store in variable `P`
* the `syrnt`, (NT Peshitta) which name we store in variable `S`
They both contain Syriac text and transcriptions, but the SyrNT has linguistic annotations and lexemes, while the
Peshitta (OT) lacks them.
```
P = 'peshitta'
S = 'syrnt'
A = {P: None, S: None}
```
# Text-Fabric browser
Let's first look at the data in your own browser.
What you need to do is to open a command prompt.
If you do not know what that is: on Windows it is the program `cmd.exe`, on the Mac it is the app called `Terminal`,
and on Linux you know what it is.
You can use it from any directory.
If one of the commands below does not work, you have installed things differently than I assume here, or the installation was not successful.
For more information, consult
[Install](https://annotation.github.io/text-fabric/tf/about/install.html) and/or
[FAQ](https://annotation.github.io/text-fabric/tf/about/faq.html)
Start the TF browser as follows:
### Old Testament
```
text-fabric peshitta -c --mod=etcbc/linksyr/data/tf/lectio/peshitta
```
### New Testament
Open a new command prompt and say there:
```
text-fabric syrnt -c --mod=etcbc/linksyr/data/tf/lectio/syrnt
```
### Example queries
In both cases, issue a query such as
```
verse taksa link
```
or a more refined one:
```
verse taksa link
word word_etcbc=LLJ>
```
You will see all verses that are associated with a lectionary that has a `taksa` and a `link` value.
After playing around with the browsing interface on both testaments, return to this notebook.
We are going to load both texts here in our program:
```
for volume in A:
A[volume] = use(volume+':clone', mod=f'etcbc/linksyr/data/tf/lectio/{volume}')
```
Above you can see that we have loaded the `peshitta` and `syrnt` data sources but also additional data from
* **etcbc/linksyr/data/tf/lectio/peshitta**
* **etcbc/linksyr/data/tf/lectio/syrnt**
From both additional sources we have loaded several features: `lectio`, `mark1`, `mark2`, `siglum`, `taksa`, `taksaTr`.
Every lectionary has a number. A lectionary is linked to several verses.
Here is what kind of information the features contain:
feature | description
--- | ---
**lectio** | comma separated list of numbers of lectionaries associated with this verse
**mark1** | comma separated list of words which mark the precise locations where the lectionaries start
**taksa** | newline separated list of liturgical events associated with the lectionaries (in Syriac)
**taksaTr** | same as **taksa**, but now in English
**siglum** | newline separated list of document references that specify the lectionary
**link** | newline separated list of links to the *sigla*
**mark2** | same as **mark1**, but the word is in a different language
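Given the shapes described in the table, the values split easily into Python lists. The feature values below are made up purely to illustrate the two separator conventions:

```python
# Hypothetical values, shaped like what F[volume].lectio.v(verseNode)
# and F[volume].taksa.v(verseNode) would return
lectio_value = "12,45,101"                 # comma separated numbers
taksa_value = "first event\nsecond event"  # newline separated entries

lectios = lectio_value.split(',')
taksas = taksa_value.split('\n')
print(lectios)  # ['12', '45', '101']
print(taksas)   # ['first event', 'second event']
```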
When you work with TF, you usually have handy variables called `F`, `L`, `T` ready with which you access all data in the text.
Since we use two TF resources in this program, we make a double set of these variables, and instead of just `F`, we'll say
`F[P]` for accessing the Peshitta (OT) and `F[S]` for accessing the SyrNT. Same pattern for `L` and `T`.
For the meaning of these variables, consult
* [F Features](https://annotation.github.io/text-fabric/tf/core/nodefeature.html)
* [L Locality](https://annotation.github.io/text-fabric/tf/core/locality.html)
* [T Text](https://annotation.github.io/text-fabric/tf/core/text.html)
```
Fs = {}
F = {}
T = {}
L = {}
for volume in A:
thisApi = A[volume].api
F[volume] = thisApi.F
Fs[volume] = thisApi.Fs
T[volume] = thisApi.T
L[volume] = thisApi.L
extraFeatures = '''
lectio
mark1 mark2
'''.strip().split()
```
# Liturgicalness
We measure the *liturgicalness* of a word by counting the number of lectionaries it is involved in.
As a first step, we collect for each word the set of lectionaries it is involved in.
In the Peshitta OT we use the word form, since we do not have lemmas.
The word form is in the feature `word`.
In the SyrNT we use the word lemma, which is in the feature `lexeme`.
We collect the information in the dictionary `liturgical`, which maps each word form onto the set of lectionaries it is involved in.
```
# this function can do the collection in either Testament
def getLiturgical(volume):
wordRep = 'word' if volume == P else 'lexeme'
mapping = {}
# we traverse all verse nodes
for verseNode in F[volume].otype.s('verse'):
# we retrieve the value of feature 'lectio' for that verse node
lectioStr = F[volume].lectio.v(verseNode)
if lectioStr:
# we split the lectio string into a set of individual lectio numbers
lectios = lectioStr.split(',')
# we descend into the words of the verse
for wordNode in L[volume].d(verseNode, otype='word'):
# we use either the feature 'word' or 'lexeme', depending on the volume
word = Fs[volume](wordRep).v(wordNode)
# if this is the first time we encounter the word,
# we add it to the mapping and give it a start value: the empty set
if word not in mapping:
mapping[word] = set()
# in any case, we add the new found lectio numbers to the existing set for this word
mapping[word] |= set(lectios)
# we report how many words we have collected
print(f'Found {len(mapping)} words in {volume}')
# we return the mapping as result
return mapping
```
Before we call the function above for Peshitta and SyrNT, we make a place where the results can land:
```
liturgical = {}
for volume in A:
liturgical[volume] = getLiturgical(volume)
```
Remember that we count word occurrences in the Peshitta, and lemmas in the SyrNT, so we get much smaller numbers for the NT.
Let's show some mapping members for each volume:
```
for volume in liturgical:
print(f'IN {volume}:')
for (word, lectios) in list(liturgical[volume].items())[0:10]:
print(f'\t{word}')
print(f'\t\t{",".join(sorted(lectios)[0:5])} ...')
```
We are not done yet, because we are not interested in the actual lectionaries, but in their number.
So we make a new mapping `liturgicalNess`, which maps each word to the number of lectionaries it is associated with.
```
liturgicalNess = {}
for volume in liturgical:
for word in liturgical[volume]:
nLectio = len(liturgical[volume][word])
liturgicalNess.setdefault(volume, {})[word] = nLectio
```
Let's print the top twenty of each volume:
```
for volume in liturgicalNess:
print(f'IN {volume}:')
for (word, lNess) in sorted(
liturgicalNess[volume].items(),
key=lambda x: (-x[1], x[0]),
)[0:20]:
print(f'\t{lNess:>5} {word}')
```
# Frequency lists
Here is how to get a frequency list of a volume.
We can produce the frequency of any feature, but let us do it here for words in the Peshitta (OT) and
lexemes in the SyrNT.
There is a hidden snag: in the SyrNT we do not have only word nodes, but also lexeme nodes.
When we count frequencies, we have to take care to count word nodes only.
The function [freqList](https://annotation.github.io/text-fabric/tf/core/nodefeature.html#tf.core.nodefeature.NodeFeature.freqList)
can do that.
Let's use it to produce the top-twenty list of frequent words in both sources, and also the number of hapaxes.
```
# first we define a function to generate the table per volume
def showFreqList(volume):
print(f'IN {volume}:')
wordRep = 'word' if volume == P else 'lexeme'
freqs = Fs[volume](wordRep).freqList(nodeTypes={'word'})
# now the members of freqs are pairs (word, freqency)
# we print the top frequent words
for (word, freq) in freqs[0:10]:
print(f'\t{freq:>5} x {word}')
# we collect all hapaxes: the items with frequency 1
hapaxes = [word for (word, freq) in freqs if freq == 1]
print(f'{len(hapaxes)} hapaxes')
for hapax in hapaxes[100:105]:
print(f'\t{hapax}')
# then we execute it on both volumes
for volume in A:
showFreqList(volume)
```
# Queries
First a simple query with all verses with a lectionary (with taksa and link)
```
query = '''
verse taksa link
'''
```
We run them in both the Old and the New Testament
```
results = {}
for volume in A:
results[volume] = A[volume].search(query)
```
Let's show some results from the New Testament:
```
A[S].show(results[S], start=1, end=1)
```
Let's show some results from the Old Testament:
```
A[P].show(results[P], start=1, end=1)
```
# Word study: CJN>
We want to study a word, in both volumes.
First we show a verse where the word occurs: James 3:18.
It is in the New Testament.
The
[`T.nodeFromSection()`](https://annotation.github.io/text-fabric/tf/core/text.html#tf.core.text.Text.nodeFromSection)
function can find the node (bar code) for a verse specified by a passage reference.
```
# we have to pass the section reference as a triple:
section = ('James', 3, 18)
# we retrieve the verse node
verseNode = T[S].nodeFromSection(('James', 3, 18))
# in case you're curious: here is the node, but it should not be meaningful to you,
# only to the program
print(verseNode)
```
Finally we show the corresponding verse by means of the function
[pretty()](https://annotation.github.io/text-fabric/tf/advanced/display.html#tf.advanced.display.pretty)
```
A[S].pretty(verseNode)
```
Now we use a query to find this word in the New Testament
```
queryS = '''
word lexeme_etcbc=CJN>
'''
resultsS = A[S].search(queryS)
```
We show them all:
```
A[S].show(resultsS)
```
For the OT, we do not have the lexeme value, so we try looking for word forms that *match* `CJN>` rather than those that are exactly equal to it.
Note that we have replaced '=' by '~' in the query below
```
queryP = '''
word word_etcbc~CJN>
'''
resultsP = A[P].search(queryP)
# We show only 20 results
A[P].show(resultsP, end=20)
```
Here ends the bootcamp session.
Interested? Send [me](mailto:dirk.roorda@dans.knaw.nl) a note.
# Setup
```
import sys
import os
import re
import collections
import itertools
import bcolz
import pickle
sys.path.append('../../lib')
sys.path.append('../')
import numpy as np
import pandas as pd
import gc
import random
import smart_open
import h5py
import csv
import json
import functools
import time
import string
import datetime as dt
from tqdm import tqdm_notebook as tqdm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import global_utils
random_state_number = 967898
import tensorflow as tf
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
get_available_gpus()
%pylab
%matplotlib inline
%load_ext line_profiler
%load_ext memory_profiler
%load_ext autoreload
pd.options.mode.chained_assignment = None
pd.options.display.max_columns = 999
color = sns.color_palette()
```
# Data
```
store = pd.HDFStore('../../data_prep/processed/stage1/data_frames.h5')
train_df = store['train_df']
test_df = store['test_df']
display(train_df.head())
display(test_df.head())
corpus_vocab_list, corpus_wordidx = None, None
with open('../../data_prep/processed/stage1/vocab_words_wordidx.pkl', 'rb') as f:
(corpus_vocab_list, corpus_wordidx) = pickle.load(f)
print(len(corpus_vocab_list), len(corpus_wordidx))
```
# Data Prep
To control the vocabulary, pass in an updated `corpus_wordidx`.
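For example, one way to shrink the vocabulary before generating datasets would be to remap every word outside a keep-list to the `<UNK>` index. This is only a sketch of the idea; `restrict_vocab` is our own helper, and it assumes `corpus_wordidx` maps each word to an integer index and contains an `"<UNK>"` entry, as loaded above.

```python
def restrict_vocab(wordidx, keep_words):
    """Return a copy of wordidx where every word not in keep_words points at
    the <UNK> index, effectively shrinking the usable vocabulary."""
    unk = wordidx["<UNK>"]
    return {w: (i if w in keep_words or w == "<UNK>" else unk)
            for w, i in wordidx.items()}

# e.g. (hypothetical usage):
# small_wordidx = restrict_vocab(corpus_wordidx, top_n_words)
# gen_data = global_utils.GenerateDataset(x_train_df, small_wordidx)
```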
```
from sklearn.model_selection import train_test_split
x_train_df, x_val_df = train_test_split(train_df,
test_size=0.10, random_state=random_state_number,
stratify=train_df.Class)
print(x_train_df.shape)
print(x_val_df.shape)
from tensorflow.contrib.keras.python.keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
vocab_size=len(corpus_vocab_list)
```
## T:sent_words
### generate data
```
custom_unit_dict = {
"gene_unit" : "words",
"variation_unit" : "words",
# text transformed to sentences attribute
"doc_unit" : "words",
"doc_form" : "sentences",
"divide_document": "multiple_unit"
}
%autoreload
import global_utils
gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx)
x_train_21_T, x_train_21_G, x_train_21_V, x_train_21_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Train data")
print(np.array(x_train_21_T).shape, x_train_21_T[0])
print(np.array(x_train_21_G).shape, x_train_21_G[0])
print(np.array(x_train_21_V).shape, x_train_21_V[0])
print(np.array(x_train_21_C).shape, x_train_21_C[0])
gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx)
x_val_21_T, x_val_21_G, x_val_21_V, x_val_21_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Val data")
print("text",np.array(x_val_21_T).shape)
print("gene",np.array(x_val_21_G).shape, x_val_21_G[0])
print("variation",np.array(x_val_21_V).shape, x_val_21_V[0])
print("classes",np.array(x_val_21_C).shape, x_val_21_C[0])
```
### format data
```
word_unknown_tag_idx = corpus_wordidx["<UNK>"]
char_unknown_tag_idx = global_utils.char_unknown_tag_idx
MAX_SENT_LEN = 60
x_train_21_T = pad_sequences(x_train_21_T, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx,
padding="post",truncating="post")
x_val_21_T = pad_sequences(x_val_21_T, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx,
padding="post",truncating="post")
print(x_train_21_T.shape, x_val_21_T.shape)
```
Keras `np_utils.to_categorical` expects zero-indexed categorical variables
https://github.com/fchollet/keras/issues/570
```
x_train_21_C = np.array(x_train_21_C) - 1
x_val_21_C = np.array(x_val_21_C) - 1
x_train_21_C = np_utils.to_categorical(np.array(x_train_21_C), 9)
x_val_21_C = np_utils.to_categorical(np.array(x_val_21_C), 9)
print(x_train_21_C.shape, x_val_21_C.shape)
```
## T:text_words
### generate data
```
custom_unit_dict = {
"gene_unit" : "words",
"variation_unit" : "words",
# text transformed to sentences attribute
"doc_unit" : "words",
"doc_form" : "text",
"divide_document": "single_unit"
}
%autoreload
import global_utils
gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx)
x_train_22_T, x_train_22_G, x_train_22_V, x_train_22_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Train data")
print("text",np.array(x_train_22_T).shape)
print("gene",np.array(x_train_22_G).shape, x_train_22_G[0])
print("variation",np.array(x_train_22_V).shape, x_train_22_V[0])
print("classes",np.array(x_train_22_C).shape, x_train_22_C[0])
gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx)
x_val_22_T, x_val_22_G, x_val_22_V, x_val_22_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Val data")
print("text",np.array(x_val_22_T).shape)
print("gene",np.array(x_val_22_G).shape, x_val_22_G[0])
print("variation",np.array(x_val_22_V).shape, x_val_22_V[0])
print("classes",np.array(x_val_22_C).shape, x_val_22_C[0])
```
### format data
```
word_unknown_tag_idx = corpus_wordidx["<UNK>"]
char_unknown_tag_idx = global_utils.char_unknown_tag_idx
MAX_TEXT_LEN = 5000
x_train_22_T = pad_sequences(x_train_22_T, maxlen=MAX_TEXT_LEN, value=word_unknown_tag_idx,
padding="post",truncating="post")
x_val_22_T = pad_sequences(x_val_22_T, maxlen=MAX_TEXT_LEN, value=word_unknown_tag_idx,
padding="post",truncating="post")
print(x_train_22_T.shape, x_val_22_T.shape)
MAX_GENE_LEN = 1
MAX_VAR_LEN = 4
x_train_22_G = pad_sequences(x_train_22_G, maxlen=MAX_GENE_LEN, value=word_unknown_tag_idx)
x_train_22_V = pad_sequences(x_train_22_V, maxlen=MAX_VAR_LEN, value=word_unknown_tag_idx)
x_val_22_G = pad_sequences(x_val_22_G, maxlen=MAX_GENE_LEN, value=word_unknown_tag_idx)
x_val_22_V = pad_sequences(x_val_22_V, maxlen=MAX_VAR_LEN, value=word_unknown_tag_idx)
print(x_train_22_G.shape, x_train_22_V.shape)
print(x_val_22_G.shape, x_val_22_V.shape)
```
Keras `np_utils.to_categorical` expects zero-indexed categorical variables
https://github.com/fchollet/keras/issues/570
```
x_train_22_C = np.array(x_train_22_C) - 1
x_val_22_C = np.array(x_val_22_C) - 1
x_train_22_C = np_utils.to_categorical(np.array(x_train_22_C), 9)
x_val_22_C = np_utils.to_categorical(np.array(x_val_22_C), 9)
print(x_train_22_C.shape, x_val_22_C.shape)
```
### test Data setup
```
gen_data = global_utils.GenerateDataset(test_df, corpus_wordidx)
x_test_22_T, x_test_22_G, x_test_22_V, _ = gen_data.generate_data(custom_unit_dict,
has_class=False,
add_start_end_tag=True)
del gen_data
print("Test data")
print("text",np.array(x_test_22_T).shape)
print("gene",np.array(x_test_22_G).shape, x_test_22_G[0])
print("variation",np.array(x_test_22_V).shape, x_test_22_V[0])
x_test_22_T = pad_sequences(x_test_22_T, maxlen=MAX_TEXT_LEN, value=word_unknown_tag_idx,
padding="post",truncating="post")
print(x_test_22_T.shape)
MAX_GENE_LEN = 1
MAX_VAR_LEN = 4
x_test_22_G = pad_sequences(x_test_22_G, maxlen=MAX_GENE_LEN, value=word_unknown_tag_idx)
x_test_22_V = pad_sequences(x_test_22_V, maxlen=MAX_VAR_LEN, value=word_unknown_tag_idx)
print(x_test_22_G.shape, x_test_22_V.shape)
```
## T:text_chars
### generate data
```
custom_unit_dict = {
"gene_unit" : "raw_chars",
"variation_unit" : "raw_chars",
# text transformed to sentences attribute
"doc_unit" : "raw_chars",
"doc_form" : "text",
"divide_document" : "multiple_unit"
}
%autoreload
import global_utils
gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx)
x_train_33_T, x_train_33_G, x_train_33_V, x_train_33_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Train data")
print("text",np.array(x_train_33_T).shape, x_train_33_T[0])
print("gene",np.array(x_train_33_G).shape, x_train_33_G[0])
print("variation",np.array(x_train_33_V).shape, x_train_33_V[0])
print("classes",np.array(x_train_33_C).shape, x_train_33_C[0])
%autoreload
import global_utils
gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx)
x_val_33_T, x_val_33_G, x_val_33_V, x_val_33_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Val data")
print("text",np.array(x_val_33_T).shape, x_val_33_T[98])
print("gene",np.array(x_val_33_G).shape, x_val_33_G[0])
print("variation",np.array(x_val_33_V).shape, x_val_33_V[0])
print("classes",np.array(x_val_33_C).shape, x_val_33_C[0])
```
### format data
```
word_unknown_tag_idx = corpus_wordidx["<UNK>"]
char_unknown_tag_idx = global_utils.char_unknown_tag_idx
MAX_CHAR_IN_SENT_LEN = 150
x_train_33_T = pad_sequences(x_train_33_T, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx,
padding="post",truncating="post")
x_val_33_T = pad_sequences(x_val_33_T, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx,
padding="post",truncating="post")
print(x_train_33_T.shape, x_val_33_T.shape)
x_train_33_G = pad_sequences(x_train_33_G, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx)
x_train_33_V = pad_sequences(x_train_33_V, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx)
x_val_33_G = pad_sequences(x_val_33_G, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx)
x_val_33_V = pad_sequences(x_val_33_V, maxlen=MAX_CHAR_IN_SENT_LEN, value=char_unknown_tag_idx)
print(x_train_33_G.shape, x_train_33_V.shape)
print(x_val_33_G.shape, x_val_33_V.shape)
```
Keras `np_utils.to_categorical` expects zero-indexed categorical variables
https://github.com/fchollet/keras/issues/570
```
x_train_33_C = np.array(x_train_33_C) - 1
x_val_33_C = np.array(x_val_33_C) - 1
x_train_33_C = np_utils.to_categorical(np.array(x_train_33_C), 9)
x_val_33_C = np_utils.to_categorical(np.array(x_val_33_C), 9)
print(x_train_33_C.shape, x_val_33_C.shape)
```
## T:text_sent_words
### generate data
```
custom_unit_dict = {
"gene_unit" : "words",
"variation_unit" : "words",
# text transformed to sentences attribute
"doc_unit" : "word_list",
"doc_form" : "text",
"divide_document" : "single_unit"
}
%autoreload
import global_utils
gen_data = global_utils.GenerateDataset(x_train_df, corpus_wordidx)
x_train_34_T, x_train_34_G, x_train_34_V, x_train_34_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Train data")
print("text",np.array(x_train_34_T).shape, x_train_34_T[0][:1])
print("gene",np.array(x_train_34_G).shape, x_train_34_G[0])
print("variation",np.array(x_train_34_V).shape, x_train_34_V[0])
print("classes",np.array(x_train_34_C).shape, x_train_34_C[0])
%autoreload
import global_utils
gen_data = global_utils.GenerateDataset(x_val_df, corpus_wordidx)
x_val_34_T, x_val_34_G, x_val_34_V, x_val_34_C = gen_data.generate_data(custom_unit_dict,
has_class=True,
add_start_end_tag=True)
del gen_data
print("Val data")
print("text",np.array(x_val_34_T).shape, x_val_34_T[98][:1])
print("gene",np.array(x_val_34_G).shape, x_val_34_G[0])
print("variation",np.array(x_val_34_V).shape, x_val_34_V[0])
print("classes",np.array(x_val_34_C).shape, x_val_34_C[0])
```
### format data
```
word_unknown_tag_idx = corpus_wordidx["<UNK>"]
char_unknown_tag_idx = global_utils.char_unknown_tag_idx
MAX_DOC_LEN = 500 # no of sentences in a document
MAX_SENT_LEN = 80 # no of words in a sentence
for doc_i, doc in enumerate(x_train_34_T):
x_train_34_T[doc_i] = x_train_34_T[doc_i][:MAX_DOC_LEN]
# padding sentences
if len(x_train_34_T[doc_i]) < MAX_DOC_LEN:
for not_used_i in range(0,MAX_DOC_LEN - len(x_train_34_T[doc_i])):
x_train_34_T[doc_i].append([word_unknown_tag_idx]*MAX_SENT_LEN)
# padding words
x_train_34_T[doc_i] = pad_sequences(x_train_34_T[doc_i], maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx)
for doc_i, doc in enumerate(x_val_34_T):
x_val_34_T[doc_i] = x_val_34_T[doc_i][:MAX_DOC_LEN]
# padding sentences
if len(x_val_34_T[doc_i]) < MAX_DOC_LEN:
for not_used_i in range(0,MAX_DOC_LEN - len(x_val_34_T[doc_i])):
x_val_34_T[doc_i].append([word_unknown_tag_idx]*MAX_SENT_LEN)
# padding words
x_val_34_T[doc_i] = pad_sequences(x_val_34_T[doc_i], maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx)
x_train_34_T = np.array(x_train_34_T)
x_val_34_T = np.array(x_val_34_T)
print(x_val_34_T.shape, x_train_34_T.shape)
x_train_34_G = pad_sequences(x_train_34_G, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx)
x_train_34_V = pad_sequences(x_train_34_V, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx)
x_val_34_G = pad_sequences(x_val_34_G, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx)
x_val_34_V = pad_sequences(x_val_34_V, maxlen=MAX_SENT_LEN, value=word_unknown_tag_idx)
print(x_train_34_G.shape, x_train_34_V.shape)
print(x_val_34_G.shape, x_val_34_V.shape)
```
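For reference, the `pad_sequences` calls above (with `maxlen=MAX_SENT_LEN` and `value=word_unknown_tag_idx`) pad short sequences on the left and truncate long ones from the front, which are the Keras defaults; a minimal pure-Python sketch of that behavior:

```python
def pad_sequence(seq, maxlen, value):
    # mimics the keras pad_sequences defaults: 'pre' padding and 'pre' truncation
    if len(seq) >= maxlen:
        return seq[-maxlen:]  # keep the last maxlen tokens
    return [value] * (maxlen - len(seq)) + seq

print(pad_sequence([1, 2, 3], 5, 0))           # → [0, 0, 1, 2, 3]
print(pad_sequence([1, 2, 3, 4, 5, 6], 5, 0))  # → [2, 3, 4, 5, 6]
```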
Keras `np_utils.to_categorical` expects zero-indexed categorical variables, so we shift the class labels down by one:
https://github.com/fchollet/keras/issues/570
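In other words, a one-indexed class label c must become c-1 before encoding; a plain-Python sketch (hypothetical helper) of the one-hot vector `to_categorical` produces:

```python
def to_one_hot(label, num_classes):
    # label is zero-indexed, as np_utils.to_categorical expects
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

# class "3" (one-indexed) maps to index 2
print(to_one_hot(3 - 1, 9))  # → [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```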
```
x_train_34_C = np.array(x_train_34_C) - 1
x_val_34_C = np.array(x_val_34_C) - 1
x_train_34_C = np_utils.to_categorical(np.array(x_train_34_C), 9)
x_val_34_C = np_utils.to_categorical(np.array(x_val_34_C), 9)
print(x_train_34_C.shape, x_val_34_C.shape)
```
We need to form 3-dimensional target data for training the rationale model:
```
temp = (x_train_34_C.shape[0],1,x_train_34_C.shape[1])
x_train_34_C_sent = np.repeat(x_train_34_C.reshape(temp[0],temp[1],temp[2]), MAX_DOC_LEN, axis=1)
#sentence test targets
temp = (x_val_34_C.shape[0],1,x_val_34_C.shape[1])
x_val_34_C_sent = np.repeat(x_val_34_C.reshape(temp[0],temp[1],temp[2]), MAX_DOC_LEN, axis=1)
print(x_train_34_C_sent.shape, x_val_34_C_sent.shape)
```
## Embedding layer
### for words
```
WORD_EMB_SIZE = 200
%autoreload
import global_utils
ft_file_path = "/home/bicepjai/Projects/Deep-Survey-Text-Classification/data_prep/processed/stage1/pretrained_word_vectors/ft_sg_200d_50e.vec"
trained_embeddings = global_utils.get_embeddings_from_ft(ft_file_path, WORD_EMB_SIZE, corpus_vocab_list)
trained_embeddings.shape
```
### for characters
```
CHAR_EMB_SIZE = 64
char_embeddings = np.random.randn(global_utils.CHAR_ALPHABETS_LEN, CHAR_EMB_SIZE)
char_embeddings.shape
```
# Models
## prep
```
%autoreload
import tensorflow.contrib.keras as keras
import tensorflow as tf
from keras import backend as K
from keras.engine import Layer, InputSpec, InputLayer
from keras.models import Model, Sequential
from keras.layers import Dropout, Embedding, concatenate
from keras.layers import Conv1D, MaxPool1D, Conv2D, MaxPool2D, ZeroPadding1D, GlobalMaxPool1D
from keras.layers import Dense, Input, Flatten, BatchNormalization
from keras.layers import Concatenate, Dot, Merge, Multiply, RepeatVector
from keras.layers import Bidirectional, TimeDistributed
from keras.layers import SimpleRNN, LSTM, GRU, Lambda, Permute
from keras.layers.core import Reshape, Activation
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint,EarlyStopping,TensorBoard
from keras.constraints import maxnorm
from keras.regularizers import l2
```
## model_1: paper
```
text_seq_input = Input(shape=(MAX_SENT_LEN,), dtype='int32')
text_embedding = Embedding(vocab_size, WORD_EMB_SIZE, input_length=MAX_SENT_LEN,
weights=[trained_embeddings], trainable=True)(text_seq_input)
model_1 = Sequential([
Embedding(vocab_size, WORD_EMB_SIZE, weights=[trained_embeddings],
input_length=MAX_SENT_LEN, trainable=True),
LSTM(32),
Dense(9, activation='softmax')
])
```
#### training
```
model_1.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy'])
model_1.summary()
%rm -rf ./tb_graphs/*
tb_callback = keras.callbacks.TensorBoard(log_dir='./tb_graphs', histogram_freq=0, write_graph=True, write_images=True)
checkpointer = ModelCheckpoint(filepath="model_1_weights.hdf5",
verbose=1,
monitor="val_categorical_accuracy",
save_best_only=True,
mode="max")
with tf.Session() as sess:
# model = keras.models.load_model('current_model.h5')
sess.run(tf.global_variables_initializer())
try:
model_1.load_weights("model_1_weights.hdf5")
except IOError as ioe:
print("no checkpoints available !")
model_1.fit(x_train_21_T, x_train_21_C,
validation_data=(x_val_21_T, x_val_21_C),
epochs=5, batch_size=1024, shuffle=True,
callbacks=[tb_callback,checkpointer])
#model.save('current_sent_model.h5')
```
## model_2: with GRU
```
text_seq_input = Input(shape=(MAX_SENT_LEN,), dtype='int32')
text_embedding = Embedding(vocab_size, WORD_EMB_SIZE, input_length=MAX_SENT_LEN,
weights=[trained_embeddings], trainable=True)(text_seq_input)
model_2 = Sequential([
Embedding(vocab_size, WORD_EMB_SIZE, weights=[trained_embeddings],
input_length=MAX_SENT_LEN, trainable=True),
GRU(32),
Dense(9, activation='softmax')
])
```
#### training
```
model_2.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy'])
model_2.summary()
%rm -rf ./tb_graphs/*
tb_callback = keras.callbacks.TensorBoard(log_dir='./tb_graphs', histogram_freq=0, write_graph=True, write_images=True)
checkpointer = ModelCheckpoint(filepath="model_2_weights.hdf5",
verbose=1,
monitor="val_categorical_accuracy",
save_best_only=True,
mode="max")
with tf.Session() as sess:
# model = keras.models.load_model('current_model.h5')
sess.run(tf.global_variables_initializer())
try:
model_2.load_weights("model_2_weights.hdf5")
except IOError as ioe:
print("no checkpoints available !")
model_2.fit(x_train_21_T, x_train_21_C,
validation_data=(x_val_21_T, x_val_21_C),
epochs=5, batch_size=1024, shuffle=True,
callbacks=[tb_callback,checkpointer])
#model.save('current_sent_model.h5')
```
# Diagram Widget
The same _renderer_ that powers the [Diagram Document](./Diagram%20Document.ipynb) can be used as a computable _Jupyter Widget_, which offers even more power than the [Diagram Rich Display](./Diagram%20Rich%20Display.ipynb).
```
from ipywidgets import HBox, VBox, Textarea, jslink, jsdlink, FloatSlider, IntSlider, Checkbox, Text, SelectMultiple, Accordion
from lxml import etree
from traitlets import observe, link, dlink
from ipydrawio import Diagram
diagram = Diagram(layout=dict(min_height="80vh", flex="1"))
box = HBox([diagram])
box
```
## value
A `Diagram.source`'s `value` trait is the raw drawio XML. You can use one document for multiple diagrams.
> [graphviz2drawio](https://pypi.org/project/graphviz2drawio) is recommended for the **give me some drawio XML from my data right now** use case.
```
Diagram(source=diagram.source, layout=dict(min_height="400px"))
diagram.source.value = '''<mxfile host="127.0.0.1" modified="2021-01-27T15:56:33.612Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" etag="u04aDhBnb7c9tLWsiHn9" version="13.6.10">
<diagram id="x" name="Page-1">
<mxGraphModel dx="1164" dy="293" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="2" value="" style="edgeStyle=entityRelationEdgeStyle;startArrow=none;endArrow=none;segment=10;curved=1;" parent="1" source="4" target="5" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="3" value="" style="edgeStyle=entityRelationEdgeStyle;startArrow=none;endArrow=none;segment=10;curved=1;" parent="1" source="4" target="6" edge="1">
<mxGeometry relative="1" as="geometry">
<mxPoint x="260" y="160" as="sourcePoint"/>
</mxGeometry>
</mxCell>
<UserObject label="The Big Idea" treeRoot="1" id="4">
<mxCell style="ellipse;whiteSpace=wrap;html=1;align=center;collapsible=0;container=1;recursiveResize=0;" parent="1" vertex="1">
<mxGeometry x="300" y="140" width="100" height="40" as="geometry"/>
</mxCell>
</UserObject>
<mxCell id="5" value="Branch" style="whiteSpace=wrap;html=1;shape=partialRectangle;top=0;left=0;bottom=1;right=0;points=[[0,1],[1,1]];strokeColor=#000000;fillColor=none;align=center;verticalAlign=bottom;routingCenterY=0.5;snapToPoint=1;collapsible=0;container=1;recursiveResize=0;autosize=1;" parent="1" vertex="1">
<mxGeometry x="460" y="120" width="80" height="20" as="geometry"/>
</mxCell>
<mxCell id="6" value="Sub Topic" style="whiteSpace=wrap;html=1;rounded=1;arcSize=50;align=center;verticalAlign=middle;collapsible=0;container=1;recursiveResize=0;strokeWidth=1;autosize=1;spacing=4;" parent="1" vertex="1">
<mxGeometry x="460" y="160" width="72" height="26" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>'''
value = Textarea(description="value", rows=20)
controls = Accordion([value])
controls.set_title(0, "value")
jslink((diagram.source, "value"), (value, "value"))
box.children = [controls, diagram]
```
There are a number of challenges in using this XML as an update protocol:
- includes hostname (ick!)
- includes etag
- stripping these out creates flicker when updating
At present, consider tools like jinja2, which work directly with the XML as text, or `lxml`, which can work at a higher level, e.g. with XPath.
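As a rough sketch of the stripping idea (helper name hypothetical; the standard library's `xml.etree.ElementTree` is used here, though `lxml` works the same way), the volatile attributes can be dropped before comparing or storing values:

```python
import xml.etree.ElementTree as ET

def strip_volatile_attrs(mxfile_xml, attrs=("host", "etag", "modified", "agent")):
    """Remove root attributes that change on every save, for stable diffs."""
    root = ET.fromstring(mxfile_xml)
    for attr in attrs:
        root.attrib.pop(attr, None)  # ignore attributes that are absent
    return ET.tostring(root, encoding="unicode")

xml = '<mxfile host="127.0.0.1" etag="abc" version="13.6.10"><diagram id="x"/></mxfile>'
print(strip_volatile_attrs(xml))
```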
> Stay tuned for better tools for working with this format with e.g. `networkx`
## Interactive state
A `Diagram` exposes a number of parts of both the content and interactive state of the editor.
```
zoom = FloatSlider(description="zoom", min=0.01)
scroll_x, scroll_y = [FloatSlider(description=f"scroll {x}", min=-1e5, max=1e5) for x in "xy"]
current_page = IntSlider(description="page")
jslink((diagram, "zoom"), (zoom, "value"))
jslink((diagram, "scroll_x"), (scroll_x, "value"))
jslink((diagram, "scroll_y"), (scroll_y, "value"))
jslink((diagram, "current_page"), (current_page, "value"))
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), value]
controls._titles = {"0": "ui", "1": "value"}
selected_cells = SelectMultiple(description="selected")
enable_selected = Checkbox(True, description="enable select")
def update_selected(*_):
if enable_selected.value:
diagram.selected_cells = [*selected_cells.value]
def update_selected_options(*_):
try:
with selected_cells.hold_trait_notifications():
selected_cells.options = [
cell.attrib["id"]
for cell in etree.fromstring(diagram.source.value).xpath("//mxCell")
if "id" in cell.attrib
]
selected_cells.value = diagram.selected_cells
except:
pass
selected_cells.observe(update_selected, "value")
diagram.source.observe(update_selected_options, "value")
diagram.observe(update_selected_options, "selected_cells")
update_selected_options()
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), VBox([enable_selected, selected_cells]), value]
controls._titles = {"0": "ui", "1": "selection", "2": "value"}
HBox([enable_selected, selected_cells])
```
## Page Information
`Diagrams` actually describe a "real thing", measured in inches.
```
page_format = {
k: IntSlider(description=k, value=v, min=0, max=1e5)
for k,v in diagram.page_format.items()
}
def update_format(*_):
diagram.page_format = {
k: v.value for k, v in page_format.items()
}
def update_sliders(*_):
for k, v in page_format.items():
v.value = diagram.page_format[k]
[v.observe(update_format, "value") for k, v in page_format.items()]
[diagram.observe(update_sliders, "page_format")]
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), VBox([enable_selected, selected_cells]), VBox([*page_format.values()]), value]
controls._titles = {"0": "ui", "1": "selection", "2": "page", "3": "value"}
```
## Grid
The styling of the on-screen grid is customizable. This typically _won't_ be included in exports to e.g. SVG.
```
grid_enabled = Checkbox(description="grid")
grid_size = FloatSlider(description="grid size")
grid_color = Text("#66666666", description="grid color")
jslink((diagram, "grid_enabled"), (grid_enabled, "value"))
jslink((diagram, "grid_size"), (grid_size, "value"))
jslink((diagram, "grid_color"), (grid_color, "value"))
controls.children = [VBox([zoom, scroll_x, scroll_y, current_page]), VBox([enable_selected, selected_cells]), VBox([*page_format.values()]), VBox([ grid_enabled, grid_size, grid_color]), value]
controls._titles = {"0": "ui", "1": "selection", "2": "page", "3":"grid", "4": "value"}
```
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title"><b>A Magic Square Solver</b></span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://mate.unipv.it/gualandi" property="cc:attributionName" rel="cc:attributionURL">Stefano Gualandi</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/mathcoding/opt4ds" rel="dct:source">https://github.com/mathcoding/opt4ds</a>.
**NOTE:** Run the following cell whenever running this notebook on Google Colab.
```
import shutil
import sys
import os.path
if not shutil.which("pyomo"):
!pip install -q pyomo
assert(shutil.which("pyomo"))
if not (shutil.which("glpk") or os.path.isfile("glpk")):
if "google.colab" in sys.modules:
!apt-get install -y -qq glpk-utils
else:
try:
!conda install -c conda-forge glpk
except:
pass
```
# Magic Square Solver
In this notebook, we propose an ILP model for the [Magic Square](https://en.wikipedia.org/wiki/Magic_square) puzzle.
The puzzle asks us to place the digits from $1$ to $n^2$ into an $n \times n$ grid so that the sum of the digits in each row, the sum in each column, and the sum on each of the two main diagonals are all equal to the same number.
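Note that the common sum is actually forced by the data: the digits $1,\dots,n^2$ total $n^2(n^2+1)/2$, and this is split evenly across the $n$ rows, so any solution must have $z = n(n^2+1)/2$. A one-line check, handy for sanity-testing the solver's output:

```python
def magic_constant(n):
    # sum of 1..n^2, divided evenly over the n rows
    return n * (n * n + 1) // 2

print(magic_constant(3), magic_constant(4))  # → 15 34
```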
## ILP Model
The model we propose is as follows.
**Decision Variables:** We use two types of variables:
* The variable $x_{ijk} \in \{0,1\}$ is equal to 1 if the cell in position $(i,j)$ contains the digit $k$, and is equal to 0 otherwise. For ease of exposition, we use the sets $I, J := \{1,\dots,n\}$ and $K := \{1,\dots,n^2\}$.
* The variable $z\in\mathbb{Z}_+$ represents the magic number.
**Objective function:** Since the problem is a feasibility problem, we can set the objective function to any constant value. Alternatively, we can use the sum of all the variables (which also avoids a warning from the solver).
**Constraints:** We introduce the following linear constraints, which encode the puzzle rules:
1. Every digit must be placed in exactly one position:
$$
\sum_{i \in I}\sum_{j \in J} x_{ijk} = 1, \;\; \forall k \in K
$$
2. Every position must contain exactly one digit:
$$
\sum_{k \in K} x_{ijk} = 1, \;\; \forall i \in I, \; \forall j \in J
$$
3. The sum of the digits in each row must be equal to $z$:
$$
\sum_{j \in J}\sum_{k \in K} k x_{ijk} = z, \;\; \forall i \in I
$$
4. The sum of the digits in each column must be equal to $z$:
$$
\sum_{i \in I}\sum_{k \in K} k x_{ijk} = z, \;\; \forall j \in J
$$
5. The sum of the digits over each of the two main diagonals is equal to $z$:
$$
\sum_{i \in I} \sum_{k \in K} k x_{iik} = z,
$$
$$
\sum_{i \in I} \sum_{k \in K} k x_{i(n-i+1)k} = z.
$$
We show next how to implement this model in Pyomo.
## Pyomo implementation
As a first step we import the Pyomo libraries.
```
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import Binary, RangeSet, ConstraintList, PositiveIntegers
```
We create an instance of the class *ConcreteModel*, and we start adding the *RangeSet* and *Var* objects corresponding to the index sets and the variables of our model. We also set the objective function.
```
# Create concrete model
model = ConcreteModel()
n = 4
# Set of indices
model.I = RangeSet(1, n)
model.J = RangeSet(1, n)
model.K = RangeSet(1, n*n)
# Variables
model.z = Var(within=PositiveIntegers)
model.x = Var(model.I, model.J, model.K, within=Binary)
# Objective Function
model.obj = Objective(expr = model.z)
```
Then, we encode all the constraints of our model using the Pyomo syntax.
```
def Unique(model, k):
return sum(model.x[i,j,k] for j in model.J for i in model.I) == 1
model.unique = Constraint(model.K, rule = Unique)
def CellUnique(model, i, j):
return sum(model.x[i,j,k] for k in model.K) == 1
model.cellUnique = Constraint(model.I, model.J, rule = CellUnique)
def Row(model, i):
return sum(k*model.x[i,j,k] for j in model.J for k in model.K) == model.z
model.row = Constraint(model.I, rule = Row)
def Col(model, j):
return sum(k*model.x[i,j,k] for i in model.I for k in model.K) == model.z
model.column = Constraint(model.J, rule = Col)
model.diag1 = Constraint(
expr = sum(k*model.x[i,i,k] for i in model.I for k in model.K) == model.z)
model.diag2 = Constraint(
expr = sum(k*model.x[i,n-i+1,k] for i in model.I for k in model.K) == model.z)
```
Finally, we solve the model for a given $n$ and we check the solution status.
```
# Solve the model
sol = SolverFactory('glpk').solve(model)
# CHECK SOLUTION STATUS
# Get a JSON representation of the solution
sol_json = sol.json_repn()
# Check solution status
if sol_json['Solver'][0]['Status'] != 'ok':
    print("Solver status is not ok")
if sol_json['Solver'][0]['Termination condition'] != 'optimal':
    print("Termination condition is not optimal")
```
If the problem is solved and a feasible solution is available, we write the solution into a colorful **magic square**.
```
def PlotMagicSquare(x, n):
# Report solution value
import matplotlib.pyplot as plt
import numpy as np
import itertools
sol = np.zeros((n,n), dtype=int)
for i, j, k in x:
if x[i,j,k]() > 0.5:
sol[i-1,j-1] = k
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(6,6))
plt.imshow(sol, interpolation='nearest', cmap=cmap)
plt.title("Magic Square, Size: {}".format(n))
plt.axis('off')
for i, j in itertools.product(range(n), range(n)):
plt.text(j, i, "{:d}".format(sol[i, j]),
fontsize=24, ha='center', va='center')
plt.tight_layout()
plt.show()
PlotMagicSquare(model.x, n)
```
```
import sys
sys.path.append('..')
sys.path.append('../..')
from stats import *
from sentiment_stats import *
from peewee import SQL
from database.models import RawFacebookComments, RawTwitterComments, RawInstagramComments, RawYouTubeComments, RawHashtagComments
rede_social = 'Hashtags'
modelo = RawHashtagComments
cores = ['#FFA726', '#66BB6A', '#42A5F5', '#FFEE58', '#EF5350', '#AB47BC', '#C8C8C8']
cores2 = ['#FFA726', '#AB47BC', '#FFEE58', '#C8C8C8', '#EF5350', '#66BB6A', '#42A5F5']
cores_val = ['#EF5350', '#C8C8C8', '#66BB6A']
cores_val2 = ['#66BB6A', '#EF5350', '#C8C8C8']
sentimentos = ['ALEGRIA', 'SURPRESA', 'TRISTEZA', 'MEDO', 'RAIVA', 'DESGOSTO', 'NEUTRO']
valencia = ['POSITIVO', 'NEGATIVO', 'NEUTRO']
valencia_dict = OrderedDict()
for val in valencia:
valencia_dict[val] = 0
sentimentos_dict = OrderedDict()
for sentimento in sentimentos:
sentimentos_dict[sentimento] = 0
default_clause = [
SQL('length(clean_comment) > 0'),
]
positivo_clause = [
SQL('length(emotion) > 0 AND length(valence) > 0'),
SQL('emotion in ("ALEGRIA", "SURPRESA") AND valence = "POSITIVO"')
]
negativo_clause = [
SQL('length(emotion) > 0 AND length(valence) > 0'),
SQL('emotion in ("TRISTEZA", "RAIVA", "MEDO", "DESGOSTO") AND valence = "NEGATIVO"')
]
neutro_clause = [
SQL('length(emotion) > 0 AND length(valence) > 0'),
SQL('emotion in ("NEUTRO") AND valence = "NEUTRO"')
]
general = default_clause + [
SQL('length(emotion) > 0 AND length(valence) > 0'),
SQL("""
(emotion in ("ALEGRIA", "SURPRESA") AND valence = "POSITIVO")
OR
(emotion in ("TRISTEZA", "RAIVA", "MEDO", "DESGOSTO") AND valence = "NEGATIVO")
OR
(emotion in ("NEUTRO") AND valence = "NEUTRO")
""")
]
```
### Overall comment emotions : Twitter Hashtags
```
total_comentarios = modelo.select() \
.where(default_clause) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
#### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
#### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
#### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
#### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
### Emotions for selected hashtags : Twitter Hashtags
#### EleNão
```
hashtag_c = [modelo.hashtag == 'EleNão']
total_comentarios = modelo.select() \
.where(reduce(operator.and_, default_clause + hashtag_c)) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general + hashtag_c)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
##### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
##### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
##### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
##### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
#### elesimeno1turno
```
hashtag_c = [modelo.hashtag == 'elesimeno1turno']
total_comentarios = modelo.select() \
.where(reduce(operator.and_, default_clause + hashtag_c)) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general + hashtag_c)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
##### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
##### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
##### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
##### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
#### ViraViraCiro
```
hashtag_c = [modelo.hashtag == 'ViraViraCiro']
total_comentarios = modelo.select() \
.where(reduce(operator.and_, default_clause + hashtag_c)) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general + hashtag_c)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
##### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
##### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
##### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
##### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
#### FicaTemer
```
hashtag_c = [modelo.hashtag == 'FicaTemer']
total_comentarios = modelo.select() \
.where(reduce(operator.and_, default_clause + hashtag_c)) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general + hashtag_c)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
##### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
##### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
##### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
##### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
#### MarketeirosDoJair
```
hashtag_c = [modelo.hashtag == 'MarketeirosDoJair']
total_comentarios = modelo.select() \
.where(reduce(operator.and_, default_clause + hashtag_c)) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general + hashtag_c)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
##### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
##### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
##### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
##### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
#### ViraVotoHaddad13
```
hashtag_c = [modelo.hashtag == 'ViraVotoHaddad13']
total_comentarios = modelo.select() \
.where(reduce(operator.and_, default_clause + hashtag_c)) \
.count()
comentarios_positivos = modelo.select() \
.where(reduce(operator.and_, default_clause + positivo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_negativos = modelo.select() \
.where(reduce(operator.and_, default_clause + negativo_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios_neutros = modelo.select() \
.where(reduce(operator.and_, default_clause + neutro_clause + hashtag_c)) \
.order_by(modelo.timestamp)
comentarios = modelo.select() \
.where(reduce(operator.and_, general + hashtag_c)) \
.order_by(modelo.timestamp)
alegria, surpresa, tristeza, medo, raiva, desgosto, positivo, negativo, neutro = load_emocoes_comentarios(comentarios_positivos, comentarios_negativos, comentarios_neutros)
print_statistics(rede_social, total_comentarios, comentarios_positivos, comentarios_negativos, comentarios_neutros)
```
##### Total comment counts : Valence
```
graph_valence_total(rede_social, cores_val2, valencia, positivo, negativo, neutro)
```
##### Total comment counts : Emotions
```
graph_sentimentos_total(rede_social, cores, sentimentos, alegria, surpresa, tristeza, medo, raiva, desgosto, neutro)
```
##### Comments by date : Valence
```
graph_valencia_por_data(rede_social, cores_val, valencia_dict, comentarios)
```
##### Comments by date : Emotions
```
graph_emocoes_por_data(rede_social, cores2, sentimentos_dict, comentarios)
```
# Process metadata
This notebook checks that each experiment id is associated with gene expression data (via its run ids) and returns a clean list of experiment ids that have gene expression data.
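The core of the check can be sketched with plain Python (all ids below are made up): an experiment id is kept if any of its run ids appears among the samples in the expression matrix:

```python
# map each experiment id to its run (sample) ids, as in the metadata table
experiment_runs = {
    "SRP000001": ["SRR01", "SRR02"],
    "SRP000002": ["SRR03"],
}
# sample ids actually present in the gene expression matrix
samples_with_expression = {"SRR01", "SRR05"}

# keep an experiment if any of its runs has expression data
kept = [
    exp for exp, runs in experiment_runs.items()
    if any(run in samples_with_expression for run in runs)
]
print(kept)  # → ['SRP000001']
```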
```
%load_ext autoreload
%autoreload 2
import os
import sys
import glob
import pandas as pd
import numpy as np
import random
import warnings
warnings.filterwarnings(action='ignore')
sys.path.append("../../")
from functions import utils
from numpy.random import seed
randomState = 123
seed(randomState)
# Read in config variables
config_file = os.path.abspath(os.path.join(os.getcwd(),"../../configs", "config_Human_experiment.tsv"))
params = utils.read_config(config_file)
# Load parameters
dataset_name = params["dataset_name"]
# Input files
# base dir on repo
base_dir = os.path.abspath(os.path.join(os.getcwd(),"../.."))
mapping_file = os.path.join(
base_dir,
dataset_name,
"data",
"metadata",
"recount2_metadata.tsv")
normalized_data_file = os.path.join(
base_dir,
dataset_name,
"data",
"input",
"recount2_gene_normalized_data.tsv.xz")
# Output file
experiment_id_file = os.path.join(
base_dir,
dataset_name,
"data",
"metadata",
"recount2_experiment_ids.txt")
```
### Get experiment ids
```
# Read in metadata
metadata = pd.read_table(
mapping_file,
header=0,
sep='\t',
index_col=0)
metadata.head()
map_experiment_sample = metadata[['run']]
map_experiment_sample.head()
experiment_ids = np.unique(np.array(map_experiment_sample.index)).tolist()
print("There are {} experiments in the compendium".format(len(experiment_ids)))
```
### Get sample ids from gene expression data
```
normalized_data = pd.read_table(
normalized_data_file,
header=0,
sep='\t',
index_col=0).T
normalized_data.head()
sample_ids_with_gene_expression = list(normalized_data.index)
```
### Get samples belonging to selected experiment
```
experiment_ids_with_gene_expression = []
for experiment_id in experiment_ids:
# Some project id values are descriptions
# We will skip these
if len(experiment_id) == 9:
print(experiment_id)
selected_metadata = metadata.loc[experiment_id]
#print("There are {} samples in experiment {}".format(selected_metadata.shape[0], experiment_id))
sample_ids = list(selected_metadata['run'])
if any(x in sample_ids_with_gene_expression for x in sample_ids):
experiment_ids_with_gene_expression.append(experiment_id)
print('There are {} experiments with gene expression data'.format(len(experiment_ids_with_gene_expression)))
experiment_ids_with_gene_expression_df = pd.DataFrame(experiment_ids_with_gene_expression, columns=['experiment_id'])
experiment_ids_with_gene_expression_df.head()
# Save experiment ids
experiment_ids_with_gene_expression_df.to_csv(experiment_id_file, sep='\t')
```
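As a toy illustration of the membership check above (all ids below are invented), keeping the expression sample ids in a `set` gives O(1) lookups instead of scanning a list for every run id:

```python
# Toy version of the experiment-filtering step; ids are made up for illustration.
sample_ids_with_expr = {"SRR001", "SRR002", "SRR003"}  # set -> O(1) membership tests

experiments = {
    "SRP00001": ["SRR001", "SRR009"],  # one run has expression data -> keep
    "SRP00002": ["SRR010", "SRR011"],  # no runs with expression data -> drop
}

kept = [exp_id for exp_id, runs in experiments.items()
        if any(r in sample_ids_with_expr for r in runs)]
print(kept)  # -> ['SRP00001']
```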
```
#alpha3: adds random fluctuation to the transmission matrix; the total function controls training time, the output folder, and the number of reconstructed test results
# Aside: how to recover an accidentally deleted notebook cell.
# Scenario: a cell was cut or deleted by mistake (as long as the notebook window is still open).
# Fix: press Esc to enter command mode, then press z to restore it. Do not press Ctrl+z (that only undoes edits inside a cell).
# Telling the modes apart:
# Command mode: the cell's left border is blue.
# torch.nn.functional.tanh is deprecated on master now that Tensors and Variables have been merged.
# With nn.functional.tanh deprecated, use the module form instead:
# output = nn.Tanh()(input)
import h5py
import numpy as np
import os
import torch
import torch.nn as nn
from torch import autograd
import torch.nn.functional as F
import time
import math
from PIL import Image,ImageFilter
import matplotlib.pyplot as plt  # for displaying images
import matplotlib.image as mpimg  # for reading image files
from torch.nn.parameter import Parameter
from torch.nn import init
import zipfile
import torch.optim as optim
class Scattering_system_simulation:  # simulates speckle formation through a complex transmission matrix
def __init__(self,Win,Wout,device,sparseValue,I_random_range,proportionSigma):
self.Matrix_R = self.sparseMatrixGenerate(Win,Wout,device,sparseValue)
self.Matrix_I = self.sparseMatrixGenerate(Win,Wout,device,sparseValue)
self.Win=Win
self.Wout=Wout
self.device=device
self.I_random_range = I_random_range
self.sigmaBasicProportion = 0.5/(Win*Win)
self.proportionSigma = proportionSigma
def sparseMatrixGenerate(self,Win,Wout,device,sparseValue):
_sparseValue=torch.tensor([sparseValue],dtype=torch.float32)
Matrix = torch.rand(Win*Win, Wout*Wout)
#Sparsity mask: torch.rand samples uniformly from [0, 1)
_sparseMatrix = torch.rand(Win*Win, Wout*Wout)
_sparseMatrix = torch.where(_sparseMatrix>_sparseValue,torch.tensor([1.0]),torch.tensor([0.0]))
Matrix = _sparseMatrix*Matrix
_div = (Matrix).sum(dim=0,keepdim=True)+1e-6  # small constant to avoid division by zero
Matrix = Matrix/_div
Matrix = Matrix.to(device)
return Matrix
def I_rand_Generation_everyRand(self):
return torch.randn(1,self.Win*self.Win,device=self.device)*(self.I_random_range)+1
# def I_rand_Generation_Fixed():
# def I_rand_Generation_inputCorrelation():
def generate(self, input):#input.shape = (Win,Win)
with torch.no_grad():
input_R=input.view(1,self.Win*self.Win)
input_I=self.I_rand_Generation_everyRand()
tempMatrix_R = self.Matrix_R
tempMatrix_I = self.Matrix_I
R1 = torch.matmul(input_R,tempMatrix_R)
I1 = torch.matmul(input_I,tempMatrix_R)
I2 = torch.matmul(input_R,tempMatrix_I)
R2 = torch.matmul(input_I,tempMatrix_I)
# return torch.nn.functional.sigmoid(torch.sqrt(torch.pow((R1-R2),2)+torch.pow((I1+I2),2)).view(self.Wout,self.Wout))
return torch.sqrt(torch.pow((R1-R2),2)+torch.pow((I1+I2),2)).view(self.Wout,self.Wout)
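# Sanity check (illustrative only, not part of the pipeline): the four matmuls in
# generate() are the real and imaginary parts of a single complex matrix product,
# so the returned value is |(input_R + i*input_I) @ (Matrix_R + i*Matrix_I)|.
# Verified here with NumPy on small random matrices.
_r, _i = np.random.randn(1, 4), np.random.randn(1, 4)
_MR, _MI = np.random.randn(4, 4), np.random.randn(4, 4)
_mag = np.sqrt((_r @ _MR - _i @ _MI)**2 + (_i @ _MR + _r @ _MI)**2)
assert np.allclose(_mag, np.abs((_r + 1j*_i) @ (_MR + 1j*_MI)))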
import matplotlib.pyplot as plt  # for displaying images
import matplotlib.image as mpimg  # for reading image files
def show_original_and_speckle(index,samples,labels):
plt.figure()
plt.subplot(1,2,1)
plt.imshow(samples[index][0].cpu(),cmap='gray')
plt.subplot(1,2,2)
plt.imshow(labels[index][0].cpu(),cmap='gray')
plt.show()
def show_test_original_and_speckle(index,testSamples,testLabels):
plt.figure()
plt.subplot(1,2,1)
plt.imshow(testSamples[index][0].cpu(),cmap='gray')
plt.subplot(1,2,2)
plt.imshow(testLabels[index][0].cpu(),cmap='gray')
plt.show()
#Dataset subclass
import torch.utils.data.dataloader as DataLoader
import torch.utils.data.dataset as Dataset
class subDataset(Dataset.Dataset):
#store the data, labels, and target device
def __init__(self, Data, Label,W,device):
#torch.randn(1,self.Win*self.Win,device=self.device)*(self.I_random_range)
self.Data = Data
self.Label = Label
self.device = device
#dataset size
def __len__(self):
return len(self.Data)
#return one sample and its label, moved to the device
def __getitem__(self, index):
data = self.Data[index].to(self.device)
label = self.Label[index].to(self.device)
return data, label
#Network: a single fully connected layer, single channel, conv layers before it, none after
class Transmission_Matrix(nn.Module):
def __init__(self,Win,Wout): #batchsize * channelnum * W * W,
super(Transmission_Matrix, self).__init__()
self.Matrix = nn.Linear(Win*Win, Wout*Wout, bias=True)
self.Win = Win
self.Wout = Wout
def forward(self, input):
W = input.shape[2]
input = input.view(input.shape[0],input.shape[1],self.Win*self.Win)
out = self.Matrix(input)
out = out.view(input.shape[0],input.shape[1],self.Wout,self.Wout)
return out
class EnhancedNet(nn.Module):
def __init__(self,Win,Wout,Temp_feature_nums):
super(EnhancedNet, self).__init__()
self.head=nn.Sequential(
nn.Conv2d(1,Temp_feature_nums,3,padding=3//2),
nn.Tanh(),
nn.Conv2d(Temp_feature_nums,Temp_feature_nums,3, padding=3//2),
nn.Tanh(),
nn.Conv2d(Temp_feature_nums,1,3, padding=3//2),
nn.Tanh()
)
self.Matrix_r = Transmission_Matrix(Win,Wout)
self.tailAct=nn.Sigmoid()
def forward(self,input):
result = self.tailAct(self.Matrix_r(self.head(input)))
return result
#showTestResult_XY = PositionXYGenerator(120,device)
def showTestResult(net,testSamples,testLabels,index,device):
net.train(False)
_sample = testSamples[index].to(device)
with torch.no_grad():
output = net(_sample.view(1,1,64,64))
plt.figure()
plt.subplot(131)
plt.title("speckle")
plt.imshow(testSamples[index][0].cpu(),cmap='gray')
# plt.show() is not needed here
plt.subplot(132)
plt.title("output")
img = output[0][0].cpu().numpy()
img = np.where(img > 0, img, 0)  # clip negative values before display
plt.imshow(img, cmap='gray')
plt.subplot(133)
plt.title("real label")
img_t = testLabels[index][0].cpu().numpy()
img_t = np.where(img_t > 0, img_t, 0)  # clip negative values before display
plt.imshow(img_t, cmap='gray')
plt.show()
net.train(True)
import matplotlib.pyplot as plt  # for displaying images
import matplotlib.image as mpimg  # for reading image files
def SaveResult(net,testSamples,testLabels,index,device,root_path,windows_or_linux='\\'):
net.train(False)
_sample = testSamples[index].to(device)
with torch.no_grad():
output = net(_sample.view(1,1,64,64))
mpimg.imsave(root_path+windows_or_linux+str(index)+'_speckle'+'.png',testSamples[index][0].cpu().numpy(),cmap='gray')
mpimg.imsave(root_path+windows_or_linux+str(index)+'_reconstruction'+'.png',output[0][0].cpu().numpy(),cmap='gray')
mpimg.imsave(root_path+windows_or_linux+str(index)+'_realLabel'+'.png',testLabels[index][0].cpu().numpy(),cmap='gray')
net.train(True)
# PSNR.py
import numpy as np
import math
def psnr(target, ref):
# target: image under test; ref: reference image; scale: peak signal value
# assume RGB image
target_data = np.array(target)
ref_data = np.array(ref)
diff = ref_data - target_data
diff = diff.flatten('C')
rmse = math.sqrt( np.mean(diff ** 2.) )
scale=1.0
return 20*math.log10(scale/rmse)
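# Quick check of the formula above (illustrative): with pixel values in [0, 1]
# (scale = 1.0), a uniform error of 0.1 gives rmse = 0.1,
# so PSNR = 20*log10(1.0/0.1) = 20 dB.
_rmse = math.sqrt(np.mean((np.full((4, 4), 0.1) - np.zeros((4, 4))) ** 2))
assert abs(20 * math.log10(1.0 / _rmse) - 20.0) < 1e-6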
import matplotlib.pyplot as plt  # for displaying images
import matplotlib.image as mpimg  # for reading image files
def set_psnr(net,testSamples,testLabels,index,device):
net.train(False)
_sample = testSamples[index].to(device)
with torch.no_grad():
output = net(_sample.view(1,1,64,64))
output = psnr(output[0][0].cpu().numpy(),testLabels[index][0].cpu().numpy())
net.train(True)
return output
import torch
import torch.nn.functional as F
from math import exp
import numpy as np
from torch.autograd import Variable
# Compute a 1-D Gaussian vector
def gaussian(window_size, sigma):
gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
return gauss/gauss.sum()
# Build the Gaussian kernel as the outer product of two 1-D Gaussian vectors.
# The channel argument extends it to multi-channel (e.g. 3-channel) input.
def create_window(window_size, channel=1):
_1D_window = gaussian(window_size, 1.5).unsqueeze(1)
_2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
window = _2D_window.expand(channel, 1, window_size, window_size).contiguous()
return window
# Compute SSIM.
# Uses the SSIM formula directly, except that the means are computed with a
# normalized Gaussian-kernel convolution rather than a plain pixel average.
# Variance and covariance come from Var(X) = E[X^2] - E[X]^2 and
# cov(X, Y) = E[XY] - E[X]E[Y], again with each expectation replaced by a Gaussian convolution.
def ssim(img1, img2, window_size=11, window=None, size_average=True, full=False, val_range=1):
# Value range can be different from 255. Other common ranges are 1 (sigmoid) and 2 (tanh).
if val_range is None:
if torch.max(img1) > 128:
max_val = 255
else:
max_val = 1
if torch.min(img1) < -0.5:
min_val = -1
else:
min_val = 0
L = max_val - min_val
else:
L = val_range
padd = 0
(_, channel, height, width) = img1.size()
if window is None:
real_size = min(window_size, height, width)
window = create_window(real_size, channel=channel).to(img1.device)
mu1 = F.conv2d(img1, window, padding=padd, groups=channel)
mu2 = F.conv2d(img2, window, padding=padd, groups=channel)
mu1_sq = mu1.pow(2)
mu2_sq = mu2.pow(2)
mu1_mu2 = mu1 * mu2
sigma1_sq = F.conv2d(img1 * img1, window, padding=padd, groups=channel) - mu1_sq
sigma2_sq = F.conv2d(img2 * img2, window, padding=padd, groups=channel) - mu2_sq
sigma12 = F.conv2d(img1 * img2, window, padding=padd, groups=channel) - mu1_mu2
C1 = (0.01 * L) ** 2
C2 = (0.03 * L) ** 2
v1 = 2.0 * sigma12 + C2
v2 = sigma1_sq + sigma2_sq + C2
cs = torch.mean(v1 / v2) # contrast sensitivity
ssim_map = ((2 * mu1_mu2 + C1) * v1) / ((mu1_sq + mu2_sq + C1) * v2)
if size_average:
ret = ssim_map.mean()
else:
ret = ssim_map.mean(1).mean(1).mean(1)
if full:
return ret, cs
return ret
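# Illustration of the identities used above (with plain means standing in for the
# Gaussian-weighted means that ssim() uses):
# Var(X) = E[X^2] - E[X]^2 and cov(X, Y) = E[XY] - E[X]*E[Y].
_x, _y = np.random.randn(1000), np.random.randn(1000)
assert np.isclose(np.mean(_x**2) - np.mean(_x)**2, np.var(_x))
assert np.isclose(np.mean(_x*_y) - np.mean(_x)*np.mean(_y), np.mean((_x - _x.mean()) * (_y - _y.mean())))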
# Classes to re-use window
class SSIM(torch.nn.Module):
def __init__(self, window_size=11, size_average=True, val_range=None):
super(SSIM, self).__init__()
self.window_size = window_size
self.size_average = size_average
self.val_range = val_range
# Assume 1 channel for SSIM
self.channel = 1
self.window = create_window(window_size)
def forward(self, img1, img2):
(_, channel, _, _) = img1.size()
if channel == self.channel and self.window.dtype == img1.dtype:
window = self.window
else:
window = create_window(self.window_size, channel).to(img1.device).type(img1.dtype)
self.window = window
self.channel = channel
return ssim(img1 ,img2 ,window_size=self.window_size ,window=self.window ,size_average=self.size_average ,val_range=self.val_range)
def ssim_calculate(ssim_class,image1,image2):
with torch.no_grad():
result=ssim_class(image1,image2)
return result
#Reconstruct from a speckle with the network and compute SSIM against the label
def netAndOriginalAndLabel_to_ssim(ssim_class,net,W1,W2,testSamples,testLabels,index,device):
net.train(False)
_sample = (testSamples[index].view(1,1,W1,W1)).to(device)
with torch.no_grad():
output = net(_sample)
output = ssim_calculate(ssim_class,output.cpu(),(testLabels[index].view(1,1,W2,W2))).numpy()
net.train(True)
return output
import zipfile
def zip_ya(startdir):
file_news = startdir + '.zip'  # name of the resulting archive
z = zipfile.ZipFile(file_news, 'w', zipfile.ZIP_DEFLATED)
for dirpath, dirnames, filenames in os.walk(startdir):
fpath = dirpath.replace(startdir, '')  # strip the prefix so archive paths are relative, not rooted
fpath = fpath and fpath + os.sep or ''  # keep subfolder structure while archiving every contained file
for filename in filenames:
z.write(os.path.join(dirpath, filename), fpath + filename)
print('compression finished')
z.close()
# if __name__ == "__main__":
#     startdir = ".\\123"  # folder to compress
#     zip_ya(startdir)
def OneTrainingTotalFunction(ScatteringSystem,resultSavedPath,trainTime,reconstructedAndSavedTestNum,imageStrengthFlowSigma):
print("data preprocessing start...")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
SSS = ScatteringSystem
path = r'/root/code/images'
dirs = os.listdir(path)
trans_num = 10000
original_image = []
scatted=[]
_t_trans_num=0
# torch.randn(1,self.Win*self.Win,device=self.device)*(self.I_random_range)
originalImages_clear = np.zeros((trans_num,64,64), dtype = np.float32)
for dir in dirs:
if _t_trans_num<trans_num:
_t_trans_num=_t_trans_num+1
file = Image.open(path+'//'+dir)
i=(np.array(file.convert('L'))/255).astype(np.float32)
i=torch.from_numpy(i)
originalImages_clear[_t_trans_num-1] = i
i= i + torch.randn(64,64)*(imageStrengthFlowSigma)
original_image.append(i)
_t=SSS.generate(i.to(device))
scatted.append(_t.to('cpu'))
file.close()
np_original_data = np.zeros((trans_num,64,64), dtype = np.float32)
np_scatted_data = np.zeros((trans_num,64,64), dtype = np.float32)
for i in range(0,trans_num):
np_original_data[i] = original_image[i]
np_scatted_data[i] =scatted[i]
samples=torch.from_numpy(np_original_data[0:9500]).view(9500,1,64,64)
labels=torch.from_numpy(np_scatted_data[0:9500]).view(9500,1,64,64)
testSamples = torch.from_numpy(np_original_data[9500:10000]).view(500,1,64,64)
testLabels = torch.from_numpy(np_scatted_data[9500:10000]).view(500,1,64,64)
originalImages_clear = torch.from_numpy(originalImages_clear[9500:10000]).view(500,1,64,64)
#Swap samples and labels: the speckle becomes the network input and the original image the target
samples, labels = labels, samples
testSamples, testLabels = testLabels, testSamples
print(samples.shape)
print(labels.shape)
print(testSamples.shape)
print(testLabels.shape)
print(samples.dtype)
print(labels.dtype)
print(testSamples.dtype)
print(testLabels.dtype)
print("Samples and labels OK")
index=10
print("show_test_original_and_speckle: ",index)
show_test_original_and_speckle(index=index,testSamples=testSamples,testLabels=testLabels)
print("dataset processing...")
dataset = subDataset(samples,labels,64,device)
device_count = 1
if(torch.cuda.device_count()>1):
device_count = torch.cuda.device_count()
batchSize = 16*device_count
print("batchSize:",batchSize)
dataloader = DataLoader.DataLoader(dataset,batch_size= batchSize, shuffle = True)
print(dataset.__getitem__(10)[0].shape,dataset.__getitem__(10)[1].shape)
print("network building...")
net = EnhancedNet(Win=64,Wout=64,Temp_feature_nums=64)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
net = nn.DataParallel(net)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
index = 8
print("Untrained network reconstruction on test data: ",index)
showTestResult(net,testSamples,testLabels,index,device)
print("optimizer processing...")
#criterion = nn.CrossEntropyLoss()
#optimizer = optim.SGD(net.parameters(), lr=0.00001, momentum=0.9)
optimizer = optim.Adam(net.parameters(), lr=3e-4)
print("planned time is :",trainTime," ,start train...")
startTime = time.time()
net.train(True)
for epoch in range(200): # loop over the dataset multiple times
running_loss = 0.0
if (time.time()-startTime)<trainTime:
for i, data in enumerate(dataloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = torch.dist(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
#testReconstruction
running_loss += loss.item()
# if i %100 ==99: # print every 2000 mini-batches
# print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 200))
# running_loss = 0.0
showTempNetResultNum = 3000
if i %showTempNetResultNum == showTempNetResultNum-1: # report every showTempNetResultNum mini-batches
ticks = time.time()-startTime
print("elapsed seconds since training start:", ticks)
print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / showTempNetResultNum))
running_loss = 0.0
index = 10
showTestResult(net,testSamples,testLabels,index,device)
net.train(False)
print('totalTrainTime : ',(time.time()-startTime))
print('Finished Training')
ssim_class=SSIM(val_range=1)
print("created SSIM calculator")
#Save the test-set speckles, originals and reconstructions, along with each image's PSNR and SSIM.
print("Computing PSNR and SSIM for each of the 500 test images and saving the results...")
SimulationSystemResult_PSNR_SSIM=[]
average_psnr=0
average_ssim=0
root_path = resultSavedPath
if not os.path.exists(root_path):
os.makedirs(root_path)
root_path_for_imageResult = resultSavedPath+"//"+"imageResult"
if not os.path.exists(root_path_for_imageResult):
os.makedirs(root_path_for_imageResult)
root_path_for_qualityData = resultSavedPath+"//"+"qualityResult"
if not os.path.exists(root_path_for_qualityData):
os.makedirs(root_path_for_qualityData)
with open(root_path_for_qualityData+"//"+"qualityResult.txt", 'w' ) as f:
imageResult_root_path=root_path_for_imageResult
windows_or_linux='//'
net.train(False)
for index in range(500):
_sample = testSamples[index].to(device)
with torch.no_grad():
output = net(_sample.view(1,1,64,64))
if index < reconstructedAndSavedTestNum :
mpimg.imsave(imageResult_root_path+windows_or_linux+str(index)+'_speckle'+'.png',testSamples[index][0].cpu().numpy(),cmap='gray')
mpimg.imsave(imageResult_root_path+windows_or_linux+str(index)+'_reconstruction'+'.png',output[0][0].cpu().numpy(),cmap='gray')
mpimg.imsave(imageResult_root_path+windows_or_linux+str(index)+'_realLabel'+'.png',testLabels[index][0].cpu().numpy(),cmap='gray')
single_psnr = psnr(output[0][0].cpu().numpy(),originalImages_clear[index][0].cpu().numpy())
average_psnr = average_psnr + single_psnr
single_ssim = ssim_calculate(ssim_class,output.cpu(),originalImages_clear[index].view(1,1,64,64)).numpy()
average_ssim = average_ssim + single_ssim
s = "result_" + str(index) + "_psnr_and_ssim"
SimulationSystemResult_PSNR_SSIM.append((s, single_psnr, single_ssim))
f.writelines(s + ' ' + str(single_psnr) + ' ' + str(single_ssim) + '\n')
print((s, single_psnr, single_ssim))
average_psnr = average_psnr/500
average_ssim = average_ssim/500
with open(root_path_for_qualityData+"//"+"averageQualityResult.txt", 'w' ) as f:
f.writelines('average_psnr : '+str(average_psnr)+'\n')
f.writelines('average_ssim : '+str(average_ssim)+'\n')
print("average PSNR and SSIM : ",average_psnr,average_ssim)
print("result save complete")
z = zipfile.ZipFile('images.zip', 'r')
z.extractall(path=r'/root/code/images')
z.close()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
SSS=Scattering_system_simulation(64,64,device,sparseValue=0,I_random_range=0.00,proportionSigma=0)
DifferentSigmaList = [0,0.1,0.2,0.3,0.4,0.5]
resultRootSavedPath=r"/root/code/TestImageStrengthFlow"
for sigma in DifferentSigmaList:
resultSavedPath = resultRootSavedPath +'//'+'sigma'+str(sigma)+'result'
OneTrainingTotalFunction(SSS,resultSavedPath=resultSavedPath,trainTime=600,reconstructedAndSavedTestNum=20,imageStrengthFlowSigma =sigma)
zip_ya(resultRootSavedPath)
print("processing success")
z = zipfile.ZipFile('images.zip', 'r')
z.extractall(path=r'/root/code/images')
z.close()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
SSS=Scattering_system_simulation(64,64,device,sparseValue=0,I_random_range=0.00,proportionSigma=0)
DifferentSigmaList = [0,0.1,0.2,0.3,0.4,0.5]
resultRootSavedPath=r"/root/code/TestIFlow"
for sigma in DifferentSigmaList:
resultSavedPath = resultRootSavedPath +'//'+'sigma'+str(sigma)+'result'
SSS.I_random_range = sigma
OneTrainingTotalFunction(SSS,resultSavedPath=resultSavedPath,trainTime=600,reconstructedAndSavedTestNum=20,imageStrengthFlowSigma=0)
zip_ya(resultRootSavedPath)
print("processing success")
```
# Batch Normalization – Solutions
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is **not** a good network for classifying MNIST digits. You could create a _much_ simpler network and get _better_ results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the `tf.layers` package. The second is the same network, but uses only lower level functions in the `tf.nn` package.
1. [Batch Normalization with `tf.layers.batch_normalization`](#example_1)
2. [Batch Normalization with `tf.nn.batch_normalization`](#example_2)
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named `mnist`. You'll need to run this cell before running anything else in the notebook.
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
```
# Batch Normalization using `tf.layers.batch_normalization`<a id="example_1"></a>
This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization)
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
```
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
```
We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on every third layer (when the layer's depth is divisible by 3), and strides of 1x1 otherwise. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
```
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
```
**Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
```
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs,
labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually, just to make sure batch normalization really worked
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
```
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
# Add batch normalization
To add batch normalization to the layers created by `fully_connected`, we did the following:
1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `dense` layer.
3. Used `tf.layers.batch_normalization` to normalize the layer's output. Notice we pass `is_training` to this layer to ensure the network updates its population statistics appropriately.
4. Passed the normalized values into a ReLU activation function.
```
def fully_connected(prev_layer, num_units, is_training):
"""
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
```
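Under the hood, batch normalization standardizes each feature over the current batch and then applies a learned scale (gamma) and shift (beta); at inference time the batch statistics are replaced by tracked population statistics. A minimal NumPy sketch of the training-time computation (the `gamma`, `beta`, and `eps` values below are illustrative, not TensorFlow's defaults):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-3):
    """Training-time batch norm over a (batch, features) array."""
    mean = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                       # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)   # standardize each feature
    return gamma * x_hat + beta               # learned scale and shift

x = np.array([[1.0, 10.0], [3.0, 30.0]])
out = batch_norm_train(x, gamma=np.ones(2), beta=np.zeros(2))
# Each column now has zero mean and (approximately) unit variance.
print(out.mean(axis=0), out.var(axis=0))
```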
To add batch normalization to the layers created by `conv_layer`, we did the following:
1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `conv2d` layer.
3. Used `tf.layers.batch_normalization` to normalize the convolutional layer's output. Notice we pass `is_training` to this layer to ensure the network updates its population statistics appropriately.
4. Passed the normalized values into a ReLU activation function.
If you compare this function to `fully_connected`, you'll see that – when using `tf.layers` – there really isn't any difference between normalizing a fully connected layer and a convolutional layer. However, if you look at the second example in this notebook, where we restrict ourselves to the `tf.nn` package, you'll see a small difference.
```
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
```
Batch normalization is still a new enough idea that researchers are still discovering how best to use it. In general, people seem to agree to remove the layer's bias (because the batch normalization already has terms for scaling and shifting) and add batch normalization _before_ the layer's non-linear activation function. However, for some networks it will work well in other ways, too.
Just to demonstrate this point, the following three versions of `conv_layer` show other ways to implement batch normalization. If you try running with any of these versions of the function, they should all still work fine (although some versions may still work better than others).
**Alternate solution that uses bias in the convolutional layer but still adds batch normalization before the ReLU activation function.**
```
def conv_layer(prev_layer, layer_num, is_training):
strides = 2 if layer_num % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
```
**Alternate solution that uses a bias and ReLU activation function _before_ batch normalization.**
```
def conv_layer(prev_layer, layer_num, is_training):
strides = 2 if layer_num % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=True, activation=tf.nn.relu)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
return conv_layer
```
**Alternate solution that uses a ReLU activation function _before_ normalization, but no bias.**
```
def conv_layer(prev_layer, layer_num, is_training):
strides = 2 if layer_num % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_num*4, 3, strides, 'same', use_bias=False, activation=tf.nn.relu)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
return conv_layer
```
To modify `train`, we did the following:
1. Added `is_training`, a placeholder to store a boolean value indicating whether or not the network is training.
2. Passed `is_training` to the `conv_layer` and `fully_connected` functions.
3. Each time we call `run` on the session, we added to `feed_dict` the appropriate value for `is_training`.
4. Moved the creation of `train_opt` inside a `with tf.control_dependencies...` statement. This is necessary to get the normalization layers created with `tf.layers.batch_normalization` to update their population statistics, which we need when performing inference.
```
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Add placeholder to indicate whether or not we're training the model
    is_training = tf.placeholder(tf.bool)

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for layer_i in range(1, 20):
        layer = conv_layer(layer, layer_i, is_training)

    # Flatten the output from the convolutional layers
    orig_shape = layer.get_shape().as_list()
    layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])

    # Add one fully connected layer
    layer = fully_connected(layer, 100, is_training)

    # Create the output layer with one node for each class
    logits = tf.layers.dense(layer, 10)

    # Define loss and training operations
    model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

    # Tell TensorFlow to update the population statistics while training
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)

    # Create operations to test accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Train and test the network
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_i in range(num_batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # train this batch
            sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})

            # Periodically check the validation or training loss and accuracy
            if batch_i % 100 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
                                                              labels: mnist.validation.labels,
                                                              is_training: False})
                print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
            elif batch_i % 25 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
                print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))

        # At the end, score the final accuracy for both the validation and test sets
        acc = sess.run(accuracy, {inputs: mnist.validation.images,
                                  labels: mnist.validation.labels,
                                  is_training: False})
        print('Final validation accuracy: {:>3.5f}'.format(acc))
        acc = sess.run(accuracy, {inputs: mnist.test.images,
                                  labels: mnist.test.labels,
                                  is_training: False})
        print('Final test accuracy: {:>3.5f}'.format(acc))

        # Score the first 100 test images individually, just to make sure batch normalization really worked
        correct = 0
        for i in range(100):
            correct += sess.run(accuracy, feed_dict={inputs: [mnist.test.images[i]],
                                                     labels: [mnist.test.labels[i]],
                                                     is_training: False})
        print("Accuracy on 100 samples:", correct/100)


num_batches = 800
batch_size = 64
learning_rate = 0.002

tf.reset_default_graph()
with tf.Graph().as_default():
    train(num_batches, batch_size, learning_rate)
```
With batch normalization, we now get excellent performance. In fact, validation accuracy is almost 94% after only 500 batches. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
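The train-versus-inference distinction is easy to see outside TensorFlow. Here is a minimal NumPy sketch (our own illustration, separate from the exercise code) of the two batch-norm modes: training normalizes with the current batch's statistics and updates running "population" statistics, while inference normalizes with the stored population statistics, so even a single sample works.

```
import numpy as np

def batch_norm(x, state, training, decay=0.99, eps=1e-3):
    if training:
        # Normalize with batch statistics, update the running (population) statistics
        mean, var = x.mean(axis=0), x.var(axis=0)
        state["mean"] = decay * state["mean"] + (1 - decay) * mean
        state["var"] = decay * state["var"] + (1 - decay) * var
    else:
        # Normalize with the stored population statistics
        mean, var = state["mean"], state["var"]
    return (x - mean) / np.sqrt(var + eps)

state = {"mean": np.zeros(3), "var": np.ones(3)}
rng = np.random.default_rng(0)
for _ in range(1000):                                   # "training" phase
    batch_norm(rng.normal(5.0, 2.0, size=(64, 3)), state, training=True)

# Inference on a single sample only works because the population stats were tracked
out = batch_norm(rng.normal(5.0, 2.0, size=(1, 3)), state, training=False)
```

If the population statistics were never updated, or were ignored at inference, normalizing a single sample would produce garbage, which is exactly what the 100-sample check above detects.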
# Batch Normalization using `tf.nn.batch_normalization`<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses `tf.nn` for almost everything, and expects you to implement batch normalization using [`tf.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization).
This implementation of `fully_connected` is much more involved than the one that uses `tf.layers`. However, if you went through the `Batch_Normalization_Lesson` notebook, things should look pretty familiar. To add batch normalization, we did the following:
1. Added the `is_training` parameter to the function signature so we can pass that information to the batch normalization layer.
2. Removed the bias and activation function from the `dense` layer.
3. Added `gamma`, `beta`, `pop_mean`, and `pop_variance` variables.
4. Used `tf.cond` to handle training and inference differently.
5. When training, we use `tf.nn.moments` to calculate the batch mean and variance. Then we update the population statistics and use `tf.nn.batch_normalization` to normalize the layer's output using the batch statistics. Notice the `with tf.control_dependencies...` statement - this is required to force TensorFlow to run the operations that update the population statistics.
6. During inference (i.e. when not training), we use `tf.nn.batch_normalization` to normalize the layer's output using the population statistics we calculated during training.
7. Passed the normalized values into a ReLU activation function.
If any of this code is unclear, it is almost identical to what we showed in the `fully_connected` function in the `Batch_Normalization_Lesson` notebook. Please see that for extensive comments.
```
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :param is_training: bool or Tensor
        Indicates whether or not the network is currently training, which tells the batch normalization
        layer whether or not it should update or use its population statistics.
    :returns Tensor
        A new fully connected layer
    """
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)

    gamma = tf.Variable(tf.ones([num_units]))
    beta = tf.Variable(tf.zeros([num_units]))

    pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)

    epsilon = 1e-3

    def batch_norm_training():
        batch_mean, batch_variance = tf.nn.moments(layer, [0])

        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))

        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(batch_normalized_output)
```
The changes we made to `conv_layer` are _almost_ exactly the same as the ones we made to `fully_connected`. However, there is an important difference. Convolutional layers have multiple feature maps, and each feature map uses shared weights. So we need to make sure we calculate our batch and population statistics **per feature map** instead of per node in the layer.
To accomplish this, we do **the same things** that we did in `fully_connected`, with two exceptions:
1. The sizes of `gamma`, `beta`, `pop_mean` and `pop_variance` are set to the number of feature maps (output channels) instead of the number of output nodes.
2. We change the parameters we pass to `tf.nn.moments` to make sure it calculates the mean and variance for the correct dimensions.
```
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :param is_training: bool or Tensor
        Indicates whether or not the network is currently training, which tells the batch normalization
        layer whether or not it should update or use its population statistics.
    :returns Tensor
        A new convolutional layer
    """
    strides = 2 if layer_depth % 3 == 0 else 1

    in_channels = prev_layer.get_shape().as_list()[3]
    out_channels = layer_depth*4

    weights = tf.Variable(
        tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))

    layer = tf.nn.conv2d(prev_layer, weights, strides=[1, strides, strides, 1], padding='SAME')

    gamma = tf.Variable(tf.ones([out_channels]))
    beta = tf.Variable(tf.zeros([out_channels]))

    pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
    pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)

    epsilon = 1e-3

    def batch_norm_training():
        # Important to use the correct dimensions here to ensure the mean and variance are calculated
        # per feature map instead of for the entire layer
        batch_mean, batch_variance = tf.nn.moments(layer, [0, 1, 2], keep_dims=False)

        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))

        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(batch_normalized_output)
```
To modify `train`, we did the following:
1. Added `is_training`, a placeholder to store a boolean value indicating whether or not the network is training.
2. Each time we call `run` on the session, we added to `feed_dict` the appropriate value for `is_training`.
3. We did **not** need to add the `with tf.control_dependencies...` statement that we added in the network that used `tf.layers.batch_normalization` because we handled updating the population statistics ourselves in `conv_layer` and `fully_connected`.
```
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Add placeholder to indicate whether or not we're training the model
    is_training = tf.placeholder(tf.bool)

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for layer_i in range(1, 20):
        layer = conv_layer(layer, layer_i, is_training)

    # Flatten the output from the convolutional layers
    orig_shape = layer.get_shape().as_list()
    layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])

    # Add one fully connected layer
    layer = fully_connected(layer, 100, is_training)

    # Create the output layer with one node for each class
    logits = tf.layers.dense(layer, 10)

    # Define loss and training operations
    model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
    train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)

    # Create operations to test accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Train and test the network
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_i in range(num_batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # train this batch
            sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})

            # Periodically check the validation or training loss and accuracy
            if batch_i % 100 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
                                                              labels: mnist.validation.labels,
                                                              is_training: False})
                print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
            elif batch_i % 25 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
                print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))

        # At the end, score the final accuracy for both the validation and test sets
        acc = sess.run(accuracy, {inputs: mnist.validation.images,
                                  labels: mnist.validation.labels,
                                  is_training: False})
        print('Final validation accuracy: {:>3.5f}'.format(acc))
        acc = sess.run(accuracy, {inputs: mnist.test.images,
                                  labels: mnist.test.labels,
                                  is_training: False})
        print('Final test accuracy: {:>3.5f}'.format(acc))

        # Score the first 100 test images individually, just to make sure batch normalization really worked
        correct = 0
        for i in range(100):
            correct += sess.run(accuracy, feed_dict={inputs: [mnist.test.images[i]],
                                                     labels: [mnist.test.labels[i]],
                                                     is_training: False})
        print("Accuracy on 100 samples:", correct/100)


num_batches = 800
batch_size = 64
learning_rate = 0.002

tf.reset_default_graph()
with tf.Graph().as_default():
    train(num_batches, batch_size, learning_rate)
```
Once again, the model with batch normalization quickly reaches a high accuracy. But in our run, notice that it doesn't seem to learn anything for the first 250 batches, then the accuracy starts to climb. That just goes to show - even with batch normalization, it's important to give your network a bit of time to learn before you decide it isn't working.
```
%matplotlib inline
from fastai.vision.all import *
from fastai.vision.gan import *
```
## LSun bedroom data
For this lesson, we'll be using the bedrooms from the [LSUN dataset](http://lsun.cs.princeton.edu/2017/). The full dataset is a bit too large so we'll use a sample from [kaggle](https://www.kaggle.com/jhoward/lsun_bedroom).
```
path = untar_data(URLs.LSUN_BEDROOMS)
```
We then grab all the images in the folder with the data block API. We don't create a validation set here, for reasons we'll explain later. The inputs consist of random noise of size 100 by default (this can be changed by replacing `generate_noise` with `partial(generate_noise, size=...)`), and the targets are the images of bedrooms.
```
dblock = DataBlock(blocks = (TransformBlock, ImageBlock),
                   get_x = generate_noise,
                   get_items = get_image_files,
                   splitter = IndexSplitter([]))

def get_dls(bs, size):
    dblock = DataBlock(blocks = (TransformBlock, ImageBlock),
                       get_x = generate_noise,
                       get_items = get_image_files,
                       splitter = IndexSplitter([]),
                       item_tfms=Resize(size, method=ResizeMethod.Crop),
                       batch_tfms = Normalize.from_stats(torch.tensor([0.5,0.5,0.5]), torch.tensor([0.5,0.5,0.5])))
    return dblock.dataloaders(path, path=path, bs=bs)
```
We'll begin with a small size since GANs take a lot of time to train.
```
dls = get_dls(128, 64)
dls.show_batch(max_n=16)
```
## Models
GANs, or [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661.pdf), were invented by Ian Goodfellow. The concept is that we will train two models at the same time: a generator and a critic. The generator will try to make new images similar to the ones in our dataset, and the critic will try to tell the real images apart from the ones the generator makes. The generator returns images, the critic a single number (usually 0. for fake images and 1. for real ones).
We train them against each other in the sense that at each step (more or less), we:
1. Freeze the generator and train the critic for one step by:
- getting one batch of true images (let's call that `real`)
- generating one batch of fake images (let's call that `fake`)
- have the critic evaluate each batch and compute a loss function from that; the important part is that it rewards positively the detection of real images and penalizes the fake ones
- update the weights of the critic with the gradients of this loss
2. Freeze the critic and train the generator for one step by:
- generating one batch of fake images
- evaluate the critic on it
- return a loss that rewards positively the critic thinking those are real images; in other words, the generator is rewarded for fooling the critic
- update the weights of the generator with the gradients of this loss
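To make the alternation concrete, here is a toy sketch of this two-step loop on a 1-D "dataset" (our own illustration with hand-computed gradients and WGAN-style scores, not the fastai classes used below):

```
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: real data ~ N(3, 1); generator g(z) = a*z + b; critic c(x) = w*x.
# WGAN-style scores: the critic maximizes mean(c(real)) - mean(c(fake)),
# the generator maximizes mean(c(fake)).
a, b = 1.0, 0.0          # generator parameters
w = 0.1                  # critic parameter
lr = 0.01

for step in range(2000):
    z = rng.normal(size=64)
    real = rng.normal(loc=3.0, size=64)
    fake = a * z + b

    # 1. Freeze the generator, train the critic:
    #    d/dw [mean(w*real) - mean(w*fake)] = mean(real) - mean(fake)
    w += lr * (real.mean() - fake.mean())
    w = float(np.clip(w, -1.0, 1.0))      # weight clipping, as in WGAN

    # 2. Freeze the critic, train the generator to raise c(fake) = w*(a*z + b):
    #    d/db = w, d/da = w * mean(z)
    b += lr * w
    a += lr * w * z.mean()

# The mean of the generated samples (which is b, since E[z] = 0)
# drifts toward the real mean of 3
```

Because the critic's score is linear and clipped, the only stable region is where the fake mean matches the real mean, which is exactly the adversarial equilibrium the two-step loop is chasing.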
Here, we'll use the [Wasserstein GAN](https://arxiv.org/pdf/1701.07875.pdf).
We create a generator and a critic that we pass to `GANLearner.wgan`. The `noise_size` is the size of the random vector from which our generator creates images.
```
generator = basic_generator(64, n_channels=3, n_extra_layers=1)
critic = basic_critic (64, n_channels=3, n_extra_layers=1, act_cls=partial(nn.LeakyReLU, negative_slope=0.2))
learn = GANLearner.wgan(dls, generator, critic, opt_func = partial(Adam, mom=0.))
learn.recorder.train_metrics=True
learn.recorder.valid_metrics=False
learn.fit(30, 2e-4, wd=0)
#learn.gan_trainer.switch(gen_mode=True)
learn.show_results(max_n=16, figsize=(8,8), ds_idx=0)
```
## The Basic Idea of Machine-learning
Imagine a monkey drawing on a canvas (say, of `128 * 128` pixels). What's the probability that it draws a human face? Almost zero. This implies that
* the manifold of human faces embedded in $\mathbb{R}^{128 \times 128}$ has a much smaller dimension;
* moreover, the manifold is sparse.
To see this, imagine you modify the background of a painting with a human face in the foreground: the points in $\mathbb{R}^{128 \times 128}$ before and after the modification are generally far from each other.
Thus, the task of machine learning is to find this low-dimensional sparse manifold, map it to a lower-dimensional compact space, and map elements of that space back to generate real-world objects, like paintings.
We call the real-world objects "observables", and the low-dimensional sparse manifold the "latent" space.
This serves both data compression and data abstraction. In fact, these are two aspects of one thing: the probability distribution of the data (which we will discuss in the next topic).
## Auto-encoder
### Conceptions
This basic idea naturally leads to the "auto-encoder", which has two parts:
1. Encoder: mapping the observable to latent.
2. Decoder: mapping the latent to observable.
Let $X$ be the space of observables, and $Z$ the latent space. Let $f: X \mapsto Z$ denote the encoder, and $g: Z \mapsto X$ the decoder. Then, for every $x \in X$, we would expect
\begin{equation}
g \circ f(x) \approx x.
\end{equation}
To characterize this approximation numerically, let $d_{\text{obs}}$ be some pre-defined distance on the space of observables; we can then define the loss
\begin{equation}
\mathcal{L}_{\text{recon}} = \frac{1}{|D|} \sum_{x \in D} d_{\text{obs}} \left(x, g \circ f (x) \right).
\end{equation}
We call this "reconstruction" loss, since $g \circ f (x)$ is a reconstruction of $x$.
To ensure the compactness of the latent space, an additional regularizer is added to the reconstruction loss, using some pre-defined distance $d_{\text{lat}}$ on the latent space. Thus, the total loss is
\begin{equation}
\mathcal{L} = \frac{1}{|D|} \sum_{x \in D} \left[ d_{\text{obs}} \left(x, g \circ f (x) \right)
+ d_{\text{lat}} \left( f(x), 0 \right) \right].
\end{equation}
The task is thus to find the functions $f$ and $g$ that minimize the total loss. This utilizes the universal approximation property of neural networks.
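Before turning to the TensorFlow implementation below, the total loss can be computed by hand for a toy linear encoder and decoder (our own illustration; the matrices here are arbitrary, and squared Euclidean distance plays the role of both $d_{\text{obs}}$ and $d_{\text{lat}}$):

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # the dataset D: 100 points in R^2

W_enc = np.array([[0.5], [0.5]])       # encoder f: R^2 -> R^1, f(x) = x @ W_enc
W_dec = np.array([[1.0, 1.0]])         # decoder g: R^1 -> R^2, g(z) = z @ W_dec

Z = X @ W_enc                          # latents f(x)
X_recon = Z @ W_dec                    # reconstructions g(f(x))

# Reconstruction term: mean over D of d_obs(x, g(f(x)))
recon_loss = np.mean(np.sum((X - X_recon) ** 2, axis=1))
# Regularizer term: mean over D of d_lat(f(x), 0)
reg = np.mean(np.sum(Z ** 2, axis=1))

total_loss = recon_loss + reg
```

Minimizing `total_loss` over the entries of `W_enc` and `W_dec` is the toy analogue of training the neural encoder and decoder below.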
### Reference:
1. [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder).
## Implementation
```
%matplotlib inline

from IPython.display import display
import matplotlib.pyplot as plt
from tqdm import tqdm
from PIL import Image
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

data_path = '../../dat/MNIST/'
mnist = input_data.read_data_sets(
    data_path, one_hot=True,
    source_url='http://yann.lecun.com/exdb/mnist/')

def get_encoder(latent_dim, hidden_layers):

    def encoder(observable, name='encoder', reuse=None):
        with tf.variable_scope(name, reuse=reuse):
            hidden = observable
            for hidden_layer in hidden_layers:
                hidden = tf.layers.dense(hidden, hidden_layer,
                                         activation=tf.nn.relu)
            latent = tf.layers.dense(hidden, latent_dim, activation=None)
        return latent

    return encoder

def get_decoder(observable_dim, hidden_layers):

    def decoder(latent, name='decoder', reuse=None):
        with tf.variable_scope(name, reuse=reuse):
            hidden = latent
            for hidden_layer in hidden_layers:
                hidden = tf.layers.dense(hidden, hidden_layer,
                                         activation=tf.nn.relu)
            reconstructed = tf.layers.dense(hidden, observable_dim,
                                            activation=tf.nn.sigmoid)
        return reconstructed

    return decoder

def get_loss(observable, encoder, decoder, regularizer=None, reuse=None):
    if regularizer is None:
        regularizer = lambda latent: 0.0

    with tf.name_scope('loss'):
        # shape: [batch_size, latent_dim]
        latent = encoder(observable, reuse=reuse)
        # shape: [batch_size, observable_dim]
        reconstructed = decoder(latent, reuse=reuse)
        # shape: [batch_size]
        squared_errors = tf.reduce_sum(
            (reconstructed - observable) ** 2,
            axis=1)
        mean_square_error = tf.reduce_mean(squared_errors)
        return mean_square_error + regularizer(latent)

latent_dim = 64

encoder = get_encoder(latent_dim=latent_dim,
                      hidden_layers=[512, 256, 128])
decoder = get_decoder(observable_dim=28*28,
                      hidden_layers=[128, 256, 512])

observable = tf.placeholder(shape=[None, 28*28],
                            dtype='float32',
                            name='observable')
latent_samples = tf.placeholder(shape=[None, latent_dim],
                                dtype='float32',
                                name='latent_samples')
generated = decoder(latent_samples, reuse=tf.AUTO_REUSE)

def regularizer(latent, name='regularizer'):
    with tf.name_scope(name):
        distances = tf.reduce_sum(latent ** 2, axis=1)
        return tf.reduce_mean(distances)

loss = get_loss(observable, encoder, decoder,
                regularizer=regularizer,
                reuse=tf.AUTO_REUSE)

optimizer = tf.train.AdamOptimizer(epsilon=1e-3)
train_op = optimizer.minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

loss_vals = []
for i in tqdm(range(100000)):
    X, y = mnist.train.next_batch(batch_size=128)
    _, loss_val = sess.run([train_op, loss], {observable: X})
    if np.isnan(loss_val):
        raise ValueError('Loss has been NaN.')
    loss_vals.append(loss_val)
print('Final loss:', np.mean(loss_vals[-100:]))

plt.plot(loss_vals)
plt.xlabel('steps')
plt.ylabel('loss')
plt.show()

def get_image(array):
    """
    Args:
        array: Numpy array with shape `[28*28]`.

    Returns:
        An image.
    """
    array = 255 * array
    array = array.reshape([28, 28])
    array = array.astype(np.uint8)
    return Image.fromarray(array)

latent_sample_vals = np.random.normal(size=[128, latent_dim])
generated_vals = sess.run(generated, {latent_samples: latent_sample_vals})

# Display the results
n_display = 5
for i in range(n_display):
    print('Generated:')
    display(get_image(generated_vals[i]))
    print()
```
# Monodepth Estimation with OpenVINO
This tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. Model information: https://docs.openvinotoolkit.org/latest/omz_models_model_midasnet.html

### What is Monodepth?
Monocular Depth Estimation is the task of estimating scene depth using a single image. It has many potential applications in robotics, 3D reconstruction, medical imaging and autonomous systems. For this demo, we use a neural network model called [MiDaS](https://github.com/intel-isl/MiDaS) which was developed by the [Embodied AI Foundation](https://www.embodiedaifoundation.org/). Check out the research paper below to learn more.
R. Ranftl, K. Lasinger, D. Hafner, K. Schindler and V. Koltun, ["Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer,"](https://ieeexplore.ieee.org/document/9178977) in IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2020.3019967.
## Preparation
### Imports
```
import sys
import time
from pathlib import Path
import cv2
import matplotlib.cm
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import (
    HTML,
    FileLink,
    Pretty,
    ProgressBar,
    Video,
    clear_output,
    display,
)
from openvino.inference_engine import IECore
sys.path.append("../utils")
from notebook_utils import load_image
```
### Settings
```
DEVICE = "CPU"
MODEL_FILE = "model/MiDaS_small.xml"
model_xml_path = Path(MODEL_FILE)
```
## Functions
```
def normalize_minmax(data):
    """Normalizes the values in `data` between 0 and 1"""
    return (data - data.min()) / (data.max() - data.min())

def convert_result_to_image(result, colormap="viridis"):
    """
    Convert network result of floating point numbers to an RGB image with
    integer values from 0-255 by applying a colormap.

    `result` is expected to be a single network result in 1,H,W shape
    `colormap` is a matplotlib colormap.
    See https://matplotlib.org/stable/tutorials/colors/colormaps.html
    """
    cmap = matplotlib.cm.get_cmap(colormap)
    result = result.squeeze(0)
    result = normalize_minmax(result)
    result = cmap(result)[:, :, :3] * 255
    result = result.astype(np.uint8)
    return result

def to_rgb(image_data) -> np.ndarray:
    """
    Convert image_data from BGR to RGB
    """
    return cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB)
```
## Load the Model
Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`. Get input and output keys and the expected input shape for the model.
```
ie = IECore()
net = ie.read_network(model=model_xml_path, weights=model_xml_path.with_suffix(".bin"))
exec_net = ie.load_network(network=net, device_name=DEVICE)
input_key = list(exec_net.input_info)[0]
output_key = list(exec_net.outputs.keys())[0]
network_input_shape = exec_net.input_info[input_key].tensor_desc.dims
network_image_height, network_image_width = network_input_shape[2:]
```
## Monodepth on Image
### Load, resize and reshape input image
The input image is read with OpenCV, resized to network input size, and reshaped to (N,C,H,W) (N=number of images, C=number of channels, H=height, W=width).
```
IMAGE_FILE = "data/coco_bike.jpg"
image = load_image(path=IMAGE_FILE)
# resize to input shape for network
resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width))
# reshape image to network input shape NCHW
input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)
```
### Do inference on image
Do the inference, convert the result to an image, and resize it to the original image shape
```
result = exec_net.infer(inputs={input_key: input_image})[output_key]
# convert network result of disparity map to an image that shows
# distance as colors
result_image = convert_result_to_image(result=result)
# resize back to original image shape. cv2.resize expects shape
# in (width, height), [::-1] reverses the (height, width) shape to match this
result_image = cv2.resize(result_image, image.shape[:2][::-1])
```
### Display monodepth image
```
fig, ax = plt.subplots(1, 2, figsize=(20, 15))
ax[0].imshow(to_rgb(image))
ax[1].imshow(result_image);
```
## Monodepth on Video
By default, only the first 100 frames are processed, in order to quickly check that everything works. Change NUM_FRAMES in the cell below to modify this. Set NUM_FRAMES to 0 to process the whole video.
### Video Settings
```
# Video source: https://www.youtube.com/watch?v=fu1xcQdJRws (Public Domain)
VIDEO_FILE = "data/Coco Walking in Berkeley.mp4"
# Number of seconds of input video to process. Set to 0 to process
# the full video.
NUM_SECONDS = 4
# Set ADVANCE_FRAMES to 1 to process every frame from the input video
# Set ADVANCE_FRAMES to 2 to process every second frame. This reduces
# the time it takes to process the video
ADVANCE_FRAMES = 2
# Set SCALE_OUTPUT to reduce the size of the result video
# If SCALE_OUTPUT is 0.5, the width and height of the result video
# will be half the width and height of the input video
SCALE_OUTPUT = 0.5
# The format to use for video encoding. vp09 is slow,
# but it works on most systems.
# Try the THEO encoding if you have FFMPEG installed.
# FOURCC = cv2.VideoWriter_fourcc(*"THEO")
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
# Create Path objects for the input video and the resulting video
output_directory = Path("output")
output_directory.mkdir(exist_ok=True)
result_video_path = output_directory / f"{Path(VIDEO_FILE).stem}_monodepth.mp4"
```
### Load Video
Load video from `VIDEO_FILE`, set in the *Video Settings* cell above. Open the video to read the frame width and height and fps, and compute values for these properties for the monodepth video.
```
cap = cv2.VideoCapture(str(VIDEO_FILE))
ret, image = cap.read()
if not ret:
    raise ValueError(f"The video at {VIDEO_FILE} cannot be read.")
input_fps = cap.get(cv2.CAP_PROP_FPS)
input_video_frame_height, input_video_frame_width = image.shape[:2]
target_fps = input_fps / ADVANCE_FRAMES
target_frame_height = int(input_video_frame_height * SCALE_OUTPUT)
target_frame_width = int(input_video_frame_width * SCALE_OUTPUT)
cap.release()

print(
    f"The input video has a frame width of {input_video_frame_width}, "
    f"frame height of {input_video_frame_height} and runs at {input_fps:.2f} fps"
)
print(
    "The monodepth video will be scaled with a factor "
    f"{SCALE_OUTPUT}, have width {target_frame_width}, "
    f" height {target_frame_height}, and run at {target_fps:.2f} fps"
)
```
### Do Inference on a Video and Create Monodepth Video
```
# Initialize variables
input_video_frame_nr = 0
start_time = time.perf_counter()
total_inference_duration = 0

# Open input video
cap = cv2.VideoCapture(str(VIDEO_FILE))

# Create result video
out_video = cv2.VideoWriter(
    str(result_video_path),
    FOURCC,
    target_fps,
    (target_frame_width * 2, target_frame_height),
)

num_frames = int(NUM_SECONDS * input_fps)
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if num_frames == 0 else num_frames
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()

try:
    while cap.isOpened():
        ret, image = cap.read()
        if not ret:
            cap.release()
            break
        if input_video_frame_nr >= total_frames:
            break

        # Only process every second frame
        # Prepare frame for inference
        # resize to input shape for network
        resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width))
        # reshape image to network input shape NCHW
        input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)

        # Do inference
        inference_start_time = time.perf_counter()
        result = exec_net.infer(inputs={input_key: input_image})[output_key]
        inference_stop_time = time.perf_counter()
        inference_duration = inference_stop_time - inference_start_time
        total_inference_duration += inference_duration

        if input_video_frame_nr % (10 * ADVANCE_FRAMES) == 0:
            clear_output(wait=True)
            progress_bar.display()
            # input_video_frame_nr // ADVANCE_FRAMES gives the number of
            # frames that have been processed by the network
            display(
                Pretty(
                    f"Processed frame {input_video_frame_nr // ADVANCE_FRAMES}"
                    f"/{total_frames // ADVANCE_FRAMES}. "
                    f"Inference time: {inference_duration:.2f} seconds "
                    f"({1/inference_duration:.2f} FPS)"
                )
            )

        # Transform network result to RGB image
        result_frame = to_rgb(convert_result_to_image(result))
        # Resize image and result to target frame shape
        result_frame = cv2.resize(result_frame, (target_frame_width, target_frame_height))
        image = cv2.resize(image, (target_frame_width, target_frame_height))
        # Put image and result side by side
        stacked_frame = np.hstack((image, result_frame))
        # Save frame to video
        out_video.write(stacked_frame)

        input_video_frame_nr = input_video_frame_nr + ADVANCE_FRAMES
        cap.set(1, input_video_frame_nr)

        progress_bar.progress = input_video_frame_nr
        progress_bar.update()

except KeyboardInterrupt:
    print("Processing interrupted.")
finally:
    clear_output()
    processed_frames = num_frames // ADVANCE_FRAMES
    out_video.release()
    cap.release()
    end_time = time.perf_counter()
    duration = end_time - start_time
    print(
        f"Processed {processed_frames} frames in {duration:.2f} seconds. "
        f"Total FPS (including video processing): {processed_frames/duration:.2f}. "
        f"Inference FPS: {processed_frames/total_inference_duration:.2f} "
    )
print(f"Monodepth Video saved to '{str(result_video_path)}'.")
```
### Display Monodepth Video
```
video = Video(result_video_path, width=800, embed=True)
if not result_video_path.exists():
    plt.imshow(stacked_frame)
    raise ValueError("OpenCV was unable to write the video file. Showing one video frame.")
else:
    print(f"Showing monodepth video saved at\n{result_video_path.resolve()}")
    print(
        "If you cannot see the video in your browser, please click on the "
        "following link to download the video "
    )
    video_link = FileLink(result_video_path)
    video_link.html_link_str = "<a href='%s' download>%s</a>"
    display(HTML(video_link._repr_html_()))
    display(video)
```
# Lambda School Data Science - Logistic Regression
Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models).
## Lecture - Where Linear goes Wrong
### Return of the Titanic 🚢
You've likely already explored the rich dataset that is the Titanic - let's use regression and try to predict survival with it. The data is [available from Kaggle](https://www.kaggle.com/c/titanic/data), so we'll also play a bit with [the Kaggle API](https://github.com/Kaggle/kaggle-api).
### Get data, option 1: Kaggle API
#### Sign up for Kaggle and get an API token
1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one.
2. [Follow these instructions](https://github.com/Kaggle/kaggle-api#api-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file. If you are using Anaconda, put the file in the directory specified in the instructions.
_This will enable you to download data directly from Kaggle. If you run into problems, don’t worry — I’ll give you an easy alternative way to download today’s data, so you can still follow along with the lecture hands-on. And then we’ll help you through the Kaggle process after the lecture._
#### Put `kaggle.json` in the correct location
- ***If you're using Anaconda,*** put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-api#api-credentials).
- ***If you're using Google Colab,*** upload the file to your Google Drive, and run this cell:
```
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
```
#### Install the Kaggle API package and use it to get the data
You also have to join the Titanic competition to have access to the data
```
!pip install kaggle
!kaggle competitions download -c titanic
```
### Get data, option 2: Download from the competition page
1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one.
2. [Go to the Titanic competition page](https://www.kaggle.com/c/titanic) to download the [data](https://www.kaggle.com/c/titanic/data).
### Get data, option 3: Use Seaborn
```
import seaborn as sns
train = sns.load_dataset('titanic')
```
But Seaborn's version of the Titanic dataset is not identical to Kaggle's version, as we'll see during this lesson!
### Read data
```
import pandas as pd
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
train.shape, test.shape
```
Notice that `train.csv` has one more column than `test.csv`: the target, `Survived`.
Kaggle provides the test features, but not the test labels. Instead, you submit your test predictions to Kaggle to get your test scores. Why? This is model validation best practice, makes competitions fair, and helps us learn about over- and under-fitting.
```
train.sample(n=5)
test.sample(n=5)
```
Do some data exploration.
About 62% of passengers did not survive.
```
target = 'Survived'
train[target].value_counts(normalize=True)
```
Describe the numeric columns
```
train.describe(include='number')
```
Describe the non-numeric columns
```
train.describe(exclude='number')
```
### How would we try to do this with linear regression?
We choose a few numeric features, split the data into X and y, [impute missing values](https://scikit-learn.org/stable/modules/impute.html), and fit a Linear Regression model on the train set.
```
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
features = ['Pclass', 'Age', 'Fare']
target = 'Survived'
X_train = train[features]
y_train = train[target]
X_test = test[features]
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_test_imputed = imputer.transform(X_test)
lin_reg = LinearRegression()
lin_reg.fit(X_train_imputed, y_train)
```
Let's consider a test case. What does our Linear Regression predict for a 1st class, 5 year-old, with a fare of 500?
119% probability of survival.
```
import numpy as np
test_case = np.array([[1, 5, 500]]) # Rich 5-year old in first class
lin_reg.predict(test_case)
```
Based on the Linear Regression's intercept and coefficients, it will predict probabilities greater than 100%, or less than 0%, given high enough / low enough values for the features.
```
print('Intercept', lin_reg.intercept_)
coefficients = pd.Series(lin_reg.coef_, X_train.columns)
print(coefficients.to_string())
```
### How would we do this with Logistic Regression?
The scikit-learn API is consistent, so the code is similar.
We instantiate our model (here with `LogisticRegression()` instead of `LinearRegression()`)
We use the same method to fit the model on the training data: `.fit(X_train_imputed, y_train)`
We use the same method to make a prediction for our test case: `.predict(test_case)` — but this returns different results. Regressors return continuous values, while classifiers return discrete predictions of the class label. In this binary classification problem, our discrete class labels are `0` (did not survive) or `1` (did survive).
Classifiers also have a `.predict_proba` method, which returns predicted probabilities for each class. The probabilities sum to 1.
We predict ~3% probability that our test case did not survive, and ~97% probability that our test case did survive. This is what we want and expect for our test case: a prediction of survival with high probability, but less than 100%.
```
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Prediction for rich 5 year old:', log_reg.predict(test_case))
print('Predicted probabilities for rich 5 year old:', log_reg.predict_proba(test_case))
```
Logistic Regression calculates predicted probabilities in the range 0 to 1. By default, scikit-learn makes a discrete prediction by returning whichever class has the highest predicted probability for that observation.
In the case of binary classification, this is equivalent to using a threshold of 0.5. However, we could choose a different threshold, for different trade-offs between false positives versus false negatives.
```
threshold = 0.5
probabilities = log_reg.predict_proba(X_test_imputed)[:,1]
manual_predictions = (probabilities > threshold).astype(int)
direct_predictions = log_reg.predict(X_test_imputed)
all(manual_predictions == direct_predictions)
```
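To see the effect of moving the threshold, here is a minimal sketch with made-up probabilities (illustrative values, not output of the fitted model above): lowering the threshold flips borderline cases to positive predictions, trading false negatives for more false positives.

```
import numpy as np

# Hypothetical predicted probabilities for five passengers
# (illustrative values, not from the fitted model above)
probabilities = np.array([0.10, 0.35, 0.45, 0.60, 0.90])

# Default behavior is equivalent to a 0.5 threshold
default_preds = (probabilities > 0.5).astype(int)
print(default_preds)  # [0 0 0 1 1]

# Lowering the threshold predicts survival more often:
# fewer false negatives, at the cost of more false positives
lenient_preds = (probabilities > 0.3).astype(int)
print(lenient_preds)  # [0 1 1 1 1]
```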
### How accurate is the Logistic Regression?
Scikit-learn estimators provide a convenient method, `.score`. It uses the X features to generate predictions, compares them to the y ground-truth labels, and returns the score.
For regressors, `.score` returns R^2.
For classifiers, `.score` returns Accuracy.
Our Logistic Regression model has 70% training accuracy. (This is higher than the 62% accuracy we would get with a baseline that predicts every passenger does not survive.)
```
score = log_reg.score(X_train_imputed, y_train)
print('Train Accuracy Score', score)
```
Accuracy is just the number of correct predictions divided by the total number of predictions.
For example, we can look at our first five predictions:
```
y_pred = log_reg.predict(X_train_imputed)
y_pred[:5]
```
And compare to the ground truth labels for these first five observations:
```
y_train[:5].values
```
We have four correct predictions, divided by five total predictions, for 80% accuracy.
```
correct_predictions = 4
total_predictions = 5
accuracy = correct_predictions / total_predictions
print(accuracy)
```
scikit-learn's `accuracy_score` function works the same way and returns the same result.
```
from sklearn.metrics import accuracy_score
accuracy_score(y_train[:5], y_pred[:5])
```
We don't want to just score our model on the training data.
We cannot calculate a test accuracy score ourselves in this notebook, because Kaggle does not provide test labels.
We could split the train data into train and validation sets. However, we don't have many observations. (Fewer than 1,000.)
As another alternative, we can use cross-validation:
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(log_reg, X_train_imputed, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
```
We can see a range of scores:
```
scores = pd.Series(scores)
scores.min(), scores.mean(), scores.max()
```
To learn more about Cross-Validation, see these links:
- https://scikit-learn.org/stable/modules/cross_validation.html
- https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html
- https://github.com/LambdaSchool/DS-Unit-2-Sprint-3-Classification-Validation/blob/master/module2-baselines-validation/model-validation-preread.md#what-is-cross-validation
### What's the equation for Logistic Regression?
https://en.wikipedia.org/wiki/Logistic_function
https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study
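In symbols, logistic regression passes a linear combination of the features through the logistic (sigmoid) function:

$$
P(y = 1 \mid x) = \sigma(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}
$$

The code below evaluates exactly this expression using the fitted intercept and coefficients.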
```
print('Intercept', log_reg.intercept_[0])
coefficients = pd.Series(log_reg.coef_[0], X_train.columns)
print(coefficients.to_string())
# The logistic sigmoid "squishing" function,
# implemented to work with numpy arrays
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(np.dot(log_reg.coef_, test_case.T) + log_reg.intercept_)
```
Or we can write the code with the `@` operator instead of numpy's dot product function
```
sigmoid(log_reg.coef_ @ test_case.T + log_reg.intercept_)
```
Either way, we get the same result as our scikit-learn Logistic Regression
```
log_reg.predict_proba(test_case)
```
## Feature Engineering
Get the [Category Encoder](http://contrib.scikit-learn.org/categorical-encoding/) library
If you're running on Google Colab:
```
!pip install category_encoders
```
If you're running locally with Anaconda:
```
!conda install -c conda-forge category_encoders
```
#### Notice that Seaborn's version of the Titanic dataset has more features than Kaggle's version
```
import seaborn as sns
sns_titanic = sns.load_dataset('titanic')
print(sns_titanic.shape)
sns_titanic.head()
```
#### We can make the `adult_male` and `alone` features, and we can extract features from `Name`
```
def make_features(X):
X = X.copy()
X['adult_male'] = (X['Sex'] == 'male') & (X['Age'] >= 16)
X['alone'] = (X['SibSp'] == 0) & (X['Parch'] == 0)
X['last_name'] = X['Name'].str.split(',').str[0]
X['title'] = X['Name'].str.split(',').str[1].str.split('.').str[0]
return X
train = make_features(train)
test = make_features(test)
train.head()
train['adult_male'].value_counts()
train['alone'].value_counts()
train['title'].value_counts()
train.describe(include='number')
train.describe(exclude='number')
```
### Category Encoders!
http://contrib.scikit-learn.org/categorical-encoding/onehot.html
End-to-end example
```
import category_encoders as ce
pd.set_option('display.max_columns', 1000)
features = ['Pclass', 'Age', 'Fare', 'Sex', 'Embarked', 'adult_male', 'alone', 'title']
target = 'Survived'
X_train = train[features]
X_test = test[features]
y_train = train[target]
# Kaggle's test set has no labels, so we cannot create y_test here
encoder = ce.OneHotEncoder(use_cat_names=True)
imputer = SimpleImputer()
log_reg = LogisticRegression(solver='lbfgs', max_iter=1000)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_test_imputed = imputer.transform(X_test_encoded)
scores = cross_val_score(log_reg, X_train_imputed, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
```
Here's what the one-hot encoded data looks like
```
X_train_encoded.sample(n=5)
```
The cross-validation accuracy scores improve with the additional features
```
%matplotlib inline
import matplotlib.pyplot as plt
log_reg.fit(X_train_imputed, y_train)
coefficients = pd.Series(log_reg.coef_[0], X_train_encoded.columns)
plt.figure(figsize=(10,10))
coefficients.sort_values().plot.barh(color='grey');
```
### Scaler
https://scikit-learn.org/stable/modules/preprocessing.html#scaling-features-to-a-range
End-to-end example
```
from sklearn.preprocessing import MinMaxScaler
encoder = ce.OneHotEncoder(use_cat_names=True)
imputer = SimpleImputer()
scaler = MinMaxScaler()
log_reg = LogisticRegression(solver='lbfgs', max_iter=1000)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_test_imputed = imputer.transform(X_test_encoded)
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_test_scaled = scaler.transform(X_test_imputed)
scores = cross_val_score(log_reg, X_train_scaled, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
```
Now all the features have a min of 0 and a max of 1
```
pd.DataFrame(X_train_scaled).describe()
```
The model coefficients change with scaling
```
log_reg.fit(X_train_scaled, y_train)
coefficients = pd.Series(log_reg.coef_[0], X_train_encoded.columns)
plt.figure(figsize=(10,10))
coefficients.sort_values().plot.barh(color='grey');
```
### Pipeline
https://scikit-learn.org/stable/modules/compose.html#pipeline
```
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
MinMaxScaler(),
LogisticRegression(solver='lbfgs', max_iter=1000)
)
scores = cross_val_score(pipe, X_train, y_train, cv=10)
print('Cross-Validation Accuracy Scores', scores)
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
submission = test[['PassengerId']].copy()
submission['Survived'] = y_pred
submission.to_csv('kaggle-submission-001.csv', index=False)
```
## Assignment: real-world classification
We're going to check out a larger dataset - the [FMA Free Music Archive data](https://github.com/mdeff/fma). It has a selection of CSVs with metadata and calculated audio features that you can load and try to use to classify genre of tracks. To get you started:
### Get and unzip the data
#### Google Colab
```
!wget https://os.unil.cloud.switch.ch/fma/fma_metadata.zip
!unzip fma_metadata.zip
```
#### Windows
- Download the [zip file](https://os.unil.cloud.switch.ch/fma/fma_metadata.zip)
- You may need to use [7zip](https://www.7-zip.org/download.html) to unzip it
#### Mac
- Download the [zip file](https://os.unil.cloud.switch.ch/fma/fma_metadata.zip)
- You may need to use [p7zip](https://superuser.com/a/626731) to unzip it
### Look at first 4 lines of raw `tracks.csv` file
```
!head -n 4 fma_metadata/tracks.csv
```
### Read with pandas
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
```
tracks = pd.read_csv('fma_metadata/tracks.csv', header=[0,1], index_col=0)
tracks.head()
```
### More data prep
Get value counts of the target. (The syntax is different because the header has two levels; it's a "MultiIndex.")
The target has multiple classes, and many missing values.
```
tracks['track']['genre_top'].value_counts(normalize=True, dropna=False)
```
We can't do supervised learning where targets are missing. (In other words, we can't do supervised learning without supervision.)
So, only keep observations where the target is not null.
```
target_not_null = tracks['track']['genre_top'].notnull()
tracks = tracks[target_not_null]
```
Load `features.csv`: "common features extracted from the audio with [librosa](https://librosa.github.io/librosa/)"
It has 3 levels of columns!
```
features = pd.read_csv('fma_metadata/features.csv', header=[0,1,2], index_col=0)
features.head()
```
I want to [drop a level](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.droplevel.html) here from the audio features dataframe, so it has the same number of levels (2) as the tracks metadata dataframe, so that I can better merge the two together.
```
features.columns = features.columns.droplevel(level=2)
features.head()
```
Merge the metadata with the audio features, on track id (the index for both dataframes).
```
df = pd.merge(tracks, features, left_index=True, right_index=True)
```
And drop a level of columns again, because dealing with MultiIndex is hard
```
df.columns = df.columns.droplevel()
```
This is now a pretty big dataset. Almost 50,000 rows, over 500 columns, and over 200 megabytes in RAM.
```
print(df.shape)
df.info()
```
### Fit Logistic Regression!
```
from sklearn.model_selection import train_test_split
y = df['genre_top']
X = df.select_dtypes('number').drop(columns=['longitude', 'latitude'])
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.50, test_size=0.50,
random_state=42, stratify=y)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
model = LogisticRegression(solver='lbfgs', multi_class='auto')
model.fit(X_train, y_train)
```
Accuracy is 37%, which sounds bad, BUT ...
```
model.score(X_test, y_test)
```
... remember we have 16 classes, and the majority class (Rock) occurs only 29% of the time, so 37% accuracy beats both the majority-class baseline and random guessing for this problem
```
y.value_counts(normalize=True)
```
This dataset is bigger than many you've worked with so far, and while it should fit in Colab, it can take a while to run. That's part of the challenge!
Your tasks:
- Clean up the variable names in the dataframe
- Use logistic regression to fit a model predicting (primary/top) genre
- Inspect, iterate, and improve your model
- Answer the following questions (written, ~paragraph each):
- What are the best predictors of genre?
- What information isn't very useful for predicting genre?
- What surprised you the most about your results?
*Important caveats*:
- This is going to be difficult data to work with - don't let the perfect be the enemy of the good!
- Be creative in cleaning it up - if the best way you know how to do it is download it locally and edit as a spreadsheet, that's OK!
- If the data size becomes problematic, consider sampling/subsetting, or [downcasting numeric datatypes](https://www.dataquest.io/blog/pandas-big-data/).
- You do not need perfect or complete results - just something plausible that runs, and that supports the reasoning in your written answers
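As a sketch of the downcasting suggestion above (the toy frame here stands in for the real features table, which is mostly `float64` columns), converting floats to `float32` halves their memory footprint with usually more than enough precision for model fitting:

```
import numpy as np
import pandas as pd

# Toy frame standing in for the FMA audio features (~500 float64 columns)
df_demo = pd.DataFrame(np.random.rand(1000, 4), columns=list('abcd'))
before = df_demo.memory_usage(deep=True).sum()

# Downcast every float column from float64 to float32
float_cols = df_demo.select_dtypes('float').columns
downcast = df_demo.astype({col: 'float32' for col in float_cols})
after = downcast.memory_usage(deep=True).sum()

print(f'{before} bytes -> {after} bytes')
```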
If you find that fitting a model to classify *all* genres isn't very good, it's totally OK to limit to the most frequent genres, or to try combining or clustering genres as a preprocessing step. Even then, there will be limits to how good a model can be with just this metadata - if you really want to train an effective genre classifier, you'll have to involve the other data (see stretch goals).
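One way to sketch the combine-genres idea: keep the `top_n` most frequent labels and lump the rest into a single `'Other'` class. The data and the cutoff below are made up for illustration; tune `top_n` against validation accuracy.

```
import pandas as pd

# Hypothetical stand-in for the genre_top column
genres = pd.Series(['Rock', 'Rock', 'Electronic', 'Folk', 'Rock',
                    'Spoken', 'Electronic', 'Electronic', 'Folk', 'Rock'])

top_n = 2  # arbitrary cutoff for this sketch
top_genres = genres.value_counts().nlargest(top_n).index

# Everything outside the top_n classes becomes one 'Other' class
collapsed = genres.where(genres.isin(top_genres), other='Other')
print(collapsed.value_counts())  # Rock 4, Electronic 3, Other 3
```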
This is real data - there is no "one correct answer", so you can take this in a variety of directions. Just make sure to support your findings, and feel free to share them as well! This is meant to be practice for dealing with other "messy" data, a common task in data science.
## Resources and stretch goals
- Check out the other .csv files from the FMA dataset, and see if you can join them or otherwise fit interesting models with them
- [Logistic regression from scratch in numpy](https://blog.goodaudience.com/logistic-regression-from-scratch-in-numpy-5841c09e425f) - if you want to dig in a bit more to both the code and math (also takes a gradient descent approach, introducing the logistic loss function)
- Create a visualization to show predictions of your model - ideally show a confidence interval based on error!
- Check out and compare classification models from scikit-learn, such as [SVM](https://scikit-learn.org/stable/modules/svm.html#classification), [decision trees](https://scikit-learn.org/stable/modules/tree.html#classification), and [naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html). The underlying math will vary significantly, but the API (how you write the code) and interpretation will actually be fairly similar.
- Sign up for [Kaggle](https://kaggle.com), and find a competition to try logistic regression with
- (Not logistic regression related) If you enjoyed the assignment, you may want to read up on [music informatics](https://en.wikipedia.org/wiki/Music_informatics), which is how those audio features were actually calculated. The FMA includes the actual raw audio, so (while this is more of a longterm project than a stretch goal, and won't fit in Colab) if you'd like you can check those out and see what sort of deeper analysis you can do.
<h2> Import Libraries</h2>
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
```
## Load the Data
The Boston house-price dataset is one of the datasets bundled with scikit-learn, so it does not require downloading any file from an external website. The code below loads the Boston dataset. (Note: `load_boston` was deprecated and removed in scikit-learn 1.2, so this cell requires an older scikit-learn version.)
```
data = load_boston()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target'] = data.target
df.head()
```
<h2> Remove Missing or Impute Values</h2>
If you want to build models with your data, null values are (almost) never allowed. It is important to always see how many samples have missing values and for which columns.
```
# Look at the shape of the dataframe
df.shape
# There are no missing values in the dataset
df.isnull().sum()
```
<h2> Arrange Data into Features Matrix and Target Vector </h2>
What we are predicting is the continuous column "target", which is the median value of owner-occupied homes in $1000s.
```
X = df.loc[:, ['RM', 'LSTAT', 'PTRATIO']]
y = df.loc[:, 'target']
```
## Splitting Data into Training and Test Sets
```
# Original random state is 2
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
```
## Train Test Split Visualization
A relatively new feature of pandas is conditional formatting. https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html
```
X_train = pd.DataFrame(X_train, columns=['RM', 'LSTAT', 'PTRATIO'])
X_test = pd.DataFrame(X_test, columns=['RM', 'LSTAT', 'PTRATIO'])
X_train['split'] = 'train'
X_test['split'] = 'test'
X_train
X_train['target'] = y_train
X_test['target'] = y_test
fullDF = pd.concat([X_train, X_test], axis = 0, ignore_index=False)
fullDF.head(10)
len(fullDF.index)
len(np.unique(fullDF.index))
fullDFsplit = fullDF.copy()
fullDF = fullDF.drop(columns = ['split'])
def highlight_color(s, fullDFsplit):
'''
    Color each cell by its train/test split and feature/target role.
'''
colorDF = s.copy()
colorDF.loc[fullDFsplit['split'] == 'train', ['RM', 'LSTAT', 'PTRATIO']] = 'background-color: #40E0D0'
colorDF.loc[fullDFsplit['split'] == 'test', ['RM', 'LSTAT', 'PTRATIO']] = 'background-color: #00FFFF'
# #9370DB
# FF D7 00
colorDF.loc[fullDFsplit['split'] == 'train', ['target']] = 'background-color: #FFD700'
# EE82EE
# BD B7 6B
colorDF.loc[fullDFsplit['split'] == 'test', ['target']] = 'background-color: #FFFF00'
return(colorDF)
temp = fullDF.sort_index().loc[0:9,:].style.apply(lambda x: highlight_color(x,pd.DataFrame(fullDFsplit['split'])), axis = None)
temp.set_properties(**{'border-color': 'black',
'border': '1px solid black'})
```
<h3>Train test split key</h3>
```
# Train test split key
temp = pd.DataFrame(data = [['X_train','X_test','y_train','y_test']]).T
temp
def highlight_mini(s):
'''
    Color each entry of the key by the split/role it represents.
'''
colorDF = s.copy()
# colorDF.loc[0, [0]] = 'background-color: #40E0D0'
# train features
colorDF.loc[0, [0]] = 'background-color: #40E0D0'
# test features
colorDF.loc[1, [0]] = 'background-color: #00FFFF'
# train target
colorDF.loc[2, [0]] = 'background-color: #FFD700'
# test target
colorDF.loc[3, [0]] = 'background-color: #FFFF00'
return(colorDF)
temp2 = temp.sort_index().style.apply(lambda x: highlight_mini(x), axis = None)
temp2.set_properties(**{'border-color': 'black',
'border': '1px solid black',
})
```
After that I was lazy and used powerpoint to make that graph.
# Assignment 5: Exploring Hashing
In this exercise, we will begin to explore the concept of hashing and how it relates to various object containers with respect to computational complexity. We will begin with the base code described in Chapter 5 of Grokking Algorithms (Bhargava 2016).
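As a preview of the mechanism (a simplified separate-chaining sketch, not CPython's actual table layout): a hash function maps each key to a bucket index, so a membership check inspects a single bucket instead of scanning the whole collection — average O(1) versus O(n) for an unsorted list.

```
# Minimal separate-chaining hash table sketch (illustrative only)
def bucket_index(key, num_buckets):
    return hash(key) % num_buckets

num_buckets = 8
buckets = [[] for _ in range(num_buckets)]

for name in ['abcdefghij', 'jihgfedcba', 'aabbccddee']:
    buckets[bucket_index(name, num_buckets)].append(name)

def contains(key):
    # Only one bucket is scanned, regardless of total item count
    return key in buckets[bucket_index(key, num_buckets)]

print(contains('abcdefghij'))  # True
print(contains('zzzzzzzzzz'))  # False
```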
## Deliverables:
We will again generate random data for this assignment.
1) Create a list of 100,000 names (randomly pick 10 characters e.g. abcdefghij, any order is fine, just make sure there are no duplicate names) and store those names in an unsorted list.
2) Now store the above names in a set
3) Make a separate copy of the list and sort it using any sorting algorithm that you have learned so far, and justify why you are using it. Capture the time it takes to sort the list.
4) Pick the names from the unsorted array that are at 10,000th, 30,000th, 50,000th, 70,000th, 90,000th, and 100,000th positions, and store them in a temporary array somewhere for later use.
5) Search for these six names in each of the collections. Use linear search for the unsorted list, binary search for the sorted list, and use the set.remove() (or the in keyword) builtin for the set. Capture the time it takes using all three algorithms.
6) Create a table and plot comparing times of linear search, binary search and set lookup for the six names using Python (matplotlib or Seaborn) or JavaScript (D3) visualization tools to illustrate algorithm performance.
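The `random_string` helper below samples each name independently, so duplicates are possible (though unlikely with 26^10 candidates); collecting into a set makes deliverable 1's no-duplicates requirement explicit. A sketch (the helper name `unique_random_names` is ours, not part of the assignment):

```
import random
import string

def unique_random_names(str_length, num_strings, seed=8):
    rng = random.Random(seed)
    names = set()
    # Keep drawing until we have the requested number of distinct names;
    # with 26**10 possible strings, collisions are rare
    while len(names) < num_strings:
        names.add(''.join(rng.choice(string.ascii_lowercase)
                          for _ in range(str_length)))
    return list(names)

names = unique_random_names(10, 1000)
print(len(names), len(set(names)))  # 1000 1000
```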
### Prepare an executive summary of your results, referring to the table and figures you have generated. Explain how your results relate to big O notation. Describe your results in language that management can understand. This summary should be included as text paragraphs in the Jupyter notebook. Explain how the algorithm works and why it is a useful to data engineers.
# A. Setup: Library imports, Function construction and Array generation
```
import numpy as np
import pandas as pd
import seaborn as sns
import time
import random
import string
RANDOM_SEED = 8 #sets random seed
def random_string(str_length, num_strings):
str_list = [] #instantiates an empty list to hold the strings
for i in range(0,num_strings): #loop to generate the specified number of strings
str_list.append(''.join(random.choice(string.ascii_lowercase) for m in range(str_length))) #generates a string of the defined character length
return str_list #returns the string list
def MergeSort(arr):
if len(arr) > 1:
mid = len(arr)//2 # gets middle
Left = arr[:mid] #splits elements left of middle
Right = arr[mid:] #splits elements right of middle
MergeSort(Left) #recursive call on left
MergeSort(Right) #recursive call on right
#set all indicies to 0
i=0
k=0
j=0
#below checks the values for if elements are sorted, if unsorted: swap. Merge to the original list
while i < len(Left) and j < len(Right):
if Left[i] < Right[j]:
arr[k] = Left[i] #makes k index of arr left[i] if it's less than Right[j]
i += 1 #increments i (the left index)
else:
arr[k] = Right[j] #if right value is lss than left, makes arr[k] the value of right and increments the right index
j += 1 #increments j
k += 1 #increments the arr index
while i < len(Left): #checks to see if reamaining elements in left (less than mid), if so adds to arr at k index and increments i and k
arr[k] = Left[i]
i += 1 #increments i
k += 1 #increments k
while j < len(Right): #checks to see if remaining elements in right (greater than mid), if so adds to arr at k index and increments j and k.
arr[k] = Right[j]
j += 1 #increments j
k += 1 #increments k
return arr
def Container(arr, fun):
objects = [] #instantiates an empty list to collect the returns
times = [] #instantiates an empty list to collect times for each computation
start= time.perf_counter() #collects the start time
obj = fun(arr) # applies the function to the arr object
end = time.perf_counter() # collects end time
duration = (end-start)* 1E3 #converts to milliseconds
objects.append(obj)# adds the returns of the functions to the objects list
times.append(duration) # adds the duration for computation to list
return objects, duration
#function SimpleSearch scans the array from index 0, incrementing the index after each unsuccessful equality check against the item. It returns the elapsed search time in milliseconds.
def SimpleSearch(array, item):
i = 0
guess = array[i]
start = time.perf_counter() # gets fractional seconds
while item != guess:
i += 1
guess = array[i] #increments low
end = time.perf_counter() # gets fractional seconds
    duration = end - start # calculates difference in fractional seconds
MilliElapsed = duration*1E3
# returns a tuple which contains search time in milliseconds and register of the guesses
return MilliElapsed
#function BinarySearch repeatedly guesses the midpoint of the current index range, narrowing the range while the low index is at most the high index. When the guess converges to the item of interest, the elapsed time in milliseconds is returned (a register of guesses is kept for debugging).
# binary search for the sorted list
def BinarySearch(array, item):
i = 0
length = len(array)-1
low = array[i] #finds lowest value in array
high = array[length] #finds highest value in array
register = [] # creates empty register of increments; for debug purposes
start = time.perf_counter() # gets fractional seconds
while i <= length:
mid= (i + length)/2 # calculates midpoint of the range
guess = int(mid)
register.append(array[guess]) # appends increments to register; for debug purposes
if array[guess] == item:
end = time.perf_counter() #datetime.utcnow()
duration = end - start
MilliElapsed = duration*1E3
#print('the string is found for:', n)
#returns a tuple which contains search time in milliseconds and register of the guesses
return MilliElapsed #, register
elif array[guess] > item: ##### loop for if guess is higher than the item
high = array[guess] #resets high to the item at the guess index
low = array[i] #resets low to the item at the i index (typically index 0)
length = guess#resets length to guess
#print('The guess went too high!', n, i, array[guess])
elif array[guess] < item: ######loop for if guess is lower the the item
low = array[guess] #reset low to the index of guess
length = len(array)-1 #get the length of the array to pass to high
high = array[length] #reset high to be the end of the list
i = guess+1 #make sure we increment i so that it can become the end of the list, otherwise you are going to have a bad time!
#print('The guess went too low!',n, i, high, length, low)
str100000 = random_string(str_length=10, num_strings=100000) #generates random strings
str100000_copy = str100000[:] #creates a copy of the random strings
start = time.perf_counter()
MergeSort(str100000)
end = time.perf_counter()
duration = end - start
MS_time = duration*1E3
positions = [9999, 29999, 49999, 69999, 89999, 99999] #positions of the names (needles)
needles = [str100000_copy[i] for i in positions] #collects the needles from the haystack
str100000_container =Container(str100000, MergeSort) #uses mergesort to sort the strings.
temp =str100000_container[0]
str100000_sorted =temp[0]
set_str100000 = set(str100000_copy)
print('the needles are:' , needles)
print('the length of the set is:' ,len(set_str100000))
print('the length of the unsorted copy is:' , len(str100000_copy))
print('the length of the sorted list (mergesort) is:', len(str100000_sorted))
```
# B. Searching
Search for these six names in each of the collections. Use linear search for the unsorted list, binary search for the sorted list, and use the set.remove() (or the in keyword) builtin for the set. Capture the time it takes using all three algorithms.
### B1. Linear Search of the unsorted list
```
#linear search for the unsorted list
Linear_times = []
for n in needles:
temp_time = SimpleSearch(str100000_copy, n)
Linear_times.append(temp_time)
print('The time required for each element in the unsorted array using linear search is:', Linear_times)
```
### B2. Binary Search of the sorted list
```
Binary_times = []
for n in needles:
temp_time = BinarySearch(str100000, n)
Binary_times.append(temp_time)
print('The time required for each element in the sorted array using binary search is:', Binary_times)
```
### B3. Set Removal for the Set
```
set_needles = set(needles)
set_times = {}
for needle in set_needles:
start = time.perf_counter()
needle in set_str100000 # membership test with the in keyword, as the task specifies
end = time.perf_counter()
duration = end - start
MilliElapsed = duration*1E3
set_times[needle] = MilliElapsed
set_times
```
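For intuition on why the set lookups should be near-constant, here is a minimal self-contained sketch comparing `in` on a list versus a set. The `random_string` helper below is a stand-alone illustration, not the notebook's function of the same name:

```python
import random
import string
import time

def random_string(n=10):
    # one random lowercase string of length n
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(n))

random.seed(0)
haystack = [random_string() for _ in range(100000)]
haystack_set = set(haystack)
needle = haystack[50000]  # a string known to be in both collections

start = time.perf_counter()
found_in_list = needle in haystack       # O(n): scans element by element
list_ms = (time.perf_counter() - start) * 1e3

start = time.perf_counter()
found_in_set = needle in haystack_set    # O(1) average: a single hash lookup
set_ms = (time.perf_counter() - start) * 1e3

print(found_in_list, found_in_set)
```

On a single run the measured times are noisy, but the list scan has to walk tens of thousands of elements while the set does one hash probe.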
# C. Summary
## Table 1: Search times in milliseconds for strings within an array of 100000 elements (each string 10 random lowercase alpha characters)
```
Strings = {
'String': [needles[0], needles[1],needles[2], needles[3],needles[4], needles[5]],
'PostionInSortedArray': [10000, 30000, 50000, 70000, 90000, 100000],
'LinearSearch(Unsorted)': [Linear_times[0], Linear_times[1], Linear_times[2], Linear_times[3], Linear_times[4], Linear_times[5]],
'BinarySearch(Sorted)': [Binary_times[0], Binary_times[1], Binary_times[2], Binary_times[3], Binary_times[4], Binary_times[5]],
'SetIntersection(Unsorted)': [set_times.get(needles[0]), set_times.get(needles[1]), set_times.get(needles[2]), set_times.get(needles[3]), set_times.get(needles[4]), set_times.get(needles[5])]
}
string_df = pd.DataFrame.from_dict(Strings)
string_df['Binary+Sort'] = string_df['BinarySearch(Sorted)']+MS_time
string_df
```
## Table 2: Times for each algorithm, reshaped to long format for plotting
```
long_df = string_df.melt(id_vars=['String', 'PostionInSortedArray'],
value_vars=['LinearSearch(Unsorted)', 'BinarySearch(Sorted)', 'SetIntersection(Unsorted)'],var_name='Algo', value_name='Time(ms)')
```
## Figure 1: Search Algorithm Time Complexity
```
plot = sns.barplot(data=long_df, x='PostionInSortedArray', hue='Algo', y='Time(ms)')
plot.set_yscale('log') # log scale so the fast algorithms remain visible
```
## Figure 2: Merge and Quick Sort time complexity
# Discussion
Three sorting algorithms were tested for their time complexity in sorting lists of varying sizes of string elements. Each string element was randomly populated with 50 lowercase alphabetic characters, and the number of elements in the list was varied: five lists containing 200, 400, 600, 800, and 1000 strings were sorted via BubbleSort, MergeSort, and QuickSort. The times required to perform each sort (in milliseconds) are collected and displayed in Table 1. By far the most inefficient sorting algorithm demonstrated here is bubble sort, whose running time is shown graphically (Figure 1) to grow at an n\*n, or O(n^2), rate. This makes sense for bubble sort, as it compares each of n elements against n elements.
Alternatively, the other two methodologies use a divide-and-conquer strategy. With QuickSort, the list of strings is divided into two arrays (greater and less) containing the values greater than or less than a pivot value. MergeSort achieves a similar split by dividing the list into two arrays (left and right) on either side of the center element. In both cases recursion is used, as the QuickSort and MergeSort functions are called on the subarrays. The result of this divide-and-conquer strategy is a complexity of n\*logn, or O(n\*logn) in big O notation. A direct comparison of the times required for sorting the lists with these two methodologies is shown in Figure 2.
In rare instances QuickSort may also dramatically underperform, because the pivot element is always selected as the first item of the array (or subarray). If the input were already sorted largest to smallest, each partition would peel off only one element rather than splitting the list, giving the same n\*n, or O(n^2), complexity. It is interesting that QuickSort seems to perform slightly better than MergeSort here, but both are quite efficient. Because of the splitting methodology employed by MergeSort (the beginning array and its subarrays are always split in half by size), there is no risk of deviating from the O(n\*logn) complexity. MergeSort is therefore recommended when a consistently predictable running time matters.
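One common mitigation for the sorted-input worst case described above is to choose the pivot at random. A minimal sketch of the idea (this is an illustration, not the QuickSort used for the timings above):

```python
import random

def quicksort_random_pivot(items):
    # Returns a new sorted list. A random pivot makes the O(n^2)
    # behaviour on already-sorted (or reverse-sorted) input vanishingly rare.
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]          # removing equals guarantees progress
    greater = [x for x in items if x > pivot]
    return quicksort_random_pivot(less) + equal + quicksort_random_pivot(greater)

print(quicksort_random_pivot([3, 1, 2]))  # [1, 2, 3]
```

Collecting the elements equal to the pivot separately also keeps duplicate-heavy inputs from degrading the recursion depth.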
# ------------------------ END ------------------------
```
# binary search for the sorted list
def BinarySearch(array, item):
i = 0
length = len(array)-1
low = array[i] #finds lowest value in array
high = array[length] #finds highest value in array
register = [] # creates empty register of increments; for debug purposes
start = time.perf_counter() # gets fractional seconds
while i <= length:
mid = (i + length) / 2 # midpoint of the current index range
guess = int(mid)
register.append(array[guess]) # appends increments to register; for debug purposes
if array[guess] == item:
end = time.perf_counter() #datetime.utcnow()
duration = end - start
MilliElapsed = duration*1E3
#print('the string is found for:', n)
#returns a tuple which contains search time in milliseconds and register of the guesses
return MilliElapsed #, register
elif array[guess] > item:
high = array[guess]
low = array[i]
length = guess - 1 # shrink the upper bound below the guess so the range always narrows
#print('The guess went too high!', n, i, array[guess])
elif array[guess] < item:
low = array[guess]
length = len(array)-1
high = array[length]
i = guess+1
#print('The guess went too low!',n, i, high, length, low)
else:
print('item not found!')
```
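For comparison, an index-based binary search over a sorted list can also be written with the standard library's `bisect` module; a short sketch, assuming the input is already sorted:

```python
from bisect import bisect_left

def binary_search_index(sorted_items, item):
    # Returns the index of item in sorted_items, or -1 if it is absent.
    i = bisect_left(sorted_items, item)  # leftmost insertion point for item
    if i < len(sorted_items) and sorted_items[i] == item:
        return i
    return -1

print(binary_search_index(['ant', 'bee', 'cat'], 'bee'))  # 1
print(binary_search_index(['ant', 'bee', 'cat'], 'dog'))  # -1
```

Working purely with indices (rather than tracking the low/high values themselves) sidesteps the bound-narrowing bugs that hand-rolled versions are prone to.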
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
- In this notebook, you will implement all the functions required to build a deep neural network.
- In the next assignment, you will use these functions to build a deep neural network for image classification.
**After this assignment you will be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
import inspect
import re
def describe(arg):
frame = inspect.currentframe()
callerframeinfo = inspect.getframeinfo(frame.f_back)
try:
context = inspect.getframeinfo(frame.f_back).code_context
caller_lines = ''.join([line.strip() for line in context])
m = re.search(r'describe\s*\((.+?)\)$', caller_lines)
if m:
caller_lines = m.group(1)
position = str(callerframeinfo.filename) + "@" + str(callerframeinfo.lineno)
# Add additional info such as array shape or string length
additional = ''
if hasattr(arg, "shape"):
additional += "[shape={}]".format(arg.shape)
elif hasattr(arg, "__len__"): # shape includes length information
additional += "[len={}]".format(len(arg))
# Use str() representation if it is printable
str_arg = str(arg)
str_arg = str_arg if str_arg.isprintable() else repr(arg)
print(position, "describe(" + caller_lines + ") = ", end='')
print(arg.__class__.__name__ + "(" + str_arg + ")", additional)
else:
print("Describe: couldn't find caller context")
finally:
del frame
del callerframeinfo
```
## 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module (shown in purple in the figure below).
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- We give you the ACTIVATION function (relu/sigmoid).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module (denoted in red in the figure below).
- Complete the LINEAR part of a layer's backward propagation step.
- We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> **Figure 1**</center></caption><br>
**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
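The caching idea can be sketched for a single linear unit: the forward pass stores its inputs, and the backward pass reuses them to form the gradients. A minimal NumPy illustration (separate from the graded functions in this notebook):

```python
import numpy as np

def tiny_linear_forward(A_prev, W, b):
    Z = W @ A_prev + b              # linear step
    cache = (A_prev, W, b)          # stored now, needed later for the gradients
    return Z, cache

def tiny_linear_backward(dZ, cache):
    A_prev, W, b = cache            # retrieve the cached forward values
    m = A_prev.shape[1]
    dW = (dZ @ A_prev.T) / m        # gradient w.r.t. W uses the cached A_prev
    db = dZ.sum(axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ              # gradient w.r.t. the previous activation uses the cached W
    return dA_prev, dW, db

A_prev = np.ones((3, 2))            # 3 input units, 2 examples
W = np.ones((1, 3))
b = np.zeros((1, 1))
Z, cache = tiny_linear_forward(A_prev, W, b)
dA_prev, dW, db = tiny_linear_backward(np.ones_like(Z), cache)
print(Z.shape, dW.shape)            # (1, 2) (1, 3)
```

Without the cache, the backward pass would have to recompute (or could not recover) `A_prev` and `W` for each layer.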
## 3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
### 3.1 - 2-layer Neural Network
**Exercise**: Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.
- Use zero initialization for the biases. Use `np.zeros(shape)`.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
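A quick NumPy check of this broadcasting behaviour, using arbitrary example values:

```python
import numpy as np

# Arbitrary example values for W, X, and b (b is a column vector)
W = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
X = np.eye(3)                        # identity, so W @ X == W
b = np.array([[10.], [20.], [30.]])  # shape (3, 1)

# b has shape (3, 1): NumPy broadcasts it across every column of W @ X,
# adding the same per-row constant to each example, as in equation (3)
out = W @ X + b
print(out)
```

Each row of the result is the corresponding row of `W @ X` shifted by that row's entry of `b`.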
**Exercise**: Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
## 4 - Forward propagation module
### 4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
**Exercise**: Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
### 4.3 - L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
**Exercise**: Implement the forward propagation of the above model.
**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Tips**:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# // is floor division: integer division rounded toward negative infinity
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
```
<table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
## 5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1/m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1-Y, np.log(1-AL)))
# An equivalent form uses np.dot, since everything is summed anyway
# (note the parentheses: -1/m must scale both terms):
# cost = -1/m * (np.dot(Y, np.log(AL.T)) + np.dot(1 - Y, np.log(1 - AL.T)))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
describe(compute_cost(AL, Y))
```
**Expected Output**:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
## 6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
### 6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> **Figure 4** </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
**Exercise**: Use the 3 formulas above to implement linear_backward().
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1/m * np.dot(dZ, A_prev.T)
db = 1/m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
### 6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**.
To help you implement `linear_activation_backward`, we provided two backward functions:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$
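These two helpers are provided for you in the assignment, but a minimal sketch of what they might look like — assuming `activation_cache` simply stores the `Z` from the forward pass — is:

```python
import numpy as np

def sigmoid_backward(dA, activation_cache):
    """Sketch: for s = sigmoid(Z), g'(Z) = s * (1 - s)."""
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

def relu_backward(dA, activation_cache):
    """Sketch: ReLU passes the gradient through only where Z > 0."""
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ
```

In the graded notebook these functions are already provided, so the sketch is for illustration only — do not redefine them there.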
**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
dAL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected output with sigmoid:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
**Expected output with relu:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
### 6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> **Figure 5** : Backward pass </center></caption>
**Initializing backpropagation**:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
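For reference, this expression follows from differentiating the cross-entropy cost for a single example,
$$\mathcal{L}(A^{[L]}, Y) = -\big(Y \log A^{[L]} + (1-Y)\log(1-A^{[L]})\big),$$
with respect to $A^{[L]}$:
$$\frac{\partial \mathcal{L}}{\partial A^{[L]}} = -\frac{Y}{A^{[L]}} + \frac{1-Y}{1-A^{[L]}} = -\left(\frac{Y}{A^{[L]}} - \frac{1-Y}{1-A^{[L]}}\right),$$
which is exactly the `np.divide` expression above.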
You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.
**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L-1] # Caches is 0-indexed
grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] \
= linear_activation_backward(dAL, current_cache, "sigmoid")
### END CODE HERE ###
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
# caches is 0-indexed: caches[l] holds the forward cache of layer l+1
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+1)], caches[l], "relu")
grads["dA" + str(l)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
```
**Expected Output**
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
### 6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
learning_rate -- step size used in the gradient descent update
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] -= np.multiply(learning_rate, grads["dW" + str(l+1)])
parameters["b" + str(l+1)] -= np.multiply(learning_rate, grads["db" + str(l+1)])
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td > W1 </td>
<td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]] </td>
</tr>
<tr>
<td > b1 </td>
<td > [[-0.04659241]
[-1.28888275]
[ 0.53405496]] </td>
</tr>
<tr>
<td > W2 </td>
<td > [[-0.55569196 0.0354055 1.32964895]]</td>
</tr>
<tr>
<td > b2 </td>
<td > [[-0.84610769]] </td>
</tr>
</table>
## 7 - Conclusion
Congrats on implementing all the functions required for building a deep neural network!
We know it was a long assignment but going forward it will only get better. The next part of the assignment is easier.
In the next assignment you will put all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images!
# Demo: RAIL Evaluation
The purpose of this notebook is to demonstrate the application of the metrics scripts to be used on the photo-z PDF catalogs produced by the PZ working group. The first implementation of the _evaluation_ module is based on the refactoring of the code used in [Schmidt et al. 2020](https://arxiv.org/pdf/2001.03621.pdf), available on Github repository [PZDC1paper](https://github.com/LSSTDESC/PZDC1paper).
To run this notebook, you must install qp and have the notebook in the same directory as `utils.py` (available in RAIL's examples directory). You must also install some run-of-the-mill Python packages: numpy, scipy, matplotlib, and seaborn.
### Contents
* [Data](#data)
- [Photo-z Results](#fzboost)
* [CDF-based metrics](#metrics)
- [PIT](#pit)
- [QQ plot](#qq)
* [Summary statistics of CDF-based metrics](#summary_stats)
- [KS](#ks)
- [CvM](#cvm)
- [AD](#ad)
- [KLD](#kld)
* [CDE loss](#cde_loss)
* [Summary](#summary)
```
from rail.evaluation.metrics.pit import *
from rail.evaluation.metrics.cdeloss import *
from utils import read_pz_output, plot_pit_qq, ks_plot
from main import Summary
import qp
import os
%matplotlib inline
%reload_ext autoreload
%autoreload 2
```
<a class="anchor" id="data"></a>
# Data
To compute the photo-z metrics of a given test sample, it is necessary to read the output of a photo-z code containing galaxies' photo-z PDFs. Let's use the toy data available in `tests/data/` (**test_dc2_training_9816.hdf5** and **test_dc2_validation_9816.hdf5**) and the configuration file available in `examples/configs/FZBoost.yaml` to generate a small sample of photo-z PDFs using the **FZBoost** algorithm available on RAIL's _estimation_ module.
<a class="anchor" id="fzboost"></a>
### Photo-z Results
#### Run FZBoost
Go to dir `<your_path>/RAIL/examples/estimation/` and run the command:
`python main.py configs/FZBoost.yaml`
The photo-z output files (inputs for this notebook) will be written to:
`<your_path>/RAIL/examples/estimation/results/FZBoost/test_FZBoost.hdf5`.
Let's use the ancillary function **read_pz_output** to facilitate the reading of all necessary data.
```
my_path = '/Users/sam/WORK/software/TMPRAIL/RAIL' # replace this with your local path to RAIL's parent dir
pdfs_file = os.path.join(my_path, "examples/estimation/results/FZBoost/test_FZBoost.hdf5")
ztrue_file = os.path.join(my_path, "tests/data/test_dc2_validation_9816.hdf5")
pdfs, zgrid, ztrue, photoz_mode = read_pz_output(pdfs_file, ztrue_file) # all numpy arrays
```
The inputs for the metrics shown above are the array of true (or spectroscopic) redshifts, and an ensemble of photo-z PDFs (a `qp.Ensemble` object).
```
fzdata = qp.Ensemble(qp.interp, data=dict(xvals=zgrid, yvals=pdfs))
```
***
<a class="anchor" id="metrics"></a>
# Metrics
<a class="anchor" id="pit"></a>
## PIT
The Probability Integral Transform (PIT) is the Cumulative Distribution Function (CDF) of the photo-z PDF
$$ \mathrm{CDF}(f, q)\ =\ \int_{-\infty}^{q}\ f(z)\ dz $$
evaluated at the galaxy's true redshift for every galaxy $i$ in the catalog.
$$ \mathrm{PIT}(p_{i}(z);\ z_{i})\ =\ \int_{-\infty}^{z^{true}_{i}}\ p_{i}(z)\ dz $$
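RAIL's `PIT` class computes this for you (below), but conceptually, for gridded PDFs like the ones read above, each PIT value is just that galaxy's CDF interpolated at its true redshift. A minimal sketch, assuming `pdfs` has one row per galaxy evaluated on `zgrid`:

```python
import numpy as np

def pit_values(pdfs, zgrid, ztrue):
    """Sketch: PIT_i = CDF_i(ztrue_i) for gridded PDFs (one row per galaxy)."""
    # Cumulative trapezoid integral of each PDF along the redshift grid
    dz = np.diff(zgrid)
    cdfs = np.concatenate(
        [np.zeros((pdfs.shape[0], 1)),
         np.cumsum(0.5 * (pdfs[:, 1:] + pdfs[:, :-1]) * dz, axis=1)],
        axis=1)
    # Evaluate each galaxy's CDF at its true redshift
    return np.array([np.interp(z, zgrid, cdf) for z, cdf in zip(ztrue, cdfs)])
```

This is only a conceptual sketch; RAIL's implementation handles normalization and edge cases for you.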
```
pitobj = PIT(fzdata, ztrue)
quant_ens, metamets = pitobj.evaluate()
```
The _evaluate_ method of the PIT class returns two objects: a quantile distribution based on the full set of PIT values (a frozen distribution object), and a dictionary of meta metrics associated with PIT (detailed below).
```
quant_ens
metamets
```
PIT values
```
pit_vals = np.array(pitobj._pit_samps)
pit_vals
```
### PIT outlier rate
The PIT outlier rate is a global metric defined as the fraction of galaxies in the sample with extreme PIT values. The lower and upper limits for considering a PIT as outlier are optional parameters set at the Metrics instantiation (default values are: PIT $<10^{-4}$ or PIT $>0.9999$).
```
pit_out_rate = PITOutRate(pit_vals, quant_ens).evaluate()
print(f"PIT outlier rate of this sample: {pit_out_rate:.6f}")
```
<a class="anchor" id="qq"></a>
## PIT-QQ plot
The histogram of PIT values is a useful tool for a qualitative assessment of PDFs quality. It shows whether the PDFs are:
* biased (tilted PIT histogram)
* under-dispersed (excess counts close to the boundaries 0 and 1)
* over-dispersed (lack of counts close to the boundaries 0 and 1)
* well-calibrated (flat histogram)
Following the standards of the DC1 paper, the PIT histogram is accompanied by the quantile-quantile (QQ) plot, which can be used to compare qualitatively the PIT distribution obtained from the PDFs against the ideal case (a uniform distribution). The closer the QQ plot is to the diagonal, the better the PDF calibration.
```
plot_pit_qq(pdfs, zgrid, ztrue, title="PIT-QQ - toy data", code="FZBoost",
pit_out_rate=pit_out_rate, savefig=False)
```
The black horizontal line represents the ideal case where the PIT histogram would behave as a uniform distribution U(0,1).
***
<a class="anchor" id="summary_stats"></a>
# Summary statistics of CDF-based metrics
To evaluate globally the quality of PDFs estimates, `rail.evaluation` provides a set of metrics to compare the empirical distributions of PIT values with the reference uniform distribution, U(0,1).
<a class="anchor" id="ks"></a>
### Kolmogorov-Smirnov
Let's start with the traditional Kolmogorov-Smirnov (KS) statistic test, which is the maximum difference between the empirical and the expected cumulative distributions of PIT values:
$$
\mathrm{KS} \equiv \max_{PIT} \Big( \left| \ \mathrm{CDF} \small[ \hat{f}, z \small] - \mathrm{CDF} \small[ \tilde{f}, z \small] \ \right| \Big)
$$
Where $\hat{f}$ is the PIT distribution and $\tilde{f}$ is U(0,1). Therefore, the smaller the KS value, the closer the PIT distribution is to uniform. The `evaluate` method of the PITKS class returns a named tuple with the statistic and p-value.
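Outside of RAIL's wrapper, the same statistic can be obtained directly from `scipy.stats.kstest`, which compares a sample against a fully specified distribution (here U(0,1)). A sketch on synthetic PIT values:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for real PIT values: a well-calibrated sample is uniform
rng = np.random.default_rng(0)
pit_sample = rng.uniform(size=2000)

# KS distance between the empirical PIT distribution and U(0,1)
ks_result = stats.kstest(pit_sample, "uniform")
print(ks_result.statistic, ks_result.pvalue)
```

For a genuinely uniform sample the statistic is small and the p-value large; a tilted or peaked PIT histogram drives the statistic up.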
```
ksobj = PITKS(pit_vals, quant_ens)
ks_stat_and_pval = ksobj.evaluate()
ks_stat_and_pval
```
Visual interpretation of the KS statistic:
```
ks_plot(pitobj)
print(f"KS metric of this sample: {ks_stat_and_pval.statistic:.4f}")
```
<a class="anchor" id="cvm"></a>
### Cramer-von Mises
Similarly, let's calculate the Cramer-von Mises (CvM) test, a variant of the KS statistic defined as the mean-square difference between the CDFs of an empirical PDF and the true PDFs:
$$ \mathrm{CvM}^2 \equiv \int_{-\infty}^{\infty} \Big( \mathrm{CDF} \small[ \hat{f}, z \small] \ - \ \mathrm{CDF} \small[ \tilde{f}, z \small] \Big)^{2} \mathrm{dCDF}(\tilde{f}, z) $$
on the distribution of PIT values, which should be uniform if the PDFs are perfect.
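As with KS, `scipy.stats.cramervonmises` (scipy >= 1.6) computes this statistic directly for a sample against U(0,1); a sketch on synthetic PIT values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pit_sample = rng.uniform(size=2000)  # stand-in for real PIT values

cvm_result = stats.cramervonmises(pit_sample, "uniform")
print(cvm_result.statistic, cvm_result.pvalue)
```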
```
cvmobj = PITCvM(pit_vals, quant_ens)
cvm_stat_and_pval = cvmobj.evaluate()
print(f"CvM metric of this sample: {cvm_stat_and_pval.statistic:.4f}")
```
<a class="anchor" id="ad"></a>
### Anderson-Darling
Another variation of the KS statistic is the Anderson-Darling (AD) test, a weighted mean-squared difference featuring enhanced sensitivity to discrepancies in the tails of the distribution.
$$ \mathrm{AD}^2 \equiv N_{tot} \int_{-\infty}^{\infty} \frac{\big( \mathrm{CDF} \small[ \hat{f}, z \small] \ - \ \mathrm{CDF} \small[ \tilde{f}, z \small] \big)^{2}}{\mathrm{CDF} \small[ \tilde{f}, z \small] \big( 1 \ - \ \mathrm{CDF} \small[ \tilde{f}, z \small] \big)}\mathrm{dCDF}(\tilde{f}, z) $$
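For a fully specified reference distribution the integral reduces to a closed form over the sorted sample. A sketch for U(0,1) (where $\mathrm{CDF}[\tilde{f}, u] = u$, so the sorted PIT values can be used directly), using the standard order-statistic formula:

```python
import numpy as np

def anderson_darling_uniform(pit_vals):
    """Sketch: AD^2 of a sample against U(0,1) via the order-statistic form
    A^2 = -n - (1/n) * sum_i (2i - 1) * [ln u_(i) + ln(1 - u_(n+1-i))]."""
    u = np.sort(np.clip(pit_vals, 1e-12, 1 - 1e-12))  # guard the logarithms
    n = len(u)
    i = np.arange(1, n + 1)
    # u[::-1] places u_(n+1-i) at position i
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))
```

The denominator in the integral is what makes AD sensitive to the tails, and also what makes extreme PIT values (near 0 or 1) numerically delicate — hence the clipping above and the outlier cut discussed below.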
```
adobj = PITAD(pit_vals, quant_ens)
ad_stat_crit_sig = adobj.evaluate()
ad_stat_crit_sig
print(f"AD metric of this sample: {ad_stat_crit_sig.statistic:.4f}")
```
It is possible to remove catastrophic outliers before calculating the integral in order to avoid numerical instability. For instance, Schmidt et al. computed the Anderson-Darling statistic within the interval (0.01, 0.99).
```
ad_stat_crit_sig_cut = adobj.evaluate(pit_min=0.01, pit_max=0.99)
print(f"AD metric of this sample: {ad_stat_crit_sig.statistic:.4f}")
print(f"AD metric for 0.01 < PIT < 0.99: {ad_stat_crit_sig_cut.statistic:.4f}")
```
<a class="anchor" id="cde_loss"></a>
# CDE Loss
In the absence of true photo-z posteriors, the metric used to evaluate individual PDFs is the **Conditional Density Estimate (CDE) Loss**, a metric analogous to the root-mean-squared error:
$$ L(f, \hat{f}) \equiv \int \int {\big(f(z | x) - \hat{f}(z | x) \big)}^{2} dzdP(x), $$
where $f(z | x)$ is the true photo-z PDF and $\hat{f}(z | x)$ is the estimated PDF in terms of the photometry $x$. Since $f(z | x)$ is unknown, we estimate the **CDE Loss** as described in [Izbicki & Lee, 2017 (arXiv:1704.08095)](https://arxiv.org/abs/1704.08095):
$$ \mathrm{CDE} = \mathbb{E}\big( \int{{\hat{f}(z | X)}^2 dz} \big) - 2{\mathbb{E}}_{X, Z}\big(\hat{f}(Z, X) \big) + K_{f}, $$
where the first term is the expectation value of the photo-z posterior with respect to the marginal distribution of the covariates X, the second term is the expectation value with respect to the joint distribution of the observables X and the space Z of all possible redshifts (in practice, the centroids of the PDF bins), and the third term is a constant depending on the true conditional densities $f(z | x)$.
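Before calling RAIL's `CDELoss` class (below), the estimator can be sketched directly for gridded PDFs: average the integral of each squared PDF, then subtract twice the average PDF height at the true redshifts. The constant $K_f$ is unknown, so the loss is only meaningful when comparing estimators on the same sample:

```python
import numpy as np

def cde_loss_estimate(pdfs, zgrid, ztrue):
    """Sketch: empirical CDE loss up to the unknown constant K_f."""
    # First term: mean over galaxies of the integral of f_hat(z)^2 dz
    term1 = np.mean(np.trapz(pdfs ** 2, zgrid, axis=1))
    # Second term: mean of f_hat evaluated at each galaxy's true redshift
    term2 = np.mean([np.interp(z, zgrid, pdf) for z, pdf in zip(ztrue, pdfs)])
    return term1 - 2 * term2
```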
```
cdelossobj = CDELoss(fzdata, zgrid, ztrue)
cde_stat_and_pval = cdelossobj.evaluate()
cde_stat_and_pval
print(f"CDE loss of this sample: {cde_stat_and_pval.statistic:.2f}")
```
<a class="anchor" id="summary"></a>
# Summary
```
summary = Summary(pdfs, zgrid, ztrue)
summary.markdown_metrics_table(pitobj=pitobj) # pitobj as optional input to speed-up metrics evaluation
summary.markdown_metrics_table(pitobj=pitobj, show_dc1="FlexZBoost")
```
# Model Selection/Evaluation with Yellowbrick
Oftentimes with a new dataset, the choice of the best machine learning algorithm is not always obvious at the outset. Thanks to the scikit-learn API, we can easily approach the problem of model selection using model *evaluation*. As we'll see in these examples, Yellowbrick is helpful for facilitating the process.
## Evaluating Classifiers
Classification models attempt to predict a target in a discrete space, that is assign an instance of dependent variables one or more categories. Classification score visualizers display the differences between classes as well as a number of classifier-specific visual evaluations.
### ROCAUC
A `ROCAUC` (Receiver Operating Characteristic/Area Under the Curve) plot allows the user to visualize the tradeoff between the classifier’s sensitivity and specificity.
The Receiver Operating Characteristic (ROC) is a measure of a classifier’s predictive quality that compares and visualizes the tradeoff between the model’s sensitivity and specificity. When plotted, a ROC curve displays the true positive rate on the Y axis and the false positive rate on the X axis on both a global average and per-class basis. The ideal point is therefore the top-left corner of the plot: false positives are zero and true positives are one.
This leads to another metric, area under the curve (AUC), which is a computation of the relationship between false positives and true positives. The higher the AUC, the better the model generally is. However, it is also important to inspect the “steepness” of the curve, as this describes the maximization of the true positive rate while minimizing the false positive rate.
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ROCAUC
from yellowbrick.datasets import load_occupancy
# Load the classification data set
X, y = load_occupancy()
# Specify the classes of the target
classes = ["unoccupied", "occupied"]
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the visualizer with the classification model
visualizer = ROCAUC(LogisticRegression(
multi_class="auto", solver="liblinear"
), classes=classes, size=(1080, 720)
)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw the data
```
Yellowbrick’s `ROCAUC` Visualizer also allows for plotting **multiclass** classification curves. ROC curves are typically used in binary classification, and in fact the Scikit-Learn `roc_curve` metric is only able to perform metrics for binary classifiers. Yellowbrick addresses this by binarizing the output (per-class) or by using one-vs-rest (micro score) or one-vs-all (macro score) classification strategies.
```
from sklearn.linear_model import RidgeClassifier
from sklearn.preprocessing import OrdinalEncoder, LabelEncoder
from yellowbrick.datasets import load_game
# Load multi-class classification dataset
X, y = load_game()
classes = ['win', 'loss', 'draw']
# Encode the non-numeric columns
X = OrdinalEncoder().fit_transform(X)
y = LabelEncoder().fit_transform(y)
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2
)
visualizer = ROCAUC(
RidgeClassifier(), classes=classes, size=(1080, 720)
)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw the data
```
### ClassificationReport Heatmap
The classification report visualizer displays the precision, recall, F1, and support scores for the model. In order to support easier interpretation and problem detection, the report integrates numerical scores with a color-coded heatmap. All heatmaps are in the range `(0.0, 1.0)` to facilitate easy comparison of classification models across different classification reports.
```
from sklearn.naive_bayes import GaussianNB
from yellowbrick.classifier import ClassificationReport
# Load the classification data set
X, y = load_occupancy()
classes = ["unoccupied", "occupied"]
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2
)
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(
bayes, classes=classes, support=True, size=(1080, 720)
)
visualizer.fit(X_train, y_train) # Fit the visualizer and the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw the data
```
The classification report shows a representation of the main classification metrics on a per-class basis. This gives a deeper intuition of the classifier behavior over global accuracy which can mask functional weaknesses in one class of a multiclass problem. Visual classification reports are used to compare classification models to select models that are “redder”, e.g. have stronger classification metrics or that are more balanced.
The metrics are defined in terms of true and false positives, and true and false negatives. Positive and negative in this case are generic names for the classes of a binary classification problem. In the example above, we would consider true and false occupied and true and false unoccupied. Therefore a true positive is when the actual class is positive as is the estimated class. A false positive is when the actual class is negative but the estimated class is positive. Using this terminology the metrics are defined as follows:
**precision**
Precision is the ability of a classifier not to label an instance positive that is actually negative. For each class it is defined as the ratio of true positives to the sum of true and false positives. Said another way, “for all instances classified positive, what percent was correct?”
**recall**
Recall is the ability of a classifier to find all positive instances. For each class it is defined as the ratio of true positives to the sum of true positives and false negatives. Said another way, “for all instances that were actually positive, what percent was classified correctly?”
**f1 score**
The F1 score is a weighted harmonic mean of precision and recall such that the best score is 1.0 and the worst is 0.0. Generally speaking, F1 scores are lower than accuracy measures as they embed precision and recall into their computation. As a rule of thumb, the weighted average of F1 should be used to compare classifier models, not global accuracy.
**support**
Support is the number of actual occurrences of the class in the specified dataset. Imbalanced support in the training data may indicate structural weaknesses in the reported scores of the classifier and could indicate the need for stratified sampling or rebalancing. Support doesn’t change between models but instead diagnoses the evaluation process.
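The definitions above reduce to a few lines of arithmetic on the raw counts; a sketch (the counts used here are made-up for illustration):

```python
def classification_metrics(tp, fp, fn):
    """Per-class precision, recall, and F1 from true/false positive/negative counts."""
    precision = tp / (tp + fp)   # of everything labeled positive, how much was right
    recall = tp / (tp + fn)      # of everything actually positive, how much was found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Hypothetical counts: 80 true positives, 20 false positives, 10 false negatives
p, r, f1 = classification_metrics(80, 20, 10)
```

Because F1 is a harmonic mean, it is dragged down by whichever of precision or recall is worse — which is why it is preferred over accuracy for imbalanced classes.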
### ClassPredictionError
The Yellowbrick `ClassPredictionError` plot is a twist on other and sometimes more familiar classification model diagnostic tools like the Confusion Matrix and Classification Report. Like the Classification Report, this plot shows the support (number of training samples) for each class in the fitted classification model as a stacked bar chart. Each bar is segmented to show the proportion of predictions (including false negatives and false positives, like a Confusion Matrix) for each class. You can use a `ClassPredictionError` to visualize which classes your classifier is having a particularly difficult time with, and more importantly, what incorrect answers it is giving on a per-class basis. This can often enable you to better understand strengths and weaknesses of different models and particular challenges unique to your dataset.
The class prediction error chart provides a way to quickly understand how good your classifier is at predicting the right classes.
```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import ClassPredictionError
from yellowbrick.datasets import load_credit
X, y = load_credit()
classes = ['account in default', 'current with bills']
# Perform 80/20 training/test split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.20, random_state=42
)
# Instantiate the classification model and visualizer
visualizer = ClassPredictionError(
RandomForestClassifier(n_estimators=10),
classes=classes, size=(1080, 720)
)
# Fit the training data to the visualizer
visualizer.fit(X_train, y_train)
# Evaluate the model on the test data
visualizer.score(X_test, y_test)
# Draw visualization
visualizer.show()
```
## Evaluating Regressors
Regression models attempt to predict a target in a continuous space. Regressor score visualizers display the instances in model space to better understand how the model is making predictions.
### PredictionError
A prediction error plot shows the actual targets from the dataset against the predicted values generated by our model. This allows us to see how much variance is in the model. Data scientists can diagnose regression models using this plot by comparing against the 45 degree line, where the prediction exactly matches the actual value.
```
from sklearn.linear_model import Lasso
from yellowbrick.regressor import PredictionError
from yellowbrick.datasets import load_concrete
# Load regression dataset
X, y = load_concrete()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Instantiate the linear model and visualizer
model = Lasso()
visualizer = PredictionError(model, size=(1080, 720))
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw the data
```
### Residuals Plot
Residuals, in the context of regression models, are the difference between the observed value of the target variable (y) and the predicted value (ŷ), i.e. the error of the prediction. The residuals plot shows the difference between residuals on the vertical axis and the dependent variable on the horizontal axis, allowing you to detect regions within the target that may be susceptible to more or less error.
```
from sklearn.linear_model import Ridge
from yellowbrick.regressor import ResidualsPlot
# Instantiate the linear model and visualizer
model = Ridge()
visualizer = ResidualsPlot(model, size=(1080, 720))
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw the data
```
### Try them all
```
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression
regressors = {
"support vector machine": SVR(),
"multilayer perceptron": MLPRegressor(),
"nearest neighbors": KNeighborsRegressor(),
"bayesian ridge": BayesianRidge(),
"linear regression": LinearRegression(),
}
for _, regressor in regressors.items():
visualizer = ResidualsPlot(regressor)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
```
## Diagnostics
Target visualizers specialize in visually describing the dependent variable for supervised modeling, often referred to as y or the target.
### Class Balance Report
One of the biggest challenges for classification models is an imbalance of classes in the training data. Severe class imbalances may be masked by relatively good F1 and accuracy scores – the classifier is simply guessing the majority class and not making any evaluation on the underrepresented class.
There are several techniques for dealing with class imbalance such as stratified sampling, down sampling the majority class, weighting, etc. But before these actions can be taken, it is important to understand what the class balance is in the training data. The `ClassBalance` visualizer supports this by creating a bar chart of the support for each class, that is the frequency of the classes’ representation in the dataset.
```
from yellowbrick.target import ClassBalance
# Load multi-class classification dataset
X, y = load_game()
# Instantiate the visualizer
visualizer = ClassBalance(
labels=["draw", "loss", "win"], size=(1080, 720)
)
visualizer.fit(y)
visualizer.show()
```
Yellowbrick visualizers are intended to steer the model selection process. Generally, model selection is a search problem defined as follows: given N instances described by numeric properties and (optionally) a target for estimation, find a model described by a triple composed of features, an algorithm and hyperparameters that best fits the data. For most purposes the “best” triple refers to the triple that receives the best cross-validated score for the model type.
The yellowbrick.model_selection package provides visualizers for inspecting the performance of cross validation and hyperparameter tuning. Many visualizers wrap functionality found in `sklearn.model_selection` and others build upon it for performing multi-model comparisons.
### Cross Validation
Generally we determine whether a given model is optimal by looking at its F1, precision, recall, and accuracy (for classification), or its coefficient of determination (R2) and error (for regression). However, real world data is often distributed somewhat unevenly, meaning that the fitted model is likely to perform better on some sections of the data than on others. Yellowbrick's `CVScores` visualizer enables us to visually explore these variations in performance using different cross validation strategies.
Cross-validation starts by shuffling the data (to prevent any unintentional ordering errors) and splitting it into `k` folds. Then `k` models are fit on $\frac{k-1}{k}$ of the data (called the training split) and evaluated on $\frac{1}{k}$ of the data (called the test split). The results from each evaluation are averaged together for a final score, then the final model is fit on the entire dataset for operationalization.
In Yellowbrick, the `CVScores` visualizer displays cross-validated scores as a bar chart (one bar for each fold) with the average score across all folds plotted as a horizontal dotted line.
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
# Load the classification data set
X, y = load_occupancy()
# Create a cross-validation strategy
cv = StratifiedKFold(n_splits=12, shuffle=True, random_state=42)  # random_state requires shuffle=True
# Instantiate the classification model and visualizer
model = MultinomialNB()
visualizer = CVScores(
model, cv=cv, scoring='f1_weighted', size=(1080, 720)
)
visualizer.fit(X, y)
visualizer.show()
```
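Under the hood, `CVScores` wraps scikit-learn's own cross-validation machinery, so the same per-fold numbers can be obtained directly with `cross_val_score`. A minimal sketch on synthetic data (`GaussianNB` and `make_classification` stand in for the MultinomialNB/occupancy pair above, since `load_occupancy` ships with Yellowbrick's example datasets):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the occupancy data loaded above
X, y = make_classification(n_samples=600, n_features=5, random_state=42)

cv = StratifiedKFold(n_splits=12, shuffle=True, random_state=42)

# One f1_weighted score per fold -- these are the bars CVScores draws;
# their mean is the dotted horizontal line
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="f1_weighted")
print(scores.round(3), scores.mean().round(3))
```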
Visit the Yellowbrick docs for more about visualizers for [classification](http://www.scikit-yb.org/en/latest/api/classifier/index.html), [regression](http://www.scikit-yb.org/en/latest/api/regressor/index.html) and [model selection](http://www.scikit-yb.org/en/latest/api/model_selection/index.html)!
---
```
from functools import partial
from collections import defaultdict
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
from scipy import stats  # needed by r2() below
import matplotlib.pyplot as plt
from torchray_extremal_perturbation_sequence import extremal_perturbation, contrastive_reward, simple_reward
from torchray.utils import get_device
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
from sklearn import preprocessing
import pandas as pd
# Stub out tf.keras.utils.Sequence with a dummy class before importing the
# helper module below, which subclasses it
class MySequence:
    def __init__(self):
        self.dummy = 1
import tensorflow as tf
import tensorflow.keras
tf.keras.utils.Sequence = MySequence
from sequence_logo_helper import plot_dna_logo, dna_letter_at
#Load data
dataset_name = "optimus5_synth"
def one_hot_encode(df, col='utr', seq_len=50):
# Dictionary returning one-hot encoding of nucleotides.
nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}
# Create empty matrix.
vectors=np.empty([len(df),seq_len,4])
# Iterate through UTRs and one-hot encode
for i,seq in enumerate(df[col].str[:seq_len]):
seq = seq.lower()
a = np.array([nuc_d[x] for x in seq])
vectors[i] = a
return vectors
def r2(x,y):
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
return r_value**2
#Train data
e_train = pd.read_csv("bottom5KIFuAUGTop5KIFuAUG.csv")
e_train.loc[:,'scaled_rl'] = preprocessing.StandardScaler().fit_transform(e_train.loc[:,'rl'].values.reshape(-1,1))
seq_e_train = one_hot_encode(e_train,seq_len=50)
x_train = seq_e_train
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1], x_train.shape[2]))
y_train = np.array(e_train['scaled_rl'].values)
y_train = np.reshape(y_train, (y_train.shape[0],1))
y_train = (y_train >= 0.)
y_train = np.concatenate([1. - y_train, y_train], axis=1)
print("x_train.shape = " + str(x_train.shape))
print("y_train.shape = " + str(y_train.shape))
#Test data
allFiles = ["optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv"]
x_tests = []
for csv_to_open in allFiles :
#Load dataset for benchmarking
dataset_name = csv_to_open.replace(".csv", "")
benchmarkSet = pd.read_csv(csv_to_open)
seq_e_test = one_hot_encode(benchmarkSet, seq_len=50)
x_test = seq_e_test[:, None, ...]
print(x_test.shape)
x_tests.append(x_test)
x_test = np.concatenate(x_tests, axis=0)
y_test = -1. * np.ones((x_test.shape[0], 1))
y_test = (y_test >= 0.)
y_test = np.concatenate([1. - y_test, y_test], axis=1)
print("x_test.shape = " + str(x_test.shape))
print("y_test.shape = " + str(y_test.shape))
#Load predictor model
class CNNClassifier(nn.Module) :
def __init__(self, batch_size) :
super(CNNClassifier, self).__init__()
self.conv1 = nn.Conv2d(4, 120, kernel_size=(1, 8), padding=(0, 4))
self.conv2 = nn.Conv2d(120, 120, kernel_size=(1, 8), padding=(0, 4))
self.conv3 = nn.Conv2d(120, 120, kernel_size=(1, 8), padding=(0, 4))
self.fc1 = nn.Linear(in_features=50 * 120, out_features=40)
self.drop1 = nn.Dropout(p=0.2)
self.fc2 = nn.Linear(in_features=40, out_features=1)
self.batch_size = batch_size
self.use_cuda = True if torch.cuda.is_available() else False
def forward(self, x):
#x = x.transpose(1, 2)
x = F.relu(self.conv1(x))[..., 1:]
x = F.relu(self.conv2(x))[..., 1:]
x = F.relu(self.conv3(x))[..., 1:]
x = x.transpose(1, 3)
x = x.reshape(-1, 50 * 120)
x = F.relu(self.fc1(x))
x = self.fc2(x)
#Transform sigmoid logits to 2-input softmax scores
x = torch.cat([-1 * x, x], axis=1)
return x
model_pytorch = CNNClassifier(batch_size=1)
_ = model_pytorch.load_state_dict(torch.load("optimusRetrainedMain_pytorch.pth"))
#Create pytorch input tensor
x_test_pytorch = Variable(torch.FloatTensor(np.transpose(x_test, (0, 3, 1, 2))))
x_test_pytorch = x_test_pytorch.cuda() if model_pytorch.use_cuda else x_test_pytorch
digit_test = np.array(np.argmax(y_test, axis=1), dtype=int)  # np.int is deprecated; use the builtin int
#Predict using pytorch model
device = get_device()
model_pytorch.to(device)
model_pytorch.eval()
y_pred_pytorch = np.concatenate([model_pytorch(x_test_pytorch[i:i+1]).data.cpu().numpy() for i in range(x_test.shape[0])], axis=0)
digit_pred_test = np.argmax(y_pred_pytorch, axis=-1)
print("Test accuracy = " + str(round(np.sum(digit_test == digit_pred_test) / digit_test.shape[0], 4)))
device = get_device()
model_pytorch.to(device)
x_test_pytorch = x_test_pytorch.to(device)
#Gradient saliency/backprop visualization
import matplotlib.collections as collections
import operator
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96) :
end_pos = ref_seq.find("#")
fig = plt.figure(figsize=figsize)
ax = plt.gca()
if score_clip is not None :
importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)
max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01
for i in range(0, len(ref_seq)) :
mutability_score = np.sum(importance_scores[:, i])
dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax)
plt.sca(ax)
plt.xlim((0, len(ref_seq)))
plt.ylim((0, max_score))
plt.axis('off')
plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
class IdentityEncoder :
def __init__(self, seq_len, channel_map) :
self.seq_len = seq_len
self.n_channels = len(channel_map)
self.encode_map = channel_map
self.decode_map = {
val : key for key, val in channel_map.items()
}
def encode(self, seq) :
encoding = np.zeros((self.seq_len, self.n_channels))
for i in range(len(seq)) :
if seq[i] in self.encode_map :
channel_ix = self.encode_map[seq[i]]
encoding[i, channel_ix] = 1.
return encoding
def encode_inplace(self, seq, encoding) :
for i in range(len(seq)) :
if seq[i] in self.encode_map :
channel_ix = self.encode_map[seq[i]]
encoding[i, channel_ix] = 1.
def encode_inplace_sparse(self, seq, encoding_mat, row_index) :
raise NotImplementedError()
def decode(self, encoding) :
seq = ''
for pos in range(0, encoding.shape[0]) :
argmax_nt = np.argmax(encoding[pos, :])
max_nt = np.max(encoding[pos, :])
if max_nt == 1 :
seq += self.decode_map[argmax_nt]
else :
seq += "0"
return seq
def decode_sparse(self, encoding_mat, row_index) :
encoding = np.array(encoding_mat[row_index, :].todense()).reshape(-1, 4)
return self.decode(encoding)
#Initialize sequence encoder
seq_length = 50
residue_map = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
encoder = IdentityEncoder(seq_length, residue_map)
y_pred_pytorch[:10]
#Execute method on test set
i = 0
area = 0.2
variant_mode = "preserve"
perturbation_mode = "blur"
masks = []
m, _ = extremal_perturbation(
model_pytorch, x_test_pytorch[i:i + 1], int(digit_test[i]),
reward_func=contrastive_reward,
debug=True,
jitter=False,
areas=[area],
variant=variant_mode,
perturbation=perturbation_mode,
num_levels=8,
step=3,
sigma=3
)
imp_s = np.tile(m[0, 0, :, :].cpu().numpy(), (4, 1)) * x_test[i, 0, :, :].T
score_clip = None
plot_dna_logo(x_test[i, 0, :, :], sequence_template='N'*50, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(imp_s, encoder.decode(x_test[i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)
#Execute method on test set
n_to_test = x_test.shape[0]
area = 0.2
variant_mode = "preserve"
perturbation_mode = "blur"
masks = []
for i in range(n_to_test) :
if i % 100 == 0 :
print("Processing example " + str(i) + "...")
m, _ = extremal_perturbation(
model_pytorch, x_test_pytorch[i:i + 1], int(digit_test[i]),
reward_func=contrastive_reward,
debug=False,
jitter=False,
areas=[area],
variant=variant_mode,
perturbation=perturbation_mode,
num_levels=8,
step=3,
sigma=3
)
masks.append(np.expand_dims(m.cpu().numpy()[:, 0, ...], axis=-1))
importance_scores_test = np.concatenate(masks, axis=0)
#Visualize a few images
for plot_i in range(0, 5) :
print("Test sequence " + str(plot_i) + ":")
imp_s = np.tile(importance_scores_test[plot_i, :, :, 0], (4, 1)) * x_test[plot_i, 0, :, :].T
score_clip = None
plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(imp_s, encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)
#Save predicted importance scores
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[0:512, ...])
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[512:1024, ...])
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[1024:1536, ...])
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[1536:2048, ...])
#Execute method on test set
n_to_test = x_test.shape[0]
area = 0.2
variant_mode = "preserve"
perturbation_mode = "fade"
masks = []
for i in range(n_to_test) :
if i % 100 == 0 :
print("Processing example " + str(i) + "...")
m, _ = extremal_perturbation(
model_pytorch, x_test_pytorch[i:i + 1], int(digit_test[i]),
reward_func=contrastive_reward,
debug=False,
jitter=False,
areas=[area],
variant=variant_mode,
perturbation=perturbation_mode,
num_levels=8,
step=3,
sigma=3
)
masks.append(np.expand_dims(m.cpu().numpy()[:, 0, ...], axis=-1))
importance_scores_test = np.concatenate(masks, axis=0)
#Visualize a few images
for plot_i in range(0, 5) :
print("Test sequence " + str(plot_i) + ":")
imp_s = np.tile(importance_scores_test[plot_i, :, :, 0], (4, 1)) * x_test[plot_i, 0, :, :].T
score_clip = None
plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(imp_s, encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)
#Save predicted importance scores
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[0:512, ...])
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[512:1024, ...])
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[1024:1536, ...])
model_name = "extremal_" + "optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512" + "_mode_" + variant_mode + "_perturbation_" + perturbation_mode + "_area_" + str(area).replace(".", "")
np.save(model_name + "_importance_scores_test", importance_scores_test[1536:2048, ...])
```
---
Your name here.
Your section number here.
# Homework 5: Fitting
##### **Submit this notebook to bcourses to receive credit for this assignment.**
Please complete this homework assignment in code cells in the IPython notebook. Please submit both a PDF of the Jupyter notebook to bcourses and the notebook itself (.ipynb file). Note that when saving as PDF you should not use the LaTeX option, because it crashes; use the option that saves directly to PDF instead.
## Problem 1: Gamma-ray peak
[Some of you may recognize this problem from Advanced Lab's Error Analysis Exercise. That's not an accident. You may also recognize this dataset from Homework04. That's not an accident either.]
You are given a dataset (peak.dat) from a gamma-ray experiment consisting of ~1000 hits. Each line in the file corresponds to one recorded gamma-ray event, and stores the measured energy of the gamma-ray. We will assume that the energies are randomly distributed about a common mean, and that each event is uncorrelated to others. Read the dataset from the enclosed file and:
1. Produce a histogram of the distribution of energies. Choose the number of bins wisely, i.e. so that the width of each bin is smaller than the width of the peak, and at the same time so that the number of entries in the most populated bin is relatively large. Since this plot represents randomly-collected data, plotting error bars would be appropriate.
1. Fit the distribution to a Gaussian function using an unbinned fit (<i>Hint:</i> use the <tt>scipy.stats.norm.fit()</tt> function), and compare the parameters of the fitted Gaussian with the mean and standard deviation computed in Homework04
1. Fit the distribution to a Gaussian function using a binned least-squares fit (<i>Hint:</i> use <tt>scipy.optimize.curve_fit()</tt> function), and compare the parameters of the fitted Gaussian and their uncertainties to the parameters obtained in the unbinned fit above.
1. Re-make your histogram from (1) with twice as many bins, and repeat the binned least-squares fit from (3) on the new histogram. How sensitive are your results to binning?
1. How consistent is the distribution with a Gaussian? In other words, compare the histogram from (1) to the fitted curve, and compute a goodness-of-fit value, such as $\chi^2$/d.f.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import scipy.optimize as fitter
# Once again, feel free to play around with the matplotlib parameters
plt.rcParams['figure.figsize'] = 8,4
plt.rcParams['font.size'] = 14
energies = np.loadtxt('peak.dat') # MeV
```
Recall `plt.hist()` isn't great when you need error bars, so it's better to first use [`np.histogram()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html) -- which returns the counts in each bin, along with the edges of the bins (there are $n + 1$ edges for $n$ bins). Once you find the bin centers and errors on the counts, you can make the actual plot with [`plt.bar()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html). Start with something close to `bins = 25` as the second input parameter to `np.histogram()`.
```
# use numpy.histogram to get the counts and bin edges
# bin_centers = 0.5*(bin_edges[1:]+bin_edges[:-1]) works for finding the bin centers
# assume Poisson errors on the counts – errors go as the square root of the count
# now use plt.bar() to make the histogram with error bars (remember to label the plot)
```
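A minimal sketch of those steps, using a synthetic Gaussian sample in place of `peak.dat` (the file isn't bundled here) and the Agg backend so the plot call runs headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
energies = rng.normal(loc=1.2, scale=0.1, size=1000)  # stand-in for peak.dat

counts, bin_edges = np.histogram(energies, bins=25)
bin_centers = 0.5 * (bin_edges[1:] + bin_edges[:-1])
counts_err = np.sqrt(counts)  # Poisson: error = sqrt(count)

plt.bar(bin_centers, counts, width=bin_edges[1] - bin_edges[0],
        yerr=counts_err, capsize=2, label="data")
plt.xlabel("Energy (MeV)")
plt.ylabel("Counts per bin")
plt.legend()
```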
You can use the list of `energies` directly as input to `scipy.stats.norm.fit()`; the returned values are the mean and standard deviation of a fit to the data.
```
# Find the mean and standard deviation using scipy.stats.norm.fit()
# Compare these to those computed in the previous homework (or just find them again here)
```
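For instance (again with a synthetic stand-in for `energies`), `scipy.stats.norm.fit()` returns the maximum-likelihood mean and width, which for a Gaussian coincide with the sample mean and the ddof=0 sample standard deviation:

```python
import numpy as np
import scipy.stats

rng = np.random.default_rng(0)
energies = rng.normal(loc=1.2, scale=0.1, size=1000)  # stand-in for peak.dat

# Unbinned maximum-likelihood fit of a Gaussian to the raw event list
mu, sigma = scipy.stats.norm.fit(energies)
print(mu, sigma)

# The MLE estimates agree with the plain sample statistics
print(np.mean(energies), np.std(energies))
```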
Now, using the binned values (found above with `np.histogram()`) and their errors use `scipy.optimize.curve_fit()` to fit the data.
```
# Remember, curve_fit() will need a model function defined
def model(x, A, mu, sigma):
'''Model function to use with curve_fit();
it should take the form of a 1-D Gaussian'''
# Also make sure you define some starting parameters for curve_fit (we typically called these par0 or p0 in the past workshop)
'''# You can use this to ensure the errors are greater than 0 to avoid division by 0 within fitter.curve_fit()
for i, err in enumerate(counts_err):
if err == 0:
counts_err[i] = 1'''
# Now use fitter.curve_fit() on the binned data and compare the best-fit parameters to those found by scipy.stats.norm.fit()
# It's also useful to plot the fitted curve over the histogram you made in part 1 to check that things are working properly
# At this point, it's also useful to find the chi^2 and reduced chi^2 value of this binned fit
```
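One way the binned fit could look (synthetic data again; the guard against zero errors mirrors the hint in the cell above):

```python
import numpy as np
import scipy.optimize as fitter

rng = np.random.default_rng(0)
energies = rng.normal(loc=1.2, scale=0.1, size=1000)  # stand-in for peak.dat

counts, bin_edges = np.histogram(energies, bins=25)
bin_centers = 0.5 * (bin_edges[1:] + bin_edges[:-1])
counts_err = np.where(counts > 0, np.sqrt(counts), 1.0)  # avoid dividing by 0

def model(x, A, mu, sigma):
    """1-D Gaussian with a free amplitude."""
    return A * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p0 = [counts.max(), energies.mean(), energies.std()]  # starting parameters
popt, pcov = fitter.curve_fit(model, bin_centers, counts, p0=p0,
                              sigma=counts_err, absolute_sigma=True)

# Goodness of fit: chi^2 per degree of freedom
chi2 = np.sum(((counts - model(bin_centers, *popt)) / counts_err) ** 2)
ndf = len(counts) - len(popt)
print(popt, np.sqrt(np.diag(pcov)), chi2 / ndf)
```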
Repeat this process with twice as many bins (i.e. now use `bins = 50` in `np.histogram()`, or a similar value). Compute the $\chi^2$ and reduced $\chi^2$ and compare these values, along with the best-fit parameters between the two binned fits. Feel free to continue to play with the number of bins and see how it changes the fit.
## Problem 2: Optical Pumping experiment
One of the experiments in the 111B (111-ADV) lab is the study of the optical pumping of atomic rubidium. In that experiment, we measure the resonant frequency of a Zeeman transition as a function of the applied current (local magnetic field). Consider a mock data set:
<table border="1" align="center">
<tr>
<td>Current <i>I</i> (Amps)
</td><td>0.0 </td><td> 0.2 </td><td> 0.4 </td><td> 0.6 </td><td> 0.8 </td><td> 1.0 </td><td> 1.2 </td><td> 1.4 </td><td> 1.6 </td><td> 1.8 </td><td> 2.0 </td><td> 2.2
</td></tr>
<tr>
<td>Frequency <i>f</i> (MHz)
</td><td> 0.14 </td><td> 0.60 </td><td> 1.21 </td><td> 1.94 </td><td> 2.47 </td><td> 3.07 </td><td> 3.83 </td><td> 4.16 </td><td> 4.68 </td><td> 5.60 </td><td> 6.31 </td><td> 6.78
</td></tr></table>
1. Plot a graph of the pairs of values. Assuming a linear relationship between $I$ and $f$, determine the slope and the intercept of the best-fit line using the least-squares method with equal weights, and draw the best-fit line through the data points in the graph.
1. From what s/he knows about the equipment used to measure the resonant frequency, your lab partner hastily estimates the uncertainty in the measurement of $f$ to be $\sigma(f) = 0.01$ MHz. Estimate the probability that the straight line you found is an adequate description of the observed data if it is distributed with the uncertainty guessed by your lab partner. (Hint: use scipy.stats.chi2 class to compute the quantile of the chi2 distribution). What can you conclude from these results?
1. Repeat the analysis assuming your partner estimated the uncertainty to be $\sigma(f) = 1$ MHz. What can you conclude from these results?
1. Assume that the best-fit line found in Part 1 is a good fit to the data. Estimate the uncertainty in measurement of $y$ from the scatter of the observed data about this line. Again, assume that all the data points have equal weight. Use this to estimate the uncertainty in both the slope and the intercept of the best-fit line. This is the technique you will use in the Optical Pumping lab to determine the uncertainties in the fit parameters.
1. Now assume that the uncertainty in each value of $f$ grows with $f$: $\sigma(f) = 0.03 + 0.03 * f$ (MHz). Determine the slope and the intercept of the best-fit line using the least-squares method with unequal weights (weighted least-squares fit)
```
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import *
import scipy.stats
import scipy.optimize as fitter
# Use current as the x-variable in your plots/fitting
current = np.arange(0, 2.3, .2) # Amps
frequency = np.array([.14, .6, 1.21, 1.94, 2.47, 3.07, 3.83, 4.16, 4.68, 5.6, 6.31, 6.78]) # MHz
def linear_model(x, slope, intercept):
'''Model function to use with curve_fit();
it should take the form of a line'''
# Use fitter.curve_fit() to get the line of best fit
# Plot this line, along with the data points -- remember to label
```
The rest is pretty short, but the statistics might be a bit complicated. Ask questions if you need advice or help. Next, the problem is basically asking you to compute the $\chi^2$ for the above fit twice, once with $0.01$ MHz as the error for each point (in the 'denominator' of the $\chi^2$ formula) and once with $1$ MHz.
These values can then be compared to a "range of acceptable $\chi^2$ values", found with `scipy.stats.chi2.ppf()` -- which takes two inputs. The second input should be the number of degrees of freedom used during fitting (# data points minus the 2 free parameters). The first input should be something like $0.05$ and $0.95$ (one function call of `scipy.stats.chi2.ppf()` for each endpoint of the acceptable range). If the calculated $\chi^2$ statistic falls within this range, then the assumed uncertainty is reasonable.
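A sketch of that comparison, using the mock dataset from the table above (the two uncertainties tried in the loop are the guesses from the problem statement):

```python
import numpy as np
import scipy.stats

current = np.arange(0, 2.3, 0.2)  # Amps
frequency = np.array([0.14, 0.60, 1.21, 1.94, 2.47, 3.07,
                      3.83, 4.16, 4.68, 5.60, 6.31, 6.78])  # MHz

# Unweighted least-squares line and its residuals
slope, intercept = np.polyfit(current, frequency, 1)
resid = frequency - (slope * current + intercept)

ndf = len(current) - 2  # 12 points, 2 fitted parameters
lo, hi = scipy.stats.chi2.ppf([0.05, 0.95], ndf)

for sigma_f in (0.01, 1.0):  # the two guessed uncertainties, in MHz
    chi2 = np.sum((resid / sigma_f) ** 2)
    print(sigma_f, round(chi2, 2), lo <= chi2 <= hi)
```

With this data, $\sigma(f) = 0.01$ MHz yields a $\chi^2$ far above the acceptable range (the errors are underestimated), while $\sigma(f) = 1$ MHz yields one far below it (the errors are overestimated).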
Now, estimate the uncertainty in the frequency measurements, and use this to find the uncertainty in the best-fit parameters. [This document](https://pages.mtu.edu/~fmorriso/cm3215/UncertaintySlopeInterceptOfLeastSquaresFit.pdf) is a good resource for learning to propagate errors in the context of linear fitting.
Finally, repeat the fitting with the weighted errors (from the $\sigma(f)$ uncertainty formula) passed to `scipy.optimize.curve_fit()`.
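A sketch of that weighted fit; the $\sigma(f) = 0.03 + 0.03 f$ formula comes straight from the problem statement:

```python
import numpy as np
import scipy.optimize as fitter

current = np.arange(0, 2.3, 0.2)  # Amps
frequency = np.array([0.14, 0.60, 1.21, 1.94, 2.47, 3.07,
                      3.83, 4.16, 4.68, 5.60, 6.31, 6.78])  # MHz

def linear_model(x, slope, intercept):
    return slope * x + intercept

sigma_f = 0.03 + 0.03 * frequency  # per-point uncertainty grows with f

# Weighted least squares: passing sigma makes curve_fit minimize
# sum(((f - model) / sigma_f)**2); absolute_sigma=True treats the
# uncertainties as absolute rather than relative weights
popt, pcov = fitter.curve_fit(linear_model, current, frequency,
                              sigma=sigma_f, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on slope, intercept
print(popt, perr)
```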
---
```
%matplotlib inline
import gym
import itertools
import matplotlib
import numpy as np
import sys
import tensorflow as tf
import collections
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.cliff_walking import CliffWalkingEnv
from lib import plotting
matplotlib.style.use('ggplot')
env = CliffWalkingEnv()
class PolicyEstimator():
"""
Policy Function approximator.
"""
def __init__(self, learning_rate=0.01, scope="policy_estimator"):
with tf.variable_scope(scope):
self.state = tf.placeholder(tf.int32, [], "state")
self.action = tf.placeholder(dtype=tf.int32, name="action")
self.target = tf.placeholder(dtype=tf.float32, name="target")
# This is just table lookup estimator
state_one_hot = tf.one_hot(self.state, int(env.observation_space.n))
self.output_layer = tf.contrib.layers.fully_connected(
inputs=tf.expand_dims(state_one_hot, 0),
num_outputs=env.action_space.n,
activation_fn=None,
weights_initializer=tf.zeros_initializer)
self.action_probs = tf.squeeze(tf.nn.softmax(self.output_layer))
self.picked_action_prob = tf.gather(self.action_probs, self.action)
# Loss and train op
self.loss = -tf.log(self.picked_action_prob) * self.target
self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
self.train_op = self.optimizer.minimize(
self.loss, global_step=tf.contrib.framework.get_global_step())
def predict(self, state, sess=None):
sess = sess or tf.get_default_session()
return sess.run(self.action_probs, { self.state: state })
def update(self, state, target, action, sess=None):
sess = sess or tf.get_default_session()
feed_dict = { self.state: state, self.target: target, self.action: action }
_, loss = sess.run([self.train_op, self.loss], feed_dict)
return loss
class ValueEstimator():
"""
Value Function approximator.
"""
def __init__(self, learning_rate=0.1, scope="value_estimator"):
with tf.variable_scope(scope):
self.state = tf.placeholder(tf.int32, [], "state")
self.target = tf.placeholder(dtype=tf.float32, name="target")
# This is just table lookup estimator
state_one_hot = tf.one_hot(self.state, int(env.observation_space.n))
self.output_layer = tf.contrib.layers.fully_connected(
inputs=tf.expand_dims(state_one_hot, 0),
num_outputs=1,
activation_fn=None,
weights_initializer=tf.zeros_initializer)
self.value_estimate = tf.squeeze(self.output_layer)
self.loss = tf.squared_difference(self.value_estimate, self.target)
self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
self.train_op = self.optimizer.minimize(
self.loss, global_step=tf.contrib.framework.get_global_step())
def predict(self, state, sess=None):
sess = sess or tf.get_default_session()
return sess.run(self.value_estimate, { self.state: state })
def update(self, state, target, sess=None):
sess = sess or tf.get_default_session()
feed_dict = { self.state: state, self.target: target }
_, loss = sess.run([self.train_op, self.loss], feed_dict)
return loss
def actor_critic(env, estimator_policy, estimator_value, num_episodes, discount_factor=1.0):
"""
Actor Critic Algorithm. Optimizes the policy
function approximator using policy gradient.
Args:
env: OpenAI environment.
estimator_policy: Policy Function to be optimized
estimator_value: Value function approximator, used as a critic
num_episodes: Number of episodes to run for
discount_factor: Time-discount factor
Returns:
An EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards.
"""
# Keeps track of useful statistics
stats = plotting.EpisodeStats(
episode_lengths=np.zeros(num_episodes),
episode_rewards=np.zeros(num_episodes))
Transition = collections.namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])
for i_episode in range(num_episodes):
# Reset the environment and pick the first action
state = env.reset()
episode = []
# One step in the environment
for t in itertools.count():
# Take a step
action_probs = estimator_policy.predict(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
next_state, reward, done, _ = env.step(action)
# Keep track of the transition
episode.append(Transition(
state=state, action=action, reward=reward, next_state=next_state, done=done))
# Update statistics
stats.episode_rewards[i_episode] += reward
stats.episode_lengths[i_episode] = t
# Calculate TD Target
value_next = estimator_value.predict(next_state)
td_target = reward + discount_factor * value_next
td_error = td_target - estimator_value.predict(state)
# Update the value estimator
estimator_value.update(state, td_target)
# Update the policy estimator
# using the td error as our advantage estimate
estimator_policy.update(state, td_error, action)
# Print out which step we're on, useful for debugging.
print("\rStep {} @ Episode {}/{} ({})".format(
t, i_episode + 1, num_episodes, stats.episode_rewards[i_episode - 1]), end="")
if done:
break
state = next_state
return stats
tf.reset_default_graph()
global_step = tf.Variable(0, name="global_step", trainable=False)
policy_estimator = PolicyEstimator()
value_estimator = ValueEstimator()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())  # initialize_all_variables() is deprecated
# Note, due to randomness in the policy the number of episodes you need to learn a good
# policy may vary. ~300 seemed to work well for me.
stats = actor_critic(env, policy_estimator, value_estimator, 300)
plotting.plot_episode_stats(stats, smoothing_window=10)
```
---
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Goal" data-toc-modified-id="Goal-1"><span class="toc-item-num">1 </span>Goal</a></span></li><li><span><a href="#Var" data-toc-modified-id="Var-2"><span class="toc-item-num">2 </span>Var</a></span><ul class="toc-item"><li><span><a href="#Init" data-toc-modified-id="Init-2.1"><span class="toc-item-num">2.1 </span>Init</a></span></li></ul></li><li><span><a href="#DeepMAsED-SM" data-toc-modified-id="DeepMAsED-SM-3"><span class="toc-item-num">3 </span>DeepMAsED-SM</a></span><ul class="toc-item"><li><span><a href="#Config" data-toc-modified-id="Config-3.1"><span class="toc-item-num">3.1 </span>Config</a></span></li><li><span><a href="#Run" data-toc-modified-id="Run-3.2"><span class="toc-item-num">3.2 </span>Run</a></span></li></ul></li><li><span><a href="#--WAITING--" data-toc-modified-id="--WAITING---4"><span class="toc-item-num">4 </span>--WAITING--</a></span></li><li><span><a href="#Summary" data-toc-modified-id="Summary-5"><span class="toc-item-num">5 </span>Summary</a></span><ul class="toc-item"><li><span><a href="#Communities" data-toc-modified-id="Communities-5.1"><span class="toc-item-num">5.1 </span>Communities</a></span></li><li><span><a href="#Feature-tables" data-toc-modified-id="Feature-tables-5.2"><span class="toc-item-num">5.2 </span>Feature tables</a></span><ul class="toc-item"><li><span><a href="#No.-of-contigs" data-toc-modified-id="No.-of-contigs-5.2.1"><span class="toc-item-num">5.2.1 </span>No. of contigs</a></span></li><li><span><a href="#Misassembly-types" data-toc-modified-id="Misassembly-types-5.2.2"><span class="toc-item-num">5.2.2 </span>Misassembly types</a></span></li></ul></li></ul></li><li><span><a href="#sessionInfo" data-toc-modified-id="sessionInfo-6"><span class="toc-item-num">6 </span>sessionInfo</a></span></li></ul></div>
# Goal
* Replicate metagenome assemblies using intra-spec training genome dataset
* Richness = 0.5 (50% of all ref genomes used)
# Var
```
ref_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/GTDB_ref_genomes/intraSpec/'
ref_file = file.path(ref_dir, 'GTDBr86_genome-refs_train_clean.tsv')
work_dir = '/ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p5/'
# params
pipeline_dir = '/ebio/abt3_projects/databases_no-backup/bin/deepmased/DeepMAsED-SM/'
```
## Init
```
library(dplyr)
library(tidyr)
library(ggplot2)
library(data.table)
source('/ebio/abt3_projects/software/dev/DeepMAsED/bin/misc_r_functions/init.R')
#' "cat {file}" in R
cat_file = function(file_name){
cmd = paste('cat', file_name, collapse=' ')
system(cmd, intern=TRUE) %>% paste(collapse='\n') %>% cat
}
```
# DeepMAsED-SM
## Config
```
config_file = file.path(work_dir, 'config.yaml')
cat_file(config_file)
```
## Run
```
(snakemake_dev) @ rick:/ebio/abt3_projects/databases_no-backup/bin/deepmased/DeepMAsED-SM
$ screen -L -S DM-intraS-rich0.5 ./snakemake_sge.sh /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p5/config.yaml cluster.json /ebio/abt3_projects/databases_no-backup/DeepMAsED/train_runs/intra-species/diff_richness/n1000_r6_rich0p5/SGE_log 48
```
# Summary
## Communities
```
comm_files = list.files(file.path(work_dir, 'MGSIM'), 'comm_wAbund.txt', full.names=TRUE, recursive=TRUE)
comm_files %>% length %>% print
comm_files %>% head
comms = list()
for(F in comm_files){
df = read.delim(F, sep='\t')
df$Rep = basename(dirname(F))
comms[[F]] = df
}
comms = do.call(rbind, comms)
rownames(comms) = 1:nrow(comms)
comms %>% dfhead
p = comms %>%
mutate(Perc_rel_abund = ifelse(Perc_rel_abund == 0, 1e-5, Perc_rel_abund)) %>%
group_by(Taxon) %>%
summarize(mean_perc_abund = mean(Perc_rel_abund),
sd_perc_abund = sd(Perc_rel_abund)) %>%
ungroup() %>%
mutate(neg_sd_perc_abund = mean_perc_abund - sd_perc_abund,
pos_sd_perc_abund = mean_perc_abund + sd_perc_abund,
neg_sd_perc_abund = ifelse(neg_sd_perc_abund <= 0, 1e-5, neg_sd_perc_abund)) %>%
mutate(Taxon = Taxon %>% reorder(-mean_perc_abund)) %>%
ggplot(aes(Taxon, mean_perc_abund)) +
geom_linerange(aes(ymin=neg_sd_perc_abund, ymax=pos_sd_perc_abund),
size=0.3, alpha=0.3) +
geom_point(size=0.5, alpha=0.4, color='red') +
labs(y='% abundance') +
theme_bw() +
theme(
axis.text.x = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_blank(),
panel.grid.minor.x = element_blank(),
panel.grid.minor.y = element_blank()
)
dims(10,2.5)
plot(p)
dims(10,2.5)
plot(p + scale_y_log10())
```
## Feature tables
```
feat_files = list.files(file.path(work_dir, 'map'), 'features.tsv.gz', full.names=TRUE, recursive=TRUE)
feat_files %>% length %>% print
feat_files %>% head
feats = list()
for(F in feat_files){
cmd = glue::glue('gunzip -c {F}', F=F)
df = fread(cmd, sep='\t') %>%
distinct(contig, assembler, Extensive_misassembly)
df$Rep = basename(dirname(dirname(F)))
feats[[F]] = df
}
feats = do.call(rbind, feats)
rownames(feats) = 1:nrow(feats)
feats %>% dfhead
```
### No. of contigs
```
feats_s = feats %>%
group_by(assembler, Rep) %>%
summarize(n_contigs = n_distinct(contig)) %>%
ungroup
feats_s$n_contigs %>% summary
```
### Misassembly types
```
p = feats %>%
mutate(Extensive_misassembly = ifelse(Extensive_misassembly == '', 'None',
Extensive_misassembly)) %>%
group_by(Extensive_misassembly, assembler, Rep) %>%
summarize(n = n()) %>%
ungroup() %>%
ggplot(aes(Extensive_misassembly, n, color=assembler)) +
geom_boxplot() +
scale_y_log10() +
labs(x='metaQUAST extensive mis-assembly', y='Count') +
coord_flip() +
theme_bw() +
theme(
axis.text.x = element_text(angle=45, hjust=1)
)
dims(8,4)
plot(p)
```
# sessionInfo
```
sessionInfo()
```
```
import sys
sys.path.append('../code')
from utils import plot_utils
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import glob
import torch
import numpy as np
import pickle
import stg_node
from model.dyn_stg import SpatioTemporalGraphCVAEModel
from model.model_registrar import ModelRegistrar
from utils.scene_utils import create_batch_scene_graph
import timeit
import matplotlib.pyplot as plt
from scipy.integrate import cumtrapz
from PIL import Image
import imageio
import random
from collections import defaultdict
```
# Options
```
hyperparams = {
### Training
## Batch Sizes
'batch_size': 16,
## Learning Rate
'learning_rate': 0.001,
'min_learning_rate': 0.00001,
'learning_decay_rate': 0.9999,
## Optimizer
# 'optimizer': tf.train.AdamOptimizer,
'optimizer_kwargs': {},
'grad_clip': 1.0,
### Prediction
'minimum_history_length': 5, # 0.5 seconds
'prediction_horizon': 15, # 1.5 seconds (at least as far as the loss function is concerned)
### Variational Objective
## Objective Formulation
'alpha': 1,
'k': 3, # number of samples from z during training
'k_eval': 50, # number of samples from z during evaluation
'use_iwae': False, # only matters if alpha = 1
'kl_exact': True, # relevant only if alpha = 1
## KL Annealing/Bounding
'kl_min': 0.07,
'kl_weight': 1.0,
'kl_weight_start': 0.0001,
'kl_decay_rate': 0.99995,
'kl_crossover': 8000,
'kl_sigmoid_divisor': 6,
### Network Parameters
## RNNs/Summarization
'rnn_kwargs': {"dropout_keep_prob": 0.75},
'MLP_dropout_keep_prob': 0.9,
'rnn_io_dropout_keep_prob': 1.0,
'enc_rnn_dim_multiple_inputs': 8,
'enc_rnn_dim_edge': 8,
'enc_rnn_dim_edge_influence': 8,
'enc_rnn_dim_history': 32,
'enc_rnn_dim_future': 32,
'dec_rnn_dim': 128,
'dec_GMM_proj_MLP_dims': None,
'sample_model_during_dec': True,
'dec_sample_model_prob_start': 0.0,
'dec_sample_model_prob_final': 0.0,
'dec_sample_model_prob_crossover': 20000,
'dec_sample_model_prob_divisor': 6,
## q_z_xy (encoder)
'q_z_xy_MLP_dims': None,
## p_z_x (encoder)
'p_z_x_MLP_dims': 16,
## p_y_xz (decoder)
'fuzz_factor': 0.05,
'GMM_components': 16,
'log_sigma_min': -10,
'log_sigma_max': 10,
'log_p_yt_xz_max': 50,
### Discrete Latent Variable
'N': 2,
'K': 5,
## Relaxed One-Hot Temperature Annealing
'tau_init': 2.0,
'tau_final': 0.001,
'tau_decay_rate': 0.9999,
## Logit Clipping
'use_z_logit_clipping': False,
'z_logit_clip_start': 0.05,
'z_logit_clip_final': 3.0,
'z_logit_clip_crossover': 8000,
'z_logit_clip_divisor': 6
}
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
device = 'cpu'
data_dir = './data'
eval_data_dict_path = 'eval_data_dict_2_files_100_rows.pkl'
model_dir = './logs/models_28_Jan_2019_15_35_05'
robot_node = stg_node.STGNode('Al Horford', 'HomeC')
hyperparams['dynamic_edges'] = 'yes'
hyperparams['edge_addition_filter'] = [0.04, 0.06, 0.09, 0.12, 0.17, 0.25, 0.35, 0.5, 0.7, 1.0]
hyperparams['edge_removal_filter'] = [1.0, 0.7, 0.5, 0.35, 0.25, 0.17, 0.12, 0.09, 0.06, 0.04]
hyperparams['edge_state_combine_method'] = 'sum'
hyperparams['edge_influence_combine_method'] = 'bi-rnn'
hyperparams['edge_radius'] = 2.0 * 3.28084
if not torch.cuda.is_available() or device == 'cpu':
    device = torch.device('cpu')
else:
    if torch.cuda.device_count() == 1:
        # If you have CUDA_VISIBLE_DEVICES set, which you should,
        # then this will prevent leftover flag arguments from
        # messing with the device allocation.
        device = 'cuda:0'
    device = torch.device(device)
print(device)
```
# Visualization
```
with open(os.path.join(data_dir, eval_data_dict_path), 'rb') as f:
    eval_data_dict = pickle.load(f, encoding='latin1')
model_registrar = ModelRegistrar(model_dir, device)
model_registrar.load_models(699)
model_registrar = model_registrar.cpu()
# This keeps colors consistent across timesteps, rerun this cell if you want to reset the colours.
color_dict = defaultdict(dict)
plot_utils.plot_predictions(eval_data_dict, model_registrar,
robot_node, hyperparams,
device, dt=eval_data_dict['dt'], max_speed=40.76,
color_dict=color_dict,
data_id=0, t_predict=10,
figsize=(10, 10),
ylim=(0, 40), xlim=(0, 40),
num_samples=400,
radius_of_influence=hyperparams['edge_radius'],
node_circle_size=0.45,
circle_edge_width=1.0, line_alpha=0.9,
line_width=0.2, edge_width=4,
dpi=300, tick_fontsize=16,
robot_circle=None, omit_names=False,
legend_loc='best', title='',
xlabel='Longitudinal Court Position (ft)',
ylabel='Lateral Court Position (ft)'
)
```
```
dataset=0 # [Fashion MNIST, CIFAR10]
import torch
from torchvision import transforms, datasets
import numpy as np
import matplotlib.pyplot as plt
print('Done')
# get x and y axis to quantify HP/LP structure
def get_axes(size_im):
    f_axis_0=np.arange(size_im)
    f_axis_0[f_axis_0>np.floor(size_im/2)]=np.flip(np.arange(np.ceil(size_im/2)-1)+1)
    f_axis_0=np.fft.fftshift(f_axis_0)
    f_axis_1=np.arange(size_im)
    f_axis_1[f_axis_1>np.floor(size_im/2)]=np.flip(np.arange(np.ceil(size_im/2)-1)+1)
    f_axis_1=np.fft.fftshift(f_axis_1)
    Y,X=np.meshgrid(f_axis_0/size_im,f_axis_1/size_im)
    return Y,X
# Define dataloader for training
data_transform=transforms.Compose([transforms.ToTensor()])
if dataset==0:
    FMNIST_dataset=datasets.FashionMNIST(root='.', train=True,\
        transform=data_transform, download=True)
    dataset_loader=torch.utils.data.DataLoader(FMNIST_dataset)
elif dataset==1:
    CIFAR10_dataset=datasets.CIFAR10(root='.', train=True,\
        transform=data_transform, download=True)
    dataset_loader=torch.utils.data.DataLoader(CIFAR10_dataset)
print('Defined data loader...')
n_classes=10
if dataset==0:
    size_im=28
elif dataset==1:
    size_im=32
h=np.hamming(size_im)
h_win=np.outer(h,h)
Y,X=get_axes(size_im)
euc_dist=np.sqrt(X**2+Y**2)
mean_fft=np.zeros((size_im,size_im,n_classes))
n_samples=np.zeros(n_classes)
all_mean_f_response=np.zeros(len(dataset_loader))
class_mean_f_response=np.zeros((int(len(dataset_loader)/n_classes),n_classes))
print('Started loop')
image_processed=0
for X, y in dataset_loader:
    if dataset==0:
        X=np.squeeze(X.numpy())
    elif dataset==1:
        X=np.squeeze(np.mean(X.numpy(),axis=1))
    # GAUSSIAN BLUR X
    # Get fft of input image
    X=X*h_win
    fft_sample=np.abs(np.fft.fft2(X))**2
    mean_fft[:,:,y]+=fft_sample # unnormalized
    fft_sample=np.fft.fftshift(fft_sample/np.sum(fft_sample)) # normalized
    # Get HP/LP histogram
    mean_f_response=np.mean(euc_dist*fft_sample)
    all_mean_f_response[image_processed]=mean_f_response
    class_mean_f_response[int(n_samples[y]),y]=mean_f_response
    n_samples[y]+=1
    image_processed+=1
    if image_processed%10000==0:
        print('Done with '+str(image_processed)+' images\n')
hist,bins=np.histogram(all_mean_f_response,bins=100)
if dataset==0:
    classes=['T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle boot']
elif dataset==1:
    classes=['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
print(n_samples)
plt.figure(1)
for i in range(n_classes):
    plt.imshow(np.log10(np.fft.fftshift(mean_fft[:,:,i]/n_samples[i])))
    plt.title(classes[i])
    plt.colorbar()
    plt.pause(0.1)
plt.figure(2)
plt.plot(bins[0:-1],hist)
plt.xlim((0,0.00034))
plt.figure(3)
for i in range(n_classes):
    class_hist,class_bins=np.histogram(class_mean_f_response[:,i],bins=100)
    plt.plot(class_bins[0:-1],class_hist,label=classes[i])
plt.xlim((0,0.00034))
plt.legend()
```
# <center> Pandas, part 2 </center>
### By the end of this talk, you will be able to
- modify/clean columns
- evaluate the runtime of your scripts
- merge and append data frames
### <font color='LIGHTGRAY'>By the end of this talk, you will be able to</font>
- **modify/clean columns**
- **evaluate the runtime of your scripts**
- <font color='LIGHTGRAY'>merge and append data frames</font>
## What is data cleaning?
- the data you scrape from sites are often not in a format you can directly work with
  - the temp column in weather.csv contains the temperature value but also other strings
  - the year column in imdb.csv contains the year the movie came out but also - or () and roman numbers
- data cleaning means that you bring the data in a format that's easier to analyze/visualize
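As a tiny illustration (the values below are made up, not taken from weather.csv), cleaning simply means turning strings like these into numbers:

```python
# hypothetical messy temperature strings, similar in spirit to the temp column
raw = ['72°F', '68°F', 'N/A']

# strip the unit characters and convert; keep None for missing values
cleaned = [int(x.rstrip('°F')) if x != 'N/A' else None for x in raw]
print(cleaned)  # [72, 68, None]
```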
## How is it done?
- you read in your data into a pandas data frame and either
  - modify the values of a column
  - create a new column that contains the modified values
- either works, I'll show you how to do both
- ADVICE: when you save the modified data frame, it is usually good practice to not overwrite the original csv or excel file that you scraped
  - save the modified data into a new file instead
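For instance (file and column names here are hypothetical), a sketch of writing the cleaned frame to a fresh file with pandas:

```python
import pandas as pd

# toy stand-in for the scraped imdb data
df = pd.DataFrame({'runtime': ['100 min', '95 min']})
df['runtime min'] = [float(x[:-4]) for x in df['runtime']]

# save to a NEW file rather than overwriting the original scraped csv
df.to_csv('imdb_clean.csv', index=False)
```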
## Ways to modify a column
- there are several ways to do this
  - with a for loop
  - with a list comprehension
  - using the .apply method
- we will compare run times
  - investigate which approach is faster
  - important when you work with large datasets (>1,000,000 lines)
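Before touching the real data, the loop-vs-comprehension comparison can be sketched with the standard library alone (toy data, illustrative only):

```python
import timeit

# toy runtime strings standing in for the imdb column
data = ['100 min', '95 min', None] * 10000

def with_loop():
    out = []
    for x in data:
        if type(x) == str:
            out.append(float(x[:-4]))
        else:
            out.append(0e0)
    return out

def with_comprehension():
    return [float(x[:-4]) if type(x) == str else 0e0 for x in data]

# each timing runs the cleaning 10 times
print('loop:         ', timeit.timeit(with_loop, number=10))
print('comprehension:', timeit.timeit(with_comprehension, number=10))
```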
## The task: runtime column in imdb.csv
- the column is a string in the format 'n min', where n is the length of the movie in minutes
- for plotting purposes, it is better if the runtime is not a string but a number (float or int)
  - you can't create a histogram of runtime using strings
- task: clean the runtime column and convert it to float
## Approach 1: for loop
```
# read in the data
import pandas as pd
df_imdb = pd.read_csv('data/imdb.csv')
print(df_imdb.head())
import time
start = time.time() # start the clock
for i in range(100): # repeat everything 100 times to get better estimate of elapsed time
    # the actual code to clean the runtime column comes here
    runtime_lst = []
    for x in df_imdb['runtime']:
        if type(x) == str:
            runtime = float(x[:-4].replace(',',''))
        else:
            runtime = 0e0
        runtime_lst.append(runtime)
    df_imdb['runtime min'] = runtime_lst
end = time.time() # stop the timer
print('cpu time = ',end-start,'sec')
```
## Approach 2: list comprehension
```
start = time.time() # start the clock
for i in range(100): # repeat everything 100 times to get better estimate of elapsed time
    # the actual code to clean the runtime column comes here
    df_imdb['runtime min'] = [float(x[:-4].replace(',','')) if type(x) == str else 0e0 for x in df_imdb['runtime']]
end = time.time() # stop the timer
print('cpu time = ',end-start,'sec')
```
## Approach 3: the .apply method
```
def clean_runtime(x):
    if type(x) == str:
        runtime = float(x[:-4].replace(',',''))
    else:
        runtime = 0e0
    return runtime

start = time.time() # start the clock
for i in range(100): # repeat everything 100 times to get better estimate of elapsed time
    # the actual code to clean the runtime column comes here
    df_imdb['runtime min'] = df_imdb['runtime'].apply(clean_runtime)
end = time.time() # stop the timer
print('cpu time = ',end-start,'sec')
```
## Summary
- the for loop is slower
- the list comprehension and the apply method are equally quick
- it is down to personal preference to choose between list comprehension and .apply
- **the same ranking is not guaranteed for a different task!**
- **always try a few different approaches if runtime is an issue (you work with large data)!**
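One more approach worth timing (not shown above) is pandas' vectorized string methods; here is a sketch on toy data:

```python
import pandas as pd

df = pd.DataFrame({'runtime': ['100 min', '1,000 min', None]})

# vectorized: strip the unit, drop thousands separators, convert,
# and fall back to 0 for missing values (mirroring the loop versions)
df['runtime min'] = (
    pd.to_numeric(df['runtime']
                  .str.replace(' min', '', regex=False)
                  .str.replace(',', '', regex=False))
    .fillna(0e0)
)
print(df['runtime min'].tolist())  # [100.0, 1000.0, 0.0]
```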
## Exercise 1
Clean the `temp` column in the `data/weather.csv` file. The new temperature column should be an integer or a float. Work through at least one of the approaches we discussed. If you have time, work through all three methods. If you have even more time, look for other approaches to clean a column and time it using the `runtime` column of the imdb.csv. Try to beat my cpu time and find an even faster approach! :)
### <font color='LIGHTGRAY'>By the end of this talk, you will be able to</font>
- <font color='LIGHTGRAY'>modify/clean columns</font>
- <font color='LIGHTGRAY'>evaluate the runtime of your scripts</font>
- **merge and append data frames**
### How to merge dataframes?
Merge - data are distributed in multiple files
```
# We have two datasets from two hospitals
hospital1 = {'ID':['ID1','ID2','ID3','ID4','ID5','ID6','ID7'],'col1':[5,8,2,6,0,2,5],'col2':['y','j','w','b','a','b','t']}
df1 = pd.DataFrame(data=hospital1)
print(df1)
hospital2 = {'ID':['ID2','ID5','ID6','ID10','ID11'],'col3':[12,76,34,98,65],'col2':['q','u','e','l','p']}
df2 = pd.DataFrame(data=hospital2)
print(df2)
# we are interested in only patients from hospital1
#df_left = df1.merge(df2,how='left',on='ID') # IDs from the left dataframe (df1) are kept
#print(df_left)
# we are interested in only patients from hospital2
#df_right = df1.merge(df2,how='right',on='ID') # IDs from the right dataframe (df2) are kept
#print(df_right)
# we are interested in patiens who were in both hospitals
#df_inner = df1.merge(df2,how='inner',on='ID') # merging on IDs present in both dataframes
#print(df_inner)
# we are interested in all patients who visited at least one of the hospitals
#df_outer = df1.merge(df2,how='outer',on='ID') # merging on IDs present in any dataframe
#print(df_outer)
```
### How to append dataframes?
Append - new data comes in over a period of time. E.g., one file per month/quarter/fiscal year etc.
You want to combine these files into one data frame.
```
#df_append = df1.append(df2) # note that rows with ID2, ID5, and ID6 are duplicated! Indices are duplicated too.
#print(df_append)
df_append = df1.append(df2,ignore_index=True) # note that rows with ID2, ID5, and ID6 are duplicated!
#print(df_append)
d3 = {'ID':['ID23','ID94','ID56','ID17'],'col1':['rt','h','st','ne'],'col2':[23,86,23,78]}
df3 = pd.DataFrame(data=d3)
#print(df3)
df_append = df1.append([df2,df3],ignore_index=True) # multiple dataframes can be appended to df1
print(df_append)
```
### Exercise 2
- Create three data frames from raw_data_1, 2, and 3.
- Append the first two data frames and assign it to df_append.
- Merge the third data frame with df_append such that only subject_ids from df_append are present.
- Assign the new data frame to df_merge.
- How many rows and columns do we have in df_merge?
```
raw_data_1 = {
'subject_id': ['1', '2', '3', '4', '5'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}
raw_data_2 = {
'subject_id': ['6', '7', '8', '9', '10'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']}
raw_data_3 = {
'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'],
'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}
```
### Always check that the resulting dataframe is what you wanted to end up with!
- small toy datasets are ideal to test your code.
### If you need to do a more complicated dataframe operation, check out pd.concat()!
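A minimal sketch of `pd.concat` on the same kind of toy data (the column values are made up):

```python
import pandas as pd

df1 = pd.DataFrame({'ID': ['ID1', 'ID2'], 'col1': [5, 8]})
df2 = pd.DataFrame({'ID': ['ID3', 'ID4'], 'col1': [2, 6]})

# concat stacks the frames; ignore_index=True rebuilds a clean 0..n-1 index
df_concat = pd.concat([df1, df2], ignore_index=True)
print(df_concat)
```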
# Results: mutagenesis2 Original
<b>MIL</b> <i>stratified k-fold validation</i> is performed.
Metrics: <br>
- AUC
- Accuracy
### Import Libraries
```
import sys,os
import warnings
os.chdir('/Users/josemiguelarrieta/Documents/MILpy')
sys.path.append(os.path.realpath('..'))
from sklearn.utils import shuffle
import random as rand
import numpy as np
from data import load_data
warnings.filterwarnings('ignore')
from MILpy.functions.mil_cross_val import mil_cross_val
#Import Algorithms
from MILpy.Algorithms.simpleMIL import simpleMIL
from MILpy.Algorithms.MILBoost import MILBoost
from MILpy.Algorithms.maxDD import maxDD
from MILpy.Algorithms.CKNN import CKNN
from MILpy.Algorithms.EMDD import EMDD
from MILpy.Algorithms.MILES import MILES
from MILpy.Algorithms.BOW import BOW
```
### Load data
```
bags,labels,X = load_data('mutagenesis2_original')
folds = 5
runs = 5
```
#### Simple MIL [max]
```
SMILa = simpleMIL()
parameters_smil = {'type': 'max'}
print '\n========= SIMPLE MIL RESULT [MAX] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    #Shuffle Data
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds, parameters=parameters_smil, timer = True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Simple MIL [min]
```
parameters_smil = {'type': 'min'}
print '\n========= SIMPLE MIL RESULT [MIN] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Simple MIL [extreme]
```
parameters_smil = {'type': 'extreme'}
print '\n========= SIMPLE MIL RESULT [EXTREME] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    #Shuffle Data
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Simple MIL [average]
```
parameters_smil = {'type': 'average'}
print '\n========= SIMPLE MIL RESULT [AVERAGE] ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=SMILa, folds=folds,parameters=parameters_smil, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Bag of Words
```
bow_classifier = BOW()
parameters_bow = {'k':100,'covar_type':'diag','n_iter':20}
print '\n========= BAG OF WORDS RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=bow_classifier, folds=folds,parameters=parameters_bow, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Citation KNN
```
cknn_classifier = CKNN()
parameters_cknn = {'references': 3, 'citers': 5}
print '\n========= CKNN RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=cknn_classifier, folds=folds,parameters=parameters_cknn, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Diverse Density
```
maxDD_classifier = maxDD()
print '\n========= DIVERSE DENSITY RESULT========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=maxDD_classifier, folds=folds,parameters={}, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### EM-DD
```
emdd_classifier = EMDD()
print '\n========= EM-DD RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels.ravel(), model=emdd_classifier, folds=folds,parameters={}, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### MILBoost
```
milboost_classifier = MILBoost()
print '\n========= MILBOOST RESULT ========='
AUC = []
ACCURACIE=[]
for i in range(runs):
    print '\n run #'+ str(i)
    bags,labels = shuffle(bags, labels, random_state=rand.randint(0, 100))
    accuracie, results_accuracie, auc,results_auc, elapsed = mil_cross_val(bags=bags,labels=labels, model=milboost_classifier, folds=folds,parameters={}, timer=True)
    print '\n'+'AUC: ' + str(auc)+'\n'+'Accuracie: '+ str(accuracie)+'\n'+'Elapsed: '+ str(round(elapsed,2))
    AUC.append(auc)
    ACCURACIE.append(accuracie)
print '\n MEAN AUC: '+ str(np.mean(AUC)) + '\n MEAN ACCURACIE: '+ str(np.mean(ACCURACIE))
```
#### Miles
```
#Pending
```
# Course 2 week 1 lecture notebook Exercise 03
<a name="combine-features"></a>
## Combine features
In this exercise, you will practice how to combine features in a pandas dataframe. This will help you in the graded assignment at the end of the week.
In addition, you will explore why it makes more sense to multiply two features rather than add them in order to create interaction terms.
First, you will generate some data to work with.
```
# Import pandas
import pandas as pd
# Import a pre-defined function that generates data
from utils import load_data
# Generate features and labels
X, y = load_data(100)
X.head()
feature_names = X.columns
feature_names
```
### Combine strings
Even though you can visually see feature names and type the name of the combined feature, you can programmatically create interaction features so that you can apply this to any dataframe.
Use f-strings to combine two strings. There are other ways to do this, but Python's f-strings are quite useful.
```
name1 = feature_names[0]
name2 = feature_names[1]
print(f"name1: {name1}")
print(f"name2: {name2}")
# Combine the names of two features into a single string, separated by '_&_' for clarity
combined_names = f"{name1}_&_{name2}"
combined_names
```
### Add two columns
- Add the values from two columns and put them into a new column.
- You'll do something similar in this week's assignment.
```
X[combined_names] = X['Age'] + X['Systolic_BP']
X.head(2)
```
### Why we multiply two features instead of adding
Why do you think it makes more sense to multiply two features together rather than adding them together?
Please take a look at two features, and compare what you get when you add them, versus when you multiply them together.
```
# Generate a small dataset with two features
df = pd.DataFrame({'v1': [1,1,1,2,2,2,3,3,3],
'v2': [100,200,300,100,200,300,100,200,300]
})
# add the two features together
df['v1 + v2'] = df['v1'] + df['v2']
# multiply the two features together
df['v1 x v2'] = df['v1'] * df['v2']
df
```
It may not be immediately apparent how adding or multiplying makes a difference; either way you get unique values for each of these operations.
To view the data in a more helpful way, rearrange the data (pivot it) so that:
- feature 1 is the row index
- feature 2 is the column name.
- Then set the sum of the two features as the value.
Display the resulting data in a heatmap.
```
# Import seaborn in order to use a heatmap plot
import seaborn as sns
# Pivot the data so that v1 + v2 is the value
df_add = df.pivot(index='v1',
columns='v2',
values='v1 + v2'
)
print("v1 + v2\n")
display(df_add)
print()
sns.heatmap(df_add);
```
Notice that it doesn't seem like you can easily distinguish clearly when you vary feature 1 (which ranges from 1 to 3), since feature 2 is so much larger in magnitude (100 to 300). This is because you added the two features together.
#### View the 'multiply' interaction
Now pivot the data so that:
- feature 1 is the row index
- feature 2 is the column name.
- The values are 'v1 x v2'
Use a heatmap to visualize the table.
```
df_mult = df.pivot(index='v1',
columns='v2',
values='v1 x v2'
)
print('v1 x v2')
display(df_mult)
print()
sns.heatmap(df_mult);
```
Notice how when you multiply the features, the heatmap looks more like a 'grid' shape instead of three vertical bars.
This means that you are more clearly able to make a distinction as feature 1 varies from 1 to 2 to 3.
### Discussion
When you find the interaction between two features, you ideally hope to see how varying one feature makes an impact on the interaction term. This is better achieved by multiplying the two features together rather than adding them together.
Another way to think of this is that you want to separate the feature space into a "grid", which you can do by multiplying the features together.
In this week's assignment, you will create interaction terms!
### This is the end of this practice section.
Please continue on with the lecture videos!
---
Docs: https://doc.demarches-simplifiees.fr/pour-aller-plus-loin/graphql and https://demarches-simplifiees-graphql.netlify.app/query.doc.html
```
import requests
import json
import pandas as pd
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
import configparser
config = configparser.ConfigParser()
config.read('./secret.ini')
arr = []
endCursor = ''
hasNextPage = True
while(hasNextPage):
    query = """query{
      demarche(number: 30928) {
        id
        dossiers(state:accepte after:\""""+endCursor+"""\") {
          nodes {
            champs {
              label
              stringValue
            }
            annotations {
              label
              stringValue
            }
            number
            id
            state
            demandeur {
              ... on PersonnePhysique {
                civilite
                nom
                prenom
              }
              ... on PersonneMorale {
                siret
                codePostal
                naf
              }
            }
            groupeInstructeur{
              label
            }
          }
          pageInfo {
            hasNextPage
            endCursor
          }
        }
      }
    }"""
    headers = {"Authorization": "Bearer "+config['DS']['bearer']}
    url = 'https://www.demarches-simplifiees.fr/api/v2/graphql'
    r = requests.post(url, headers=headers, json={'query': query})
    print(r.status_code)
    data = r.json()
    hasNextPage = data['data']['demarche']['dossiers']['pageInfo']['hasNextPage']
    endCursor = data['data']['demarche']['dossiers']['pageInfo']['endCursor']
    for d in data['data']['demarche']['dossiers']['nodes']:
        mydict = {}
        mydict['number'] = d['number']
        mydict['id'] = d['id']
        mydict['state'] = d['state']
        mydict['siret'] = d['demandeur']['siret']
        mydict['codePostal'] = d['demandeur']['codePostal']
        mydict['naf'] = d['demandeur']['naf']
        mydict['groupe_instructeur'] = d['groupeInstructeur']['label']
        for c in d['champs']:
            mydict[c['label']] = c['stringValue']
        for a in d['annotations']:
            mydict[a['label']] = a['stringValue']
        arr.append(mydict)
df = pd.DataFrame(arr)
df.shape
df.state.value_counts()
dffinal = df[['id','siret','naf','state',
'Montant total du prêt demandé',
'Montant proposé','Durée du prêt','Quelle forme prend l\'aide ?',
'codePostal','Quels sont vos effectifs ?',
'groupe_instructeur','Département']]
dffinal = dffinal.rename(columns={
'naf':'code_naf',
'state':'statut',
'Montant total du prêt demandé':'montant_demande',
'Montant proposé':'montant',
'Quelle forme prend l\'aide ?':'type_aide',
'Durée du prêt':'duree',
'Quels sont vos effectifs ?':'effectifs',
'Département':'departement'
})
dffinal.effectifs = dffinal.effectifs.astype(float)
dffinal[dffinal['effectifs'] < 0]
dffinal.montant = dffinal.montant.astype(float)
dffinal.montant.sum()
dffinal['dep'] = dffinal['departement'].apply(lambda x: x.split(" - ")[0])
import time
siret_for_api = dffinal.siret.unique()
# For each SIRET, call the API entreprise to see whether we can retrieve
# information at the SIRET level
arr = []
i = 0
for siret in siret_for_api:
    # Do not overload the API, so wait between requests (there is a quota not to exceed)
    # This takes a while...
    time.sleep(0.3)
    i = i + 1
    if(i%10 == 0):
        print(str(i))
    row = {}
    url = "https://entreprise.api.gouv.fr/v2/effectifs_annuels_acoss_covid/"+siret[:9]+"?non_diffusables=true&recipient=13001653800014&object=dgefp&context=%22dashboard%20aides%20aux%20entreprises%22&token="+config['API']['token']
    try:
        r = requests.get(url=url)
        try:
            mydict = r.json()
        except:
            pass
        if "effectifs_annuels" in mydict:
            arr.append(mydict)
    except requests.exceptions.ConnectionError:
        print("Error : "+siret)
dfapi = pd.DataFrame(arr)
dfapi = dfapi.drop_duplicates(subset=['siren'], keep='first')
dffinal['siren'] = dffinal['siret'].apply(lambda x: str(x)[:9])
dffinal = pd.merge(dffinal,dfapi,on='siren',how='left')
dffinal['effectifs_annuels'] = dffinal['effectifs_annuels'].apply(lambda x: int(float(x)) if x == x else x)
dffinal.loc[dffinal.effectifs_annuels.isna(),'effectifs_annuels']=dffinal['effectifs']
reg = pd.read_csv("https://raw.githubusercontent.com/etalab/dashboard-aides-entreprises/master/utils/region2019.csv",dtype=str)
dep = pd.read_csv("https://raw.githubusercontent.com/etalab/dashboard-aides-entreprises/master/utils/departement2019.csv",dtype=str)
dep = dep[['dep','reg']]
reg = reg[['reg','libelle']]
dep = pd.merge(dep,reg,on='reg',how='left')
dffinal = pd.merge(dffinal,dep,on='dep',how='left')
dffinal = dffinal[['montant','effectifs_annuels','reg','libelle','type_aide']]
dffinal = dffinal.rename(columns={'effectifs_annuels':'effectifs'})
dffinal.effectifs = dffinal.effectifs.astype(float)
dffinal.montant = dffinal.montant.astype(float)
arr = []
mydict = {}
mydict['type'] = []
mydict2 = {}
mydict2['libelle'] = 'Prêt à taux bonifié'
mydict2['nombre'] = str(dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].shape[0])
mydict2['montant'] = str(dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].montant.sum())
mydict2['effectifs'] = str(dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].effectifs.sum())
mydict['type'].append(mydict2)
mydict2 = {}
mydict2['libelle'] = 'Avance remboursable'
mydict2['nombre'] = str(dffinal[dffinal['type_aide'] == 'Avance remboursable'].shape[0])
mydict2['montant'] = str(dffinal[dffinal['type_aide'] == 'Avance remboursable'].montant.sum())
mydict2['effectifs'] = str(dffinal[dffinal['type_aide'] == 'Avance remboursable'].effectifs.sum())
mydict['type'].append(mydict2)
mydict['nombre'] = str(dffinal.shape[0])
mydict['montant'] = str(dffinal.montant.sum())
mydict['effectifs'] = str(dffinal.effectifs.sum())
arr.append(mydict)
arr
with open('arpb-maille-national.json', 'w') as outfile:
json.dump(arr, outfile)
dffinal[dffinal['type_aide'] == 'Prêt à taux bonifié'].effectifs.sum()
arr = []
for r in reg.reg.unique():
if(dffinal[dffinal['reg'] == r].shape[0] > 0):
mydict = {}
mydict['type'] = []
mydict2 = {}
mydict2['libelle'] = 'Prêt à taux bonifié'
mydict2['nombre'] = str(dffinal[(dffinal['type_aide'] == 'Prêt à taux bonifié') & (dffinal['reg'] == r)].shape[0])
mydict2['montant'] = str(dffinal[(dffinal['type_aide'] == 'Prêt à taux bonifié') & (dffinal['reg'] == r)].montant.sum())
mydict2['effectifs'] = str(dffinal[(dffinal['type_aide'] == 'Prêt à taux bonifié') & (dffinal['reg'] == r)].effectifs.sum())
mydict['type'].append(mydict2)
mydict2 = {}
mydict2['libelle'] = 'Avance remboursable'
mydict2['nombre'] = str(dffinal[(dffinal['type_aide'] == 'Avance remboursable') & (dffinal['reg'] == r)].shape[0])
mydict2['montant'] = str(dffinal[(dffinal['type_aide'] == 'Avance remboursable') & (dffinal['reg'] == r)].montant.sum())
mydict2['effectifs'] = str(dffinal[(dffinal['type_aide'] == 'Avance remboursable') & (dffinal['reg'] == r)].effectifs.sum())
mydict['type'].append(mydict2)
mydict['nombre'] = str(dffinal[dffinal['reg'] == r].shape[0])
mydict['montant'] = str(dffinal[dffinal['reg'] == r].montant.sum())
mydict['effectifs'] = str(dffinal[dffinal['reg'] == r].effectifs.sum())
else:
mydict = {}
mydict['type'] = None
mydict['nombre'] = None
mydict['montant'] = None
mydict['effectifs'] = None
mydict['reg'] = r
mydict['libelle'] = reg[reg['reg'] == r].iloc[0]['libelle']
arr.append(mydict)
arr
with open('arpb-maille-regional.json', 'w') as outfile:
json.dump(arr, outfile)
arr = []
with open('arpb-maille-departemental.json', 'w') as outfile:
json.dump(arr, outfile)
dfgb = dffinal.groupby(['reg','libelle','type_aide'],as_index=False).sum()
dfgb.to_csv("prets-directs-etat-regional.csv",index=False)
dfgb = dfgb.groupby(['type_aide'],as_index=False).sum()
dfgb.to_csv("prets-directs-etat-national.csv",index=False)
dfgb
```
| github_jupyter |
# Import Modules
```
import os
print(os.getcwd())
import sys
import json
import pickle
import pandas as pd
# #########################################################
from methods import get_df_dft
```
# Read Data
## Read bulk_ids of octahedral and unique polymorphs
```
# ########################################################
data_path = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/creating_slabs/selecting_bulks",
"out_data/data.json")
with open(data_path, "r") as fle:
data = json.load(fle)
# ########################################################
bulk_ids__octa_unique = data["bulk_ids__octa_unique"]
df_dft = get_df_dft()
df_dft_i = df_dft[df_dft.index.isin(bulk_ids__octa_unique)]
# df_dft_i.sort_values("num_atoms", ascending=False).iloc[0:15]
# df_dft_i.sort_values?
```
```
# [print(i) for i in df_dft_i.index.tolist()]
directory = "out_data/all_bulks"
if not os.path.exists(directory):
os.makedirs(directory)
directory = "out_data/layered_bulks"
if not os.path.exists(directory):
os.makedirs(directory)
for i_cnt, (bulk_id_i, row_i) in enumerate(df_dft_i.iterrows()):
i_cnt_str = str(i_cnt).zfill(3)
atoms_i = row_i.atoms
atoms_i.write("out_data/all_bulks/" + i_cnt_str + "_" + bulk_id_i + ".cif")
```
# Reading `bulk_manual_classification.csv`
```
# df_bulk_class = pd.read_csv("./bulk_manual_classification.csv")
from methods import get_df_bulk_manual_class
df_bulk_class = get_df_bulk_manual_class()
df_bulk_class.head()
print("Total number of bulks being considered:", df_bulk_class.shape[0])
df_bulk_class_layered = df_bulk_class[df_bulk_class.layered == True]
print("Number of layered structures",
df_bulk_class[df_bulk_class.layered == True].shape)
for i_cnt, (bulk_id_i, row_i) in enumerate(df_bulk_class_layered.iterrows()):
i_cnt_str = str(i_cnt).zfill(3)
# #####################################################
row_dft_i = df_dft.loc[bulk_id_i]
# #####################################################
atoms_i = row_dft_i.atoms
# #####################################################
atoms_i.write("out_data/layered_bulks/" + i_cnt_str + "_" + bulk_id_i + ".cif")
```
```
# df_bulk_class = df_bulk_class.fillna(value=False)
# df[['a', 'b']] = df[['a','b']].fillna(value=0)
# df_bulk_class.fillna?
# def read_df_bulk_manual_class():
# """
# """
# # #################################################
# path_i = os.path.join(
# os.environ["PROJ_irox_oer"],
# "workflow/process_bulk_dft/manually_classify_bulks",
# "bulk_manual_classification.csv")
# df_bulk_class = pd.read_csv(path_i)
# # df_bulk_class = pd.read_csv("./bulk_manual_classification.csv")
# # #################################################
# # Filling empty spots of layerd column with False (if not True)
# df_bulk_class[["layered"]] = df_bulk_class[["layered"]].fillna(value=False)
# # Setting index
# df_bulk_class = df_bulk_class.set_index("bulk_id", drop=False)
# return(df_bulk_class)
```
| github_jupyter |
```
# A new notebook to test the waters of the Spotify API
# TODO: Find a way to make my model more complex.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials, SpotifyOAuth
import pandas as pd
SPOTIPY_CLIENT_ID = '305996eeec9c42cb807aebcd48a82b29'
SPOTIPY_SECRET_ID = '3699864be2834ad695827d8092e91812'
SPOTIPY_REDIRECT_URI = 'http://example.com'
# doing a test run with an authorization code flow
# Goal is to be able to get a scope that will read the user's library.
# Getting my scope (authorization! see
# https://developer.spotify.com/documentation/general/guides/authorization-guide/#list-of-scopes.)
scope = "user-library-read"
# Setting up the API and setting it as sp,
# This will open a pop up window for the user to log in
# Once they login and give permission, then I will be able to get
# The top twenty songs that they have listened to
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope=scope,
client_secret=SPOTIPY_SECRET_ID,
client_id=SPOTIPY_CLIENT_ID,
redirect_uri=SPOTIPY_REDIRECT_URI))
results = sp.current_user_saved_tracks(limit=50, offset=100)
for idx, item in enumerate(results['items']):
track = item['track']
print(idx, track['artists'][0]['name'], " - ", track['name'])
resultsdf = pd.DataFrame(track.items())
resultsdf
df = resultsdf.T
df
new_header = df.iloc[0]
df = df[1:]
df.columns = new_header
df
track
def analyze_liked_songs(scope, ID, CLIENT_ID, REDIRECT_URI):
# creating an empty DataFrame
song_feature_list = ['artist', 'album', 'track_name', 'track_id', 'popularity',
'danceability', 'energy', 'key', 'loudness',
'mode', 'speechiness', 'instrumentalness',
'liveness', 'valence', 'tempo', 'duration_ms',
'time_signature']
song_df = pd.DataFrame(columns = song_feature_list)
# authorizing the use of my liked songs list
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope=scope,
client_secret=ID,
client_id=CLIENT_ID,
redirect_uri=REDIRECT_URI))
# Now we got to loop through every track in the liked songs list, extract
# features and append the features to the liked songs df
i = 0
offset = 0
    # For speed's sake, we keep the loop to only 400 iterations.
while i < 400:
liked_songs = sp.current_user_saved_tracks(limit=50, offset=offset)['items']
for track in liked_songs:
# create an empty dict
song_features = {}
# getting metadata
song_features["artist"] = track["track"]["album"]["artists"][0]['name']
song_features["album"] = track['track']['album']['name']
song_features["track_name"] = track['track']['name']
song_features['track_id'] = track['track']['id']
song_features['popularity'] = track["track"]['popularity']
# Getting audio features
audio_features = sp.audio_features(song_features['track_id'])[0]
for feature in song_feature_list[5:]:
song_features[feature] = audio_features[feature]
track_df = pd.DataFrame(song_features, index = [0])
song_df = pd.concat([song_df, track_df], ignore_index=True)
i += 1
offset += 50
return song_df
scope = "user-library-read"
new_df = analyze_liked_songs(scope, SPOTIPY_SECRET_ID, SPOTIPY_CLIENT_ID, SPOTIPY_REDIRECT_URI)
new_df
len(new_df)
def get_top_50(playlist_id, creator):
# creating an empty DataFrame
# Create empty dataframe
playlist_features_list = ["artist", "album", "track_name", "track_id", 'popularity',
"danceability", "energy", "key", "loudness", "mode", "speechiness",
"instrumentalness", "liveness", "valence", "tempo", "duration_ms", "time_signature"]
playlist_df = pd.DataFrame(columns = playlist_features_list)
# Create empty dict
playlist_features = {}
# Loop through every track in the playlist, extract features and append the features to the playlist df
playlist = sp.user_playlist_tracks(creator, playlist_id)["items"]
for track in playlist:
# Get metadata
playlist_features["artist"] = track["track"]["album"]["artists"][0]["name"]
playlist_features["album"] = track["track"]["album"]["name"]
playlist_features["track_name"] = track["track"]["name"]
playlist_features["track_id"] = track["track"]["id"]
playlist_features['popularity'] = track["track"]['popularity']
# Get audio features
audio_features = sp.audio_features(playlist_features["track_id"])[0]
for feature in playlist_features_list[5:]:
playlist_features[feature] = audio_features[feature]
# Concat the dfs
track_df = pd.DataFrame(playlist_features, index = [0])
playlist_df = pd.concat([playlist_df, track_df], ignore_index = True)
return playlist_df
top = get_top_50('37i9dQZF1DXcBWIGoYBM5M', 'spotify')
top
top['user'] = 0
top
def combine(playlist, liked_song):
# Setting up the "user id"
# This will be run with the model to find which songs are liked and which songs
# are from the top playlist...
playlist['user'] = 0
liked_song['user'] = 1
# Concating the dataframes.
full_song = pd.concat([playlist, liked_song], ignore_index=True)
return full_song
full = combine(top, new_df)
full
from surprise import Reader, Dataset, SVD
from surprise.model_selection.validation import cross_validate
def recommend(userid, df):
    '''A function that takes a user id and predicts music based on their listening history.
    With this dataset, the id must be 0 or 1.
    Returns a new column called "Estimate_Score" which represents the chance that the
    user will like the music.
    Scores will not be high on average, since users are randomly generated.'''
    # First pass through an if statement to make sure
    # the ID matches the requirements.
if userid > 1:
return "ERROR: User ID too high, must be between 0-1"
elif userid < 0:
return "ERROR: User ID too low, must be between 0-1"
else:
print("Generating music recommendation...")
        # Instantiating my reader and my data
reader = Reader(rating_scale=(0,50))
data = Dataset.load_from_df(df[['user', 'track_id', 'rating']],
reader)
        # Instantiating my SVD
svd = SVD()
# Running a 5-fold cross-validation
cross_validate(svd, data, measures=['RMSE', 'MAE'], cv=5,
verbose=True)
# Retraining the model using the entire dataset
trainset = data.build_full_trainset()
svd.fit(trainset)
# Model has been trained, time to use for prediction.
titles = df.copy()
titles['Estimate_Score'] = titles['track_id'].apply(lambda x: svd.predict(userid, x).est)
# Creating a mask that does not include songs the user already has.
mask = titles['user'] != userid
titles = titles[mask]
# Now returning the top 5 songs for that user.
titles = titles.sort_values(by=['Estimate_Score'], ascending=False)
titles = titles.head(5)
#titles = [titles.columns.values.tolist()] + titles.values.tolist()
return titles
recommend(1, full)
def make_rating(df):
"""
a function that will take in the audio values and return a total 'rating'
    this is mostly a test to see how it affects the model.
"""
    features = [df['popularity'], df['danceability'], df['energy'], df['key'],
                df['loudness'], df['mode'], df['speechiness'], df['instrumentalness'],
                df['liveness'], df['valence'], df['tempo'], df['duration_ms']]
    df['rating'] = sum(features) / 12
    df['rating'] = df['rating'] / 1000
return df
make_rating(full)
full['rating'].min()
full['rating'].max()
mask = full['rating'] == 120.01553961583333
full[mask]
```
| github_jupyter |
```
from cmp import *
import pdir
%matplotlib qt
# Control variables
plot_lattice_unfinished_1 = True
plot_lattice_unfinished_2 = True
plot_lattice_demo = True
plot_lattice_planes = True
plot_scattering_none = True
plot_scattering_systemic = True
plot_band_structure_none = True
plot_band_structure_strong = True
plot_nearly_free_band = True
```
### Lattice with unfinished unit cells
```
if plot_lattice_unfinished_1:
a1, a2, a3 = np.eye(3)
basis = np.array([[0, 0, 0],
[0.5, 0.5, 0.5]])
colors = ['xkcd:cement', 'b']
sizes = [2, 2]
grid_type = "latticevectors"
type_ = "primitive"
n_min = np.array([0, 0, 0])
n_max = np.array([1, 1, 1])
(atomic_positions, lattice_coefficients, atomic_colors, atomic_sizes,
lattice_position) = lattices.generator(a1, a2, a3, basis, colors, sizes,
n_min, n_max)
# Create the figure
fig = plt.figure(figsize=(2,2))
ax = fig.gca(projection="3d")
# Plot atoms
ax.scatter(atomic_positions[:, 0], atomic_positions[:, 1],
atomic_positions[:, 2], c=atomic_colors, s=atomic_sizes)
# Get the relevant gridlines:
g_col = 'k'
g_w = 0.5
pruned_lines = lattices.grid_lines(a1, a2, a3, atomic_positions,
lattice_position, grid_type)
for line in pruned_lines:
ax.plot(line[0], line[1], line[2], color=g_col, linewidth=g_w)
ax.set_aspect('equal')
ax.set_proj_type('ortho')
ax.grid(False)
ax.axis('off')
# make the panes transparent (the plot box)
ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.view_init(15, -60)
fig.subplots_adjust(left=-0.15, right=1.15, top=1.15, bottom=-0.15)
fig.savefig('thesis/figures/lattice_unfinished_1.pdf')
```
### Lattice with unfinished unit cells 2
```
if plot_lattice_unfinished_2:
a1, a2, a3 = np.array([[0.5, 0.5, 0],
[0.5, 0, 0.5],
[0, 0.5, 0.5]])
basis = np.array([0, 0, 0])
colors = ['xkcd:cement']
sizes = [1]
grid_type = "latticevectors"
type_ = "primitive"
n_min = np.array([0, 0, 0])
n_max = np.array([2, 2, 2])
(atomic_positions, lattice_coefficients, atomic_colors, atomic_sizes,
lattice_position) = lattices.generator(a1, a2, a3, basis, colors, sizes,
n_min, n_max)
# Create the figure
fig = plt.figure(figsize=(2,2))
ax = fig.gca(projection="3d")
# Plot atoms
ax.scatter(atomic_positions[:, 0], atomic_positions[:, 1],
atomic_positions[:, 2], c=atomic_colors, s=atomic_sizes)
# Get the relevant gridlines:
g_col = 'k'
g_w = 0.3
pruned_lines = []
r_min, r_max = 0, 2
for nx in range(n_min[0], n_max[0] + 1):
for ny in range(n_min[1], n_max[1] + 1):
pruned_lines.append([np.array([nx, nx]),
np.array([ny, ny]),
np.array([r_min, r_max])])
for nz in range(n_min[2], n_max[2] + 1):
pruned_lines.append([np.array([nx, nx]),
np.array([r_min, r_max]),
np.array([nz, nz])])
for ny in range(n_min[1], n_max[1] + 1):
for nz in range(n_min[2], n_max[2] + 1):
pruned_lines.append([np.array([r_min, r_max]),
np.array([ny, ny]),
np.array([nz, nz])])
for line in pruned_lines:
ax.plot(line[0], line[1], line[2], color=g_col, linewidth=g_w)
ax.set_aspect('equal')
ax.set_proj_type('ortho')
ax.grid(False)
ax.axis('off')
# make the panes transparent (the plot box)
ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.view_init(15, -100)
fig.subplots_adjust(left=-0.2, right=1.2, top=1.2, bottom=-0.2)
fig.savefig('thesis/figures/lattice_unfinished_2.pdf')
```
### Demo of proper lattice
```
if plot_lattice_demo:
fig, ax = Lattice(lattice_name="conventional bcc", sizes=1,
colors=['xkcd:cement', 'b'], returns=True)
margin = 0.2
fig.set_size_inches(2,2)
fig.subplots_adjust(left=-margin, right=1+margin, top=1+margin, bottom=-margin)
ax.view_init(10, -80)
fig.savefig('thesis/figures/lattice_demo_1.pdf')
fig, ax = Lattice(lattice_name="hexagonal", sizes=1,
returns=True)
margin = 0.2
fig.set_size_inches(2,2)
fig.subplots_adjust(left=-margin, right=1+margin, top=1+margin, bottom=-margin)
ax.view_init(18, -84)
fig.savefig('thesis/figures/lattice_demo_2.pdf')
```
### Family of lattice planes
```
if plot_lattice_planes:
fig, ax = Reciprocal(lattice_name="bcc", indices=(0,0,1), max_=(0,0,4), returns=True)
ax.view_init(10, -80)
margin = 0.2
fig.set_size_inches(2,2)
fig.subplots_adjust(left=-margin, right=1+margin, top=1+margin, bottom=-margin)
fig.savefig('thesis/figures/lattice_planes_1.pdf')
```
### Scattering!
```
if plot_scattering_none:
fig, ax, ax2 = Scattering(basis=np.array([[0, 0, 0],
[0.5, 0.5, 0],
[0.5, 0, 0.5],
[0, 0.5, 0.5]]),
form_factor=np.array([1, 0.5, 0.5, 0.5]),
highlight=[1,1,2],
returns=True)
ax.view_init(5, -50)
fig.savefig('thesis/figures/scattering_no_systemic.pdf')
if plot_scattering_systemic:
fig, ax, ax2 = Scattering(basis=np.array([[0, 0, 0],
[0.5, 0.5, 0],
[0.5, 0, 0.5],
[0, 0.5, 0.5]]),
form_factor=np.array([1, 1, 1, 1]),
colors=['xkcd:cement'] * 4,
returns=True)
ax.view_init(5, -55)
fig.savefig('thesis/figures/scattering_systemic.pdf')
```
### Band structures
```
if plot_band_structure_none:
fig, ax, ax2 = Band_structure(edges=True, returns=True)
ax.view_init(33, -48)
fig.savefig('thesis/figures/band_structure_none.pdf')
if plot_band_structure_strong:
fig, ax, ax2 = Band_structure(edges=True, V0=1, returns=True)
ax.view_init(33, -48)
fig.savefig('thesis/figures/band_structure_strong.pdf')
import itertools
def calc_1D_band_structure(V0=0, n_k=101, G_range=list(range(-3,4)),
potential=band_structure.VG_dirac, extra=0.1):
kx = np.linspace(-1 / 2 - extra, 1 / 2 + extra, n_k)
ky = np.linspace(-1 / 2 - extra, 1 / 2 + extra, n_k)
num_Gs = (len(G_range))**2
# First we create the relevant matrix for some k:
b1 = np.array([1, 0])
b2 = np.array([0, 1])
ms = np.array(list(itertools.product(G_range, G_range)))
recip = np.array([b1, b2])
Gs = ms @ recip
E = np.zeros((num_Gs, n_k))
VG_mat = band_structure.potential_matrix(range_=G_range, potential=potential, V0=V0)
kxs, kys = np.meshgrid(kx, ky)
for i in range(n_k):
k = np.array([kx[i], 0])
Diag = np.diag(lattices.mag(k - Gs)**2) / 2
Full = Diag + VG_mat
Eigs = np.linalg.eigvalsh(Full)
E[:, i] = Eigs
band_to_return = E[0]
return kx, band_to_return
if plot_nearly_free_band:
n_k = 201
G = 3
extra = 0.05
o1 = calc_1D_band_structure(V0=0,
n_k=n_k,
G_range=list(range(-G,G+1)),
extra=0)
k_free, E_free = o1
o2 = calc_1D_band_structure(V0=0.05,
n_k=n_k,
G_range=list(range(-G,G+1)),
extra=extra)
k_small, E_small = o2
E_small = E_small - np.amin(E_small)
fig = plt.figure()
ax = fig.gca()
fig.set_size_inches([2,2])
ax.set_xlabel('$k$')
ax.set_xticks([-np.pi, 0, np.pi])
    ax.set_xticklabels([r'$-\pi/a$', '0', r'$\pi/a$'])
ax.set_yticks([])
ax.set_ylabel('$E$')
ax.plot(k_free * 2 * np.pi, E_free, '--')
ax.plot(k_small * 2 * np.pi, E_small)
fig.tight_layout()
fig.savefig('thesis/figures/nearly_free.pdf')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Jofdiazdi/TalleresSimpy/blob/master/Talleres/2.%20Simulacion%20de%20eventos%20discretos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Simulation in Python using SimPy: a simple queue
---
Now let's look at a SimPy example simulating a simple queue.
The problem statement is as follows:
A barbershop has one barber who takes between 15 and 30 minutes per haircut. The shop receives on average 3 customers per hour (that is, one every 20 minutes). We want to simulate the arrivals and service of 5 customers.
Once the code is written, we can vary any of these parameters (have several barbers, change the time a haircut takes, simulate n customers or a specific duration such as a given number of minutes, etc.).
Before starting, we first check that SimPy is installed:
```
# RUN THIS TO INSTALL simpy
!pip install simpy
```
## Discrete-event simulation
First we import the libraries we are going to use:
**random**: library for generating pseudo-random numbers with various distributions.
**math**: Python's math library.
```
# Loading the libraries to use
import random # To generate pseudo-random numbers
import math # For the logarithm function
import simpy # Provides a simulation environment
```
First we define the variables we are going to use:
```
# Defining the constants and variables to use
SEMILLA = 30 # Seed, to make the system's behavior reproducible
NUM_PELUQUEROS = 1 # Number of barbers in the shop
TIEMPO_CORTE_MIN = 15 # Minimum haircut time
TIEMPO_CORTE_MAX = 30 # Maximum haircut time
T_LLEGADAS = 20 # Mean time between arrivals
TIEMPO_SIMULACION = 480 # Maximum simulation time
TOT_CLIENTES = 5 # Number of customers to serve

# Performance variables
te = 0.0 # total waiting time
dt = 0.0 # total service time
fin = 0.0 # minute at which the simulation finishes
```
## Modeling customer arrivals
The first thing we will model is the arrival behavior of the customers. According to the problem statement:
**_Arrival times_**: The time between customer arrivals at the barbershop is exponential with mean λ = 20 minutes. Customers are served according to a FIFO discipline (first in, first out), so arrivals are computed with the following formula:
T_LLEGADAS = –λ ln(R)
where R is a pseudo-random number.
For this, Python provides a function that returns an exponentially distributed random number: `random.expovariate(lambd)`. Note that `lambd` is the *rate* (1/mean), not the mean itself.
```
def principal(env, personal): # Define the arrivals function
    llegada = 0
    i = 0
    while i < TOT_CLIENTES: # Simulates the arrival of x customers, in this case 5
        # expovariate takes the rate 1/mean, so we pass 1/T_LLEGADAS to get a mean of 20 minutes
        llegada = random.expovariate(1.0 / T_LLEGADAS) # Generate the next arrival with an exponential distribution
        yield env.timeout(llegada) # Schedule the next arrival at the generated time, i.e. (env.now + llegada)
        i += 1
        env.process(cliente(env, 'Cliente %d' % i, personal)) # Create the customer process to be run in the environment
```
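The formula `T_LLEGADAS = –λ ln(R)` is the inverse-transform method for sampling an exponential distribution, and `random.expovariate` does the same thing internally. A quick sanity check, not part of the original tutorial (note again that `expovariate` takes the rate `1/mean`, not the mean):

```python
import math
import random

random.seed(30)
MEAN_ARRIVAL = 20.0  # mean inter-arrival time in minutes

# Inverse-transform sampling: -mean * ln(R), with R uniform on (0, 1)
manual = [-MEAN_ARRIVAL * math.log(random.random()) for _ in range(10000)]

# Equivalent built-in: expovariate takes the *rate* (1/mean)
builtin = [random.expovariate(1.0 / MEAN_ARRIVAL) for _ in range(10000)]

# Both sample averages should be close to 20 minutes
print(sum(manual) / len(manual))
print(sum(builtin) / len(builtin))
```

Both averages land near 20, confirming the two forms are interchangeable.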
## Modeling the haircut service
Now we model the behavior of the service, in this case the haircut, which behaves as follows:
**_Service times_**: Service times are computed according to the following formula:
tiempo_corte = TIEMPO_CORTE_MIN + (TIEMPO_CORTE_MAX – TIEMPO_CORTE_MIN)*R
That is: the minimum time the barber takes, in our example 15, plus the difference between the maximum and the minimum, in our example 15 minutes (30 minus 15), multiplied by a pseudo-random number. The result is a number between 15 and 30.
To make life easier, Python's random library has a function that returns uniform values between two numbers: `random.uniform(min, max)`.
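Before wiring this into the process, the equivalence between the hand-written formula and `random.uniform` can be checked directly (a quick aside, not part of the original tutorial):

```python
import random

random.seed(30)
TIEMPO_CORTE_MIN, TIEMPO_CORTE_MAX = 15, 30

# Hand-rolled form of the formula: min + (max - min) * R, with R uniform on [0, 1)
samples = [TIEMPO_CORTE_MIN + (TIEMPO_CORTE_MAX - TIEMPO_CORTE_MIN) * random.random()
           for _ in range(1000)]

# random.uniform(a, b) is exactly this construction; every value falls in [15, 30)
print(min(samples), max(samples))
```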
```
# Haircut process
def cortar(cliente):
    global dt # To access the variable dt declared above
    tiempo_corte = random.uniform(TIEMPO_CORTE_MIN, TIEMPO_CORTE_MAX) # Uniform distribution
    yield env.timeout(tiempo_corte) # Let the clock run for n minutes
    print(" \o/ Haircut done for %s in %.2f minutes" % (cliente, tiempo_corte))
    dt = dt + tiempo_corte # Accumulate the facility's usage times
```
## Modeling customer behavior
The customer's behavior is very simple:
1. The customer arrives at the barbershop
2. The customer requests a turn
3. If the server is idle they are served; otherwise they wait their turn
4. They wait for the haircut
5. They leave the barbershop
Along the way we also take the opportunity to compute some performance measures, in this case the average waiting time.
```
# Customer process
def cliente(env, name, personal):
    global te
    global fin
    llega = env.now # The customer arrives and records their arrival time
    print("---> %s arrived at the barbershop at minute %.2f" % (name, llega))
    with personal.request() as request: # Request a turn
        yield request # Wait to be served
        pasa = env.now # Served; record the minute service begins
        espera = pasa - llega # Time spent waiting -> used for the average waiting time
        te = te + espera # Accumulate waiting times -> used for the average waiting time
        print("**** %s goes to the barber at minute %.2f after waiting %.2f" % (name, pasa, espera))
        yield env.process(cortar(name)) # Wait for the haircut to finish
        personal.release(request) # Release the turn
    deja = env.now # Haircut finished; record the minute the cortar process ends
    print("<--- %s leaves the barbershop at minute %.2f" % (name, deja))
    fin = deja # Keep, globally, the time at which service ends and the customer leaves the system
```
## Creating the environment and resources
Next we create the environment in which the system will be simulated, create the initial process, and run the simulation.
### Resources
SimPy offers a very useful tool for modeling limited resources, such as, in this case, the staff providing the service.
We can create such resources with:
```
simpy.Resource(env, capacity)
```
To use them, inside the process that needs the resource we use the following lines:
```
with {resource}.request() as req:
    yield req
    # Do things once we have the turn
    yield env.timeout(time the resource is used)
{resource}.release(req)
```
The functions request() and release() request and release a turn, respectively. The line `yield req` is very important, since it is what guarantees that the process waits for its turn.
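For intuition, the request/wait/release pattern is what makes a SimPy `Resource` behave like a single-server FIFO queue. The same logic can be written without SimPy at all (a simplified hand-rolled sketch for illustration only, not the SimPy API):

```python
import random

random.seed(30)

def simulate(n_customers, mean_arrival=20.0, svc_min=15, svc_max=30):
    """Single server, FIFO: each customer waits until the server is free."""
    t = 0.0               # simulation clock (arrival times)
    server_free_at = 0.0  # when the single 'resource' becomes available again
    total_wait = 0.0
    for _ in range(n_customers):
        t += random.expovariate(1.0 / mean_arrival)  # next arrival
        start = max(t, server_free_at)               # queue if the server is busy
        total_wait += start - t
        server_free_at = start + random.uniform(svc_min, svc_max)
    return total_wait / n_customers

print("average wait over 5 customers:", simulate(5))
```

SimPy's event loop and `Resource` generalize this bookkeeping to many processes and many servers.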
```
# MAIN PROGRAM
print("------------------- Welcome to the barbershop simulation ------------------")
random.seed(SEMILLA) # Initialize the random seed
env = simpy.Environment() # Create the simulation environment
personal = simpy.Resource(env, NUM_PELUQUEROS) # Create the resources (barbers)
env.process(principal(env, personal)) # Create the process that generates customers
env.run(until = TIEMPO_SIMULACION) # Run until there are no more events or the simulation time is reached

# Computing performance measures
print("\n---------------------------------------------------------------------")
print("\nResulting indicators: ")
lpc = te / fin
print("\nAverage queue length: %.2f" % lpc)
tep = te / TOT_CLIENTES
print("Average waiting time = %.2f" % tep)
upi = (dt / fin) / NUM_PELUQUEROS
print("Average facility utilization = %.2f" % upi)
print("\n---------------------------------------------------------------------")
```
# CONCLUSION
This simulation in Python using SimPy is a tool for obtaining results more easily than doing the calculations by hand. It is also very flexible, and more elaborate programs can be built than with a spreadsheet.
Python is a cross-platform language and free software, which makes it ideal for any academic project.
This program can be adapted to solve different problems and extended to obtain more data whose manual or mathematical solution might be complicated.
## References
Example taken from: García (2018). Simulación en Python usando Simpy: llegadas y servicio. Available at https://naps.com.mx/blog/simulacion-en-python-usando-simpy/
More information in the library documentation:
https://simpy.readthedocs.io/en/latest/
| github_jupyter |
# Sentiment Analysis
## Updating a Model in SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.
This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## Step 1: Downloading the data
The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
We begin by using some Jupyter Notebook magic to download and extract the dataset.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing the data
The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
```
import os
import glob

def read_imdb_data(data_dir='../data/aclImdb'):
    data = {}
    labels = {}

    for data_type in ['train', 'test']:
        data[data_type] = {}
        labels[data_type] = {}

        for sentiment in ['pos', 'neg']:
            data[data_type][sentiment] = []
            labels[data_type][sentiment] = []

            path = os.path.join(data_dir, data_type, sentiment, '*.txt')
            files = glob.glob(path)

            for f in files:
                with open(f) as review:
                    data[data_type][sentiment].append(review.read())
                    # Here we represent a positive review by '1' and a negative review by '0'
                    labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)

            assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
                    "{}/{} data size does not match labels size".format(data_type, sentiment)

    return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
            len(data['train']['pos']), len(data['train']['neg']),
            len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle

def prepare_imdb_data(data, labels):
    """Prepare training and test sets from IMDb movie reviews."""

    # Combine positive and negative reviews and labels
    data_train = data['train']['pos'] + data['train']['neg']
    data_test = data['test']['pos'] + data['test']['neg']
    labels_train = labels['train']['pos'] + labels['train']['neg']
    labels_test = labels['test']['pos'] + labels['test']['neg']

    # Shuffle reviews and corresponding labels within training and test sets
    data_train, labels_train = shuffle(data_train, labels_train)
    data_test, labels_test = shuffle(data_test, labels_test)

    # Return unified training data, test data, training labels, test labels
    return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
```
## Step 3: Processing the data
Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
```
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
    text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and drop non-alphanumeric characters
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word, reusing the stemmer created above
    return words
review_to_words(train_X[100])
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
    """Convert each review to words; read from cache if available."""

    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = pickle.load(f)
            print("Read preprocessed data from cache file:", cache_file)
        except Exception:
            pass  # unable to read from cache, but that's okay

    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Preprocess training and test data to obtain words for each review
        words_train = [review_to_words(review) for review in data_train]
        words_test = [review_to_words(review) for review in data_test]

        # Write to cache file for future runs
        if cache_file is not None:
            cache_data = dict(words_train=words_train, words_test=words_test,
                              labels_train=labels_train, labels_test=labels_test)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                pickle.dump(cache_data, f)
            print("Wrote preprocessed data to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
                cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])

    return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
### Extract Bag-of-Words features
For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
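Before running the full extraction, it may help to see the idea in miniature. The following is a toy sketch (the three-word vocabulary and the `bag_of_words` helper are illustrative only, not part of the notebook's pipeline): each document becomes a fixed-length vector of word counts, and words outside the vocabulary are simply dropped.

```python
# Toy Bag-of-Words sketch: a hypothetical mini-vocabulary mapping each word
# to a fixed position in the feature vector.
vocabulary = {'great': 0, 'bad': 1, 'film': 2}

def bag_of_words(tokens, vocab):
    """Count how often each vocabulary word appears; unknown words are ignored."""
    vec = [0] * len(vocab)
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1
    return vec

# 'great' appears twice, 'film' once, and 'plot' is not in the vocabulary.
print(bag_of_words(['great', 'film', 'great', 'plot'], vocabulary))  # -> [2, 0, 1]
```

The real extraction below does the same thing at scale, with a 5000-word vocabulary learned from the training reviews by scikit-learn's `CountVectorizer`.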
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
                         cache_dir=cache_dir, cache_file="bow_features.pkl"):
    """Extract Bag-of-Words for a given set of documents, already preprocessed into words."""

    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = joblib.load(f)
            print("Read features from cache file:", cache_file)
        except Exception:
            pass  # unable to read from cache, but that's okay

    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Fit a vectorizer to training documents and use it to transform them
        # NOTE: Training documents have already been preprocessed and tokenized into words;
        #       pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
        vectorizer = CountVectorizer(max_features=vocabulary_size,
                preprocessor=lambda x: x, tokenizer=lambda x: x)  # already preprocessed
        features_train = vectorizer.fit_transform(words_train).toarray()

        # Apply the same vectorizer to transform the test documents (ignore unknown words)
        features_test = vectorizer.transform(words_test).toarray()
        # NOTE: .toarray() converts the sparse matrices into regular NumPy arrays

        # Write to cache file for future runs (store vocabulary as well)
        if cache_file is not None:
            vocabulary = vectorizer.vocabulary_
            cache_data = dict(features_train=features_train, features_test=features_test,
                              vocabulary=vocabulary)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                joblib.dump(cache_data, f)
            print("Wrote features to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        features_train, features_test, vocabulary = (cache_data['features_train'],
                cache_data['features_test'], cache_data['vocabulary'])

    # Return both the extracted features as well as the vocabulary
    return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
len(train_X[100])
```
## Step 4: Classification using XGBoost
Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.
### Writing the dataset
The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with, and a validation set. Then we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
```
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
```
According to the documentation for the XGBoost algorithm in SageMaker, the saved datasets should contain no headers or index and, for the training and validation data, the label should occur first in each row.
For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
```
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
```
### Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed both by built-in training models, such as the XGBoost model we will be using, and by custom models such as the one we will see a little later.
For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.
Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. This method uploads the data to the default bucket (which is created if it does not exist) under the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.
For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
```
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
### Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.
The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided to create the model artifacts, while the inference code uses the model artifacts to make predictions on new data.
The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
```
from sagemaker import get_execution_role
# Our current execution role is required when creating the model, as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)
# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
```
### Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
```
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
### Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.
To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
```
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
```
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
```
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done, and we would like a bit of feedback, we can run the `wait()` method.
```
xgb_transformer.wait()
```
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
```
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
## Step 5: Looking at New Data
So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.
However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
```
import new_data
new_X, new_Y = new_data.get_new_data()
# new_X[100]
```
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.
### (TODO) Testing the current model
Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.
First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.
First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.
**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
```
# TODO: Create the CountVectorizer using the previously constructed vocabulary
vectorizer = CountVectorizer(vocabulary=vocabulary,
                             preprocessor=lambda x: x, tokenizer=lambda x: x)  # already preprocessed
# TODO: Transform our new data set and store the transformed data in the variable new_XV
new_XV = vectorizer.transform(new_X).toarray()
```
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`.
```
len(new_XV[100])
```
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.
First, we save the data locally.
**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
```
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
```
Next, we upload the data to S3.
**TODO:** Upload the csv file created above to S3.
```
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
```
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.
**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
```
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
```
As usual, we copy the results of the batch transform job to our local instance.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
Read in the results of the batch transform job.
```
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
```
And check the accuracy of our current model.
```
accuracy_score(new_Y, predictions)
```
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.
In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one thing: whether some aspect of the underlying distribution has changed. In other words, we want to see whether the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.
Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.
To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.
**TODO:** Deploy the XGBoost model.
```
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
```
### Diagnose the problem
Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
```
from sagemaker.predictor import csv_serializer
# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
```
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.
**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
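If generators are unfamiliar, here is a minimal self-contained sketch of the same lazy-iteration pattern (the `first_mismatches` helper and its toy inputs are hypothetical, not part of the notebook): values are produced one at a time, only when `next` asks for them.

```python
def first_mismatches(values, targets):
    """Yield (value, target) pairs wherever the two sequences disagree."""
    for v, t in zip(values, targets):
        if v != t:
            yield v, t  # execution pauses here until the caller asks for more

gen = first_mismatches([1, 0, 1, 1], [1, 1, 1, 0])
print(next(gen))  # -> (0, 1): the first disagreement
print(next(gen))  # -> (1, 0): the next one, computed only on demand
```

Because nothing past the current `yield` ever runs, a generator like the one defined in the next cell lets us inspect a few misclassified reviews without predicting on the entire data set up front.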
```
def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y)
```
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator.
```
print(next(gn))
```
After looking at a few examples, maybe we decide to look at the `5000` most frequently appearing words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.
To do this, we start by fitting a `CountVectorizer` to the new data.
```
new_vectorizer = CountVectorizer(max_features=5000,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
```
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
```
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
```
We can look at the words that were in the original vocabulary but not in the new vocabulary.
```
print(original_vocabulary - new_vocabulary)
```
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
```
print(new_vocabulary - original_vocabulary)
```
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with much frequency at all.
**Question:** What exactly is going on here? Not only which words (if any) appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?
**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data.
**Answer:** The function that constructs the new data inserts the word `banana` into the documents at random. However, this word was not part of the original vocabulary and, since it was inserted at random, chances are it occurred very many times. In this sense, the model's accuracy is affected by the appearance of a new and frequent word.
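One way to check a suspicion like this is to compare each word's relative frequency across the two corpora. Below is a self-contained sketch using tiny toy corpora standing in for the tokenized review sets (all data and names here are illustrative, not the notebook's variables):

```python
from collections import Counter

def word_frequencies(corpus):
    """Relative frequency of each word across a corpus of tokenized reviews."""
    counts = Counter(w for review in corpus for w in review)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Toy corpora standing in for the original and new review sets.
original = [['great', 'film'], ['bad', 'plot', 'bad']]
new = [['great', 'banana', 'film'], ['banana', 'bad', 'plot']]

orig_freq = word_frequencies(original)
new_freq = word_frequencies(new)

# Words whose relative frequency jumped the most in the new data.
shifts = {w: new_freq[w] - orig_freq.get(w, 0.0) for w in new_freq}
print(sorted(shifts, key=shifts.get, reverse=True)[0])  # -> 'banana'
```

The same comparison could be run on the cached `words_train` and the preprocessed new reviews to surface any word whose usage has jumped.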
### (TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.
To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.
**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.
In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
```
new_XV = new_vectorizer.transform(new_X).toarray()
```
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
```
len(new_XV[0])
```
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
```
import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
```
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
```
new_X = None
```
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
```
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
```
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
```
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
```
Lastly, we make sure to upload the new training and validation sets to S3.
**TODO:** Upload the new data as well as the new training and validation data sets to S3.
```
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
```
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.
**TODO:** Create a new XGBoost estimator object.
```
# TODO: First, create a SageMaker estimator object for our model.
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                        role,      # What is our current IAM Role
                                        train_instance_count=1, # How many compute instances
                                        train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
new_xgb.set_hyperparameters(max_depth=5,
                            eta=0.2,
                            gamma=4,
                            min_child_weight=6,
                            subsample=0.8,
                            silent=0,
                            objective='binary:logistic',
                            early_stopping_rounds=10,
                            num_round=500)
```
Once the model has been created, we can train it with our new data.
**TODO:** Train the new XGBoost model.
```
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
```
### (TODO) Check the new model
So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.
To do this, we will first test our model on the new data.
**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.
**Question:** How might you address the leakage problem?
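One possible answer, sketched below with toy data: hold out a portion of the newly collected reviews *before* fitting the new model, and report accuracy on that held-out portion only. The `holdout_split` helper is illustrative (scikit-learn's `train_test_split` would do the same job); none of the names here are notebook variables.

```python
import random

def holdout_split(X, y, test_frac=0.2, seed=0):
    """Shuffle indices, then reserve the last test_frac of them as an untouched test set."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(X) * (1 - test_frac))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return ([X[i] for i in train_idx], [y[i] for i in train_idx],
            [X[i] for i in test_idx], [y[i] for i in test_idx])

# Toy stand-ins for the new reviews and their labels.
X = ['review {}'.format(i) for i in range(100)]
y = [i % 2 for i in range(100)]
X_train, y_train, X_test, y_test = holdout_split(X, y)
print(len(X_train), len(X_test))  # -> 80 20
```

Training only on `X_train` and evaluating only on `X_test` would give an accuracy estimate free of the leakage described above.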
First, we create a new transformer based on our new XGBoost model.
**TODO:** Create a transformer object from the newly created XGBoost model.
```
# TODO: Create a transformer object from the new_xgb model
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we test our model on the new data.
**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
```
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
```
Copy the results to our local instance.
```
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
```
And see how well the model did.
```
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
```
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.
However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.
To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
```
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", "preprocessed_data.pkl")
test_X = cache_data['words_test']
test_Y = cache_data['labels_test']
# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
```
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.
**TODO:** Transform the original test data using the new vocabulary.
```
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
test_X = new_vectorizer.transform(test_X).toarray()
```
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
```
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
```
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model.
## Step 6: (TODO) Updating the Model
So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.
Of course, to do this we need to create an endpoint configuration for our newly created model.
First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
```
new_xgb_transformer.model_name
```
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.
**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
```
from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
EndpointConfigName = new_xgb_endpoint_config_name,
ProductionVariants = [{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": new_xgb_transformer.model_name,
"VariantName": "XGB-Model"
}])
```
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.
Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.
**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
```
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
```
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
```
session.wait_for_endpoint(xgb_predictor.endpoint)
```
## Step 7: Delete the Endpoint
Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
```
xgb_predictor.delete_endpoint()
```
## Some Additional Questions
This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.
For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples.
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
```
# DCASE-2021 Audio-Video
Author: Maximo Cobos
```
# Import necessary standard packages
import tensorflow as tf
import numpy as np
import pandas as pd
from pathlib import Path
import matplotlib.pyplot as plt
#import tensorflow_addons as tfa
tf.version.VERSION
```
## Input Data
Specify the path to the folder containing the TFRecords and the paths to the train/validation fold CSV files:
```
# TFRecords folder
main_dir = '.\\tfrecords_gamma'
root_path = Path(main_dir)
# Train Fold
train_fold_path = '.\\dataset\\evaluation_setup\\fold1_train.csv'
traindf = pd.read_csv(train_fold_path, sep='\t', lineterminator='\r')
trainlist = traindf[traindf.columns[1]].tolist()
trainfiles = [Path(f).with_suffix('.tfrecords').name for f in trainlist]
#trainfiles = trainfiles[0:int(0.33*len(trainfiles))]
# Validation Fold
val_fold_path = '.\\dataset\\evaluation_setup\\fold1_test.csv'
valdf = pd.read_csv(val_fold_path, sep='\t', lineterminator='\r')
vallist = valdf[valdf.columns[1]].tolist()
valfiles = [Path(f).with_suffix('.tfrecords').name for f in vallist]
#valfiles = valfiles[0:int(0.33*len(valfiles))]
len(trainfiles), len(valfiles)
```
Get class weights:
```
def get_label(filepath):
'''Receives a path to a video and returns its label
'''
scn_dict = {'airport': 0, 'shopping_mall': 1, 'metro_station': 2,
'street_pedestrian': 3, 'public_square': 4, 'street_traffic': 5,
'tram': 6, 'bus': 7, 'metro': 8, 'park': 9}
fileid = Path(filepath).name
scn_id = fileid.split('-')[0]
label = scn_dict[scn_id]
return label
# Get labels
train_labels = [get_label(f) for f in trainfiles]
val_labels = [get_label(f) for f in valfiles]
trainfiles = [main_dir + '\\' + str(label) + '\\' + f for f,label in zip(trainfiles,train_labels)]
valfiles = [main_dir + '\\' + str(label) + '\\' + f for f,label in zip(valfiles,val_labels)]
N_val = len(valfiles)
# Get number of examples per class
num_class_ex = []
for i in range(10):
num_class_ex.append(train_labels.count(i))
# Get class weights
N_train = len(train_labels)
num_classes = 10
class_weights = []
for i in range(num_classes):
weight = ( 1 / num_class_ex[i]) * N_train / num_classes
class_weights.append(weight)
keylst = np.arange(0,len(class_weights))
class_weights = {keylst[i]: class_weights[i] for i in range(0, len(class_weights))}
print("Class weights: ", class_weights)
```
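The weighting above follows the common balanced rule `N / (num_classes * n_i)`: rare classes get proportionally larger weights, and multiplying each class's weight by its example count recovers `N / num_classes` for every class. A small standalone check with toy counts (not the actual dataset):

```python
def balanced_class_weights(counts):
    """Weight class i by N / (num_classes * n_i); rare classes get larger weights."""
    n_total = sum(counts)
    k = len(counts)
    return [n_total / (k * n) for n in counts]

toy_counts = [100, 50, 25]            # hypothetical examples per class
weights = balanced_class_weights(toy_counts)
```

By construction, the weighted count `w_i * n_i` is the same for every class, so the loss treats each class as equally sized.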
### Parsing function
```
def parse_sequence(sequence_example, avmode = 'audiovideo'):
"""this function is the sequence parser for the created TFRecords file"""
sequence_features = {'VideoFrames': tf.io.FixedLenSequenceFeature([], dtype=tf.string),
'Labels': tf.io.FixedLenSequenceFeature([], dtype=tf.int64)}
context_features = {'AudioFrames': tf.io.FixedLenFeature((96000,), dtype=tf.float32),
'length': tf.io.FixedLenFeature([], dtype=tf.int64)}
context, sequence = tf.io.parse_single_sequence_example(
sequence_example, context_features=context_features, sequence_features=sequence_features)
# get features context
seq_length = tf.cast(context['length'], dtype = tf.int32)
# decode video and audio
video = tf.io.decode_raw(sequence['VideoFrames'], tf.uint8)
video = tf.reshape(video, shape=(seq_length, 224, 224, 3))
audio = tf.cast(context['AudioFrames'], tf.float32)
audio = tf.reshape(audio, shape=(64, 500, 3))
label = tf.cast(sequence['Labels'], dtype = tf.int32)
video = tf.cast(video, tf.float32)
if avmode == 'audio':
return audio, label
elif avmode == 'video':
return video, label
elif avmode == 'audiovideo':
return video, audio, label
```
Check parsing function:
```
# Check parsing function
filesds = tf.data.Dataset.from_tensor_slices(trainfiles)
dataset = tf.data.TFRecordDataset(filesds)
dataset = dataset.map(lambda tf_file: parse_sequence(tf_file,'audiovideo'), num_parallel_calls=4)
datait = iter(dataset)
example = datait.get_next()
print(example[0].shape, example[1].shape, example[2].shape)
plt.imshow(example[0][0,::].numpy().astype(np.uint8))
plt.show()
plt.imshow(example[1][:,:,0]);
```
## Augmentation
Mix-Up
```
def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2):
gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1)
gamma_2_sample = tf.random.gamma(shape=[size], alpha=concentration_0)
return gamma_1_sample / (gamma_1_sample + gamma_2_sample)
def mix_up(ds_one, ds_two, alpha=0.2):
# Unpack two datasets
images_one, labels_one = ds_one
images_two, labels_two = ds_two
batch_size = tf.shape(images_one)[0]
# Sample lambda and reshape it to do the mixup
l = sample_beta_distribution(batch_size, alpha, alpha)
x_l = tf.reshape(l, (batch_size, 1, 1, 1))
y_l = tf.reshape(l, (batch_size, 1))
# Perform mixup on both images and labels by combining a pair of images/labels
# (one from each dataset) into one image/label
images = images_one * x_l + images_two * (1 - x_l)
labels = labels_one * y_l + labels_two * (1 - y_l)
return (images, labels)
def mix_up_audiovideo(ds_one, ds_two, alpha=0.2):
# Unpack two datasets
imagesaudio_one, labels_one = ds_one
imagesaudio_two, labels_two = ds_two
images_one = imagesaudio_one[0]
audios_one = imagesaudio_one[1]
batch_size = tf.shape(images_one)[0]
images_two = imagesaudio_two[0]
audios_two = imagesaudio_two[1]
# Sample lambda and reshape it to do the mixup
l = sample_beta_distribution(batch_size, alpha, alpha)
x_laudio = tf.reshape(l, (batch_size, 1, 1, 1))
x_lvideo = tf.reshape(l, (batch_size, 1, 1, 1, 1))
y_l = tf.reshape(l, (batch_size, 1))
# Perform mixup on both images and labels by combining a pair of images/labels
# (one from each dataset) into one image/label
images = images_one * x_lvideo + images_two * (1 - x_lvideo)
audios = audios_one * x_laudio + audios_two * (1 - x_laudio)
labels = labels_one * y_l + labels_two * (1 - y_l)
images = tf.cast(images, tf.uint8)
return (images, audios), labels
```
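`sample_beta_distribution` relies on the identity that if `G1 ~ Gamma(a, 1)` and `G2 ~ Gamma(b, 1)` are independent, then `G1 / (G1 + G2) ~ Beta(a, b)`. A quick NumPy sanity check of the symmetric case `a = b = 0.2` used by default, where the mean should be 0.5 and most mass sits near 0 or 1 (so mix-up usually keeps one example nearly intact):

```python
import numpy as np

rng = np.random.default_rng(0)
a = b = 0.2
g1 = rng.gamma(shape=a, scale=1.0, size=10_000)
g2 = rng.gamma(shape=b, scale=1.0, size=10_000)
lam = g1 / (g1 + g2)   # distributed as Beta(a, b)
```
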
## Pipeline
### Audiovisual Modality
Useful functions for the pipeline:
```
def normalize_sp(sp):
sp = sp - tf.math.reduce_min(sp)
sp = sp / tf.math.reduce_max(sp)
sp = 2*(sp-0.5)
return sp
def reshape_batch(data, labels):
data_video = tf.reshape(data[0], shape=(-1,5,224,224,3))
data_audio = tf.reshape(data[1], shape=(-1,64,50,3))
data_labels = tf.reshape(labels, shape=(-1,10))
return (data_video, data_audio), data_labels
def random_cut_gammavideo(video, audio, cut_length, audio_fs = 44100, audio_hop = 882, video_fps = 5):
video_length = tf.shape(video)[0]
audio_length = tf.shape(audio)[1]
# Generate random index for initial frame
min_v = 0
cut_length_frames = tf.cast(tf.math.round(cut_length*video_fps), dtype=tf.dtypes.int32)
max_v = video_length - cut_length_frames
rnum = tf.random.uniform([1], minval=min_v, maxval=max_v, dtype=tf.dtypes.int32)
# Cut video
video = video[rnum[0]:rnum[0]+cut_length_frames,...]
# Cut audio accordingly
ini_frame = tf.math.round(tf.cast(rnum[0], dtype=tf.dtypes.float32)*(1/video_fps)*audio_fs*(1/audio_hop))
end_frame = tf.math.round(ini_frame + cut_length*audio_fs/audio_hop)
ini_frame = tf.cast(ini_frame, dtype=tf.dtypes.int32)
end_frame = tf.cast(end_frame, dtype=tf.dtypes.int32)
audio = audio[:,ini_frame:end_frame,:]
return video, audio
def process_ds_audiovideo(video, audio, label, mode):
# Cut randomly to 1 second
if mode == 'train':
video, audio = random_cut_gammavideo(video, audio, 1.0)
audio = normalize_sp(audio)
label = label[0]
label = tf.one_hot(label,10)
if mode == 'val':
video = tf.reshape(video, shape=(10,5,224,224,3))
audio = tf.transpose(audio,(1,0,2))
audio = tf.reshape(audio, shape=(10,50,64,3))
audio = tf.map_fn(fn=lambda t: normalize_sp(t), elems=audio)
audio = tf.transpose(audio,(0,2,1,3))
label = label[0:10]
label = tf.one_hot(label,10)
return (video, audio), label
train_batch_size = 16
do_mixup = False
if do_mixup == True:
train_ds_one = tf.data.Dataset.from_tensor_slices(trainfiles)
train_ds_one = train_ds_one.shuffle(N_train)
train_ds_one = train_ds_one.repeat()
train_ds_one = tf.data.TFRecordDataset(train_ds_one)
train_ds_one = train_ds_one.map(lambda tf_file: parse_sequence(tf_file,'audiovideo'), num_parallel_calls=4)
train_ds_one = train_ds_one.map(lambda video, audio, label: process_ds_audiovideo(video, audio, label, 'train'), num_parallel_calls=4)
train_ds_one = train_ds_one.batch(train_batch_size)
train_ds_two = tf.data.Dataset.from_tensor_slices(trainfiles)
train_ds_two = train_ds_two.shuffle(N_train)
train_ds_two = train_ds_two.repeat()
train_ds_two = tf.data.TFRecordDataset(train_ds_two)
train_ds_two = train_ds_two.map(lambda tf_file: parse_sequence(tf_file,'audiovideo'), num_parallel_calls=4)
train_ds_two = train_ds_two.map(lambda video, audio, label: process_ds_audiovideo(video, audio, label, 'train'), num_parallel_calls=4)
train_ds_two = train_ds_two.batch(train_batch_size)
trainds = tf.data.Dataset.zip((train_ds_one, train_ds_two))
trainds = trainds.map(
lambda ds_one, ds_two: mix_up_audiovideo(ds_one, ds_two, alpha=0.3), num_parallel_calls=4
)
else:
trainds = tf.data.Dataset.from_tensor_slices(trainfiles)
trainds = trainds.shuffle(N_train)
trainds = trainds.repeat()
trainds = tf.data.TFRecordDataset(trainds)
trainds = trainds.map(lambda tf_file: parse_sequence(tf_file,'audiovideo'), num_parallel_calls=4)
trainds = trainds.map(lambda video, audio, label: process_ds_audiovideo(video, audio, label, 'train'), num_parallel_calls=4)
trainds = trainds.batch(train_batch_size)
val_batch_size = 16
valds = tf.data.Dataset.from_tensor_slices(valfiles)
valds = tf.data.TFRecordDataset(valds)
valds = valds.map(lambda tf_file: parse_sequence(tf_file,'audiovideo'), num_parallel_calls=4)
valds = valds.map(lambda video, audio, label: process_ds_audiovideo(video, audio, label, 'val'), num_parallel_calls=4)
#valds = valds.batch(val_batch_size)
#valds = valds.map(lambda data, labels: reshape_batch(data, labels), num_parallel_calls=4)
datait = iter(trainds)
example = datait.get_next()
print(example[0][0].shape, example[0][1].shape)
plt.imshow(example[0][0][0,0,:,:,:].numpy().astype(np.uint8))
plt.show()
plt.imshow(example[0][1][0,:,:,0]);
plt.show()
datait = iter(valds)
example = datait.get_next()
print(example[0][0].shape, example[0][1].shape, example[1].shape)
plt.imshow(example[0][0][0,0,:,:,:].numpy().astype(np.uint8))
plt.show()
plt.imshow(example[0][1][0,:,:,0]);
plt.colorbar()
plt.show()
```
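The constants in `random_cut_gammavideo` encode a frame-rate alignment: with `audio_fs = 44100` and `audio_hop = 882`, the spectrogram runs at 44100/882 = 50 frames per second, i.e. 10 spectrogram frames per video frame at 5 fps. A plain-Python check of that mapping, with the constants copied from the function's defaults:

```python
audio_fs, audio_hop, video_fps = 44100, 882, 5

audio_frames_per_sec = audio_fs / audio_hop                        # 50 spectrogram frames/s
audio_frames_per_video_frame = audio_frames_per_sec / video_fps    # 10

def video_to_audio_frame(video_frame_idx):
    """Map a video frame index to the matching spectrogram frame index."""
    return round(video_frame_idx * (1 / video_fps) * audio_fs / audio_hop)
```

This is the same arithmetic `random_cut_gammavideo` performs to cut the audio window that corresponds to a randomly chosen video segment.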
## Audio Network
```
from tensorflow.keras.layers import (Conv2D, Dense, Permute, GlobalAveragePooling2D, GlobalMaxPooling2D,
Reshape, BatchNormalization, ELU, Lambda, Input, MaxPooling2D, Activation,
Dropout, add, multiply)
import tensorflow.keras.backend as k
from tensorflow.keras.models import Model
import tensorflow as tf
from tensorflow.keras.regularizers import l2
regularization = l2(0.0001)
def construct_asc_network_csse(include_classification=True, nclasses=10, **parameters):
"""
Args:
include_classification (bool): include classification layer
**parameters (dict): setting use to construct the network presented in
(https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9118879)
"""
nfilters = parameters['nfilters']
pooling = parameters['pooling']
dropout = parameters['dropout']
top_flatten = parameters['top_flatten']
ratio = parameters['ratio']
pre_act = parameters['pre_act']
spectrogram_dim = parameters['spectrogram_dim']
verbose = parameters['verbose']
inp = Input(shape=spectrogram_dim)
for i in range(0, len(nfilters)):
if i == 0:
x = conv_standard_post(inp, nfilters[i], ratio, pre_act=pre_act)
else:
x = conv_standard_post(x, nfilters[i], ratio, pre_act=pre_act)
x = MaxPooling2D(pool_size=pooling[i])(x)
x = Dropout(rate=dropout[i])(x)
# Javier network
if top_flatten == 'avg':
x = GlobalAveragePooling2D()(x)
elif top_flatten == 'max':
x = GlobalMaxPooling2D()(x)
if include_classification:
x = Dense(units=nclasses, activation='softmax', name='SP_Pred')(x)
model = Model(inputs=inp, outputs=x)
if verbose:
print(model.summary())
return model
def conv_standard_post(inp, nfilters, ratio, pre_act=False):
"""
Block presented in (https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9118879)
Args:
inp (tensor): input to the block
nfilters (int): number of filters of a specific block
ratio (int): ratio used in the channel excitation
pre_act (bool): presented in this work, use a pre-activation residual block
Returns:
"""
x1 = inp
if pre_act:
x = BatchNormalization()(inp)
x = ELU()(x)
x = Conv2D(nfilters, 3, padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(nfilters, 3, padding='same')(x)
else:
x = Conv2D(nfilters, 3, padding='same')(inp)
x = BatchNormalization()(x)
x = ELU()(x)
x = Conv2D(nfilters, 3, padding='same')(x)
x = BatchNormalization()(x)
# shortcut
x1 = Conv2D(nfilters, 1, padding='same')(x1)
x1 = BatchNormalization()(x1)
x = module_addition(x, x1)
x = ELU()(x)
x = channel_spatial_squeeze_excite(x, ratio=ratio)
x = module_addition(x, x1)
return x
def channel_spatial_squeeze_excite(input_tensor, ratio=16):
""" Create a spatial squeeze-excite block
Args:
input_tensor: input Keras tensor
ratio: number of output filters
Returns: a Keras tensor
References
- [Squeeze and Excitation Networks](https://arxiv.org/abs/1709.01507)
- [Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks]
(https://arxiv.org/abs/1803.02579)
"""
cse = squeeze_excite_block(input_tensor, ratio)
sse = spatial_squeeze_excite_block(input_tensor)
x = add([cse, sse])
return x
def squeeze_excite_block(input_tensor, ratio=16):
""" Create a channel-wise squeeze-excite block
Args:
input_tensor: input Keras tensor
ratio: number of output filters
Returns: a Keras tensor
References
- [Squeeze and Excitation Networks](https://arxiv.org/abs/1709.01507)
"""
init = input_tensor
channel_axis = 1 if k.image_data_format() == "channels_first" else -1
filters = _tensor_shape(init)[channel_axis]
se_shape = (1, 1, filters)
se = GlobalAveragePooling2D()(init)
se = Reshape(se_shape)(se)
se = Dense(filters // ratio, activation='relu', kernel_initializer='he_normal', use_bias=False)(se)
se = Dense(filters, activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(se)
if k.image_data_format() == 'channels_first':
se = Permute((3, 1, 2))(se)
x = multiply([init, se])
return x
def spatial_squeeze_excite_block(input_tensor):
""" Create a spatial squeeze-excite block
Args:
input_tensor (): input Keras tensor
Returns: a Keras tensor
References
- [Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks]
(https://arxiv.org/abs/1803.02579)
"""
se = Conv2D(1, (1, 1), activation='sigmoid', use_bias=False,
kernel_initializer='he_normal')(input_tensor)
x = multiply([input_tensor, se])
return x
def module_addition(inp1, inp2):
"""
Module of addition of two tensors with same H and W, but can have different channels
If number of channels of the second tensor is the half of the other, this dimension is repeated
Args:
inp1 (tensor): one branch of the addition module
inp2 (tensor): other branch of the addition module
Returns:
"""
if k.int_shape(inp1)[3] != k.int_shape(inp2)[3]:
x = add(
[inp1, Lambda(lambda y: k.repeat_elements(y, rep=int(k.int_shape(inp1)[3] // k.int_shape(inp2)[3]),
axis=3))(inp2)])
else:
x = add([inp1, inp2])
return x
def _tensor_shape(tensor):
"""
Obtain shape in order to use channel excitation
Args:
tensor (tensor): input tensor
Returns:
"""
return tensor.get_shape()
audio_network_settings = {
'nfilters': (32, 64, 128),
#'pooling': [(2, 1), (2, 1), (2, 1)],
'pooling': [(1, 2), (1, 2), (1, 1)],
'dropout': [0.0, 0.0, 0.0],
'top_flatten': 'avg',
'ratio': 2,
'pre_act': False,
'spectrogram_dim': (64, 50, 3),
'verbose': True
}
audio_model = construct_asc_network_csse(include_classification=True, **audio_network_settings)
# Load weights
pretrained_audio_path = 'BEST_AUDIO_MODEL'
audio_model.load_weights(pretrained_audio_path)
```
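The channel squeeze-and-excite path used above (global average pool → bottleneck dense → sigmoid gate → channel-wise rescale) can be sketched in NumPy with hypothetical random weights, just to show that each channel is rescaled by a gate strictly in (0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, c, ratio = 4, 4, 8, 2
x = rng.random((h, w, c))                      # non-negative feature map

# Squeeze: per-channel global average pool -> shape (c,)
z = x.mean(axis=(0, 1))

# Excite: bottleneck dense (c -> c/ratio -> c) with hypothetical weights
w1 = rng.standard_normal((c, c // ratio))
w2 = rng.standard_normal((c // ratio, c))
gate = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0.0) @ w2)))  # ReLU, then sigmoid

# Recalibrate: scale every channel of the feature map by its gate
out = x * gate
```

Because the gate is a sigmoid output, activations can only be attenuated per channel, never amplified, which is the "channel recalibration" behaviour the Keras block implements with trained weights.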
## Video Network
```
from tensorflow.keras.layers import TimeDistributed, GRU, Activation, GlobalAveragePooling1D, Bidirectional
regularization = l2(0.001)
num_classes = 10
input_shape = (5,224,224,3)
input_vid = Input(shape = input_shape)
# Block 1
x = TimeDistributed(Conv2D(filters=64, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block1_conv1')(input_vid)
x = TimeDistributed(Conv2D(filters=64, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block1_conv2')(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), name="block1_pool")(x)
# Block 2
x = TimeDistributed(Conv2D(filters=128, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block2_conv1')(x)
x = TimeDistributed(Conv2D(filters=128, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block2_conv2')(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), name="block2_pool")(x)
# Block 3
x = TimeDistributed(Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block3_conv1')(x)
x = TimeDistributed(Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block3_conv2')(x)
x = TimeDistributed(Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block3_conv3')(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), name="block3_pool")(x)
# Block 4
x = TimeDistributed(Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block4_conv1')(x)
x = TimeDistributed(Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block4_conv2')(x)
x = TimeDistributed(Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block4_conv3')(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), name="block4_pool")(x)
# Block 5
x = TimeDistributed(Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block5_conv1')(x)
x = TimeDistributed(Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block5_conv2')(x)
x = TimeDistributed(Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
kernel_regularizer=l2(0.0002),
activation='relu'), name='block5_conv3')(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), name="block5_pool")(x)
x = TimeDistributed(GlobalMaxPooling2D(), name='TD_C_GlobAvPooling2D')(x)
# Recurrent Block
fw = GRU(32, return_sequences=True, stateful=False, recurrent_dropout = 0.0, name='VID_RNN_fw')
bw = GRU(32, return_sequences=True, stateful=False, recurrent_dropout = 0.0, go_backwards=True, name='VID_RNN_bw')
x = Bidirectional(fw, backward_layer=bw, name='VID_RNN_bidir')(x)
x = Dropout(0.5, name='VID_C2_Dropout')(x) #0.35
x = Dense(num_classes, kernel_regularizer = regularization, name ='VID_C2_Dense')(x)
x = Activation('softmax', name = 'VID_C2_Act_softmax_1')(x)
x = GlobalAveragePooling1D(name='VID_Pred')(x)
video_model = Model(inputs=input_vid, outputs=x)
video_model.summary()
# load weights
pretrained_video_path = 'BEST_VIDEO_MODEL'
video_model.load_weights(pretrained_video_path)
```
## Join Sub-Networks (RNN)
```
audio_sub_input = audio_model.input
audioout = audio_model.layers[-1].output
x = audio_model.layers[-3].output
x = tf.keras.layers.Permute((2,1,3))(x)
x = tf.keras.layers.TimeDistributed(tf.keras.layers.GlobalAveragePooling1D())(x)
audio_sub_output = tf.keras.layers.AveragePooling1D(3,2)(x)
audio_subnetwork = Model(inputs=audio_sub_input, outputs=audioout)
# Freeze all layers
for layer in audio_subnetwork.layers:
print('Setting layer {} non-trainable'.format(layer.name))
layer.trainable = False
audio_subnetwork.summary()
video_sub_input = video_model.input
videoout = video_model.layers[-1].output
video_sub_output = video_model.layers[-6].output
video_subnetwork = Model(inputs=video_sub_input, outputs=videoout)
# Freeze all layers
for layer in video_subnetwork.layers:
print('Setting layer {} non-trainable'.format(layer.name))
layer.trainable = False
video_subnetwork.summary()
from tensorflow.keras.layers import concatenate
x = concatenate([audio_sub_output, video_sub_output])
fw = tf.keras.layers.GRU(64, return_sequences=True, stateful=False, dropout=0.0, name='RNN_fw')
bw = tf.keras.layers.GRU(64, return_sequences=True, stateful=False, dropout=0.0, go_backwards=True, name='RNN_bw')
x = tf.keras.layers.Bidirectional(fw, backward_layer=bw, name='RNN_bidir')(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
x = Dense(num_classes, activation='softmax')(x)
x = concatenate([x, audioout, videoout])
x = Dense(num_classes, name ='MULTI_C2_Dense2')(x)
x = Activation('softmax', name='MULTI_Pred')(x)
multi_model = Model(inputs=[video_sub_input, audio_sub_input], outputs=x)
multi_model.summary()
```
## Compile and train
```
# learning_rate = 0.001
# weight_decay = 0.0001
# opt = tfa.optimizers.AdamW(learning_rate=learning_rate, weight_decay=weight_decay)
learning_rate = 0.0001
opt = tf.keras.optimizers.Adam(learning_rate = learning_rate)
multi_model.compile(
loss = {'MULTI_Pred': 'categorical_crossentropy'},
optimizer=opt,
metrics = {'MULTI_Pred': 'accuracy'},
)
from tensorflow.keras.callbacks import CSVLogger, ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
import os
callbacks = []
ckpt_dir = r'PATH_TO_STORE\checkpoints_multi'
model_name = 'multi_final'
callbacks.append(
ModelCheckpoint(
filepath=os.path.join(ckpt_dir, '%s-{epoch:02d}-{val_accuracy:.2f}.hdf5' % model_name),
monitor="val_accuracy",
mode="max",
save_best_only=True,
save_weights_only=True,
verbose=True,
)
)
callbacks.append(
EarlyStopping(
monitor="val_loss",
patience=80,
)
)
callbacks.append(
ReduceLROnPlateau(
monitor="val_loss",
factor=0.5,
patience=15,
verbose=True,
)
)
callbacks.append(
CSVLogger(
filename = os.path.join(ckpt_dir, '%s.csv' % model_name),
append = False,
)
)
# Train model
history = multi_model.fit(
trainds,
epochs=200,
steps_per_epoch= int(N_train/train_batch_size), # Set according to number of examples and training batch size
validation_data = valds,
#validation_steps = int(N_val/val_batch_size),
validation_steps = int(N_val),
callbacks=callbacks, # Include list of callbacks
#class_weight = class_weights,
)
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.subplot(1,2,2)
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
```
### Final Fine-Tuning
```
multi_model.load_weights('BEST_AUDIOVISUAL_MODEL')
# Un-Freeze all layers
for layer in multi_model.layers:
print('Setting layer {} trainable'.format(layer.name))
layer.trainable = True
multi_model.summary()
learning_rate = 0.000001
opt = tf.keras.optimizers.Adam(learning_rate = learning_rate)
multi_model.compile(
loss = {'MULTI_Pred': 'categorical_crossentropy'},
optimizer=opt,
metrics = {'MULTI_Pred': 'accuracy'},
)
from tensorflow.keras.callbacks import CSVLogger, ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
import os
callbacks = []
ckpt_dir = r'PATH_TO_STORE\checkpoints_multi'
model_name = 'multi_finalun_2'
callbacks.append(
ModelCheckpoint(
filepath=os.path.join(ckpt_dir, '%s-{epoch:02d}-{val_accuracy:.2f}.hdf5' % model_name),
monitor="val_accuracy",
mode="max",
save_best_only=True,
save_weights_only=True,
verbose=True,
)
)
callbacks.append(
EarlyStopping(
monitor="val_loss",
patience=80,
)
)
callbacks.append(
ReduceLROnPlateau(
monitor="val_loss",
factor=0.5,
patience=15,
verbose=True,
)
)
callbacks.append(
CSVLogger(
filename = os.path.join(ckpt_dir, '%s.csv' % model_name),
append = False,
)
)
# Train model
history = multi_model.fit(
trainds,
epochs=200,
steps_per_epoch= int(N_train/train_batch_size), # Set according to number of examples and training batch size
validation_data = valds,
#validation_steps = int(N_val/val_batch_size),
validation_steps = int(N_val),
callbacks=callbacks, # Include list of callbacks
#class_weight = class_weights,
)
```
## Evaluate Model
```
def process_ds_eval(video, audio, label):
video = tf.reshape(video, shape=(10,5,224,224,3))
audio = tf.transpose(audio,(1,0,2))
audio = tf.reshape(audio, shape=(10,50,64,3))
audio =tf.map_fn(fn=lambda t: normalize_sp(t) , elems=audio)
audio = tf.transpose(audio,(0,2,1,3))
label = label[0:10]
label = tf.one_hot(label,10)
return (video, audio), label
evalds = tf.data.Dataset.from_tensor_slices(valfiles)
evalds = tf.data.TFRecordDataset(evalds)
evalds = evalds.map(lambda tf_file: parse_sequence(tf_file,'audiovideo'), num_parallel_calls=4)
evalds = evalds.map(lambda video, audio, label: process_ds_eval(video, audio, label), num_parallel_calls=4)
# it = iter(evalds)
# ex = it.get_next()
# ex[0][0].shape
multi_model.load_weights('FINAL_BEST_AUDIOVISUAL')
multi_model.evaluate(evalds)
```

# Callysto’s Weekly Data Visualization
## Weekly Title
### Recommended grade level: 5-12
### Instructions
#### “Run” the cells to see the graphs
Click “Cell” and select “Run All”.<br> This will import the data and run all the code, so you can see this week's data visualization. Scroll to the top after you’ve run the cells.<br>

**You don’t need to do any coding to view the visualizations**.
The plots generated in this notebook are interactive. You can hover over and click on elements to see more information.
Email contact@callysto.ca if you experience issues.
### About this Notebook
Callysto's Weekly Data Visualization is a learning resource that aims to develop data literacy skills. We provide Grades 5-12 teachers and students with a data visualization, like a graph, to interpret. This companion resource walks learners through how the data visualization is created and interpreted by a data scientist.
The steps of the data analysis process are listed below and applied to each weekly topic.
1. Question - What are we trying to answer?
2. Gather - Find the data source(s) you will need.
3. Organize - Arrange the data, so that you can easily explore it.
4. Explore - Examine the data to look for evidence to answer the question. This includes creating visualizations.
5. Interpret - Describe what's happening in the data visualization.
6. Communicate - Explain how the evidence answers the question.
## Question
How much time do Canadians spend playing video games and how does this change with demographics? We will use official Statistics Canada data to examine this question.
### Goal
Our goal is to create a series of histograms to observe how much time Canadians spend gaming.
## Gather
The code below will import the Python programming libraries we need to gather and organize the data to answer our question.
```
import plotly.express as px #used to create interactive plots
```
The code below creates lists of data from [this 2010 StatsCan table](https://www150.statcan.gc.ca/n1/pub/89-647-x/2011001/tbl/tbl31-eng.htm). The same study was repeated more recently, in 2015; however, the more recent time use survey did not ask about video games.
Our lists are as follows:
| List Name | List Purpose |
|------------------------|------------------------------------------------------------------------------------------|
| categories | holds names for the age catagories for our bar chart |
| leisure_time | holds number of minutes in "leisure" activities for the average person on an average day |
| videogame_time_all | holds number of minutes spent gaming for the average person on an average day |
| videogame_time_players | holds number of minutes spent gaming for the average gamer on an average day |
```
## import data
categories = ["15 to 24", "25 to 34", "35 to 44", "45 to 54", "55 to 64", "65 to 74", "75 and over"]
leisure_time = [5*60+57, 4*60+53, 4*60+6, 4*60+44, 5*60+55, 7*60+19, 7*60+34]
videogame_time_all = [27, 10, 4, 4, 6, 6, 4]
videogame_time_players = [2*60+44, 2*60+34, 109, 127, 118, 133, 2*60+32]
```
## Organize
Since our data is just four simple lists, there is no need to organize it further.
## Explore-1
The code below will be used to help us look for evidence to answer our question. This can involve looking at data in table format, applying math and statistics, and creating different types of visualizations to represent our data.
```
fig1 = px.bar(x=videogame_time_all, y=categories,
title="Average Number of Minutes Spent Playing Video Games Per Day",
labels={'y':'Age of Canadians - Years', 'x':'Minutes Gaming on Average Day'})
fig1.show()
```
## Interpret-1
Our first figure shows 15-24 year olds spending far more time playing computer games than their older counterparts, with a small bump for Canadians around retirement age (55 to 64 and 65 to 74).
## Explore-2
```
fig2 = px.bar(x=videogame_time_players, y=categories,
title="Average Number of Minutes Spent Playing Video Games Per Day",
labels={'y':'Age of Canadians Who Play Computer Games - Years', 'x':'Minutes Gaming on Average Day'})
fig2.show()
```
## Interpret-2
There is a subtle but important difference between the data behind the first figure and the data behind this one. The first figure's averages were calculated using all respondents to the survey, while this second figure includes only those who actually play some computer games. Essentially, this second plot ignores any respondents who game for zero minutes on an average day.
We see a very different plot for this second figure. This figure is decidedly U-shaped. Those Canadians outside of working age seem to game the most.
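The two averages are linked arithmetically: the average over everyone equals the average over gamers multiplied by the share of the group that games at all. A quick back-of-the-envelope calculation (an inference from the table values above, not an official StatsCan figure) estimates that share for each age group:

```
# (average over everyone) = (average over gamers) * (share who game at all),
# so dividing the two columns gives a rough participation estimate.
categories = ["15 to 24", "25 to 34", "35 to 44", "45 to 54",
              "55 to 64", "65 to 74", "75 and over"]
videogame_time_all = [27, 10, 4, 4, 6, 6, 4]
videogame_time_players = [2*60+44, 2*60+34, 109, 127, 118, 133, 2*60+32]

for age, t_all, t_players in zip(categories, videogame_time_all,
                                 videogame_time_players):
    share = t_all / t_players  # estimated fraction of the group that games
    print(f"{age}: roughly {share:.0%} game on an average day")
```

This helps explain the U-shape: older groups who do game play nearly as long as young gamers, but far fewer of them game at all.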
## Explore-3
```
fig3 = px.bar(x=leisure_time, y=categories,
title="Average Number of Minutes Spent on Leisure Activities Per Day",
labels={'y':'Age of Canadians - Years', 'x':'Minutes of Leisure Activities Per Day'})
fig3.show()
```
## Interpret-3
This third plot isn't directly about gaming, but it provides context for the first two figures: it shows how much time each age group spends on leisure activities, including gaming. Its shape closely matches the second figure.
## Communicate
Below we will reflect on the new information that is presented from the data. When we look at the evidence, think about what you perceive about the information. Is this perception based on what the evidence shows? If others were to view it, what perceptions might they have? These writing prompts can help you reflect.
- Why do you think the second and third charts are so alike?
- What does it mean that when you look at the population of Canadians, the average 15-24 year old spends much more time gaming than the 75 and over, but they're almost the same when you only look at people who game at least some?
- If we had current data, how do you think these plots would look?
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Udacity - Machine Learning Engineer Nanodegree
## Capstone Project
### Title: Development of a LSTM Network to Predict Students’ Answers on Exam Questions
### Implementation of DKT:
#### Part 1: Define constants
```
dataset = "data/ASSISTments_skill_builder_data.csv" # Dataset path
best_model_file = "saved_models/ASSISTments.best.model.weights.hdf5" # File to save the model.
train_log = "logs/dktmodel.train.log" # File to save the training log.
eval_log = "logs/dktmodel.eval.log" # File to save the testing log.
optimizer = "adagrad" # Optimizer to use
lstm_units = 250 # Number of LSTM units
batch_size = 20 # Batch size
epochs = 100 # Number of epochs to train
dropout_rate = 0.6 # Dropout rate
verbose = 1 # Verbose = {0,1,2}
testing_rate = 0.2 # Portion of data to be used for testing
validation_rate = 0.2 # Portion of training data to be used for validation
```
#### Part 2: Pre-processing
```
from Utils import *
dataset, num_skills = read_file(dataset)
X_train, X_val, X_test, y_train, y_val, y_test = split_dataset(dataset, validation_rate, testing_rate)
print("======== Data Summary ========")
print("Data size: %d" % len(dataset))
print("Training data size: %d" % len(X_train))
print("Validation data size: %d" % len(X_val))
print("Testing data size: %d" % len(X_test))
print("Number of skills: %d" % num_skills)
print("==============================")
```
#### Part 3: Building the model
```
from StudentModel import DKTModel, DataGenerator
# Create generators for training/testing/validation
train_gen = DataGenerator(X_train[0:10], y_train[0:10], num_skills, batch_size)
val_gen = DataGenerator(X_val[0:10], y_val[0:10], num_skills, batch_size)
test_gen = DataGenerator(X_test[0:10], y_test[0:10], num_skills, batch_size)
# Create model
student_model = DKTModel(num_skills=train_gen.num_skills,
num_features=train_gen.feature_dim,
optimizer=optimizer,
hidden_units=lstm_units,
batch_size=batch_size,
dropout_rate=dropout_rate)
```
#### Part 4: Train the Model
```
history = student_model.fit(train_gen,
epochs=epochs,
val_gen=val_gen,
verbose=verbose,
filepath_bestmodel=best_model_file,
filepath_log=train_log)
```
#### Part 5: Load the Model with the Best Validation Loss
```
student_model.load_weights(best_model_file)
```
#### Part 6: Test the Model
```
result = student_model.evaluate(test_gen, metrics=['auc','acc','pre'], verbose=verbose, filepath_log=eval_log)
```
# Prepare data
```
from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
plt.ion() # interactive mode
landmarks_frame = pd.read_csv('data-faces/faces/face_landmarks.csv')
n = 65
img_name = landmarks_frame.iloc[n, 0]
landmarks = landmarks_frame.iloc[n, 1:]
landmarks = np.asarray(landmarks)
landmarks = landmarks.astype('float').reshape(-1, 2)
print('Image name: {}'.format(img_name))
print('Landmarks shape: {}'.format(landmarks.shape))
print('First 4 Landmarks: {}'.format(landmarks[:4]))
def show_landmarks(image, landmarks):
"""Show image with landmarks"""
plt.imshow(image)
plt.scatter(landmarks[:, 0], landmarks[:, 1], s=10, marker='.', c='r')
plt.pause(0.001) # pause a bit so that plots are updated
plt.figure()
show_landmarks(io.imread(os.path.join('data-faces/faces/', img_name)),
landmarks)
plt.show()
```
# Dataset class
`torch.utils.data.Dataset` is an abstract class representing a dataset. Your custom dataset should inherit `Dataset` and override the following methods:
* `__len__` so that `len(dataset)` returns the size of the dataset.
* `__getitem__` to support indexing such that `dataset[i]` can be used to get the i-th sample.
Let’s create a dataset class for our face landmarks dataset. We will read the csv in `__init__` but leave the reading of images to `__getitem__`. This is memory efficient because the images are not all stored in memory at once but read as required.
A sample of our dataset will be a dict `{'image': image, 'landmarks': landmarks}`. Our dataset will take an optional argument `transform` so that any required processing can be applied on the sample. We will see the usefulness of `transform` in the next section.
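The protocol is small enough to sketch without any files or images first. This toy class (hypothetical names; the "loading" is simulated with strings) mirrors the lazy pattern: the cheap index is built once in `__init__`, and each sample is only produced when `__getitem__` is called. The real face-landmarks version follows below.

```
# A toy dataset mimicking the lazy-loading pattern: only the list of
# file names lives in memory; each sample is "loaded" on demand.
class LazyToyDataset:
    def __init__(self, filenames, transform=None):
        self.filenames = filenames      # cheap: names only, no pixels
        self.transform = transform

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        # In a real dataset, this is where io.imread would run.
        sample = {'image': f"pixels-of-{self.filenames[idx]}",
                  'landmarks': [idx, idx]}
        if self.transform:
            sample = self.transform(sample)
        return sample

ds = LazyToyDataset(['a.jpg', 'b.jpg', 'c.jpg'])
print(len(ds))         # 3
print(ds[1]['image'])  # pixels-of-b.jpg
```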
```
class FaceLandmarksDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.landmarks_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
image = io.imread(img_name)
landmarks = self.landmarks_frame.iloc[idx, 1:]
landmarks = np.array([landmarks])
landmarks = landmarks.astype('float').reshape(-1, 2)
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
```
Let’s instantiate this class and iterate through the data samples. We will print the sizes of first 4 samples and show their landmarks.
```
face_dataset = FaceLandmarksDataset(csv_file='data-faces/faces/face_landmarks.csv',
root_dir='data-faces/faces/')
fig = plt.figure()
for i in range(len(face_dataset)):
sample = face_dataset[i]
print(i, sample['image'].shape, sample['landmarks'].shape)
ax = plt.subplot(1, 4, i + 1)
plt.tight_layout()
ax.set_title('Sample #{}'.format(i))
ax.axis('off')
show_landmarks(**sample)
if i == 3:
plt.show()
break
```
# Transforms
One issue we can see from the above is that the samples are not all the same size. Most neural networks expect images of a fixed size. Therefore, we will need to write some preprocessing code. Let’s create three transforms:
* Rescale: to scale the image
* RandomCrop: to crop from image randomly. This is data augmentation.
* ToTensor: to convert the numpy images to torch images (we need to swap axes).
We will write them as callable classes instead of simple functions so that the parameters of the transform need not be passed every time it’s called. For this, we just need to implement the `__call__` method and, if required, the `__init__` method. We can then use a transform like this:
```
tsfm = Transform(params)
transformed_sample = tsfm(sample)
```
```
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = transform.resize(image, (new_h, new_w))
# h and w are swapped for landmarks because for images,
# x and y axes are axis 1 and 0 respectively
landmarks = landmarks * [new_w / w, new_h / h]
return {'image': img, 'landmarks': landmarks}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
landmarks = landmarks - [left, top]
return {'image': image, 'landmarks': landmarks}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
# swap color axis because
# numpy image: H x W x C
# torch image: C x H x W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'landmarks': torch.from_numpy(landmarks)}
```
# Compose transforms
Now, we apply the transforms on a sample.
Let’s say we want to rescale the shorter side of the image to 256 and then randomly crop a square of size 224 from it, i.e., we want to compose the `Rescale` and `RandomCrop` transforms. `torchvision.transforms.Compose` is a simple callable class which allows us to do this.
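Under the hood, Compose is conceptually tiny. A minimal sketch (not the actual torchvision implementation) just calls each transform in order, feeding each output into the next:

```
# A minimal Compose: apply each callable in sequence.
class MiniCompose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, sample):
        for t in self.transforms:
            sample = t(sample)
        return sample

# Demonstrate on plain numbers instead of image samples.
double = lambda x: x * 2
add_one = lambda x: x + 1
pipeline = MiniCompose([double, add_one])
print(pipeline(10))  # (10 * 2) + 1 = 21
```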
```
scale = Rescale(256)
crop = RandomCrop(128)
composed = transforms.Compose([Rescale(256),
RandomCrop(224)])
# Apply each of the above transforms on sample.
fig = plt.figure()
sample = face_dataset[65]
for i, tsfrm in enumerate([scale, crop, composed]):
transformed_sample = tsfrm(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tsfrm).__name__)
show_landmarks(**transformed_sample)
plt.show()
```
# Iterating through the dataset
Let’s put this all together to create a dataset with composed transforms. To summarize, every time this dataset is sampled:
* An image is read from the file on the fly
* Transforms are applied on the read image
* Since one of the transforms is random, data is augmented on sampling
We can iterate over the created dataset with a for i in range loop as before.
```
transformed_dataset = FaceLandmarksDataset(csv_file='data-faces/faces/face_landmarks.csv',
root_dir='data-faces/faces/',
transform=transforms.Compose([
Rescale(256),
RandomCrop(224),
ToTensor()
]))
for i in range(len(transformed_dataset)):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['landmarks'].size())
if i == 3:
break
```
However, we are losing a lot of features by using a simple for loop to iterate over the data. In particular, we are missing out on:
* Batching the data
* Shuffling the data
* Loading the data in parallel using multiprocessing workers.
`torch.utils.data.DataLoader` is an iterator which provides all these features. The parameters used below should be clear. One parameter of interest is `collate_fn`. You can specify exactly how the samples should be batched using `collate_fn`. However, the default collate should work fine for most use cases.
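To make the role of a collate function concrete, here is a sketch (using numpy arrays rather than tensors, with made-up shapes) of what it does: take a list of per-sample dicts and stack them into one batched dict, which is essentially what the default collate does with tensors.

```
import numpy as np

# Stack a list of per-sample dicts into one dict of batched arrays.
def collate_dict_samples(samples):
    return {key: np.stack([s[key] for s in samples])
            for key in samples[0]}

batch = collate_dict_samples([
    {'image': np.zeros((3, 4, 4)), 'landmarks': np.zeros((68, 2))},
    {'image': np.ones((3, 4, 4)),  'landmarks': np.ones((68, 2))},
])
print(batch['image'].shape)      # (2, 3, 4, 4)
print(batch['landmarks'].shape)  # (2, 68, 2)
```

A custom `collate_fn` would be useful when samples cannot simply be stacked, e.g. variable-length sequences that need padding.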
```
dataloader = DataLoader(transformed_dataset, batch_size=4,
shuffle=True, num_workers=0)
# Helper function to show a batch
def show_landmarks_batch(sample_batched):
"""Show image with landmarks for a batch of samples."""
images_batch, landmarks_batch = \
sample_batched['image'], sample_batched['landmarks']
batch_size = len(images_batch)
im_size = images_batch.size(2)
grid_border_size = 2
grid = utils.make_grid(images_batch)
plt.imshow(grid.numpy().transpose((1, 2, 0)))
for i in range(batch_size):
plt.scatter(landmarks_batch[i, :, 0].numpy() + i * im_size + (i + 1) * grid_border_size,
landmarks_batch[i, :, 1].numpy() + grid_border_size,
s=10, marker='.', c='r')
plt.title('Batch from dataloader')
for i_batch, sample_batched in enumerate(dataloader):
print(i_batch, sample_batched['image'].size(),
sample_batched['landmarks'].size())
# observe 4th batch and stop.
if i_batch == 3:
plt.figure()
show_landmarks_batch(sample_batched)
plt.axis('off')
plt.ioff()
plt.show()
break
```
# Afterword: torchvision
In this tutorial, we have seen how to write and use datasets, transforms and dataloaders. The `torchvision` package provides some common datasets and transforms, so you might not even have to write custom classes. One of the more generic datasets available in torchvision is `ImageFolder`. It assumes that images are organized in the following way:
```
root/ants/xxx.png
root/ants/xxy.jpeg
root/ants/xxz.png
.
.
.
root/bees/123.jpg
root/bees/nsdf3.png
root/bees/asd932_.png
```
where ‘ants’, ‘bees’ etc. are class labels. Similarly, generic transforms that operate on `PIL.Image`, like `RandomHorizontalFlip` and `Scale`, are also available. You can use these to write a dataloader like this:
```
import torch
from torchvision import transforms, datasets
data_transform = transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
hymenoptera_dataset = datasets.ImageFolder(root='hymenoptera_data/train',
transform=data_transform)
dataset_loader = torch.utils.data.DataLoader(hymenoptera_dataset,
batch_size=4, shuffle=True,
num_workers=4)
```
# Convolutional Neural Networks with TensorFlow
"Deep Learning" is a general term that usually refers to the use of neural networks with multiple layers that simulate the way the human brain learns and makes decisions. A convolutional neural network is a kind of neural network that extracts *features* from matrices of numeric values (often images) by convolving multiple filters over the matrix values to apply weights and identify patterns, such as edges, corners, and so on in an image. The numeric representations of these patterns are then passed to a fully-connected neural network layer to map the features to specific classes.
There are several commonly used frameworks for creating CNNs. In this notebook, we'll build a simple example CNN using TensorFlow.
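Before turning to TensorFlow, the convolution idea itself can be made concrete with a minimal sketch of a single filter sliding over an image (valid padding, stride 1, plain numpy; real frameworks use far faster implementations):

```
import numpy as np

# Slide a small kernel over the image, taking a weighted sum at
# each position (valid padding, stride 1).
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge filter fires where pixel values change left-to-right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
print(convolve2d(image, edge_kernel))  # nonzero only at the edge column
```

During training, a CNN learns the values in kernels like this one, rather than having them hand-designed.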
## Install and import libraries
First, let's install and import the TensorFlow libraries we'll need.
```
!pip install --upgrade tensorflow
import tensorflow
from tensorflow import keras
print('TensorFlow version:',tensorflow.__version__)
print('Keras version:',keras.__version__)
```
## Explore the data
In this exercise, you'll train a CNN-based classification model that can classify images of geometric shapes. Let's take a look at the classes of shape the model needs to identify.
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
%matplotlib inline
# The images are in the data/shapes folder
data_folder = 'data/shapes'
# Get the class names
classes = os.listdir(data_folder)
classes.sort()
print(len(classes), 'classes:')
print(classes)
# Show the first image in each folder
fig = plt.figure(figsize=(8, 12))
i = 0
for sub_dir in os.listdir(data_folder):
i+=1
img_file = os.listdir(os.path.join(data_folder,sub_dir))[0]
img_path = os.path.join(data_folder, sub_dir, img_file)
img = mpimg.imread(img_path)
a=fig.add_subplot(1, len(classes),i)
a.axis('off')
imgplot = plt.imshow(img)
a.set_title(img_file)
plt.show()
```
## Prepare the data
Before we can train the model, we need to prepare the data. We'll divide the feature values by 255 to normalize them as floating point values between 0 and 1, and we'll split the data so that we can use 70% of it to train the model and hold back 30% to validate it. When loading the data, the data generator will assign one-hot encoded numeric labels to indicate which class each image belongs to, based on the subfolders in which the data is stored. In this case, there are three subfolders - *circle*, *square*, and *triangle* - so the labels consist of three *0* or *1* values indicating which of these classes is associated with the image - for example, the label [0 1 0] indicates that the image belongs to the second class (*square*).
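As a quick illustration of what those labels look like, a one-hot vector is just a list of zeros with a single 1 at the class index (a sketch of the idea, not the generator's actual code):

```
# Build a one-hot label: all zeros except a 1 at the class index.
def one_hot(index, num_classes):
    vec = [0] * num_classes
    vec[index] = 1
    return vec

classes = ['circle', 'square', 'triangle']
print(one_hot(classes.index('square'), len(classes)))  # [0, 1, 0]
```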
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
img_size = (128, 128)
batch_size = 30
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
data_folder,
target_size=img_size,
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
classnames = list(train_generator.class_indices.keys())
print('Data generators ready')
```
## Define the CNN
Now we're ready to create our model. This involves defining the layers for our CNN, and compiling them for multi-class classification.
```
# Define a CNN classifier network
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
# Define the model as a sequence of layers
model = Sequential()
# The input layer accepts an image and applies a convolution that uses 32 6x6 filters and a rectified linear unit activation function
model.add(Conv2D(32, (6, 6), input_shape=train_generator.image_shape, activation='relu'))
# Next we'll add a max pooling layer with a 2x2 patch
model.add(MaxPooling2D(pool_size=(2,2)))
# We can add as many layers as we think necessary - here we'll add another convolution and max pooling layer
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# And another set
model.add(Conv2D(32, (6, 6), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# A dropout layer randomly drops some nodes to reduce inter-dependencies (which can cause over-fitting)
model.add(Dropout(0.2))
# Flatten the feature maps
model.add(Flatten())
# Generate a fully-connected output layer with a predicted probability for each class
# (softmax ensures all probabilities sum to 1)
model.add(Dense(train_generator.num_classes, activation='softmax'))
# With the layers defined, we can now compile the model for categorical (multi-class) classification
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.summary())
```
## Train the model
With the layers of the CNN defined, we're ready to train the model using our image data. In the example below, we use 5 iterations (*epochs*) to train the model in 30-image batches, holding back 30% of the data for validation. After each epoch, the loss function measures the error (*loss*) in the model and adjusts the weights (which were randomly generated for the first iteration) to try to improve accuracy.
> **Note**: We're only using 5 epochs to minimize the training time for this simple example. A real-world CNN is usually trained over more epochs than this. CNN model training is processor-intensive, involving a lot of matrix and vector-based operations, so it's recommended to perform this on a system that can leverage GPUs, which are optimized for these kinds of calculations. This will take a while to complete on a CPU-based system - status will be displayed as the training progresses.
```
# Train the model over 5 epochs using 30-image batches and using the validation holdout dataset for validation
num_epochs = 5
history = model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
```
## View the loss history
We tracked average training and validation loss history for each epoch. We can plot these to verify that loss reduced as the model was trained, and to detect *overfitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase).
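That check can also be done programmatically. Here is a rough heuristic (a sketch with made-up loss values, not part of Keras) that reports the epoch at which validation loss has stopped improving for a few epochs in a row, the point where further training-loss gains become suspect:

```
# Report the first epoch at which validation loss has failed to
# improve for `patience` consecutive epochs.
def first_plateau_epoch(val_loss, patience=2):
    best = float('inf')
    stale = 0
    for epoch, v in enumerate(val_loss, start=1):
        if v < best:
            best, stale = v, 0
        else:
            stale += 1
        if stale >= patience:
            return epoch
    return None  # validation loss was still improving

val_history = [0.9, 0.6, 0.55, 0.58, 0.62]  # made-up values
print(first_plateau_epoch(val_history))  # 5
```

This is the same idea behind Keras's `EarlyStopping` callback, which stops training automatically instead of just reporting the epoch.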
```
%matplotlib inline
from matplotlib import pyplot as plt
epoch_nums = range(1,num_epochs+1)
training_loss = history.history["loss"]
validation_loss = history.history["val_loss"]
plt.plot(epoch_nums, training_loss)
plt.plot(epoch_nums, validation_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
```
## Evaluate model performance
We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class.
```
# Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
print("Generating predictions from validation data...")
# Get the image and label arrays for the first batch of validation data
x_test = validation_generator[0][0]
y_test = validation_generator[0][1]
# Use the model to predict the class
class_probabilities = model.predict(x_test)
# The model returns a probability value for each class
# The one with the highest probability is the predicted class
predictions = np.argmax(class_probabilities, axis=1)
# The actual labels are one-hot encoded (e.g. [0 1 0]), so get the index with the value 1
true_labels = np.argmax(y_test, axis=1)
# Plot the confusion matrix
cm = confusion_matrix(true_labels, predictions)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(classnames))
plt.xticks(tick_marks, classnames, rotation=85)
plt.yticks(tick_marks, classnames)
plt.xlabel("Predicted Shape")
plt.ylabel("Actual Shape")
plt.show()
```
## Save the Trained model
Now that you've trained a working model, you can save it (including the trained weights) for use later.
```
# Save the trained model
modelFileName = 'models/shape_classifier.h5'
model.save(modelFileName)
del model # deletes the existing model variable
print('model saved as', modelFileName)
```
## Use the trained model
When you have a new image, you can use the saved model to predict its class.
```
from tensorflow.keras import models
import numpy as np
from random import randint
import os
%matplotlib inline
# Function to predict the class of an image
def predict_image(classifier, image):
from tensorflow import convert_to_tensor
# The model expects a batch of images as input, so we'll create an array of 1 image
imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2])
# We need to format the input to match the training data
# The generator loaded the values as floating point numbers
# and normalized the pixel values, so...
imgfeatures = imgfeatures.astype('float32')
imgfeatures /= 255
# Use the model to predict the image class
class_probabilities = classifier.predict(imgfeatures)
# Find the class predictions with the highest predicted probability
index = int(np.argmax(class_probabilities, axis=1)[0])
return index
# Function to create a random image (of a square, circle, or triangle)
def create_image (size, shape):
from random import randint
import numpy as np
from PIL import Image, ImageDraw
xy1 = randint(10,40)
xy2 = randint(60,100)
col = (randint(0,200), randint(0,200), randint(0,200))
img = Image.new("RGB", size, (255, 255, 255))
draw = ImageDraw.Draw(img)
if shape == 'circle':
draw.ellipse([(xy1,xy1), (xy2,xy2)], fill=col)
elif shape == 'triangle':
draw.polygon([(xy1,xy1), (xy2,xy2), (xy2,xy1)], fill=col)
else: # square
draw.rectangle([(xy1,xy1), (xy2,xy2)], fill=col)
del draw
return np.array(img)
# Create a random test image
classnames = os.listdir(os.path.join('data', 'shapes'))
classnames.sort()
img = create_image ((128,128), classnames[randint(0, len(classnames)-1)])
plt.axis('off')
plt.imshow(img)
# Use the classifier to predict the class
model = models.load_model(modelFileName) # loads the saved model
class_idx = predict_image(model, img)
print (classnames[class_idx])
```
## Further Reading
To learn more about training convolutional neural networks with TensorFlow, see the [TensorFlow documentation](https://www.tensorflow.org/overview).
## Challenge: Safari Image Classification
Hopefully this notebook has shown you the main steps in training and evaluating a CNN. Why not put what you've learned into practice with our Safari image classification challenge in the [/challenges/05 - Safari CNN Challenge.ipynb](./challenges/05%20-%20Safari%20CNN%20Challenge.ipynb) notebook?
> **Note**: The time to complete this optional challenge is not included in the estimated time for this exercise - you can spend as little or as much time on it as you like!
# PyTorch Basics: Tensors
In the first chapter, we already gained some familiarity with PyTorch through the official introductory tutorial; this chapter covers the PyTorch fundamentals in detail.
Mastering all of these basics will let you progress much faster in the later applications. If you already know PyTorch reasonably well, feel free to skip this chapter.
```
# First, import the required packages
import torch
import numpy as np
# Print the version
torch.__version__
```
## Tensors
The basic computational unit in PyTorch is the Tensor. Like NumPy's ndarray, it represents a multi-dimensional matrix.
The biggest difference from ndarray is that a PyTorch Tensor can run on a GPU, while NumPy's ndarray can only run on the CPU; running on the GPU greatly speeds up computation.
Let's generate a simple tensor:
```
x = torch.rand(2, 3)
x
```
This generates a matrix with 2 rows and 3 columns. Let's check its size:
```
# You can check it with the same shape attribute as in numpy
print(x.shape)
# You can also use the size() function; both return the same result
print(x.size())
```
A tensor is a multilinear map defined on the Cartesian product of some vector spaces and some dual spaces. Its coordinates form a quantity with |n| components in an |n|-dimensional space, where each component is a function of the coordinates, and under a coordinate transformation the components transform linearly according to certain rules. r is called the rank or order of the tensor (unrelated to the rank or order of a matrix). (From Baidu Baike)
Now let's generate some higher-dimensional tensors:
```
y=torch.rand(2,3,4,5)
print(y.size())
y
```
Up to isomorphism, a tensor of order zero (r = 0) is a scalar, a tensor of order one (r = 1) is a vector, a tensor of order two (r = 2) is a matrix, and tensors of order three and above are collectively called multi-dimensional tensors.
Scalars deserve special attention. Let's generate one first:
```
# Create one directly from an existing number
scalar = torch.tensor(3.1433223)
print(scalar)
# Print the scalar's size
scalar.size()
```
For a scalar, we can use .item() to extract the corresponding Python numeric value directly:
```
scalar.item()
```
In particular, a tensor containing only a single element can also call the `tensor.item()` method:
```
tensor = torch.tensor([3.1433223])
print(tensor)
tensor.size()
tensor.item()
```
### Basic types
Tensors have five basic numeric data types:
- 32-bit floating point: torch.FloatTensor (the default)
- 64-bit integer: torch.LongTensor
- 32-bit integer: torch.IntTensor
- 16-bit integer: torch.ShortTensor
- 64-bit floating point: torch.DoubleTensor
Besides the numeric types above, there are also
byte and char types:
```
long=tensor.long()
long
half=tensor.half()
half
int_t=tensor.int()
int_t
flo = tensor.float()
flo
short = tensor.short()
short
ch = tensor.char()
ch
bt = tensor.byte()
bt
```
### NumPy conversion
Use the numpy() method to convert a Tensor into an ndarray:
```
a = torch.randn((3, 2))
# Convert the tensor to a NumPy array
numpy_a = a.numpy()
print(numpy_a)
```
Convert a NumPy array back to a Tensor:
```
torch_a = torch.from_numpy(numpy_a)
torch_a
```
***The Tensor and the NumPy object share the same memory, so conversions between them are fast and consume almost no resources. This also means that if one of them changes, the other changes with it.***
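A quick sketch (assuming `torch` and `numpy` are installed) showing that an in-place change on one side is visible on the other:

```python
import numpy as np
import torch

a = torch.ones(3)   # tensor([1., 1., 1.])
b = a.numpy()       # ndarray view sharing the same memory
a.add_(1)           # in-place add on the tensor
print(b)            # the ndarray reflects the change: [2. 2. 2.]
```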
### Moving between devices
In general you can move a tensor to the GPU with the .cuda method; this step requires a CUDA-capable device.
```
cpu_a=torch.rand(4, 3)
cpu_a.type()
gpu_a=cpu_a.cuda()
gpu_a.type()
```
Use the .cpu method to move a tensor back to the CPU:
```
cpu_b=gpu_a.cpu()
cpu_b.type()
```
If we have multiple GPUs, we can use the to method to specify which device to use. Here is just a simple example:
```
# Use torch.cuda.is_available() to check whether a CUDA device is present
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# Move the tensor to the device
gpu_b=cpu_b.to(device)
gpu_b.type()
```
### Initialization
PyTorch provides many built-in initialization methods:
```
# Randomly initialize a 2-D array from the uniform distribution on [0, 1]
rnd = torch.rand(5, 3)
rnd
## Initialize filled with ones
one = torch.ones(2, 2)
one
## Initialize filled with zeros
zero = torch.zeros(2, 2)
zero
# Initialize an identity matrix: ones on the diagonal, zeros elsewhere
eye = torch.eye(2, 2)
eye
```
### Common operations
The tensor-operation API in PyTorch is very similar to NumPy's. If you are familiar with the operations in NumPy, the two are basically the same:
```
x = torch.randn(3, 3)
print(x)
# Take the maximum along each row
max_value, max_idx = torch.max(x, dim=1)
print(max_value, max_idx)
# Sum each row
sum_x = torch.sum(x, dim=1)
print(sum_x)
y=torch.randn(3, 3)
z = x + y
print(z)
```
As the official 60-minute tutorial notes, operations whose names end in _ modify the tensor they are called on in place.
```
# After add_ completes, the value of x has changed
x.add_(y)
print(x)
```
That covers most of the basic tensor operations. The next chapter introduces PyTorch's automatic differentiation mechanism.
| github_jupyter |
# Python Module (Indonesian Series)
## Series Four
___
Coded by psychohaxer | Version 1.8 (2020.12.13)
___
This notebook contains example Python code together with its output, for use as a coding reference. It may be shared and edited freely, as long as the author's name is neither changed nor removed. Happy learning, and enjoy your time.
Note: this module uses Python 3.
This notebook is licensed under the [MIT License](https://opensource.org/licenses/MIT).
___
## Chapter 4: Introduction to Data Types
Data stored in memory comes in different types. A length, for example, is stored as a number; a person's name is stored as a string of characters; a temperature is stored as a decimal number; and so on. Each data type supports its own set of operations.
Python has six standard, most commonly used data types:
1. Number
2. String
3. List
4. Tuple
5. Set
6. Dictionary
### Numbers
Numbers are one of the basic data types in Python. Python supports integers, floating-point numbers, and complex numbers, represented by the classes `int`, `float`, and `complex` respectively.
An integer is a whole number, i.e. a number without a decimal point, such as 1, 2, 100, -30, -5, or 99999. The number of digits in a Python integer is unbounded: as long as memory allows, an integer can have arbitrarily many digits.
A float is a decimal number, such as 5.5, 3.9, 72.8, -1.5, or -0.7878999. Floats are accurate to about 15-17 significant digits.
A complex number consists of two parts, a real part and an imaginary part, for example 3 + 2j or 9 - 5j.
We can use the `type()` function to find out the data type of any object in Python.
```
## Initialize variables
x = 5
y = 1.2
z = 7+4j
## Display each variable's value and type
print(x, 'has type', type(x))
print(y, 'has type', type(y))
print(z, 'has type', type(z))
```
An integer in Python can be arbitrarily long; its length is limited only by the available memory. The float type is accurate to about 15-17 significant digits.
```
## Initialize variables
x = 123456789123456789123456789
y = 0.123456789123456789123456789
z = 2+9j
## Display the values
print(x)
print(y)
print(z)
```
### String
A string is one or more characters placed between quotation marks, either single (`'`) or double (`"`). Letters, digits, and other characters combined into text are all examples of strings.
A string is an ordered, indexed data type. Indexing starts at 0 from the front and at -1 from the back. Each character can be accessed by its index with the form `string[index]`. Strings also support <i>slicing</i>, i.e. accessing a group of characters as a substring, with the form `string[start:end]`. The following examples make this clear.
```
## Declare a string variable
kalimat = "Halo Python!"
## Display the variable
print(kalimat) ## print the full string
```
Slicing is covered in more depth in the next module. Here is a brief taste of slicing, which also works on lists and tuples.
```
## print the first character
print(kalimat[0])
## print the last character
print(kalimat[-1])
## print indexes 4 through 6
print(kalimat[4:7])
## print indexes 0 through 3
print(kalimat[:4])
## print the type of the variable
print(type(kalimat))
```
Strings can be combined with a comma (`,`) in `print` (which inserts a space) or concatenated with a plus (`+`).
```
## combining strings
a = "Hello"
b = "world!"
## using a comma
print(a,b)
## using a plus
print(a + b)
```
### List
A list is a data type that holds one or more values, often called its items, elements, or members. A list is created by placing all of its items inside square brackets `[]`, separated by commas. A list's members can be of a single type or of mixed types. Indexing starts at 0, not 1.
```
## An empty list
lst = []
## Display the list
print(lst)
## A list of integers
lst = [1,2,3,4,5]
## Display the list
print(lst)
## A list of strings
lst = ['Modul','Python','Bahasa','Indonesia']
## Display the list
print(lst)
# A nested list
lst = [1, ['Modul','Python','Bahasa','Indonesia'], [1200, 3500, 'a'], 14+5j]
## Display the list
print(lst)
```
### Tuple
A tuple is another data type, similar to a list. The difference is that a tuple's members cannot be changed: lists are mutable, while tuples are <i>immutable</i>. Once a tuple is created, its contents can no longer be modified. Because their values cannot change, tuples are processed faster than lists.
A tuple is declared using parentheses `()`, with its members separated by commas. Tuples are useful for data that is not meant to be modified. For example, the color composition for white is the tuple (255,255,255).
As with lists, we can access a tuple's members by index.
```
## Declare the tuples white and red
white = (255,255,255)
red = (255,0,0)
## Display the tuples
print(white)
print(red)
```
A tuple raises an error if you try to replace one of its values.
```
white[1] = 0
```
### Set
A set is an unordered data type in Python. Its members are <u>unique (no duplicates)</u>: if we put several equal members into a set, <u>only one of them is stored</u>.
Sets can be used for mathematical set operations such as union, intersection, difference, and complement.
A set is created by placing its members inside curly braces `{ }`, separated by commas. We can also build a set from a list by passing the list to the `set()` function.
A set may contain mixed data: integers, floats, strings, and so on. However, a set cannot contain lists, sets, or dictionaries.
```
## A set of integers
my_set = {2,3,4}
## Display the set
print(my_set)
## Build a set from a list with the set() function
## Note that duplicated items disappear
my_list = [1,3,3,7,3,1,7]
my_set = set(my_list)
## Display the set
print(my_set)
## A set with mixed data
my_set = {1, 2.0, "Python", (3,4,5)}
## Display the set
print(my_set)
## A set cannot contain a list
## The following example raises a TypeError
my_set = {1,2,[3,4,5]}
```
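As a brief sketch of the set operations mentioned above (union, intersection, and difference):

```python
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}

print(a | b)  # union: {1, 2, 3, 4, 5, 6}
print(a & b)  # intersection: {3, 4}
print(a - b)  # difference: {1, 2}
```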
### Dictionary
A dictionary is a data type whose members are key-value pairs, much like a real dictionary where each word has a meaning. Dictionaries are commonly used for large data sets and for accessing members in arbitrary order. Dictionary members have no positional index.
A dictionary is declared using curly braces `{}`, where each member has the form `key:value` and members are separated by commas. Keys can be of any hashable type, and values can be of any type at all.
To access the value of a dictionary member, we use its key.
```
## Declare a dictionary
angka = {1:'satu', 2:'dua', 'tiga':3}
## Display the dictionary
print(angka)
## Access a dictionary element by key
print(angka[2])
```
___
## Exercise
1. Display a dictionary containing your name, age, and address!
```
## Start typing below this line
data_diri = {}
print(data_diri)
```
___
coded with ❤ by [psychohaxer](http://github.com/psychohaxer)
___
| github_jupyter |
```
import random
random.seed(a=613)
import numpy as np
import scProject
import scanpy as sc
patterns = sc.read_h5ad('patterns_anndata.h5ad')
dataset = sc.read_h5ad('/Users/asherbaraban/PycharmProjects/scProject/scProject/test/targetALS_elim_annotated_20200510/p6counts.h5ad')
dataset_filtered, patterns_filtered = scProject.matcher.filterAnnDatas(dataset, patterns, 'id')
import matplotlib.pyplot as plt
import numpy as np
microglia= dataset_filtered[dataset_filtered.obs['assigned_cell_type'].isin(['Microglia'])].copy()
others= dataset_filtered.obs['assigned_cell_type'].unique().remove_categories('Microglia')
rest = dataset_filtered[dataset_filtered.obs['assigned_cell_type'].isin(list(others))].copy()
print(microglia.shape, rest.shape, dataset_filtered.shape)
microglia.X = np.log2(microglia.X + 1e-30) #log transform for statistical tests
rest.X = np.log2(rest.X + 1e-30) #log transform for statistical tests
plt.rcParams['figure.figsize']= [5,50]
df5 = scProject.stats.projectionDriver(patterns_filtered, microglia, rest,.999999999999,'gene_short_name', 5, display=False)
sigs5 = df5[0].index
fiveWCIS = df5[1].loc[sigs5]
fiveWCIS['rank'] = abs(fiveWCIS['Low']+fiveWCIS['High'])
fiveWCIS = fiveWCIS.sort_values(by='rank', ascending=False).head(50)
counter = len(fiveWCIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(fiveWCIS.index) ,fiveWCIS['Low'], fiveWCIS['High'], range(len(fiveWCIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 5 Weighted CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF5Weighted.pdf", bbox_inches='tight')
plt.show()
# Bon CIs
fiveWCIS = df5[0].loc[fiveWCIS.index]
counter = len(fiveWCIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(fiveWCIS.index) ,fiveWCIS['Low'], fiveWCIS['High'], range(len(fiveWCIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 5 Ranked")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF5Bon.pdf", bbox_inches='tight')
plt.show()
df57 = scProject.stats.projectionDriver(patterns_filtered, microglia, rest,.999999999999,'gene_short_name', 57, display=False)
sigs57 = df57[0].index
five7WCIS = df57[1].loc[sigs57]
five7WCIS['rank'] = abs(five7WCIS['Low']+five7WCIS['High'])
five7WCIS = five7WCIS.sort_values(by='rank', ascending=False).head(50)
counter = len(five7WCIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(five7WCIS.index) ,five7WCIS['Low'], five7WCIS['High'], range(len(five7WCIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 57 Weighted CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF57Weighted.pdf", bbox_inches='tight')
plt.show()
# Bon CIs
five7WCIS = df57[0].loc[five7WCIS.index]
counter = len(five7WCIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(five7WCIS.index) ,five7WCIS['Low'], five7WCIS['High'], range(len(five7WCIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 57 Bon CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF57Bon.pdf", bbox_inches='tight')
plt.show()
# 57 exclusive
e57 = df57[0].index.difference(df5[0].index)
exclusive57WCIS = df57[1].loc[e57]
exclusive57WCIS['rank'] = abs(exclusive57WCIS['Low']+exclusive57WCIS['High'])
exclusive57WCIS = exclusive57WCIS.sort_values(by='rank', ascending=False)
counter = len(exclusive57WCIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(exclusive57WCIS.index) ,exclusive57WCIS['Low'], exclusive57WCIS['High'], range(len(exclusive57WCIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 57 Exclusive Weighted CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF57ExclusiveWeighted.pdf", bbox_inches='tight')
plt.show()
# Bon CIs
exclusive57CIS = df57[0].loc[exclusive57WCIS.index]
counter = len(exclusive57CIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(exclusive57CIS.index) ,exclusive57CIS['Low'], exclusive57CIS['High'], range(len(exclusive57CIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 57 Exclusive Bon CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF57ExclusiveBon.pdf", bbox_inches='tight')
plt.show()
# 5 exclusive
e5 = df5[0].index.difference(df57[0].index)
exclusive5WCIS = df5[1].loc[e5]
exclusive5WCIS['rank'] = abs(exclusive5WCIS['Low']+exclusive5WCIS['High'])
exclusive5WCIS = exclusive5WCIS.sort_values(by='rank', ascending=False)
counter = len(exclusive5WCIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(exclusive5WCIS.index) ,exclusive5WCIS['Low'], exclusive5WCIS['High'], range(len(exclusive5WCIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 5 Exclusive Weighted CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/MicrogliaF5ExclusiveWeighted.pdf", bbox_inches='tight')
plt.show()
# 5 Exclusive Bon CIs
exclusive5CIS = df5[0].loc[exclusive5WCIS.index]
counter = len(exclusive5CIS)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(exclusive5CIS.index) ,exclusive5CIS['Low'], exclusive5CIS['High'], range(len(exclusive5CIS))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 5 Exclusive Bon CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
# plt.savefig("557MG/MicrogliaF57ExclusiveBon.pdf", bbox_inches='tight')
plt.show()
# shared genes
import pandas as pd
shared = df5[0].index.intersection(df57[0].index)
shared5WCI = df5[1].loc[shared]
shared57WCI = df57[1].loc[shared]
shared5WCI.columns = ['5Low', '5High']
shared57WCI.columns = ['57Low', '57High']
shared5WCI['rank5'] = abs(shared5WCI['5Low']+shared5WCI['5High'])
shared57WCI['rank57'] = abs(shared57WCI['57Low']+shared57WCI['57High'])
tog = pd.concat([shared57WCI, shared5WCI], axis=1)
tog['rank'] = tog['rank5'] + tog['rank57']
tog = tog.sort_values(by='rank', ascending=False).head(50)
# Bon CIs
sharedCIs = df5[0].loc[tog.index]
counter = len(sharedCIs)-1
yAxis = []
plt.rcParams['figure.figsize']= [5, 15]
for idx,low, high,y in zip(list(tog.index) ,sharedCIs['Low'], sharedCIs['High'], range(len(sharedCIs))):
plt.plot((low, high), (counter, counter), '-', color='blue')
if counter == 0:
plt.plot((float(low+high)/2.0), counter,'o', color='blue', label='Mean')
else:
plt.plot((float(low+high)/2.0), counter,'o', color='blue')
yAxis.insert(0,idx)
counter-=1
plt.yticks(range(len(yAxis)), yAxis)
plt.title("Microglia Feature 5 and 57 Shared Weighted CIs")
plt.plot((0,0), (0,len(yAxis)), '--', color='black')
plt.ylim(top= len(yAxis))
plt.ylim(bottom=-.5)
plt.legend()
plt.savefig("557MG/F5andF57shared.pdf", bbox_inches='tight')
plt.show()
```
| github_jupyter |
# Regex
In this lesson, we'll learn about a useful tool in the NLP toolkit: regex.
Let's consider two motivating examples:
#### 1. The phone number problem
Suppose we are given some data that includes phone numbers:
123-456-7890
123 456 7890
101 Howard
Some of the phone numbers have different formats (hyphens, no hyphens). Also, there are some errors in the data -- 101 Howard isn't a phone number! How can we find all the phone numbers?
#### 2. Creating our own tokens
In the previous lessons, we used sklearn or fastai to tokenize our text. What if we want to do it ourselves?
## The phone number problem
Suppose we are given some data that includes phone numbers:
123-456-7890
123 456 7890
(123)456-7890
101 Howard
Some of the phone numbers have different formats (hyphens, no hyphens, parentheses). Also, there are some errors in the data-- 101 Howard isn't a phone number! How can we find all the phone numbers?
We will attempt this without regex, and see that it quickly leads to a lot of if/else branching statements and isn't a very promising approach:
### Attempt 1 (without regex)
```
import string

phone1 = "123-456-7890"
phone2 = "123 456 7890"
not_phone1 = "101 Howard"
string.digits
def check_phone(inp):
valid_chars = string.digits + ' -()'
for char in inp:
if char not in valid_chars:
return False
return True
assert(check_phone(phone1))
assert(check_phone(phone2))
assert(not check_phone(not_phone1))
```
### Attempt 2 (without regex)
```
not_phone2 = "1234"
assert(not check_phone(not_phone2))
def check_phone(inp):
nums = string.digits
valid_chars = nums + ' -()'
num_counter = 0
for char in inp:
if char not in valid_chars:
return False
if char in nums:
num_counter += 1
if num_counter==10:
return True
else:
return False
assert(check_phone(phone1))
assert(check_phone(phone2))
assert(not check_phone(not_phone1))
assert(not check_phone(not_phone2))
```
### Attempt 3 (without regex)
But we also need to extract the digits!
Also, what about:
34!NA5098gn#213ee2
```
not_phone3 = "34 50 98 21 32"
assert(not check_phone(not_phone3))
not_phone4 = "(34)(50)()()982132"
assert(not check_phone(not_phone3))
```
This is getting increasingly unwieldy. We need a different approach.
## Introducing regex
Useful regex resources:
- https://regexr.com/
- http://callumacrae.github.io/regex-tuesday/
- https://regexone.com/
**Best practice: Be as specific as possible.**
Parts of the following section were adapted from Brian Spiering, who taught the MSDS [NLP elective last summer](https://github.com/brianspiering/nlp-course).
### What is regex?
Regular expressions is a pattern matching language.
Instead of writing `0 1 2 3 4 5 6 7 8 9`, you can write `[0-9]` or `\d`
It is a Domain Specific Language (DSL): a powerful (but limited) language.
**What other DSLs do you already know?**
- SQL
- Markdown
- TensorFlow
### Matching Phone Numbers (The "Hello, world!" of Regex)
`[0-9][0-9][0-9]-[0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]` matches a US telephone number.
Refactored: `\d\d\d-\d\d\d-\d\d\d\d`
A **metacharacter** is one or more special characters that have a unique meaning and are NOT used as literals in the search expression. For example "\d" means any digit.
**Metacharacters are the special sauce of regex.**
Quantifiers
-----
Allow you to specify how many times the preceding expression should match.
`{}` is an exact quantifier
Refactored: `\d{3}-\d{3}-\d{4}`
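A minimal sketch of the refactored pattern in action with Python's `re` module (the sample data is illustrative):

```python
import re

data = "123-456-7890\n123 456 7890\n101 Howard"
pattern = re.compile(r"\d{3}-\d{3}-\d{4}")
print(pattern.findall(data))  # ['123-456-7890'] -- only the hyphenated format matches
```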
Inexact quantifiers
-----
1. `?` question mark - zero or one
2. `*` star - zero or more
3. `+` plus sign - one or more
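A quick sketch of the three inexact quantifiers (the patterns here are made up for illustration):

```python
import re

# '-?' : optional hyphen (zero or one)
assert re.fullmatch(r"\d{3}-?\d{4}", "555-1234")
assert re.fullmatch(r"\d{3}-?\d{4}", "5551234")

# '\d*' : zero or more digits; '\d+' : one or more digits
assert re.fullmatch(r"x\d*", "x")           # matches -- zero digits is fine
assert re.fullmatch(r"x\d+", "x") is None   # no match -- needs at least one digit
```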
### Regex can look really weird, since it's so concise
The best (only?) way to learn it is through practice. Otherwise, you feel like you're just reading lists of rules.
Let's take 15 minutes to begin working through the lessons on [regexone](https://regexone.com/).
**Reminder: Be as specific as possible!**
### Pros & Cons of Regex
**What are the advantages of regex?**
1. Concise and powerful pattern matching DSL
2. Supported by many computer languages, including SQL
**What are the disadvantages of regex?**
1. Brittle
2. Hard to write, can get complex to be correct
3. Hard to read
## Revisiting tokenization
In the previous lessons, we used a tokenizer. Now, let's learn how we could do this ourselves, and get a better understanding of tokenization.
What if we needed to create our own tokens?
```
import re
re_punc = re.compile("([\"\''().,;:/_?!—\-])") # add spaces around punctuation
re_apos = re.compile(r"n ' t ") # n't
re_bpos = re.compile(r" ' s ") # 's
re_mult_space = re.compile(r" +") # collapse runs of spaces into a single space
def simple_toks(sent):
sent = re_punc.sub(r" \1 ", sent)
sent = re_apos.sub(r" n't ", sent)
sent = re_bpos.sub(r" 's ", sent)
sent = re_mult_space.sub(' ', sent)
return sent.lower().split()
text = "I don't know who Kara's new friend is-- is it 'Mr. Toad'?"
' '.join(simple_toks(text))
text2 = re_punc.sub(r" \1 ", text); text2
text3 = re_apos.sub(r" n't ", text2); text3
text4 = re_bpos.sub(r" 's ", text3); text4
re_mult_space.sub(' ', text4)
sentences = ['All this happened, more or less.',
'The war parts, anyway, are pretty much true.',
"One guy I knew really was shot for taking a teapot that wasn't his.",
'Another guy I knew really did threaten to have his personal enemies killed by hired gunmen after the war.',
'And so on.',
"I've changed all their names."]
tokens = list(map(simple_toks, sentences))
tokens
```
Once we have our tokens, we need to convert them to integer ids. We will also need to know our vocabulary, and have a way to convert between words and ids.
```
import collections
PAD = 0; SOS = 1
def toks2ids(sentences):
voc_cnt = collections.Counter(t for sent in sentences for t in sent)
vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)
vocab.insert(PAD, "<PAD>")
vocab.insert(SOS, "<SOS>")
w2id = {w:i for i,w in enumerate(vocab)}
ids = [[w2id[t] for t in sent] for sent in sentences]
return ids, vocab, w2id, voc_cnt
ids, vocab, w2id, voc_cnt = toks2ids(tokens)
ids
vocab
```
Q: what could be another name of the `vocab` variable above?
```
w2id
```
What are the uses of RegEx?
---
1. Find / Search
1. Find & Replace
2. Cleaning
Don't forget about Python's `str` methods
-----
`str.<tab>`
str.find()
```
str.find?
```
Regex vs. String methods
-----
1. String methods are easier to understand.
1. String methods express the intent more clearly.
-----
1. Regex handle much broader use cases.
1. Regex can be language independent.
1. Regex can be faster at scale.
## What about unicode?
```
message = "😒🎦 🤢🍕"
re_frown = re.compile(r"😒|🤢")
re_frown.sub(r"😊", message)
```
## Regex Errors:
__False positives__ (Type I): Matching strings that we should __not__ have matched
__False negatives__ (Type II): __Not__ matching strings that we should have matched
Reducing the error rate for a task often involves two antagonistic efforts:
1. Minimizing false positives
2. Minimizing false negatives
**Important to have tests for both!**
In a perfect world, you would be able to minimize both but in reality you often have to trade one for the other.
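A tiny sketch of what testing for both error types might look like for the phone-number pattern (the pattern and example strings here are illustrative, not the course's own):

```python
import re

phone_re = re.compile(r"\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}")

# Guard against false negatives: strings we should match
for should_match in ["123-456-7890", "(123)456-7890"]:
    assert phone_re.search(should_match)

# Guard against false positives: strings we should not match
for should_not_match in ["101 Howard", "12-34"]:
    assert phone_re.search(should_not_match) is None
```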
Useful Tools:
----
- [Regex cheatsheet](http://www.cheatography.com/davechild/cheat-sheets/regular-expressions/)
- [regexr.com](http://regexr.com/) Realtime regex engine
- [pythex.org](https://pythex.org/) Realtime Python regex engine
Summary
----
1. We use regex as a metalanguage to find string patterns in blocks of text
1. `r""` are your IRL friends for Python regex
1. We are just doing binary classification so use the same performance metrics
1. You'll make a lot of mistakes in regex 😩.
- False Positive: Thinking you are right but you are wrong
- False Negative: Missing something
<center><img src="images/face_tat.png" width="700"/></center>
<br>
<br>
---
<center><img src="https://imgs.xkcd.com/comics/perl_problems.png" width="700"/></center>
<center><img src="https://imgs.xkcd.com/comics/regex_golf.png" width="700"/></center>
Regex Terms
----
- __target string__: This term describes the string that we will be searching, that is, the string in which we want to find our match or search pattern.
- __search expression__: The pattern we use to find what we want. Most commonly called the regular expression.
- __literal__: A literal is any character we use in a search or matching expression, for example, to find 'ind' in 'windows' the 'ind' is a literal string - each character plays a part in the search, it is literally the string we want to find.
- __metacharacter__: A metacharacter is one or more special characters that have a unique meaning and are NOT used as literals in the search expression. For example "." means any character.
Metacharacters are the special sauce of regex.
- __escape sequence__: An escape sequence is a way of indicating that we want to use a metacharacters as a literal.
In a regular expression an escape sequence involves placing the metacharacter \ (backslash) in front of the metacharacter that we want to use as a literal.
`'\.'` means find literal period character (not match any character)
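The difference is easy to see in a two-line sketch:

```python
import re

assert re.findall(r".", "a.b") == ["a", ".", "b"]   # '.' matches any character
assert re.findall(r"\.", "a.b") == ["."]            # '\.' matches only a literal period
```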
Regex Workflow
---
1. Create pattern in Plain English
2. Map to regex language
3. Make sure results are correct:
- All Positives: Captures all examples of pattern
- No Negatives: Everything captured is from the pattern
4. Don't over-engineer your regex.
- Your goal is to Get Stuff Done, not write the best regex in the world
- Filtering before and after are okay.
| github_jupyter |
```
import numpy as np
import pprint
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.gridworld import GridworldEnv
pp = pprint.PrettyPrinter(indent=2)
env = GridworldEnv()
# Taken from Policy Evaluation Exercise!
def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
"""
Evaluate a policy given an environment and a full description of the environment's dynamics.
Args:
policy: [S, A] shaped matrix representing the policy.
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: Gamma discount factor.
Returns:
Vector of length env.nS representing the value function.
"""
# Start with a random (all 0) value function
V = np.zeros(env.nS)
while True:
delta = 0
# For each state, perform a "full backup"
for s in range(env.nS):
v = 0
# Look at the possible next actions
for a, action_prob in enumerate(policy[s]):
# For each action, look at the possible next states...
for prob, next_state, reward, done in env.P[s][a]:
# Calculate the expected value
v += action_prob * prob * (reward + discount_factor * V[next_state])
# How much our value function changed (across any states)
delta = max(delta, np.abs(v - V[s]))
V[s] = v
# Stop evaluating once our value function change is below a threshold
if delta < theta:
break
return np.array(V)
def policy_improvement(env, policy_eval_fn=policy_eval, discount_factor=1.0):
"""
Policy Improvement Algorithm. Iteratively evaluates and improves a policy
until an optimal policy is found.
Args:
env: The OpenAI environment.
policy_eval_fn: Policy Evaluation function that takes 3 arguments:
policy, env, discount_factor.
discount_factor: gamma discount factor.
Returns:
A tuple (policy, V).
policy is the optimal policy, a matrix of shape [S, A] where each state s
contains a valid probability distribution over actions.
V is the value function for the optimal policy.
"""
def one_step_lookahead(state, V):
"""
Helper function to calculate the value for all actions in a given state.
Args:
state: The state to consider (int)
V: The value to use as an estimator, Vector of length env.nS
Returns:
A vector of length env.nA containing the expected value of each action.
"""
A = np.zeros(env.nA)
for a in range(env.nA):
for prob, next_state, reward, done in env.P[state][a]:
A[a] += prob * (reward + discount_factor * V[next_state])
return A
# Start with a random policy
policy = np.ones([env.nS, env.nA]) / env.nA
while True:
# Evaluate the current policy
V = policy_eval_fn(policy, env, discount_factor)
# Will be set to false if we make any changes to the policy
policy_stable = True
# For each state...
for s in range(env.nS):
# The best action we would take under the current policy
chosen_a = np.argmax(policy[s])
# Find the best action by one-step lookahead
# Ties are resolved arbitrarily
action_values = one_step_lookahead(s, V)
best_a = np.argmax(action_values)
# Greedily update the policy
if chosen_a != best_a:
policy_stable = False
policy[s] = np.eye(env.nA)[best_a]
# If the policy is stable we've found an optimal policy. Return it
if policy_stable:
return policy, V
policy, v = policy_improvement(env)
print("Policy Probability Distribution:")
print(policy)
print("")
print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):")
print(np.reshape(np.argmax(policy, axis=1), env.shape))
print("")
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
# Test the value function
expected_v = np.array([ 0, -1, -2, -3, -1, -2, -3, -2, -2, -3, -2, -1, -3, -2, -1, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
```
| github_jupyter |
### DCGAN by keras
References
1. https://blog.csdn.net/u010159842/article/details/79042195
2. https://github.com/myinxd/keras-dcgan
```
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from keras import Input
from keras import applications
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Reshape
from keras.layers.core import Activation
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import UpSampling2D
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers.core import Flatten
from keras.optimizers import SGD
from keras.datasets import mnist
import numpy as np
from PIL import Image
import argparse
import math
def generator_model():
model = Sequential()
model.add(Dense(units=1024, input_dim=100))
model.add(Activation('tanh'))
model.add(Dense(128*7*7))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Reshape((7, 7, 128), input_shape=(128*7*7,)))
model.add(UpSampling2D(size=(2, 2)))
model.add(Conv2D(64, (5, 5), padding='same'))
model.add(Activation('tanh'))
model.add(UpSampling2D(size=(2, 2)))
model.add(Conv2D(1, (5, 5), padding='same'))
model.add(Activation('tanh'))
return model
def discriminator_model():
model = Sequential()
model.add(Conv2D(64, (5, 5),
padding='same',
input_shape=(28, 28, 1)))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (5, 5)))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('tanh'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
return model
def generator_containing_discriminator(g, d):
model = Sequential()
model.add(g)
d.trainable = False
model.add(d)
return model
def combine_images(generated_images):
num = generated_images.shape[0]
width = int(math.sqrt(num))
height = int(math.ceil(float(num)/width))
shape = generated_images.shape[1:3]
image = np.zeros((height*shape[0], width*shape[1]),
dtype=generated_images.dtype)
for index, img in enumerate(generated_images):
i = int(index/width)
j = index % width
image[i*shape[0]:(i+1)*shape[0], j*shape[1]:(j+1)*shape[1]] = \
img[ :, :, 0]
return image
def train(BATCH_SIZE):
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = (X_train.astype(np.float32) - 127.5)/127.5
X_train = X_train[:, :, :, None]
X_test = X_test[:, :, :, None]
# X_train = X_train.reshape((X_train.shape, 1) + X_train.shape[1:])
d = discriminator_model()
g = generator_model()
d_on_g = generator_containing_discriminator(g, d)
d_optim = SGD(lr=0.0005, momentum=0.9, nesterov=True)
g_optim = SGD(lr=0.0005, momentum=0.9, nesterov=True)
g.compile(loss='binary_crossentropy', optimizer="SGD")
d_on_g.compile(loss='binary_crossentropy', optimizer=g_optim)
d.trainable = True
d.compile(loss='binary_crossentropy', optimizer=d_optim)
d_loss_avg = np.zeros(100)
d_loss_std = np.zeros(100)
g_loss_avg = np.zeros(100)
g_loss_std = np.zeros(100)
for epoch in range(100):
print("Epoch is", epoch)
numbatch = int(X_train.shape[0]/BATCH_SIZE)
d_loss_epoch = np.zeros(numbatch)
g_loss_epoch = np.zeros(numbatch)
print("Number of batches", numbatch)
for index in range(numbatch):
noise = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))
image_batch = X_train[index*BATCH_SIZE:(index+1)*BATCH_SIZE]
generated_images = g.predict(noise, verbose=0)
if index % 100 == 0:
image = combine_images(generated_images)
image = image*127.5+127.5
Image.fromarray(image.astype(np.uint8)).save("results/"+
str(epoch)+"_"+str(index)+".png")
X = np.concatenate((image_batch, generated_images))
y = [1] * BATCH_SIZE + [0] * BATCH_SIZE
d_loss = d.train_on_batch(X, y)
d_loss_epoch[index] = d_loss
#print("batch %d d_loss : %f" % (index, d_loss))
noise = np.random.uniform(-1, 1, (BATCH_SIZE, 100))
d.trainable = False
g_loss = d_on_g.train_on_batch(noise, [1] * BATCH_SIZE)
g_loss_epoch[index] = g_loss
d.trainable = True
#print("batch %d g_loss : %f" % (index, g_loss))
if index % 10 == 9:
g.save_weights('generator', True)
d.save_weights('discriminator', True)
print("d_loss_avg: %7.5f, g_loss_avg: %7.5f" % (np.mean(d_loss_epoch), np.mean(g_loss_epoch)))
d_loss_avg[epoch] = np.mean(d_loss_epoch)
d_loss_std[epoch] = np.std(d_loss_epoch)
g_loss_avg[epoch] = np.mean(g_loss_epoch)
g_loss_std[epoch] = np.std(g_loss_epoch)
def generate(BATCH_SIZE, nice=False):
g = generator_model()
g.compile(loss='binary_crossentropy', optimizer="SGD")
g.load_weights('generator')
if nice:
d = discriminator_model()
d.compile(loss='binary_crossentropy', optimizer="SGD")
d.load_weights('discriminator')
noise = np.random.uniform(-1, 1, (BATCH_SIZE*20, 100))
generated_images = g.predict(noise, verbose=1)
d_pret = d.predict(generated_images, verbose=1)
index = np.arange(0, BATCH_SIZE*20)
index.resize((BATCH_SIZE*20, 1))
pre_with_index = list(np.append(d_pret, index, axis=1))
pre_with_index.sort(key=lambda x: x[0], reverse=True)
nice_images = np.zeros((BATCH_SIZE,) + generated_images.shape[1:3], dtype=np.float32)
nice_images = nice_images[:, :, :, None]
for i in range(BATCH_SIZE):
idx = int(pre_with_index[i][1])
nice_images[i, :, :, 0] = generated_images[idx, :, :, 0]
image = combine_images(nice_images)
else:
noise = np.random.uniform(-1, 1, (BATCH_SIZE, 100))
generated_images = g.predict(noise, verbose=1)
image = combine_images(generated_images)
image = image*127.5+127.5
Image.fromarray(image.astype(np.uint8)).save(
"generated_image.png")
class args(object):
mode_train="train"
mode_generate = "generate"
batch_size=128
nice="nice"
train(BATCH_SIZE=args.batch_size)
# generate(BATCH_SIZE=args.batch_size, nice=args.nice)
```
### Clean Network
In this process developed by Charles Fox, we move from a GOSTnets raw graph object (see Extract from osm.pbf) to a routable network. This process is fairly bespoke, with several parameters and opportunities for significant simplification.
```
import geopandas as gpd
import os, sys, time
import pandas as pd
sys.path.append(r'C:\Users\charl\Documents\GitHub\GOST_PublicGoods\GOSTNets\GOSTNets')
import GOSTnet as gn
import importlib
import networkx as nx
import osmnx as ox
from shapely.ops import unary_union
from shapely.wkt import loads
from shapely.geometry import LineString, MultiLineString, Point
```
This function defines the order of GOSTnet functions we will call on the input network object. The verbose flag causes the process to save down intermediate files - helpful for troubleshooting
```
def CleanNetwork(G, wpath, country, UTM, WGS = {'init': 'epsg:4326'}, junctdist = 50, verbose = False):
    ### Topologically simplifies an input graph object by collapsing junctions and removing interstitial nodes
# REQUIRED - G: a graph object containing nodes and edges. edges should have a property
# called 'Wkt' containing geometry objects describing the roads
# wpath: the write path - a drive directory for inputs and output
# country: this parameter allows for the sequential processing of multiple countries
# UTM: the epsg code of the projection, in metres, to apply the junctdist
# OPTIONAL - junctdist: distance within which to collapse neighboring nodes. simplifies junctions.
    # Set to 0.1 if no simplification is desired. 50m good for national (primary / secondary) networks
# verbose: if True, saves down intermediate stages for dissection
################################################################################################
# Squeezes clusters of nodes down to a single node if they are within the snapping tolerance
a = gn.simplify_junctions(G, UTM, WGS, junctdist)
# ensures all streets are two-way
a = gn.add_missing_reflected_edges(a)
#save progress
if verbose is True:
gn.save(a, 'a', wpath)
    # Finds and deletes interstitial nodes based on node degree
b = gn.custom_simplify(a)
# rectify geometry
for u, v, data in b.edges(data = True):
if type(data['Wkt']) == list:
data['Wkt'] = gn.unbundle_geometry(data['Wkt'])
# save progress
if verbose is True:
gn.save(b, 'b', wpath)
# For some reason CustomSimplify doesn't return a MultiDiGraph. Fix that here
c = gn.convert_to_MultiDiGraph(b)
# This is the most controversial function - removes duplicated edges. This takes care of two-lane but separate highways, BUT
# destroys internal loops within roads. Can be run with or without this line
c = gn.remove_duplicate_edges(c)
# Run this again after removing duplicated edges
c = gn.custom_simplify(c)
# Ensure all remaining edges are duplicated (two-way streets)
c = gn.add_missing_reflected_edges(c)
# save final
gn.save(c, '%s_processed' % country, wpath)
print('Edge reduction: %s to %s (%d percent)' % (G.number_of_edges(),
c.number_of_edges(),
((G.number_of_edges() - c.number_of_edges())/G.number_of_edges()*100)))
return c
```
This is the main process - and is only needed to fire off CleanNetwork. G objects can either be loaded from pickled graph objects, or can be passed in from extraction / other processing
WARNING: expect this step to take a while. It will produce a pickled graph object, a dataframe of the edges, and a dataframe of the nodes. The expectation is that this will only have to be run once
```
UTMZs = {'SLE':3857} # Web Mercator
WGS = {'init': 'epsg:4326'}
countries = ['SLE']
wpath = r'C:\Users\charl\Documents\GOST\SierraLeone\RoadNet'
for country in countries:
print('\n--- processing for: %s ---\n' % country)
print('start: %s\n' % time.ctime())
print('Outputs can be found at: %s\n' % (wpath))
UTM = {'init': 'epsg:%d' % UTMZs[country]}
G = nx.read_gpickle(os.path.join(wpath, 'SL_uncleaned.pickle'))
G = CleanNetwork(G, wpath, country, UTM, WGS, 0.5, verbose = False)
print('\nend: %s' % time.ctime())
print('\n--- processing complete for: %s ---' % country)
```
# Naive Bayes Models
In this lab you will work with **naive Bayes models**. Naive Bayes models are a surprisingly useful and effective simplification of the general Bayesian models. Naive Bayes models make the naive assumption of statistical independence of the features. In many cases, naive Bayes models perform well despite violating the assumption of independence.
In simple terms, naive Bayes models use empirical distributions of the features to compute probabilities of the labels. Naive Bayes models can use almost any family of distributions for the features. It is important to select the correct distribution family for the data you are working with. Common cases are:
- **Gaussian:** for continuous or numerical features.
- **Bernoulli:** for features with binary values.
- **Multinomial:** for features with more than two categories.
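Since the Gaussian case is the one used later in this lab, here is a minimal numpy sketch of the idea (the sample values below are invented for illustration, not taken from the iris data): each class gets its own mean and standard deviation per feature, and a new point is assigned to the class under whose Gaussian density it is most likely.

```python
import numpy as np

# Toy per-class samples of one numeric feature (hypothetical values).
x_class0 = np.array([4.9, 5.0, 5.1])
x_class1 = np.array([6.3, 6.5, 6.7])

def gaussian_pdf(x, mu, sigma):
    """Gaussian probability density at x, given mean mu and std sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Class-conditional likelihoods of a new observation.
x_new = 5.05
p0 = gaussian_pdf(x_new, x_class0.mean(), x_class0.std())
p1 = gaussian_pdf(x_new, x_class1.mean(), x_class1.std())
print(p0 > p1)  # True: the new point looks like class 0
```

A full Gaussian naive Bayes classifier multiplies these per-feature likelihoods together (the "naive" independence assumption) and weights them by the class priors.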
There is one pitfall: the model fails if a zero probability is encountered. This situation occurs when there is a 'hole' in the sample space where there are no samples. A simple smoothing procedure can deal with this problem. The smoothing hyperparameter, usually called alpha, is one of the few required for naive Bayes models.
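The effect of that smoothing can be sketched in a few lines of numpy (the counts below are invented for illustration): adding alpha to every count before normalizing guarantees that no category ends up with zero probability, which is exactly additive (Laplace) smoothing.

```python
import numpy as np

# Counts of a categorical feature per class; one category was never observed
# in class 0, which would produce a zero probability without smoothing.
counts = np.array([[3, 0, 7],
                   [2, 5, 1]], dtype=float)

def smoothed_probs(counts, alpha=1.0):
    """Additive (Laplace) smoothing: add alpha to every count, then normalize."""
    smoothed = counts + alpha
    return smoothed / smoothed.sum(axis=1, keepdims=True)

probs = smoothed_probs(counts, alpha=1.0)
print(probs)              # every entry is now strictly positive
print(probs.sum(axis=1))  # each row still sums to 1
```

Larger alpha pulls the estimated probabilities toward uniform; alpha near zero trusts the raw counts.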
Some properties of naive Bayes models are:
- Computational complexity is linear in the number of parameters/features, making naive Bayes models highly scalable. There are out-of-core approaches suitable for massive datasets.
- Requires minimal data to produce models that generalize well. If there are only a few cases per category available for training, a naive Bayes model can be a good choice.
- Have a simple and inherent regularization.
Naive Bayes models are used in many situations including:
- Document classification
- SPAM detection
- Image classification
As a first step, execute the code in the cell below to load the required packages to run the rest of this notebook.
```
from sklearn import preprocessing
from sklearn.naive_bayes import GaussianNB, BernoulliNB
#from statsmodels.api import datasets
from sklearn import datasets ## Get dataset from sklearn
import sklearn.model_selection as ms
import sklearn.metrics as sklm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import numpy.random as nr
%matplotlib inline
```
To get a feel for these data, you will now load and plot them. The code in the cell below does the following:
1. Loads the iris data as a Pandas data frame.
2. Adds column names to the data frame.
3. Displays all 4 possible scatter plot views of the data.
Execute this code and examine the results.
```
def plot_iris(iris):
'''Function to plot iris data by type'''
setosa = iris[iris['Species'] == 'setosa']
versicolor = iris[iris['Species'] == 'versicolor']
virginica = iris[iris['Species'] == 'virginica']
fig, ax = plt.subplots(2, 2, figsize=(12,12))
x_ax = ['Sepal_Length', 'Sepal_Width']
y_ax = ['Petal_Length', 'Petal_Width']
for i in range(2):
for j in range(2):
ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = 'x')
ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = 'o')
ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = '+')
ax[i,j].set_xlabel(x_ax[i])
ax[i,j].set_ylabel(y_ax[j])
## Import the dataset from sklearn.datasets
iris = datasets.load_iris()
## Create a data frame from the dictionary
species = [iris.target_names[x] for x in iris.target]
iris = pd.DataFrame(iris['data'], columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
iris['Species'] = species
## Plot views of the iris data
plot_iris(iris)
```
You can see that Setosa (blue) is well separated from the other two categories. The Versicolor (orange) and the Virginica (green) show considerable overlap. The question is how well our classifier will separate these categories.
Scikit Learn classifiers require numerically coded numpy arrays for the features and as a label. The code in the cell below does the following processing:
1. Creates a numpy array of the features.
2. Numerically codes the label using a dictionary lookup, and converts it to a numpy array.
Execute this code.
```
Features = np.array(iris[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']])
levels = {'setosa':0, 'versicolor':1, 'virginica':2}
Labels = np.array([levels[x] for x in iris['Species']])
```
Next, execute the code in the cell below to split the dataset into test and training set. Notice that unusually, 100 of the 150 cases are being used as the test dataset.
```
## Randomly sample cases to create independent training and test data
nr.seed(1115)
indx = range(Features.shape[0])
indx = ms.train_test_split(indx, test_size = 100)
X_train = Features[indx[0],:]
y_train = np.ravel(Labels[indx[0]])
X_test = Features[indx[1],:]
y_test = np.ravel(Labels[indx[1]])
```
As is always the case with machine learning, numeric features must be scaled. The code in the cell below performs the following processing:
1. A Z-score scale object is defined using the `StandardScaler` function from the scikit-learn preprocessing package.
2. The scaler is fit to the training features. Subsequently, this scaler is used to apply the same scaling to the test data and in production.
3. The training features are scaled using the `transform` method.
Execute this code.
```
scale = preprocessing.StandardScaler()
scale.fit(X_train)
X_train = scale.transform(X_train)
```
Now you will define and fit a Gaussian naive Bayes model. A Gaussian model is appropriate here since all of the features are numeric.
The code in the cell below defines a Gaussian naive Bayes model object using the `GaussianNB` function from the scikit-learn naive_bayes package, and then fits the model. Execute this code.
```
NB_mod = GaussianNB()
NB_mod.fit(X_train, y_train)
```
Notice that the Gaussian naive Bayes model object has only one hyperparameter.
Next, the code in the cell below performs the following processing to score the test data subset:
1. The test features are scaled using the scaler computed for the training features.
2. The `predict` method is used to compute the scores from the scaled features.
Execute this code.
```
X_test = scale.transform(X_test)
scores = NB_mod.predict(X_test)
```
It is time to evaluate the model results. Keep in mind that the problem has been made deliberately difficult, by having more test cases than training cases.
The iris data has three species categories. Therefore it is necessary to use evaluation code for a three category problem. The function in the cell below extends code from previous labs to deal with a three category problem.
Execute this code, examine the results, and answer **Question 1** on the course page.
```
def print_metrics_3(labels, scores):
conf = sklm.confusion_matrix(labels, scores)
print(' Confusion matrix')
print(' Score Setosa Score Versicolor Score Virginica')
print('Actual Setosa %6d' % conf[0,0] + ' %5d' % conf[0,1] + ' %5d' % conf[0,2])
print('Actual Versicolor %6d' % conf[1,0] + ' %5d' % conf[1,1] + ' %5d' % conf[1,2])
    print('Actual Virginica %6d' % conf[2,0] + ' %5d' % conf[2,1] + ' %5d' % conf[2,2])
## Now compute and display the accuracy and metrics
print('')
print('Accuracy %0.2f' % sklm.accuracy_score(labels, scores))
metrics = sklm.precision_recall_fscore_support(labels, scores)
print(' ')
print(' Setosa Versicolor Virginica')
print('Num case %0.2f' % metrics[3][0] + ' %0.2f' % metrics[3][1] + ' %0.2f' % metrics[3][2])
print('Precision %0.2f' % metrics[0][0] + ' %0.2f' % metrics[0][1] + ' %0.2f' % metrics[0][2])
print('Recall %0.2f' % metrics[1][0] + ' %0.2f' % metrics[1][1] + ' %0.2f' % metrics[1][2])
print('F1 %0.2f' % metrics[2][0] + ' %0.2f' % metrics[2][1] + ' %0.2f' % metrics[2][2])
print_metrics_3(y_test, scores)
```
Examine these results. Notice the following:
1. The confusion matrix has dimension 3X3. You can see that most cases are correctly classified.
2. The overall accuracy is 0.91. Since the classes are roughly balanced, this metric indicates relatively good performance of the classifier, particularly since it was only trained on 50 cases. As was mentioned previously, naive Bayes models require only small amounts of training data.
3. The precision, recall and F1 for each of the classes is relatively good. Versicolor has the worst metrics since it has the largest number of misclassified cases.
To get a better feel for what the classifier is doing, the code in the cell below displays a set of plots showing correctly (as '+') and incorrectly (as 'o') classified cases, with the species color-coded. Execute this code and examine the results.
```
def plot_iris_score(iris, y_test, scores):
'''Function to plot iris data by type'''
## Find correctly and incorrectly classified cases
true = np.equal(scores, y_test).astype(int)
## Create data frame from the test data
iris = pd.DataFrame(iris)
levels = {0:'setosa', 1:'versicolor', 2:'virginica'}
iris['Species'] = [levels[x] for x in y_test]
iris.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Species']
## Set up for the plot
fig, ax = plt.subplots(2, 2, figsize=(12,12))
markers = ['o', '+']
x_ax = ['Sepal_Length', 'Sepal_Width']
y_ax = ['Petal_Length', 'Petal_Width']
    for t in range(2): # loop over correct and incorrect classifications
setosa = iris[(iris['Species'] == 'setosa') & (true == t)]
versicolor = iris[(iris['Species'] == 'versicolor') & (true == t)]
virginica = iris[(iris['Species'] == 'virginica') & (true == t)]
# loop over all the dimensions
for i in range(2):
for j in range(2):
ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = markers[t], color = 'blue')
ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = markers[t], color = 'orange')
ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = markers[t], color = 'green')
ax[i,j].set_xlabel(x_ax[i])
ax[i,j].set_ylabel(y_ax[j])
plot_iris_score(X_test, y_test, scores)
```
Examine these plots. You can see how the classifier has divided the feature space between the classes. Notice that most of the errors occur in the overlap region between Virginica and Versicolor. This behavior is to be expected.
## Summary
In this lab you have accomplished the following:
1. Used a Gaussian naive model to classify the cases of the iris data. The overall model performance was reasonable.
# My toolbox for data exploration using Pandas
This is a basic set of data analysis tools and techniques that I use for preliminary dataset analysis.
```
# Pandas is a common package for Python data exploration
import pandas as pd
import numpy as np
```
Load tabular data from a CSV file
I will use the Titanic dataset because of its popularity.
```
# import the training and testing datasets into train_df and test_df pandas dataframes. UPDATE TO MATCH YOUR PATHS
train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')
```
Explore dataset properties
```
# return metadata describing the train/test dataframes including the index dtype and columns, non-null values
# and memory usage. The null-value discovery will be important when deciding on a missing-value imputation technique
train_df.info()
test_df.info()
# this is another pandas method of determining missing values. As we can see, Age and Cabin have missing values.
# There are many methods for dealing with missing values. I will have more on data imputation later.
train_df.isnull().sum()
test_df.isnull().sum()
# return the number of rows and columns of the train/test dataframes
train_df.shape, test_df.shape
# return the number of elements in the train/test dataframes
train_df.size
test_df.size
# return the column header for the train/test dataframes.
train_df.columns
# Notice how the "Survived" column/feature is missing from the test dataframe since
# this dataset used to validate the trained model's accuracy in predicting the "Survived" value
test_df.columns
# return the datatypes of the train dataframe columns/features
train_df.dtypes
# return the datatypes of the test dataframe columns/features
test_df.dtypes
# return descriptive statistics about the test dataframe including those that summarize the central tendency,
# dispersion and shape of a dataset’s distribution, excluding NaN (missing) values.
test_df.describe()
```
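Since the imputation follow-up is deferred, here is one common baseline sketched on a toy frame (a stand-in for `train_df`, since the real CSV paths are machine-specific): filling a numeric column's missing values with its median, which is robust to outliers like the Titanic `Fare` extremes.

```python
import numpy as np
import pandas as pd

# Toy frame standing in for train_df; 'Age' has missing values, as in Titanic.
df = pd.DataFrame({'Age': [22.0, np.nan, 30.0, np.nan],
                   'Fare': [7.25, 71.28, 8.05, 8.46]})

# Median imputation: fill numeric holes with a robust central value.
df['Age'] = df['Age'].fillna(df['Age'].median())
print(df['Age'].isnull().sum())  # 0 - no missing values remain
```

Other options include mean imputation, group-wise medians (e.g. median age per passenger class), or model-based imputation; the right choice depends on why the values are missing.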
Return data values
```
# return top 4 rows of train dataframe
train_df.head(4)
# return bottom 4 rows of train dataframe
train_df.tail(4)
# return 5 random rows of train dataframe
train_df.sample(5)
# return values of selected columns of the train dataframe
train_df[['Fare','Survived','Sex']]
# return a range of rows based on start index and (up to but not including) end index
train_df.iloc[100:105]
# return row values based on row,column indexes
train_df.iloc[[110,220,315],[3,1]]
# return row values based on a range of indexes
train_df.iloc[110:115,[3,1]]
# iterate over rows and return selected columns.
# in example below, iterate over dataframe and return all rows but only columns: Name, Sex, Survived
for index, row in train_df.iterrows():
print(row['Name'], row['Sex'], str(row['Survived']))
```
-- this is a small sample of the returned rows
Braund, Mr. Owen Harris male 0
Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 1
Heikkinen, Miss. Laina female 1
Futrelle, Mrs. Jacques Heath (Lily May Peel) female 1
Allen, Mr. William Henry male 0
Moran, Mr. James male 0
McCarthy, Mr. Timothy J male 0
Palsson, Master. Gosta Leonard male 0
Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg) female 1
Nasser, Mrs. Nicholas (Adele Achem) female 1
Sandstrom, Miss. Marguerite Rut female 1
Bonnell, Miss. Elizabeth female 1
Saundercock, Mr. William Henry male 0
(truncated to fit)
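Note that `iterrows()` is convenient but slow on large frames, because each row is materialized as a Series. When you only need a column subset, plain column selection stays vectorized. A small sketch, using a toy stand-in for `train_df`:

```python
import pandas as pd

# Toy rows standing in for train_df (the real file path is user-specific).
train_df = pd.DataFrame({
    'Name': ['Braund, Mr. Owen Harris', 'Heikkinen, Miss. Laina'],
    'Sex': ['male', 'female'],
    'Survived': [0, 1]})

# Selecting the columns once is clearer and much faster than iterrows()
# for large frames, because it stays inside vectorized pandas operations.
subset = train_df[['Name', 'Sex', 'Survived']]
print(subset.to_string(index=False))
```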
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Load CSV data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/csv"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial provides examples of how to use CSV data with TensorFlow.
There are two main parts to this:
1. **Loading the data off disk**
2. **Pre-processing it into a form suitable for training.**
This tutorial focuses on the loading, and gives some quick examples of preprocessing. For a tutorial that focuses on the preprocessing aspect see the [preprocessing layers guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#quick_recipes) and [tutorial](https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers).
## Setup
```
import pandas as pd
import numpy as np
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow.keras import layers
```
## In memory data
For any small CSV dataset the simplest way to train a TensorFlow model on it is to load it into memory as a pandas `DataFrame` or a NumPy array.
A relatively simple example is the [abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone).
* The dataset is small.
* All the input features are all limited-range floating point values.
Here is how to download the data into a [Pandas `DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html):
```
abalone_train = pd.read_csv(
"https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv",
names=["Length", "Diameter", "Height", "Whole weight", "Shucked weight",
"Viscera weight", "Shell weight", "Age"])
abalone_train.head()
```
The dataset contains a set of measurements of [abalone](https://en.wikipedia.org/wiki/Abalone), a type of sea snail.

[“Abalone shell”](https://www.flickr.com/photos/thenickster/16641048623/) (by [Nicki Dugan Pogue](https://www.flickr.com/photos/thenickster/), CC BY-SA 2.0)
The nominal task for this dataset is to predict the age from the other measurements, so separate the features and labels for training:
```
abalone_features = abalone_train.copy()
abalone_labels = abalone_features.pop('Age')
```
For this dataset you will treat all features identically. Pack the features into a single NumPy array:
```
abalone_features = np.array(abalone_features)
abalone_features
```
Next, make a regression model to predict the age. Since there is only a single input tensor, a `keras.Sequential` model is sufficient here.
```
abalone_model = tf.keras.Sequential([
layers.Dense(64),
layers.Dense(1)
])
abalone_model.compile(loss = tf.keras.losses.MeanSquaredError(),
optimizer = tf.optimizers.Adam())
```
To train that model, pass the features and labels to `Model.fit`:
```
abalone_model.fit(abalone_features, abalone_labels, epochs=10)
```
You have just seen the most basic way to train a model using CSV data. Next, you will learn how to apply preprocessing to normalize numeric columns.
## Basic preprocessing
It's good practice to normalize the inputs to your model. The Keras preprocessing layers provide a convenient way to build this normalization into your model.
The layer will precompute the mean and variance of each column, and use these to normalize the data.
First you create the layer:
```
normalize = layers.Normalization()
```
Then you use the `Normalization.adapt()` method to adapt the normalization layer to your data.
Note: Only use your training data to `.adapt()` preprocessing layers. Do not use your validation or test data.
```
normalize.adapt(abalone_features)
```
Then use the normalization layer in your model:
```
norm_abalone_model = tf.keras.Sequential([
normalize,
layers.Dense(64),
layers.Dense(1)
])
norm_abalone_model.compile(loss = tf.losses.MeanSquaredError(),
optimizer = tf.optimizers.Adam())
norm_abalone_model.fit(abalone_features, abalone_labels, epochs=10)
```
## Mixed data types
The "Titanic" dataset contains information about the passengers on the Titanic. The nominal task on this dataset is to predict who survived.

Image [from Wikimedia](https://commons.wikimedia.org/wiki/File:RMS_Titanic_3.jpg)
The raw data can easily be loaded as a Pandas `DataFrame`, but is not immediately usable as input to a TensorFlow model.
```
titanic = pd.read_csv("https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic.head()
titanic_features = titanic.copy()
titanic_labels = titanic_features.pop('survived')
```
Because of the different data types and ranges you can't simply stack the features into a NumPy array and pass it to a `keras.Sequential` model. Each column needs to be handled individually.
As one option, you could preprocess your data offline (using any tool you like) to convert categorical columns to numeric columns, then pass the processed output to your TensorFlow model. The disadvantage to that approach is that if you save and export your model the preprocessing is not saved with it. The Keras preprocessing layers avoid this problem because they're part of the model.
In this example, you'll build a model that implements the preprocessing logic using [Keras functional API](https://www.tensorflow.org/guide/keras/functional). You could also do it by [subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models).
The functional API operates on "symbolic" tensors. Normal "eager" tensors have a value. In contrast these "symbolic" tensors do not. Instead they keep track of which operations are run on them, and build a representation of the calculation that you can run later. Here's a quick example:
```
# Create a symbolic input
input = tf.keras.Input(shape=(), dtype=tf.float32)
# Perform a calculation using the input
result = 2*input + 1
# the result doesn't have a value
result
calc = tf.keras.Model(inputs=input, outputs=result)
print(calc(1).numpy())
print(calc(2).numpy())
```
To build the preprocessing model, start by building a set of symbolic `keras.Input` objects, matching the names and data-types of the CSV columns.
```
inputs = {}
for name, column in titanic_features.items():
dtype = column.dtype
if dtype == object:
dtype = tf.string
else:
dtype = tf.float32
inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)
inputs
```
The first step in your preprocessing logic is to concatenate the numeric inputs together, and run them through a normalization layer:
```
numeric_inputs = {name:input for name,input in inputs.items()
if input.dtype==tf.float32}
x = layers.Concatenate()(list(numeric_inputs.values()))
norm = layers.Normalization()
norm.adapt(np.array(titanic[numeric_inputs.keys()]))
all_numeric_inputs = norm(x)
all_numeric_inputs
```
Collect all the symbolic preprocessing results to concatenate them later.
```
preprocessed_inputs = [all_numeric_inputs]
```
For the string inputs use the `tf.keras.layers.StringLookup` function to map from strings to integer indices in a vocabulary. Next, use `tf.keras.layers.CategoryEncoding` to convert the indexes into `float32` data appropriate for the model.
The default settings for the `tf.keras.layers.CategoryEncoding` layer create a one-hot vector for each input. A `layers.Embedding` would also work. See the [preprocessing layers guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#quick_recipes) and [tutorial](../structured_data/preprocessing_layers.ipynb) for more on this topic.
```
for name, input in inputs.items():
if input.dtype == tf.float32:
continue
lookup = layers.StringLookup(vocabulary=np.unique(titanic_features[name]))
one_hot = layers.CategoryEncoding(max_tokens=lookup.vocab_size())
x = lookup(input)
x = one_hot(x)
preprocessed_inputs.append(x)
```
With the collection of `inputs` and `preprocessed_inputs`, you can concatenate all the preprocessed inputs together, and build a model that handles the preprocessing:
```
preprocessed_inputs_cat = layers.Concatenate()(preprocessed_inputs)
titanic_preprocessing = tf.keras.Model(inputs, preprocessed_inputs_cat)
tf.keras.utils.plot_model(model = titanic_preprocessing , rankdir="LR", dpi=72, show_shapes=True)
```
This `model` just contains the input preprocessing. You can run it to see what it does to your data. Keras models don't automatically convert Pandas `DataFrames` because it's not clear if it should be converted to one tensor or to a dictionary of tensors. So convert it to a dictionary of tensors:
```
titanic_features_dict = {name: np.array(value)
                         for name, value in titanic_features.items()}
```
Slice out the first training example and pass it to this preprocessing model; you'll see the numeric features and string one-hots all concatenated together:
```
features_dict = {name:values[:1] for name, values in titanic_features_dict.items()}
titanic_preprocessing(features_dict)
```
Now build the model on top of this:
```
def titanic_model(preprocessing_head, inputs):
  body = tf.keras.Sequential([
    layers.Dense(64),
    layers.Dense(1)
  ])

  preprocessed_inputs = preprocessing_head(inputs)
  result = body(preprocessed_inputs)
  model = tf.keras.Model(inputs, result)

  model.compile(loss=tf.losses.BinaryCrossentropy(from_logits=True),
                optimizer=tf.optimizers.Adam())
  return model
titanic_model = titanic_model(titanic_preprocessing, inputs)
```
When you train the model, pass the dictionary of features as `x`, and the label as `y`.
```
titanic_model.fit(x=titanic_features_dict, y=titanic_labels, epochs=10)
```
Since the preprocessing is part of the model, you can save the model and reload it somewhere else and get identical results:
```
titanic_model.save('test')
reloaded = tf.keras.models.load_model('test')
features_dict = {name:values[:1] for name, values in titanic_features_dict.items()}
before = titanic_model(features_dict)
after = reloaded(features_dict)
assert abs(before - after) < 1e-3
print(before)
print(after)
```
## Using tf.data
In the previous section you relied on the model's built-in data shuffling and batching while training the model.
If you need more control over the input data pipeline or need to use data that doesn't easily fit into memory: use `tf.data`.
For more examples see the [tf.data guide](../../guide/data.ipynb).
### On in-memory data
As a first example of applying `tf.data` to CSV data consider the following code to manually slice up the dictionary of features from the previous section. For each index, it takes that index for each feature:
```
import itertools
def slices(features):
  for i in itertools.count():
    # For each feature take index `i`
    example = {name:values[i] for name, values in features.items()}
    yield example
```
Run this and print the first example:
```
for example in slices(titanic_features_dict):
  for name, value in example.items():
    print(f"{name:19s}: {value}")
  break
```
The most basic in-memory `tf.data.Dataset` loader is the `Dataset.from_tensor_slices` constructor. It returns a `tf.data.Dataset` that implements a generalized version of the `slices` function above, in TensorFlow.
```
features_ds = tf.data.Dataset.from_tensor_slices(titanic_features_dict)
```
You can iterate over a `tf.data.Dataset` like any other python iterable:
```
for example in features_ds:
  for name, value in example.items():
    print(f"{name:19s}: {value}")
  break
```
The `from_tensor_slices` function can handle any structure of nested dictionaries or tuples. The following code makes a dataset of `(features_dict, labels)` pairs:
```
titanic_ds = tf.data.Dataset.from_tensor_slices((titanic_features_dict, titanic_labels))
```
To train a model using this `Dataset`, you'll need to at least `shuffle` and `batch` the data.
```
titanic_batches = titanic_ds.shuffle(len(titanic_labels)).batch(32)
```
Instead of passing `features` and `labels` to `Model.fit`, you pass the dataset:
```
titanic_model.fit(titanic_batches, epochs=5)
```
### From a single file
So far this tutorial has worked with in-memory data. `tf.data` is a highly scalable toolkit for building data pipelines, and provides a few functions for loading CSV files.
```
titanic_file_path = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
```
Now read the CSV data from the file and create a `tf.data.Dataset`.
(For the full documentation, see `tf.data.experimental.make_csv_dataset`)
```
titanic_csv_ds = tf.data.experimental.make_csv_dataset(
titanic_file_path,
batch_size=5, # Artificially small to make examples easier to show.
label_name='survived',
num_epochs=1,
ignore_errors=True,)
```
This function includes many convenient features so the data is easy to work with. This includes:
* Using the column headers as dictionary keys.
* Automatically determining the type of each column.
```
for batch, label in titanic_csv_ds.take(1):
  for key, value in batch.items():
    print(f"{key:20s}: {value}")
  print()
  print(f"{'label':20s}: {label}")
```
Note: if you run the above cell twice it will produce different results. The default settings for `make_csv_dataset` include `shuffle_buffer_size=1000`, which is more than sufficient for this small dataset, but may not be for a real-world dataset.
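What a shuffle buffer does can be illustrated in plain Python. The sketch below is illustrative only (not `tf.data`'s actual implementation): a fixed-size buffer is filled from the stream, each output element is drawn at random from the buffer, and the vacated slot is refilled, so the shuffle is only "full" when the buffer covers the whole dataset.

```python
import random

# A fixed-size buffer is filled from the stream; each output element is
# drawn at random from the buffer, and the slot is refilled.
def buffered_shuffle(stream, buffer_size, rng):
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the remaining buffered elements
        yield buf.pop(rng.randrange(len(buf)))

out = list(buffered_shuffle(range(10), buffer_size=4, rng=random.Random(0)))
print(sorted(out) == list(range(10)))  # True: the output is a permutation
```

A small `buffer_size` only mixes elements locally, which is why the note above warns that 1000 may be too small for a large real-world dataset.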
It can also decompress the data on the fly. Here's a gzipped CSV file containing the [metro interstate traffic dataset](https://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume)

Image [from Wikimedia](https://commons.wikimedia.org/wiki/File:Trafficjam.jpg)
```
traffic_volume_csv_gz = tf.keras.utils.get_file(
'Metro_Interstate_Traffic_Volume.csv.gz',
"https://archive.ics.uci.edu/ml/machine-learning-databases/00492/Metro_Interstate_Traffic_Volume.csv.gz",
cache_dir='.', cache_subdir='traffic')
```
Set the `compression_type` argument to read directly from the compressed file:
```
traffic_volume_csv_gz_ds = tf.data.experimental.make_csv_dataset(
traffic_volume_csv_gz,
batch_size=256,
label_name='traffic_volume',
num_epochs=1,
compression_type="GZIP")
for batch, label in traffic_volume_csv_gz_ds.take(1):
  for key, value in batch.items():
    print(f"{key:20s}: {value[:5]}")
  print()
  print(f"{'label':20s}: {label[:5]}")
```
Note: If you need to parse those date-time strings in the `tf.data` pipeline you can use `tfa.text.parse_time`.
### Caching
There is some overhead to parsing the csv data. For small models this can be the bottleneck in training.
Depending on your use case it may be a good idea to use `Dataset.cache` or `data.experimental.snapshot` so that the csv data is only parsed on the first epoch.
The main difference between the `cache` and `snapshot` methods is that `cache` files can only be used by the TensorFlow process that created them, but `snapshot` files can be read by other processes.
For example, iterating over the `traffic_volume_csv_gz_ds` 20 times, takes ~15 seconds without caching, or ~2s with caching.
```
%%time
for i, (batch, label) in enumerate(traffic_volume_csv_gz_ds.repeat(20)):
  if i % 40 == 0:
    print('.', end='')
print()
```
Note: `Dataset.cache` stores the data from the first epoch and replays it in order. So using `.cache` disables any shuffles earlier in the pipeline. Below, the `.shuffle` is added back in after `.cache`.
```
%%time
caching = traffic_volume_csv_gz_ds.cache().shuffle(1000)
for i, (batch, label) in enumerate(caching.shuffle(1000).repeat(20)):
  if i % 40 == 0:
    print('.', end='')
print()
```
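Why the ordering matters can be shown with a toy cache in plain Python (an illustration only, not how `tf.data` is implemented): a shuffle applied *before* the cache is materialized on the first pass and then replayed verbatim every epoch.

```python
import random

# A toy cache: the first pass over the source is recorded and then
# replayed verbatim, so a shuffle applied *before* the cache is frozen.
class ToyCache:
    def __init__(self, make_source):
        self.make_source = make_source
        self.stored = None

    def __iter__(self):
        if self.stored is None:
            self.stored = list(self.make_source())  # record the first epoch
        return iter(self.stored)

rng = random.Random(0)
cached = ToyCache(lambda: rng.sample(range(5), 5))  # shuffle before cache
epoch1, epoch2 = list(cached), list(cached)
print(epoch1 == epoch2)  # True: identical order every epoch
```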
Note: `snapshot` files are meant for *temporary* storage of a dataset while in use. This is *not* a format for long term storage. The file format is considered an internal detail, and not guaranteed between TensorFlow versions.
```
%%time
snapshot = tf.data.experimental.snapshot('titanic.tfsnap')
snapshotting = traffic_volume_csv_gz_ds.apply(snapshot).shuffle(1000)
for i, (batch, label) in enumerate(snapshotting.shuffle(1000).repeat(20)):
  if i % 40 == 0:
    print('.', end='')
print()
```
If your data loading is slowed by loading csv files, and `cache` and `snapshot` are insufficient for your use case, consider re-encoding your data into a more streamlined format.
### Multiple files
All the examples so far in this section could easily be done without `tf.data`. One place where `tf.data` can really simplify things is when dealing with collections of files.
For example, the [character font images](https://archive.ics.uci.edu/ml/datasets/Character+Font+Images) dataset is distributed as a collection of csv files, one per font.

Image by <a href="https://pixabay.com/users/wilhei-883152/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=705667">Willi Heidelbach</a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=705667">Pixabay</a>
Download the dataset, and have a look at the files inside:
```
fonts_zip = tf.keras.utils.get_file(
'fonts.zip', "https://archive.ics.uci.edu/ml/machine-learning-databases/00417/fonts.zip",
cache_dir='.', cache_subdir='fonts',
extract=True)
import pathlib
font_csvs = sorted(str(p) for p in pathlib.Path('fonts').glob("*.csv"))
font_csvs[:10]
len(font_csvs)
```
When dealing with a bunch of files you can pass a glob-style `file_pattern` to the `experimental.make_csv_dataset` function. The order of the files is shuffled each iteration.
Use the `num_parallel_reads` argument to set how many files are read in parallel and interleaved together.
```
fonts_ds = tf.data.experimental.make_csv_dataset(
file_pattern = "fonts/*.csv",
batch_size=10, num_epochs=1,
num_parallel_reads=20,
shuffle_buffer_size=10000)
```
These csv files have the images flattened out into a single row. The column names are formatted `r{row}c{column}`. Here's the first batch:
```
for features in fonts_ds.take(1):
  for i, (name, value) in enumerate(features.items()):
    if i>15:
      break
    print(f"{name:20s}: {value}")
print('...')
print(f"[total: {len(features)} features]")
```
#### Optional: Packing fields
You probably don't want to work with each pixel in separate columns like this. Before trying to use this dataset be sure to pack the pixels into an image-tensor.
Here is code that parses the column names to build images for each example:
```
import re
def make_images(features):
  image = [None]*400
  new_feats = {}

  for name, value in features.items():
    match = re.match(r'r(\d+)c(\d+)', name)
    if match:
      image[int(match.group(1))*20+int(match.group(2))] = value
    else:
      new_feats[name] = value

  image = tf.stack(image, axis=0)
  image = tf.reshape(image, [20, 20, -1])
  new_feats['image'] = image

  return new_feats
```
Apply that function to each batch in the dataset:
```
fonts_image_ds = fonts_ds.map(make_images)
for features in fonts_image_ds.take(1):
break
```
Plot the resulting images:
```
from matplotlib import pyplot as plt
plt.figure(figsize=(6,6), dpi=120)
for n in range(9):
  plt.subplot(3,3,n+1)
  plt.imshow(features['image'][..., n])
  plt.title(chr(features['m_label'][n]))
  plt.axis('off')
```
## Lower level functions
So far this tutorial has focused on the highest-level utilities for reading CSV data. There are two other APIs that may be helpful for advanced users if your use case doesn't fit the basic patterns.
* `tf.io.decode_csv` - a function for parsing lines of text into a list of CSV column tensors.
* `tf.data.experimental.CsvDataset` - a lower level csv dataset constructor.
This section recreates functionality provided by `make_csv_dataset`, to demonstrate how this lower level functionality can be used.
### `tf.io.decode_csv`
This function decodes a string, or list of strings into a list of columns.
Unlike `make_csv_dataset` this function does not try to guess column data-types. You specify the column types by providing a list of `record_defaults` containing a value of the correct type, for each column.
To read the Titanic data **as strings** using `decode_csv` you would say:
```
text = pathlib.Path(titanic_file_path).read_text()
lines = text.split('\n')[1:-1]
all_strings = [str()]*10
all_strings
features = tf.io.decode_csv(lines, record_defaults=all_strings)
for f in features:
  print(f"type: {f.dtype.name}, shape: {f.shape}")
```
To parse them with their actual types, create a list of `record_defaults` of the corresponding types:
```
print(lines[0])
titanic_types = [int(), str(), float(), int(), int(), float(), str(), str(), str(), str()]
titanic_types
features = tf.io.decode_csv(lines, record_defaults=titanic_types)
for f in features:
  print(f"type: {f.dtype.name}, shape: {f.shape}")
```
Note: it is more efficient to call `decode_csv` on large batches of lines than on individual lines of csv text.
### `tf.data.experimental.CsvDataset`
The `tf.data.experimental.CsvDataset` class provides a minimal CSV `Dataset` interface without the convenience features of the `make_csv_dataset` function: column header parsing, column type-inference, automatic shuffling, file interleaving.
This constructor uses `record_defaults` the same way as `tf.io.decode_csv`:
```
simple_titanic = tf.data.experimental.CsvDataset(titanic_file_path, record_defaults=titanic_types, header=True)
for example in simple_titanic.take(1):
  print([e.numpy() for e in example])
```
The above code is basically equivalent to:
```
def decode_titanic_line(line):
  return tf.io.decode_csv(line, titanic_types)

manual_titanic = (
    # Load the lines of text
    tf.data.TextLineDataset(titanic_file_path)
    # Skip the header row.
    .skip(1)
    # Decode the line.
    .map(decode_titanic_line)
)

for example in manual_titanic.take(1):
  print([e.numpy() for e in example])
```
#### Multiple files
To parse the fonts dataset using `experimental.CsvDataset`, you first need to determine the column types for the `record_defaults`. Start by inspecting the first row of one file:
```
font_line = pathlib.Path(font_csvs[0]).read_text().splitlines()[1]
print(font_line)
```
Only the first two fields are strings, the rest are ints or floats, and you can get the total number of features by counting the commas:
```
num_font_features = font_line.count(',')+1
font_column_types = [str(), str()] + [float()]*(num_font_features-2)
```
The `CsvDataset` constructor can take a list of input files, but reads them sequentially. The first file in the list of CSVs is `AGENCY.csv`:
```
font_csvs[0]
```
So when you pass the list of files to `CsvDataset`, the records from `AGENCY.csv` are read first:
```
simple_font_ds = tf.data.experimental.CsvDataset(
font_csvs,
record_defaults=font_column_types,
header=True)
for row in simple_font_ds.take(10):
  print(row[0].numpy())
```
To interleave multiple files, use `Dataset.interleave`.
Here's an initial dataset that contains the csv file names:
```
font_files = tf.data.Dataset.list_files("fonts/*.csv")
```
This shuffles the file names each epoch:
```
print('Epoch 1:')
for f in list(font_files)[:5]:
  print(" ", f.numpy())
print(' ...')
print()

print('Epoch 2:')
for f in list(font_files)[:5]:
  print(" ", f.numpy())
print(' ...')
```
The `interleave` method takes a `map_func` that creates a child-`Dataset` for each element of the parent-`Dataset`.
Here, you want to create a `CsvDataset` from each element of the dataset of files:
```
def make_font_csv_ds(path):
  return tf.data.experimental.CsvDataset(
      path,
      record_defaults=font_column_types,
      header=True)
```
The `Dataset` returned by `interleave` returns elements by cycling over a number of child `Dataset`s. Note, below, how the dataset cycles over `cycle_length=3` font files:
```
font_rows = font_files.interleave(make_font_csv_ds,
cycle_length=3)
fonts_dict = {'font_name':[], 'character':[]}
for row in font_rows.take(10):
  fonts_dict['font_name'].append(row[0].numpy().decode())
  fonts_dict['character'].append(chr(row[2].numpy()))
pd.DataFrame(fonts_dict)
```
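Stripped of all the I/O, the cycling behavior of `interleave` can be sketched in plain Python. This is a simplification (the real operator also supports `block_length` and parallel reads): pull one element at a time from up to `cycle_length` child iterators in round-robin order, replacing an exhausted child with the next source.

```python
# Round-robin over up to `cycle_length` child iterators; an exhausted
# child is replaced by the next pending source.
def interleave(sources, cycle_length):
    active = [iter(s) for s in sources[:cycle_length]]
    pending = iter(sources[cycle_length:])
    while active:
        for it in list(active):
            try:
                yield next(it)
            except StopIteration:
                active.remove(it)
                nxt = next(pending, None)
                if nxt is not None:
                    active.append(iter(nxt))

files = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"], ["d1", "d2"]]
print(list(interleave(files, cycle_length=3)))
# ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', 'd1', 'd2']
```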
#### Performance
Earlier, it was noted that `io.decode_csv` is more efficient when run on a batch of strings.
It is possible to take advantage of this fact, when using large batch sizes, to improve CSV loading performance (but try [caching](#caching) first).
With the built-in loader, 20 batches of 2,048 examples take about 17s.
```
BATCH_SIZE=2048
fonts_ds = tf.data.experimental.make_csv_dataset(
file_pattern = "fonts/*.csv",
batch_size=BATCH_SIZE, num_epochs=1,
num_parallel_reads=100)
%%time
for i,batch in enumerate(fonts_ds.take(20)):
  print('.',end='')
print()
```
Passing **batches of text lines** to `decode_csv` runs faster, in about 5s:
```
fonts_files = tf.data.Dataset.list_files("fonts/*.csv")
fonts_lines = fonts_files.interleave(
lambda fname:tf.data.TextLineDataset(fname).skip(1),
cycle_length=100).batch(BATCH_SIZE)
fonts_fast = fonts_lines.map(lambda x: tf.io.decode_csv(x, record_defaults=font_column_types))
%%time
for i,batch in enumerate(fonts_fast.take(20)):
  print('.',end='')
print()
```
For another example of increasing csv performance by using large batches see the [overfit and underfit tutorial](../keras/overfit_and_underfit.ipynb).
This sort of approach may work, but consider other options like `cache` and `snapshot`, or re-encoding your data into a more streamlined format.
# Recurrent Neural Network with MNIST
## Introduction
A Recurrent Neural Network (RNN) differs from the CNN (feature extraction) and autoencoder (dimensionality reduction and reconstruction) introduced earlier: it focuses on problems involving **time series**. For example, the words in an article depend on the text before and after them, so building a text generator requires an RNN.
How does an RNN solve this? Look at the basic RNN structure diagram below, where `Xt` and `Ht` are the input and output at time `t`. Notice that `Ht` depends on both `Ht-1` and `Xt`, so you can simply picture it as a neural network with one extra input.

If we unroll the RNN along the time axis, it looks like this:

The output at each time step depends not only on the current input but also on the outputs of the previous step, the step before that, and so on. In this way, the outputs across the whole time series are linked together.
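The recurrence just described, h_t = tanh(W_x x_t + W_h h_{t-1} + b), can be written in a few lines of NumPy. This is only a sketch with illustrative shapes (3 inputs, 2 hidden units), not the TensorFlow cell:

```python
import numpy as np

# One vanilla RNN step: the new hidden state depends on both the
# current input x_t and the previous hidden state h_{t-1}.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(2, 3))   # input-to-hidden weights (2 hidden, 3 inputs)
W_h = rng.normal(size=(2, 2))   # hidden-to-hidden weights
b = np.zeros(2)

def rnn_step(x_t, h_prev):
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(2)
for x_t in rng.normal(size=(4, 3)):  # unroll over 4 time steps
    h = rnn_step(x_t, h)
print(h.shape)  # (2,)
```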
In theory this is an elegant structure that can handle many time-series problems, but in practice it runs into trouble. What kind of trouble? During training, backpropagation updates the weights, and because an RNN's output depends on the previous step, the updates also propagate back to earlier steps, and so on. When all these updates compound, two failure modes can occur: gradient exploding and gradient vanishing.
### Gradient Exploding
Gradient exploding means that as the sequence grows, the weight updates become too large to handle. A relatively simple fix is to impose a hard upper bound on updates: any update larger than the bound is replaced by the bound itself.
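The hard-cap remedy described above is gradient clipping by norm; a minimal NumPy sketch:

```python
import numpy as np

# Gradient clipping by norm: if the gradient's norm exceeds a fixed
# threshold, rescale it so that its norm equals the threshold.
def clip_by_norm(grad, max_norm):
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        return grad * (max_norm / norm)
    return grad

g = np.array([3.0, 4.0])        # norm 5
print(clip_by_norm(g, 1.0))     # rescaled to [0.6, 0.8], norm 1
```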
### Gradient Vanishing
Gradient vanishing means that as the sequence grows, the weight updates shrink toward 0, so the RNN only **remembers** recent events and cannot recall results from further back in time. The fix for this is **LSTMs**: [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) describes many scenarios where RNNs work remarkably well, precisely because LSTMs are used.
Using an LSTM is very simple: just replace the RNN cell with an LSTM cell. We won't cover LSTM internals here; see this excellent [article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/). For now, just take it as an effective remedy for the vanishing-gradient problem.

Image from the Udacity [course](https://www.youtube.com/watch?time_continue=4&v=VuamhbEWEWA)
## MNIST Test with RNN
Next we'll use TensorFlow's RNN on the MNIST handwritten-digit recognition problem. But how does an image relate to a time series? A simple idea: treat the 28 x 28 MNIST data as 28 time steps, feeding the RNN a 28-dimensional vector at each step. In other words, let the RNN **'read the digit row by row'**; once it has seen the whole image, its final output `H28` is fed to a fully connected layer and then to a classifier that decides which digit it saw.
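The row-by-row reading amounts to a simple reshape. For instance, with NumPy (illustrative shapes):

```python
import numpy as np

# Treat each flattened 784-pixel MNIST image as a sequence of
# 28 time steps with a 28-dimensional input per step.
batch = np.zeros((100, 784))             # a flattened MNIST-style batch
sequence = batch.reshape(100, 28, 28)    # (batch, n_steps, n_input)
print(sequence.shape)  # (100, 28, 28)
```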
### Imports
```
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell
from libs.utils import weight_variable, bias_variable
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
print("Package loaded")
```
### Configurations
Set the parameters. To have the RNN read an MNIST image, we feed it one row at a time (n_input = 28 dimensions), scanning from top to bottom over 28 steps (n_steps = 28).
```
n_input = 28 # MNIST data input (image shape: 28*28)
n_steps = 28 # steps
n_hidden = 128 # number of neurons in fully connected layer
n_classes = 10 # (0-9 digits)
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
weights = {
"w_fc" : weight_variable([n_hidden, n_classes], "w_fc")
}
biases = {
"b_fc" : bias_variable([n_classes], "b_fc")
}
```
### Adjust input x to RNN
To match the input format expected by TensorFlow's RNN, we now reshape the input tensor:
- First transpose x to `(n_steps, None, n_input)`
```
x_transpose = tf.transpose(x, [1, 0, 2])
print("x_transpose shape: %s" % x_transpose.get_shape())
```
- Then reshape it to `(n_steps * None, n_input)`
```
x_reshape = tf.reshape(x_transpose, [-1, n_input])
print("x_reshape shape: %s" % x_reshape.get_shape())
```
- Finally, split it into a list of length n_steps, whose i-th element corresponds to step i. Each element has shape None x n_input
```
x_split = tf.split(0, n_steps, x_reshape)
print("type of x_split: %s" % type(x_split))
print("length of x_split: %d" % len(x_split))
print("shape of x_split[0]: %s" % x_split[0].get_shape())
```
Next, build the model. Here we use a basic RNN cell directly, with an output dimension of n_hidden = 128.
Its output h is a list of length 28; the i-th element is the output at step i, and each element has shape None x n_hidden
```
basic_rnn_cell = rnn_cell.BasicRNNCell(n_hidden)
h, states = rnn.rnn(basic_rnn_cell, x_split, dtype=tf.float32)
print("type of outputs: %s" % type(h))
print("length of outputs: %d" % len(h))
print("shape of h[0]: %s" % h[0].get_shape())
print("type of states: %s" % type(states))
```
### Fully connected layer
Attach a 128 -> 10 fully connected layer
```
h_fc = tf.matmul(h[-1], weights['w_fc']) + biases['b_fc']
y_ = h_fc
```
### cost function
Classify with softmax
```
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(h_fc, y))
optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)
```
### accuracy function
```
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
### Training
```
batch_size = 100
init_op = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init_op)
variables_names =[v.name for v in tf.trainable_variables()]
for step in range(5000):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    batch_x = np.reshape(batch_x, (batch_size, n_steps, n_input))
    cost_train, accuracy_train, states_train, rnn_out = sess.run(
        [cost, accuracy, states, h[-1]], feed_dict={x: batch_x, y: batch_y})
    values = sess.run(variables_names)
    rnn_out_mean = np.mean(rnn_out)
    for k, v in zip(variables_names, values):
        if k == 'RNN/BasicRNNCell/Linear/Matrix:0':
            w_rnn_mean = np.mean(v)
    if step < 1500:
        if step % 100 == 0:
            print("step %d, loss %.5f, accuracy %.3f, mean of rnn weight %.5f, mean of rnn out %.5f" % (step, cost_train, accuracy_train, w_rnn_mean, rnn_out_mean))
    else:
        if step % 1000 == 0:
            print("step %d, loss %.5f, accuracy %.3f, mean of rnn weight %.5f, mean of rnn out %.5f" % (step, cost_train, accuracy_train, w_rnn_mean, rnn_out_mean))
    optimizer.run(feed_dict={x: batch_x, y: batch_y})
cost_test, accuracy_test = sess.run([cost, accuracy], feed_dict={x: np.reshape(mnist.test.images, [-1, 28, 28]), y: mnist.test.labels})
print("final loss %.5f, accuracy %.5f" % (cost_test, accuracy_test) )
```
### Result
The predictions are very poor: test accuracy is only about 10%, and the mean of the RNN weights is quite low. Printing the final 128-dimensional output below, every value is close to 1 or -1. A little searching reveals that the RNN passes its output through a `tanh`, and the derivative of `tanh` at 1 or -1 is 0. This situation is exactly **gradient vanishing**.
```
print(h[-1].eval(feed_dict={x: np.reshape(mnist.test.images, [-1, 28, 28]), y: mnist.test.labels})[0, :])
```
### LSTM
Since the plain RNN performed so poorly, let's try TensorFlow's LSTM cell and see how it does. Changing the cell is trivial: just replace BasicRNNCell with BasicLSTMCell.
```
tf.reset_default_graph()
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
weights = {
"w_fc" : weight_variable([n_hidden, n_classes], "w_fc")
}
biases = {
"b_fc" : bias_variable([n_classes], "b_fc")
}
x_transpose = tf.transpose(x, [1, 0, 2])
x_reshape = tf.reshape(x_transpose, [-1, n_input])
x_split = tf.split(0, n_steps, x_reshape)
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
h, states = rnn.rnn(lstm_cell, x_split, dtype=tf.float32)
h_fc = tf.matmul(h[-1], weights['w_fc']) + biases['b_fc']
y_ = h_fc
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(h_fc, y))
optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
batch_size = 100
init_op = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init_op)
variables_names =[v.name for v in tf.trainable_variables()]
for step in range(5000):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    batch_x = np.reshape(batch_x, (batch_size, n_steps, n_input))
    cost_train, accuracy_train, states_train, rnn_out = sess.run(
        [cost, accuracy, states, h[-1]], feed_dict={x: batch_x, y: batch_y})
    values = sess.run(variables_names)
    rnn_out_mean = np.mean(rnn_out)
    for k, v in zip(variables_names, values):
        if k == 'RNN/BasicLSTMCell/Linear/Matrix:0':
            w_rnn_mean = np.mean(v)
    if step < 1500:
        if step % 100 == 0:
            print("step %d, loss %.5f, accuracy %.3f, mean of lstm weight %.5f, mean of lstm out %.5f" % (step, cost_train, accuracy_train, w_rnn_mean, rnn_out_mean))
    else:
        if step % 1000 == 0:
            print("step %d, loss %.5f, accuracy %.3f, mean of lstm weight %.5f, mean of lstm out %.5f" % (step, cost_train, accuracy_train, w_rnn_mean, rnn_out_mean))
    optimizer.run(feed_dict={x: batch_x, y: batch_y})
cost_test, accuracy_test = sess.run([cost, accuracy], feed_dict={x: np.reshape(mnist.test.images, [-1, 28, 28]), y: mnist.test.labels})
print("final loss %.5f, accuracy %.5f" % (cost_test, accuracy_test) )
```
Bingo! Accuracy rises dramatically; LSTMs really do fix the RNN's weaknesses.
## Summary
We covered the RNN architecture and the problems it runs into, `Gradient Exploding` and `Gradient Vanishing`, and practiced with an RNN on the MNIST handwritten-digit dataset.
On MNIST a plain RNN hits the vanishing-gradient problem, while switching to LSTMs raises accuracy enormously.
### Exercises
- Look for TensorFlow functions that expose the gradients at each stage
- Study backpropagation through time for RNNs
## Learning resources
- [Colah Blog : Understanding LSTMs](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
- [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
```
import msiwarp as mx
from msiwarp.util.read_sbd import read_sbd_meta, read_spectrum_fs
from msiwarp.util.warp import to_mz, peak_density_mz, plot_range, get_mx_spectrum, generate_mean_spectrum
import matplotlib.pyplot as plt
import numpy as np
i_r = 200
# scaling to test impact of sigma on alignment performance
sigma_1 = 3.0e-7
epsilon = 2.55
print("using spectrum {} as reference, sigma: {} ppm, and epsilon: {:0.2f} ppm".format(i_r, sigma_1 * 1e6, 2 * epsilon * sigma_1 * 1e6))
fdir = 'datasets/orbitrap-liver/'
fpath_sbd = fdir + '5mixes_onratliver_50micron-centroided.sbd'
fpath_triplets_raw = fdir + 'triplets_raw.dat'
fpath_triplets_warped = fdir + 'triplets_warped.dat'
fpath_dispersion_csv = fdir + 'results/dispersion_100.csv'
fpath_scatter = fdir + 'results/scatter'
# experiment settings
instrument_type = 'orbitrap'
mz_begin = 150
mz_end = 1000
meta = read_sbd_meta(fpath_sbd)
spectra = [get_mx_spectrum(fpath_sbd, meta, i, sigma_1, instrument_type) for i in range(len(meta))]
tic = np.array([m[2] for m in meta])
xi = np.linspace(mz_begin, mz_end, 1000)
(yi, xp, yp) = peak_density_mz(spectra, xi, bandwidth=15, stride=100)
_, ax = plt.subplots(figsize=(8,6))
ax.plot(xi, yi)
ax.scatter(xp, yp)
ax.set_ylabel('peak density')
ax.set_xlabel('m/z')
plt.show()
# we're using the same warping nodes for all spectra here
node_mzs = (xp[:-1] + xp[1:]) / 2
node_mzs = np.array([mz_begin, *node_mzs, mz_end])
# setup warping parameters
n_steps = 33 # the slack of a warping node is +- (n_steps * s * sigma @ the node's m/z)
s = 2 * epsilon / n_steps
node_deltas = np.array([s * sigma_1 * mz ** (3/2) for mz in node_mzs])
nodes = mx.initialize_nodes(node_mzs, node_deltas, n_steps)
s_r = get_mx_spectrum(fpath_sbd, meta, i_r, sigma_1, instrument_type)
print("warping spectra...")
import time
t0 = time.time()
optimal_moves = mx.find_optimal_spectra_warpings(spectra, s_r, nodes, epsilon)
t1 = time.time()
print("found optimal warpings in {:0.2f}s".format(t1 - t0))
t2 = time.time()
warped_spectra = [mx.warp_peaks(s_i, nodes, o_i) for (s_i, o_i) in zip(spectra, optimal_moves)]
t3 = time.time()
print("warped spectra in {:0.2f}s".format(t3 - t2))
if mx.spectra_to_triplets(fpath_triplets_raw, spectra):
print("wrote raw triplets to file")
if mx.spectra_to_triplets(fpath_triplets_warped, warped_spectra):
print("wrote warped triplets to file")
n_points = 2000000
s_m = generate_mean_spectrum(spectra, n_points, sigma_1,
mz_begin, mz_end, tic, instrument_type)
s_m_100 = mx.peaks_top_n(s_m, 100)
mz_ref = np.sort(to_mz(s_m_100))
mass_tolerance = 5 # ppm
fig, ax = plt.subplots(1, 3, figsize=(12,4), sharey=True)
ax[0].set_ylabel('spectrum index')
for i, mz_i in enumerate(mz_ref[::35]):
    d = mass_tolerance * mz_i / 1e6 # -+ mass_tolerance around reference mass
    mz0 = mz_i - d
    mz1 = mz_i + d

    plot_range(fpath_triplets_raw, mz0, mz1, ax[i], 'tab:cyan', 5, in_ppm=True)
    plot_range(fpath_triplets_warped, mz0, mz1, ax[i], 'tab:orange', 5, in_ppm=True)

    ax[i].set_facecolor((0.0, 0.0, 0.0))
    ax[i].set_title('m/z {:0.3f}'.format(mz_i))
    ax[i].set_xticks([-mass_tolerance, 0, mass_tolerance])
    ax[i].set_xlabel('relative shift (ppm)')
```
# **Part I: Basic concepts for time series analysis**
### 1.1 What is a time series
In short:
Observe and measure a variable (or group of variables) $x\left ( t \right )$ at a sequence of times $t_{1},t_{2},\cdots ,t_{n}$; the resulting collection of discrete values is called a time series.
For example, the closing prices of stock A on each trading day between June 1, 2015 and June 1, 2016 form a time series, as do the daily maximum temperatures at a given location.
Some characteristic components:
**Trend**: a persistent upward or downward movement of the series over a long period.
**Seasonal variation**: periodic fluctuation that repeats within a year, resulting from factors such as climate, production conditions, holidays, and customs.
**Cyclical variation**: periodic movement with no fixed period length. A cycle may persist for some time, but unlike a trend it does not move in a single direction; instead, rises and falls alternate.
**Irregular variation**: the random fluctuation that remains after removing trend, seasonal, and cyclical components. It is usually mixed into the series, giving it a wavy or oscillating appearance. A series containing only random fluctuation is also called a **stationary series**.
### 1.2 Stationarity
Roughly speaking, a time series is stationary if its mean shows no systematic change (no trend), its variance shows no systematic change, and strictly periodic variation has been removed.
```
IndexData = DataAPI.MktIdxdGet(indexID=u"",ticker=u"000001",beginDate=u"20130101",endDate=u"20140801",field=u"tradeDate,closeIndex,CHGPct",pandas="1")
IndexData = IndexData.set_index(IndexData['tradeDate'])
IndexData['colseIndexDiff_1'] = IndexData['closeIndex'].diff(1) # first difference
IndexData['closeIndexDiff_2'] = IndexData['colseIndexDiff_1'].diff(1) # second difference
IndexData.plot(subplots=True,figsize=(18,12))
```

In the figure above, the first panel shows the closing values of the Shanghai Composite Index over part of this period, a **non-stationary time series**; the two panels below it are **stationary time series** (no formal test here, just to show the difference; testing for stationarity is discussed later).
Attentive readers will have noticed that the two lower panels are in fact **differences** of the first series: their mean and variance are roughly stable, so the differenced series are stationary. We will return to this transformation later.
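Differencing can be illustrated with a small NumPy example on synthetic data: a series with a linear trend is non-stationary, but its first difference fluctuates around a constant mean.

```python
import numpy as np

# A series with a linear trend is non-stationary; its first difference
# fluctuates around a constant mean (the slope of the trend).
t = np.arange(100)
series = 2.0 * t + np.random.default_rng(1).normal(size=100)  # trend + noise
diff1 = np.diff(series)  # first difference, analogous to .diff(1)
print(diff1.mean())      # close to the slope, 2.0
```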
We can now state the definition of stationarity:
**Strict stationarity**:
If for every time t, every positive integer k, and any k positive integers $(t_{1},t_{2},...,t_{k})$,
the joint distribution of
$(r_{t_{1}},r_{t_{2}},...,r_{t_{k}})$
is the same as that of
$(r_{t_{1}+t},r_{t_{2}+t},...,r_{t_{k}+t})$,
then the time series {rt} is said to be **strictly stationary**.
In other words, the joint distribution of
$(r_{t_{1}},r_{t_{2}},...,r_{t_{k}})$ is invariant under time shifts. This is a very strong condition; what we usually assume is a weaker form of stationarity:

### 1.3 Correlation coefficient and autocorrelation function
#### 1.3.1 Correlation coefficient

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
a = pd.Series([9,8,7,5,4,2])
b = a - a.mean() # remove the mean
plt.figure(figsize=(10,4))
a.plot(label='a')
b.plot(label='mean removed a')
plt.legend()
```
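For reference, the correlation coefficient can be computed directly from its definition (a small NumPy sketch): the covariance of the de-meaned series divided by the product of their standard deviations.

```python
import numpy as np

# Pearson correlation from its definition: covariance of the de-meaned
# series divided by the product of their standard deviations.
def pearson(x, y):
    xm, ym = x - x.mean(), y - y.mean()
    return np.dot(xm, ym) / np.sqrt(np.dot(xm, xm) * np.dot(ym, ym))

a = np.array([9.0, 8, 7, 5, 4, 2])
print(pearson(a, 2 * a + 3))  # ~1.0: a perfect positive linear relation
print(pearson(a, -a))         # ~-1.0
```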
#### 1.3.2 Autocorrelation Function (ACF)


An example:
```
from scipy import stats
import statsmodels.api as sm # statistics library

data = IndexData['closeIndex'] # Shanghai Composite Index
m = 10 # check the first 10 autocorrelation coefficients
acf,q,p = sm.tsa.acf(data,nlags=m,qstat=True) ## compute autocorrelation coefficients and p-values
out = np.c_[range(1,11), acf[1:], q, p]
output=pd.DataFrame(out, columns=['lag', "AC", "Q", "P-value"])
output = output.set_index('lag')
output
```
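The lag-k sample autocorrelation can also be computed by hand; the NumPy sketch below mirrors what `sm.tsa.acf` returns, up to estimator details such as the choice of denominator.

```python
import numpy as np

# Lag-k sample autocorrelation computed directly from the definition.
def acf(x, k):
    xm = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(xm[: len(xm) - k], xm[k:]) / np.dot(xm, xm)

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # a strongly autocorrelated series
print(acf(x, 0))  # 1.0 by construction
print(acf(x, 1))  # close to 1: neighboring points move together
```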


Now let's look at the daily return series of the Shanghai Composite Index over the same period:

### 1.4 White noise series and linear time series


# Comparison between the magnetic field produced by an oblate ellipsoid and a sphere
### Import the required modules and functions
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from fatiando import gridder, utils
from fatiando.gravmag import sphere
from fatiando.mesher import Sphere
import oblate_ellipsoid
from mesher import OblateEllipsoid
# Set some plot parameters
from matplotlib import rcParams
rcParams['figure.dpi'] = 300.
rcParams['font.size'] = 6
rcParams['xtick.labelsize'] = 'medium'
rcParams['ytick.labelsize'] = 'medium'
rcParams['axes.labelsize'] = 'large'
rcParams['legend.fontsize'] = 'medium'
rcParams['savefig.dpi'] = 300.
```
### Set some parameters for modelling
```
# The local-geomagnetic field
F, inc, dec = 60000, 50, 20
# Create a regular grid at z = 0 m
shape = (50, 50)
area = [-5000, 5000, -4000, 6000]
xp, yp, zp = gridder.regular(area, shape, z=0)
```
### Oblate ellipsoid versus sphere
This test compares the total-field anomaly produced by an oblate ellipsoid with that produced by a sphere. The ellipsoid has semi-axes $a$ and $b$ equal to `499.9 m` and `500.1 m`, respectively, and the sphere has a radius equal to `500 m`. Both bodies are centered at the point `(0, 0, 1000)` and have the same magnetization.
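A quick back-of-the-envelope check of why the two anomalies should nearly coincide: a uniformly magnetized sphere is exactly a dipole with moment $V M$, and the near-spherical ellipsoid has almost the same volume. The sketch below assumes the oblate spheroid has one small semi-axis (499.9 m) and two large ones (500.1 m); that pairing of semi-axes is an assumption for illustration.

```python
import numpy as np

# Volumes of the two bodies. The sphere's dipole moment is V * M, so if
# the volumes (and magnetizations) nearly agree, so do the anomalies.
a, b = 499.9, 500.1              # ellipsoid semi-axes (m)
r = 0.5 * (a + b)                # sphere radius (m)
V_sphere = 4.0 / 3.0 * np.pi * r**3
V_ellipsoid = 4.0 / 3.0 * np.pi * a * b**2   # oblate: axes (b, b, a)
rel = (V_ellipsoid - V_sphere) / V_sphere
print(rel)  # a relative volume difference of a few parts in ten thousand
```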
##### Oblate ellipsoid
```
ellipsoid = OblateEllipsoid(0, 0, 1000, 499.9, 500.1, 40, -60, 180,
                            {'principal susceptibilities': [0.01, 0.01, 0.01],
                             'susceptibility angles': [-40, 90, 7],
                             'remanent magnetization': [0.7, -7, 10]})
magnetization = oblate_ellipsoid.magnetization(ellipsoid, F, inc, dec, demag=True)
magnetization
```
##### Sphere
```
spherical_body = Sphere(ellipsoid.x, ellipsoid.y, ellipsoid.z,
                        0.5*(ellipsoid.large_axis + ellipsoid.small_axis),
                        {'magnetization': magnetization})
spherical_body.props['magnetization']
```
##### Total-field anomalies
```
# total-field anomaly produced by the ellipsoid (in nT)
tf_t = oblate_ellipsoid.tf(xp, yp, zp, [ellipsoid],
F, inc, dec)
# total-field anomaly produced by the sphere (in nT)
tf_s = sphere.tf(xp, yp, zp, [spherical_body], inc, dec)
# residuals
tf_r = tf_t - tf_s
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(tf_t), np.max(tf_t),
np.min(tf_s), np.max(tf_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(tf_r), np.max(tf_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
```
##### Field components
```
# field components produced by the ellipsoid (in nT)
bx_t = oblate_ellipsoid.bx(xp, yp, zp, [ellipsoid],
F, inc, dec)
by_t = oblate_ellipsoid.by(xp, yp, zp, [ellipsoid],
F, inc, dec)
bz_t = oblate_ellipsoid.bz(xp, yp, zp, [ellipsoid],
F, inc, dec)
bt = [bx_t, by_t, bz_t]
# field components produced by the sphere (in nT)
bx_s = sphere.bx(xp, yp, zp, [spherical_body])
by_s = sphere.by(xp, yp, zp, [spherical_body])
bz_s = sphere.bz(xp, yp, zp, [spherical_body])
bs = [bx_s, by_s, bz_s]
# residuals
bx_r = bx_t - bx_s
by_r = by_t - by_s
bz_r = bz_t - bz_s
br = [bx_r, by_r, bz_r]
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(bx_t), np.max(bx_t),
np.min(bx_s), np.max(bx_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(bx_r), np.max(bx_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(by_t), np.max(by_t),
np.min(by_s), np.max(by_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(by_r), np.max(by_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(bz_t), np.max(bz_t),
np.min(bz_s), np.max(bz_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(bz_r), np.max(bz_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
```
# Recommendation Engine
## Building a Movie Recommendation Engine using MovieLens dataset
We will be using the MovieLens dataset. This dataset contains 100004 ratings across 9125 movies from 671 users. All selected users had rated at least 20 movies.
We are going to build a recommendation engine that suggests movies a user hasn't watched yet, based on the movies they have already rated. We will use the k-nearest neighbours algorithm, which we will implement from scratch.
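The core ingredient is a user-to-user similarity measure. As a small numeric sketch (toy data, not the MovieLens dataset), we mean-centre two users' ratings by each user's own average and take the Pearson-style correlation over the movies both have rated, with `NaN` marking unrated movies, mirroring the approach implemented below:

```python
import numpy as np

u1 = np.array([5.0, 3.0, np.nan, 4.0, np.nan])
u2 = np.array([4.0, np.nan, 2.0, 5.0, 1.0])

both = ~np.isnan(u1) & ~np.isnan(u2)   # movies rated by both users
a = u1[both] - np.nanmean(u1)          # centre by each user's own mean rating
b = u2[both] - np.nanmean(u2)
pearson = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(round(pearson, 3))  # → 0.447
```

A value near +1 means the two users deviate from their own averages in the same direction on shared movies; near -1 means they disagree.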
```
import pandas as pd
```
The movies file contains the movie id, title, and genre of each movie; the ratings file contains the user id, movie id, rating, and timestamp, where each line after the header row represents one rating of one movie by one user.
```
movie_file = "data\\movie_dataset\\movies.csv"
movie_data = pd.read_csv(movie_file, usecols = [0, 1])
movie_data.head()
ratings_file = "data\\movie_dataset\\ratings.csv"
ratings_info = pd.read_csv(ratings_file, usecols = [0, 1, 2])
ratings_info.head()
movie_info = pd.merge(movie_data, ratings_info, left_on = 'movieId', right_on = 'movieId')
movie_info.head()
movie_info.loc[0:10, ['userId']]
movie_info[movie_info.title == "Toy Story (1995)"].head()
movie_info = pd.DataFrame.sort_values(movie_info, ['userId', 'movieId'], ascending = [0, 1])
movie_info.head()
```
Let us see the number of users and number of movies in our dataset
```
num_users = max(movie_info.userId)
num_movies = max(movie_info.movieId)
print(num_users)
print(num_movies)
```
How many movies were rated by each user, and how many users rated each movie?
```
movie_per_user = movie_info.userId.value_counts()
movie_per_user.head()
users_per_movie = movie_info.title.value_counts()
users_per_movie.head()
```
Function to find top N favourite movies of a user
```
def fav_movies(current_user, N):
    # get the rows for the current user, sort by rating in descending order,
    # and pick the top N rows of the dataframe
    fav_movies = pd.DataFrame.sort_values(movie_info[movie_info.userId == current_user],
                                          ['rating'], ascending=[0])[:N]
    # return the list of titles
    return list(fav_movies.title)

print(fav_movies(5, 3))
```
Let's build the recommendation engine now:
- We will use a neighbour-based collaborative filtering model.
- The idea is to use the k-nearest neighbours algorithm to find the neighbours of the current user.
- We will then use the neighbours' ratings to predict ratings for movies the current user has not yet rated.
We will represent the movies watched by a user as a vector with one value for every movie in our dataset.
If a user hasn't rated a movie, it will be represented as NaN.
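The prediction rule used later in `nearest_neighbour_ratings` is the classic mean-centred weighted average: the current user's own average rating, adjusted by the neighbours' similarity-weighted deviations from their own averages. A small numeric sketch (all values are made up for illustration):

```python
import numpy as np

# r_hat(u, i) = mean(u) + sum_j sim(u, j) * (r(j, i) - mean(j)) / sum_j sim(u, j)
user_mean = 3.5                                  # current user's average rating
neighbour_ratings = np.array([4.0, 2.0, 5.0])    # neighbours' ratings of movie i
neighbour_means = np.array([3.0, 2.5, 4.5])      # neighbours' average ratings
sims = np.array([0.9, 0.5, 0.8])                 # similarities to the current user

predicted = user_mean + np.sum(sims * (neighbour_ratings - neighbour_means)) / np.sum(sims)
print(round(predicted, 3))  # → 3.977
```

Centring by each neighbour's own mean compensates for users who systematically rate high or low.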
```
user_movie_rating_matrix = pd.pivot_table(movie_info, values = 'rating', index=['userId'], columns=['movieId'])
user_movie_rating_matrix.head()
```
Now, we will find the similarity between 2 users by using correlation
```
from scipy.spatial.distance import correlation
import numpy as np
def similarity(user1, user2):
    # normalize each user's ratings by subtracting that user's mean rating
    # (nanmean returns the mean of an array while ignoring NaN values)
    user1 = np.array(user1) - np.nanmean(user1)
    user2 = np.array(user2) - np.nanmean(user2)
    # find the subset of movies rated by both users (unrated movies are NaN;
    # note a `> 0` check would wrongly drop below-average ratings after centring)
    common_movie_ids = [i for i in range(len(user1))
                        if not np.isnan(user1[i]) and not np.isnan(user2[i])]
    if len(common_movie_ids) == 0:
        return 0
    user1 = np.array([user1[i] for i in common_movie_ids])
    user2 = np.array([user2[i] for i in common_movie_ids])
    # scipy's `correlation` is a distance (1 - Pearson correlation),
    # so convert it back to a similarity
    return 1 - correlation(user1, user2)
```
We will now use the similarity function to find the nearest neighbour of a current user
```
# nearest_neighbour_ratings finds the K nearest neighbours of the current user and
# then uses their ratings to predict the current user's ratings for unrated movies
def nearest_neighbour_ratings(current_user, K):
    # empty frame whose row index is userId and whose value will be
    # the similarity of that user to the current user
    similarity_matrix = pd.DataFrame(index=user_movie_rating_matrix.index,
                                     columns=['similarity'])
    for i in user_movie_rating_matrix.index:
        # similarity between user i and the current user
        similarity_matrix.loc[i] = similarity(user_movie_rating_matrix.loc[current_user],
                                              user_movie_rating_matrix.loc[i])
    # sort the similarity matrix in descending order
    similarity_matrix = pd.DataFrame.sort_values(similarity_matrix,
                                                 ['similarity'], ascending=[0])
    # pick the top K nearest neighbours
    nearest_neighbours = similarity_matrix[:K]
    neighbour_movie_ratings = user_movie_rating_matrix.loc[nearest_neighbours.index]
    # placeholder dataframe for the current user's predicted ratings
    predicted_movie_rating = pd.DataFrame(index=user_movie_rating_matrix.columns, columns=['rating'])
    # iterate over all movies for the current user
    for i in user_movie_rating_matrix.columns:
        # by default, the predicted rating is the current user's average rating
        predicted_rating = np.nanmean(user_movie_rating_matrix.loc[current_user])
        for j in neighbour_movie_ratings.index:
            # if user j has rated the ith movie
            if user_movie_rating_matrix.loc[j, i] > 0:
                predicted_rating += ((user_movie_rating_matrix.loc[j, i] - np.nanmean(user_movie_rating_matrix.loc[j])) *
                                     nearest_neighbours.loc[j, 'similarity']) / nearest_neighbours['similarity'].sum()
        predicted_movie_rating.loc[i, 'rating'] = predicted_rating
    return predicted_movie_rating
```
Predicting top N recommendations for a current user
```
def top_n_recommendations(current_user, N):
    predicted_movie_rating = nearest_neighbour_ratings(current_user, 10)
    movies_already_watched = list(user_movie_rating_matrix.loc[current_user]
                                  .loc[user_movie_rating_matrix.loc[current_user] > 0].index)
    predicted_movie_rating = predicted_movie_rating.drop(movies_already_watched)
    top_n_recommendations = pd.DataFrame.sort_values(predicted_movie_rating, ['rating'], ascending=[0])[:N]
    top_n_recommendation_titles = movie_data.loc[movie_data.movieId.isin(top_n_recommendations.index)]
    return list(top_n_recommendation_titles.title)
```
finding out the recommendations for a user
```
current_user = 140
print("User's favorite movies are : ", fav_movies(current_user, 5),
"\nUser's top recommendations are: ", top_n_recommendations(current_user, 3))
```
## Conclusion
We have built a movie recommendation engine using k-nearest neighbour algorithm implemented from scratch.
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_03_3_save_load.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 3: Introduction to TensorFlow**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 3 Material
* Part 3.1: Deep Learning and Neural Network Introduction [[Video]](https://www.youtube.com/watch?v=zYnI4iWRmpc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_1_neural_net.ipynb)
* Part 3.2: Introduction to Tensorflow and Keras [[Video]](https://www.youtube.com/watch?v=PsE73jk55cE&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_2_keras.ipynb)
* **Part 3.3: Saving and Loading a Keras Neural Network** [[Video]](https://www.youtube.com/watch?v=-9QfbGM1qGw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_3_save_load.ipynb)
* Part 3.4: Early Stopping in Keras to Prevent Overfitting [[Video]](https://www.youtube.com/watch?v=m1LNunuI2fk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_4_early_stop.ipynb)
* Part 3.5: Extracting Weights and Manual Calculation [[Video]](https://www.youtube.com/watch?v=7PWgx16kH8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_03_5_weights.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
Running the following code will map your GDrive to ```/content/drive```.
```
try:
    from google.colab import drive
    drive.mount('/content/drive', force_remount=True)
    COLAB = True
    print("Note: using Google CoLab")
    %tensorflow_version 2.x
except:
    print("Note: not using Google CoLab")
    COLAB = False
```
# Part 3.3: Saving and Loading a Keras Neural Network
Complex neural networks will take a long time to fit/train. It is helpful to be able to save these neural networks so that they can be reloaded later. A reloaded neural network will not require retraining. Keras provides three formats for neural network saving.
* **YAML** - Stores the neural network structure (no weights) in the [YAML file format](https://en.wikipedia.org/wiki/YAML).
* **JSON** - Stores the neural network structure (no weights) in the [JSON file format](https://en.wikipedia.org/wiki/JSON).
* **HDF5** - Stores the complete neural network (with weights) in the [HDF5 file format](https://en.wikipedia.org/wiki/Hierarchical_Data_Format). Do not confuse HDF5 with [HDFS](https://en.wikipedia.org/wiki/Apache_Hadoop). They are different. We do not use HDFS in this class.
Usually you will want to save in HDF5.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
import pandas as pd
import io
import os
import requests
import numpy as np
from sklearn import metrics
save_path = "."
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
na_values=['NA', '?'])
cars = df['name']
# Handle missing value
df['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())
# Pandas to Numpy
x = df[['cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'year', 'origin']].values
y = df['mpg'].values # regression
# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(10, activation='relu')) # Hidden 2
model.add(Dense(1)) # Output
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,y,verbose=2,epochs=100)
# Predict
pred = model.predict(x)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y))
print(f"Before save score (RMSE): {score}")
# save neural network structure to JSON (no weights)
model_json = model.to_json()
with open(os.path.join(save_path, "network.json"), "w") as json_file:
    json_file.write(model_json)
# save neural network structure to YAML (no weights)
model_yaml = model.to_yaml()
with open(os.path.join(save_path, "network.yaml"), "w") as yaml_file:
    yaml_file.write(model_yaml)
# save entire network to HDF5 (save everything, suggested)
model.save(os.path.join(save_path,"network.h5"))
```
The code below sets up a neural network and reads the data (for predictions), but it does not clear the model directory or fit the neural network. The weights from the previous fit are used.
Now we reload the network and perform another prediction. The RMSE should match the previous one exactly if the neural network was really saved and reloaded.
```
from tensorflow.keras.models import load_model
model2 = load_model(os.path.join(save_path,"network.h5"))
pred = model2.predict(x)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y))
print(f"After load score (RMSE): {score}")
```
# Mushroom Classification Using Different Classifiers
#### In this project, we will examine the data and create a machine learning algorithm that will detect if the mushroom is edible or poisonous by its specifications like cap shape, cap color, gill color, etc. using different classifiers.
#### The dataset used in this project is "mushrooms.csv" which contains 8124 instances of mushrooms with 23 features like cap-shape, cap-surface, cap-color, bruises, odor, etc. and is made available by UCI Machine Learning.
### Importing the packages
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import os
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.tree import export_graphviz
import graphviz
```
#### Checking the files in the directory
```
print(os.listdir("C:/Users/Kanchi/PycharmProjects/Mushroom-Classification"))
```
### Reading the csv file of the dataset
#### Pandas read_csv() function imports a CSV file (in our case, ‘mushrooms.csv’) to DataFrame format.
```
df = pd.read_csv("mushrooms.csv")
```
# Examining the Data
#### After importing the data, to learn more about the dataset, we'll use the .head(), .info() and .describe() methods.
```
df.head()
df.info()
df.describe()
```
### Shape of the dataset
```
print("Dataset shape:", df.shape)
```
### Visualizing the count of edible and poisonous mushrooms
```
df['class'].value_counts()
df["class"].unique()
count = df['class'].value_counts()
plt.figure(figsize=(8,7))
sns.barplot(count.index, count.values, alpha=0.8, palette="prism")
plt.ylabel('Count', fontsize=12)
plt.xlabel('Class', fontsize=12)
plt.title('Number of poisonous/edible mushrooms')
#plt.savefig("mushrooms1.png", format='png', dpi=900)
plt.show()
```
#### The dataset is balanced.
# Data Manipulation
#### The data is categorical so we’ll use LabelEncoder to convert it to ordinal. LabelEncoder converts each value in a column to a number.
#### This approach requires the category column to be of ‘category’ datatype. By default, a non-numerical column is of ‘object’ datatype. From the df.info() output, we saw that our columns are of ‘object’ datatype. So we will have to change the type to ‘category’ before using this approach.
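Conceptually, what `LabelEncoder` does is simple: map each distinct category to an integer index based on sorted order. A pure-Python sketch (the helper name `label_encode` is illustrative, not part of scikit-learn):

```python
def label_encode(values):
    """Map each distinct category to an integer index, in sorted order."""
    classes = sorted(set(values))
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[v] for v in values], classes

# 'e' = edible, 'p' = poisonous, as in the mushrooms dataset's class column
encoded, classes = label_encode(['p', 'e', 'e', 'p', 'e'])
print(encoded, classes)  # → [1, 0, 0, 1, 0] ['e', 'p']
```

This is why, after encoding, edible becomes 0 and poisonous becomes 1 in the class column.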
```
df = df.astype('category')
df.dtypes
labelencoder = LabelEncoder()
for column in df.columns:
    df[column] = labelencoder.fit_transform(df[column])
df.head()
```
#### The column "veil-type" has a single constant value (0), so it contributes no information and we remove it.
```
df['veil-type']
df=df.drop(["veil-type"],axis=1)
```
### Quick look at the characteristics of the data
#### The violin plot below represents the distribution of the classification characteristics. One can see that the "gill-color" property splits into two parts, one below 3 and one above 3, which may contribute to the classification.
```
df_div = pd.melt(df, "class", var_name="Characteristics")
fig, ax = plt.subplots(figsize=(16,6))
p = sns.violinplot(ax = ax, x="Characteristics", y="value", hue="class", split = True, data=df_div, inner = 'quartile', palette = 'Set1')
df_no_class = df.drop(["class"],axis = 1)
p.set_xticklabels(rotation = 90, labels = list(df_no_class.columns));
#plt.savefig("violinplot.png", format='png', dpi=900, bbox_inches='tight')
```
### Let's look at the correlation between the variables
```
plt.figure(figsize=(14,12))
sns.heatmap(df.corr(),linewidths=.1,cmap="Purples", annot=True, annot_kws={"size": 7})
plt.yticks(rotation=0);
#plt.savefig("corr.png", format='png', dpi=900, bbox_inches='tight')
```
#### A feature that is strongly correlated with the class is typically the most informative for classification. Here, "gill-color" has a correlation of -0.53 with the class, so let's look at it closely.
```
df[['class', 'gill-color']].groupby(['gill-color'], as_index=False).mean().sort_values(by='class', ascending=False)
```
#### Let's look closely at the feature "gill-color".
```
new_var = df[['class', 'gill-color']]
new_var = new_var[new_var['gill-color']<=3.5]
sns.factorplot('class', col='gill-color', data=new_var, kind='count', size=4.5, aspect=.8, col_wrap=4);
#plt.savefig("gillcolor1.png", format='png', dpi=900, bbox_inches='tight')
new_var=df[['class', 'gill-color']]
new_var=new_var[new_var['gill-color']>3.5]
sns.factorplot('class', col='gill-color', data=new_var, kind='count', size=4.5, aspect=.8, col_wrap=4);
#plt.savefig("gillcolor2.png", format='png', dpi=900, bbox_inches='tight')
```
# Preparing the Data
##### Setting the features X and target y, and splitting the data into train and test sets.
```
X = df.drop(['class'], axis=1)
y = df["class"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.1)
```
# Classification Methods
## 1. Decision Tree Classification
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
dot_data = export_graphviz(dt, out_file=None,
feature_names=X.columns,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
#graph.render(filename='DecisionTree')
graph
```
## Feature importance
#### By all methods examined before the feature that is most important is "gill-color".
```
features_list = X.columns.values
feature_importance = dt.feature_importances_
sorted_idx = np.argsort(feature_importance)
plt.figure(figsize=(8,7))
plt.barh(range(len(sorted_idx)), feature_importance[sorted_idx], align='center', color ="red")
plt.yticks(range(len(sorted_idx)), features_list[sorted_idx])
plt.xlabel('Importance')
plt.title('Feature importance')
plt.draw()
#plt.savefig("featureimp.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
### Predicting and estimating the result
```
y_pred_dt = dt.predict(X_test)
print("Decision Tree Classifier report: \n\n", classification_report(y_test, y_pred_dt))
print("Test Accuracy: {}%".format(round(dt.score(X_test, y_test)*100, 2)))
```
### Confusion Matrix for Decision Tree Classifier
```
cm = confusion_matrix(y_test, y_pred_dt)
x_axis_labels = ["Edible", "Poisonous"]
y_axis_labels = ["Edible", "Poisonous"]
f, ax = plt.subplots(figsize =(7,7))
sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Purples", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("PREDICTED LABEL")
plt.ylabel("TRUE LABEL")
plt.title('Confusion Matrix for Decision Tree Classifier')
#plt.savefig("dtcm.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
## 2. Logistic Regression Classification
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver="lbfgs", max_iter=500)
lr.fit(X_train, y_train)
print("Test Accuracy: {}%".format(round(lr.score(X_test, y_test)*100,2)))
```
#### Classification report of Logistic Regression Classifier
```
y_pred_lr = lr.predict(X_test)
print("Logistic Regression Classifier report: \n\n", classification_report(y_test, y_pred_lr))
```
### Confusion Matrix for Logistic Regression Classifier
```
cm = confusion_matrix(y_test, y_pred_lr)
x_axis_labels = ["Edible", "Poisonous"]
y_axis_labels = ["Edible", "Poisonous"]
f, ax = plt.subplots(figsize =(7,7))
sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Purples", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("PREDICTED LABEL")
plt.ylabel("TRUE LABEL")
plt.title('Confusion Matrix for Logistic Regression Classifier')
#plt.savefig("lrcm.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
## 3. KNN Classification
```
from sklearn.neighbors import KNeighborsClassifier
best_Kvalue = 0
best_score = 0
for i in range(1, 10):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train, y_train)
    score = knn.score(X_test, y_test)  # evaluate on the test set
    if score > best_score:
        best_score = score
        best_Kvalue = i
# refit with the best K so the later cells use the best model
knn = KNeighborsClassifier(n_neighbors=best_Kvalue).fit(X_train, y_train)
print("Best KNN Value: {}".format(best_Kvalue))
print("Test Accuracy: {}%".format(round(best_score*100, 2)))
```
#### Classification report of KNN Classifier
```
y_pred_knn = knn.predict(X_test)
print("KNN Classifier report: \n\n", classification_report(y_test, y_pred_knn))
```
### Confusion Matrix for KNN Classifier
```
cm = confusion_matrix(y_test, y_pred_knn)
x_axis_labels = ["Edible", "Poisonous"]
y_axis_labels = ["Edible", "Poisonous"]
f, ax = plt.subplots(figsize =(7,7))
sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Purples", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("PREDICTED LABEL")
plt.ylabel("TRUE LABEL")
plt.title('Confusion Matrix for KNN Classifier')
#plt.savefig("knncm.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
## 4. SVM Classification
```
from sklearn.svm import SVC
svm = SVC(random_state=42, gamma="auto")
svm.fit(X_train, y_train)
print("Test Accuracy: {}%".format(round(svm.score(X_test, y_test)*100, 2)))
```
#### Classification report of SVM Classifier
```
y_pred_svm = svm.predict(X_test)
print("SVM Classifier report: \n\n", classification_report(y_test, y_pred_svm))
```
### Confusion Matrix for SVM Classifier
```
cm = confusion_matrix(y_test, y_pred_svm)
x_axis_labels = ["Edible", "Poisonous"]
y_axis_labels = ["Edible", "Poisonous"]
f, ax = plt.subplots(figsize =(7,7))
sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Purples", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("PREDICTED LABEL")
plt.ylabel("TRUE LABEL")
plt.title('Confusion Matrix for SVM Classifier')
#plt.savefig("svmcm.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
## 5. Naive Bayes Classification
```
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train, y_train)
print("Test Accuracy: {}%".format(round(nb.score(X_test, y_test)*100, 2)))
```
#### Classification report of Naive Bayes Classifier
```
y_pred_nb = nb.predict(X_test)
print("Naive Bayes Classifier report: \n\n", classification_report(y_test, y_pred_nb))
```
### Confusion Matrix for Naive Bayes Classifier
```
cm = confusion_matrix(y_test, y_pred_nb)
x_axis_labels = ["Edible", "Poisonous"]
y_axis_labels = ["Edible", "Poisonous"]
f, ax = plt.subplots(figsize =(7,7))
sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Purples", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("PREDICTED LABEL")
plt.ylabel("TRUE LABEL")
plt.title('Confusion Matrix for Naive Bayes Classifier')
#plt.savefig("nbcm.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
## 6. Random Forest Classification
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Test Accuracy: {}%".format(round(rf.score(X_test, y_test)*100, 2)))
```
#### Classification report of Random Forest Classifier
```
y_pred_rf = rf.predict(X_test)
print("Random Forest Classifier report: \n\n", classification_report(y_test, y_pred_rf))
```
### Confusion Matrix for Random Forest Classifier
```
cm = confusion_matrix(y_test, y_pred_rf)
x_axis_labels = ["Edible", "Poisonous"]
y_axis_labels = ["Edible", "Poisonous"]
f, ax = plt.subplots(figsize =(7,7))
sns.heatmap(cm, annot = True, linewidths=0.2, linecolor="black", fmt = ".0f", ax=ax, cmap="Purples", xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("PREDICTED LABEL")
plt.ylabel("TRUE LABEL")
plt.title('Confusion Matrix for Random Forest Classifier');
#plt.savefig("rfcm.png", format='png', dpi=900, bbox_inches='tight')
plt.show()
```
# Predictions
### Predicting some of the X_test results and matching them with the true values (y_test) using the Decision Tree Classifier.
```
preds = dt.predict(X_test)
print(preds[:36])
print(y_test[:36].values)
# 0 - Edible
# 1 - Poisonous
```
### As we can see, the predicted and true values match 100%.
# Conclusion
#### From the confusion matrix, we saw that our train and test data is balanced.
#### Most of the classification methods hit 100% accuracy with this dataset.
**Chapter 1 – The Machine Learning landscape**
_This is the code used to generate some of the figures in chapter 1._
# Setup
First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "fundamentals"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
```
# Code example 1-1
This function just merges the OECD's life satisfaction data and the IMF's GDP per capita data. It's a bit too long and boring and it's not specific to Machine Learning, which is why I left it out of the book.
```
def prepare_country_stats(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
```
The code in the book expects the data files to be located in the current directory. I just tweaked it here to fetch the files in datasets/lifesat.
```
import os
datapath = os.path.join("datasets", "lifesat", "")
# Code example
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
```
# Note: you can ignore the rest of this notebook, it just generates many of the figures in chapter 1.
One of the keys to a successful machine learning project is training on well-engineered features:
1. Feature selection: select the most useful of the existing features for training.
2. Feature extraction: combine existing features to produce a more useful one (for example, using a dimensionality reduction algorithm, as seen earlier).
3. Creating new features by collecting new data.
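The first two steps above can be sketched with scikit-learn on synthetic data (a minimal illustration, not part of the original notebook; `SelectKBest` and `PCA` stand in for generic selection and extraction methods):

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=42)

# Feature selection: keep the k existing features most correlated with y
selector = SelectKBest(f_classif, k=4)
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)   # (200, 4)

# Feature extraction: combine existing features into fewer new ones
pca = PCA(n_components=3)
X_extracted = pca.fit_transform(X)
print(X_extracted.shape)  # (200, 3)
```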
# Load and prepare Life satisfaction data
If you want, you can get fresh data from the OECD's website.
Download the CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
and save it to `datasets/lifesat/`.
```
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.head(2)
oecd_bli["Life satisfaction"].head()
```
# Load and prepare GDP per capita data
Just like above, you can update the GDP per capita data if you want. Just download data from http://goo.gl/j1MSKe (=> imf.org) and save it to `datasets/lifesat/`.
```
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
gdp_per_capita.head(2)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
full_country_stats
full_country_stats[["GDP per capita", 'Life satisfaction']].loc["United States"]
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
sample_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
missing_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[remove_indices]
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
position_text = {
"Hungary": (5000, 1),
"Korea": (18000, 1.7),
"France": (29000, 2.4),
"Australia": (40000, 3.0),
"United States": (52000, 3.8),
}
for country, pos_text in position_text.items():
pos_data_x, pos_data_y = sample_data.loc[country]
country = "U.S." if country == "United States" else country
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "ro")
save_fig('money_happy_scatterplot')
plt.show()
sample_data.to_csv(os.path.join("datasets", "lifesat", "lifesat.csv"))
sample_data.loc[list(position_text.keys())]
import numpy as np
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, 2*X/100000, "r")
plt.text(40000, 2.7, r"$\theta_0 = 0$", fontsize=14, color="r")
plt.text(40000, 1.8, r"$\theta_1 = 2 \times 10^{-5}$", fontsize=14, color="r")
plt.plot(X, 8 - 5*X/100000, "g")
plt.text(5000, 9.1, r"$\theta_0 = 8$", fontsize=14, color="g")
plt.text(5000, 8.2, r"$\theta_1 = -5 \times 10^{-5}$", fontsize=14, color="g")
plt.plot(X, 4 + 5*X/100000, "b")
plt.text(5000, 3.5, r"$\theta_0 = 4$", fontsize=14, color="b")
plt.text(5000, 2.6, r"$\theta_1 = 5 \times 10^{-5}$", fontsize=14, color="b")
save_fig('tweaking_model_params_plot')
plt.show()
from sklearn import linear_model
lin1 = linear_model.LinearRegression()
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
lin1.fit(Xsample, ysample)
t0, t1 = lin1.intercept_[0], lin1.coef_[0][0]
t0, t1
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.text(5000, 3.1, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
save_fig('best_fit_model_plot')
plt.show()
cyprus_gdp_per_capita = gdp_per_capita.loc["Cyprus"]["GDP per capita"]
print(cyprus_gdp_per_capita)
cyprus_predicted_life_satisfaction = lin1.predict([[cyprus_gdp_per_capita]])[0][0]
cyprus_predicted_life_satisfaction
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3), s=1)
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.axis([0, 60000, 0, 10])
plt.text(5000, 7.5, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 6.6, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
plt.plot([cyprus_gdp_per_capita, cyprus_gdp_per_capita], [0, cyprus_predicted_life_satisfaction], "r--")
plt.text(25000, 5.0, r"Prediction = 5.96", fontsize=14, color="b")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
save_fig('cyprus_prediction_plot')
plt.show()
sample_data[7:10]
(5.1+5.7+6.5)/3
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
# Code example
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
oecd_bli, gdp_per_capita = backup
missing_data
position_text2 = {
"Brazil": (1000, 9.0),
"Mexico": (11000, 9.0),
"Chile": (25000, 9.0),
"Czech Republic": (35000, 9.0),
"Norway": (60000, 3),
"Switzerland": (72000, 3.0),
"Luxembourg": (90000, 3.0),
}
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
for country, pos_text in position_text2.items():
pos_data_x, pos_data_y = missing_data.loc[country]
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "rs")
X=np.linspace(0, 110000, 1000)
plt.plot(X, t0 + t1*X, "b:")
lin_reg_full = linear_model.LinearRegression()
Xfull = np.c_[full_country_stats["GDP per capita"]]
yfull = np.c_[full_country_stats["Life satisfaction"]]
lin_reg_full.fit(Xfull, yfull)
t0full, t1full = lin_reg_full.intercept_[0], lin_reg_full.coef_[0][0]
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "k")
save_fig('representative_training_data_scatterplot')
plt.show()
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
from sklearn import preprocessing
from sklearn import pipeline
poly = preprocessing.PolynomialFeatures(degree=60, include_bias=False)
scaler = preprocessing.StandardScaler()
lin_reg2 = linear_model.LinearRegression()
pipeline_reg = pipeline.Pipeline([('poly', poly), ('scal', scaler), ('lin', lin_reg2)])
pipeline_reg.fit(Xfull, yfull)
curve = pipeline_reg.predict(X[:, np.newaxis])
plt.plot(X, curve)
save_fig('overfitting_model_plot')
plt.show()
full_country_stats.loc[[c for c in full_country_stats.index if "W" in c.upper()]]["Life satisfaction"]
gdp_per_capita.loc[[c for c in gdp_per_capita.index if "W" in c.upper()]].head()
plt.figure(figsize=(8,3))
plt.xlabel("GDP per capita")
plt.ylabel('Life satisfaction')
plt.plot(list(sample_data["GDP per capita"]), list(sample_data["Life satisfaction"]), "bo")
plt.plot(list(missing_data["GDP per capita"]), list(missing_data["Life satisfaction"]), "rs")
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "r--", label="Linear model on all data")
plt.plot(X, t0 + t1*X, "b:", label="Linear model on partial data")
ridge = linear_model.Ridge(alpha=10**9.5)
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
ridge.fit(Xsample, ysample)
t0ridge, t1ridge = ridge.intercept_[0], ridge.coef_[0][0]
plt.plot(X, t0ridge + t1ridge * X, "b", label="Regularized linear model on partial data")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
save_fig('ridge_model_plot')
plt.show()
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
return sample_data
# Replace this linear model:
import sklearn.linear_model
model = sklearn.linear_model.LinearRegression()
# with this k-neighbors regression model:
import sklearn.neighbors
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = np.array([[22587.0]]) # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.76666667]]
```
```
from __future__ import print_function, division
%matplotlib inline
%config InlineBackend.print_figure_kwargs = {'dpi' : 150}
import ipyparallel as ipp
client = ipp.Client()
client[:].use_dill()
lbview = client.load_balanced_view()
import numpy as np
with client[:].sync_imports():
import scipy
import scipy.stats
client[:].push(dict(np=np))
import qinfer as qi
from talk_figures import SampleTimeHeuristic, UnknownT2Model, COSYModel
from functools import partial
import matplotlib.pyplot as plt
plt.style.use('ggplot-rq')
plt.rcParams['figure.facecolor'] = 'black'
plt.rcParams['text.color'] = 'white'
plt.rcParams['grid.color'] = 'black'
plt.rcParams['axes.labelcolor'] = 'white'
plt.rcParams['axes.edgecolor'] = 'black'
plt.rcParams['xtick.color'] = 'white'
plt.rcParams['ytick.color'] = 'white'
plt.rcParams['axes.facecolor'] = '#444444'
nyquist_heuristic = partial(SampleTimeHeuristic, t_func=lambda k: 2750 * k / 2000)
exp_sparse_heuristic = partial(SampleTimeHeuristic, t_func=lambda k: 1.0049 ** k)
plt.rcParams['axes.prop_cycle'] = plt.cycler('color', [
'#D55E00',
'#56B4E9'
])
fig, subplots = plt.subplots(ncols=2, figsize=(8, 4))
for heuristic, label in (
(nyquist_heuristic, r'Uniform'),
(exp_sparse_heuristic, r'Exp.')
):
perf = qi.perf_test_multiple(400,
UnknownT2Model(), 10000,
qi.ProductDistribution(
qi.UniformDistribution([0, 1]),
qi.NormalDistribution(0.001, 0.00025 ** 2)
),
2000, heuristic,
progressbar=qi.IPythonProgressBar,
apply=lbview.apply,
extra_updater_args={
'resampler': qi.LiuWestResampler(a=0.9)
}
)
risk_by_param = ((perf['est'] - perf['true']) ** 2).mean(axis=0).T
for subplot, risk, param_name in zip(subplots, risk_by_param, map('${}$'.format, UnknownT2Model().modelparam_names)):
subplot.semilogy(risk, label=label)
subplot.set_title(param_name, loc='left')
subplot.set_xlabel('Bits of Data')
subplots[0].set_ylabel('Mean Squared Error')
plt.legend(ncol=3, bbox_to_anchor=(1, 1.105), columnspacing=1.0)
plt.savefig('figures/unknown-t2.png', dpi=250, facecolor='k', frameon=False)
```
### QBS Data ###
```
x = [100, 200, 300, 400, 500]
y = [0.0506, 0.0160, 0.0092, 0.0037, 0.0026]
fig = plt.figure(figsize=(4, 4))
plt.semilogy(x, y, 'o', markersize=10);
plt.xlim((50, 600))
plt.xlabel('Number of experiments/scan')
plt.ylabel(r'Median of error $|\vec{x}-\vec{x}_{0}|_2$')
fig.tight_layout()
plt.savefig('./figures/qbs-error-per-scan.png', dpi=500, facecolor='k', frameon=False)
```
### Impoverishment ###
```
particles_good = np.random.randn(1200, 2)
particles_bad = np.random.uniform(-4, 4, (400, 2))
wts_bad = np.product(scipy.stats.norm.pdf(particles_bad), axis=1)
wts_bad /= wts_bad.sum()
try: style_cycle = plt.rcParams['axes.prop_cycle']()
except:
from cycler import cycler
style_cycle = iter(cycler('color', plt.rcParams['axes.color_cycle']))
plt.figure(figsize=(8, 4))
ax = plt.subplot(1, 2, 1)
plt.scatter(particles_bad[:, 0], particles_bad[:, 1], s=1200 * wts_bad, **next(style_cycle))
plt.legend(['400 Particles'], bbox_to_anchor=(1, 1.125), scatterpoints=1)
plt.gca().set_aspect('equal')
plt.subplot(1, 2, 2, sharex=ax, sharey=ax)
plt.scatter(particles_good[:, 0], particles_good[:, 1], s=1200 / len(particles_good), **next(style_cycle))
plt.legend(['1200 Particles'], bbox_to_anchor=(1, 1.125), scatterpoints=1, markerscale=4)
plt.gca().set_aspect('equal')
plt.savefig('figures/impovrishment.png', format='png', dpi=300, frameon=False, facecolor="black")
```
### Example: Rabi/Ramsey ###
```
w = 70.3
w_max = 100.0
ts = np.pi * (1 + np.arange(100)) / (2 * w_max)
ideal_signal = np.sin(w * ts / 2) ** 2
n_shots = 100
counts = np.random.binomial(n=n_shots, p=ideal_signal)
plt.plot(ts, ideal_signal, label='Signal', lw=1)
plt.plot(ts, counts / n_shots, '.', label='Data', markersize=8)
plt.xlabel(u'Time (µs)')
plt.ylabel(r'Population')
plt.ylim(-0.01, 1.01)
plt.legend(ncol=2, bbox_to_anchor=(1, 1.15), numpoints=3)
plt.savefig('figures/rabi-example-signal.png', format='png', dpi=300, frameon=False, facecolor="black")
ideal_spectrum = np.abs(np.fft.fftshift(np.fft.fft(ideal_signal - ideal_signal.mean())))**2
spectrum = np.abs(np.fft.fftshift(np.fft.fft((counts - counts.mean()) / n_shots)))**2
ft_freq = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n=len(counts), d=ts[1] - ts[0]))
plt.plot(ft_freq, ideal_spectrum, lw=1, label='Signal')
plt.plot(ft_freq, spectrum, '.', label='Data', markersize=8)
ylim = plt.ylim()
# plt.vlines(w, *ylim)
plt.xlim(xmin=0, xmax=100)
# plt.ylim(*ylim)
plt.legend(ncol=2, bbox_to_anchor=(1, 1.15), numpoints=3)
plt.xlabel('$\omega$ (MHz)')
plt.savefig('figures/rabi-example-spectrum.png', format='png', dpi=300, frameon=False, facecolor="black")
```
## COSY Model ##
```
def cosy_heuristic(updater, n_meas=10, base=65 / 64):
N = len(updater.data_record)
t_ratio = np.random.random()
expparams = np.zeros((1,), dtype=updater.model.expparams_dtype)
expparams['t'] = np.array([t_ratio, 1 - t_ratio]) * base ** N
expparams['n_meas'] = n_meas
return expparams
cosy_model = qi.BinomialModel(COSYModel())
cosy_prior = qi.UniformDistribution([[0, 1]] * 3)
cosy_performance = qi.perf_test_multiple(100,
cosy_model, 3000, cosy_prior,
300, partial(partial, cosy_heuristic, base=1.02),
apply=lbview.apply,
progressbar=qi.IPythonProgressBar
)
loss_J = (cosy_performance['est'][:, :, 2] - cosy_performance['true'][:, :, 2]) ** 2
plt.semilogy(np.mean(cosy_performance['loss'], axis=0), label=r'$\vec{x}$')
plt.semilogy(np.mean(loss_J, axis=0), label=r'$J\,$ Only')
plt.xlabel('# of Experiments (10 shots/ea)')
plt.ylabel('Mean Quadratic Loss')
plt.legend(ncol=2, bbox_to_anchor=(1, 1.15))
plt.savefig('figures/cosy-loss.png', format='png', dpi=300, frameon=False, facecolor="black")
```
### Rejection Sampling ###
```
xs = np.random.random((300,))
us = np.random.random((300,))
likes = np.sin(xs * 3 * np.pi / 2) ** 2
sorts = np.argsort(xs)
accepts = us < likes
posts = likes[sorts] / np.trapz(likes[sorts], x=xs[sorts])
fig, (ax_top, ax_bottom) = plt.subplots(nrows=2, ncols=1, sharex=True)
plt.subplots_adjust(hspace=0.4)
ax_top.plot(xs[sorts], likes[sorts], 'w', label=r'$\Pr(d | x)$')
ax_top.scatter(xs[accepts], us[accepts], c='#D55E00', label='Accepted')
ax_top.scatter(xs[~accepts], us[~accepts], c='#56B4E9', label='Rejected')
# plt.xlim(0, 1)
ax_top.set_ylim(0, 1)
# plt.xlabel('$x$')
ax_top.set_ylabel(r'$\Pr(\mathrm{accept} | \vec{x})$')
ax_top.legend(ncol=3, bbox_to_anchor=(1, 1.3), scatteryoffsets=[0.5])
ax_bottom.plot(xs[sorts], posts, 'w', label='Exact')
ax_bottom.hist(xs[accepts], bins=15, normed=True, label='Approx');
ax_bottom.set_xlabel('$x$')
ax_bottom.set_ylabel(r'$\Pr(x | \mathrm{accept})$')
ax_bottom.set_xlim(0, 1)
ax_bottom.legend(ncol=2, bbox_to_anchor=(1, 1.3), scatteryoffsets=[0.5])
plt.savefig('figures/rejs-example.png', format='png', dpi=300, frameon=False, facecolor="black")
```
## Random Walk ##
```
prior = qi.UniformDistribution([0, 1])
true_params = np.array([[0.5]])
model = qi.RandomWalkModel(qi.BinomialModel(qi.SimplePrecessionModel()), qi.NormalDistribution(0, 0.01**2))
updater = qi.SMCUpdater(model, 2000, prior)
expparams = np.array([(np.pi / 2, 40)], dtype=model.expparams_dtype)
data_record = []
trajectory = []
estimates = []
for idx in range(1000):
datum = model.simulate_experiment(true_params, expparams)
true_params = np.clip(model.update_timestep(true_params, expparams)[:, :, 0], 0, 1)
updater.update(datum, expparams)
data_record.append(datum)
trajectory.append(true_params[0, 0])
estimates.append(updater.est_mean()[0])
ts = 40 * np.pi / 2 * np.arange(len(data_record)) / 1e3
plt.plot(ts, trajectory, label='True')
plt.plot(ts, estimates, label='Estimated')
plt.xlabel(u'$t$ (µs)')
plt.ylabel(r'$\omega$ (GHz)')
plt.legend(ncol=2, bbox_to_anchor=(1, 1.125))
plt.title(r"$\cos^2(\omega t / 2)$", loc='left')
plt.savefig('figures/rabi-random-walk.png', format='png', dpi=300, frameon=False, facecolor="black")
```
## RB
```
n_exp = 200
seq_lengths = 1 + np.arange(n_exp)
n_shots = 200
counts = np.random.binomial(n_shots, 0.5 * 0.995 ** seq_lengths + 0.5)
model = qi.BinomialModel(
    qi.RandomizedBenchmarkingModel()
)
prior = qi.PostselectedDistribution(
    qi.ProductDistribution(
        qi.UniformDistribution([0.8, 1]),
        qi.MultivariateNormalDistribution(
            np.array([0.498, 0.499]),
            np.diag([0.004, 0.002]) ** 2
        )
    ), model)
updater = qi.SMCUpdater(model, 8000, prior)
for idx_exp in range(n_exp):
    expparams = np.array(
        [(seq_lengths[idx_exp], n_shots)],
        dtype=model.expparams_dtype)
    updater.update(counts[idx_exp], expparams)
fig, (left, right) = plt.subplots(ncols=2, nrows=1, figsize=(12, 4))
plt.sca(right)
updater.plot_posterior_marginal(idx_param=0)
plt.vlines(0.995, *(plt.ylim() + ('w',)))
plt.legend(['Posterior', 'True'], ncol=2, bbox_to_anchor=(1, 1.15))
plt.ylabel(r'$\Pr(p | \mathrm{data})$')
right.yaxis.set_ticklabels([])
plt.sca(left)
plt.plot(seq_lengths, counts, '.', label='Data')
p, A, B = updater.est_mean()
plt.plot(seq_lengths, n_shots * (A * p ** seq_lengths + B), label='Est.')
plt.legend(ncol=2, numpoints=3, markerscale=1.4, bbox_to_anchor=(1, 1.15))
plt.xlabel('Seq. Length')
plt.ylabel('Counts')
plt.savefig('figures/rb-custom-updater.png', format='png', dpi=300, frameon=False, facecolor="black")
```
## Diffusive Coin Example ##
```
class DiffusiveCoinModel(qi.Model):
@property
def n_modelparams(self): return 2
@property
def modelparam_names(self): return ['p', r'\sigma']
@property
def expparams_dtype(self): return [('t', float)]
@property
def is_n_outcomes_constant(self): return True
def are_models_valid(self, modelparams):
return np.all([
modelparams[:, 0] >= 0,
modelparams[:, 0] <= 1,
modelparams[:, 1] >= 0
], axis=0)
def n_outcomes(self, expparams): return 2
def likelihood(self, outcomes, modelparams, expparams):
super(DiffusiveCoinModel, self).likelihood(outcomes, modelparams, expparams)
return qi.Model.pr0_to_likelihood_array(outcomes, 1 - modelparams[:, 0, None])
def update_timestep(self, modelparams, expparams):
p, sigma = modelparams.T
t = expparams['t']
p_new = np.clip((np.random.randn(*p.shape) * sigma)[:, None] * np.sqrt(t) + p[:, None], 0, 1)
mps_new = np.empty((p.shape[0], 2, t.shape[0]))
mps_new[:, 0, :] = p_new
mps_new[:, 1, :] = sigma[:, None]
return mps_new
prior = qi.UniformDistribution([[0, 1], [0, 1]])
model = DiffusiveCoinModel()
true = prior.sample(1)
t = 0.03
expparams = np.array([(t,)], dtype=model.expparams_dtype)
updater = qi.smc.SMCUpdater(model, 2000, prior)
true_hist = []
est_hist = []
for idx_exp in range(1200):
true = (np.array([[np.cos(idx_exp * np.pi / 40), 0]]) + 1) / 4 + (np.array([[np.cos(idx_exp * np.pi / 197), 0]]) + 1) / 4
outcome = model.simulate_experiment(true, expparams)
updater.update(outcome, expparams)
est_hist.append(updater.est_mean()[0])
true_hist.append(true[:, 0])
plt.figure(figsize=(7, 2.5))
plt.plot(range(1200), true_hist, label='True')
plt.plot(range(1200), est_hist, label='Estimated')
plt.xlabel(r'# of Observations', size=18)
plt.ylabel(r'$p$', size=18)
plt.yticks([0, 0.5, 1], size=12)
plt.xticks(size=12)
plt.legend(ncol=2, bbox_to_anchor=(1, 1.25))
plt.title("Biased Coin", loc='left')
plt.tight_layout()
plt.savefig('figures/diffusive-coin.png', format='png', dpi=300, frameon=False, facecolor="black")
```
### LF RejF ###
```
def lfrf_update(sim, datum, mean, var, m=500, batch_size=100):
xs = []
while len(xs) < m:
xs_batch = np.sqrt(var) * np.random.randn(batch_size) + mean
xs_batch = xs_batch[xs_batch >= 0]
sim_data = sim(xs_batch)
xs += xs_batch[sim_data == datum].tolist()
return np.mean(xs[:m]), np.var(xs[:m])
def cos_sim(omega, t):
return np.random.random() < np.cos(omega * t / 2) ** 2
def lfrf_errors(n_trials=500, n_exp=100, m=500):
err_hist = []
for idx_trial in range(n_trials):
mean = 0.5
var = 1 / 12
true = np.random.random()
mean_hist = []
var_hist = []
for idx_exp in range(n_exp):
t = 1 / np.sqrt(var)
w_ = mean
datum = cos_sim(true, t)
mean, var = lfrf_update(partial(cos_sim, t=t), datum, mean, var, m=m)
mean_hist.append(mean)
var_hist.append(var)
err_hist.append((np.array(mean_hist) - true) ** 2)
return np.array(err_hist)
err_hists = {
m: lfrf_errors(400, 100, m)
for m in (50, 400)
}
plt.figure(figsize=(7, 2.5))
plt.semilogy(np.median(err_hists[50], axis=0), label='$m = 50$')
plt.semilogy(np.median(err_hists[400], axis=0), label='$m = 400$')
plt.xlabel('Bits of Data')
plt.ylabel('Median Squared Error')
plt.legend(ncol=2, bbox_to_anchor=(1, 1.25))
plt.title(r'Rabi/Phase Est', loc='left')
plt.tight_layout()
plt.savefig('figures/lfrf.png', format='png', dpi=300, frameon=False, facecolor="black")
```
# Academic Integrity Statement
As a matter of Departmental policy, **we are required to give you a 0** unless you **type your name** after the following statement:
> *I certify on my honor that I have neither given nor received any help, or used any non-permitted resources, while completing this evaluation.*
\[TYPE YOUR NAME HERE\]
### Partial Credit
Let us give you partial credit! If you're stuck on a problem and just can't get your code to run:
First, **breathe**. Then, do any or all of the following:
1. Write down everything relevant that you know about the problem, as comments where your code would go.
2. If you have non-functioning code that demonstrates some correct ideas, indicate that and keep it (commented out).
3. Write down pseudocode (written instructions) outlining your solution approach.
In brief, even if you can't quite get your code to work, you can still **show us what you know.**
## Problem 1 (50 points)
This problem has three parts which can be completed independently.
## Part A (20 points)
The code below is intended to model a PIC student who attends OH and uses what they learn to do better on future homework assignments. There are several functions intended to model this student's actions. The docstring correctly states the intended functionality of the class, but the code itself does not necessarily implement the docstring correctly.
```
import random
class PICStudent:
"""
A class representing a PIC student.
Includes the following instance variables:
- name (string), the name of the student.
- understanding (int or float), the student's understanding of Python.
Includes the following methods:
- add_name(), sets a user-specified value of name for the student.
No return value.
- say_hi(), prints a message containing the student's name.
- go_to_OH(), increases the student's understanding by one unit.
No return value.
- do_HW(), returns a score (int or float) out of 100 based on the student's
understanding of Python.
"""
pass
def add_name(PCS, name):
if type(PCS) != PICStudent:
raise TypeError("This function is designed to work with objects of class PICStudent")
PCS.name = name
def say_hi(PCS, name):
print("Hello! My name is " + str(self.name))
def go_to_OH(PCS):
if type(PCS) != PICStudent:
raise TypeError("This function is designed to work with objects of class PICStudent")
PCS.understanding += 1
def do_HW(PCS):
if type(PCS) != PICStudent:
raise TypeError("This function is designed to work with objects of class PICStudent")
score = max(75+25*random.random(), 25*PCS.understanding)
return score
```
First, **critique** this solution. For full credit, state **four (4) distinct issues** in this code. Your issues should include one of each of the following types:
- One way in which the code **does not match the docstring.**
- One way in which an **unexpected exception** could be raised.
- One way in which the code might give an **illogical or absurd output** without raising an exception.
- One additional issue. This could be a second instance of one of the above categories, or an issue of a completely different type.
There may be some overlap between these types of issues. For example, an illogical or absurd output could also be in contradiction of the docstring. In such a case, you can choose which category in which to count the issue, but must still describe a total of four distinct issues.
Feel free to add code cells as needed to demonstrate your critiques.
---
*There are a number of good critiques. If you gave significantly different answers than the ones shown here, but clearly justified your response and otherwise met the problem requirements, don't worry.*
1. **Failure to match docstring.** On initialization, class contains neither instance variables nor methods.
2. **Unexpected exception.**
- Calling `say_hi()` prior to `add_name()` will raise an `AttributeError` because `name` is not initialized.
- Calling `go_to_OH()` will always raise an `AttributeError` because `understanding` is not initialized.
3. **Illogical or absurd output.** Can achieve HW scores > 100. Can also be counted as a failure to match the docstring.
4. **Additional issue.** Many possibilities. In addition to the above:
- Highly repetitive type-checking, which can be eliminated in the object-oriented approach.
- No default provided for the `name`.
- Many more...
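For instance, the `AttributeError` from issue 2 can be demonstrated directly with a minimal reproduction of the buggy code (only the relevant pieces are repeated here):

```
import random

# Abridged copy of the buggy Part A code: the class body is empty,
# so no instance ever gets an `understanding` attribute.
class PICStudent:
    pass

def go_to_OH(PCS):
    if type(PCS) != PICStudent:
        raise TypeError("This function is designed to work with objects of class PICStudent")
    PCS.understanding += 1   # AttributeError: `understanding` was never initialized

s = PICStudent()
try:
    go_to_OH(s)
except AttributeError as e:
    print("Demonstrated issue 2:", e)
```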
---
Second, **improve the code**. Write a modified version that (a) fully matches the supplied docstring and (b) fixes the issues that you indicated above. **It is not necessary to add new docstrings,** even if the old ones are no longer appropriate due to your changes. It is not necessary to demonstrate your code, although doing so may help us give you partial credit.
```
# your improvement
# other logical approaches are possible, especially when handling
# the relationship between self.understanding and self.do_HW().
import random
class PICStudent:
def __init__(self, name):
self.name = name
self.understanding = 0
    def say_hi(self):
        print("Hello! My name is " + str(self.name))
    def go_to_OH(self):
        self.understanding += 1
    def do_HW(self):
        score = max(75 + 25*random.random(), min(25*self.understanding, 100))
        return score
```
## Part B (20 points)
Write a class that matches the following docstring. You may find it helpful to consult the [lecture notes](https://nbviewer.jupyter.org/github/PhilChodrow/PIC16A/blob/master/content/object_oriented_programming/inheritance_I.ipynb) in which we first defined `ArithmeticDict()`.
Then, demonstrate each of the examples given in the docstring.
```
class SubtractionDict(dict):
"""
a SubtractionDict includes all properties and methods of the dict class.
Additionally, implements dictionary subtraction via the - binary operator.
If SD1 and SD2 are both SubtractionDicts whose values are numbers (ints or floats),
and if all values in SD1 are larger than their corresponding values in SD2,
then SD1 - SD2 is a new SubtractionDict whose keys are the keys of SD1 and whose values
are the difference in values between SD1 and SD2. Keys present in SD1 but not in SD2 are
handled as though they are present in SD2 with value 0.
A ValueError is raised when:
1. SD2 contains keys not contained in SD1.
2. The result of subtraction would result in negative values.
If subtraction would result in a value of exactly zero, the key is instead
removed from the result.
Examples:
# making strawberry-onion pie
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 1, "strawberries (lbs)" : 1})
SD1 - SD2 # == SubtractionDict({"onions" : 2, "strawberries (lbs)" : 1})
# raises error
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 4, "strawberries (lbs)" : 1})
SD1 - SD2 # error
# raises error
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 1, "snozzberries (lbs)" : 1})
SD1 - SD2 # error
# key removed
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 1, "strawberries (lbs)" : 2})
SD1 - SD2 # == SubtractionDict({"onions" : 2})
"""
def __sub__(self, other):
# check whether there are any keys in the second argument
# not present in the first
for key in other.keys():
if key not in self:
                raise ValueError("Second argument contains keys not present in first argument.")
# construct the dictionary of differences
# can also be done with a for-loop, with no penalty
# provided that the code is otherwise economical
difference = {key : self[key] - other.get(key, 0) for key in self.keys()}
# check to see if we got any negative values
# make a note if so
# it was not required to indicate which value
# was negative, but this is a friendly thing to do
if min(difference.values()) < 0:
raise ValueError("Subtraction resulted in negative quantity!")
# remove the zeros from the result
# can also construct with for-loop
no_zeroes = {key: val for key, val in difference.items() if val > 0}
# return the result
return no_zeroes
# example 1
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 1, "strawberries (lbs)" : 1})
SD1 - SD2 # == SubtractionDict({"onions" : 2, "strawberries (lbs)" : 1})
# example 2
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 4, "strawberries (lbs)" : 1})
SD1 - SD2
# example 3
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 1, "snozzberries (lbs)" : 1})
SD1 - SD2
# example 4
SD1 = SubtractionDict({"onions" : 3, "strawberries (lbs)" : 2})
SD2 = SubtractionDict({"onions" : 1, "strawberries (lbs)" : 2})
SD1 - SD2 # == SubtractionDict({"onions" : 2})
```
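As a quick sanity check (not part of the required solution), returning a `SubtractionDict` rather than a plain `dict` from `__sub__` is what makes chained expressions like `SD1 - SD2 - SD3` possible. A condensed sketch of just the operator, illustrating the point:

```python
class SubtractionDict(dict):
    # condensed version of the class above, kept only to show
    # why __sub__ should return a SubtractionDict
    def __sub__(self, other):
        for key in other:
            if key not in self:
                raise ValueError("Second argument contains keys not present in first argument.")
        difference = {key: self[key] - other.get(key, 0) for key in self}
        if difference and min(difference.values()) < 0:
            raise ValueError("Subtraction resulted in negative quantity!")
        # wrapping in SubtractionDict keeps the - operator available on the result
        return SubtractionDict({k: v for k, v in difference.items() if v > 0})

pantry = SubtractionDict({"onions": 5, "strawberries (lbs)": 4})
pie = SubtractionDict({"onions": 1, "strawberries (lbs)": 1})
# chained subtraction: works only because each result is itself a SubtractionDict
leftovers = pantry - pie - pie
print(leftovers)  # → {'onions': 3, 'strawberries (lbs)': 2}
```

If `__sub__` returned a plain `dict` instead, the second `-` in the chain would raise a `TypeError`, since `dict` does not define subtraction.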
### Part C (10 points)
It is not required to write any functioning code for this problem.
**(I).** Document the `lookup()` function supplied below. You should include:
- A few well-placed comments.
- A docstring that makes clear what assumptions are made about the arguments and what the user can expect from the output.
**(II).** Then, state what the `lookup()` function would do if `d` is a `SubtractionDict` from Part B. Would it be possible to access information from `d` in this case? If so, explain. If not, describe the smallest and most robust change that you could make so that the user could use `lookup()` to access information from `d` when `d` is a `SubtractionDict`. For full credit, your modified code should not explicitly mention `SubtractionDict`s.
```python
def lookup(d, x, default = None):
    """
    NOTE: you were not required to notice this for full credit,
    but this function basically replicates the `get()` method
    of dictionaries.

    Access a value from a dictionary corresponding to a supplied key,
    with a default value returned in case that key is not found.

    d: a dictionary
    x: the desired key, whose value we attempt to retrieve from d
    default: the default value returned in case x is not in d.keys()

    return: d[x] if x in d.keys(), otherwise default.
    """
    # check whether d is a dictionary
    # raise an informative TypeError if not
    if type(d) != dict:
        raise TypeError("First argument must be a dict.")
    # attempt to return the value matched to x in
    # dictionary d
    try:
        return d[x]
    # if x not found, return the default
    except KeyError:
        return default
```
---
The cleanest way to make this function work with the `SubtractionDict` is to replace
```python
if type(d) != dict:
    raise TypeError("First argument must be a dict.")
```
with
```python
if not isinstance(d, dict):
    raise TypeError("First argument must be an instance of class dict.")
```
This causes `lookup()` to work with *any* class that inherits from `dict`, including the `SubtractionDict` and the `ArithmeticDict` from the lecture notes.
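To see the difference concretely, here is a small sketch. A hypothetical toy subclass `PantryDict` stands in for `SubtractionDict`, so the code does not need to restate Part B:

```python
def lookup(d, x, default=None):
    # the modified check: isinstance accepts dict and any subclass of dict
    if not isinstance(d, dict):
        raise TypeError("First argument must be an instance of class dict.")
    try:
        return d[x]
    except KeyError:
        return default

class PantryDict(dict):
    # toy stand-in for a dict subclass such as SubtractionDict
    pass

pd = PantryDict({"onions": 3})
print(lookup(pd, "onions"))           # → 3
print(lookup(pd, "snozzberries", 0))  # → 0
```

With the original `type(d) != dict` check, both calls above would raise a `TypeError`, because `type(pd)` is `PantryDict`, not `dict`. A non-dict first argument (e.g. a list) still raises `TypeError` under the `isinstance` version, so the informative error is preserved.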
---