## Frauchiger-Renner thought experiment in the collapse theories
### Installation instructions
It is recommended that you clone the qthought repository to your local machine and then run the install command
in the qthought folder.
If you did not `pip install qthought`, you can use the following quick fix by uncommenting the lines below and adapting the path to your local copy:
```
#import sys
#import os
# to run the example, set the following path to the folder path of qthought on your machine
#sys.path.append(os.path.abspath('/Users/nuri/qthought/qthought'))
```
### Defining the protocol
The code below implements the Frauchiger-Renner paradox with a collapse-theory prescription of measurement: agents treat each measurement as a collapse, so the setup can be represented as a branching tree of outcomes. In this case, the paradox arising in the original paper does not take place. Before reading on, it is recommended to take a look at the PDF description file of the Frauchiger-Renner example.
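As a toy illustration (not part of the qthought API), the collapse picture can be modelled as a tree in which each measurement splits every history into one branch per outcome, weighted by the Born probabilities:

```python
# Toy model (illustration only, not qthought code): a branching tree of
# measurement outcomes under a collapse prescription of measurement.

def branch(histories, outcomes):
    """Split every history into one branch per (outcome, probability) pair."""
    return [(h + [o], p * q) for (h, p) in histories for (o, q) in outcomes]

# Start with a single empty history of weight 1.
histories = [([], 1.0)]
# Alice measures R: outcome 0 with probability 1/3, outcome 1 with 2/3.
histories = branch(histories, [('a=0', 1/3), ('a=1', 2/3)])
# A second binary measurement splits each branch again.
histories = branch(histories, [('b=0', 0.5), ('b=1', 0.5)])

for h, p in histories:
    print(h, round(p, 3))
# The branch weights always sum to 1.
print(sum(p for _, p in histories))
```

Each leaf of the tree is one definite sequence of outcomes; reasoning "inside a branch" is what the collapse prescription formalizes below.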
First, we import the ProjectQ operations needed for the protocol: the required single-qubit gates and the control. We also import the *ProtocolStep* class to define the steps of the protocol; *QuantumSystem* to handle quantum systems of different dimensionality; the *InferenceTable* class from the agents module and all functions from the *collapse_theory* module; and *consistency* to chain agents' statements. Additionally, we import the *InitR* function, which initializes a qubit in the state $\frac{1}{\sqrt{3}} |0> + \sqrt{\frac{2}{3}} |1>$.
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from projectq.ops import H, X, Measure
from projectq.meta import Control
from qthought.protocol import ProtocolStep
from qthought.quantumsystem import QuantumSystem
from qthought.agents import InferenceTable
from qthought.interpretations.collapse_theory import *
from qthought.FrauchigerRennerExample.FR_protocol import InitR
from qthought.logicalReasoning.consistency import consistency
```
The first action of the protocol (at time $t=1$) is the initialization of the qubit $R$ in Alice's lab in the state $\frac{1}{\sqrt{3}} |0> + \sqrt{\frac{2}{3}} |1>$. After defining the action, we define the protocol step by specifying: the domain of the action; a written description of the action, used for printouts during the run; the time of the step; and the action function itself.
```
# Step 1: Initialize r
# ----------------------------------------------------------
@enable_branching()
def step1_action(qsys):
    """Prepares the subsystem `r` of a `QuantumSystem` in the Frauchiger-Renner initial state."""
    InitR | qsys['r']

step1 = ProtocolStep(domain={'Qubit': ['r']},
                     descr='Initialize R',
                     time=1,
                     action=step1_action)
```
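As a quick sanity check (plain NumPy, independent of the qthought code), the target state of `InitR` is normalized, with Born probabilities 1/3 and 2/3 for the two outcomes:

```python
import numpy as np

# Target state of InitR: (1/sqrt(3))|0> + sqrt(2/3)|1>
psi = np.array([1 / np.sqrt(3), np.sqrt(2 / 3)])

print(np.vdot(psi, psi).real)   # normalization: 1.0
print(np.abs(psi) ** 2)         # outcome probabilities: [1/3, 2/3]
```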
At $t=2$, Alice measures $R$ and writes the result in her memory.
```
# Step 2: Alice observes r
# ----------------------------------------------------------
@enable_branching(collapse_system='r')
def step2_action(qsys):
    observe(qsys['Alice_memory'], qsys['r'])

step2 = ProtocolStep(domain={'AgentMemory(1)': ['Alice'],
                             'Qubit': ['r']},
                     descr='ALICE observes R',
                     time=2,
                     action=step2_action)
```
At $t=3$, Alice makes an inference based on her outcome.
```
# Step 3: Alice makes inference
# ----------------------------------------------------------
@enable_branching()
def step3_action(qsys):
    qsys['Alice'].make_inference()

step3 = ProtocolStep(domain={'Agent(1,1)': ['Alice']},
                     descr='ALICE makes an inference',
                     time=3,
                     action=step3_action)
```
At $t=4$, Alice prepares the qubit $S$ based on her outcome: in the state $|0>$ if she obtained $a=0$, and in the state $\frac{1}{\sqrt{2}} |0> + \frac{1}{\sqrt{2}} |1>$ if she obtained $a=1$.
```
# Step 4: Alice prepares S
# ----------------------------------------------------------
@enable_branching()
def step4_action(qsys):
    with Control(qsys['eng'], qsys['Alice_memory']):
        H | qsys['s']

step4 = ProtocolStep(domain={'Qubit': ['s'],
                             'AgentMemory(1)': ['Alice']},
                     descr='Apply H to S controlled on ALICE_MEMORY',
                     time=4,
                     action=step4_action)
```
At $t=5$, Bob measures $S$ and writes the result to his memory.
```
# Step 5: Bob measures S
# ----------------------------------------------------------
@enable_branching(collapse_system='s')
def step5_action(qsys):
    observe(qsys['Bob_memory'], qsys['s'])

step5 = ProtocolStep(domain={'Qubit': ['s'],
                             'AgentMemory(1)': ['Bob']},
                     descr='BOB measures S',
                     time=5,
                     action=step5_action)
```
At $t=6$, Bob makes an inference based on his outcome.
```
# Step 6: Bob makes inference
# ----------------------------------------------------------
@enable_branching()
def step6_action(qsys):
    qsys['Bob'].make_inference()

step6 = ProtocolStep(domain={'Agent(1,1)': ['Bob']},
                     descr='BOB makes an inference',
                     time=6,
                     action=step6_action)
```
At $t=7$, we need to reverse Alice's reasoning process so that Ursula can measure her lab in the $|ok>$, $|fail>$ basis.
```
# Step 7: Reverse inference making in Alice
# ----------------------------------------------------------
@enable_branching()
def step7_action(qsys):
    qsys['Alice'].make_inference(reverse=True)
    observe(qsys['Alice_memory'], qsys['r'], reverse=True)

step7 = ProtocolStep(domain={'Agent(1,1)': ['Alice']},
                     descr='Reverse Alice reasoning (Step1: in ok --> 1(R))',
                     time=7,
                     action=step7_action)
```
Ursula measures Alice's lab in the $|ok>$, $|fail>$ basis (analogous to a Bell basis). To do so, we first apply a Hadamard gate to $R$ at $t=8$, and then measure it in the computational basis at $t=9$.
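As a small check (plain NumPy; the sign convention $|fail> = (|0>+|1>)/\sqrt{2}$, $|ok> = (|0>-|1>)/\sqrt{2}$ is an assumption matching the usual Frauchiger-Renner convention), the Hadamard maps these states onto the computational basis, which is why the basis change followed by a computational-basis measurement implements the desired measurement, with "ok" corresponding to outcome 1:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Assumed convention: |fail> = (|0>+|1>)/sqrt(2), |ok> = (|0>-|1>)/sqrt(2)
fail = np.array([1, 1]) / np.sqrt(2)
ok = np.array([1, -1]) / np.sqrt(2)

print(H @ fail)  # -> |0>, i.e. [1, 0]
print(H @ ok)    # -> |1>, i.e. [0, 1]
```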
```
# Step 8: Hadamard on r
# ----------------------------------------------------------
@enable_branching()
def step8_action(qsys):
    H | qsys['r']

step8 = ProtocolStep(domain={'Qubit': ['r']},
                     descr='Perform Hadamard on R (Step2: in ok --> 1(R))',
                     time=8,
                     action=step8_action)

# Step 9: Ursula measures Alice's lab
# ----------------------------------------------------------
@enable_branching(collapse_system='r')
def step9_action(qsys):
    observe(qsys['Ursula_memory'], qsys['r'])

step9 = ProtocolStep(domain={'Qubit': ['r'],
                             'AgentMemory(1)': ['Ursula']},
                     descr="URSULA measures ALICE's lab (i.e. r)",
                     time=9,
                     action=step9_action)
```
Ursula reasons based on her outcome at $t=10$, and announces it at $t=11$.
```
# Step 10: Ursula makes an inference
# ----------------------------------------------------------
@enable_branching()
def step10_action(qsys):
    qsys['Ursula'].make_inference()

step10 = ProtocolStep(domain={'Agent(1,1)': ['Ursula']},
                      descr='URSULA makes inference',
                      time=10,
                      action=step10_action)

# Step 11: Ursula announces her prediction
# ----------------------------------------------------------
@enable_branching()
def step11_action(qsys):
    Measure | qsys['Ursula_prediction']
    print('!Measurement made on Ursula_prediction!')
    print('Ursula prediction:', readout([qsys['Ursula_prediction']]))

step11 = ProtocolStep(domain={'Agent(1,1)': ['Ursula']},
                      descr='URSULA announces her prediction',
                      time=11,
                      action=step11_action)
```
Now we repeat the same procedure for Wigner measuring Bob's lab. First, we reverse Bob's reasoning process at $t=12$.
```
# Step 12: Reverse Bob's reasoning
# ----------------------------------------------------------
@enable_branching()
def step12_action(qsys):
    qsys['Bob'].make_inference(reverse=True)
    # qsys['Bob'].observe(qsys['s'], reverse=True)
    observe(qsys['Bob_memory'], qsys['s'], reverse=True)

step12 = ProtocolStep(domain={'Agent(1,1)': ['Bob']},
                      descr='Reverse BOBs inference procedure',
                      time=12,
                      action=step12_action)
```
Wigner measures Bob's lab in the $|ok>$, $|fail>$ basis (analogous to a Bell basis). To do so, we first apply a Hadamard gate to $S$ at $t=13$, measure it in the computational basis at $t=14$, and subsequently check whether Wigner gets the outcome "ok".
```
# Step 13: Apply Hadamard on s
# ----------------------------------------------------------
@enable_branching()
def step13_action(qsys):
    H | qsys['s']

step13 = ProtocolStep(domain={'Qubit': ['s']},
                      descr='Apply Hadamard on S, i.e. transform system S+BOB: ok --> 1(s)',
                      time=13,
                      action=step13_action)

# Step 14: Check if Bob is in ok state
# ----------------------------------------------------------
def step14_action(qsys):
    Measure | qsys['s']
    print('!Measurement made on s!')
    print('s-state:', readout([qsys['s']]))

step14 = ProtocolStep(domain={'Agent(1,1)': ['Bob']},
                      descr='Check if Bob+s is in ok state (corresponding to s: 1)',
                      time=14,
                      action=step14_action)
```
### Building up inference tables
Now we construct the inference tables according to which the inference qubits of the different agents are initialized. First, we consider Alice's inference table: she has to reason about Wigner's outcome, and for that we need to include the steps of what happens in Bob's lab ($t=5,6$) and Wigner's actions ($t=12,13$).
```
p_TA_steps = [step1, step2, step4, step5, step6,
step12, step13]
p_TA = sum(p_TA_steps)
p_TA
```
Alice makes a forward inference about a measurement outcome later in the experiment -- and none of her conclusions are deterministic!
```
TA = forward_inference(p_TA,
subsys_x='Alice_memory', t_x=2,
subsys_y='s', t_y=13,
silent=False)
TA
```
Now Bob reasons about Alice, making a backward inference about a measurement outcome earlier in the experiment.
```
p_TB_steps = [step1, step2, step4, step5]
p_TB = sum(p_TB_steps)
p_TB
TB = backward_inference(p_TB,
subsys_x='Alice_memory', t_x=2,
subsys_y='Bob_memory', t_y=5,
silent=False)
TB
```
Ursula reasons about Bob, using backward inference as well.
```
p_TU_steps = [step1, step2, step3, step4, step5,
              step6, step7, step8, step9]
p_TU = sum(p_TU_steps)
p_TU
TU = backward_inference(p_TU,
subsys_x='Bob_memory', t_x=5,
subsys_y='Ursula_memory', t_y=9,
silent=False)
TU
```
### Combining the inference tables with consistency
Now the consistency rules come into play. They tell us how to combine the obtained inference tables -- in this case we don't have any special restrictions, as we use classical modal logic, where we are always free to conclude $A \Rightarrow C$ from knowing $A \Rightarrow B$ and $B \Rightarrow C$, regardless of which agent produced the statement.
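The transitive chaining this rule licenses can be illustrated with a toy implication table (illustration only; the actual `consistency` function operates on qthought inference tables):

```python
# Toy illustration of chaining A => B and B => C into A => C.
# Each table maps an observed outcome to a deterministic conclusion,
# with None meaning "no prediction can be made".

def chain(t_ab, t_bc):
    """Compose two implication tables: from A-outcomes to C-conclusions."""
    return {a: (t_bc.get(b) if b is not None else None)
            for a, b in t_ab.items()}

t_ab = {'a=1': 'b=1', 'a=0': None}   # e.g. Alice: if a=1 then b=1
t_bc = {'b=1': 'w=fail'}             # e.g. Bob:   if b=1 then w=fail

print(chain(t_ab, t_bc))  # {'a=1': 'w=fail', 'a=0': None}
```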
```
TA_final = TA
TB_final = consistency(TB, TA)
TU_final = consistency(TU, TB_final)
print(TA_final)
print(TB_final)
print(TU_final)
```
### Running the full protocol
Now we are ready to run the full protocol and see whether the "winning condition" (obtaining the inconsistency) is satisfied. In this case, no inferences can be made with probability 1, so the inconsistency ("winning condition") is never satisfied.
```
steps = [step1, step2, step3, step4, step5,
step6, step7, step8, step9, step10,
step12, step13]
p = sum(steps)
p
print('-'*70)
print('Requiring quantum system:')
qsys = QuantumSystem(p.get_requirements())
no_prediction_state = 1
qsys.print_wavefunction()
print('-'*70)
print('Initialize inference system')
qsys['Alice'].set_inference_table(TA_final, no_prediction_state)
qsys['Bob'].set_inference_table(TB_final, no_prediction_state)
qsys['Ursula'].set_inference_table(TU_final, no_prediction_state)
qsys['Alice'].prep_inference()
qsys['Bob'].prep_inference()
qsys['Ursula'].prep_inference()
qsys.print_wavefunction()
qtree = QuantumTree(qsys)
print('-'*70)
print('Run protocol:')
p.run_manual(qtree, silent=False)
print('-'*70)
print('Perform final measurements.')
states = to_flat_unique(get_possible_outcomes(qtree, 'all'))
possible_final_states = [np.binary_repr(a, len(qtree[0])) for a in states]
print('Possible outcome states:')
for state in possible_final_states:
    state = state[::-1]  # transform state to internal representation
    print('--------------------------')
    ok_bar = bool(int(state[qtree.get_position(0, 'Ursula_memory')[0]]))  # True iff Ursula_memory == 1
    ok = bool(int(state[qtree.get_position(0, 's')[0]]))                  # True iff s == 1
    Upred = state[qtree.get_position(0, 'Ursula_prediction')[0]]          # Ursula prediction state: 1 - cannot say, 0 - fail
    if ok_bar and ok:
        print('XXXXXXXXXXX WINNING XXXXXXXXXXXXX')
    print('U. predicts fail:'.ljust(10), bool(1 - int(Upred)))
    print('ok_bar'.ljust(10), ok_bar)
    print('ok'.ljust(10), ok)
    if ok_bar and ok:
        print('Winning state:', state[::-1])
        print('XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
```
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
ipl = pd.read_csv('matches.csv')
ipl.head()
ipl.shape
ipl.describe()
# Getting the frequency of most man of the match awards
ipl['player_of_match'].value_counts()
# Getting the frequency of top 10 most man of the match awards
ipl['player_of_match'].value_counts()[0:10]
# Getting the frequency of top 5 most man of the match awards
ipl['player_of_match'].value_counts()[0:5]
# Getting the names of top 5 most man of the match awards
list(ipl['player_of_match'].value_counts()[0:5].keys())
plt.figure(figsize=(8, 5))
plt.bar(list(ipl['player_of_match'].value_counts()[0:5].keys()), list(ipl['player_of_match'].value_counts()[0:5]), color = 'orange')
plt.title("Players with most man of the match awards")
plt.xlabel("Players")
plt.ylabel("Number of awards")
plt.show()
# Getting the frequency of the results column
ipl['result'].value_counts()
# Finding number of toss wins w.r.t each team
ipl['toss_winner'].value_counts()
# Extracting the records where a team, batting first has won the match
batting_first = ipl[ipl['win_by_runs'] != 0]
batting_first
# Making a histogram
plt.figure(figsize = (5, 5))
plt.hist(batting_first['win_by_runs'], color = 'purple')
plt.title("Distribution of Runs")
plt.xlabel("Runs")
plt.show()
# Finding out the number of wins w.r.t each team after batting first
batting_first['winner'].value_counts()
# Making a bar plot of top 3 teams with most wins after batting first
plt.figure(figsize= (6, 6))
plt.bar(list(batting_first['winner'].value_counts()[0:3].keys()), list(batting_first['winner'].value_counts()[0:3]), color = ['blue', 'yellow', 'orange'])
plt.title("Teams with most wins, batting first")
plt.xlabel("Teams")
plt.ylabel("Number of wins")
plt.show()
# Making a Pie Chart of win distribution, batting first
plt.figure(figsize = (17, 17))
plt.pie(list(batting_first['winner'].value_counts()), labels = list(batting_first['winner'].value_counts().keys()), autopct = '%0.1f%%')
plt.show()
# Extracting those records where a team has won after batting second
batting_second = ipl[ipl['win_by_wickets'] != 0]
# Looking at the first five instances
batting_second.head()
# Making a histogram for frequency of wins w.r.t number of wickets
plt.figure(figsize = (7, 7))
plt.hist(batting_second['win_by_wickets'], color = 'orange', bins = 30)
plt.title("Distribution of Runs")
plt.xlabel("Number of wickets")
plt.show()
# Finding out the frequency of number of wins w.r.t each time after batting second
batting_second['winner'].value_counts()
# Making a bar plot of top 3 teams with most wins after batting second
plt.figure(figsize= (8, 8))
plt.bar(list(batting_second['winner'].value_counts()[0:3].keys()), list(batting_second['winner'].value_counts()[0:3]), color = ['purple', 'blue', 'red'])
plt.title("Teams with most wins, batting second")
plt.xlabel("Teams")
plt.ylabel("Number of wins")
plt.show()
# Making a Pie Chart of win distribution, batting second
plt.figure(figsize = (17, 17))
plt.pie(list(batting_second['winner'].value_counts()), labels = list(batting_second['winner'].value_counts().keys()), autopct = '%0.1f%%')
plt.show()
# Looking at the number of matches played in each city
ipl['city'].value_counts()
# Find out how many times a team has won the match after winning the toss
import numpy as np
np.sum(ipl['toss_winner'] == ipl['winner']) / ipl.shape[0] * 100 # Percentage of win
# Let's import another dataset
deliveries = pd.read_csv('deliveries.csv')
deliveries.head()
deliveries.shape
# get number of matches
deliveries['match_id'].unique()
match_1 = deliveries[deliveries['match_id'] == 1]
match_1
match_1.shape
srh = match_1[match_1['inning'] == 1]
srh
srh['batsman_runs'].value_counts()
srh['dismissal_kind'].value_counts()
rcb = match_1[match_1['inning'] == 2]
rcb['batsman_runs'].value_counts()
rcb['dismissal_kind'].value_counts()
```
# Keras Tutorial : Facial Expression Recognition Challenge
### Using FER2013 faces dataset
By Yash
# About this notebook
This notebook consists of a detailed tutorial in keras for a kaggle problem called [Facial Expression Recognition](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge). A dataset of images of people's faces is given. As the name suggests, we have to classify the facial expression into 7 categories : Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral.
In this section, let's have an overall look at the rest of the sections in this notebook.
1. This tutorial starts with a brief introduction to convolutional neural networks and their application to image datasets.
2. The following section covers the required environment setup, such as installing libraries and importing the datasets.
3. Once we have the playground ready, it's time to play ;). Before we delve into building the AI model, understanding the data is very important. Unlike conventional programming, building a model in this field depends on the structure and distribution of the data. So, in this section we do some exploratory data analysis and get a feel for the structure of the given data.
4. Most of the time, given datasets have a few flaws. In this case, we will see that the number of examples per class varies widely. A solution for this is proposed in the data pre-processing section.
5. Once we have the processed data, it's time to build the architecture. This section first covers the observations I personally drew by experimenting with various hyper-parameters such as filter size, number of filters, number of layers, learning rate, etc. Then the model is trained and a brief assessment is given over a set of epochs.
6. Once the model is trained, we test it on the test dataset and perform a final assessment.
7. The appendix consists of future work, references and other miscellaneous things.
# 1. Introduction to convolutional neural networks and their application to image datasets
In the 1960s, attempts were made to mimic the brain in the hope of achieving artificial intelligence. That is how the neural network architecture was proposed, inspired by the biological structure of the brain.
When early neural networks were applied to visual tasks, all pixels were taken individually as inputs and processed through the various layers of the network. This is called a fully connected network. It in fact worked well in a few cases, such as the MNIST dataset (which consists of images of handwritten digits).
<p align="center">
<img src="https://github.com/suraj2596/EIP/blob/master/FINAL/1.gif?raw=true"/>
</p>
But when more complex features have to be extracted, like curves, shapes and patterns, fully connected networks tend to fail. This is because they do not consider the spatial correlations among pixels. Various other methods were introduced to overcome this, but convolutional neural networks were the most widely adopted and are still in use.
Convolutional neural networks (CNNs), as the name suggests, use the methodology of convolution*. In convolution, we take two inputs or entities that have spatial extent (e.g., an array or a matrix, not a single variable). When the first entity is slid over the second entity and the sum of element-wise products is taken, each step produces one value. This is the result of the convolution. The same methodology extends to two dimensions, and this is what CNNs use.
<p align="center">
<img src="https://github.com/suraj2596/EIP/blob/master/FINAL/2.gif?raw=true"/>
</p>
The second entity in a CNN is called a filter or kernel, and the result of the convolution is called a feature map. When the input to a particular layer is convolved with one kernel or filter, one feature map is produced, so the number of feature maps equals the number of filters.
When this structure is cascaded, it is called a deep convolutional neural network.
*Strictly speaking, this operation should be called correlation (cross-correlation), not convolution: when two entities are convolved, the second entity is supposed to be mirrored before the calculation.
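The sliding-window operation described above can be sketched in a few lines of NumPy (a naive 'valid', stride-1 implementation for clarity; deep-learning frameworks use heavily optimized kernels):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D cross-correlation (what deep-learning libraries call 'convolution')."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1  # 'valid' output size, stride 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Sum of element-wise products over the current window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0   # a simple averaging filter
print(conv2d(image, kernel))     # a 2x2 feature map
```

One kernel slid over the input produces one feature map; a convolutional layer with N kernels produces N such maps.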
# 2. Environment Setup
## Setting up Google Colab
**NOTE : This tutorial was made to be used on Google Colab. If you are implementing it on a local system, you can skip this section.**
```
#Linking drive to colab to store datasets
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
# Generate auth tokens for Colab
from google.colab import auth
auth.authenticate_user()
# Generate creds for the Drive FUSE library. Though the link asks you to verify twice, you don't have to!
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
# Create a directory
!mkdir -p drive
#mount Google Drive using the created directory.
!google-drive-ocamlfuse drive
!ls drive
```
## Required environment setup and downloading datasets
**NOTE : Irrespective of local or Colab instances, this section must be executed**
```
#This step is for the model to auto save the weights to your drive.
#So for the next time, you can resume where you left off.
#Let's call this the sync folder. Change the name of the folder based on your environment
path_to_save = 'drive/EIP/Facial_Expression_Recognition/'
#Echo the contents of the sync folder
print ('Files in the sync folder:')
!ls $path_to_save
#Assuming the compressed dataset is in the sync folder...
#Colab users, once downloaded, upload it to your sync folder and then execute this.
#It is a 92MB file.
!tar -xvf $path_to_save/fer2013.tar.gz
# Run this only when starting a new session. Syncing files in drive to pwd on colab instance.
!cp -a $path_to_save/ .
!ls
```
## Import required libraries
```
#Installing keras
!pip install -q keras
!pip install -q pathlib
#Framework related libraries
import keras
from keras.models import Model, Sequential
from keras.layers import Dense, Dropout, Flatten, Input, Activation
from keras.layers import MaxPooling2D, BatchNormalization, SeparableConv2D
from keras.optimizers import Adam
##---------------------------------------------##
#For loading models
from pathlib import Path
from keras.models import load_model
##---------------------------------------------##
##Data preprocessing
#for class balance
from sklearn.utils import class_weight
#For data augmentation
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
##---------------------------------------------##
# this part will prevent tensorflow to allocate all the avaliable GPU Memory
# backend
import tensorflow as tf
#tf.python.control_flow_ops = tf
from keras import backend as k
##---------------------------------------------##
# Don't pre-allocate memory; allocate as-needed
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
##---------------------------------------------##
# Create a session with the above options specified.
k.tensorflow_backend.set_session(tf.Session(config=config))
##---------------------------------------------##
#Data handling libraries
import numpy as np
import pandas as spd
##---------------------------------------------##
#Image handling libraries
import matplotlib
import matplotlib.pyplot as plt
##---------------------------------------------##
# This is a bit of magic to make matplotlib figures appear inline in the notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 8) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
%load_ext autoreload
%autoreload 2
```
# 3. Exploratory Data Analysis
## Import and sample data
```
#Given data set is in csv format after unzipping.
file = 'fer2013/fer2013.csv'
df = spd.read_csv(file)
#Assign separate data frames to training, cross-validation and test data
#Training set
tr_set = df.loc[df['Usage'] == 'Training']
tr = tr_set.loc[:,'pixels']
tr_labels = tr_set.loc[:,'emotion']
#Cross-validation set
cv_set = df.loc[df['Usage'] == 'PublicTest']
cv = cv_set.loc[:,'pixels']
cv_labels = cv_set.loc[:,'emotion']
#Test set
te_set = df.loc[df['Usage'] == 'PrivateTest']
te = te_set.loc[:,'pixels']
te_labels = te_set.loc[:,'emotion']
```
Let's randomly sample 5 images of each emotion from the training dataset
```
tr_set_emotions_label = ['Angry','Disgust','Fear','Happy','Sad','Surprise','Neutral']
#size of each class will be stored in this array
t=np.zeros(np.size(tr_set_emotions_label))
#No. of images to sample from each class
sample_size=5
#Display those images
for k, e in enumerate(tr_set_emotions_label):
    this_emotion_set = df.loc[(df['Usage'] == 'Training') & (df['emotion'] == k)]
    t[k] = this_emotion_set.size
    x = this_emotion_set.sample(n=sample_size).loc[:, 'pixels']
    for i, j in enumerate(x):
        plt.subplot(t.size, sample_size, (sample_size * k) + i + 1)
        y = j.split(' ')
        x = np.array(y).reshape(48, 48)
        plt.axis('off')
        plt.grid('off')
        plt.imshow(x.astype('uint8'))
        plt.title(e)
```
## Inferences
1. The data provided is flattened/unrolled. To display them, we have to roll or reshape the vectors.
2. This dataset consists of faces with various expressions across all ages, races and sexes. That means the data is well generalized.
3. Faces in the images seemed to be centered. So, face detection and extraction can be skipped.
4. Images are in grey scale. That means fewer parameters.
5. Given the classes 'Surprise' and 'Happy', there might be a high chance of misclassification. This is due to their high spatial correlation.
Let's explore the data distribution among the various classes
```
#Plotting no. of training examples vs their class
plt.bar(tr_set_emotions_label, t)
plt.ylabel('No. of training examples')
plt.xlabel('Emotion')
plt.title('Data Distribution among various classes')
plt.show()
```
* It can be observed that the class 'Disgust' has far fewer training examples than the others.
* There is a high chance of the model predicting 'Happy' if the training data isn't balanced.
* This problem can be solved in various ways:
  1. **Assigning weights** to each class using Keras' `class_weight`. This is the preferred option here.
  2. **Data augmentation** - since the number of training examples varies widely across classes, augmenting only a few classes to balance the data might not be a good idea. If we did that, there would be rich variation in the 'Happy' class and little variation in the 'Disgust' class, and training on such data would be redundant. Applying augmentation to all classes, however, is a good idea.
  3. **Removing data** from the other classes. Losing data is usually discouraged.
  4. **Using GANs**: new training data can be generated from each class's distribution. This is an option too.
**NOTE : The labels on the x-axis are not in the order of the labels 0-6.**
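As an illustration of option 1 (a sketch with made-up label counts, using the `sklearn.utils.class_weight` helper imported earlier; the notebook itself later computes weights with a simpler `np.max(t)//t` rule):

```python
import numpy as np
from sklearn.utils import class_weight

# Illustrative label vector with a heavy imbalance (class 1 plays the
# role of the rare 'Disgust' class; the counts are made up).
labels = np.array([0] * 7000 + [1] * 500 + [3] * 9000)

weights = class_weight.compute_class_weight(class_weight='balanced',
                                            classes=np.unique(labels),
                                            y=labels)
print(dict(zip(np.unique(labels), weights)))
# Rare classes get proportionally larger weights in the loss:
# 'balanced' uses n_samples / (n_classes * count_per_class).
```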
# 4. Data Pre-processing
## Rolling the data
The given data is in the form of a CSV file. Here, we convert it into arrays of images and sort them into train, validation and test data.
```
#train data
tr_img = []
tr_l = np.zeros((np.size(tr), np.size(t)))
for p, i in enumerate(tr):
    y = i.split(' ')
    y = np.array(y).reshape(48, 48, 1)
    tr_img.append(y)
    tr_l[p, tr_labels[p]] = 1
tr_img = np.array(tr_img)
print(tr_img.shape)
#cv data
cv_img = []
cv_l = np.zeros((np.size(cv), np.size(t)))
for p, i in enumerate(cv):
    y = i.split(' ')
    y = np.array(y).reshape(48, 48, 1)
    cv_img.append(y)
    cv_l[p, cv_labels[np.size(tr) + p]] = 1
cv_img = np.array(cv_img)
print(cv_img.shape)
#test data
te_img = []
te_l = np.zeros((np.size(te), np.size(t)))
for p, i in enumerate(te):
    y = i.split(' ')
    y = np.array(y).reshape(48, 48, 1)
    te_img.append(y)
    te_l[p, te_labels[np.size(tr) + np.size(cv) + p]] = 1
te_img = np.array(te_img)
print(te_img.shape)
```
## Assigning weights to classes
```
#Finding the weightage to be given to each class
class_weights_array = np.max(t)//t
class_weight = dict(zip(np.arange(np.size(t)), class_weights_array))
print(class_weight)
```
## Callbacks
```
# checkpoint
from keras.callbacks import ModelCheckpoint, EarlyStopping
filepath= "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=True, mode='max')
earlyStopping = EarlyStopping(monitor='val_acc', min_delta=0.005, patience=8, verbose=0, mode='max')
callbacks_list = [checkpoint,earlyStopping]
```
## Data Augmentation
```
print('Using real-time data augmentation.')
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range=15,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True)
datagen.fit(tr_img)
data_aug_x = datagen.flow(tr_img, tr_l, batch_size=64)
```
# 5. Architecture
## Model
A few modifications are made to the network, apart from the one specified in [zlpure's](https://raw.githubusercontent.com/zlpure/Facial-Expression-Recognition/) repo, so that it uses fewer parameters without compromising performance. The changes are:
* **Convolutions** :
  * The latest trend is NOT to use fully connected layers but to use convolution layers as much as possible. This way, fewer parameters are used and computation is faster. I have eliminated the two **FC1024** fully connected layers at the bottom.
  * Separable convolutions are used here, e.g., instead of 5 x 5, a pair of 5 x 1 and 1 x 5 kernels is used. This way, the number of parameters drastically decreases.
* **Filters** : The number of filters greatly affects the number of parameters. There must be enough filters; too many or too few hurts accuracy. After some trial and error, an optimal combination was found.
* **Dropout** : As the model is designed to perform with few parameters, it's better to use a small dropout. Here, a dropout of 0.1 is used as opposed to 0.45 in the repo.
* **Optimizer** : The Adam optimizer is chosen for this problem. Initially, the default learning rate was used. After the 100th epoch, a cyclic learning-rate schedule was implemented.
* **Mini-batch size** : A multiple of 32 is preferred, because GPUs allocate in blocks of 32. Based on the batch size, a block size near the next multiple of 32 is allocated, so it's better to use a batch size in multiples of 32 to use the compute power effectively. A batch size of 64 is used in this tutorial.
<p align="center">
<img src="https://github.com/suraj2596/EIP/blob/master/FINAL/architecture.png?raw=true" width="350px" height="600px"/>
</p>
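The parameter savings claimed for the 5 x 1 / 1 x 5 factorization can be checked with quick arithmetic (ignoring biases; this sketches the spatial factorization described above, which is distinct from the depthwise-separable scheme that Keras' `SeparableConv2D` implements):

```python
def params_full(k, c_in, c_out):
    """Parameter count of a plain k x k convolution (no biases)."""
    return k * k * c_in * c_out

def params_factorized(k, c_in, c_out):
    """A (k x 1) convolution followed by a (1 x k) convolution (no biases)."""
    return k * c_in * c_out + k * c_out * c_out

# Example with the notebook's first block: 48 filters, kernel size 5
print(params_full(5, 48, 48))        # 57600
print(params_factorized(5, 48, 48))  # 23040
```

For equal channel counts the factorized pair costs roughly 2k/k^2 of the full kernel, here a 2.5x reduction.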
**OBSERVATIONS DRAWN FROM HYPERPARAMETER TUNING (MADE OVER 2000 EPOCHS)**
* Using kernel size 3 turned out to be unproductive: the accuracy of the model did not increase at all from the beginning. This was observed for 20 epochs and then terminated.
* A kernel size of 5 is used instead. This meant a few layers at the end had to be removed, which decreased the parameter count from 1M to 53k. It turns out 1M parameters was too many when only one dense/FC layer is used at the end.
* There was a quick rise in accuracy in the early epochs, but it saturated after 50%. Though continuing the epochs thereafter shows an increase in accuracy, the rise is very slow. So, I have used cyclic learning rates.
* Training the initial epochs on smaller images and continuing the training on the actual images didn't have any effect on the smoothness of training. It's strange.
```
model = Sequential()
f =48
model.add(SeparableConv2D(f,(5,1),activation='relu', input_shape=(48,48,1)))
model.add(SeparableConv2D(f,(1,5),activation='relu'))
model.add(BatchNormalization())
model.add(SeparableConv2D(f,(5,1),activation='relu'))
model.add(SeparableConv2D(f,(1,5),activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
##
f=96
model.add(SeparableConv2D(f,(5,1),activation='relu'))
model.add(SeparableConv2D(f,(1,5),activation='relu'))
model.add(BatchNormalization())
model.add(SeparableConv2D(f,(5,1),activation='relu'))
model.add(SeparableConv2D(f,(1,5),activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
#model.add(Dropout(0.1))
##
f=198
model.add(SeparableConv2D(f,(5,1),activation='relu'))
model.add(SeparableConv2D(f,(1,5),activation='relu'))
model.add(BatchNormalization())
model.add(SeparableConv2D(f,(5,1),activation='relu'))
model.add(SeparableConv2D(f,(1,5),activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
#model.add(Dropout(0.1))
model.add(Flatten())
model.add(Dense(7, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## Epochs 1-175
While running a model, how do you assess its performance rather than waiting for the last epoch? Is there a way to see how the model is performing as it is being trained? The answer is yes! This section covers a few basic techniques for assessing a model's performance during training.
The performance of a model should be at the same level on new data as on training data; then we say the model has generalized well over the dataset. If it didn't generalize well, various problems follow. In one line: *monitoring the change of training accuracy and validation accuracy tells a lot about the model's training behaviour*.
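As an illustration, that monitoring idea can be written as a simple check over the recorded accuracy curves (the helper and its thresholds here are hypothetical, not part of the original notebook):

```python
def diagnose(train_acc, val_acc, gap_threshold=0.10, low_acc=0.5):
    """Rough training diagnosis from the latest epoch of each accuracy curve."""
    gap = train_acc[-1] - val_acc[-1]
    if gap > gap_threshold:
        return "overfitting"    # training accuracy pulls ahead of validation
    if train_acc[-1] < low_acc and val_acc[-1] < low_acc:
        return "underfitting"   # both curves stay low
    return "ok"

print(diagnose([0.55, 0.70, 0.85], [0.54, 0.58, 0.60]))  # prints: overfitting
```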
The following points need to be monitored while running a model.
---
**> The difference between the training accuracy and validation accuracy starts increasing : *Overfitting***
* **Definition** : Overfitting means the model is memorizing the training data to produce high training accuracy (low training error). When this happens, the model will perform poorly on data it hasn't seen before.
* **Reasons for a model to overfit, and their fixes** :
  * **No. of parameters** : If the number of parameters is more than necessary, the model becomes very flexible and capable of memorizing the data. It's good practice to start with around 1 million parameters and run a few epochs; based on the performance, increase or decrease the count.
  * **Regularization** : If you realize the model is overfitting after many epochs, you can add or increase dropout rather than changing the number of parameters. Dropout regularizes the model by pushing it towards a simpler fit.
  * **Using data augmentation** : When data is augmented, i.e. new data is created by minute perturbations, the model becomes more robust to changes and generalizes well. This is usually done when the available data is scarce or has high spatial variation.
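The notebook itself relies on Keras's ImageDataGenerator for this (the `data_aug_x` generator used during training); purely to illustrate the idea of minute perturbations, here is a dependency-free sketch that flips and shifts a 2D grayscale image:

```python
import random

def augment(image, max_shift=2):
    """Return one perturbed copy of a 2D image (list of pixel rows):
    a random horizontal flip plus a small horizontal shift, zero-padded."""
    flipped = random.random() < 0.5
    rows = [row[::-1] for row in image] if flipped else [row[:] for row in image]
    shift = random.randint(-max_shift, max_shift)
    if shift > 0:    # shift right, pad the left edge with zeros
        rows = [[0] * shift + row[:-shift] for row in rows]
    elif shift < 0:  # shift left, pad the right edge with zeros
        rows = [row[-shift:] + [0] * (-shift) for row in rows]
    return rows

img = [[1, 2, 3], [4, 5, 6]]
print(augment(img))  # same shape, flipped and/or shifted
```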
**> Both training and validation accuracies are low : *Underfitting***
* **Definition** : Underfitting means the model is unable to learn to fit the data. As a result, it'll perform poorly on any new data too.
* **Reasons for a model to underfit, and their fixes** : It is just the opposite of overfitting; refer to the section above and invert the fixes.
<br /><br />
<p align="center">
<img src="https://github.com/suraj2596/EIP/blob/master/FINAL/3.png?raw=true" width="600px" ><br />
<em>An intuition for overfitting vs. underfitting. Note that a neural network deals with much higher-dimensional data than the 2D data shown in the above figures</em><br /><br />
</p>
---
**> If training or validation accuracy changes very slowly : Low learning rate**
* **Reasons and fixes**
  * When the learning rate is low, the accuracies change very slowly over the epochs. There is nothing wrong with this, except that the model takes a long time to converge.
  * Usually, the default learning rates of the optimizers in Keras work fine in most cases. If the learning rate has to be changed, specify it before compiling the model.
**> If training or validation accuracy fluctuates a lot : High learning rate**
* **Reasons and fixes**
  * When the learning rate is high, the accuracies start fluctuating at some point: near convergence the optimizer keeps taking steps that are too big and overshoots the minimum.
  * This usually happens after the model has run many epochs, so the learning rate is reduced as training proceeds. There are many schemes for adapting the learning rate during training; in our case, we use the Adam optimizer with an explicit schedule on top.
<p align="center">
<img src="https://github.com/suraj2596/EIP/blob/master/FINAL/4..gif?raw=true" width="400px" >
<img src="https://github.com/suraj2596/EIP/blob/master/FINAL/5..gif?raw=true" width="400px" ><br />
<em>Observe the red dots in the figures above. In the left figure, it takes many iterations or epochs to converge to the minimum; in the right figure, observe the fluctuation</em><br /><br />
</p>
If you want to know more about optimizers, [here](http://ruder.io/optimizing-gradient-descent/) is a good read.
### Epoch 1-10
**ASSESSMENT : Training and validation accuracies are close. Nothing to worry about**
```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epoch = 0
model.fit_generator(data_aug_x,
epochs=10,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epoch 11-30
**ASSESSMENT : Only the validation accuracy fluctuates near the end, while the training accuracy stays smooth. This is due to the small size of the validation set and the active learning of the model**
```
model.load_weights('weights-improvement-10-0.39.hdf5')
epoch=10
model.fit_generator(data_aug_x,
epochs=30,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epoch 31-50
**ASSESSMENT : Nothing to worry about here either**
```
model.load_weights('weights-improvement-29-0.51.hdf5')
epoch = 30
model.fit_generator(data_aug_x,
epochs=50,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epoch 51-70
**ASSESSMENT : Very slight overfitting starts to appear here. Let's wait a few more epochs before addressing it. Rule of thumb: act on overfitting only if the gap increases gradually (not due to fluctuations) and exceeds roughly 10%. This isn't a standard value, just a measure gained from experience.**
```
model.load_weights('weights-improvement-48-0.54.hdf5')
epoch = 50
model.fit_generator(data_aug_x,
epochs=70,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epochs 71-80
**ASSESSMENT : A good amount of learning has happened here. We are still fine; nothing to worry about**
```
model.load_weights('weights-improvement-63-0.55.hdf5')
epoch = 70
model.fit_generator(data_aug_x,
epochs=80,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epoch 81-100
**ASSESSMENT : The learning seems to be capped. Let's reduce the learning rate for the next set of epochs.**
```
model.load_weights('weights-improvement-80-0.56.hdf5')
epoch = 80
model.fit_generator(data_aug_x,
epochs=100,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epochs 100-120
**ASSESSMENT : Changing learning rate proved to be effective.**
* The EarlyStopping callback is used from this point on. It monitors a specified metric, in this case "val_acc", and stops the training process if there isn't any considerable change in it.
* This is specifically useful when we reduce the learning rate, since a lower rate slows down the learning; stopping early is better if there isn't much learning, and we can then resume with a different learning rate.
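Keras's EarlyStopping callback performs this bookkeeping epoch by epoch; the core logic can be sketched stand-alone (a hypothetical helper, with `patience` and `min_delta` named after the Keras parameters):

```python
def should_stop(val_accs, patience=5, min_delta=0.002):
    """Stop if the last `patience` epochs brought no improvement of at
    least `min_delta` over the best value seen before them."""
    if len(val_accs) <= patience:
        return False
    best_before = max(val_accs[:-patience])
    best_recent = max(val_accs[-patience:])
    return best_recent < best_before + min_delta

print(should_stop([0.50, 0.55, 0.58, 0.581, 0.580, 0.579, 0.581, 0.580, 0.579]))  # True
print(should_stop([0.50, 0.60, 0.70, 0.80, 0.90, 0.95]))                          # False
```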
```
adam = Adam(lr=0.0005)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
model.load_weights('weights-improvement-97-0.58.hdf5')
epoch = 100
model.fit_generator(data_aug_x,
epochs=120,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
From epoch 108 we can see that there is not much change in validation accuracy while training accuracy keeps increasing; the growing gap between the two indicates overfitting. To address this, cyclic learning rates can be used. Let's try a learning rate of 0.003, i.e. 3 times Adam's default starting rate. This way, if the model got stuck in a local minimum, it can escape; then, using smaller learning rates, we can proceed towards the global minimum.
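A cyclic schedule in the spirit of Smith's cyclical learning rates (see References) is a triangular wave between the two rates; this hypothetical function could be passed to a Keras LearningRateScheduler callback:

```python
def cyclic_lr(epoch, base_lr=0.001, max_lr=0.003, step_size=8):
    """Triangular schedule: the rate climbs from base_lr to max_lr over
    step_size epochs, falls back over the next step_size, and repeats."""
    cycle_pos = epoch % (2 * step_size)
    frac = cycle_pos / step_size
    if frac > 1:
        frac = 2 - frac            # descending half of the cycle
    return base_lr + (max_lr - base_lr) * frac

print(cyclic_lr(0))   # base rate at the start of a cycle
print(cyclic_lr(8))   # peak rate at mid-cycle
```

For example, `callbacks=[LearningRateScheduler(cyclic_lr)]` in `fit_generator`, assuming the callback is imported from `keras.callbacks`.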
### Epochs 118-125
**ASSESSMENT : It is natural to observe a large change when the learning rate is increased suddenly. In other words, this is a desirable and favourable change.**
```
adam = Adam(lr=0.003)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
model.load_weights('weights-improvement-109-0.59.hdf5')
epoch = 117
model.fit_generator(data_aug_x,
epochs=125,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epochs 125-150
**ASSESSMENT : The same fluctuations are observed again; the cyclic learning rate didn't have a positive effect. I begin to suspect the model's accuracy saturates because of the low number of parameters.**
```
adam = Adam(lr=0.001)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
model.load_weights('weights-improvement-109-0.59.hdf5')
epoch = 140
model.fit_generator(data_aug_x,
epochs=175,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epochs 155-166
**ASSESSMENT : Almost overfitting. Let's run a few more epochs to confirm it**
```
adam = Adam(lr=0.0005)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
model.load_weights('weights-improvement-109-0.59.hdf5')
epoch = 155
model.fit_generator(data_aug_x,
epochs=170,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
### Epochs 166-180
**ASSESSMENT : The model seems to have reached its capacity; going any further will overfit it. One way to avoid this would be to remove dropout entirely, but previous runs didn't show much change from that.**
```
adam = Adam(lr=0.0005)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
model.load_weights('weights-improvement-162-0.60.hdf5')
epoch = 166
model.fit_generator(data_aug_x,
epochs=180,
verbose=1,
initial_epoch=epoch,
callbacks=callbacks_list,
validation_data=(cv_img, cv_l),
class_weight = class_weight)
!cp *.hdf5 $path_to_save
```
# 6. Performance on test data
## Test accuracy
```
#Loading the weights of the model at 177th epoch
model.load_weights('drive/EIP/Facial_Expression_Recognition/weights-improvement-177-0.60.hdf5')
score = model.evaluate(te_img,te_l, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
#model.save_weights("FER_model.h5")
```
### Confusion Matrix
```
from sklearn.metrics import confusion_matrix
import matplotlib
import matplotlib.pyplot as plt
te_prob = model.predict(te_img)
te_pred = np.zeros(np.shape(te_prob))
temp = np.argmax(te_prob, axis=1)
for i, k in enumerate(temp):
    te_pred[i, k] = 1
print(temp)
print(te_pred)
cm = confusion_matrix(te_labels, temp)
fig = plt.figure(figsize=(7,7))
f = fig.add_subplot(111)
for i in range(0, 7):
    for j in range(0, 7):
        f.text(i, j, cm[j, i], va='center', ha='center')
matplotlib.rcParams.update({'font.size': 18})
ticks = np.arange(len(tr_set_emotions_label))
f.set_xticks(ticks)
f.set_yticks(ticks)
f.set_xticklabels(tr_set_emotions_label)
f.set_yticklabels(tr_set_emotions_label)
plt.tight_layout()
plt.ylabel('Annotated')
plt.xlabel('Predicted')
plt.imshow(cm.astype('uint8'),cmap=plt.cm.PuBuGn)
```
# 7. Appendix
## Logs
* 30/6 : Final revision(DONE)
* 28/6 : Writing the tutorial(DONE)
* 27/6 : Implemented cyclic learning rates with a filter size of 7. Reached 61.5% in 50 epochs. The cyclic learning rate didn't have much effect. (DONE)
* 25/6 : Tried implementing Learning rate scheduler. No change in initial epochs(DONE)
* 23/6 : Tried few more parameters. Not much change observed except in increasing parameters(DONE)
* 21/6 : used kernel_size=5 with data augmentation and class weights. Amazing improvement in performance. Tuning Hyperparameters (DONE)
* 20/6 : Found few valuable observations during the hyperparameter tuning. (DONE)
* 19/6 : Experimenting with hyperparameters. (DONE)
* 18/6 : Ditching the Winograd implementation; going with separable convolutions. (DONE)
* 16/6 : Implementing Winograd convolutions for 2D data. (ABORTED)
* 14/6 : Performing EDA and some data pre-processing. (DONE)
* 13/6 : Setting up Jupyter Notebook. (DONE)
* 12/6 : Understanding Winograd convolution. (DONE)
* 10/6 : Starting literature survey. (DONE)
## References
* [zlpure's architecture](https://github.com/zlpure/Facial-Expression-Recognition)
* [Keras documentation](https://keras.io/)
* [Cyclic Learning rates](https://arxiv.org/abs/1506.01186)
## Future Works
* Write the introduction to CNNs in much more detail.
* Add some more debugging strategies with more detailed explanations.
* Use more augmented data.
* Initially, I used 800,000 parameters with the basic architecture and observed heavy overfitting, so I restricted the model to 200,000 parameters. After trying different filter sizes, cyclic learning rates, and hyper-parameter tuning, I realised I had limited the parameters too much while focusing mainly on hyper-parameter tuning. Given more time, I would restart from this point; I'm sure the performance of the model would improve.
# Conclusion
The model has a test accuracy of 61.1%; the state of the art using a CNN is around 69.9%. Currently I stand at position 12 in the Kaggle competition. I believe there is room for improvement by using different architectures.
```
import pandas as pd
import numpy as np
#for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
malData=pd.read_csv("MalwareData.csv",sep="|",low_memory = True)
X = malData.drop(['Name','md5','legitimate'],axis=1).values
y = malData['legitimate'].values
malData.shape
malData.describe()
malData.groupby(malData['legitimate']).size() #feature selection
legit=malData[0:41323].drop(["legitimate"],axis=1)
mal = malData[41323::].drop(["legitimate"],axis=1)
print("The shape of the legit dataset is: %s samples, %s features"%(legit.shape[0],legit.shape[1]))
print("the shape of the mal dataset is: %s samples, %s features"%(mal.shape[0],mal.shape[1]))
```
## Data Cleaning
```
y=malData["legitimate"]
malData=malData.drop(['legitimate'],axis=1)
malData=malData.drop(['Name'],axis=1)
malData=malData.drop(['md5'],axis=1)
```
## Data splitting
```
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(malData,y,test_size=0.2,random_state=42)
X_train.shape
```
## Model Building
## *1. Random Forest*
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
clf=RandomForestClassifier(max_depth=2,random_state=0)
randomModel=clf.fit(X_train,y_train)
from sklearn.metrics import f1_score,accuracy_score,plot_confusion_matrix,auc,confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
#Accuracy on the train dataset
train_pred=randomModel.predict(X_train)
accuracy_score(y_train,train_pred)
#Accuracy on the test dataset
prediction=randomModel.predict(X_test)
accuracy_score(y_test,prediction)
f1_score(y_test,prediction)
```
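For reference, the F1 score computed above is the harmonic mean of precision and recall; here is a stdlib sketch of what `f1_score` returns for the positive class:

```python
def f1_from_counts(tp, fp, fn):
    """F1 score from raw confusion counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_from_counts(tp=90, fp=10, fn=30))  # precision 0.90, recall 0.75 -> ~0.818
```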
### CONFUSION MATRIX
```
titles_options = [("Confusion matrix, without normalization", None), ("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
    # display_labels needs one label per class: here 0 = malware, 1 = legitimate
    disp = plot_confusion_matrix(randomModel, X_test, y_test,
                                 display_labels=["malware", "legitimate"],
                                 cmap=plt.cm.Blues, normalize=normalize)
    disp.ax_.set_title(title)
    print(title)
    print(disp.confusion_matrix)
plt.show()
```
## *2. Neural Network*
```
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
#define model
model = Sequential()
model.add(Dense(16, input_dim=54, activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(4, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.summary()#print model Summary
#compile model
model.compile(loss="binary_crossentropy" , optimizer="rmsprop", metrics=["accuracy"])
```
## Model Evaluation
```
#Fit MODEL
model.fit(X_train, y_train, epochs=5, batch_size=32)
#Accuracy on the training dataset
trainPred=model.predict(X_train)
trainPred=[1 if y>=0.5 else 0 for y in trainPred]
accuracy_score(y_train, trainPred)
#Accuracy on the test dataset
y_prediction=model.predict(X_test)
y_prediction=[1 if y>=0.5 else 0 for y in y_prediction]
accuracy_score(y_test, y_prediction)
confusion_matrix(y_test,y_prediction)
f1_score(y_test,y_prediction)
import pandas
import numpy
import sklearn.ensemble as ek
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn import preprocessing
from sklearn import svm
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier
model = { "DecisionTree":tree.DecisionTreeClassifier(max_depth=10),
"RandomForest":ek.RandomForestClassifier(n_estimators=50),
"Adaboost":ek.AdaBoostClassifier(n_estimators=50),
"GradientBoosting":ek.GradientBoostingClassifier(n_estimators=50),
"GNB":GaussianNB(), "K Nearest Neighbour": KNeighborsClassifier(),
"LinearRegression":LinearRegression()}
results = {}
for algo in model:
    clf = model[algo]
    clf.fit(X_train, y_train)
    score = clf.score(X_test, y_test)
    print("%s : %s " % (algo, score))
    results[algo] = score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
df=pd.read_csv('MalwareData.csv',sep='|')
df=df.drop(['Name','md5'],axis=1)
X=df.iloc[:,:-1]
y=df.iloc[:,-1]
df.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train.shape
X_test.shape
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
y_pred=lr.predict(X_test)
result1 = confusion_matrix(y_test, y_pred)
print('Confusion Matrix :')
print(result1)
print('Accuracy Score :',accuracy_score(y_test, y_pred))
print('Report : ')
print(classification_report(y_test, y_pred) )
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train,y_train)
y_pred1=nb.predict(X_test)
result2 = confusion_matrix(y_test, y_pred1)
print('Confusion Matrix :')
print(result2)
print('Accuracy Score :',accuracy_score(y_test, y_pred1))
print('Report : ')
print(classification_report(y_test, y_pred1))
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train,y_train)
y_pred2=knn.predict(X_test)
result3 = confusion_matrix(y_test, y_pred2)
print('Confusion Matrix :')
print(result3)
print('Accuracy Score :',accuracy_score(y_test, y_pred2))
print('Report : ')
print(classification_report(y_test, y_pred2))
from sklearn.tree import DecisionTreeClassifier
tr = DecisionTreeClassifier()
tr.fit(X_train,y_train)
y_pred3=tr.predict(X_test)
result4 = confusion_matrix(y_test, y_pred3)
print('Confusion Matrix :')
print(result4)
print('Accuracy Score :',accuracy_score(y_test, y_pred3))
print('Report : ')
print(classification_report(y_test, y_pred3))
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(X_train,y_train)
y_pred4=rf.predict(X_test)
result5 = confusion_matrix(y_test, y_pred4)
print('Confusion Matrix :')
print(result5)
print('Accuracy Score :',accuracy_score(y_test, y_pred4))
print('Report : ')
print(classification_report(y_test, y_pred4))
names = ['LogisticRegression', 'NaiveBayes', 'K-NearestNeighbor','DecisionTree','RandomForest']
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
fpr1 , tpr1, thresholds = roc_curve(y_test,y_pred)
fpr2, tpr2, thresholds = roc_curve(y_test,y_pred1)
fpr3, tpr3, thresholds = roc_curve(y_test,y_pred2)
fpr4, tpr4, thresholds = roc_curve(y_test,y_pred3)
fpr5, tpr5, thresholds = roc_curve(y_test,y_pred4)
auc1 = roc_auc_score(y_test,y_pred)
auc2 = roc_auc_score(y_test,y_pred1)
auc3 = roc_auc_score(y_test,y_pred2)
auc4 = roc_auc_score(y_test,y_pred3)
auc5 = roc_auc_score(y_test,y_pred4)
fpr = [fpr1,fpr2,fpr3,fpr4,fpr5]
tpr = [tpr1,tpr2,tpr3,tpr4,tpr5]
auc = [auc1,auc2,auc3,auc4,auc5]
plt.figure(figsize=(15,10))
for i in range(0, 5):
    plt.plot(fpr[i], tpr[i], label='%s (AUC = %.3f)' % (names[i], auc[i]))
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc = 'lower right')
plt.show()
acc = [0.3040202825063383,0.7021369069177834,0.9864904020282507,0.9914885910901847,0.9935892792466497]
plt.figure(figsize=(8,5))
plt.subplot()
plt.bar(names, acc)
plt.suptitle('Accuracy of Models')
plt.show()
```
# 2. Tool to identify some components that have caused the electrical events
<p> This Jupyter notebook was used to manually identify some of the components that caused the previously hand-labeled electrical events. The components identified are <b>pumps, grinders (motor) and heaters</b> in the coffeemaker. </p> <b>This is the second notebook in the labeling pipeline of CREAM.</b>
<div class="alert alert-info">
<h3>Instructions for using this notebook</h3>
<p> In the following, we load the electrical events that have been previously labeled with the "1_electrical_events_labeling_tool.ipynb" notebook. </p>
<p> Proceed at the end of the notebook with the corresponding cell for the labeling. Follow the instructions given there. </p>
</div>
## Imports
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import h5py
import pandas as pd
import os
import sys
from pathlib import Path
from datetime import datetime
from datetime import timedelta
import math
import pdb
import scipy
# Add project path to path for import
project_path = os.path.abspath("..")
if project_path not in sys.path:
sys.path.append(project_path)
# Add module path to path for import
module_path = os.path.abspath("../data_utility/data_utility.py")
if module_path not in sys.path:
sys.path.append(module_path)
from data_utility import CREAM_Day # class to work with a day of the CREAM Dataset
%matplotlib notebook
# Intentional replication is necessary
%matplotlib notebook
%load_ext autoreload
# Reload all modules every time before executing the Python code typed.
%autoreload 2
# Import some graphical modules
from IPython.display import display, clear_output
from ipywidgets import Button, Layout, ButtonStyle, HBox, VBox, widgets, Output
from IPython.display import SVG, display, clear_output
import subprocess
import glob
```
## Global Functions
```
def plot_event_window(event_timestamp:pd.Timestamp, window_size, current_CREAM_day:CREAM_Day, concurrent_events_dict):
"""
Plots a window of window_size in each direction around the event_timestamp.
The event_timestamp marks the beginning of the minute where the event stopped.
So instead of directly using the event_timestamp, we plot the event_timestamp + 59 seconds
    to mark the end of the minute in which the event stopped.
    Therefore the event happened before the point that is marked as a bold red line.
The current signal of the coffee maker is plotted.
The event type is the label the event gets.
If a concurrent_events_dict is provided, with the keys being the name of the event list and the values being the event dataframes,
all other events that happen within the window of interest are also plotted.
Appliance events are bold orange lines.
Other events are dashed red lines.
"""
# Import and set globals necessary for the click functions
global EVENT_TIMESTAMP
global WINDOW_START_TS
global COMPONENTS_DF
# Instead of taking the event timestamp directly we take the END of the minute
end_event_timestamp = event_timestamp + timedelta(seconds=59)
# Tackle border cases of the timestamp
if end_event_timestamp - timedelta(seconds=window_size) < current_CREAM_day.minimum_request_timestamp: # in case we are at the beginning of the day
duration_to_left = end_event_timestamp - current_CREAM_day.minimum_request_timestamp
        duration_to_left = duration_to_left.total_seconds() # amount of data that we load now to the left of the current timestamp
duration_to_right = window_size #to the right we can load the full window
elif end_event_timestamp + timedelta(seconds=window_size) > current_CREAM_day.maximum_request_timestamp: # in case we are at the end of the day
duration_to_right = current_CREAM_day.maximum_request_timestamp - end_event_timestamp
duration_to_right = duration_to_right.total_seconds() #amount of data that we load now to the right of the current timestamp
duration_to_left = window_size #to the left we can load the full window
    else: # if we have enough datapoints to the left and to the right to load the full WINDOW_SIZE in each direction
duration_to_left = window_size
duration_to_right = window_size
# Create the start- and end-timestamp and compute the overall duration of the window
duration = duration_to_left + duration_to_right
start_ts = end_event_timestamp - timedelta(seconds=duration_to_left)
end_ts = end_event_timestamp + timedelta(seconds=duration_to_right)
# Load the data
voltage, current = current_CREAM_day.load_time_frame(start_datetime=start_ts, duration=duration) #and WINDOW_SIZE seconds after the event
# Compute the index of the event, using the timestamp
end_event_index = current_CREAM_day.get_index_from_timestamp(start_ts, end_event_timestamp)
fig, ax = plt.subplots(1,1)
fig.canvas.mpl_connect('button_press_event', onclick) #append event to figure
xticks = np.arange(len(current))
ax.plot(xticks, current, markersize=0.1, alpha=0.6)
ax.tick_params(axis='x', rotation=90) #rotate the xlabels
if np.max(current) < 1: #in case of noise, show an appropriate range
ax.set_ylim([-6,6])
# Plot the event line
ax.axvline(end_event_index, color="red", linewidth=1.5)
# Add other events that happend within the window
if len(concurrent_events_dict) > 0:
for event_list_name, concurrent_events_df in concurrent_events_dict.items():
# If an already refined timestamp list (either product, or maintenance) is provided, one
# can plot the detailed end timestamps instead of the coarse grained ones that are not refined yet
if "End_Timestamp" in concurrent_events_df.columns:
ts_column_name = "End_Timestamp"
else:
ts_column_name = "Timestamp"
concurrent_events_df_roi = concurrent_events_df[(concurrent_events_df[ts_column_name] <= end_ts) & (concurrent_events_df[ts_column_name] >= start_ts)]
if len(concurrent_events_df_roi) > 0:
for i, row in concurrent_events_df_roi.iterrows():
# Get the event index
i = current_CREAM_day.get_index_from_timestamp(start_ts, row[ts_column_name])
# Some plotting adjustments, depending on the type of event that is plotted
if "component" in event_list_name:
color ="orange"
linewidth=1.5
else: # in case of product or maintenance events
color="red"
if "product" in event_list_name:
if "Product" in concurrent_events_df_roi.columns:
label = row.Product
elif "Event_Type" in concurrent_events_df_roi.columns:
label= row.Event_Type
else:
label = "unspecified"
linewidth=1.2
elif "maintenance" in event_list_name:
if "Activity" in concurrent_events_df_roi.columns:
label = row.Activity
elif "Event_Type" in concurrent_events_df_roi.columns:
label= row.Event_Type
else:
label = "unspecified"
linewidth=1.2
else:
label = "Unknown"
linewidth=0.6
# Plot the line
ax.axvline(i, color=color, linestyle=":", label=label, linewidth=linewidth)
if len(COMPONENTS_DF) > 1:
# use mask here because of misaligned indices
mask = (COMPONENTS_DF.Timestamp <= end_ts) & (COMPONENTS_DF.Timestamp >= start_ts)
concurrent_events_df_roi = COMPONENTS_DF.loc[mask.values]
concurrent_events_df_roi = concurrent_events_df_roi[concurrent_events_df_roi.Component!="unlabeled"] #only take the ones with an already labeled component
if len(concurrent_events_df_roi) > 0:
for i, row in concurrent_events_df_roi.iterrows():
i = current_CREAM_day.get_index_from_timestamp(start_ts, row.Timestamp)
ax.axvline(i, color="green", linestyle=":", label="already labeled end " + str(i))
# add time information to plot
samples_per_minute = current_CREAM_day.sampling_rate * 60 #every 60 seconds
if len(current) % samples_per_minute == 0: #just in case the parameters are changed and there are no full minutes in the signals
step = len(current) / samples_per_minute
for i in range(0, int(step+1)):
ax.axvline(i*samples_per_minute, color="black", ymax=0.1)
fig.suptitle("Event :" + "\n" + str(str(start_ts) + " - " + str(end_ts)))
ax.legend(loc='upper right')
EVENT_TIMESTAMP = event_timestamp
WINDOW_START_TS = start_ts
return fig, ax
```
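The border-case handling at the start of `plot_event_window` (clipping the window when the event sits near the beginning or end of the day) can be isolated into a small pure function; a sketch with hypothetical names:

```python
from datetime import datetime, timedelta

def clip_window(event_ts, window_size, day_min, day_max):
    """Clamp a +/- window_size-second window around event_ts to the day's
    available data; returns (window_start, total_duration_in_seconds)."""
    left = min(window_size, (event_ts - day_min).total_seconds())
    right = min(window_size, (day_max - event_ts).total_seconds())
    return event_ts - timedelta(seconds=left), left + right

day_min = datetime(2018, 8, 23, 0, 0, 0)
day_max = datetime(2018, 8, 23, 23, 59, 59)
start, duration = clip_window(datetime(2018, 8, 23, 0, 0, 30), 120, day_min, day_max)
print(start, duration)  # window start clamped to midnight, 150 s of data
```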
## Global Variables
```
EVENT_INDEX = int(0) # index of the EVENTS_TO_LABEL_DF that the programm is currently at
EVENTS_TO_LABEL_DF = None # dataframe of the list of events to label
EVENT_TIMESTAMP = None # timestamp of the event that is in the current focus
WINDOW_START_TS = None # start timestamp of the window we are currently looking at
LAST_EVENT_CLICKED_LOC_LIST = [] # list of the locs of the last events clicked
LABELED_TIMESTAMP = None # the labeled timestamp
WINDOW_SIZE = int(120) # seconds, the window size in each direction around an event to be displayed
ALL_DAYS = ["2018-08-23" , "2018-08-24" , "2018-08-25", "2018-08-26" , "2018-08-27" , "2018-08-28" ,
"2018-08-29", "2018-08-30", "2018-08-31", "2018-09-01", "2018-09-02" , "2018-09-03" , "2018-09-04" ,
"2018-09-05", "2018-09-06", "2018-09-07", "2018-09-08" , "2018-09-09" , "2018-09-10", "2018-09-11", "2018-09-12"
"2018-09-13" ,"2018-09-14" ,"2018-09-15" , "2018-09-16", "2018-09-17", "2018-09-18","2018-09-19" , "2018-09-20" ,
"2018-09-21" , "2018-09-22" , "2018-09-23" ,"2018-09-24" ,"2018-09-25" ,"2018-09-26" , "2018-09-27", "2018-09-28" ,
"2018-09-29" , "2018-09-30" , "2018-10-01" ,"2018-10-02" , "2018-10-03" ,"2018-10-04", "2018-10-05" , "2018-10-06" ,
"2018-10-07", "2018-10-08" ]
```
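Hand-typed date lists like ALL_DAYS are easy to get wrong (a missing comma between two entries silently concatenates them into one string). The same list can be generated from the first and last recording day:

```python
from datetime import date, timedelta

# Generate the 47 consecutive recording days of CREAM as ISO date strings.
first, last = date(2018, 8, 23), date(2018, 10, 8)
ALL_DAYS_GENERATED = [(first + timedelta(days=i)).isoformat()
                      for i in range((last - first).days + 1)]
print(len(ALL_DAYS_GENERATED))                        # 47
print(ALL_DAYS_GENERATED[0], ALL_DAYS_GENERATED[-1])  # 2018-08-23 2018-10-08
```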
## Widget functions for the UI
```
closest_event_loc = None
timestamp_clicked = None
def onclick(event):
"""
Function to be executed in case of a click event at a figure.
"""
global COMPONENTS_DF # Dataframe containing the component events
global COMPONENT_NAME # Name of the component currently labeled
global LAST_EVENT_CLICKED_LOC_LIST # list of locs of the last events clicked, used for deleting last click in case of errors
    global EVENT_TIMESTAMP # timestamp of the event of interest that was automatically generated
global WINDOW_START_TS #start timestamp of the window we are currently looking at
global current_CREAM_day #object representing the current day in the CREAM dataset
global EVENT_INDEX # index of the EVENTS_TO_LABEL_DF that the programm is currently at
global closest_event_loc
global timestamp_clicked
# Take the event index from the click, convert it to a timestamp
timestamp_clicked = current_CREAM_day.get_timestamp_from_index(WINDOW_START_TS, math.floor(event.xdata))
if timestamp_clicked > EVENT_TIMESTAMP + timedelta(seconds=60):
print("The red timestamp is generated after the event is completed! Hence, do not place the click after it!")
return
event_before = COMPONENTS_DF[COMPONENTS_DF.Timestamp <= timestamp_clicked].iloc[-1]
event_after = COMPONENTS_DF[COMPONENTS_DF.Timestamp > timestamp_clicked].iloc[0]
delta_before = timestamp_clicked - event_before.Timestamp
delta_before = delta_before.total_seconds()
delta_after = event_after.Timestamp - timestamp_clicked
delta_after = delta_after.total_seconds()
if delta_before <= delta_after:
closest_event_loc = event_before.name
else:
closest_event_loc = event_after.name
COMPONENTS_DF.at[closest_event_loc, "Component"] = COMPONENT_NAME
# Store the loc to enable the delete function in case of errors
LAST_EVENT_CLICKED_LOC_LIST.append(closest_event_loc)
# Increment the index we are currently looking at
EVENT_INDEX += 1
return
def display_initial_event(event_index_p=0):
"""
Display the start event. This is set to 0 by default!
In case of interruptions in the labeling process or in case of errors, you can restart labeling at
an arbitrary index using the event_index_p parameter.
"""
global COMPONENTS_DF # Dataframe containing the component events
global COMPONENT_NAME # Name of the component currently labeled
global LAST_EVENT_CLICKED_LOC_LIST # loc of the last event clicked, used for deleting last click in case of errors
global CONCURRENT_EVENTS_DICT # dictionary containing the events happening concurrently, used for plotting
global EVENTS_TO_LABEL_DF # dataframe of the list of events to label
global EVENT_INDEX # event index we are currently processing
global FIG # global figure object
global AX # global axis object
global current_CREAM_day # global CREAM_day object
global WINDOW_SIZE # global WINDOW_SIZE
plt.clf()
clear_output()
if EVENT_INDEX > len(EVENTS_TO_LABEL_DF)-1:
print("THIS WAS THE LAST EVENT! YOU ARE DONE!")
return
# For the timestamp we need to check if we need to create the corresponding CREAM_Day object, or if it already exists
event_timestamp = EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Timestamp
event_date = str(EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Date)
if current_CREAM_day.day_date != event_date: # if the event does not lie within the current CREAM_day object, create a new one
day_path = os.path.join(PATH_TO_DATA, event_date)
current_CREAM_day = CREAM_Day(cream_day_location=day_path,use_buffer=True, buffer_size_files=2)
FIG, AX = plot_event_window(event_timestamp = EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Timestamp,
window_size = WINDOW_SIZE,
current_CREAM_day = current_CREAM_day,
concurrent_events_dict = CONCURRENT_EVENTS_DICT)
FIG.show()
display(button_box)
def on_next_clicked(event):
global LABEL_DESTINATION_PATH #location where the event labels will be stored, is user specified
global WINDOW_START_TS #start timestamp of the window we are currently looking at
global COMPONENTS_DF # Dataframe containing the component events
global COMPONENT_NAME # Name of the component currently labeled
global LAST_EVENT_CLICKED_LOC_LIST # loc of the last event clicked, used for deleting last click in case of errors
global CONCURRENT_EVENTS_DICT # dictionary containing the events happening concurrently, used for plotting
global EVENTS_TO_LABEL_DF # dataframe of the list of events to label
global EVENT_INDEX # event index we are currently processing
global FIG # global figure object
global AX # global axis object
global current_CREAM_day # global CREAM_day object
global WINDOW_SIZE # global WINDOW_SIZE
save_labels(destination=LABEL_DESTINATION_PATH) #save it
plt.clf()
clear_output()
if EVENT_INDEX > len(EVENTS_TO_LABEL_DF)-1:
print("THIS WAS THE LAST EVENT! YOU ARE DONE!")
return
print("This is event number " + str(EVENT_INDEX) + " of " + str(len(EVENTS_TO_LABEL_DF)))
# For the timestamp we need to check if we need to create the corresponding CREAM_Day object, or if it already exists
event_timestamp = EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Timestamp
event_date = str(EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Date)
if current_CREAM_day.day_date != event_date: # if the event does not lie within the current CREAM_day object, create a new one
day_path = os.path.join(PATH_TO_DATA, event_date)
current_CREAM_day = CREAM_Day(cream_day_location=day_path,use_buffer=True, buffer_size_files=2)
FIG, AX = plot_event_window(event_timestamp = EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Timestamp,
window_size = WINDOW_SIZE,
current_CREAM_day = current_CREAM_day,
concurrent_events_dict = CONCURRENT_EVENTS_DICT)
FIG.show()
display(button_box)
def save_labels(destination: str):
global EVENT_INDEX
global COMPONENTS_DF
global COMPONENT_NAME
filename = "labeled_component_events.csv"
if EVENT_INDEX % 10 == 0 and EVENT_INDEX > 0: #every 10 events: before storing the new file, save the old one
os.rename(os.path.join(destination, filename), os.path.join(destination, "previous_component_event_labels.csv"))
#Store the new one
COMPONENTS_DF.to_csv(os.path.join(destination, filename), index=False)
def on_delete_clicked(event):
"""
Deletes the last labeled click and returns to the previous window
"""
global COMPONENTS_DF # Dataframe containing the component events
global COMPONENT_NAME # Name of the component currently labeled
global LAST_EVENT_CLICKED_LOC_LIST # loc of the last event clicked, used for deleting last click in case of errors
global CONCURRENT_EVENTS_DICT # dictionary containing the events happening concurrently, used for plotting
global EVENTS_TO_LABEL_DF # dataframe of the list of events to label
global EVENT_INDEX # event index we are currently processing
global FIG # global figure object
global AX # global axis object
if EVENT_INDEX <= 0 or not LAST_EVENT_CLICKED_LOC_LIST: #we arrived at the first event again
print("This is the first event, you can not go further back in time!")
return
COMPONENTS_DF.at[LAST_EVENT_CLICKED_LOC_LIST.pop(), "Component"] = "unlabeled" # undo the most recent click
EVENT_INDEX = EVENT_INDEX - 1 # adjust EVENT_INDEX
FIG, AX = plot_event_window(event_timestamp = EVENTS_TO_LABEL_DF.iloc[EVENT_INDEX].Timestamp,
window_size = WINDOW_SIZE,
current_CREAM_day = current_CREAM_day,
concurrent_events_dict = CONCURRENT_EVENTS_DICT)
# Now display the previous event
plt.clf()
clear_output()
print("The current Event Index is " + str(EVENT_INDEX))
FIG.show()
display(button_box)
return
```
# Only touch this area in the notebook to alter variables like, for example, the path to the dataset
<div class="alert alert-danger">
<h3>//ToDo</h3>
<p>Please specify the component name to label. </p>
</div>
```
COMPONENT_NAME = "millingplant" # 'pump', 'heater'
```
<div class="alert alert-danger">
<h3>//ToDo</h3>
<p>Please specify the path to the main-folder of "CREAM". </p>
</div>
```
PATH_TO_DATA = os.path.abspath(os.path.join("..", "..", "Datasets", "CREAM"))
```
<div class="alert alert-danger">
<h3>//ToDo</h3>
<p>Please specify the path to location where you want to store the labels. </p>
</div>
```
LABEL_DESTINATION_PATH = os.path.abspath(os.path.join("..", "..", "Datasets", "CREAM", "tmp"))
```
## Execute this cell to load the raw electrical events
<p> In the following, we load the electrical events that have been previously labeled with the "1_electrical_events_labeling_tool.ipynb" notebook. </p>
<p> Furthermore, we load the raw product and maintenance events, which contain the timestamps with per-minute precision. </p>
```
#necessary for the plotting
# Load the events
day_path = os.path.join(PATH_TO_DATA, "2018-08-24") #arbitrary day to initialize the object
current_CREAM_day = CREAM_Day(cream_day_location=day_path,use_buffer=True, buffer_size_files=2)
# Load the electrical component events (the raw ones)
#COMPONENTS_DF = current_CREAM_day.load_component_events(os.path.join(PATH_TO_DATA, "raw_coffee_maker_logs", "raw_component_events.csv"), raw_file=True, filter_day=False)
# Load the product and the maintenance events (the raw ones, per minute events) and filter for the day
all_maintenance_events = current_CREAM_day.load_machine_events(os.path.join(PATH_TO_DATA, "raw_coffee_maker_logs", "raw_maintenance_events.csv"), raw_file=True, filter_day=False)
all_product_events = current_CREAM_day.load_machine_events(os.path.join(PATH_TO_DATA, "raw_coffee_maker_logs", "raw_product_events.csv"), raw_file=True, filter_day=False)
# Initialize the dictionary that is used to determine concurrent_events in the plot method
CONCURRENT_EVENTS_DICT = {"product_events" : all_product_events, "maintenance_events" : all_maintenance_events}
```
## Execute this cell to add the "Component" column to the raw_component events from labeling step 1
```
if "Component" not in COMPONENTS_DF.columns: #only if the column has not been created before
COMPONENTS_DF["Component"] = "unlabeled"
```
# Execute this cell to start the labeling
<p> Click into the figure as close as possible to the event you want to label. The closest event to your click
is then labeled accordingly. </p>
<p> To ease labeling and to raise awareness of concurrent events, the following lines are displayed: </p>
<p> Appliance event labels are shown as dashed orange lines </p>
<p> Any other product or maintenance event is shown as a dashed red line </p>
<p> <b> The red line marks the latest point by which the event has to be finished! </b> </p>
<p> The short black lines represent one-minute steps </p>
<p> When you are done with this event, click the green <b> "next" </b> button to save it and load the next event </p>
<p> If you have clicked <b> "next" </b> accidentally, or you still want to remove the label you set for the previous event, click the red <b> "delete last entry" </b> button </p>
<div class="alert alert-info">
<h4>Empty Figure or not in interactive mode</h4>
<p>If the plot does not load or is not in interactive mode, re-execute this cell or re-execute the import cell</p>
</div>
<div class="alert alert-danger">
<h3> Do not use the zoom and other capabilities from the plot toolbar</h3>
<p>Clicks made while zooming etc. also get registered as clicks for labels!</p>
</div>
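The "closest event to your click" rule used for labeling can be sketched in isolation. The frame below is a toy stand-in for `COMPONENTS_DF` (hypothetical timestamps, real column name), mirroring the selection logic inside `onclick()`:

```python
import pandas as pd

# Toy stand-in for COMPONENTS_DF: three events a few seconds apart
events = pd.DataFrame({"Timestamp": pd.to_datetime(
    ["2018-08-24 10:00:00", "2018-08-24 10:00:10", "2018-08-24 10:00:25"])})

clicked = pd.to_datetime("2018-08-24 10:00:12")

# Same rule as onclick(): pick whichever neighbouring event is closer in time
before = events[events.Timestamp <= clicked].iloc[-1]
after = events[events.Timestamp > clicked].iloc[0]
closest = before.name if (clicked - before.Timestamp) <= (after.Timestamp - clicked) else after.name
print(closest)  # 1 -> the 10:00:10 event is only 2 s away
```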
```
if COMPONENT_NAME == "millingplant":
#build the events_to_label and the concurrent_events dict (check whether this is already expected to be filtered!)
EVENTS_TO_LABEL_DF = None # dataframe of the list of events to label
EVENTS_TO_LABEL_DF = all_maintenance_events[(all_maintenance_events.Activity == 'MillingPlantEspresso') |
(all_maintenance_events.Activity == 'MillingPlantCoffee')]
# sample a random subset, because there are a lot of them
np.random.seed(42)
sample_size = int(len(EVENTS_TO_LABEL_DF) * 0.15)
events_to_label_subset = np.random.choice(EVENTS_TO_LABEL_DF.index, sample_size, replace=False)
EVENTS_TO_LABEL_DF = EVENTS_TO_LABEL_DF.loc[events_to_label_subset]
EVENTS_TO_LABEL_DF.sort_index(inplace=True) #sort by index
print("Proceed with the labeleling of the millingplant events below!")
# Create and register Buttons
next_button = Button(description="Next -> ",style=ButtonStyle(button_color='green'))
delete_button = Button(description=" <- Delete last entry",style=ButtonStyle(button_color='red'))
button_box = HBox([next_button, delete_button])
next_button.on_click(on_next_clicked)
delete_button.on_click(on_delete_clicked)
# Display first event --> event_index is set to zero for the start
# In case of errors or interruptions, provide another event index to the display_initial_event function
display_initial_event(event_index_p=0)
elif COMPONENT_NAME == "pump":
EVENTS_TO_LABEL_DF = all_product_events[all_product_events.Product == 'hot_water']
EVENTS_TO_LABEL_DF.sort_index(inplace=True) #sort by index
print("Proceed with the labeling of the pump events below!")
# Create and register Buttons
next_button = Button(description="Next -> ",style=ButtonStyle(button_color='green'))
delete_button = Button(description=" <- Delete last entry",style=ButtonStyle(button_color='red'))
button_box = HBox([next_button, delete_button])
next_button.on_click(on_next_clicked)
delete_button.on_click(on_delete_clicked)
# Display first event --> event_index is set to zero for the start
# In case of errors or interruptions, provide another event index to the display_initial_event function
display_initial_event(event_index_p=0)
elif COMPONENT_NAME == "heater":
# Simply select all the events on Saturdays to be heater events. We only label the on-events
# We have investigated the data (product events) and no other events can be found on Saturdays
# Get the Saturday dates
day_information_df = current_CREAM_day.get_weekday_information(date=ALL_DAYS)
saturdays = day_information_df[day_information_df.Weekday == "Saturday"].Date.values
# Filter for the On-Events and the saturdays in the component events
mask = (COMPONENTS_DF.Event_Type == "On") & (COMPONENTS_DF.Date.isin(saturdays))
COMPONENTS_DF.loc[mask, "Component"] = "heater" # .loc is required here: .at does not accept a boolean mask
# To signal that everything is finished
EVENTS_TO_LABEL_DF = []
print("The heating events have been labeled and saved!")
else:
raise ValueError("Component name is not available! Please use either millingplant, heater or pump")
```
```
import os
import json
from pandas.io.json import json_normalize
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
# current version of seaborn generates a bunch of warnings that we'll ignore
warnings.filterwarnings('ignore')
sns.set_style('whitegrid')
import gc
import datetime
from sklearn.preprocessing import LabelEncoder
%matplotlib inline
color = sns.color_palette()
```
Loading data
```
df_train = pd.read_csv('input/train-flattened.csv',
                       dtype={'fullVisitorId': str}) # MUST change the fullVisitorId to string format (REQUIRED!)
df_test = pd.read_csv('input/test-flattened.csv',
                      dtype={'fullVisitorId': str})
# replace NaN with zero for column - totals.transactionRevenue (only do this for training data)
df_train['totals.transactionRevenue'].fillna(0, inplace=True)
const_cols = [c for c in df_train.columns if df_train[c].nunique(dropna=False)==1 ]
df_train_clean = df_train.drop(const_cols, axis=1)
df_test_clean = df_test.drop(const_cols, axis=1)
# Drop useless features
df_train_clean.drop(['sessionId', "trafficSource.campaignCode"], axis=1, inplace=True)
df_test_clean.drop(['sessionId'], axis=1, inplace=True)
```
Preprocessing for categorical features
```
# specify which categorical variables to replace NaN (and label encoding for later)
cat_cols = ["channelGrouping", "device.browser",
"device.deviceCategory", "device.operatingSystem",
"geoNetwork.city", "geoNetwork.continent",
"geoNetwork.country", "geoNetwork.metro",
"geoNetwork.networkDomain", "geoNetwork.region",
"geoNetwork.subContinent", "trafficSource.adContent",
"trafficSource.adwordsClickInfo.adNetworkType",
"trafficSource.adwordsClickInfo.gclId",
"trafficSource.adwordsClickInfo.page",
"trafficSource.adwordsClickInfo.slot", "trafficSource.campaign",
"trafficSource.keyword", "trafficSource.medium",
"trafficSource.referralPath", "trafficSource.source",
'trafficSource.adwordsClickInfo.isVideoAd', 'trafficSource.isTrueDirect']
for col in cat_cols:
# Replace NaN with 'missing'
df_train_clean[col] = df_train_clean[col].fillna('missing')
df_test_clean[col] = df_test_clean[col].fillna('missing')
def clean_small_cap(df):
for col in cat_cols:
#print('Different elements in the feature:', df_train_clean[col].unique())
before = df[col].unique()
cleaned = np.unique([str(x).lower() for x in df[col]])
if len(before) == len(cleaned):
# There is no small/capital letter issue:
print("no issue with {}".format(col))
else:
print("THERE ISSSSS with {}".format(col))
df[col] = [x.lower() for x in df[col]]
clean_small_cap(df_train_clean)
clean_small_cap(df_test_clean)
df_train_clean.corr()
```
Preprocessing for numerical features
```
# specify which numerical variables to replace NaN
num_cols = ["totals.hits", "totals.pageviews", "visitNumber", "visitStartTime", 'totals.bounces', 'totals.newVisits']
for col in num_cols:
# convert numerical variables to float
# Replace NaN with 0
df_train_clean[col] = df_train_clean[col].astype('float').fillna(0)
df_test_clean[col] = df_test_clean[col].astype('float').fillna(0)
```
Preprocessing for other type of features (Boolean value, dates)
```
df_train_clean['device.isMobile'] = df_train_clean['device.isMobile'].astype(int)
df_test_clean['device.isMobile'] = df_test_clean['device.isMobile'].astype(int)
# change int format to string format for the column - date
df_train_clean['date'] = df_train_clean['date'].astype(str)
df_test_clean['date'] = df_test_clean['date'].astype(str)
# add a new column - yearmonth
df_train_clean.insert(loc=2, column='yearmonth', value=df_train_clean['date'].str.slice(start=0, stop=-2))
df_test_clean.insert(loc=2, column='yearmonth', value=df_test_clean['date'].str.slice(start=0, stop=-2))
df_train_clean['date'] = pd.to_datetime(df_train_clean['date'], format='%Y%m%d')
df_test_clean['date'] = pd.to_datetime(df_test_clean['date'], format='%Y%m%d')
df_train_clean = df_train_clean.sort_values(by='date', ascending=True)
df_test_clean = df_test_clean.sort_values(by='date', ascending=True)
# training data
df_train_clean.insert(loc=2, column='year', value=df_train_clean.date.dt.year)
df_train_clean.insert(loc=3, column='month', value=df_train_clean.date.dt.month)
# +1 to make Monday=1.....until Sunday=7
df_train_clean.insert(loc=4, column='day', value=(df_train_clean.date.dt.dayofweek)+1)
# testing data
df_test_clean.insert(loc=2, column='year', value=df_test_clean.date.dt.year)
df_test_clean.insert(loc=3, column='month', value=df_test_clean.date.dt.month)
# +1 to make Monday=1.....until Sunday=7
df_test_clean.insert(loc=4, column='day', value=(df_test_clean.date.dt.dayofweek)+1)
```
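As a minimal, self-contained check of the string slicing and `dt` accessors used above (toy dates, not the actual dataset):

```python
import pandas as pd

dates = pd.Series(["20170801", "20171224"])    # toy date strings in YYYYMMDD format
yearmonth = dates.str.slice(start=0, stop=-2)  # drop the day -> YYYYMM
parsed = pd.to_datetime(dates, format="%Y%m%d")

print(yearmonth.tolist())                  # ['201708', '201712']
print((parsed.dt.dayofweek + 1).tolist())  # Monday=1 ... Sunday=7
```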
Label Encoding
```
# reset index after we rearranged the rows based on date
df_train_clean.reset_index(drop=True, inplace=True)
df_test_clean.reset_index(drop=True, inplace=True)
df_train_clean["totals.transactionRevenue"] = df_train_clean["totals.transactionRevenue"].astype('float')
# Loop through each categorical column
for col in cat_cols:
label_encoder = LabelEncoder()
# use the label encoding based on training and testing data to capture all strings
label_encoder.fit(list(df_train_clean[col].values.astype('str')) + list(df_test_clean[col].values.astype('str')))
df_train_clean[col] = label_encoder.transform(list(df_train_clean[col].values.astype('str')))
df_test_clean[col] = label_encoder.transform(list(df_test_clean[col].values.astype('str')))
print('Label encoded: {}'.format(col))
```
## Data preparation
```
unused_feature = [
'totals.transactionRevenue',
'yearmonth',
'date',
'totals.bounces',
'trafficSource.adwordsClickInfo.adNetworkType',
'trafficSource.adwordsClickInfo.slot',
'trafficSource.adwordsClickInfo.page',
'trafficSource.adwordsClickInfo.isVideoAd',
'trafficSource.campaign',
'trafficSource.adContent',
'device.deviceCategory',
'geoNetwork.subContinent',
'year'
]
train_data = df_train_clean[df_train_clean.columns[~df_train_clean.columns.isin(unused_feature)]]
train_data.head()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit
# Helper function
def train_valid_split_wrt_time(data, feature_column, percent_valid = 0.1):
last_indi_train = int(np.floor(data.shape[0]*(1-percent_valid)))
x_train, x_valid = data[feature_column].iloc[0:last_indi_train], \
data[feature_column].iloc[last_indi_train:]
y_train, y_valid = data['totals.transactionRevenue'].iloc[0:last_indi_train], \
data['totals.transactionRevenue'].iloc[last_indi_train:]
return x_train, x_valid, y_train, y_valid
def timeseries_cv(X, Y, model, cv=5):
mse=[]
tscv = TimeSeriesSplit(n_splits=cv)
for train_index, test_index in tscv.split(X):
X_train, X_test = X.iloc[train_index] , X.iloc[test_index]
Y_train, Y_test = Y.iloc[train_index] , Y.iloc[test_index]
clf4_rf = model
clf4_rf.fit(X_train,Y_train)
X_test['totals.transactionRevenue_predicted'] = clf4_rf.predict(X_test)
X_test_pred = X_test[['fullVisitorId','totals.transactionRevenue_predicted']]
X_test_pred['totals.transactionRevenue'] = Y_test
X_test_pred = X_test_pred.groupby('fullVisitorId').agg({'totals.transactionRevenue_predicted':'sum',
'totals.transactionRevenue':'sum'}).reset_index()
error = np.sqrt(mean_squared_error(X_test_pred['totals.transactionRevenue'].apply(lambda x: np.log1p(x)),
X_test_pred['totals.transactionRevenue_predicted'].apply(lambda x: np.log1p(x))))
print('Error: {}'.format(error))
mse.append(error)
return np.mean(mse)
```
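The `TimeSeriesSplit` used in `timeseries_cv` above always places each test fold strictly after its training fold, which is why it suits this time-sorted data. A toy sketch, independent of the dataframes above:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(10).reshape(-1, 1)  # 10 time-ordered samples
folds = list(TimeSeriesSplit(n_splits=3).split(X))
for train_idx, test_idx in folds:
    # every test index comes strictly after every train index
    print(train_idx.tolist(), test_idx.tolist())
```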
===TESTING===
## Try LGBM
#### For installation of lightgbm
#### https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html#macos
#### https://github.com/Microsoft/LightGBM/issues/1456
```
df_train_clean.columns
df_train_clean_grp = df_train_clean.groupby(
["fullVisitorId"]
).agg({'totals.transactionRevenue':'sum', 'visitNumber':'sum', 'totals.pageviews':'sum',
'totals.hits':'sum'})
df_train_clean_grp.head()
df_train_clean_grp['totals.transactionRevenue'] = np.log1p(df_train_clean_grp['totals.transactionRevenue'])
df_train_clean_grp.head()
df_test_clean_grp = df_test_clean.groupby(
["fullVisitorId"]
).agg({'visitNumber':'sum', 'totals.pageviews':'sum',
                       'totals.hits':'sum'})
df_test_clean_grp.head()
# Split the train dataset into development and valid based on time
feature_column_1 = ['visitNumber', 'totals.pageviews','totals.hits']
x_train, x_test, y_train, y_test = train_valid_split_wrt_time(df_train_clean_grp, feature_column_1)
# custom function to run the LightGBM model
import lightgbm as lgb  # lgb.Dataset / lgb.train are used below but lightgbm was never imported
def run_lgb(train_X, train_y, val_X, val_y, test_X):
params = {
"objective" : "regression",
"metric" : "rmse",
"num_leaves" : 30,
"min_child_samples" : 100,
"learning_rate" : 0.1,
"bagging_fraction" : 0.7,
"feature_fraction" : 0.5,
"bagging_frequency" : 5,
"bagging_seed" : 2018,
"verbosity" : -1
}
lgtrain = lgb.Dataset(train_X, label=train_y)
lgval = lgb.Dataset(val_X, label=val_y)
model = lgb.train(params, lgtrain, 1000, valid_sets=[lgval], early_stopping_rounds=100, verbose_eval=100)
pred_test_y = model.predict(test_X, num_iteration=model.best_iteration)
pred_val_y = model.predict(val_X, num_iteration=model.best_iteration)
return pred_test_y, model, pred_val_y
# Training the model #
pred_test, model, pred_val = run_lgb(x_train, y_train, x_test, y_test, df_test_clean_grp)
df_test_clean_grp.shape
len(pred_test)
df_test_clean_grp['PredictedLogRevenue'] = pred_test
df_test_clean_grp.shape
df_test_clean_grp[['PredictedLogRevenue']].to_csv('out_lgbm.csv')
```
===End of TESTING==
Recover geoNetwork source
```
df_train_clean['geoNetwork.continent'].unique()
df_train_clean['geoNetwork.subContinent'].unique()[3]
df_train_clean['geoNetwork.region'].unique()[3]
df_train_clean['geoNetwork.country'].unique()[0]
df_train_clean['geoNetwork.city'].unique()[3]
df_train_clean['geoNetwork.metro'].unique()[4]
df_train_clean[df_train_clean['geoNetwork.metro'] == 'Houston TX']['geoNetwork.city'].unique()
df_train_clean['geoNetwork.networkDomain'].unique()[0]
df_train_clean.columns
```
## Tutorial on Units and UnitConverters in Parcels
In most applications, Parcels works with `spherical` meshes, where longitude and latitude are given in degrees, while depth is given in meters. But it is also possible to use `flat` meshes, where longitude and latitude are given in meters (note that the dimensions are then still called `longitude` and `latitude` for consistency reasons).
In all cases, velocities are given in m/s. So Parcels seamlessly converts between meters and degrees under the hood. For transparency, this tutorial explains how this works.
Let's first import the relevant modules, and create dictionaries for the `U`, `V` and `temp` data arrays, with the velocities 1 m/s and the temperature 20C.
```
%matplotlib inline
from parcels import Field, FieldSet
import numpy as np
xdim, ydim = (10, 20)
data = {'U': np.ones((ydim, xdim), dtype=np.float32),
'V': np.ones((ydim, xdim), dtype=np.float32),
'temp': 20*np.ones((ydim, xdim), dtype=np.float32)}
dims = {'lon': np.linspace(-15, 5, xdim, dtype=np.float32),
'lat': np.linspace(35, 60, ydim, dtype=np.float32)}
```
We can convert these data and dims to a FieldSet object using `FieldSet.from_data`. We add the argument `mesh='spherical'` (this is the default option) to signal that all longitudes and latitudes are in degrees.
Plotting the `U` field indeed shows a uniform 1m/s eastward flow. The `.show()` method recognises that this is a spherical mesh and hence plots the northwest European coastlines on top.
```
fieldset = FieldSet.from_data(data, dims, mesh='spherical')
fieldset.U.show()
```
However, printing the velocities directly shows something perhaps surprising. Here, we use the square-bracket field-interpolation notation to print the field value at (5W, 40N, 0m depth) at time 0.
```
print((fieldset.U[0, 0, 40, -5], fieldset.V[0, 0, 40, -5], fieldset.temp[0, 0, 40, -5]))
```
While the temperature field indeed is 20C, as we defined, these printed velocities are much smaller.
This is because Parcels converts under the hood from m/s to degrees/s. This conversion is done with a `UnitConverter` object, which is stored in the `.units` attribute of each Field. Below, we print these
```
for fld in [fieldset.U, fieldset.V, fieldset.temp]:
print('%s: %s' % (fld.name, fld.units))
```
So the U field has a `GeographicPolar` UnitConverter object, the V field has a `Geographic` UnitConverter and the `temp` field has a plain `UnitConverter` object.
Indeed, if we multiply the value of the V field with 1852 * 60 (the number of meters in 1 degree of latitude), we get the expected 1 m/s.
```
print(fieldset.V[0, 0, 40, -5]* 1852*60)
```
### UnitConverters for `mesh='flat'`
If longitudes and latitudes are given in meters, rather than degrees, simply add `mesh='flat'` when creating the FieldSet object.
```
fieldset_flat = FieldSet.from_data(data, dims, mesh='flat')
fieldset_flat.U.show()
for fld in [fieldset_flat.U, fieldset_flat.V, fieldset_flat.temp]:
print('%s: %f %s' % (fld.name, fld[0, 0, 40, -5], fld.units))
```
Indeed, in this case all Fields have the same default `UnitConverter` object. Note that the coastlines are also gone from the plot, as `.show()` recognises that this is a flat mesh.
### UnitConverters for Diffusion fields
The units for Brownian diffusion are in $m^2/s$. If (and only if!) the diffusion fields are called `kh_zonal` and `kh_meridional`, Parcels will automatically assign the correct UnitConverter objects to these fields.
```
kh_zonal = 100 # in m^2/s
kh_meridional = 100 # in m^2/s
fieldset.add_field(Field('Kh_zonal', kh_zonal*np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
fieldset.add_field(Field('Kh_meridional', kh_meridional*np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
for fld in [fieldset.Kh_zonal, fieldset.Kh_meridional]:
print('%s: %e %s' % (fld.name, fld[0, 0, 40, -5], fld.units))
```
Here, the UnitConverters are `GeographicPolarSquare` and `GeographicSquare`, respectively.
Indeed, multiplying with $(1852\cdot60)^2$ returns the original value
```
deg_to_m = 1852*60
print(fieldset.Kh_meridional[0, 0, 40, -5]*deg_to_m**2)
```
### Adding a UnitConverter object to a Field
So, to summarise, here is a table with all the conversions
| Field name | Converter object | Conversion for `mesh='spherical'`| Conversion for `mesh='flat'`|
|-------|-----------------|-----------------------------------|-----------------------------|
| 'U' | `GeographicPolar`| $1852 \cdot 60 \cdot \cos(lat \cdot \frac{\pi}{180})$ | 1 |
| 'V' | `Geographic` | $1852 \cdot 60 $ | 1 |
| 'Kh_zonal' | `GeographicPolarSquare` | $(1852 \cdot 60 \cdot \cos(lat \cdot \frac{\pi}{180}))^2$ | 1 |
| 'Kh_meridional' | `GeographicSquare` | $(1852 \cdot 60)^2 $ | 1 |
| All other fields | `UnitConverter` | 1 | 1 |
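The spherical-mesh column of the table above can be sketched numerically (plain Python arithmetic, not the Parcels API itself):

```python
import math

DEG_TO_M = 1852 * 60  # meters per degree of latitude

def u_factor(lat):  # 'U' conversion (GeographicPolar)
    return DEG_TO_M * math.cos(lat * math.pi / 180)

def v_factor():     # 'V' conversion (Geographic)
    return DEG_TO_M

# A 1 m/s flow at 40N, expressed in degrees/s:
print(1.0 / u_factor(40))  # ~1.17e-5 (zonal)
print(1.0 / v_factor())    # ~9.00e-6 (meridional)
```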
Only four Field names are recognised and assigned an automatic UnitConverter object. This means that things might go very wrong when e.g. a velocity field is not called `U` or `V`.
Fortunately, you can always add a UnitConverter later, as explained below:
```
fieldset.add_field(Field('Ustokes', np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
print(fieldset.Ustokes[0, 0, 40, -5])
```
This value for `Ustokes` of course is not as expected, since the mesh is spherical and hence this would mean 1 degree/s velocity. Assigning the correct `GeographicPolar` UnitConverter gives
```
from parcels.tools.converters import GeographicPolar
fieldset.Ustokes.units = GeographicPolar()
print(fieldset.Ustokes[0, 0, 40, -5])
print(fieldset.Ustokes[0, 0, 40, -5]*1852*60*np.cos(40*np.pi/180))
```
Alternatively, the UnitConverter can be set when the `FieldSet` or `Field` is created by using the `fieldtype` argument (use a dictionary in the case of `FieldSet` construction).
```
fieldset.add_field(Field('Ustokes2', np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid, fieldtype='U'))
print(fieldset.Ustokes2[0, 0, 40, -5])
```
### Using velocities in units other than m/s
Some OGCMs store velocity data in units of e.g. cm/s. For these cases, Field objects have a method `set_scaling_factor()`.
If your data is in cm/s and if you want to use the built-in Advection kernels, you will therefore have to use `fieldset.U.set_scaling_factor(100)` and `fieldset.V.set_scaling_factor(100)`.
```
fieldset.add_field(Field('Ucm', 0.01*np.ones((ydim, xdim), dtype=np.float32), grid=fieldset.U.grid))
fieldset.Ucm.set_scaling_factor(100)
print(fieldset.Ucm[0, 0, 40, -5])
```
```
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
# Import package
import tensorflow as tf
import json
import tflearn
import numpy as np
import random
import pickle
with open('intents.json') as jsonFile:
data = json.load(jsonFile)
print(data['intents'])
try:
with open('data.pickle',"rb") as f:
words, labels, training, output = pickle.load(f)
except:
words = []
labels = []
# docs_x contains the word patterns
docs_x = []
# docs_y contains the tag of each pattern
docs_y = []
for intent in data["intents"]:
for pattern in intent["patterns"]:
# consider only root words by removing unnecessary stuff from the sentence
# use tokenization: this helps grab the individual words from the sentence
# it returns a list which contains all the words
# nltk.download('punkt')
wrds = nltk.word_tokenize(pattern)
words.extend(wrds)
# append pattern of words
docs_x.append(wrds)
docs_y.append(intent["tag"])
# append tag in labels list
if intent["tag"] not in labels:
labels.append(intent["tag"])
print(labels)
print(docs_x)
print(docs_y)
# convert all the words into lowercase so that an uppercase word is not treated differently from its lowercase form
unvalid_data = ['?', ')', '(', ',', '.', '&']
words = [stemmer.stem(w.lower()) for w in words if w not in unvalid_data]
print(words)
# remove duplicate and sort
words = sorted(list(set(words)))
print(words)
# sort labels
labels = sorted(labels)
print(labels)
# we create a bag of words that will represent any given pattern
# we create a one-hot encoding which contains 1 or 0 based on whether the word exists
# in the sentence
# as neural networks only understand numeric values rather than words, we need to convert the words into a numeric encoding
# the bag of words is represented by this encoding of 0s and 1s
training = []
output = []
## if the tag is present then it will be 1, else 0 ( [0,0,0,0,1,0] in our case, since we have 6 tags )
out_empty = [0 for _ in range(len(labels))]
print(out_empty)
for x , doc in enumerate(docs_x):
bag = []
wrds= [stemmer.stem(w) for w in doc]
#print(wrds)
for w in words:
if w in wrds:
bag.append(1)
else:
bag.append(0)
# print(bag)
output_row = out_empty[:]
output_row[labels.index(docs_y[x])] = 1
# get the training and output
training.append(bag)
output.append(output_row)
training = np.array(training)
output = np.array(output)
#print(training)
#print(output)
with open('data.pickle',"wb") as f:
pickle.dump( (words, labels, training, output) , f)
```
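The one-hot output rows built in the loop above can be illustrated with a minimal, self-contained sketch; the tag names here are made up for illustration (in the real notebook they come from `intents.json`):

```python
# Hypothetical sorted tag list standing in for the labels built above
labels = ["greeting", "goodbye", "thanks"]
out_empty = [0 for _ in range(len(labels))]

def one_hot(tag):
    row = out_empty[:]            # copy the all-zero template
    row[labels.index(tag)] = 1    # set the slot for this tag
    return row

print(one_hot("goodbye"))  # [0, 1, 0]
```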
**Tensorflow**
```
# suppress warnings
import warnings
warnings.simplefilter('ignore')
# work with tensorflow (TF1-style API used by tflearn)
import tensorflow as tf
import tflearn
tf.reset_default_graph()
# all lists in training have the same length, so we can take len(training[0])
net = tflearn.input_data(shape=[None,len(training[0])])
# two fully connected hidden layers with 8 neurons each
net = tflearn.fully_connected(net,8)
net = tflearn.fully_connected(net,8)
# activation="softmax" outputs a probability for each neuron in the list (helps to find the response)
net = tflearn.fully_connected(net , len(output[0]), activation="softmax")
net = tflearn.regression(net)
model = tflearn.DNN(net)
# --------- Explanation--------------------
# INPUT DATA ---> HIDDEN LAYER ---> HIDDEN LAYER ----> OUTPUT DATA
# 45 input neurons --> 8 fully connected neurons --> 8 neurons ---> 6 neurons ("Softmax")
# n_epoch is the number of times the model will see our data
# try:
# model.load("model.tflearn")
# except:
model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
model.save("model.tflearn")
```
**Start prediction**
```
def beg_of_words(s, words):
    # bag of zeros, one slot per known word
bag = [0 for i in range(len(words))]
s_words = nltk.word_tokenize(s)
s_words = [stemmer.stem(word.lower()) for word in s_words]
    # for each stemmed word (se) in the sentence
for se in s_words:
for i,w in enumerate(words):
            # the current word we are looking at is present in the sentence
if w == se:
bag[i] = 1
return np.array(bag)
```
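To show the idea without nltk and the stemmer, here is a minimal sketch of the same bag-of-words encoding; a plain whitespace tokenizer and a made-up vocabulary stand in for `nltk.word_tokenize`, stemming, and the real word list:

```python
def simple_bag_of_words(sentence, vocabulary):
    # whitespace tokenizer stands in for nltk.word_tokenize + stemming
    tokens = sentence.lower().split()
    return [1 if w in tokens else 0 for w in vocabulary]

vocab = ["hello", "movie", "comedy", "action"]
print(simple_bag_of_words("Hello I want a comedy movie", vocab))  # [1, 1, 1, 0]
```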
**Chat Response**
```
def chat():
print("start talking with the bot (type 'quit' to exit) ")
print("\n");
while True:
user_input = input("Type something 😃 : ")
if user_input.lower() == 'quit':
break
# give the predicted response based on the word
result = model.predict([beg_of_words(user_input , words)])
        # index of the greatest value in the list
result_index = np.argmax(result)
#print the tag
tag = labels[result_index]
print("Movie Genre is {}".format(tag))
# print the response
for intent in data['intents']:
if tag == intent['tag']:
response = intent['responses']
print("🤖 : {}".format(random.choice(response)))
print("\n")
chat()
```
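The prediction step inside chat() (take the most probable tag, then pick a canned response for that tag) can be sketched without a trained model; the tags and responses below are assumptions for illustration only:

```python
import random

labels = ["comedy", "action", "horror"]            # hypothetical tags
responses = {"comedy": ["Try Airplane!"],
             "action": ["Try Mad Max."],
             "horror": ["Try The Shining."]}

def pick_response(probabilities):
    # argmax over the model's per-tag probabilities
    result_index = max(range(len(probabilities)), key=probabilities.__getitem__)
    tag = labels[result_index]
    return tag, random.choice(responses[tag])

print(pick_response([0.1, 0.8, 0.1]))  # ('action', 'Try Mad Max.')
```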
# Feature importance per signature type
This notebook analyses which characters are most important for each individual signature type. In other words, it asks what makes each cluster unique compared to all the others.
```
import numpy as np
import pandas as pd
import geopandas as gpd
import dask.dataframe
import matplotlib.pyplot as plt
import urbangrammar_graphics as ugg
import seaborn as sns
from matplotlib.lines import Line2D
from sklearn.ensemble import RandomForestClassifier
%time standardized_form = dask.dataframe.read_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/form/standardized/").set_index('hindex')
%time stand_fn = dask.dataframe.read_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/function/standardized/")
%time data = dask.dataframe.multi.concat([standardized_form, stand_fn], axis=1).replace([np.inf, -np.inf], np.nan).fillna(0)
%time data = data.drop(columns=["keep_q1", "keep_q2", "keep_q3"])
%time data = data.compute()
labels_l1 = pd.read_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/KMeans10GB.pq")
labels_l2_9 = pd.read_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/clustergram_cl9_labels.pq")
labels_l2_2 = pd.read_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/subclustering_cluster2_k3.pq")
labels = labels_l1.copy()
labels.loc[labels.kmeans10gb == 9, 'kmeans10gb'] = labels_l2_9['9'].values + 90
labels.loc[labels.kmeans10gb == 2, 'kmeans10gb'] = labels_l2_2['subclustering_cluster2_k3'].values + 20
outliers = [98, 93, 96, 97]
mask = ~labels.kmeans10gb.isin(outliers)
```
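The relabelling above (replacing level-1 clusters 2 and 9 with their level-2 subcluster labels, offset by 20 and 90 so the ids stay unique) can be sketched on toy data:

```python
import numpy as np

kmeans10gb = np.array([0, 2, 9, 2, 9, 1])   # toy level-1 labels
sub_of_2 = np.array([0, 1])                  # level-2 labels for cells in cluster 2
sub_of_9 = np.array([1, 4])                  # level-2 labels for cells in cluster 9

# offset subcluster ids so they cannot collide with level-1 ids
kmeans10gb[kmeans10gb == 2] = sub_of_2 + 20
kmeans10gb[kmeans10gb == 9] = sub_of_9 + 90
print(kmeans10gb.tolist())  # [0, 20, 91, 21, 94, 1]
```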
## Feature importance per cluster
```
labels.kmeans10gb.unique()
imps = pd.DataFrame()
for cluster in labels.kmeans10gb.unique():
if cluster not in outliers:
cluster_bool = labels.loc[mask]['kmeans10gb'].apply(lambda x: 1 if x == cluster else 0)
clf = RandomForestClassifier(n_estimators=10, n_jobs=-1, random_state=42, verbose=1)
clf = clf.fit(data.loc[mask].values, cluster_bool.values)
importances = pd.Series(clf.feature_importances_, index=data.columns).sort_values(ascending=False)
imps[f'cluster_{cluster}'] = importances.head(50).index.values
imps[f'cluster_{cluster}_vals'] = importances.head(50).values
chars = [c for c in imps.columns if 'vals' not in c]
imps[sorted(chars)]
imps.to_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/per_cluster_importance.pq")
ims = pd.read_parquet("../../urbangrammar_samba/spatial_signatures/clustering_data/per_cluster_importance.pq")
n_chars = 10
names = ims[[c for c in ims.columns if "_vals" not in c]].head(n_chars)
values = ims[[c for c in ims.columns if "_vals" in c]].head(n_chars)
names.columns
coded = {
'population': 'func_population',
'night_lights': 'func_night_lights',
'A, B, D, E. Agriculture, energy and water': 'func_workplace_abde',
'C. Manufacturing': 'func_workplace_c',
'F. Construction': 'func_workplace_f',
'G, I. Distribution, hotels and restaurants': 'func_workplace_gi',
'H, J. Transport and communication': 'func_workplace_hj',
'K, L, M, N. Financial, real estate, professional and administrative activities': 'func_workplace_klmn',
'O,P,Q. Public administration, education and health': 'func_workplace_opq',
'R, S, T, U. Other': 'func_workplace_rstu',
'Code_18_124': 'func_corine_124',
'Code_18_211': 'func_corine_211',
'Code_18_121': 'func_corine_121',
'Code_18_421': 'func_corine_421',
'Code_18_522': 'func_corine_522',
'Code_18_142': 'func_corine_142',
'Code_18_141': 'func_corine_141',
'Code_18_112': 'func_corine_112',
'Code_18_231': 'func_corine_231',
'Code_18_311': 'func_corine_311',
'Code_18_131': 'func_corine_131',
'Code_18_123': 'func_corine_123',
'Code_18_122': 'func_corine_122',
'Code_18_512': 'func_corine_512',
'Code_18_243': 'func_corine_243',
'Code_18_313': 'func_corine_313',
'Code_18_412': 'func_corine_412',
'Code_18_321': 'func_corine_321',
'Code_18_322': 'func_corine_322',
'Code_18_324': 'func_corine_324',
'Code_18_111': 'func_corine_111',
'Code_18_423': 'func_corine_423',
'Code_18_523': 'func_corine_523',
'Code_18_312': 'func_corine_312',
'Code_18_133': 'func_corine_133',
'Code_18_333': 'func_corine_333',
'Code_18_332': 'func_corine_332',
'Code_18_411': 'func_corine_411',
'Code_18_132': 'func_corine_132',
'Code_18_222': 'func_corine_222',
'Code_18_242': 'func_corine_242',
'Code_18_331': 'func_corine_331',
'Code_18_511': 'func_corine_511',
'Code_18_334': 'func_corine_334',
'Code_18_244': 'func_corine_244',
'Code_18_521': 'func_corine_521',
'mean': 'func_ndvi',
'supermarkets_nearest': 'func_supermarkets_nearest',
'supermarkets_counts': 'func_supermarkets_counts',
'listed_nearest': 'func_listed_nearest',
'listed_counts': 'func_listed_counts',
'fhrs_nearest': 'func_fhrs_nearest',
'fhrs_counts': 'func_fhrs_counts',
'culture_nearest': 'func_culture_nearest',
'culture_counts': 'func_culture_counts',
'nearest_water': 'func_water_nearest',
'nearest_retail_centre': 'func_retail_centrenearest',
'sdbAre': 'form_sdbAre',
'sdbPer': 'form_sdbPer',
'sdbCoA': 'form_sdbCoA',
'ssbCCo': 'form_ssbCCo',
'ssbCor': 'form_ssbCor',
'ssbSqu': 'form_ssbSqu',
'ssbERI': 'form_ssbERI',
'ssbElo': 'form_ssbElo',
'ssbCCM': 'form_ssbCCM',
'ssbCCD': 'form_ssbCCD',
'stbOri': 'form_stbOri',
'sdcLAL': 'form_sdcLAL',
'sdcAre': 'form_sdcAre',
'sscCCo': 'form_sscCCo',
'sscERI': 'form_sscERI',
'stcOri': 'form_stcOri',
'sicCAR': 'form_sicCAR',
'stbCeA': 'form_stbCeA',
'mtbAli': 'form_mtbAli',
'mtbNDi': 'form_mtbNDi',
'mtcWNe': 'form_mtcWNe',
'mdcAre': 'form_mdcAre',
'ltcWRE': 'form_ltcWRE',
'ltbIBD': 'form_ltbIBD',
'sdsSPW': 'form_sdsSPW',
'sdsSWD': 'form_sdsSWD',
'sdsSPO': 'form_sdsSPO',
'sdsLen': 'form_sdsLen',
'sssLin': 'form_sssLin',
'ldsMSL': 'form_ldsMSL',
'mtdDeg': 'form_mtdDeg',
'lcdMes': 'form_lcdMes',
'linP3W': 'form_linP3W',
'linP4W': 'form_linP4W',
'linPDE': 'form_linPDE',
'lcnClo': 'form_lcnClo',
'ldsCDL': 'form_ldsCDL',
'xcnSCl': 'form_xcnSCl',
'mtdMDi': 'form_mtdMDi',
'lddNDe': 'form_lddNDe',
'linWID': 'form_linWID',
'stbSAl': 'form_stbSAl',
'sddAre': 'form_sddAre',
'sdsAre': 'form_sdsAre',
'sisBpM': 'form_sisBpM',
'misCel': 'form_misCel',
'mdsAre': 'form_mdsAre',
'lisCel': 'form_lisCel',
'ldsAre': 'form_ldsAre',
'ltcRea': 'form_ltcRea',
'ltcAre': 'form_ltcAre',
'ldeAre': 'form_ldeAre',
'ldePer': 'form_ldePer',
'lseCCo': 'form_lseCCo',
'lseERI': 'form_lseERI',
'lseCWA': 'form_lseCWA',
'lteOri': 'form_lteOri',
'lteWNB': 'form_lteWNB',
'lieWCe': 'form_lieWCe',
}
types = {
0: "Countryside agriculture",
1: "Accessible suburbia",
3: "Open sprawl",
4: "Wild countryside",
5: "Warehouse/Park land",
6: "Gridded residential quarters",
7: "Urban buffer",
8: "Disconnected suburbia",
20: "Dense residential neighbourhoods",
21: "Connected residential neighbourhoods",
22: "Dense urban neighbourhoods",
90: "Local urbanity",
91: "Concentrated urbanity",
92: "Regional urbanity",
94: "Metropolitan urbanity",
95: "Hyper concentrated urbanity",
93: "outlier",
96: "outlier",
97: "outlier",
98: "outlier",
}
def cmap(name):
if "_q" in name:
name = name[:-3]
if coded[name][:4] == "form":
return ugg.COLORS[1]
if coded[name][:4] == "func":
return ugg.COLORS[4]
raise ValueError()
x = np.repeat(np.arange(0, 16), n_chars)
y = np.tile(np.arange(0, n_chars), 16) * - 1
colors = names.applymap(cmap).values.T.flatten()
alpha = values.values.T.flatten() / values.values.T.flatten().max()
ticks = [types[int(c[8:])] for c in names.columns]
fig, ax = plt.subplots(figsize=(16, n_chars))
ax.scatter(x, y, alpha=alpha, color=colors, marker="s", s=2500)
plt.tight_layout()
# ax.set_axis_off()
plt.xticks(np.arange(0, 16), ticks, rotation='vertical')
plt.yticks([0, -9], ["top predictor", "10th predictor"])
sns.despine(left=True, bottom=True)
# plt.savefig("figs/feature_imp_10.pdf")
```
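The loop above fits one one-vs-rest random forest per cluster; the binary target it builds for each cluster can be sketched as:

```python
import numpy as np

def one_vs_rest(labels, cluster):
    # 1 where the observation belongs to this cluster, 0 elsewhere
    return (labels == cluster).astype(int)

cluster_labels = np.array([0, 1, 1, 3, 0, 3])  # toy cluster assignments
print(one_vs_rest(cluster_labels, 1).tolist())  # [0, 1, 1, 0, 0, 0]
```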
# Contents
* [Plot](#Plot)
* [Subplot](#Subplot)
* [Placement of ticks and custom tick labels](#Placement-of-ticks-and-custom-tick-labels)
* [Annotate](#Annotate)
* [Axis Grid](#Axis-Grid)
* [Axis spines](#Axis-spines)
* [Twin axes](#Twin-axes)
* [Axes where x and y is zero](#Axes-where-x-and-y-is-zero)
* [Figure](#Figure)
* [Figure size and aspect ratio (DPI)](#Figure-size-and-aspect-ratio-(DPI))
* [Saving figures](#Saving-figures)
* [Setting colors, linewidths, linetypes](#Setting-colors,-linewidths,-linetypes)
* [Scatter](#Scatter)
* [Histogram](#Histogram)
* [Other 2D plot styles](#Other-2D-plot-styles)
* [Colormap and contour figures](#Colormap-and-contour-figures)
* [Pcolor](#Pcolor)
* [Imshow](#Imshow)
* [Box Plot](#Box-Plot)
* [Referans Link](#Referans-Link)
```
import numpy as np
import matplotlib.pyplot as plt
# required in Jupyter notebooks to render plots inline
%matplotlib inline
```
## Plot
```
plt.plot([1,2,3,4]) # draws a line plot
plt.ylabel("number") # labels the y-axis
plt.xlabel("time") # labels the x-axis
plt.show() # displays the figure
plt.plot([1,2,3,4],[1,4,9,16]) # the first list is x, the second is y
plt.show()
plt.plot([1,2,3,4],[1,4,9,16],'ro') # r: red, o: circle marker
plt.axis([0,6,0,20]) # set the axis ranges
plt.show()
x=np.linspace(0,5,11)
y=x**2
print(x)
print(y)
plt.plot(x, y, 'r') # 'r' is the color red
plt.xlabel('X Axis Title Here')
plt.ylabel('Y Axis Title Here')
plt.title('String Title Here')
plt.show()
# linewidth: width of the line
plt.plot(x,y,'r--',linewidth=5.0)
plt.text(2.5, 10, r"$y=x^2$", fontsize=50, color="blue")
plt.show()
t=np.arange(0,5,0.2)
# r: red, b: blue, g: green
# --: dashed line, ^: triangle, s: square
plt.plot(t,t,'r--', label = "t")
plt.plot(t,t**2,'bs', label = "t^2")
plt.plot(t,t**3, "g^", label = "t^3")
plt.legend() # draws the legend
plt.show()
```
## Subplot
```
# plt.subplot(nrows, ncols, plot_number)
plt.subplot(1,2,1)
plt.plot(x, y, 'r--')
plt.subplot(1,2,2)
plt.plot(y, x, 'g*-');
# Use similar to plt.figure() except use tuple unpacking to grab fig and axes
# Empty canvas of 1 by 2 subplots
fig, axes = plt.subplots(nrows=1, ncols=3)
for ax in axes:
ax.plot(x, y, 'g')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('title')
fig.tight_layout() # automatically adjusts the positions of the axes on the figure canvas so that content does not overlap
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].plot(x, x**2, x, x**3)
axes[0].set_title("default axes ranges")
axes[1].plot(x, x**2, x, x**3)
axes[1].axis('tight')
axes[1].set_title("tight axes")
axes[2].plot(x, x**2, x, x**3)
axes[2].set_ylim([0, 60])
axes[2].set_xlim([2, 5])
axes[2].set_title("custom axes range");
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
```
### Placement of ticks and custom tick labels
```
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=10)
ax.set_xticks([1, 2, 3, 4, 5])
ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=20)
yticks = [0, 50, 100, 150]
ax.set_yticks(yticks)
ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
```
### Annotate
```
ax = plt.subplot(111)
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict(facecolor='black', shrink=0.05),
)
plt.ylim(-2, 2) #Y eksenini boyutlandırdı.
plt.show()
```
### Axis Grid
```
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=3)
```
### Axis spines
```
fig, ax = plt.subplots(figsize=(6,2))
ax.spines['bottom'].set_color('blue')
ax.spines['top'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(4)
# turn off axis spine to the right
ax.spines['right'].set_color("none")
ax.yaxis.tick_left() # only ticks on the left side
```
### Twin axes
```
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
label.set_color("red")
```
### Axes where x and y is zero
```
fig, ax = plt.subplots()
ax.spines['right'].set_color('green')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0
xx = np.linspace(-0.75, 1., 100)
ax.plot(xx, xx**3);
```
## Figure
```
fig = plt.figure()
help(fig.add_axes)
x=np.linspace(0,5,11)
y=x**2
# Create Figure (empty canvas)
fig = plt.figure()
# Add set of axes to figure
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)
# Plot on that set of axes
axes.plot(x, y, 'b')
axes.set_xlabel('Set X Label') # Notice the use of set_ to begin methods
axes.set_ylabel('Set y Label')
axes.set_title('Set Title')
# Creates blank canvas
fig = plt.figure()
axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes
axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
# Larger Figure Axes 1
axes1.plot(x, y, 'b')
axes1.set_xlabel('X_label_axes1')
axes1.set_ylabel('Y_label_axes1')
axes1.set_title('Axes 1 Title')
# Insert Figure Axes 2
axes2.plot(y, x, 'r')
axes2.set_xlabel('X_label_axes2')
axes2.set_ylabel('Y_label_axes2')
axes2.set_title('Axes 2 Title');
x = np.linspace(0,2*np.pi,20)
y = np.linspace(0,5,20)
print(x)
print(y)
# Create Figure (empty canvas)
fig = plt.figure()
# Add set of axes to figure
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9], projection='polar') # left, bottom, width, height (range 0 to 1)
# Plot on that set of axes
axes.plot(x, y, 'b')
axes.set_xlabel('Set X Label')
axes.set_ylabel('Set y Label')
axes.set_title('Set Title');
```
### Figure size and aspect ratio (DPI)
```
x=np.linspace(0,5,11)
y=x**2
fig, axes = plt.subplots(figsize=(10,4), dpi=100)
axes.plot(x, y, 'r')
axes.set_xlabel('x')
axes.set_ylabel('y')
axes.set_title('title');
```
## Saving figures
```
fig.savefig("filename.png", dpi=200)
```
## Setting colors, linewidths, linetypes
```
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(x, x+1, color="red", linewidth=0.25)
ax.plot(x, x+2, color="red", linewidth=0.50)
ax.plot(x, x+3, color="red", linewidth=1.00)
ax.plot(x, x+4, color="red", linewidth=2.00)
# possible linestyle options: '-', '--', '-.', ':', 'steps'
ax.plot(x, x+5, color="green", lw=3, linestyle='-')
ax.plot(x, x+6, color="green", lw=3, ls='-.')
ax.plot(x, x+7, color="green", lw=3, ls=':')
# custom dash
line, = ax.plot(x, x+8, color="black", lw=4.50)
line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...
# possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ...
ax.plot(x, x+ 9, color="blue", lw=3, ls='-', marker='+')
ax.plot(x, x+10, color="blue", lw=3, ls='--', marker='o')
ax.plot(x, x+11, color="blue", lw=3, ls='-', marker='s')
ax.plot(x, x+12, color="blue", lw=3, ls='--', marker='1')
# marker size and color
ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=20)
ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4)
ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red")
ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8,
markerfacecolor="yellow", markeredgewidth=3, markeredgecolor="green");
```
## Scatter
```
plt.scatter(x,y)
data={'a':np.arange(50),
'c':np.random.randint(0,50,50),
'd':np.random.randn(50)}
data['b']=data['a']+10*np.random.randn(50)
data['d']=np.abs(data['d'])*100
# c: color, s: marker size
plt.scatter('a','b',c='c',s='d',data=data) # draws circles
plt.xlabel('a values')
plt.ylabel('b values')
plt.show()
```
### Histogram
```
# A histogram
n = np.random.randn(100000)
fig, axes = plt.subplots(1, 2, figsize=(12,4))
axes[0].hist(n)
axes[0].set_title("Default histogram")
axes[0].set_xlim((min(n), max(n)))
axes[1].hist(n, cumulative=True, bins=50)
axes[1].set_title("Cumulative detailed histogram")
axes[1].set_xlim((min(n), max(n)));
mu, sigma=100, 15
x=mu+sigma*np.random.randn(10000)
n,bins,patches=plt.hist(x,50,density=True,facecolor='g',alpha=0.75) # draws the histogram
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$') # writes text onto the plot
plt.axis([40, 160, 0, 0.03]) # set the axis ranges
plt.grid(True) # grid (guide) lines
plt.show()
```
## Other 2D plot styles
```
names = ['group_a', 'group_b', 'group_c']
values = [1, 10, 100]
plt.figure(1, figsize=(9, 5)) # set the figure size
plt.subplot(131) # split the figure into 1 row and 3 columns; select the first
plt.bar(names, values) # bar chart
plt.title('Bar Chart') # adds a title
plt.subplot(132)
plt.scatter(names, values)
plt.title('Scatter Plot')
plt.subplot(133)
plt.plot(names, values)
plt.title('Line Plot')
plt.suptitle('Categorical Plotting') # adds a figure-level title
plt.show()
n = np.array([0,1,2,3,4,5])
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
```
## Colormap and contour figures
```
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)
phi_m = np.linspace(0, 2*np.pi, 100)
phi_p = np.linspace(0, 2*np.pi, 100)
X,Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
```
### Pcolor
```
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=plt.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
```
### Imshow
```
fig, ax = plt.subplots()
im = ax.imshow(Z, cmap=plt.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
im.set_interpolation('bilinear')
cb = fig.colorbar(im, ax=ax)
```
### Contour
```
fig, ax = plt.subplots()
cnt = ax.contour(Z, cmap=plt.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
```
## Box Plot
```
data = [np.random.normal(0, std, 100) for std in range(1, 4)]
# rectangular box plot
plt.boxplot(data,vert=True,patch_artist=True);
```
## Reference Links
### https://matplotlib.org/tutorials/introductory/pyplot.html
### https://matplotlib.org/gallery.html
### https://www.southampton.ac.uk/~fangohr/training/python/notebooks/Matplotlib.html
### http://www.labri.fr/perso/nrougier/teaching/matplotlib/
# Stemming (and Inverse Stemming) words from multiple languages
For more information on the inner workings of the algorithm, refer to:
http://snowball.tartarus.org/algorithms/french/stemmer.html
The following content is derived from the quickstart guide [here](https://github.com/snowballstem/pystemmer/blob/master/docs/quickstart_python3.txt), which is [licensed](https://github.com/snowballstem/pystemmer/blob/master/LICENSE) under the MIT License and contains traces of the 3-Clause BSD License.
```
# !pip3 install pystemmer
# Quickstart
# This is a very brief introduction to the use of PyStemmer.
# First, import the library:
import Stemmer
# Just for show, we'll display a list of the available stemming algorithms:
print(Stemmer.algorithms())
# ['danish', 'dutch', 'english', 'finnish', 'french', 'german', 'hungarian', 'italian', 'norwegian', 'porter', 'portuguese', 'romanian', 'russian', 'spanish', 'swedish', 'turkish']
# Now, we'll get an instance of the french stemming algorithm:
stemmer = Stemmer.Stemmer('french')
# Stem a single word:
print(stemmer.stemWord('coder'))
# cod
# Stem a list of words:
print(stemmer.stemWords(['coder', 'codera']))
# ['cod', 'cod']
# Strings which are supplied are assumed to be unicode.
# We can use UTF-8 encoded input, too:
print(stemmer.stemWords(['coder', b'codera']))
# ['cod', b'cod']
# Each instance of the stemming algorithms uses a cache to speed up processing of
# common words. By default, the cache holds 10000 words, but this may be
# modified. The cache may be disabled entirely by setting the cache size to 0:
print(stemmer.maxCacheSize)
# 10000
stemmer.maxCacheSize = 1000
print(stemmer.maxCacheSize)
# 1000
```
## How to do Inverse Stemming?
We also need a backward pass, to convert the topics from the LDA back into words. For more information, see how the above Stemmer class is wrapped into the code of the current [project](https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA); this is done in [lda_service/logic/stemmer.py](https://github.com/ArtificiAI/Multilingual-Latent-Dirichlet-Allocation-LDA/blob/master/lda_service/logic/stemmer.py)
Quickly explained, inverse stemming can be done by keeping track of each original word that pointed to the stemmed version of that word, together with its count. During the inverse pass, the word with the top count can then be retrieved.
```python
# Original comments:
['Un super-chat marche sur le trottoir',
'Les super-chats aiment ronronner',
'Les chats sont ronrons',
'Un super-chien aboie',
'Deux super-chiens',
"Combien de chiens sont en train d'aboyer?"]
# Original comments without stop words:
['super-chat marche trottoir',
'super-chats aiment ronronner',
'chats ronrons',
'super-chien aboie',
'Deux super-chiens',
'Combien chiens train aboyer?']
# Stemmed comments:
['sup chat march trottoir',
'sup chat aiment ronron',
'chat ronron',
'sup chien aboi',
'deux sup chien',
'combien chien train aboi']
# Custom stemmer's cache that was saved for the inverse pass later on which will need to choose the top corresponding words back from their counts:
{'aboi': {'aboie': 1, 'aboyer': 1},
'aiment': {'aiment': 1},
'chat': {'chat': 1, 'chats': 2},
'chien': {'chien': 1, 'chiens': 2},
'combien': {'Combien': 1},
'deux': {'Deux': 1},
'march': {'marche': 1},
'ronron': {'ronronner': 1, 'ronrons': 1},
'sup': {'super': 4},
'train': {'train': 1},
'trottoir': {'trottoir': 1}}
```
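A minimal sketch of this count-based inverse stemming follows; a crude strip-the-trailing-'s' stemmer stands in for PyStemmer purely for illustration:

```python
from collections import defaultdict, Counter

def crude_stem(word):
    # toy stemmer: strip a trailing 's'; a real project would use PyStemmer
    w = word.lower()
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

class InvertibleStemmer:
    def __init__(self):
        self.cache = defaultdict(Counter)  # stem -> Counter of original words

    def stem(self, word):
        s = crude_stem(word)
        self.cache[s][word] += 1
        return s

    def unstem(self, stem):
        # inverse pass: the most frequent original form wins
        return self.cache[stem].most_common(1)[0][0]

st = InvertibleStemmer()
for w in ["chat", "chats", "chats", "chien", "chiens", "chiens"]:
    st.stem(w)
print(st.unstem("chat"), st.unstem("chien"))  # chats chiens
```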
# Run BASiCS on simulated data
For 40 genes and 10 spikes
- 500: 77.340, 235.535
- 1000: 149.159 451.640
- 5000: 1398.347
- 10000: 2433.064
- 20000: 4021.876
- 50000: 8715.963
- 100000: 16965.28, 30328.91
For 200 genes and 10 spikes
- 500: 294.071, 911.659
## scHOT
For 3 genes
- 500: 1780.265
- 1000: 6474.038
- 5000: 34651.179
```
data_path <- "/data_volume/memento/simulation/runtime/"
setwd(data_path)
library('BASiCS')
library(zellkonverter)
```
### Time BASiCS
```
for (num_cell in c(1000, 5000, 10000, 20000, 50000)) {
print(num_cell)
set.seed(1)
Counts <- matrix(rpois(50*num_cell, 2), ncol = num_cell)
rownames(Counts) <- c(paste0("Gene", 1:40), paste0("Spike", 1:10))
Tech <- c(rep(FALSE,40),rep(TRUE,10))
set.seed(2)
SpikeInput <- rgamma(10,1,1)
SpikeInfo <- data.frame("SpikeID" = paste0("Spike", 1:10),
"SpikeInput" = SpikeInput)
# With batch structure
DataExample <- newBASiCS_Data(Counts, Tech, SpikeInfo,
BatchInfo = rep(c(1,2), each = num_cell/2))
ptm <- proc.time()
Chain <- BASiCS_MCMC(Data = DataExample, N = 20000, Thin = 20, Burn = 10000, PrintProgress = FALSE, Regression = TRUE , WithSpikes=FALSE, Threads=1)
print(proc.time() - ptm)
}
```
### Time scHOT
```
library(SingleCellExperiment)
library(ggplot2)
library(scHOT)
library(scater)
library(Matrix)
library(matrixStats)
data(liver)
liver_pseudotime_hep <- liver$liver_pseudotime_hep
liver_branch_hep <- liver$liver_branch_hep
first_branch_cells <- liver$first_branch_cells
gene_to_test <- as.matrix(c("Birc5", "H2afz", "Tacc3"))
a <- sort(sample(c(0,1), length(first_branch_cells), replace=TRUE))
names(a) <- first_branch_cells
dim(liver_branch_hep[,first_branch_cells])
# num_cell = 176
for (num_cell in c(500000, 500, 1000, 2000, 10000, 50000, 100000)){
print(num_cell)
num_gene = 3
cell_names <- paste0("Cell", 1:num_cell)
counts <- matrix(rpois(num_gene*num_cell, 2), ncol = num_cell)
rownames(counts) <- paste0("Gene", 1:num_gene)
colnames(counts) <- cell_names
a <- sort(sample(c(0,1), num_cell, replace=TRUE))
names(a) <- cell_names
scHOT_traj <- scHOT_buildFromMatrix(
mat = counts,
cellData = list(pseudotime = a),
positionType = "trajectory",
positionColData = "pseudotime")
num_treatment = sum(a)
num_control = num_cell-num_treatment
weighted_matrix = matrix(0, nrow=2, ncol=num_cell)
weighted_matrix[1, seq(num_control)] = 1
weighted_matrix[2, seq(num_control, num_cell)] = 1
weighted_matrix = as(Matrix(weighted_matrix, sparse=TRUE), "dgCMatrix")
scHOT_traj@weightMatrix <- weighted_matrix
ptm <- proc.time()
scHOT_traj_wrap = scHOT(scHOT_traj,
testingScaffold = as.matrix(c('Gene1', 'Gene2', 'Gene3')),
higherOrderFunction = matrixStats::weightedVar,
higherOrderFunctionType = "weighted",
numberPermutations = 1000,
weightMatrix =weighted_matrix)
print(proc.time() - ptm)
}
```
### scHOT step by step
```
# scHOT_traj@testingScaffold
scHOT_traj <- scHOT_addTestingScaffold(scHOT_traj, gene_to_test)
# scHOT_traj@testingScaffold
# scHOT_traj@weightMatrix
scHOT_traj <- scHOT_setWeightMatrix(scHOT_traj,
positionType = "trajectory",
positionColData = c("pseudotime"),
type="block",
nrow.out = 50,
averageAcrossTrajectoryTies=TRUE,
span = 0.1)
# num_cells = 176
# num_treatment = sum(liver_pseudotime_hep[first_branch_cells])
# num_control = num_cells-num_treatment
# weighted_matrix = matrix(0, nrow=num_cells, ncol=num_cells)
# weighted_matrix[seq(num_control), seq(num_control)] = 1
# weighted_matrix[seq(num_control, num_cells), seq(num_control, num_cells)] = 1
# scHOT_traj@weightMatrix <- weighted_matrix
dim(scHOT_traj@weightMatrix)
#> [1] 176 176
class(scHOT_traj@weightMatrix)
#> [1] "dgCMatrix"
#> attr(,"package")
#> [1] "Matrix"
plot(scHOT_traj@weightMatrix[50,])
a = scHOT_traj@weightMatrix
a
seq(num_control, num_cells)
num_cells = 176
num_treatment = sum(liver_pseudotime_hep[first_branch_cells])
num_control = num_cells-num_treatment
weighted_matrix = matrix(0, nrow=num_cells, ncol=num_cells)
weighted_matrix[seq(num_control), seq(num_control)] = 1
weighted_matrix[seq(num_control, num_cells), seq(num_control, num_cells)] = 1
weight_matrix = Matrix()
scHOT_traj <- scHOT_buildFromMatrix(
mat = DataExample,
cellData = colData(DataExample),
positionType = "trajectory",
positionColData = "pseudotime")
scHOT_traj
scHOT_traj_wrap = scHOT(scHOT_traj,
testingScaffold = gene_to_test,
higherOrderFunction = matrixStats::weightedVar,
higherOrderFunctionType = "weighted",
numberPermutations = 50)
```
```
Chain <- BASiCS_MCMC(Data = Data, N = 20000, Thin = 10, Burn = 500, PrintProgress = FALSE, Regression = TRUE, WithSpikes=FALSE)
library('scHOT')
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install("scHOT")
colData(Data)$BatchInfo
```
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Note: Dendrograms are available in version 1.8.7+.
Run `pip install plotly --upgrade` to update your Plotly version.
```
import plotly
plotly.__version__
```
##### Basic Dendrogram
```
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
X = np.random.rand(15, 15)
dendro = ff.create_dendrogram(X)
dendro['layout'].update({'width':800, 'height':500})
py.iplot(dendro, filename='simple_dendrogram')
```
##### Set Color Threshold
```
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
X = np.random.rand(15, 15)
dendro = ff.create_dendrogram(X, color_threshold=1.5)
dendro['layout'].update({'width':800, 'height':500})
py.iplot(dendro, filename='simple_dendrogram_with_color_threshold')
```
##### Set Orientation and Add Labels
```
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
X = np.random.rand(10, 10)
names = ['Jack', 'Oxana', 'John', 'Chelsea', 'Mark', 'Alice', 'Charlie', 'Rob', 'Lisa', 'Lily']
fig = ff.create_dendrogram(X, orientation='left', labels=names)
fig['layout'].update({'width':800, 'height':800})
py.iplot(fig, filename='dendrogram_with_labels')
```
##### Plot a Dendrogram with a Heatmap
```
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.figure_factory as ff
import numpy as np
from scipy.spatial.distance import pdist, squareform
# get data
data = np.genfromtxt("http://files.figshare.com/2133304/ExpRawData_E_TABM_84_A_AFFY_44.tab",
names=True,usecols=tuple(range(1,30)),dtype=float, delimiter="\t")
data_array = data.view((float, len(data.dtype.names)))
data_array = data_array.transpose()
labels = data.dtype.names
# Initialize figure by creating upper dendrogram
figure = ff.create_dendrogram(data_array, orientation='bottom', labels=labels)
for i in range(len(figure['data'])):
figure['data'][i]['yaxis'] = 'y2'
# Create Side Dendrogram
dendro_side = ff.create_dendrogram(data_array, orientation='right')
for i in range(len(dendro_side['data'])):
    dendro_side['data'][i]['xaxis'] = 'x2'
# Add Side Dendrogram Data to Figure
for data in dendro_side['data']:
    figure.add_trace(data)
# Create Heatmap
dendro_leaves = dendro_side['layout']['yaxis']['ticktext']
dendro_leaves = list(map(int, dendro_leaves))
data_dist = pdist(data_array)
heat_data = squareform(data_dist)
heat_data = heat_data[dendro_leaves,:]
heat_data = heat_data[:,dendro_leaves]
heatmap = [
go.Heatmap(
x = dendro_leaves,
y = dendro_leaves,
z = heat_data,
colorscale = 'Blues'
)
]
heatmap[0]['x'] = figure['layout']['xaxis']['tickvals']
heatmap[0]['y'] = dendro_side['layout']['yaxis']['tickvals']
# Add Heatmap Data to Figure
for data in heatmap:
    figure.add_trace(data)
# Edit Layout
figure['layout'].update({'width':800, 'height':800,
'showlegend':False, 'hovermode': 'closest',
})
# Edit xaxis
figure['layout']['xaxis'].update({'domain': [.15, 1],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'ticks':""})
# Edit xaxis2
figure['layout'].update({'xaxis2': {'domain': [0, .15],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'showticklabels': False,
'ticks':""}})
# Edit yaxis
figure['layout']['yaxis'].update({'domain': [0, .85],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'showticklabels': False,
'ticks': ""})
# Edit yaxis2
figure['layout'].update({'yaxis2':{'domain':[.825, .975],
'mirror': False,
'showgrid': False,
'showline': False,
'zeroline': False,
'showticklabels': False,
'ticks':""}})
# Plot!
py.iplot(figure, filename='dendrogram_with_heatmap')
```
### Reference
```
help(ff.create_dendrogram)
```
# Example: CanvasXpress splom Chart No. 7
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/splom-7.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="splom7",
data={
"z": {
"Species": [
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"setosa",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"versicolor",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica",
"virginica"
]
},
"y": {
"vars": [
"s1",
"s2",
"s3",
"s4",
"s5",
"s6",
"s7",
"s8",
"s9",
"s10",
"s11",
"s12",
"s13",
"s14",
"s15",
"s16",
"s17",
"s18",
"s19",
"s20",
"s21",
"s22",
"s23",
"s24",
"s25",
"s26",
"s27",
"s28",
"s29",
"s30",
"s31",
"s32",
"s33",
"s34",
"s35",
"s36",
"s37",
"s38",
"s39",
"s40",
"s41",
"s42",
"s43",
"s44",
"s45",
"s46",
"s47",
"s48",
"s49",
"s50",
"s51",
"s52",
"s53",
"s54",
"s55",
"s56",
"s57",
"s58",
"s59",
"s60",
"s61",
"s62",
"s63",
"s64",
"s65",
"s66",
"s67",
"s68",
"s69",
"s70",
"s71",
"s72",
"s73",
"s74",
"s75",
"s76",
"s77",
"s78",
"s79",
"s80",
"s81",
"s82",
"s83",
"s84",
"s85",
"s86",
"s87",
"s88",
"s89",
"s90",
"s91",
"s92",
"s93",
"s94",
"s95",
"s96",
"s97",
"s98",
"s99",
"s100",
"s101",
"s102",
"s103",
"s104",
"s105",
"s106",
"s107",
"s108",
"s109",
"s110",
"s111",
"s112",
"s113",
"s114",
"s115",
"s116",
"s117",
"s118",
"s119",
"s120",
"s121",
"s122",
"s123",
"s124",
"s125",
"s126",
"s127",
"s128",
"s129",
"s130",
"s131",
"s132",
"s133",
"s134",
"s135",
"s136",
"s137",
"s138",
"s139",
"s140",
"s141",
"s142",
"s143",
"s144",
"s145",
"s146",
"s147",
"s148",
"s149",
"s150"
],
"smps": [
"Sepal.Length",
"Sepal.Width",
"Petal.Length",
"Petal.Width"
],
"data": [
[
5.1,
3.5,
1.4,
0.2
],
[
4.9,
3,
1.4,
0.2
],
[
4.7,
3.2,
1.3,
0.2
],
[
4.6,
3.1,
1.5,
0.2
],
[
5,
3.6,
1.4,
0.2
],
[
5.4,
3.9,
1.7,
0.4
],
[
4.6,
3.4,
1.4,
0.3
],
[
5,
3.4,
1.5,
0.2
],
[
4.4,
2.9,
1.4,
0.2
],
[
4.9,
3.1,
1.5,
0.1
],
[
5.4,
3.7,
1.5,
0.2
],
[
4.8,
3.4,
1.6,
0.2
],
[
4.8,
3,
1.4,
0.1
],
[
4.3,
3,
1.1,
0.1
],
[
5.8,
4,
1.2,
0.2
],
[
5.7,
4.4,
1.5,
0.4
],
[
5.4,
3.9,
1.3,
0.4
],
[
5.1,
3.5,
1.4,
0.3
],
[
5.7,
3.8,
1.7,
0.3
],
[
5.1,
3.8,
1.5,
0.3
],
[
5.4,
3.4,
1.7,
0.2
],
[
5.1,
3.7,
1.5,
0.4
],
[
4.6,
3.6,
1,
0.2
],
[
5.1,
3.3,
1.7,
0.5
],
[
4.8,
3.4,
1.9,
0.2
],
[
5,
3,
1.6,
0.2
],
[
5,
3.4,
1.6,
0.4
],
[
5.2,
3.5,
1.5,
0.2
],
[
5.2,
3.4,
1.4,
0.2
],
[
4.7,
3.2,
1.6,
0.2
],
[
4.8,
3.1,
1.6,
0.2
],
[
5.4,
3.4,
1.5,
0.4
],
[
5.2,
4.1,
1.5,
0.1
],
[
5.5,
4.2,
1.4,
0.2
],
[
4.9,
3.1,
1.5,
0.2
],
[
5,
3.2,
1.2,
0.2
],
[
5.5,
3.5,
1.3,
0.2
],
[
4.9,
3.6,
1.4,
0.1
],
[
4.4,
3,
1.3,
0.2
],
[
5.1,
3.4,
1.5,
0.2
],
[
5,
3.5,
1.3,
0.3
],
[
4.5,
2.3,
1.3,
0.3
],
[
4.4,
3.2,
1.3,
0.2
],
[
5,
3.5,
1.6,
0.6
],
[
5.1,
3.8,
1.9,
0.4
],
[
4.8,
3,
1.4,
0.3
],
[
5.1,
3.8,
1.6,
0.2
],
[
4.6,
3.2,
1.4,
0.2
],
[
5.3,
3.7,
1.5,
0.2
],
[
5,
3.3,
1.4,
0.2
],
[
7,
3.2,
4.7,
1.4
],
[
6.4,
3.2,
4.5,
1.5
],
[
6.9,
3.1,
4.9,
1.5
],
[
5.5,
2.3,
4,
1.3
],
[
6.5,
2.8,
4.6,
1.5
],
[
5.7,
2.8,
4.5,
1.3
],
[
6.3,
3.3,
4.7,
1.6
],
[
4.9,
2.4,
3.3,
1
],
[
6.6,
2.9,
4.6,
1.3
],
[
5.2,
2.7,
3.9,
1.4
],
[
5,
2,
3.5,
1
],
[
5.9,
3,
4.2,
1.5
],
[
6,
2.2,
4,
1
],
[
6.1,
2.9,
4.7,
1.4
],
[
5.6,
2.9,
3.6,
1.3
],
[
6.7,
3.1,
4.4,
1.4
],
[
5.6,
3,
4.5,
1.5
],
[
5.8,
2.7,
4.1,
1
],
[
6.2,
2.2,
4.5,
1.5
],
[
5.6,
2.5,
3.9,
1.1
],
[
5.9,
3.2,
4.8,
1.8
],
[
6.1,
2.8,
4,
1.3
],
[
6.3,
2.5,
4.9,
1.5
],
[
6.1,
2.8,
4.7,
1.2
],
[
6.4,
2.9,
4.3,
1.3
],
[
6.6,
3,
4.4,
1.4
],
[
6.8,
2.8,
4.8,
1.4
],
[
6.7,
3,
5,
1.7
],
[
6,
2.9,
4.5,
1.5
],
[
5.7,
2.6,
3.5,
1
],
[
5.5,
2.4,
3.8,
1.1
],
[
5.5,
2.4,
3.7,
1
],
[
5.8,
2.7,
3.9,
1.2
],
[
6,
2.7,
5.1,
1.6
],
[
5.4,
3,
4.5,
1.5
],
[
6,
3.4,
4.5,
1.6
],
[
6.7,
3.1,
4.7,
1.5
],
[
6.3,
2.3,
4.4,
1.3
],
[
5.6,
3,
4.1,
1.3
],
[
5.5,
2.5,
4,
1.3
],
[
5.5,
2.6,
4.4,
1.2
],
[
6.1,
3,
4.6,
1.4
],
[
5.8,
2.6,
4,
1.2
],
[
5,
2.3,
3.3,
1
],
[
5.6,
2.7,
4.2,
1.3
],
[
5.7,
3,
4.2,
1.2
],
[
5.7,
2.9,
4.2,
1.3
],
[
6.2,
2.9,
4.3,
1.3
],
[
5.1,
2.5,
3,
1.1
],
[
5.7,
2.8,
4.1,
1.3
],
[
6.3,
3.3,
6,
2.5
],
[
5.8,
2.7,
5.1,
1.9
],
[
7.1,
3,
5.9,
2.1
],
[
6.3,
2.9,
5.6,
1.8
],
[
6.5,
3,
5.8,
2.2
],
[
7.6,
3,
6.6,
2.1
],
[
4.9,
2.5,
4.5,
1.7
],
[
7.3,
2.9,
6.3,
1.8
],
[
6.7,
2.5,
5.8,
1.8
],
[
7.2,
3.6,
6.1,
2.5
],
[
6.5,
3.2,
5.1,
2
],
[
6.4,
2.7,
5.3,
1.9
],
[
6.8,
3,
5.5,
2.1
],
[
5.7,
2.5,
5,
2
],
[
5.8,
2.8,
5.1,
2.4
],
[
6.4,
3.2,
5.3,
2.3
],
[
6.5,
3,
5.5,
1.8
],
[
7.7,
3.8,
6.7,
2.2
],
[
7.7,
2.6,
6.9,
2.3
],
[
6,
2.2,
5,
1.5
],
[
6.9,
3.2,
5.7,
2.3
],
[
5.6,
2.8,
4.9,
2
],
[
7.7,
2.8,
6.7,
2
],
[
6.3,
2.7,
4.9,
1.8
],
[
6.7,
3.3,
5.7,
2.1
],
[
7.2,
3.2,
6,
1.8
],
[
6.2,
2.8,
4.8,
1.8
],
[
6.1,
3,
4.9,
1.8
],
[
6.4,
2.8,
5.6,
2.1
],
[
7.2,
3,
5.8,
1.6
],
[
7.4,
2.8,
6.1,
1.9
],
[
7.9,
3.8,
6.4,
2
],
[
6.4,
2.8,
5.6,
2.2
],
[
6.3,
2.8,
5.1,
1.5
],
[
6.1,
2.6,
5.6,
1.4
],
[
7.7,
3,
6.1,
2.3
],
[
6.3,
3.4,
5.6,
2.4
],
[
6.4,
3.1,
5.5,
1.8
],
[
6,
3,
4.8,
1.8
],
[
6.9,
3.1,
5.4,
2.1
],
[
6.7,
3.1,
5.6,
2.4
],
[
6.9,
3.1,
5.1,
2.3
],
[
5.8,
2.7,
5.1,
1.9
],
[
6.8,
3.2,
5.9,
2.3
],
[
6.7,
3.3,
5.7,
2.5
],
[
6.7,
3,
5.2,
2.3
],
[
6.3,
2.5,
5,
1.9
],
[
6.5,
3,
5.2,
2
],
[
6.2,
3.4,
5.4,
2.3
],
[
5.9,
3,
5.1,
1.8
]
]
},
"m": {
"Name": "Anderson's Iris data set",
"Description": "The data set consists of 50 Ss from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each S: the length and the width of the sepals and petals, in centimetres.",
"Reference": "R. A. Fisher (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 (2): 179-188."
}
},
config={
"broadcast": True,
"colorBy": "Species",
"graphType": "Scatter2D",
"layoutAdjust": True,
"scatterPlotMatrix": "Species",
"scatterPlotMatrixType": "all",
"theme": "CanvasXpress"
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"userEventsClick",
[
{
"x": {},
"y": {
"vars": [
"Bin1 (4 - 4.5)"
],
"smps": [
"setosa",
"Bin"
],
"data": [
[
5,
0.5
]
]
},
"z": {}
},
{
"srcElement": "splom7-events",
"target": "splom7-events",
"ac": {
"x": 31.015625,
"y": 581
},
"altKey": False,
"bubbles": True,
"button": 0,
"buttons": 0,
"cancelBubble": False,
"cancelable": True,
"clientX": 502,
"clientY": 919,
"ctrlKey": False,
"defaultPrevented": False,
"detail": 1,
"eventPhase": 0,
"isTrusted": True,
"layerX": 32,
"layerY": 582,
"metaKey": False,
"movementX": 0,
"movementY": 0,
"offsetX": 31,
"offsetY": 581,
"pageX": 502,
"pageY": 919,
"returnValue": True,
"screenX": 502,
"screenY": 919,
"shiftKey": False,
"timeStamp": 24240.89999999944,
"type": "click",
"which": 1,
"x": 502,
"y": 919,
"xMouseDown": False,
"yMouseDown": False
},
False,
[
0,
1,
0,
"Property:Species:setosa:z"
]
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="splom_7.html")
```
# Exploratory Factor Analysis
We created a relatively large set of variables during feature building, several of which are correlated with one another. For instance, being born in the USA correlates highly with being born on the continent of North America. Now we would like to:
1. **Reduce the dimensionality of the feature space** to help prevent overfitting when building models.
2. **Find a representation of the observed variables in a lower dimensional latent space**. Reducing the variables to **latent factors** helps with interpretability of models.
The aim is to get a better understanding of the data and possibly to use the output in building machine learning models to predict Physics Nobel Laureates.
[Exploratory Factor Analysis](https://en.wikipedia.org/wiki/Exploratory_factor_analysis) (EFA) is a multivariate statistical method that was designed to uncover latent structure in a relatively large set of variables. [Factor Analysis](https://en.wikipedia.org/wiki/Factor_analysis) uses the [correlation matrix](https://en.wikipedia.org/wiki/Correlation_and_dependence#Correlation_matrices) of the variables to examine intercorrelations between the measured variables. It reduces the dimensionality of the matrix by finding groups of variables with high intra-correlation but with low intercorrelation with other groups of variables. A group of these variables is a construct known as a **factor** and in a good factor model the factors have meaning and can easily be labelled.
There are several different types of factor models. Since we have only categorical (i.e. binary) features, the one that seems most appropriate is [Multiple Correspondence Analysis](https://en.wikipedia.org/wiki/Multiple_correspondence_analysis) (MCA). It is essentially the counterpart of [Principal Components Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) (PCA) for categorical data. Fortunately, there is a nice python library called [prince](https://github.com/MaxHalford/prince) that implements MCA along with other factor analysis methods. We will be using the library in this analysis.
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from prince import MCA
%matplotlib inline
```
## Reading in the Data
First let's read in the training features and the target.
```
train_features = pd.read_csv('../data/processed/train-features.csv')
train_features.tail()
target = pd.read_csv('../data/processed/train-target.csv').squeeze('columns')  # the squeeze= kwarg was removed from read_csv in pandas 2.0
target.tail()
```
## Suitability of Data for Factor Analysis
All factor analyses start with the same question: *Is the data suitable for factor analysis?* There are a few issues to address here. The first concerns the **minimum sample size** and the **subjects-to-variables (STV) ratio**. There are numerous rules of thumb, and empirical studies differ in their findings. An excellent and comprehensive review is given in chapter 3 of [Best Practices in Exploratory Factor Analysis](https://www.researchgate.net/publication/209835856_Best_Practices_in_Exploratory_Factor_Analysis_Four_Recommendations_for_Getting_the_Most_From_Your_Analysis). A very good short summary is given in [The Minimum Sample Size in Factor Analysis](https://www.encorewiki.org/display/~nzhao/The+Minimum+Sample+Size+in+Factor+Analysis). To cut a very long story short, the sample size of *N = 542* here is deemed sufficient by all researchers, and even very good by some. However, the *STV ratio = 542 / 202 ≈ 2.68* is considered unacceptably low by many researchers. It is worth noting, though, that both references give examples of successful factor analyses at even lower values.
The last issue concerns **factorability of the correlation matrix** itself. According to Wikiversity's article on [Exploratory Factor Analysis](https://en.wikiversity.org/wiki/Exploratory_factor_analysis), "Factorability is the assumption that there are at least some correlations amongst the variables so that coherent factors can be identified. Basically, there should be some degree of collinearity among the variables but not an extreme degree or singularity among the variables". There are in fact two statistical tests for this: [Bartlett’s test of sphericity](https://en.wikipedia.org/wiki/Bartlett%27s_test) and the [Kaiser–Meyer–Olkin](https://www.statisticshowto.datasciencecentral.com/kaiser-meyer-olkin/) (KMO) test. However, we are not going to say too much about these as they are based on the assumption that the data is multivariate normal, which clearly isn't the case here.
The article [Establishing Evidence for Internal Structure Using Exploratory Factor Analysis](https://www.tandfonline.com/doi/pdf/10.1080/07481756.2017.1336931) suggests that "an intercorrelation matrix is deemed factorable when the majority of the correlation coefficients computed are in the moderate range wherein r values are between .20 and .80. If a significant number of variables are producing values below .20 (i.e., items not representing same construct) or above .80 (i.e., multicollinearity), the researcher should consider eliminating these items before conducting an EFA (Field, 2013)". Let's see whether that is the case here, bearing in mind that it should not matter whether the correlations are positive or negative.
```
train_features_numerical = train_features.drop('full_name', axis='columns')
train_features_numerical = train_features_numerical.replace({'yes': 1, 'no': 0, 'male': 1, 'female': 0})
correlation = train_features_numerical.corr()
correlation
print('Percent of correlations in range abs(0.2, 0.8): {0} %'.format(
    round(100 * ((abs(correlation) > 0.2) & (abs(correlation) < 0.8)).sum().sum()
          / len(correlation) ** 2))
)
```
As you can see, only a small percentage of the values are within this range, which clearly fails the criterion given above. However, this is not the only viewpoint on the matter. In the article [Exploratory factor analysis: A five-step guide for novices](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.414.4818&rep=rep1&type=pdf), Tabachnick and Fidell recommended "inspecting the correlation matrix (often termed Factorability of R) for correlation coefficients over 0.30... If no correlations go beyond 0.30, then the researcher should reconsider whether factor analysis is the appropriate statistical method to utilise." There are clearly some correlations above an absolute value of 0.3 in the matrix, so by this criterion the correlation matrix is factorable. As you can see, there are a lot of contrasting recommendations in factor analysis! So, for now, let's proceed, since there are at least some correlations amongst the variables.
Let's take a brief digression to explain a subtle but important point. Some readers may be wondering why we are perfectly comfortable using the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) to measure the correlation between pairs of binary variables. It is because the Pearson correlation coefficient calculated for two binary variables equals the [phi coefficient](https://en.wikipedia.org/wiki/Phi_coefficient).
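To make the digression concrete, here is a quick numerical check, on made-up binary vectors rather than our features, that the Pearson correlation and the phi coefficient coincide:

```python
from math import sqrt

# Made-up binary vectors for illustration only
a = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
b = [1, 0, 0, 0, 1, 1, 1, 0, 1, 0]
n = len(a)

# Pearson correlation computed directly from its definition
mean_a, mean_b = sum(a) / n, sum(b) / n
cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
std_a = sqrt(sum((x - mean_a) ** 2 for x in a) / n)
std_b = sqrt(sum((y - mean_b) ** 2 for y in b) / n)
pearson = cov / (std_a * std_b)

# Phi coefficient from the 2x2 contingency-table counts
n11 = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
n10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
n01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
n00 = sum(1 for x, y in zip(a, b) if x == 0 and y == 0)
phi = (n11 * n00 - n10 * n01) / sqrt(
    (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))

print(abs(pearson - phi) < 1e-9)  # True: the two measures agree
```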
## How Many Factors?
It is now time to perform the factor analysis and determine how many factors to retain. Again this is more of an art than a science as there are numerous recommended ways of doing this. Some of the simpler, most straightforward and intuitive ways are:
- **Cumulative variance accounted for by the retained factors**. Here, again there are a few recommendations, although most researchers do recommend the 75-90% range.
- **Scree plot**. A plot of the extracted factors against their eigenvalues in descending order of magnitude. Typically the elbow in the plot is identified where the larger eigenvalues end and the smaller eigenvalues begin. Factors to the left of the elbow are retained and those to the right are dropped. Note that this is quite subjective as sometimes there can be more than one elbow.
- **Kaiser Greater-Than-One Rule**, which says that only those factors with eigenvalues greater than 1 should be retained for interpretation. Again this is arbitrary; however, an eigenvalue of 1 is the value at which a factor accounts for at least as much variance as any individual variable.
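As a minimal sketch of the Kaiser rule (the eigenvalues here are invented for illustration; they are not those of our data):

```python
# Hypothetical eigenvalues of extracted factors, in descending order
eigenvalues = [3.1, 1.8, 1.1, 0.9, 0.4, 0.2]

# Kaiser greater-than-one rule: retain only factors whose eigenvalue
# exceeds 1, i.e. factors explaining more variance than any single variable
n_retain = sum(1 for ev in eigenvalues if ev > 1)
print(n_retain)  # 3
```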
OK let's perform the factor analysis now and use these criteria to decide on the number of factors to retain.
```
mca = MCA(
    n_components=10,
    n_iter=10,
    copy=True,
    random_state=0,
    engine='sklearn'
)
train_features = train_features.drop('full_name', axis='columns')
mca = mca.fit(train_features)
ax1 = sns.lineplot(x=range(1, 11), y=mca.eigenvalues_)
ax1.set_xlim(0, 10)
ax1.set_ylim(0, 1.0)
ax1.set_xlabel('Number of factors')
ax1.set_ylabel('Eigenvalues')
ax1.set_title('Scree plot')
ax1.set_xticks(range(0, 11, 2));
ax = sns.lineplot(x=range(1, 11), y=np.cumsum(mca.explained_inertia_))
ax.figure.set_size_inches((8, 6))
ax.set_xlabel('Number of factors')
ax.set_ylabel('Cumulative variance')
ax.set_title('Cumulative variance accounted for by factors')
ax.set_xlim((0, 10))
ax.set_ylim((0, 1.0))
ax.axhline(y=0.9, linestyle='--', color='r', linewidth=1.0);
```
The scree plot suggests taking 2 factors. However, the Kaiser rule suggests that these factors are very poor as the eigenvalues are small. All eigenvalues are well below 1, indicating that they explain far less variance than any individual feature. This is further corroborated by the cumulative variance plot, which shows that only about 33% of the variance in the data is explained by the first 10 factors.
These are not the only considerations when choosing the number of factors to retain. Also, very important are the following criteria:
- All factors should be interpretable. In other words, one should be able to coherently name and describe the set of collective variables in an underlying factor.
- There should be several variables that load onto each of the factors. Generally, the more variables per factor, the greater the reliability of the factor. Typically 3 or more variables per factor as a minimum.
- The model should be parsimonious, meaning that each variable should load highly onto one particular factor but weakly onto the other factors. Typically, loadings with absolute values above 0.3, 0.4 or 0.5 with minimal cross-loadings are recommended.
With these criteria considered, we find that a factor model with any number of factors seems implausible. To see this, we can examine the table below. The 0th factor doesn't make any intuitive sense at all.
```
factor_loadings = mca.column_coordinates(train_features)
factor_loadings.loc[factor_loadings[0] < -0.4, 0:5].sort_values(by=0, ascending=True)
```
It seems pretty clear that this factor analysis is going nowhere.
## Discussion
We suspect that the factor analysis results may be invalid due to the STV ratio and / or sample size. Another theory is that the features are just too sparse for factor analysis to extract any meaningful information from the correlations in the data. There is some discussion in the context of PCA in [how can one extract meaningful factors from a sparse matrix](https://stats.stackexchange.com/questions/4753/how-can-one-extract-meaningful-factors-from-a-sparse-matrix). If you recall, earlier we saw that only 7% of the values in the correlation matrix had absolute values of correlation coefficients between 0.2 and 0.8. In fact, most of the remaining 93% of the values have absolute values of correlation coefficients less than or equal to 0.2.
```
print('Percent of correlations less than or equal to abs(0.2): {0} %'.format(
    round(100 * (abs(correlation) <= 0.2).sum().sum() / len(correlation) ** 2))
)
```
The sparsity was of course induced by the binary encoding of variables during feature construction. The point is that most physicists are only associated with a very small fraction of the features. Finding latent structure in such data is difficult. So where does this leave us now?
## Conclusion
The approaches we have taken so far have been fruitless in achieving the two goals set out at the outset of this EFA. We must now look for alternatives. One such alternative is [Multidimensional scaling](https://en.wikipedia.org/wiki/Multidimensional_scaling) (MDS). This would have to use a distance "metric" such as the [Gower distance](https://stats.stackexchange.com/questions/15287/hierarchical-clustering-with-mixed-type-data-what-distance-similarity-to-use), since Euclidean distance is not appropriate for binary data. There is no well-established implementation in Python, although a [Gower Similarity Coefficient implementation in sklearn](https://github.com/scikit-learn/scikit-learn/issues/5884) may not be too far away. There is, however, a [rudimentary Gower Python implementation](https://datascience.stackexchange.com/questions/8681/clustering-for-mixed-numeric-and-nominal-discrete-data), although according to the previous reference it should be using the Jaccard coefficient for "present" vs "absent" binary variables. For our data, the Gower similarity would essentially reduce to a combination of the Jaccard and Dice coefficients, so coding it up would not be too difficult. However, sklearn's MDS is not viable for dimensionality reduction, since [the implementation has no `transform` method](https://stackoverflow.com/questions/21962204/sklearn-manifold-mds-has-no-transform-method), which means that new data points cannot be projected onto the embedding space that the MDS was fit on. It is not clear how far off this is in sklearn, as the [issue has been pending for a few years now](https://github.com/scikit-learn/scikit-learn/pull/6222).
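As a hint of how little code that would take, here is a hand-rolled Jaccard distance for "present vs absent" binary vectors (toy inputs; this is a sketch, not any library's implementation):

```python
def jaccard_distance(a, b):
    """Jaccard distance between two 0/1 vectors of equal length."""
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    # Two all-zero vectors are conventionally treated as identical
    return 1.0 - intersection / union if union else 0.0

d = jaccard_distance([1, 0, 1, 1, 0], [1, 1, 0, 1, 0])
print(d)  # 1 - 2/4 = 0.5
```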
Another [approach that is closely related to the previous one](https://stats.stackexchange.com/questions/87681/performing-pca-with-only-a-distance-matrix) is to use [kernel principal component analysis](https://en.wikipedia.org/wiki/Kernel_principal_component_analysis) with the Gower distance. This is possible in sklearn as [sklearn's kernel PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html) implementation allows the use of distance "metrics" other than Euclidean distance through the `precomputed` parameter. However, one should be concerned about using any form of PCA to reduce the dimensionality of this binary data as PCA works with a centered gram matrix. Therefore PCA does not seem like a natural fit for binary data as there is [no reason to assume that the binary data is centered anywhere other than at the origin](https://stats.stackexchange.com/questions/16331/doing-principal-component-analysis-or-factor-analysis-on-binary-data).
A dimensionality-reduction approach that seems more attractive for the data is [Non-negative matrix factorization](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) (NMF), as the features matrix consists entirely of non-negative (i.e. binary) entries. Although the general NMF is not suitable for the binary case, as the approximation is not bounded from above, there is an extension of NMF to the binary case known as [binary matrix factorization](http://ranger.uta.edu/~chqding/papers/icdm07-binary.pdf) (BMF). Hong LiangJie has performed [reviews on BMF](http://www.hongliangjie.com/2011/03/15/reviews-on-binary-matrix-decomposition/) and states "In all, it seems that the performance advantages of specifically designed binary data models are small. However, the biggest advantage of these models is that they can give better interpretations sometimes."
We explored a [BMF model](http://nimfa.biolab.si/nimfa.methods.factorization.bmf.html) for dimensionality reduction using the [nimfa](http://nimfa.biolab.si/) library, the only Python implementation we could find. At first it seemed promising; however, we found two major roadblocks. The first was that the **penalty function method** implemented is only really appropriate for dense binary data. This is discussed by Zhang in [Binary Matrix Factorization with Applications](http://ranger.uta.edu/~chqding/papers/icdm07-binary.pdf) along with the **thresholding algorithm**, which is more appropriate for sparse binary data. Unfortunately the **thresholding algorithm** is not implemented in nimfa. The second limitation is the same as that mentioned above for sklearn's MDS: there is no way of projecting new data points onto the embedding space that the model was fit on. As a last resort, we could possibly have [altered the code to perform this projection](https://github.com/marinkaz/nimfa/issues/43); however, this would require some testing and is not a completely satisfactory solution. In the end we found another promising approach that is more powerful. We will discuss and apply it in the next notebook.
```
#hide
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
# Examples - Quantum Annealing
> Various Quantum Annealing examples using Quixotic
By default, these examples use simulated annealing as a local solver to allow execution on a laptop. To use quantum annealing on a D-Wave quantum device managed by Amazon Braket, simply set `backend='aws'` and supply the `device_arn` and `s3_folder` arguments in `QuantumAnnealer`, as [described in the documentation](https://amaiya.github.io/quixotic/#How-to-Execute-on-a-Quantum-Computer:).
```
#all_notest
```
## Structural Imbalance
This first example is related to structural imbalance and is adapted directly from examples developed by D-Wave Systems. A signed social network is one where the edges are either positive or negative. For instance, a positive edge between two nodes may represent friendly relations between the entities represented by the nodes. Negative edges may represent hostile relations. Structural imbalance measures the degree to which a social network can be divided into two groups with each group containing only positive (friendly) edges, and all negative edges are between groups. Edges that violate this rule (e.g., positive edges across groups, negative edges within groups) are referred to as frustration edges. A larger number of frustration edges means the network is more imbalanced. [Nakamura et al. (2011)](https://medium.com/r/?url=https%3A%2F%2Fkilthub.cmu.edu%2Farticles%2Fjournal_contribution%2FViolence_in_the_Balance_A_Structural_Analysis_of_How_Rivals_Allies_and_Third-Parties_Shape_Inter-Gang_Violence%2F6472160) showed that such structural imbalance is correlated with gang violence. Structural imbalance has also been applied to [global terrorism](https://medium.com/r/?url=https%3A%2F%2Fwww.lanl.gov%2Fprojects%2Fnational-security-education-center%2Finformation-science-technology%2Fdwave%2Fassets%2Fambrosiano_dwave2017.pdf) datasets as well.
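The bookkeeping behind structural imbalance is simple to sketch; the signed edges and two-group assignment below are toy values, unrelated to the dataset used in this example:

```python
# Signed edges: +1 friendly, -1 hostile (toy example)
edges = [('a', 'b', 1), ('b', 'c', -1), ('a', 'c', 1)]
# A candidate split of the nodes into two groups, 0 and 1
coloring = {'a': 0, 'b': 0, 'c': 1}

# An edge is frustrated when it is positive across groups
# or negative within a group
frustrated = [(u, v) for u, v, sign in edges
              if (sign > 0) == (coloring[u] != coloring[v])]
print(frustrated)  # the friendly a-c edge crosses the two groups
```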
In this first example, we will measure structural imbalance of a real-world network of militant organizations. The dataset is from the [Stanford Militants Mapping Project](https://cisac.fsi.stanford.edu/mappingmilitants). A small subset of the dataset focusing on Syria in 2013 can be downloaded as a CSV file from [here](https://raw.githubusercontent.com/amaiya/quixotic/develop/nbs/sample_data/militant_groups_syria_2013.csv).
**STEP 1: Load the dataset as a Graph**
```
!wget -O /tmp/network.csv https://raw.githubusercontent.com/amaiya/quixotic/develop/nbs/sample_data/militant_groups_syria_2013.csv
!pip install -q pandas
import pandas as pd
df = pd.read_csv('/tmp/network.csv', index_col=0)
df.head()
```
Here, we will import the edges in the DataFrame to create a Networkx graph object:
```
import networkx as nx
G = nx.from_pandas_edgelist(df, edge_attr=True)
positions = nx.spring_layout(G, seed=1967)
nx.draw(G, with_labels=True, pos=positions)
```
**STEP 2: Run `QuantumAnnealer`**
```
from quixotic.core import QuantumAnnealer
imbalance, bicoloring = QuantumAnnealer(G, task='structural_imbalance').execute().results()
```
The `'structural_imbalance'` task returns two values: `imbalance` (the discovered frustration edges) and `bicoloring` (a dictionary indicating which of the two groups each node was placed in). We find that the network is **not** completely balanced, as there is one frustration edge between group 1 and group 533.
```
list(imbalance.values())
```
We color the two discovered groups as yellow and magenta. Notice that edges among nodes of the same color are blue (for friendly) and edges among nodes of different colors are red (for hostile). The one exception is the hostile frustration edge between 1 and 533 mentioned above.
```
node_colors = ['yellow' if bicoloring[node] == 1 else 'magenta' for node in G.nodes]
edge_colors = ['blue' if G.edges[e]['sign'] == 1 else 'red' for e in G.edges]
for i, v in enumerate(G.nodes): G.nodes[v]['color'] = node_colors[i]
#pos = nx.bipartite_layout(G, nodes = [v for i,v in enumerate(G.nodes) if node_colors[i] == 'yellow'])
for u, v in G.edges: G.edges[(u, v)]['weight'] = 1 if G.nodes[u]['color'] == G.nodes[v]['color'] else 2
pos = nx.kamada_kawai_layout(G, weight='weight')
# use edgelist=; nx.draw does not accept an edges= keyword
nx.draw(G, with_labels=True, node_color=node_colors, edgelist=list(G.edges), edge_color=edge_colors, pos=pos)
```
## Maximum Clique
```
# construct or load your input graph
import networkx as nx
n_nodes = 6
p = 0.5 # probability of an edge
seed = 1967
g = nx.erdos_renyi_graph(n_nodes, p=p, seed=seed)
positions = nx.spring_layout(g, seed=seed)
nx.draw(g, with_labels=True, pos=positions)
# approximate a solution via quantum annealing and extract results
from quixotic.core import QuantumAnnealer
qo = QuantumAnnealer(g, task='maximum_clique')
qo.execute()
nodes = qo.results()
# plot nodes comprising the solution
sub = g.subgraph(nodes)
nx.draw(g, pos=positions, with_labels=True)
nx.draw(sub, pos=positions, node_color="r", edge_color="r")
QuantumAnnealer.supported_tasks()  # list all tasks this optimizer supports
```
## Minimum Vertex Cover
Here, we will solve a [Minimum Vertex Cover problem](https://en.wikipedia.org/wiki/Vertex_cover). The goal is to find the smallest set of nodes that covers every edge in the network.
```
# construct or load your input graph
import networkx as nx
n_nodes = 6
p = 0.5 # probability of an edge
seed = 1967
g = nx.erdos_renyi_graph(n_nodes, p=p, seed=seed)
positions = nx.spring_layout(g, seed=seed)
nx.draw(g, with_labels=True, pos=positions)
# approximate a solution via quantum annealing and extract results
from quixotic.core import QuantumAnnealer
qo = QuantumAnnealer(g, task='minimum_vertex_cover')
qo.execute()
nodes = qo.results()
# plot nodes comprising the solution
sub = g.subgraph(nodes)
nx.draw(g, pos=positions, with_labels=True)
nx.draw(sub, pos=positions, node_color="r", edge_color="r")
```
Interestingly, our solution is closer to the ground truth than the Local Ratio algorithm in networkx that simply returns all nodes. The Local Ratio algorithm returns "a set of vertices whose weight sum is no more than 2 * OPT" (see [this post](https://stackoverflow.com/questions/45451726/networkx-min-weighted-vertex-cover-in-python-returns-whole-set-instead-of-vertex) for more information).
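The 2 * OPT guarantee mentioned above comes from a classic construction: take both endpoints of every edge in a maximal matching. A minimal, dependency-free sketch:

```python
def matching_vertex_cover(edges):
    """2-approximate vertex cover: both endpoints of a greedy maximal matching.

    Every edge touches the matching (otherwise it could be added to it),
    so the matched endpoints cover all edges; and any optimal cover must
    contain at least one endpoint of each matched edge, giving the 2*OPT bound.
    """
    matched = set()
    for u, v in edges:
        if u not in matched and v not in matched:  # edge extends the matching
            matched.update((u, v))
    return matched

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle: OPT is 2 nodes
cover = matching_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
print(sorted(cover))  # -> [0, 1, 2, 3], i.e. exactly 2 * OPT here
```

The 4-cycle is a worst case for this heuristic, which is why the networkx Local Ratio routine can return the whole node set.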
```
from networkx.algorithms.approximation.vertex_cover import *
nx_nodes = min_weighted_vertex_cover(g)
# plot nodes comprising the solution
sub = g.subgraph(nx_nodes)
nx.draw(g, pos=positions, with_labels=True)
nx.draw(sub, pos=positions, node_color="r", edge_color="r")
```
### Formulating Your Problem as a QUBO
Finally, for more flexibility, you can formulate your problem directly as a quadratic unconstrained binary optimization (QUBO) problem and supply it to `QuantumAnnealer`. Here, we will solve the same Minimum Vertex Cover problem shown above by supplying a QUBO expressed as a Python dictionary instead of a `graph` and `task`. The QUBO represents a [Maximum Independent Set](https://en.wikipedia.org/wiki/Maximal_independent_set) problem; the complement of a maximum independent set is a [solution to the minimum vertex cover problem](https://en.wikipedia.org/wiki/Maximal_independent_set#Related_vertex_sets). We find a solution equivalent to the one found above.
```
# formulate the QUBO
cost = dict(g.nodes(data=None, default=1))
scale = max(cost.values())
Q = {(node, node): min(-cost[node] / scale, 0.0) for node in g}
Q.update({edge: 2.0 for edge in g.edges}) # use 2.0 as the "tuning" Lagrange penalty for this problem
# solve QUBO using QuantumAnnealer
independent_set = QuantumAnnealer(qubo=Q).execute().results()
# nodes NOT in the independent set comprise the minimum vertex cover
solution_nodes = [v for v in g if v not in independent_set]
# plot solution as graph
sub = g.subgraph(solution_nodes)
nx.draw(g, pos=positions, with_labels=True)
nx.draw(sub, pos=positions, node_color="r", edge_color="r")
```
For more information on QUBOs, please see [this reference](https://leeds-faculty.colorado.edu/glover/511%20-%20QUBO%20Tutorial%20-%20updated%20version%20-%20May%204,%202019.pdf).
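For problems this small, you can also sanity-check an annealer's answer by exhaustive enumeration. The sketch below (plain Python, a toy three-node QUBO of the same maximum-independent-set form) finds the minimum-energy assignment by brute force:

```python
from itertools import product

def qubo_energy(Q, assignment):
    """Energy of a QUBO dict {(i, j): weight} under a 0/1 assignment."""
    return sum(w * assignment[i] * assignment[j] for (i, j), w in Q.items())

def brute_force_qubo(Q):
    """Exhaustively minimize a small QUBO; returns (best energy, best assignment)."""
    variables = sorted({v for pair in Q for v in pair})
    best = None
    for bits in product((0, 1), repeat=len(variables)):
        a = dict(zip(variables, bits))
        e = qubo_energy(Q, a)
        if best is None or e < best[0]:
            best = (e, a)
    return best

# Tiny max-independent-set QUBO on a path 0-1-2: reward picking a node,
# penalize picking both endpoints of an edge (Lagrange weight 2.0).
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0, (0, 1): 2.0, (1, 2): 2.0}
energy, assignment = brute_force_qubo(Q)
print(energy, assignment)  # -> -2.0 {0: 1, 1: 0, 2: 1}
```

Enumeration is exponential in the number of variables, so this is only feasible as a cross-check on toy instances.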
# Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and you are asked to build a deep learning model to detect fraud: whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
```
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
```
## 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
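As a quick warm-up (not part of the graded exercises), here is the centered difference applied to a toy cost $J(\theta) = \theta^3$, whose true derivative at $\theta = 2$ is $3\theta^2 = 12$; the approximation error shrinks rapidly as $\varepsilon$ decreases:

```python
J = lambda theta: theta ** 3        # toy cost
dJ = lambda theta: 3 * theta ** 2   # analytic derivative

theta = 2.0
for eps in (1e-1, 1e-3, 1e-5):
    gradapprox = (J(theta + eps) - J(theta - eps)) / (2 * eps)
    print(eps, abs(gradapprox - dJ(theta)))  # error falls roughly like eps**2
```

The quadratic decay of the error is what makes the two-sided difference preferable to the one-sided $\frac{J(\theta + \varepsilon) - J(\theta)}{\varepsilon}$.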
## 2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **1D linear model**<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
**Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = None
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
```
**Expected Output**:
<table>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
**Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = None
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
```
**Expected Output**:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
**Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
```
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
    # Compute gradapprox using the right-hand side of formula (1). epsilon is small enough; you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = None # Step 1
thetaminus = None # Step 2
J_plus = None # Step 3
J_minus = None # Step 4
gradapprox = None # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = None
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = None # Step 1'
denominator = None # Step 2'
difference = None # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
```
**Expected Output**:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`.
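Outside the graded cells, the whole 1-D check can be condensed into a few lines; this is a sketch of the same procedure applied to $J(\theta) = \theta x$, not the graded solution:

```python
import numpy as np

def check_1d(x, theta, eps=1e-7):
    J = lambda t: t * x                                    # forward propagation
    grad = x                                               # backward propagation: dJ/dtheta = x
    gradapprox = (J(theta + eps) - J(theta - eps)) / (2 * eps)
    numerator = np.linalg.norm(grad - gradapprox)          # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    return numerator / denominator                         # Step 3'

print(check_1d(2, 4) < 1e-7)  # -> True
```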
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
## 3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **deep neural network**<br>*LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
```
def forward_propagation_n(X, Y, parameters):
"""
    Implements the forward propagation (and computes the cost) presented in Figure 2.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
    cost -- the cost function (logistic cost for m examples)
    cache -- tuple of intermediate values needed by backward propagation
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
```
Now, run backward propagation.
```
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
```
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
**How does gradient checking work?**.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 3** </u>: **dictionary_to_vector() and vector_to_dictionary()**<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
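To make the round trip concrete, here is a toy version of what such helpers do (hypothetical names `dict_to_vector`/`vector_to_dict`, not the provided utilities):

```python
import numpy as np

def dict_to_vector(params, order):
    """Flatten a dict of arrays into one column vector (fixed key order)."""
    return np.concatenate([params[k].reshape(-1, 1) for k in order])

def vector_to_dict(vec, order, shapes):
    """Inverse: slice the vector back into arrays of the recorded shapes."""
    out, i = {}, 0
    for k in order:
        n = int(np.prod(shapes[k]))
        out[k] = vec[i:i + n].reshape(shapes[k])
        i += n
    return out

order = ['W1', 'b1']
params = {'W1': np.arange(6.).reshape(2, 3), 'b1': np.zeros((2, 1))}
shapes = {k: params[k].shape for k in order}
vec = dict_to_vector(params, order)
print(vec.shape)                                   # -> (8, 1)
restored = vector_to_dict(vec, order, shapes)
print(np.allclose(restored['W1'], params['W1']))   # -> True
```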
**Exercise**: Implement gradient_check_n().
**Instructions**: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute `J_plus[i]`:
1. Set $\theta^{+}$ to `np.copy(parameters_values)`
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`.
- To compute `J_minus[i]`: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
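The same loop, written against a generic cost function $f:\mathbb{R}^n \to \mathbb{R}$ rather than the graded helpers (a sketch, verified here on a quadratic whose gradient is known analytically):

```python
import numpy as np

def numerical_grad(f, theta, eps=1e-7):
    """Centered-difference gradient of f: R^n -> R, one coordinate at a time."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        grad[i] = (f(plus) - f(minus)) / (2 * eps)
    return grad

def rel_diff(grad, gradapprox):
    """Relative difference of formula (3)."""
    return np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))

# sanity check on f(theta) = theta . theta, whose true gradient is 2*theta
theta = np.array([1.0, -2.0, 3.0])
f = lambda t: t @ t
print(rel_diff(numerical_grad(f, theta), 2 * theta) < 1e-7)  # -> True
```

Note the cost: one full forward pass per coordinate and per sign, which is why gradient checking is only run occasionally.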
```
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
    gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because forward_propagation_n outputs two values but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = None # Step 1
thetaplus[i][0] = None # Step 2
J_plus[i], _ = None # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = None # Step 1
thetaminus[i][0] = None # Step 2
J_minus[i], _ = None # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = None
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = None # Step 1'
denominator = None # Step 2'
difference = None # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
```
**Expected output**:
<table>
<tr>
<td> ** There is a mistake in the backward propagation!** </td>
<td> difference = 0.285093156781 </td>
</tr>
</table>
It seems that there were errors in the `backward_propagation_n` code we gave you! Good thing you've implemented the gradient check. Go back to `backward_propagation_n` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
**Note**
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout.
Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
<font color='blue'>
**What you should remember from this notebook**:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
# HERA Data Analysis Part I
## Workshop Leaders: Carina & Josh
# A) Warm-Up
Load the HERA data file `zen.2458611.17601.HH.uvh5` using the `read_uvh5` method of the `UVData` object in the `pyuvdata` module. Then answer these basic questions about the data:
i) Which antennas are there?
ii) How many baselines are there? Does this make sense given the number of antennas?
iii) How many frequencies are there, and what range does it cover (in MHz)? What is the width of each frequency channel (in MHz)?
iv) How many time integrations are there in total?
v) What LSTs are in the data (in hours)? Eliminate repeat entries using `np.unique`.
```
from pyuvdata import UVData
import numpy as np
# load data here
path = '/Users/josaitis/RFI_Analysis_7May/files/zen.2458611.17601.HH.uvh5'
#path = '/lustre/aoc/projects/hera/ajosaiti/May7_RFI/2458610/zen.2458611.17601.HH.uvh5'
uvd = UVData()
uvd.read_uvh5(path)
# your answers here (hint: look at the attributes of your uvd object)
print('There are', len(uvd.antenna_numbers), 'antennas and they are:', uvd.antenna_numbers)
print('There are', uvd.Nbls, 'baselines, which corresponds to N(N-1)/2 cross-correlations plus N auto-correlations.')
print('There are', uvd.Nfreqs, 'frequencies, covering a range of',
      np.min(uvd.freq_array)/1e6, 'MHz to', np.max(uvd.freq_array)/1e6, 'MHz.')
print('The width of each frequency channel is', (np.max(uvd.freq_array)-np.min(uvd.freq_array))/uvd.Nfreqs/1e6, 'MHz')
print('There are', uvd.Ntimes, 'time integrations.')
print('The LSTs are', np.unique(uvd.lst_array)*12/np.pi, 'in hours.')
```
# B) Accessing and Visualizing Data and Flags
Let's first look at our antenna layout to get an idea of antenna locations and spacings.
```
import matplotlib.pyplot as plt
%matplotlib notebook
antpos, ants = uvd.get_ENU_antpos() # this returns coordinates of each antenna and a list of the antennas
plt.figure()
plt.scatter(antpos[:,0], antpos[:,1], marker='.', color='k', s=3000) # plot the antenna positions with black circles
for aa,ant in enumerate(ants): # loop over antennas
plt.text(antpos[aa,0], antpos[aa,1], ants[aa], color='w', va='center', ha='center') # label antenna numbers
plt.xlabel('X-position (m)')
plt.ylabel('Y-position (m)')
plt.axis('equal');
```
Let's now access the visibility data in our file, starting with the autocorrelation of each antenna and polarization, and plot the real and imaginary parts of the visibility for the first time integration.
```
for aa,ant in enumerate(ants):
pols = np.array(['xx','yy'])
for pol in pols:
key=(ant,ant,str(pol))
vis=uvd.get_data(key)
plt.figure()
plt.plot(uvd.freq_array[0]/1e6, vis[0,:].real, 'b-', label='Real part')
plt.plot(uvd.freq_array[0]/1e6, vis[0,:].imag, 'r-', label='Imag part')
plt.xlabel('Frequency (MHz)')
plt.ylabel('Visibility (Jy)')
plt.title(str('Ant '+str(ant)+' '+str(pol)))
plt.grid(); plt.legend();
key = (26, 26, 'xx') # tuple containing the antenna pair (here an autocorrelation) and polarization
vis = uvd.get_data(key) # returns data
print('Shape of data:', vis.shape) # shape of data is (times, freqs)
plt.figure()
plt.plot(uvd.freq_array[0]/1e6, vis[0,:].real, 'b-', label='Real part')
plt.plot(uvd.freq_array[0]/1e6, vis[0,:].imag, 'r-', label='Imag part')
plt.xlabel('Frequency (MHz)')
plt.ylabel('Visibility (Jy)')
plt.grid(); plt.legend();
```
Exercise: Choose a few frequency channels and plot them as a function of time.
```
# your answer here
chans = [100,300,750]
plt.figure()
for chan in chans:
plt.plot(np.unique(uvd.lst_array)*12/np.pi, np.abs(vis[:,chan]),
label=str(np.round(uvd.freq_array[0,chan]/1e6,3)) + ' MHz')
plt.xlabel('LST (hours)')
plt.ylabel('Visibility (Jy)')
plt.grid(); plt.legend();
```
# C) Waterfall Plots
Another useful way of visualizing our data is to plot it as a waterfall plot. A waterfall plot is a two-dimensional plot of the visibility (the cross-correlated signal between a pair of antennas) as a function of time (y-axis) and frequency (x-axis). We can use the `plt.imshow` to plot the amplitude and phase of the same baseline as above. Note that the keyword `extent` takes in 4 arguments which define the plot axes extent in the order of (xmin, xmax, ymin, ymax), and we've massaged our axes to display frequencies in MHz and times in LST hours.
```
import matplotlib
# Plot absolute value of visibility
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.imshow(np.abs(vis), aspect='auto',extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.title('Amplitude')
# Plot phase of visibility
plt.subplot(122)
plt.imshow(np.angle(vis), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Phase')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.title('Phase')
plt.tight_layout();
```
Some features to note from the waterfall plots:
* RFI
* Band edges
* Frequency and temporal structure
Exercise: Pick a short baseline and a long baseline of the same orientation. Make waterfall plots (in both amplitude and phase) for both. How do they differ?
```
# your answer here
vis1 = uvd.get_data((23,26,'xx')) # short E/W baseline
vis2 = uvd.get_data((65,71,'xx')) # long E/W baseline
plt.figure(figsize=(10,8))
plt.subplot(221)
plt.imshow(np.abs(vis1), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Amplitude (53,54)')
plt.subplot(222)
plt.imshow(np.angle(vis1), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Phase')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Phase (53,54)')
plt.subplot(223)
plt.imshow(np.abs(vis2), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Amplitude (65,71)')
plt.subplot(224)
plt.imshow(np.angle(vis2), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Phase')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Phase (65,71)')
plt.tight_layout();
```
Exercise: How do the waterfall plots differ for an E/W baseline vs. a N/S baseline (of approximately similar lengths)? Does this make sense?
```
# your answer here
vis1 = uvd.get_data((23,26,'xx')) # short E/W baseline
vis2 = uvd.get_data((23,25,'xx')) # short N/S baseline
plt.figure(figsize=(10,8))
plt.subplot(221)
plt.imshow(np.abs(vis1), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Amplitude (53,54)')
plt.subplot(222)
plt.imshow(np.angle(vis1), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Phase')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Phase (53,54)')
plt.subplot(223)
plt.imshow(np.abs(vis2), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Amplitude (38,68)')
plt.subplot(224)
plt.imshow(np.angle(vis2), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Phase')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
#plt.title('Phase (38,68)')
plt.tight_layout();
```
# D) The Delay Transform
The delay transform is a clever technique we use to isolate bright foregrounds in our data (which we can then filter out). The delay transform is simply the Fourier transform of the visibility along frequency.
Exercise: Try implementing the delay transform using `np.fft.fft` by following the steps below. We will think about its interpretation after.
```
key = (23,26,'xx') # sample baseline and pol
vis = uvd.get_data(key) # get data
# 1) Fourier transform "vis" along the frequency axis (don't forget to fftshift after)
#vis_dt = # your answer here
vis_dt = np.fft.fftshift(np.fft.fft(vis,axis=1),axes=1) # Fourier-transform along frequency
# 2) Find the frequency width of a channel in GHz
#freq_width = # your answer here
freq_width = (np.max(uvd.freq_array)-np.min(uvd.freq_array))/uvd.Nfreqs/1e9 # GHz
#3) Convert frequencies to delays. Numpy's fftfreq function takes two arguments:
# the number of frequencies, and the frequeny width you calculated above
#delays = # your answer here
delays = np.fft.fftshift(np.fft.fftfreq(uvd.Nfreqs,freq_width))
plt.figure()
plt.imshow(np.abs(vis_dt), aspect='auto',
extent=(np.min(delays),np.max(delays),
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Delay (ns)')
plt.ylabel('LST (hours)')
plt.xlim(-1000,1000) # zoom-in
plt.title('Delay Transform');
```
Yuck! What happened? All that bright RFI, which is very well localized in frequency space, has spread out in delay space, causing those bright horizontal streaks.
Luckily, we flag RFI in our pipeline and the flags are saved as `uvd.get_flags(key)`.
Exercise: Plot a waterfall plot of the flags.
```
# your answer here
plt.figure()
plt.imshow(uvd.get_flags(key), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.title('Flags');
```
Exercise: Now plot the delay transform again, but this time multiply the visibility data by `~uvd.get_flags(key)` (the tilde inverts the flags, because the saved flags have 1's where the data are flagged).
```
# your answer here
vis = uvd.get_data(key)*~uvd.get_flags(key)
vis_dt = np.fft.fftshift(np.fft.fft(vis,axis=1),axes=1) # Fourier-transform along frequency
freq_width = (np.max(uvd.freq_array)-np.min(uvd.freq_array))/uvd.Nfreqs/1e9 # GHz
delays = np.fft.fftshift(np.fft.fftfreq(uvd.Nfreqs,freq_width)) # convert frequencies to delays
plt.figure()
plt.imshow(np.abs(vis_dt), aspect='auto',
extent=(np.min(delays),np.max(delays),
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Delay (ns)')
plt.ylabel('LST (hours)')
#plt.xlim(-1000,1000)
plt.title('Delay Transform');
```
That's better. So what do we see here? All the bright stuff at low delay values corresponds to bright foreground sources that are smooth in frequency (and therefore "peaky" in delay). This is nice for us because we've isolated the foregrounds and we can filter them out easily in this space.
Let's think about what "delay" means physically. A delay can be thought of as the time difference between when a lightwave hits one antenna and when it hits the second antenna. In other words, there is a time lag between light hitting each antenna depending on the direction it comes from and the orientation of the baseline.
Where would the light have to be coming from for the time delay to be zero? Where would it need to be coming from to produce a maximum delay (hint: we call this the horizon limit)?
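To build intuition for those questions, here is a toy calculation of the geometric delay $\tau = (b/c)\sin\theta$ for a source at angle $\theta$ from zenith along the baseline direction (assuming a 14 m baseline, similar to the spacings in this array):

```python
import numpy as np

b = 14.0  # baseline length in meters (an assumed value for illustration)
c = 3e8   # speed of light in m/s
for theta_deg in (0, 30, 90):  # zenith, mid-sky, horizon
    tau_ns = b / c * np.sin(np.radians(theta_deg)) * 1e9  # geometric delay in ns
    print(theta_deg, round(tau_ns, 2))
```

A source at zenith ($\theta = 0$) arrives at both antennas simultaneously (zero delay), while a source on the horizon ($\theta = 90^\circ$) produces the maximum delay $b/c$ — the horizon limit.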
Exercise: For the baseline we picked earlier, calculate the theoretical maximum delay (in nanoseconds), using $t = d/c$, where $t$ is the time delay, $d$ is the baseline distance, and $c$ is the speed of light. You can calculate the baseline distance using the saved `antpos` and `ants` variables from earlier, or approximate it using the antenna layout plot.
```
# your answer here
a1,a2 = key[0],key[1] # antennas involved in the baseline
print('Antennas:', a1, a2)
ind1 = np.where(ants == a1)[0] # where in antenna list the first antenna is
ind2 = np.where(ants == a2)[0] # where in antenna list the second antenna is
x1,y1 = antpos[ind1][0][0], antpos[ind1][0][1] # x and y coordinate values for the first antenna
x2,y2 = antpos[ind2][0][0], antpos[ind2][0][1] # x and y coordinate values for the second antenna
d = np.sqrt((x1-x2)**2 + (y1-y2)**2) # baseline distance in meters
c = 3e8 # speed of light in m/s
t = d/c * 1e9 # time delay in ns
print('Time Delay:', np.round(t), 'ns')
```
A cool trick for a faster calculation is to convert your distance $d$ to feet. That number is also approximately the time delay in nanoseconds!
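A quick sanity check of that rule of thumb: light travels almost exactly one foot per nanosecond.

```python
c = 299792458.0        # speed of light in m/s
foot = 0.3048          # one foot in meters
t_ns = foot / c * 1e9  # light travel time over one foot, in ns
print(round(t_ns, 3))  # ~1.017, so a distance in feet is roughly the delay in ns
```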
Also note that the foregrounds in our delay-transform plot spread out past this horizon limit. This is due to reflections and other effects. But they are still well isolated, and we can now filter them out!
```
from hera_cal import delay_filter
df = delay_filter.DelayFilter(uvd) # establish delay filter object
#df.load_data(uvd) # load UVData object
df.run_filter(to_filter=[key]) # filter the specific key we want (otherwise it takes a long time to do all keys)
vis_df = df.filtered_residuals[key] # filtered visibility (i.e. the cleaned visibility)
clean = df.CLEAN_models[key] # low delay modes we don't want (i.e. the stuff that gets filtered out)
# Plot
plt.figure(figsize=(10,3))
plt.subplot(131)
plt.imshow(np.abs(uvd.get_data(key)), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.title('Original Visibility')
plt.subplot(132)
plt.imshow(np.abs(vis_df), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Amplitude (Jy)')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.title('Cleaned Visibility')
plt.subplot(133)
plt.imshow(np.abs(clean), aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.colorbar(label='Visibility Phase')
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.title('CLEAN Components')
plt.tight_layout();
```
Comparing the left and middle plots, we see that we've reduced the amplitude of our visibility by a couple orders of magnitude! We also see that some of the foreground structure we see in the left plot has been removed, as the middle plot looks more noise-like. Delay-filtering is a crucial step of the HERA analysis pipeline - after all, we are trying to find a tiny signal (the 21cm EoR signal) buried underneath a lot of stuff we don't care about!
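The filtering idea can be illustrated with a toy high-pass filter in delay space (synthetic spectrum and cutoff chosen for illustration, not the CLEAN-based hera_cal implementation):

```python
import numpy as np

np.random.seed(0)
nfreq = 256
smooth = np.exp(-np.arange(nfreq) / 50.0)  # smooth "foreground" spectrum
noise = 0.01 * np.random.randn(nfreq)      # faint noise-like "signal"
spec = smooth + noise

dt = np.fft.fft(spec)                          # transform to delay space
dt[np.abs(np.fft.fftfreq(nfreq)) < 0.02] = 0   # zero the low-delay (smooth) modes
filtered = np.fft.ifft(dt).real                # back to frequency space

# The smooth foreground lived at low delay, so removing those modes
# strictly reduces the variance of the spectrum.
print(np.std(filtered) < np.std(spec))  # True
```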
# E) Extra Credit Exercises
1) Loop through all antennas and look at their auto-correlations (correlations between the same antennas, like (53,53) for example) to identify which antennas are dead/don't have data. Color those antennas red on the antenna layout plot.
2) Pick a baseline with obvious RFI in its visibility. Write an algorithm to identify the RFI, and compare your results to the actual flags.
3) Pick a baseline type (a common one in the array) and make a list containing all those redundant baselines. Then loop through them and run the delay filter on each visibility. Stack all the delay-filtered visibilities (average them). How much sensitivity did you gain by filtering and averaging?
```
# Exercise 1
dead_ants = [] # will hold dead antennas
for ant in ants: # loop over antennas
    auto = uvd.get_flags((ant,ant,'xx')) # get auto-correlation flags
    if np.all(auto): # if all flags are True
        dead_ants.append(ant)
print('Dead Antennas:', dead_ants)
plt.figure()
plt.scatter(antpos[:,0], antpos[:,1], marker='.', color='k', s=3000) # plot the antenna positions with black circles
for aa,ant in enumerate(ants): # loop over antennas
    plt.text(antpos[aa,0], antpos[aa,1], ants[aa], color='w', va='center', ha='center') # label antenna numbers
    if ant in dead_ants: plt.scatter(antpos[aa,0], antpos[aa,1], marker='.', color='r', s=3000) # override with red
plt.xlabel('X-position (m)')
plt.ylabel('Y-position (m)')
plt.axis('equal');
# Exercise 2
key = (23,26,'xx')
data = uvd.get_data(key)
# My simple flagger will difference the data along time and frequency
# and look for values (derivatives, or sudden changes) that deviate more than 1-sigma from the mean
data = np.abs(np.diff(data, axis=0)) # difference along time (magnitude, since the visibilities are complex)
data = np.abs(np.diff(data, axis=1)) # difference along freq
mean_data = np.mean(data) # mean
std_data = np.std(data) # std
my_flags = np.zeros(data.shape) # all zero flags
my_flags[np.where(data > mean_data + 1*std_data)] = 1 # make the flag equal to 1
for f in range(data.shape[1]): # loop over freqs
    if np.all(data[:,f] == 0): my_flags[:,f] = 1 # if data is 0 for all times, flag it
plt.figure(figsize=(10,4))
plt.subplot(121)
orig_flags = uvd.get_flags(key)
plt.imshow(orig_flags, aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.title("Flags")
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.subplot(122)
plt.imshow(my_flags, aspect='auto',
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.title("My Flags")
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.tight_layout();
# Exercise 3
bls_red = [] # redundant baselines
for aa,a1 in enumerate(ants): # loop over all antenna pairs (all baselines) to find 14-m E/W redundant baselines
    for a2 in ants[aa:]:
        ind1 = np.where(ants == a1)[0][0] # index of antenna 1
        ind2 = np.where(ants == a2)[0][0] # index of antenna 2
        antpos_diff = np.abs(antpos[ind1] - antpos[ind2]) # difference in antenna position
        if np.abs(antpos_diff[0]-14) < 1 and antpos_diff[1] < 0.5: bls_red.append((a1,a2))
stacked_vis = []
for bl in bls_red: # loop over redundant baselines
    key = (bl[0],bl[1],'xx')
    data = uvd.get_data(key)
    df = delay_filter.DelayFilter(uvd) # establish delay filter object, as above
    df.run_filter(to_filter=[key]) # run delay filter
    vis_df = df.filtered_residuals[key]
    stacked_vis.append(vis_df) # save cleaned visibility
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.imshow(np.abs(data), aspect='auto', norm=matplotlib.colors.LogNorm(1e-1,1e2),
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.title(key)
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.colorbar()
plt.subplot(122)
plt.imshow(np.abs(np.mean(stacked_vis,axis=0)), aspect='auto', norm=matplotlib.colors.LogNorm(1e-1,1e2),
extent=(np.min(uvd.freq_array[0])/1e6,np.max(uvd.freq_array[0])/1e6,
np.max(np.unique(uvd.lst_array))*12/np.pi,np.min(np.unique(uvd.lst_array))*12/np.pi))
plt.title("Averaged Data for Redundant Baselines")
plt.xlabel('Frequency (MHz)')
plt.ylabel('LST (hours)')
plt.colorbar()
plt.tight_layout();
```
# Lesson 06
## Analysis Module within ArcPy
# Objectives
- Examine the Analysis Tools
- Examine the Feature Analysis Tools
- Learn about the `tempfile` module
## Analysis Toolbox
- Collection of tools that perform the most fundamental GIS operations
- Five toolsets
+ Extract
+ Overlay
+ Pairwise Overlay
+ Proximity
+ Statistics
### Analysis Toolbox Example
- Here we need to split the center road line by each district
```
import os
import arcpy
from arcpy import env
from arcpy import da

env.overwriteOutput = True
results = []
cnt_lines = "./data/Centerlines.shp"
with da.SearchCursor("./data/School_Districts.shp", "SHAPE@") as srows:
    for idx, row in enumerate(srows):
        out_fc = os.path.join(env.scratchGDB, f"road_line_{idx}")
        geom = row[0]
        out_fc = arcpy.analysis.Clip(cnt_lines, geom, out_feature_class=out_fc)[0]
        results.append(out_fc)
results
```
### Example Notes:
- Mixed cursor with standard tools
- Used geometry as input to tools
# The `tempfile` Module
```
import tempfile
```
**Purpose:** Create temporary file system objects such as files and directories
## Why Use `tempfile`?
1. Creating temporary files with unique names
2. Reduces the risk of exposing data through predictable file locations
3. Built-in secure file generation
4. Automatic file deletion when the process finishes
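Beyond the context-manager helpers shown in the following sections, the lower-level `tempfile.mkstemp` securely creates a file that persists until you delete it yourself (a minimal sketch):

```python
import os
import tempfile

# mkstemp creates the file securely and returns an OS-level handle plus its path;
# unlike the context-manager helpers, deletion is the caller's responsibility.
fd, path = tempfile.mkstemp(suffix='.txt', prefix='adv_python_')
try:
    with os.fdopen(fd, 'w') as f:
        f.write('persists until explicitly removed')
    print(os.path.exists(path))  # True
finally:
    os.remove(path)  # manual cleanup
print(os.path.exists(path))      # False
```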
## Temporary Files
- Used with non-persistent files that are not shared between applications
- Files have no true name/extension
```
with tempfile.TemporaryFile() as temp:
    print('temp:')
    print(' {!r}'.format(temp))
    print('temp.name:')
    print(' {!r}'.format(temp.name))
```
## Temporary Files Continued
- There is the ability to read/write data into the temporary file
+ Defaults to `w+b`
```
with tempfile.TemporaryFile() as temp:
    temp.write(b'Python is so Amazing') # notice the b for bytes
    temp.seek(0) # rewind to the start of the file
    print(temp.read()) # read the file
```
## Temporary Files Continued
- Allows the use of other write modes.
```
with tempfile.TemporaryFile(mode='w+t') as writer: # text data writer
    writer.writelines(['**Shopping List**\n','Bread\n', 'Eggs\n', 'Milk\n'])
    writer.seek(0)
    for line in writer:
        print(line)
```
## Named Temporary Files
- Create temporary files that can be shared during the context of that process
```
import pathlib

with tempfile.NamedTemporaryFile() as temp:
    print('temp:')
    print(' {0}'.format(temp))
    print('temp.name:')
    print(' {0}'.format(temp.name))
    f = pathlib.Path(temp.name)
f"Do I Exist Anymore? {f.exists()}"
```
## Predictably Named Temporary Files
- Sometimes a file extension, prefix, or suffix is needed for a given temporary file
```
with tempfile.NamedTemporaryFile(suffix='.txt',
                                 prefix='adv_python_') as temp:
    print('temp:')
    print('  ', temp)
    print('temp.name:')
    print('  ', temp.name)
```
## Temporary Directories
- There are built in commands to access the OS' temporary folder location
- `ArcPy` has temporary storage locations just like `tempfile`
```
print('gettempdir():', tempfile.gettempdir())
print('gettempprefix():', tempfile.gettempprefix())
env.scratchFolder, env.scratchGDB
```
## Building Temporary Directories
- Sometimes a temporary sub-folder is needed to store information for one piece of the process
+ Think downloading information
```
pth = None
with tempfile.TemporaryDirectory(suffix='_storage',
                                 prefix='adv_python_') as temp:
    print(f'temp folder: {temp}')
    pth = os.path.join(temp, "test.txt")
    with open(pth, 'w') as writer:
        writer.write('hello there!')
    print("DO I EXIST: ", os.path.isfile(pth))  # inside the context: True
print("DO I EXIST: ", os.path.isfile(pth))  # after the context the directory is removed: False
```
## Recall our original example
```
import os
import arcpy
from arcpy import env
from arcpy import da

env.overwriteOutput = True
results = []
cnt_lines = "./data/Centerlines.shp"
with da.SearchCursor("./data/School_Districts.shp", "SHAPE@") as srows:
    for idx, row in enumerate(srows):
        out_fc = os.path.join(env.scratchGDB, f"road_line_{idx}")
        geom = row[0]
        out_fc = arcpy.analysis.Clip(cnt_lines, geom, out_feature_class=out_fc)[0]
        results.append(out_fc)
results
```
- Notice we manually set unique feature class names.
## Enhancements to Analysis
- Leverage `CreateUniqueName`
- Use `tempfile` to generate folders and intermediate files
- Leverage scratch GDB, scratch folders and in_memory workspaces
```
import os
import arcpy
import pandas as pd
from arcpy import env
from arcpy import da

env.overwriteOutput = True
dfs = []
results = []
cnt_lines = "./data/Centerlines.shp"
with da.SearchCursor("./data/School_Districts.shp", ["SHAPE@", 'NAME_']) as srows:
    for idx, row in enumerate(srows):
        out_fc = arcpy.CreateUniqueName(os.path.join(env.scratchGDB, "road_line"))
        geom = row[0]
        road_lines = arcpy.analysis.Clip(cnt_lines, geom, out_feature_class=out_fc)[0]
        arcpy.management.AddFields(road_lines, "LENGTH_MI FLOAT # # # #;POINTS LONG # # # #")
        arcpy.management.CalculateGeometryAttributes(road_lines, [["length_mi", "LENGTH_GEODESIC"],
                                                                  ["points", "POINT_COUNT"]], "MILES_US")
        with da.SearchCursor(road_lines, "*") as rows:
            flds = rows.fields
            for r in rows:
                r = dict(zip(flds, r))
                r['SCHOOL_DISTICT'] = row[1]
                results.append(r)
            del r
        del rows
    del idx, row
```
## Examine Number of Roads in Each District
```
df = pd.DataFrame(results)

import numpy as np
from matplotlib import cm

color = cm.plasma(np.linspace(.4,.8, 10))
df.SCHOOL_DISTICT.value_counts().plot(kind='bar', color=color)

with da.SearchCursor(results[0], "*") as rows:
    print(rows.fields)
    for row in rows:
        print(row)
        break
```
# ARTIFICIAL INTELLIGENCE AND LIFE IN 2030
## Preface
- The overarching purpose of the One Hundred Year Study's periodic expert review is to provide a collected and connected set of reflections about AI and its influences as the field advances.
## Executive Summary
- AI is a science and a set of computational technologies that are inspired by --but typically operate quite differently from -- the ways people use their nervous systems and bodies to sense, learn, reason and take action.
- While drawing on common research and technologies, AI systems are specialized to accomplish particular tasks. Each application requires years of focused research and a careful, unique construction.
- While the rate of progress in AI has been patchy and unpredictable, there have been significant advances since the field's inception 60 years ago.
- Substantial increases in the future uses of AI applications, including more self-driving cars, healthcare diagnostics and targeted treatment, and physical assistance for elder care can be expected.
- Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030.
## Overview
Many have already grown accustomed to touching and talking to their smart phones. People's future relationships with machines will become ever more nuanced, fluid, and personalized.
Society is now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote rather than hinder democratic values such as freedom, equality, and transparency.
Longer term, AI may be thought of as a radically different mechanism for wealth creation in which everyone should be entitled to a portion of the world's AI-produced treasures.
### What is next for AI research?
The field of AI is shifting toward building intelligent systems that can collaborate effectively with people, including creative ways to develop interactive and scalable ways for people to teach robots.
These trends drive the currently “hot” areas of AI research into both fundamental methods and application areas:
- **Large-scale machine learning** concerns the design of learning algorithms, as well as scaling existing algorithms, to work with extremely large data sets. Many of the basic problems in machine learning (such as supervised and unsupervised learning) are well understood; a major focus of current efforts is scaling. For example, whereas traditional methods could afford to make several passes over the data set, modern ones are designed to make only a single pass; in some cases, only sublinear methods (those that only look at a fraction of the data) can be admitted.
- **Deep learning**, a class of learning procedures, has facilitated object recognition in images, video labeling, and activity recognition, and is making significant inroads into other areas of perception, such as audio, speech, and natural language processing.
- **Reinforcement learning** is a framework that shifts the focus of machine learning from pattern recognition to experience-driven sequential decision-making. It promises to carry AI applications forward toward taking actions in the real world. While largely confined to academia over the past several decades, it is now seeing some practical, real-world successes.
Whereas traditional machine learning has mostly focused on pattern mining, reinforcement learning shifts the focus to decision making, and is a technology that will help AI to advance more deeply into the realm of learning about and executing actions in the real world. It has existed for several decades as a framework for experience-driven sequential decision-making, but the methods have not found great success in practice, mainly owing to issues of representation and scaling. However, the advent of deep learning has provided reinforcement learning with a "shot in the arm." The recent success of AlphaGo, a computer program developed by Google DeepMind that beat the human Go champion in a five-game match, was due in large part to reinforcement learning. AlphaGo was trained by initializing an automated agent with a human expert database, but was subsequently refined by playing a large number of games against itself and applying reinforcement learning.
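As an illustrative aside (not from the report), the experience-driven update at the heart of reinforcement learning can be sketched with tabular Q-learning on a toy chain environment:

```python
import random

# Tabular Q-learning on a 5-state chain: states 0..4, actions 0 (left) / 1 (right),
# reward 1 for reaching state 4. A didactic sketch, not a production algorithm.
random.seed(0)
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def step(s, a):
    """Deterministic chain dynamics: return (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

def choose(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(n_actions)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(300):  # episodes
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])  # experience-driven value update
        s = s2

policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(n_states)]
print(policy)  # greedy policy: move right from every non-terminal state
```

The agent learns purely from sampled transitions, with no model of the environment; the same update rule, combined with deep networks for the value function, is the scaling ingredient behind systems like AlphaGo.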
- **Robotics** is currently concerned with how to train a robot to interact with the world around it in generalizable and predictable ways, how to facilitate manipulation of objects in interactive environments, and how to interact with people. Advances in robotics will rely on commensurate advances to improve the reliability and generality of computer vision and other forms of machine perception.
- **Computer vision** is currently the most prominent form of machine perception. It has been the sub-area of AI most transformed by the rise of deep learning. For the first time, computers are able to perform some vision tasks better than people. Much current research is focused on automatic image and video captioning.
## What is Artificial Intelligence?
Intelligence lies on a multi-dimensional spectrum. According to this view, the difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality.
## Section II: AI by Domain
### Education
Though quality education will always require active engagement by human teachers, AI promises to enhance education at all levels, especially by providing personalization at scale.
The current absence of sophisticated use of AI technologies in schools, colleges, and universities may be explained by the lack of financial resources as well as the lack of data establishing the technologies' effectiveness.
## Business Understanding
I am interested in using a data analysis approach to learn more about the situation of women in computer programming. I hope the analysis results provide useful information to anyone who needs this kind of research. The key questions I would like to answer are:
- What is the situation of women in Italy in the programming world compared to men?
- What is the profile of women who work in computer programming in Italy? What are their qualifications?
- And in the USA, what is the situation of highly qualified women in tech?
## Data Understanding
The data used in this analysis were Stack Overflow's developer survey data from 2017 to 2019. Respondents from about 200 countries gave their answers to about 150 survey questions. This notebook attempts to use the survey questions to answer the three questions listed in the Business Understanding section.
## Gather Data
The data were gathered by the Stack Overflow survey. The following cells import the necessary Python libraries and read the data into Pandas DataFrames.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandasql import sqldf
```
## Italy situation in 2017
## Prepare Data
The following cell accesses the data and selects the columns and values needed for the analysis.
```
#Read data
stack_2017 = pd.read_csv('2017_survey_results_public.csv')
stack_2017 = stack_2017[['Country','Gender','FormalEducation','Professional','Salary']]
Italy_2017 = stack_2017.loc[stack_2017['Country']=='Italy']
#In this part I used dropna in order to drop all null values in the column Gender. For my analysis I only need to know whether the gender is Male or Female#
Ita_2017 = Italy_2017.dropna(subset = ['Gender'])
Ita_17 = Ita_2017[Ita_2017['Gender'].isin(['Male','Female'])]
Ita_Dev17 = Ita_17[Ita_17['Professional'].isin(['Professional developer'])]
Ita_F17 = Ita_Dev17.loc[Ita_Dev17['Gender']=='Female']
Ita_M17 = Ita_Dev17.loc[Ita_Dev17['Gender']=='Male']
```
## Results for Italy in 2017
```
# A first overview of the total number of people in tech world based on the data of the survey#
print(Ita_Dev17.shape[0])
# A first overview of the number of female in tech world#
print(Ita_F17.shape[0])
# A first overview of the number of male in tech world#
print(Ita_M17.shape[0])
perc_Ita_F = Ita_F17.shape[0] / Ita_Dev17.shape[0] *100
#Percentage of women in tech world in Italy #
"{:.2f} %".format(perc_Ita_F)
func = lambda q : sqldf(q , globals())
q = """
select *
from Ita_Dev17
where FormalEducation like '%Master%' or
FormalEducation like '%Bachelor%' or
FormalEducation like '%Professional%' or
FormalEducation like '%doctoral%';
"""
Dev_OverQual17_Ita = func(q)
Dev_OverQual17_Ita.head()
#percentage of women developers overqualified among all women developers#
Dev_OverQual_F17_Ita = len(Dev_OverQual17_Ita.loc[Dev_OverQual17_Ita['Gender']=='Female'])/Ita_F17.shape[0] * 100
#percentage of men developers overqualified among all men developers#
Dev_OverQual_M17_Ita = len(Dev_OverQual17_Ita.loc[Dev_OverQual17_Ita['Gender']=='Male'])/Ita_M17.shape[0] * 100
# output: percentage of women developers overqualified among all women developers#
print("{:.2f} %".format(Dev_OverQual_F17_Ita))
# output: percentage of men developers overqualified among all men developers#
print("{:.2f} %".format(Dev_OverQual_M17_Ita))
## export outputs ##
Italy_2017.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\Italy_2017.xlsx', index=False, header=True)
Ita_2017.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\ItaDropnaByGender_2017.xlsx', index=False, header=True)
Ita_17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\ItaMF_17.xlsx', index=False, header=True)
Ita_Dev17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\ItaDev_17.xlsx', index=False, header=True)
Ita_F17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\ItaDev_F17.xlsx', index=False, header=True)
Ita_M17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\ItaDev_M17.xlsx', index=False, header=True)
Dev_OverQual17_Ita.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Ita_17\ItaDev_OverQ17.xlsx', index=False, header=True)
```
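As an aside, the repeated percentage-and-format steps in the cell above could be factored into a small helper; `pct` is a name introduced here for illustration, not part of the original notebook:

```python
def pct(part, whole):
    """Format part/whole as a percentage string, like the cells above."""
    return "{:.2f} %".format(part / whole * 100)

# hypothetical counts, for illustration only:
print(pct(12, 150))  # 8.00 %
```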
## USA Situation in 2017
## Prepare Data
The following cell accesses the data and selects the columns and values needed for the analysis.
```
Usa_2017 = stack_2017.loc[stack_2017['Country']=='United States']
#In this part I used dropna in order to drop all null values in the column Gender. For my analysis I only need to know whether the gender is Male or Female#
Usa_17 = Usa_2017.dropna(subset = ['Gender'])
USA_MF17 = Usa_17[Usa_17['Gender'].isin(['Male','Female'])]
Usa_Dev17 = USA_MF17[USA_MF17['Professional'].isin(['Professional developer'])]
Usa_F17 = Usa_Dev17.loc[Usa_Dev17['Gender']=='Female']
Usa_M17 = Usa_Dev17.loc[Usa_Dev17['Gender']=='Male']
```
## Results for Usa in 2017
```
# A first overview of the total number of people in tech world in Usa based on the data of the survey#
print(Usa_Dev17.shape[0])
# A first overview of the number of women in tech world in Usa based on the data of the survey#
print(Usa_F17.shape[0])
# A first overview of the total number of men in tech world in Usa based on the data of the survey#
print(Usa_M17.shape[0])
perc_Usa_F = Usa_F17.shape[0] / Usa_Dev17.shape[0] *100
#Percentage of women in tech world in Usa in 2017 #
"{:.2f} %".format(perc_Usa_F)
func_1 = lambda d : sqldf(d , globals())
d = """
select *
from Usa_Dev17
where FormalEducation like '%Master%' or
FormalEducation like'%Bachelor%' or
FormalEducation like'%Professional%' or
FormalEducation like'%doctoral%';
"""
Dev_OverQual17_Usa = func_1(d)
Dev_OverQual17_Usa.head()
### percentage of women developers overqualified among all women developers##
Dev_OverQual_F17_Usa = len(Dev_OverQual17_Usa.loc[Dev_OverQual17_Usa['Gender']=='Female'])/Usa_F17.shape[0] * 100
### percentage of men developers overqualified among all men developers##
Dev_OverQual_M17_Usa = len(Dev_OverQual17_Usa.loc[Dev_OverQual17_Usa['Gender']=='Male'])/Usa_M17.shape[0] * 100
### percentage of women developers overqualified among all women developers##
print("{:.2f} %".format(Dev_OverQual_F17_Usa))
### percentage of men developers overqualified among all men developers##
print("{:.2f} %".format(Dev_OverQual_M17_Usa))
########### export outputs #########################
Usa_2017.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\Usa_2017.xlsx', index=False, header=True)
Usa_17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\UsaDropnaByGender_2017.xlsx', index=False, header=True)
USA_MF17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\UsaMF_17.xlsx', index=False, header=True)
Usa_Dev17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\UsaDev_17.xlsx', index=False, header=True)
Usa_F17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\UsaDev_F17.xlsx', index=False, header=True)
Usa_M17.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\UsaDev_M17.xlsx', index=False, header=True)
Dev_OverQual17_Usa.to_excel(r'C:\Users\moryb\OneDrive\Desktop\Project1\output\output_17\Usa_17\UsaDev_OverQ17.xlsx', index=False, header=True)
```
## Dogs v Cats super-charged!
```
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=299
arch=resnext50
bs=28
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs, num_workers=4)
learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)
learn.fit(1e-2, 1)
learn.precompute=False
learn.fit(1e-2, 2, cycle_len=1)
learn.unfreeze()
lr=np.array([1e-4,1e-3,1e-2])
learn.fit(lr, 3, cycle_len=1)
learn.save('224_all_50')
learn.load('224_all_50')
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy_np(probs,y)
```
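As a side note on the `learn.TTA()` step above: test-time augmentation scores several augmented versions of each image, and the predictions are combined by averaging the class probabilities. With toy numbers:

```python
import numpy as np

# two augmentations of one image, two classes; log-probabilities in, probabilities out
log_preds = np.log(np.array([[[0.2, 0.8]],
                             [[0.4, 0.6]]]))
probs = np.mean(np.exp(log_preds), 0)  # average over the augmentation axis
print(probs)  # [[0.3 0.7]]
```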
## Analyzing results
```
preds = np.argmax(probs, axis=1)
probs = probs[:,1]
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
plot_confusion_matrix(cm, data.classes)
def rand_by_mask(mask): return np.random.choice(np.where(mask)[0], 4, replace=False)
def rand_by_correct(is_correct): return rand_by_mask((preds == data.val_y)==is_correct)
def plot_val_with_title(idxs, title):
    imgs = np.stack([data.val_ds[x][0] for x in idxs])
    title_probs = [probs[x] for x in idxs]
    print(title)
    return plots(data.val_ds.denorm(imgs), rows=1, titles=title_probs)

def plots(ims, figsize=(12,6), rows=1, titles=None):
    f = plt.figure(figsize=figsize)
    for i in range(len(ims)):
        sp = f.add_subplot(rows, len(ims)//rows, i+1)
        sp.axis('Off')
        if titles is not None: sp.set_title(titles[i], fontsize=16)
        plt.imshow(ims[i])

def load_img_id(ds, idx): return np.array(PIL.Image.open(PATH+ds.fnames[idx]))

def plot_val_with_title(idxs, title):  # redefined to plot full-size images from disk
    imgs = [load_img_id(data.val_ds,x) for x in idxs]
    title_probs = [probs[x] for x in idxs]
    print(title)
    return plots(imgs, rows=1, titles=title_probs, figsize=(16,8))

def most_by_mask(mask, mult):
    idxs = np.where(mask)[0]
    return idxs[np.argsort(mult * probs[idxs])[:4]]

def most_by_correct(y, is_correct):
    mult = -1 if (y==1)==is_correct else 1
    # parenthesize the comparisons: & binds more tightly than ==
    return most_by_mask(((preds == data.val_y)==is_correct) & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
```
| github_jupyter |
##### Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# Universal Sentence Encoder
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder%2F4%20OR%20google%2Funiversal-sentence-encoder-large%2F5"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
This notebook illustrates how to access the Universal Sentence Encoder and use it for sentence similarity and sentence classification tasks.
The Universal Sentence Encoder makes getting sentence-level embeddings as easy as it has historically been to look up the embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence-level semantic similarity, as well as to enable better performance on downstream classification tasks with less supervised training data.
## Setup
This section sets up the environment for access to the Universal Sentence Encoder on TF Hub and provides examples of applying the encoder to words, sentences, and paragraphs.
```
%%capture
!pip3 install seaborn
```
More detailed information about installing TensorFlow can be found at [https://www.tensorflow.org/install/](https://www.tensorflow.org/install/).
```
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
model = hub.load(module_url)
print ("module %s loaded" % module_url)
def embed(input):
return model(input)
#@title Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
message_embeddings = embed(messages)
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
```
# Semantic Textual Similarity Task Example
The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of the encodings.
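Since the encodings are approximately unit-length, the inner product behaves like a cosine similarity. A minimal NumPy sketch with toy vectors (illustrative stand-ins, not real encoder outputs) shows the idea:

```python
import numpy as np

# Toy "embeddings": two similar vectors and one dissimilar one.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2])
c = np.array([-3.0, 0.5, -1.0])

# Normalize to unit length, as the encoder outputs approximately are.
a, b, c = (v / np.linalg.norm(v) for v in (a, b, c))

# For unit vectors the inner product equals the cosine similarity.
sim_ab = float(np.inner(a, b))
sim_ac = float(np.inner(a, c))
print(sim_ab > sim_ac)  # similar vectors score higher
```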
```
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(messages_):
message_embeddings_ = embed(messages_)
plot_similarity(messages_, message_embeddings_, 90)
```
## Similarity Visualized
Here we show the similarity in a heat map. The final graph is an 11x11 matrix (one row and column per message below) where each entry `[i, j]` is colored based on the inner product of the encodings for sentences `i` and `j`.
```
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
run_and_plot(messages)
```
## Evaluation: STS (Semantic Textual Similarity) Benchmark
The [**STS Benchmark**](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. [Pearson correlation](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is then used to evaluate the quality of the machine similarity scores against human judgements.
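For reference, Pearson's r is the covariance of the two score lists normalized by the product of their standard deviations. The benchmark code later in this notebook uses `scipy.stats.pearsonr`, but the quantity itself can be sketched in plain NumPy (toy numbers below, not real STS scores):

```python
import numpy as np

def pearson_r(x, y):
    # Covariance of x and y divided by the product of their standard deviations.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

machine_scores = [0.1, 0.4, 0.8, 0.9]  # hypothetical model similarities
human_scores = [0.0, 0.5, 0.7, 1.0]    # hypothetical human judgements
r = pearson_r(machine_scores, human_scores)  # close to 1: good agreement
```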
### Download data
```
import pandas
import scipy
import math
import csv
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = pandas.read_table(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"),
error_bad_lines=False,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
sts_test = pandas.read_table(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"),
error_bad_lines=False,
quoting=csv.QUOTE_NONE,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
# cleanup some NaN values in sts_dev
sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]]
```
### Evaluate Sentence Embeddings
```
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
def run_sts_benchmark(batch):
sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi
"""Returns the similarity scores"""
return scores
dev_scores = sts_data['sim'].tolist()
scores = []
for batch in np.array_split(sts_data, 10):
scores.extend(run_sts_benchmark(batch))
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
```
```
%matplotlib inline
import gym
import matplotlib
import numpy as np
import sys
from collections import defaultdict
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.blackjack import BlackjackEnv
from lib import plotting
matplotlib.style.use('ggplot')
env = BlackjackEnv()
def make_epsilon_greedy_policy(Q, epsilon, nA):
"""
Creates an epsilon-greedy policy based on a given Q-function and epsilon.
Args:
Q: A dictionary that maps from state -> action-values.
Each value is a numpy array of length nA (see below)
epsilon: The probability of selecting a random action. Float between 0 and 1.
nA: Number of actions in the environment.
Returns:
A function that takes the observation as an argument and returns
the probabilities for each action in the form of a numpy array of length nA.
"""
def policy_fn(observation):
A = np.ones(nA, dtype=float) * epsilon / nA
best_action = np.argmax(Q[observation])
A[best_action] += (1.0 - epsilon)
return A
return policy_fn
def mc_control_epsilon_greedy(env, num_episodes, discount_factor=1.0, epsilon=0.1):
"""
Monte Carlo Control using Epsilon-Greedy policies.
Finds an optimal epsilon-greedy policy.
Args:
env: OpenAI gym environment.
num_episodes: Number of episodes to sample.
discount_factor: Gamma discount factor.
epsilon: Chance to sample a random action. Float between 0 and 1.
Returns:
A tuple (Q, policy).
Q is a dictionary mapping state -> action values.
policy is a function that takes an observation as an argument and returns
action probabilities
"""
# Keeps track of sum and count of returns for each state
# to calculate an average. We could use an array to save all
# returns (like in the book) but that's memory inefficient.
returns_sum = defaultdict(float)
returns_count = defaultdict(float)
# The final action-value function.
# A nested dictionary that maps state -> (action -> action-value).
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# The policy we're following
policy = make_epsilon_greedy_policy(Q, epsilon, env.action_space.n)
for i_episode in range(1, num_episodes + 1):
# Print out which episode we're on, useful for debugging.
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# Generate an episode.
# An episode is an array of (state, action, reward) tuples
episode = []
state = env.reset()
for t in range(100):
probs = policy(state)
action = np.random.choice(np.arange(len(probs)), p=probs)
next_state, reward, done, _ = env.step(action)
episode.append((state, action, reward))
if done:
break
state = next_state
# Find all (state, action) pairs we've visited in this episode
# We convert each state to a tuple so that we can use it as a dict key
sa_in_episode = set([(tuple(x[0]), x[1]) for x in episode])
for state, action in sa_in_episode:
sa_pair = (state, action)
# Find the first occurrence of the (state, action) pair in the episode
first_occurence_idx = next(i for i,x in enumerate(episode)
if x[0] == state and x[1] == action)
# Sum up all rewards since the first occurrence
G = sum([x[2]*(discount_factor**i) for i,x in enumerate(episode[first_occurence_idx:])])
# Calculate average return for this state over all sampled episodes
returns_sum[sa_pair] += G
returns_count[sa_pair] += 1.0
Q[state][action] = returns_sum[sa_pair] / returns_count[sa_pair]
# The policy is improved implicitly by changing the Q dictionary
return Q, policy
Q, policy = mc_control_epsilon_greedy(env, num_episodes=500000, epsilon=0.1)
# For plotting: Create value function from action-value function
# by picking the best action at each state
V = defaultdict(float)
for state, actions in Q.items():
action_value = np.max(actions)
V[state] = action_value
plotting.plot_value_function(V, title="Optimal Value Function")
```
# Advection-Diffusion
In this example, we will learn how to perform an advection-diffusion simulation of a given chemical species through a `Cubic` network. The algorithm can be applied to more complex networks in the same manner as described in this example. For the sake of simplicity, a one-layer 3D cubic network is used here. In `OpenPNM`, four space-discretization schemes are available for the advection-diffusion problem:
1. Upwind
2. Hybrid
3. Powerlaw
4. Exponential
Depending on the Peclet number characterizing the transport (the ratio of advective to diffusive fluxes), the solutions obtained using these schemes may differ. To achieve high numerical accuracy, the user should use either the `powerlaw` or the `exponential` scheme.
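As a quick sanity check before choosing a scheme, the throat-scale Peclet number can be estimated as Pe = u·L/D, with u the pore-scale velocity, L a characteristic length (e.g. the throat length), and D the diffusion coefficient. A small sketch with illustrative values (not taken from the network built below):

```python
def peclet(velocity, length, diffusivity):
    # Ratio of the advective transport rate to the diffusive transport rate.
    return velocity * length / diffusivity

# Illustrative values: ~1 mm/s flow across a 100-micron throat, air-like diffusivity.
Pe = peclet(velocity=1e-3, length=1e-4, diffusivity=2e-5)
print(Pe)  # Pe << 1: diffusion dominates; Pe >> 1: advection dominates
```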
## Generating network
First, we need to generate a `Cubic` network. For now, we stick to a one-layer 3D network, but you might as well try more complex networks!
```
import numpy as np
import openpnm as op
np.random.seed(10)
%matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 40
np.set_printoptions(precision=5)
net = op.network.Cubic(shape=[1, 20, 30], spacing=1e-4)
```
## Adding geometry
Next, we need to add a geometry to the generated network. A geometry contains information about the sizes of the pores/throats in a network. `OpenPNM` has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. For now, we stick to a sample geometry called `StickAndBall` that assigns random values to pore/throat diameters.
```
geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
```
## Adding phase
Next, we need to add a phase to our simulation. A phase object contains thermophysical information about a working fluid in the simulation. `OpenPNM` has tons of prebuilt phases as well! For this simulation, we use air as our working fluid.
```
air = op.phases.Air(network=net)
```
## Adding physics
Finally, we need to add a physics object. A physics object contains information about the working fluid that depends on the geometry of the network. A good example is diffusive conductance, which depends not only on the thermophysical properties of the working fluid, but also on the geometry of the pores/throats.
```
phys_air = op.physics.Standard(network=net, phase=air, geometry=geom)
```
# Performing Stokes flow
Note that the advection-diffusion algorithm assumes that the velocity field is given. Naturally, we solve Stokes flow inside a pore network model to obtain the pressure field, and eventually the velocity field. Therefore, we need to run the `StokesFlow` algorithm prior to running our advection-diffusion simulation. There's a separate tutorial on how to run `StokesFlow` in `OpenPNM`, but here's a simple code snippet that does the job for us.
```
sf = op.algorithms.StokesFlow(network=net, phase=air)
sf.set_value_BC(pores=net.pores('left'), values=200.0)
sf.set_value_BC(pores=net.pores('right'), values=0.0)
sf.run();
```
It is essential that you attach the results from `StokesFlow` (i.e. pressure field) to the corresponding phase, since the results from any algorithm in `OpenPNM` are by default only attached to the algorithm object (in this case to `sf`). Here's how you can update your phase:
```
air.update(sf.results())
```
## Performing advection-diffusion
Now that everything is set up, it's time to perform our advection-diffusion simulation. For this purpose, we need to add the corresponding algorithm to our simulation. As mentioned above, `OpenPNM` supports four different discretizations that may be used with the `AdvectionDiffusion` and `Dispersion` algorithms.
Setting the discretization scheme can be performed when defining the physics model as follows:
```
mod = op.models.physics.ad_dif_conductance.ad_dif
phys_air.add_model(propname='throat.ad_dif_conductance', model=mod, s_scheme='powerlaw')
```
Then, the advection-diffusion algorithm is defined by:
```
ad = op.algorithms.AdvectionDiffusion(network=net, phase=air)
```
Note that `network` and `phase` are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase we want to run the algorithm.
Note that you can also specify the discretization scheme by modifying the `settings` of our `AdvectionDiffusion` algorithm. You can choose between `upwind`, `hybrid`, `powerlaw`, and `exponential`.
It is important to note that the scheme specified within the algorithm's settings is only used when calling the `rate` method for post processing.
## Adding boundary conditions
Next, we need to add some boundary conditions to the simulation. By default, `OpenPNM` assumes zero flux for the boundary pores.
```
inlet = net.pores('left')
outlet = net.pores(['right', 'top', 'bottom'])
ad.set_value_BC(pores=inlet, values=100.0)
ad.set_value_BC(pores=outlet, values=0.0)
```
`set_value_BC` applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or `ndarray`) as the `values` parameter.
## Running the algorithm
Now, it's time to run the algorithm. This is done by calling the `run` method attached to the algorithm object.
```
ad.run();
```
# Post processing
When an algorithm is successfully run, the results are attached to the same object. To access the results, you need to know the quantity for which the algorithm was solving. For instance, `AdvectionDiffusion` solves for the quantity `pore.concentration`, which is somewhat intuitive. However, if you ever forget it, or wanted to manually check the quantity, you can take a look at the algorithm `settings`:
```
print(ad.settings)
```
Now that we know the quantity for which `AdvectionDiffusion` was solved, let's take a look at the results:
```
c = ad['pore.concentration']
```
## Heatmap
Since the network is effectively two-dimensional (a single layer), we can reshape the results into a 2D array matching the shape of the network and plot its heatmap using `matplotlib`.
```
print('Network shape:', net._shape)
c2d = c.reshape((net._shape))
#NBVAL_IGNORE_OUTPUT
import matplotlib.pyplot as plt
plt.imshow(c2d[0,:,:]);
plt.title('Concentration (mol/m$^3$)');
plt.colorbar();
```
# ZenML: Create production-ready ML pipelines
Our goal here is to help you get your first practical experience with our tool and give you a brief overview of some basic functionalities of ZenML. We will start locally in the Jupyter notebook and then transition to a more robust environment with Kubeflow Pipelines.
This guide is designed to provide a practical introduction to transitioning from a local setup to a more production-ready MLOps stack. If you want more detail, our [full documentation](https://docs.zenml.io/) provides more on the concepts and how to implement them.

## Install libraries
```
# Install the ZenML CLI tool and the integrations used in this guide
!pip install zenml
!zenml integration install kubeflow -f
!zenml integration install sklearn -f
```
Once the installation is completed, you can go ahead and create your first ZenML repository for your project. As ZenML repositories are built on top of Git repositories, you can create yours in any empty directory by running:
```
# Initialize a git repository
!git init
# Initialize ZenML's .zen file
!zenml init
```
# Start with the local stack
The above commands have automatically created a local MLOps stack for you and set it to active:
```
!zenml stack get
```

## Create your first pipeline with the local_stack
Let's first do the imports
```
import os
import logging
import numpy as np
from sklearn.base import ClassifierMixin
from zenml.integrations.sklearn.helpers.digits import get_digits, get_digits_model
from zenml.pipelines import pipeline
from zenml.steps import step
from zenml.steps.step_output import Output
```
## Define ZenML Steps
In the code that follows, you can see that we are defining the various steps of our pipeline. Each step is decorated with `@step`, the main abstraction that is currently available for creating pipeline steps.
The first step is an `importer` step that loads the sklearn digits dataset as NumPy arrays.
```
@step
def importer() -> Output(
X_train=np.ndarray, X_test=np.ndarray, y_train=np.ndarray, y_test=np.ndarray
):
"""Loads the digits array as normal numpy arrays."""
X_train, X_test, y_train, y_test = get_digits()
return X_train, X_test, y_train, y_test
```
Then we add a `normalizer` step that takes the train and test sets as input and normalizes each with its mean and standard deviation.
```
@step
def normalizer(
X_train: np.ndarray, X_test: np.ndarray
) -> Output(X_train_normed=np.ndarray, X_test_normed=np.ndarray):
"""Normalize digits dataset with mean and standard deviation."""
X_train_normed = (X_train - np.mean(X_train)) / np.std(X_train)
X_test_normed = (X_test - np.mean(X_test)) / np.std(X_test)
return X_train_normed, X_test_normed
```
We then add a `trainer` step, that takes the normalized data and trains a sklearn model on the data.
```
@step(enable_cache=False)
def trainer(
X_train: np.ndarray,
y_train: np.ndarray,
) -> ClassifierMixin:
"""Train a simple sklearn classifier for the digits dataset."""
model = get_digits_model()
model.fit(X_train, y_train)
return model
```
Finally, we add an `evaluator` step to see how we did on the dataset!
```
@step
def evaluator(
X_test: np.ndarray,
y_test: np.ndarray,
model: ClassifierMixin,
) -> float:
"""Calculate the accuracy on the test set"""
test_acc = model.score(X_test, y_test)
logging.info(f"Test accuracy: {test_acc}")
return test_acc
```
## Define ZenML Pipeline
A pipeline is defined with the `@pipeline` decorator. This defines the various steps of the pipeline and specifies the dependencies between the steps, thereby determining the order in which they will be run.
```
@pipeline
def mnist_pipeline(
importer,
normalizer,
trainer,
evaluator,
):
# Link all the steps together
X_train, X_test, y_train, y_test = importer()
X_trained_normed, X_test_normed = normalizer(X_train=X_train, X_test=X_test)
model = trainer(X_train=X_trained_normed, y_train=y_train)
evaluator(X_test=X_test_normed, y_test=y_test, model=model)
```
## Run the pipeline
Running the pipeline is as simple as calling the `run()` method on an instance of the defined pipeline.
```
# Initialize the pipeline
first_pipeline = mnist_pipeline(
importer=importer(),
normalizer=normalizer(),
trainer=trainer(),
evaluator=evaluator(),
)
first_pipeline.run()
```
# Transitioning to Kubeflow Pipelines
We got pretty good results on the digits model that we trained, but at some point we want to get out of this notebook local stack and go to a stack which looks more like production. Here is where the ZenML [Kubeflow Pipelines](https://github.com/kubeflow/pipelines) integration helps!
## Pre-requisites
In order to run this example, you need to have installed:
* [Docker](https://docs.docker.com/get-docker/)
* [K3D](https://k3d.io/v5.2.1/)
* [Kubectl](https://kubernetes.io/docs/tasks/tools/)
## Define requirements
```
%%writefile requirements.txt
scikit-learn
pandas
numpy
```
## Create a Kubeflow Stack

```
!zenml container-registry register local_registry --uri=localhost:5000
!zenml orchestrator register kubeflow_orchestrator --type=kubeflow
!zenml stack register local_kubeflow_stack -m local_metadata_store -a local_artifact_store -o kubeflow_orchestrator -c local_registry
!zenml stack set local_kubeflow_stack
```
## Let's spin up the stack
```
!zenml stack up
```
## Write the pipeline to disk
```
%%writefile run.py
# Copyright (c) ZenML GmbH 2021. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
import logging
import os
import numpy as np
from sklearn.base import ClassifierMixin
from zenml.integrations.sklearn.helpers.digits import get_digits, get_digits_model
from zenml.pipelines import pipeline
from zenml.steps import step
from zenml.steps.step_output import Output
# Path to a pip requirements file that contains requirements necessary to run
# the pipeline
requirements_file = os.path.join(os.path.dirname(__file__), "requirements.txt")
@step
def importer() -> Output(
X_train=np.ndarray, X_test=np.ndarray, y_train=np.ndarray, y_test=np.ndarray
):
"""Loads the digits array as normal numpy arrays."""
X_train, X_test, y_train, y_test = get_digits()
return X_train, X_test, y_train, y_test
@step
def normalizer(
X_train: np.ndarray, X_test: np.ndarray
) -> Output(X_train_normed=np.ndarray, X_test_normed=np.ndarray):
"""Normalize digits dataset with mean and standard deviation."""
X_train_normed = (X_train - np.mean(X_train)) / np.std(X_train)
X_test_normed = (X_test - np.mean(X_test)) / np.std(X_test)
return X_train_normed, X_test_normed
@step(enable_cache=False)
def trainer(
X_train: np.ndarray,
y_train: np.ndarray,
) -> ClassifierMixin:
"""Train a simple sklearn classifier for the digits dataset."""
model = get_digits_model()
model.fit(X_train, y_train)
return model
@step
def evaluator(
X_test: np.ndarray,
y_test: np.ndarray,
model: ClassifierMixin,
) -> float:
"""Calculate the accuracy on the test set"""
test_acc = model.score(X_test, y_test)
logging.info(f"Test accuracy: {test_acc}")
return test_acc
@pipeline(requirements_file=requirements_file)
def mnist_pipeline(
importer,
normalizer,
trainer,
evaluator,
):
# Link all the steps together
X_train, X_test, y_train, y_test = importer()
X_trained_normed, X_test_normed = normalizer(X_train=X_train, X_test=X_test)
model = trainer(X_train=X_trained_normed, y_train=y_train)
evaluator(X_test=X_test_normed, y_test=y_test, model=model)
if __name__ == "__main__":
# Run the pipeline
p = mnist_pipeline(
importer=importer(),
normalizer=normalizer(),
trainer=trainer(),
evaluator=evaluator(),
)
p.run()
```

```
!python run.py
```
# Post execution workflow
```
from zenml.core.repo import Repository
```
## Get repo
```
repo = Repository()
```
## Pipelines
```
pipelines = repo.get_pipelines()
```
## Retrieve the pipeline
```
mnist_pipeline = pipelines[0]
```
## Get the first run
```
runs = mnist_pipeline.runs # chronologically ordered
mnist_run = runs[0]
```
## Get the second run
```
kubeflow_mnist_run = runs[1]
```
## Get the steps (note the first step name is different)
```
mnist_run.steps
kubeflow_mnist_run.steps
```
## Check the results of the evaluator and compare
```
mnist_eval_step = mnist_run.get_step(name='evaluator')
kubeflow_mnist_eval_step = kubeflow_mnist_run.get_step(name='evaluator')
# One output is simply called `output`, multiple is a dict called `outputs`.
mnist_eval_step.output.read()
kubeflow_mnist_eval_step.output.read()
```
# Congratulations!
… and that's it! If you made it here without a hiccup, you must have successfully installed ZenML, set up a ZenML repo, configured a training pipeline, executed it, and evaluated the results. You have also deployed said pipeline to a production MLOps stack right from within your notebook! Hurray!
However, if you hit a hiccup or have suggestions or questions regarding our framework, you can always check our [docs](https://docs.zenml.io/) or our [GitHub](https://github.com/zenml-io/zenml), or, even better, join us on our [Slack channel](https://zenml.io/slack-invite).
Cheers!
For more detailed information on all the components and steps that went into this short example, please continue reading [our more detailed documentation pages](https://docs.zenml.io/).
# Exercise 2.01: Exploring Bitcoin Dataset
We explore the Bitcoin dataset in this Jupyter Notebook. First, we start by importing the required libraries.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
#### Magic Commands
Magic commands (those that start with `%`) are commands that modify the configuration of Jupyter Notebooks. A number of magic commands are available by default (see the list [here](http://ipython.readthedocs.io/en/stable/interactive/magics.html)), and many more can be added with extensions. The magic command added in this section allows `matplotlib` to display our plots directly in the browser instead of having to save them to a local file.
```
%matplotlib inline
```
### Introduction
We will also import our custom set of normalization functions.
```
import normalizations
```
Let's load the dataset as a pandas `DataFrame`. This will make it easy to compute basic properties from the dataset and to clean any irregularities.
```
bitcoin = pd.read_csv('data/bitcoin_dataset.csv', parse_dates=['date'])
bitcoin.head()
```
Our dataset contains 7 variables (i.e. columns). Here's what each one of them represents:
* `date`: date of the observation.
* `open`: open value of a single Bitcoin coin.
* `high`: highest value achieved during a given day period.
* `low`: lowest value achieved during a given day period.
* `close`: value at the close of the transaction day.
* `volume`: total volume of Bitcoin exchanged during that day.
* `iso_week`: week number of a given year.
All values are in USD.
### Exploration
We will now explore the dataset timeseries to understand its patterns.
Let's first explore two variables: close price and volume.
```
bitcoin.set_index('date')['close'].plot(linewidth=2, figsize=(14, 4), color='#d35400')
#plt.plot(bitcoin['date'], bitcoin['close'])
#
# Make a similar plot for the volume variable here.
# How different is the volume data compared to
# the closing prices every day?
#
```
Now let's explore the range from 2017 to 2018. These are the years when Bitcoin prices first soared, then declined, and then partially recovered.
```
bitcoin[(bitcoin['date'] >= '2017-01-01') & (bitcoin['date'] <= '2018-12-31') ].set_index('date')['close'].plot(
linewidth=2, figsize=(14, 4), color='#d35400')
#
# Again, make a similar plot for the volume variable.
#
```
### Preparing Dataset for Model
Neural networks typically work with either [matrices](https://en.wikipedia.org/wiki/Matrix_(mathematics)) or [tensors](https://en.wikipedia.org/wiki/Tensor). Our data needs to fit that structure before it can be used by `keras` (or `tensorflow`).
Also, it is common practice to normalize data before using it to train a neural network. We will be using a normalization technique that rescales each observation relative to the first observation of its week.
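The `normalizations` module is project-specific, but one common form of this point-relative scheme divides each value in a window by the window's first value and subtracts one, so the first observation of each week maps to 0. A sketch under that assumption (the project's own `normalizations.point_relative_normalization` is the authoritative version):

```python
import numpy as np

def point_relative_normalization(series):
    # Each value relative to the first observation of the window:
    # the first point becomes 0, and 0.05 means 5% above the opening value.
    series = np.asarray(series, dtype=float)
    return series / series[0] - 1.0

week_close = [100.0, 105.0, 98.0, 110.0]  # hypothetical closing prices
normed = point_relative_normalization(week_close)
print(normed)  # first entry is 0; the rest are relative changes
```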
```
bitcoin.head()
```
First, let's remove data from older periods. We will keep only data from 2016 until the latest observation of 2020. Older observations may be useful to understand current prices. However, Bitcoin has gained so much popularity in recent years that including older data would require a more laborious treatment. We will leave that for a future exploration.
```
bitcoin_recent = bitcoin[bitcoin['date'] >= '2016-01-04']
```
Let's keep only the close and volume variables. We may use the other variables at another time.
```
bitcoin_recent = bitcoin_recent[['date', 'iso_week', 'close', 'volume']]
```
Now, let's normalize our data for both the `close` and `volume` variables.
```
bitcoin_recent['close_point_relative_normalization'] = bitcoin_recent.groupby('iso_week')['close'].apply(
lambda x: normalizations.point_relative_normalization(x))
#
# Now, apply the same normalization on the volume variable.
# Name that variable using the same convention
# from the previous example. Use the name:
#
# `volume_point_relative_normalization`
#
```
After the normalization procedure, our variables `close` and `volume` are now relative to the first observation of every week. We will be using these variables -- `close_point_relative_normalization` and `volume_point_relative_normalization`, respectively -- to train our LSTM model.
```
bitcoin_recent.set_index('date')['close_point_relative_normalization'].plot(
linewidth=2, figsize=(14, 4), color='#d35400')
#
# Now, plot the volume variable (`volume_point_relative_normalization`)
# in the same way as the plot above.
#
```
### Training and Test Sets
Let's divide the dataset into a training and a test set. In this case, we will use 80% of the dataset to train our LSTM model and 20% to evaluate its performance.
Given that the data is continuous, we use the last 20% of available weeks as a test set and the first 80% as a training set.
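The chronological split can be sketched on a toy list of week labels (the ISO-week strings below are hypothetical, standing in for the values of `iso_week`):

```python
# Chronological 80/20 split: the first 80% of ordered weeks train
# the model, the last 20% are held out for testing.
weeks = [f"2016-W{w:02d}" for w in range(1, 11)]  # 10 ordered week labels
boundary = int(0.8 * len(weeks))
train_weeks = weeks[:boundary]
test_weeks = weeks[boundary:]
print(train_weeks[-1], test_weeks)  # 2016-W08 ['2016-W09', '2016-W10']
```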
```
boundary = int(0.8 * bitcoin_recent['iso_week'].nunique())
train_set_weeks = bitcoin_recent['iso_week'].unique()[0:boundary]
test_set_weeks = bitcoin_recent[~bitcoin_recent['iso_week'].isin(train_set_weeks)]['iso_week'].unique()
train_set_weeks
test_set_weeks
```
Now, let's create the separate datasets for each operation.
```
train_dataset = bitcoin_recent[bitcoin_recent['iso_week'].isin(train_set_weeks)]
#
# Perform the same operation as above, but use the
# `test_set_weeks` list to create the variable `test_dataset`.
#
```
### Storing Output
Before closing this notebook, let's store the output of this exercise on disk to make sure it is easier to use this data as input to our neural network.
```
bitcoin_recent.to_csv('data/bitcoin_recent.csv', index=False)
train_dataset.to_csv('data/train_dataset.csv', index=False)
#
# Perform the same operation as above to save the test file to disk as well
#
```
### Summary
In this section, we explored the Bitcoin dataset. We learned that Bitcoin prices skyrocketed during 2017 and then started falling; they recovered to some extent in 2018, but have yet to return to their 2017 peak. This phenomenon unfolds over a long time and may be influenced by a number of external factors that this data alone doesn't explain (for instance, the emergence of other cryptocurrencies, or the ban of cryptocurrencies in some markets).
Can we predict the price of Bitcoin in the future? What will it be 30 days from now? We will be building a deep learning model to explore that question in our next section.
Before you begin, make sure everything is working as expected. First, **restart the kernel** (in the menu bar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menu bar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says `YOUR CODE HERE` or `YOUR ANSWER HERE`.
# Problem statement:
The attached file 'quijote.txt' contains the opening paragraphs of the book that shares its name with the file.
The function **primerafrase(numerodeparrafo)** must open the file and return the first sentence of the indicated paragraph (0 being the first paragraph). The first sentence is defined as the characters found in the paragraph up to the first punctuation mark (any of the group ,.:;), as shown in the example below.
- primerafrase(0)
- -> 'En un lugar de la Mancha'
- primerafrase(2)
- -> 'Con estas razones perdía el pobre caballero el juicio'
```
#your code must go in the space reserved in this cell
# YOUR CODE HERE
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
def primerafrase(parrafo):
    #first, load the file
    fichero = open('quijote.txt')
    #read it
    texto = fichero.read()
    #and select the paragraph given by the parameter
    parrafos = texto.split('\n')
    #remove the empty paragraphs left over after the split
    while '' in parrafos:
        parrafos.remove('')
    parrafoseleccionado = parrafos[parrafo]
    #to find where the first sentence ends, look for the requested delimiters
    #and add their positions to a list
    delimitadores = []
    delimitadores.append(parrafoseleccionado.find(','))
    delimitadores.append(parrafoseleccionado.find('.'))
    delimitadores.append(parrafoseleccionado.find(':'))
    delimitadores.append(parrafoseleccionado.find(';'))
    #drop the -1 values, which mark delimiters that were not found and could cause errors
    while -1 in delimitadores:
        delimitadores.remove(-1)
    #take the earliest one with min (the smallest position comes first)
    finfrase = min(delimitadores)
    #our sentence is therefore
    frase = parrafoseleccionado[:finfrase]
    print(frase)
    #remember to close the file if you did not open it with a with
    fichero.close()
    return frase
</code></pre>
</details>
This exercise is self-graded; click "see the solution" to check whether you completed it correctly.
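The reference solution scans for each delimiter separately with `find`; a more compact alternative uses `re.split`. The sketch below operates on a string instead of the file (the `first_sentence` name and the sample text are ours, for illustration):

```python
import re

def first_sentence(text, n):
    """Return the first sentence of paragraph n: everything up to
    the first delimiter in the group ,.:; (paragraph 0 is the first)."""
    paragraphs = [p for p in text.split('\n') if p]
    return re.split(r'[,.:;]', paragraphs[n])[0]

sample = "En un lugar de la Mancha, de cuyo nombre no quiero acordarme...\n\nSecond paragraph: more text."
print(first_sentence(sample, 0))  # En un lugar de la Mancha
```

Note that, unlike the `min()` approach, `re.split` simply returns the whole paragraph when no delimiter is present, instead of raising an error.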
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
```
# The age-metallicity structure of the Milky Way disk
This IPython notebook runs through all the necessary steps to reproduce the plots found in the Mackereth et al. (2017) paper. In order for the following code to work, the user must have pre-calculated the required isochrone grid (using make_isochrone_grids.py) and the effective selection function for the mono-age, mono-[Fe/H] populations used for the paper (using calc_effsel_monoage.py), and then downloaded the relevant age and distance tables for the APOGEE-DR12 data (see README), placing these in the 'catalogues' directory.
The first section goes through the plots used in the Data section of the paper. The second section guides the user through the creation of the plots in sections 4 and 5 of the paper, which form the bulk of the science results.
## Section 1 - General Plots
```
try:
reload(densprofiles)
reload(define_rgbsample)
reload(fitDens)
reload(fitDens.densprofiles)
reload(compareDataModel)
reload(mockDensData)
except NameError:
import densprofiles
import define_rgbsample
import fitDens
import compareDataModel
import mockDensData
%pylab inline
import numpy
from matplotlib.pyplot import *
import matplotlib.gridspec as gridspec
import os, os.path
import pickle
import copy
from galpy.util import bovy_plot, bovy_coords
import mpl_toolkits.axisartist as AA
import corner
import mwdust
import fitsio
from scipy.interpolate import interp1d
import statsmodels.api as sm
lowess = sm.nonparametric.lowess
params = {'axes.labelsize': 14, 'xtick.labelsize': 12, 'ytick.labelsize': 12, 'text.usetex': True, 'lines.linewidth' : 1, 'axes.titlesize' : 14, 'font.family' : 'serif'}
plt.rcParams.update(params)
columnwidth = 240./72.27
textwidth = 504.0/72.27
selectFile= '../savs/selfunc-nospdata.sav'
if os.path.exists(selectFile):
with open(selectFile,'rb') as savefile:
apo= pickle.load(savefile)
with open('../essf/maps/essf_rgb_green15_modelmh_feh-0.0_age1.0.sav','rb') as savefile:
locations= pickle.load(savefile)
effsel= pickle.load(savefile)
distmods= pickle.load(savefile)
with open('../essf/maps/essf_rgb_marshall06_modelmh_feh-0.0_age1.0.sav','rb') as savefile:
mlocations= pickle.load(savefile)
meffsel= pickle.load(savefile)
mdistmods= pickle.load(savefile)
# Fill in regions not covered by Marshall map
meffsel[meffsel < -0.5]= effsel[meffsel < -0.5]
# Get (lcen,bcen) for each location
lcen= numpy.zeros(len(locations))
bcen= numpy.zeros(len(locations))
hmax= numpy.zeros(len(locations))
for ii,loc in enumerate(locations):
if loc in apo.list_fields():
tlcen, tbcen= apo.glonGlat(loc)
lcen[ii]= tlcen
bcen[ii]= tbcen
hmax[ii]= apo.Hmax(loc,cohort='long')
if numpy.isnan(hmax[ii]):
hmax[ii]= apo.Hmax(loc,cohort='medium')
if numpy.isnan(hmax[ii]):
hmax[ii]= apo.Hmax(loc,cohort='short')
if loc not in apo.list_fields():
lcen[ii] = numpy.nan
bcen[ii] = numpy.nan
hmax[ii]= numpy.nan
# Get the locations of various subsamples
highbIndx= numpy.fabs(bcen) > 10.
outDiskIndx= (lcen > 150.)*(lcen < 250.)*(~highbIndx)
betwDiskIndx= (lcen <= 150.)*(lcen >= 70.)*(~highbIndx)
inDiskIndx= (lcen < 70.)*(lcen >= 25.)*(~highbIndx)
bulgeIndx= ((lcen < 25.)+(lcen > 335.))*(~highbIndx)
brightIndx= (hmax <= 12.21)
mediumIndx= (hmax > 12.21)*(hmax <= 12.81)
faintIndx= (hmax > 12.81)
ldata= None
data_highbIndx= None
data_outDiskIndx= None
data_betwDiskIndx= None
data_inDiskIndx= None
data_bulgeIndx= None
data_brightIndx= None
data_mediumIndx= None
data_faintIndx= None
def load_data(subsample='lowlow', add_ages=False, agebin=[0.,1.], fehbin=[0.,0.2], afebin=None, agetype='Martig', corrections=False):
global ldata
global data_highbIndx
global data_outDiskIndx
global data_betwDiskIndx
global data_inDiskIndx
global data_bulgeIndx
global data_brightIndx
global data_mediumIndx
global data_faintIndx
if subsample.lower() == 'all':
ldata= define_rgbsample.get_rgbsample(agetype=agetype)
elif subsample.lower() == 'alllowalpha':
ldata= define_rgbsample.get_rgbsample()
ldata= ldata[ldata[define_rgbsample._AFETAG] < 0.1]
elif subsample.lower() == 'lowlow':
ldata= define_rgbsample.get_lowlowsample()
elif subsample.lower() == 'highfeh':
ldata= define_rgbsample.get_highfehsample()
elif subsample.lower() == 'highalpha':
ldata= define_rgbsample.get_highalphasample()
elif subsample.lower() == 'solar':
ldata= define_rgbsample.get_solarsample()
elif subsample.lower() == 'fehage':
ldata= define_rgbsample.get_fehage(agebin=agebin, fehbin=fehbin, afebin=afebin, agetype=agetype, apply_corrections=corrections)
# Get the indices of the various subsamples defined above
data_highbIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[highbIndx]: data_highbIndx[ii]= True
data_outDiskIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[outDiskIndx]: data_outDiskIndx[ii]= True
data_betwDiskIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[betwDiskIndx]: data_betwDiskIndx[ii]= True
data_inDiskIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[inDiskIndx]: data_inDiskIndx[ii]= True
data_bulgeIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[bulgeIndx]: data_bulgeIndx[ii]= True
data_brightIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[brightIndx]: data_brightIndx[ii]= True
data_mediumIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[mediumIndx]: data_mediumIndx[ii]= True
data_faintIndx= numpy.zeros(len(ldata),dtype='bool')
for ii in range(len(ldata)):
if ldata[ii]['LOCATION_ID'] in numpy.array(locations)[faintIndx]: data_faintIndx[ii]= True
load_data(subsample='fehage', fehbin=[-0.6,0.2], agebin=[0.,13.], agetype='Martig', corrections=True)
def alphaedge(fehs):
edge = np.zeros(len(fehs))
edge[fehs < 0] = (0.12/-0.6)*fehs[fehs < 0]+0.03
edge[fehs >= 0] = 0.03
return edge
```
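The `alphaedge` function defines the dividing line between the high- and low-[α/Fe] populations: it falls linearly from 0.15 dex at [Fe/H] = -0.6 to 0.03 dex at solar metallicity, then stays flat. A self-contained numeric check (re-defining the function so the cell runs standalone):

```python
import numpy as np

def alphaedge(fehs):
    """Dividing line between high- and low-[alpha/Fe] populations:
    linear for [Fe/H] < 0, constant at 0.03 at and above solar."""
    edge = np.zeros(len(fehs))
    edge[fehs < 0] = (0.12 / -0.6) * fehs[fehs < 0] + 0.03
    edge[fehs >= 0] = 0.03
    return edge

print(alphaedge(np.array([-0.6, -0.3, 0.0, 0.2])))  # approx. [0.15 0.09 0.03 0.03]
```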
### Figure 1 - the $\mathrm{[\alpha/Fe]}$-$\mathrm{[Fe/H]}$ distribution
```
low = ldata['AVG_ALPHAFE'] <= alphaedge(ldata['FE_H']-0.025)
high = ldata['AVG_ALPHAFE'] > alphaedge(ldata['FE_H']+0.025)
fig = plt.figure()
fehs = np.linspace(-0.6, 0.2, 100)
plt.scatter(ldata['FE_H'], ldata['AVG_ALPHAFE'], s=2, alpha=0.2, edgecolor=None, color='black', lw=0.)
plt.plot(fehs, alphaedge(fehs)+0.025, color='black', linestyle='dashed')
plt.plot(fehs, alphaedge(fehs)-0.025, color='black', linestyle='dashed')
plt.plot(fehs, alphaedge(fehs), color='black', linestyle='dashdot')
plt.fill_between(fehs, alphaedge(fehs)+0.025, 0.3*np.ones(100), color=plt.cm.coolwarm(0.8), alpha=0.2)
plt.fill_between(fehs, -0.2*np.ones(100), alphaedge(fehs)-0.025, color=plt.cm.coolwarm(0.2), alpha=0.2)
plt.text(-0.1,0.2, r'High $\mathrm{[\alpha/Fe]}$')
plt.text(-0.5,-0.1, r'Low $\mathrm{[\alpha/Fe]}$')
plt.xlim(-0.6,0.2)
plt.ylim(-0.2,0.3)
plt.ylabel(r'$\mathrm{[\alpha/Fe]}$')
plt.xlabel(r'$\mathrm{[Fe/H]}$')
fig.set_size_inches(1.5*columnwidth, columnwidth)
fig.tight_layout()
```
### Figure 2 - the spatial distribution
```
xbins = np.linspace(3.,15.,50)
ybins = np.linspace(-6.,6.,50)
fig, ax = plt.subplots(2,1, sharex=True, sharey=True)
s1 = ax[0].hist2d(ldata['MH50_GALR'][low], ldata['MH50_GALZ'][low], bins=[xbins,ybins], cmap=plt.cm.gist_earth_r)
s2 = ax[1].hist2d(ldata['MH50_GALR'][high], ldata['MH50_GALZ'][high], bins=[xbins,ybins], cmap=plt.cm.gist_earth_r)
ax[0].set_xlim(3.,15.)
ax[0].set_ylim(-4.,4.)
ax[0].set_ylabel(r'$Z\ \mathrm{[kpc]}$')
ax[1].set_ylabel(r'$Z\ \mathrm{[kpc]}$')
ax[1].set_xlabel(r'$R\ \mathrm{[kpc]}$')
#ax[0].set_xlabel(r'$R\ \mathrm{[Kpc]}$')
ax[0].text(4,-3, r'Low $\mathrm{[\alpha/Fe]}$')
ax[1].text(4,-3, r'High $\mathrm{[\alpha/Fe]}$')
plt.colorbar(s1[3], ax=ax[0], label=r'$N$')
plt.colorbar(s2[3], ax=ax[1], label=r'$N$')
fig.set_size_inches(1.5*columnwidth, 2*columnwidth)
fig.tight_layout()
```
### Figure 3 - Red-Clump distance comparison
```
import define_rcsample
from astropy.table import Table, join
import mpl_toolkits.axisartist as AA
dat = define_rcsample.get_rcsample()
ldat = define_rgbsample.get_rgbsample(add_ages=True)
rctab = Table(data=dat)
fulltab = Table(data=ldat)
tab = join(rctab,fulltab, keys='APOGEE_ID', uniq_col_name='{col_name}{table_name}', table_names=['RCLUMP',''])
distcomp = tab.as_array()
fig = plt.figure()
x = np.linspace(0.,8.,50)
y = np.linspace(0.,8.,50)
glatmask = (np.fabs(distcomp['GLAT']) > 6.)&(distcomp['FE_H'] >= -0.6)&(distcomp['FE_H'] <0.2)&(distcomp['Age'] < 13.)
ax1 = AA.Axes(fig,[0.2,0.45,0.65,0.5])
fig.add_axes(ax1)
s = ax1.hist2d(10**((distcomp['HAYDEN_DISTMOD_50']+5.)/5.)/1e3, 10**((distcomp['RC_DM_H']+5.)/5.)/1e3, bins=[x,y], cmap=plt.cm.gist_earth_r)
ax1.plot(x,y, color='Black', linestyle='dashed')
#
ax1.set_ylabel(r'$D_{RC}\ \mathrm{[Kpc]}$')
ax1.set_xticks([])
#ax1.set_ylim(-0.5,1.3)
ax2 = AA.Axes(fig,[0.2,0.15,0.65,0.3])
fig.add_axes(ax2)
x = 10**((distcomp['HAYDEN_DISTMOD_50']+5.)/5.)/1e3
y = 10**((distcomp['RC_DM_H']+5.)/5.)/1e3
delta = x-y
xbins = np.linspace(0.,8.,50)
ybins = np.linspace(-0.6,0.6,50)
ax2.hist2d(y,delta/x, bins=[xbins,ybins], cmap=plt.cm.gist_earth_r, vmin=0.,vmax=225.)
ax2.axhline(0, color='Black', linestyle='dashed')
ax2.set_ylabel(r'$\Delta D / D_{MH}$')
ax2.set_xlabel(r'$D_{MH}\ \mathrm{[Kpc]}$')
ax1.legend(loc=4)
ax2.set_ylim(-0.5,0.5)
ax2.set_yticks([-0.4,-0.2,0.,0.2,0.4])
#ax1.set_yticks([-0.4,0.,0.4,0.8,1.2])
fig.set_size_inches(1.5*columnwidth,1.5*columnwidth)
cax = fig.add_axes([0.87,0.15,0.02,0.8])
plt.colorbar(s[3], cax=cax, label=r'$N$')
```
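Both panels of the distance comparison convert distance moduli to distances via $d\,[\mathrm{pc}] = 10^{(\mu+5)/5}$, as in the `10**((...+5.)/5.)/1e3` expressions above. A minimal helper isolating that conversion (the name `distmod_to_kpc` is ours):

```python
import numpy as np

def distmod_to_kpc(mu):
    """Convert a distance modulus mu = 5*log10(d / 10 pc) to distance in kpc."""
    return 10 ** ((mu + 5.0) / 5.0) / 1e3

print(distmod_to_kpc(np.array([10.0, 14.5])))  # approx. [1.0, 7.94] kpc
```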
### Figure 4 - Age correction fit
```
table = np.genfromtxt('../catalogues/martig2016_table1.txt', dtype=None, names=True, skip_header=2)
fig = plt.figure()
ax1 = AA.Axes(fig,[0.2,0.45,0.76,0.5])
fig.add_axes(ax1)
ax1.scatter(np.log10(table['Age_out']),np.log10(table['Age_in']), s=2, alpha=0.8, lw=0., color='Black')
xs = np.linspace(-0.3,1.2,100)
xsinterpolate = interp1d(xs,xs)
ax1.plot(xs,xs, color='Black', linestyle='dashed')
fit = lowess(np.log10(table['Age_out']),np.log10(table['Age_in']))#, delta=0.01, frac=0.5 )
ax1.plot(fit[:,1], fit[:,0], label='lowess fit')
#
ax1.set_ylabel(r'$\mathrm{\log{(age_{APOKASC})}}$')
ax1.set_xticks([])
fig.set_size_inches(columnwidth,columnwidth)
ax1.set_ylim(-0.5,1.3)
ax2 = AA.Axes(fig,[0.2,0.15,0.76,0.3])
fig.add_axes(ax2)
ax2.plot(fit[:,1], fit[:,0]-xsinterpolate(fit[:,1]))
ax2.scatter(np.log10(table['Age_out']),((np.log10(table['Age_in'])-np.log10(table['Age_out']))), s=2, alpha=0.8, lw=0., color='Black')
ax2.axhline(0, color='Black', linestyle='dashed')
ax2.set_ylabel(r'$\mathrm{\Delta\ age}$')
ax2.set_xlabel(r'$\mathrm{\log{(age_{model})}}$')
ax1.legend(loc=4)
ax2.set_ylim(-0.7,0.7)
ax2.set_yticks([-0.3,0.,0.3])
ax1.set_yticks([-0.4,0.,0.4,0.8,1.2])
fig.set_size_inches(1.5*columnwidth,1.5*columnwidth)
```
### Figure 5 - Raw star counts
```
agebins = np.arange(1.,14.,2.)
fehbins = np.arange(-0.6,0.3,0.1)
hc_numbins = []
load_data(subsample='fehage', add_ages=True, agebin=[0., 13.], fehbin=[-0.6,0.2], afebin='highclean', agetype='Martig', corrections=True)
dat = ldata#[data_medbIndx]
for j in range(0,len(fehbins)-1):
numbinfeh = []
for i in range(0,len(agebins)-1):
mask = (dat['Age'] >= agebins[i])&(dat['Age'] < agebins[i+1])&(dat['FE_H'] >= fehbins[j])&(dat['FE_H'] < fehbins[j+1])
ldata = dat[mask]
num_bin = len(ldata)
numbinfeh.append(num_bin)
hc_numbins.append(numbinfeh)
hc_numbins = np.array(hc_numbins)
lc_numbins = []
load_data(subsample='fehage', add_ages=True, agebin=[0., 13.], fehbin=[-0.6,0.2], afebin='lowclean', agetype='Martig', corrections=True)
dat = ldata#[data_medbIndx]
for j in range(0,len(fehbins)-1):
numbinfeh = []
for i in range(0,len(agebins)-1):
mask = (dat['Age'] >= agebins[i])&(dat['Age'] < agebins[i+1])&(dat['FE_H'] >= fehbins[j])&(dat['FE_H'] < fehbins[j+1])
ldata = dat[mask]
num_bin = len(ldata)
numbinfeh.append(num_bin)
lc_numbins.append(numbinfeh)
lc_numbins = np.array(lc_numbins)
hc_nums = hc_numbins.ravel()
lc_nums = lc_numbins.ravel()
norm = mpl.colors.Normalize(vmin=0., vmax=max(lc_nums))
norm2 = mpl.colors.Normalize(vmin=0., vmax=max(hc_nums))
cmap = mpl.cm.Greys
s_m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
s_m2 = mpl.cm.ScalarMappable(cmap=cmap, norm=norm2)
s_m.set_array([])
s_m2.set_array([])
fig, ax = plt.subplots(2,1, sharex=True, sharey=True)
for j in range(0, len(fehbins)-1):
for i in range(0,len(agebins)-1):
hz = lc_numbins[j,i]
xedges = [fehbins[j], fehbins[j+1]]
yedgelow = [agebins[i], agebins[i]]
yedgehigh = [agebins[i+1], agebins[i+1]]
ax[0].fill_between(xedges, yedgelow,yedgehigh, color=s_m.to_rgba(hz))
for j in range(0, len(fehbins)-1):
for i in range(0,len(agebins)-1):
dhz = hc_numbins[j,i]
xedges = [fehbins[j], fehbins[j+1]]
yedgelow = [agebins[i], agebins[i]]
yedgehigh = [agebins[i+1], agebins[i+1]]
ax[1].fill_between(xedges, yedgelow,yedgehigh, color=s_m2.to_rgba(dhz))
#titles and axes
plt.ylim(1.,13.)
plt.xlim(-0.6,0.2)
ax[0].set_ylabel(r'$\mathrm{age}\ \mathrm{[Gyr]}$')
ax[1].set_ylabel(r'$\mathrm{age}\ \mathrm{[Gyr]}$')
#ax[0].set_xlabel(r'$\mathrm{[Fe/H]}$')
ax[1].set_xlabel(r'$\mathrm{[Fe/H]}$')
plt.colorbar(s_m, label=r'$N$', ax= ax[0])
plt.colorbar(s_m2, label=r'$N$', ax=ax[1])
ax[0].set_xticks([-0.6,-0.4,-0.2,-0.,0.2])
ax[0].text(-0.5,11,r'$\mathrm{Low\ [\alpha/Fe]}$', fontsize=15)
ax[1].text(-0.1,2,r'$\mathrm{High\ [\alpha/Fe]}$', fontsize=15)
fig.subplots_adjust(wspace=0.2)
fig.set_size_inches(1.5*columnwidth,2*columnwidth)
fig.tight_layout()
```
## Section 2 - Results & Discussion
The plots here require slightly more complicated code, so the following cells are quite long (you can turn off the raw code using the button at the top of the notebook). You must have run the fit_monoage.py and mass_script.py codes (see README) so that the results files are saved in the ../savs directory - these cells won't work without them.
The systematic errors in Figures 10 and 11 arise mostly from differences between the log(g) scale of the stellar evolution models and that of APOGEE - these are precalculated by performing the mass calculations (in mass_script.py) with the stellar models' log(g) shifted by $\pm 0.3$ dex. Users can perform these calculations if necessary by adjusting the loggcut parameter in mass_script.py. For simplicity, this run-through does not explicitly include these calculations.
```
savfile = open('../savs/paramsRGB_brokenexpflare_01dex2gyrbins_low_mass.dat', 'rb')
obj = pickle.load(savfile)
lagebins, lfehbins, lnumbins, lparamt, lsamples, lmassgrid, lm_samplegrid = obj
savfile = open('../savs/paramsRGB_brokenexpflare_01dex2gyrbins_high_mass.dat', 'rb')
obj = pickle.load(savfile)
hagebins, hfehbins, hnumbins, hparamt, hsamples, hmassgrid, hm_samplegrid = obj
def floating_axis(fig, position, invisible=True):
inax = AA.Axes(fig, position)
fig.add_axes(inax)
if invisible == True:
inax.axis["right","left","top"].set_visible(False)
return inax
def conv_data_position(fig,ax, position):
disp_pos = ax.transData.transform(position)
inv = fig.transFigure.inverted()
return inv.transform(disp_pos)
_SKIP = 10
_SIGNIF = 0.025
Rs = np.linspace(3.,15.,1001)
ages = [2,4,6,8,10,12]
cmap = plt.get_cmap('viridis')
y = np.arange(-0.6,0.3,0.1)
def surfdens_plot(samp,mgrid, hi_fullmgrid, lo_fullmgrid, ax, inv_ihRin=True, numbins= None, ntol=30, fage=True, alpha_norm='column'):
overplot=True
for ii, map in enumerate(samp):
if numbins != None:
if numbins[ii] < ntol:
continue
# Create all density profiles
tsamp= samp[ii,:,::_SKIP]
nsamples= len(tsamp[0])
tRs= numpy.tile(Rs,(nsamples,1)).T
ldp= numpy.empty((len(Rs),nsamples))
Rb= numpy.tile(numpy.exp(tsamp[3]),(len(Rs),1))
ihRin= numpy.tile(tsamp[0],(len(Rs),1))
if inv_ihRin == True:
ihRin = -numpy.tile(tsamp[0],(len(Rs),1))
ihRout= numpy.tile(tsamp[2],(len(Rs),1))
# Rb >= R0
leRb= (tRs <= Rb)*(Rb >= densprofiles._R0)
ldp[leRb]= ihRin[leRb]*(tRs[leRb]-densprofiles._R0)
gtRb= (tRs > Rb)*(Rb >= densprofiles._R0)
ldp[gtRb]= -ihRout[gtRb]*(tRs[gtRb]-densprofiles._R0)\
+ihRout[gtRb]*(Rb[gtRb]-densprofiles._R0)\
+ihRin[gtRb]*(Rb[gtRb]-densprofiles._R0)
# Rb < R0, normalize outer at R0
leRb= (tRs <= Rb)*(Rb < densprofiles._R0)
ldp[leRb]= ihRin[leRb]*(tRs[leRb]-densprofiles._R0)\
-ihRout[leRb]*(Rb[leRb]-densprofiles._R0)\
-ihRin[leRb]*(Rb[leRb]-densprofiles._R0)
gtRb= (tRs > Rb)*(Rb < densprofiles._R0)
ldp[gtRb]= -ihRout[gtRb]*(tRs[gtRb]-densprofiles._R0)
# Label and relative normalization
if fage == True:
tfeh= ages[ii]
if tfeh == 0.25: tfeh= 0.3
if tfeh == -0.0: tfeh= 0.0
if tfeh == -0.1: tfeh= -0.1
anorm= 10**(7-tfeh)
cnorm= tfeh/12.
cbnorm = (tfeh/12.)-0.1
if fage == False:
tfeh= ages[ii]
if tfeh == 0.25: tfeh= 0.3
if tfeh == -0.0: tfeh= 0.0
if tfeh == -0.1: tfeh= -0.1
anorm= 10**(-10.*(tfeh+0.1))
cnorm =(tfeh+0.6)/0.8
cbnorm = ((tfeh+0.6)/0.8)-0.1
if tfeh > 0.2: anorm= 10**(-12.*(tfeh+0.1))
if tfeh < -0.5: anorm= 10**(-12.*(tfeh+0.1))
anorm= 1./anorm # re-order
if alpha_norm == 'full':
alphanorm = np.max(fullmgrid)-np.min(fullmgrid)
alpha = (mgrid[ii]-np.min(fullmgrid))/(alphanorm)
if alpha_norm == 'column':
alphanorm = np.max(mgrid)-np.min(mgrid)
alpha = (mgrid[ii]-np.min(mgrid))/(alphanorm)
if alpha_norm == 'row':
if fage == True:
mrow = np.concatenate((hi_fullmgrid[:,ii], lo_fullmgrid[:,ii]))
alphanorm = np.max(mrow)-np.min(mrow)
alpha = (mgrid[ii] - np.min(mrow))/(alphanorm)
if fage == False:
mrow = np.concatenate((hi_fullmgrid[ii,:], lo_fullmgrid[ii,:]))
alphanorm = np.max(mrow)-np.min(mrow)
alpha = (mgrid[ii] - np.min(mrow))/(alphanorm)
if alpha <= 0:
alpha=0.0000000001
if alpha >= 1.:
alpha=0.9999999999
norm= numpy.exp(numpy.median(ldp,axis=1))[numpy.argmin(numpy.fabs(Rs-densprofiles._R0))]/anorm
ax.plot(Rs,numpy.exp(numpy.median(ldp,axis=1))/norm,
'-',
color=cmap(cnorm),
alpha = alpha)
ax.fill_between(Rs,
numpy.exp(numpy.sort(ldp,axis=1)[:,int(round(_SIGNIF*nsamples))])/norm,
numpy.exp(numpy.sort(ldp,axis=1)[:,int(round((1.-_SIGNIF)*nsamples))])/norm,
color=cmap(cbnorm),
lw=0.,
alpha = alpha)
ax.set_yscale('log')
if fage == True:
ax.set_ylim(1e-9,9e6)
if fage == False:
ax.set_ylim(1e-8,1e4)
ax.set_xlim(0.,16.)
ax.set_xlabel(r'$R$')
#plt.ylabel(r'$\Sigma(R)\times\mathrm{constant}$')
overplot=True
#ax.text(1,anorm,r''+str(ages[ii])+' Gyr', fontsize=12, color = cmap((tfeh/12)))
#ax.text(1,10**-7.5, r'$'+str(round(y[j],1))+'< \mathrm{[Fe/H]} < '+str(round(y[j+1],1))+'$', fontsize=12)
def hzprofile_plot(samp,mgrid, hi_fullmgrid, lo_fullmgrid, ax, numbins=None, ntol=30, fage=True, alpha_norm='row'):
for ii, map in enumerate(samp):
if numbins != None:
if numbins[ii] < ntol:
continue
# Create all flaring profiles
#Rmin= numpy.sort(map['RC_GALR_H'])[int(round(0.005*len(map)))]
#Rmax= numpy.sort(map['RC_GALR_H'])[numpy.amin([len(map)-1,int(round(0.995*len(map)))])]
tsamp= samp[ii,:,::_SKIP]
nsamples= len(tsamp[0])
tRs= numpy.tile(Rs,(nsamples,1)).T
ldp= numpy.empty((len(Rs),nsamples))
ldp= tsamp[1]*numpy.exp(tsamp[4]*(tRs-densprofiles._R0))
ldp= 1000./ldp # make it hz instead of its inverse
# Label and relative normalization
if fage == True:
tfeh= ages[ii]
if tfeh == 0.25: tfeh= 0.3
if tfeh == -0.0: tfeh= 0.0
if tfeh == -0.1: tfeh= -0.1
anorm= 10**(7-tfeh)
cnorm= tfeh/12.
cbnorm = (tfeh/12.)-0.1
if fage == False:
tfeh= ages[ii]
if tfeh == 0.25: tfeh= 0.3
if tfeh == -0.0: tfeh= 0.0
if tfeh == -0.1: tfeh= -0.1
anorm= 10**(-10.*(tfeh+0.1))
cnorm =(tfeh+0.6)/0.8
cbnorm = ((tfeh+0.6)/0.8)-0.1
if tfeh > 0.2: anorm= 10**(-12.*(tfeh+0.1))
if tfeh < -0.5: anorm= 10**(-12.*(tfeh+0.1))
offset= 1./anorm # re-order
if alpha_norm == 'full':
alphanorm = np.max(fullmgrid)-np.min(fullmgrid)
alpha = (mgrid[ii]-np.min(fullmgrid))/(alphanorm)
if alpha_norm == 'column':
alphanorm = np.max(mgrid)-np.min(mgrid)
alpha = (mgrid[ii]-np.min(mgrid))/(alphanorm)
if alpha_norm == 'row':
if fage == True:
mrow = np.concatenate((hi_fullmgrid[:,ii], lo_fullmgrid[:,ii]))
alphanorm = np.max(mrow)-np.min(mrow)
alpha = (mgrid[ii] - np.min(mrow))/(alphanorm)
if fage == False:
mrow = np.concatenate((hi_fullmgrid[ii,:], lo_fullmgrid[ii,:]))
alphanorm = np.max(mrow)-np.min(mrow)
alpha = (mgrid[ii] - np.min(mrow))/(alphanorm)
if alpha <= 0:
alpha=0.00001
if alpha >= 1.:
alpha=0.99999
ax.plot(Rs,numpy.median(ldp,axis=1)*offset,
'-',
color=cmap(cnorm),
alpha = alpha)
ax.fill_between(Rs,numpy.sort(ldp,axis=1)[:,int(round(_SIGNIF*nsamples))]*offset,
numpy.sort(ldp,axis=1)[:,int(round((1.-_SIGNIF)*nsamples))]*offset,
color=cmap(cbnorm),
alpha = alpha)
line= ax.plot(Rs,Rs*0.+300.*offset,color=cmap(cnorm), linestyle='dashed', alpha = alpha)
#line.set_dashes([8,6])
overplot= True
#ax.text(1,offset*300.,r''+str(ages[ii])+' Gyr', fontsize=12, color = cmap((tfeh/12)))
ax.set_yscale('log')
if fage == True:
ax.set_ylim(1e-5,1e9)
if fage == False:
ax.set_ylim(1e-6,1e6)
ax.set_xlim(0.,16.)
ax.set_xlabel(r'$R$')
def return_profile(samp, inv_ihRin=False, numbins= None, ntol=30, age=None, skip=10, signif=0.025, Rs = np.linspace(3.,15.,1001) ):
_SKIP = skip
_SIGNIF = signif
if numbins is not None:
if numbins < ntol:
return None
# Create all density profiles
tsamp= samp[:,::_SKIP]
nsamples= len(tsamp[0])
tRs= numpy.tile(Rs,(nsamples,1)).T
ldp= numpy.empty((len(Rs),nsamples))
Rb= numpy.tile(numpy.exp(tsamp[3]),(len(Rs),1))
ihRin= numpy.tile(tsamp[0],(len(Rs),1))
if inv_ihRin == True:
ihRin = -numpy.tile(tsamp[0],(len(Rs),1))
ihRout= numpy.tile(tsamp[2],(len(Rs),1))
# Rb >= R0
leRb= (tRs <= Rb)*(Rb >= densprofiles._R0)
ldp[leRb]= ihRin[leRb]*(tRs[leRb]-densprofiles._R0)
gtRb= (tRs > Rb)*(Rb >= densprofiles._R0)
ldp[gtRb]= -ihRout[gtRb]*(tRs[gtRb]-densprofiles._R0)\
+ihRout[gtRb]*(Rb[gtRb]-densprofiles._R0)\
+ihRin[gtRb]*(Rb[gtRb]-densprofiles._R0)
# Rb < R0, normalize outer at R0
leRb= (tRs <= Rb)*(Rb < densprofiles._R0)
ldp[leRb]= ihRin[leRb]*(tRs[leRb]-densprofiles._R0)\
-ihRout[leRb]*(Rb[leRb]-densprofiles._R0)\
-ihRin[leRb]*(Rb[leRb]-densprofiles._R0)
gtRb= (tRs > Rb)*(Rb < densprofiles._R0)
ldp[gtRb]= -ihRout[gtRb]*(tRs[gtRb]-densprofiles._R0)
# Label and relative normalization
if age != None:
tfeh= round(age)
anorm= 10**(7-tfeh)
anorm= 1./anorm # re-order
else:
anorm = 1.
norm= numpy.exp(numpy.median(ldp,axis=1))[numpy.argmin(numpy.fabs(Rs-densprofiles._R0))]/anorm
prof = numpy.exp(numpy.median(ldp,axis=1))/norm
proflo = numpy.exp(numpy.sort(ldp,axis=1)[:,int(round(_SIGNIF*nsamples))])/norm
profhi = numpy.exp(numpy.sort(ldp,axis=1)[:,int(round((1.-_SIGNIF)*nsamples))])/norm
return Rs, prof, proflo, profhi
def movingmedian(x,y, bins, lo=25, hi=75):
out = np.zeros((len(bins)-1,3))
for ii in range(0,len(bins)-1):
mask = (y >= bins[ii])&(y < bins[ii+1])
bin = x[mask]
if len(bin) > 3:
out[ii,0] = np.percentile(bin,50)
out[ii,1] = np.percentile(bin,lo)
out[ii,2] = np.percentile(bin,hi)
if len(bin) <= 3:
out[ii,0] = np.nan
out[ii,1] = np.nan
out[ii,2] = np.nan
return out
def movingmean(x,y,weights,bins):
out = np.zeros((len(bins)-1,2))
for ii in range(0,len(bins)-1):
mask = (y >= bins[ii])&(y < bins[ii+1])
bin = x[mask]
if len(bin) > 3:
out[ii,0] = np.average(x[mask], weights=weights[mask])
out[ii,1] = np.sqrt(np.average((x[mask]-out[ii,0])**2, weights=weights[mask]))
if len(bin) <= 3:
out[ii] = np.nan
return out
```
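The `movingmedian` helper defined above is reused in later figures; a quick self-contained demonstration on synthetic data (the function is re-defined here so the cell runs standalone):

```python
import numpy as np

def movingmedian(x, y, bins, lo=25, hi=75):
    """Median (columns: 50th, lo-th, hi-th percentile) of x in bins
    of y; bins with 3 or fewer points are filled with NaN."""
    out = np.zeros((len(bins) - 1, 3))
    for ii in range(len(bins) - 1):
        sel = x[(y >= bins[ii]) & (y < bins[ii + 1])]
        if len(sel) > 3:
            out[ii] = [np.percentile(sel, 50),
                       np.percentile(sel, lo),
                       np.percentile(sel, hi)]
        else:
            out[ii] = np.nan
    return out

rng = np.random.default_rng(0)
y = rng.uniform(0., 2., 500)
x = y + rng.normal(0., 0.1, 500)           # x tracks y with small scatter
med = movingmedian(x, y, np.array([0., 1., 2.]))
print(med[:, 0])  # bin medians, roughly the bin centres 0.5 and 1.5
```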
## Figure 6 - The radial surface density profiles
```
cmap = plt.cm.viridis_r
fig = plt.figure()
fig.set_size_inches(2*textwidth,0.7*textwidth)
mainax = AA.Axes(fig,[0.1,0.02,0.8,0.96])
fig.add_axes(mainax)
plt.yscale('log')
alpha_norm = 'row'
mainax.set_xticks([])
js = [0,1,2,3,4,5]
ages = [-0.6,-0.5,-0.4,-0.3,-0.2,-0.1,0.0,0.1,0.2]
y = [1,3,5,7,9,11,13]
for j in js:
fltax = floating_axis(fig, [0.13*j+0.11,0.62,0.12,0.35])
surfdens_plot(hsamples[:,j], np.concatenate((hmassgrid[:,j,0],lmassgrid[:,j,0])), hmassgrid[:,:,0], lmassgrid[:,:,0], fltax, numbins=np.array(hnumbins)[:,j], fage=False, alpha_norm=alpha_norm)
fltax.set_xlim(3.,15.)
fig.text(0.13*j+0.14,0.49, r'$'+str(round(y[j],1))+'< \mathrm{age} < '+str(round(y[j+1],1))+'$', fontsize=10)
for j in js:
fltax = floating_axis(fig, [0.13*j+0.11,0.13,0.12,0.35])
surfdens_plot(lsamples[:,j], np.concatenate((lmassgrid[:,j,0],hmassgrid[:,j,0])), hmassgrid[:,:,0], lmassgrid[:,:,0], fltax, numbins=np.array(lnumbins)[:,j], fage=False, alpha_norm=alpha_norm)
fltax.set_xlim(3.,15.)
fig.text(0.48,0.05,r'$\mathrm{Low\ [\alpha/Fe]}$')
fig.text(0.48,0.93,r'$\mathrm{High\ [\alpha/Fe]}$')
limits = np.log10(fltax.get_ylim())
conversion = (0.96/0.3)*(np.fabs(limits[1])+np.fabs(limits[0]))
mainax.set_ylim(1e0,10**conversion)
cax = fig.add_axes([0.91,0.02,0.01,0.96])
norm = mpl.colors.Normalize(vmin=-0.6, vmax=0.2)
s_m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
s_m.set_array([])
plt.colorbar(s_m, cax=cax, label=r'$\mathrm{[Fe/H]}$')
mainax.set_ylabel(r'$\Sigma(R)\times\mathrm{constant}$')
fig.text(0.15,0.80, r'$\mathrm{No\ Data}$')
```
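The curves in Figure 6 come from the broken-exponential profile assembled piecewise in `surfdens_plot`. For the common case of a break radius outside the solar circle ($R_b \ge R_0$), the log-profile reduces to the compact form below (a sketch assuming $R_0 = 8$ kpc in place of `densprofiles._R0`; the function name is ours):

```python
import numpy as np

R0 = 8.0  # assumed solar radius in kpc, standing in for densprofiles._R0

def log_broken_exp(R, h_in, h_out, Rb):
    """ln Sigma(R) for a broken exponential (Rb >= R0 branch):
    growth with inverse scale length h_in inside the break Rb,
    decline with h_out outside, continuous at Rb and normalized
    so that ln Sigma(R0) = 0."""
    R = np.asarray(R, dtype=float)
    return np.where(R <= Rb,
                    h_in * (R - R0),
                    -h_out * (R - Rb) + h_in * (Rb - R0))

R = np.linspace(3., 15., 5)
print(log_broken_exp(R, 0.3, 0.4, 10.0))
```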
## Figure 7 - the radial $h_Z$ profiles
```
cmap = plt.cm.viridis_r
fig = plt.figure()
fig.set_size_inches(2*textwidth,0.7*textwidth)
mainax = AA.Axes(fig,[0.1,0.02,0.8,0.96])
fig.add_axes(mainax)
plt.yscale('log')
mainax.set_xticks([])
js = [0,1,2,3,4,5]
ages = [-0.6,-0.5,-0.4,-0.3,-0.2,-0.1,0.0,0.1,0.2]
y = [1,3,5,7,9,11,13]
for j in js:
fig.text(0.13*j+0.14,0.49, r'$'+str(round(y[j],1))+'< \mathrm{age} < '+str(round(y[j+1],1))+'$', fontsize=10)
fltax = floating_axis(fig, [0.13*j+0.11,0.62,0.12,0.35])
hzprofile_plot(hsamples[:,j], np.concatenate((hmassgrid[:,j,0],lmassgrid[:,j,0])), hmassgrid[:,:,0], lmassgrid[:,:,0], fltax, numbins=np.array(hnumbins)[:,j], alpha_norm=alpha_norm, fage=False)
fltax.set_xlim(3.,15.)
for j in js:
fltax = floating_axis(fig, [0.13*j+0.11,0.13,0.12,0.35])
hzprofile_plot(lsamples[:,j], np.concatenate((lmassgrid[:,j,0],hmassgrid[:,j,0])), hmassgrid[:,:,0], lmassgrid[:,:,0], fltax, numbins=np.array(lnumbins)[:,j], alpha_norm=alpha_norm, fage=False)
fltax.set_xlim(3.,15.)
fig.text(0.48,0.05,r'$\mathrm{Low\ [\alpha/Fe]}$')
fig.text(0.48,0.93,r'$\mathrm{High\ [\alpha/Fe]}$')
limits = np.log10(fltax.get_ylim())
conversion = (0.96/0.3)*(np.fabs(limits[1])+np.fabs(limits[0]))
mainax.set_ylim(1e0,10**conversion)
cax = fig.add_axes([0.91,0.02,0.01,0.96])
norm = mpl.colors.Normalize(vmin=-0.6, vmax=0.2)
s_m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
s_m.set_array([])
fig.text(0.15,0.80, r'$\mathrm{No\ Data}$')
plt.colorbar(s_m, cax=cax, label=r'$\mathrm{[Fe/H]}$')
mainax.set_ylabel(r'$h_Z(R)\times\mathrm{constant}$')
```
## calculating the mean $h_Z$ and $R_{\mathrm{flare}}$ in each age bin
```
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit
agebins = np.arange(1.,14.,2.)
agebincent = (np.array(agebins)[:-1]+np.array(agebins)[1:])/2.
fehbincent = (np.array(lfehbins)[:-1]+np.array(lfehbins)[1:])/2.
ntol=20
def gauss(x,mu,sigma,A):
return A*exp(-(x-mu)**2/2/sigma**2)
def bimodal(x,mu1,sigma1,A1,mu2,sigma2,A2):
return gauss(x,mu1,sigma1,A1)+gauss(x,mu2,sigma2,A2)
bins = np.linspace(-0.8,0.5,400)
outs = []
for i in range(0,len(agebincent)):
out = np.ones(len(bins))
for j in range(0,len(fehbincent)):
if np.array(lnumbins)[j,i] > ntol:
l_rfs = lsamples[j,i,4]
kde = gaussian_kde(l_rfs)
#plt.plot(bins,kde(bins)/sum(kde(bins)), color='Black', alpha=0.3)
out *= kde(bins)*lmassgrid[j,i,0]
outs.append(out)
l_rfs = []
l_rferrs = []
for i in outs:
params, cov = curve_fit(gauss,bins,i/sum(i), [-0.1,0.05,0.12])
l_rfs.append(params[0])
l_rferrs.append(params[1])
bins = np.linspace(-0.8,0.5,400)
outs = []
for i in range(1,len(agebincent)):
out = np.ones(len(bins))
for j in range(0,len(fehbincent)):
if np.array(hnumbins)[j,i] > ntol:
h_rfs = hsamples[j,i,4]
kde = gaussian_kde(h_rfs)
#plt.plot(bins,kde(bins)/sum(kde(bins)), color='Black', alpha=0.3)
out *= kde(bins)*hmassgrid[j,i,0]
outs.append(out)
h_rfs = []
h_rferrs = []
for i in outs:
params, cov = curve_fit(gauss,bins,i/sum(i), [-0.05,0.05,0.12])
h_rfs.append(params[0])
h_rferrs.append(params[1])
bins = np.linspace(0.,3.,400)
outs = []
for i in range(0,len(agebincent)):
out = np.ones(len(bins))
for j in range(0,len(fehbincent)):
if np.array(lnumbins)[j,i] > ntol:
l_hzs = 1/lsamples[j,i,1]
kde = gaussian_kde(l_hzs)
#plt.plot(bins,kde(bins)/sum(kde(bins)), color='Black', alpha=0.3)
out *= kde(bins)*lmassgrid[j,i,0]
outs.append(out)
l_hzs = []
l_hzerrs = []
for i in outs:
if sum(i) == 0:
l_hzs.append(np.nan)
l_hzerrs.append(np.nan)
continue
i_x = bins[np.where(i/sum(i) == np.max(i/sum(i)))][0]
params, cov = curve_fit(gauss,bins,i/sum(i), [i_x,0.1,0.1])
l_hzs.append(params[0])
l_hzerrs.append(params[1])
bins = np.linspace(0,3.,400)
outs = []
for i in range(1,len(agebincent)):
out = np.ones(len(bins))
for j in range(0,len(fehbincent)):
if np.array(hnumbins)[j,i] > ntol:
h_hzs = 1/hsamples[j,i,1]
kde = gaussian_kde(h_hzs)
#plt.plot(bins,kde(bins)/sum(kde(bins)), color='Black', alpha=0.3)
out *= kde(bins)*hmassgrid[j,i,0]
outs.append(out)
h_hzs = []
h_hzerrs = []
for i in outs:
params, cov = curve_fit(gauss,bins,i/sum(i), [0.8,0.4,0.3])
h_hzs.append(params[0])
h_hzerrs.append(params[1])
```
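The cell above recovers each age bin's mean by fitting `gauss` to the normalized product of per-bin KDEs. A minimal standalone sketch of that fit on synthetic data (assuming `numpy` and `scipy` are available; the grid and initial guess mirror the cell above, the "true" parameters are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, mu, sigma, A):
    return A * np.exp(-(x - mu)**2 / 2 / sigma**2)

# Build a noiseless Gaussian curve on the same grid used above
bins = np.linspace(-0.8, 0.5, 400)
true_mu, true_sigma, true_A = -0.1, 0.08, 0.02
curve = gauss(bins, true_mu, true_sigma, true_A)

# Normalize to unit sum, as done for the KDE products, then fit
params, cov = curve_fit(gauss, bins, curve / curve.sum(), p0=[-0.1, 0.05, 0.12])
mu, sigma, A = params
print(round(mu, 3), round(abs(sigma), 3))
```

The fit recovers `mu` and `sigma` regardless of the normalization, which only rescales the amplitude `A` — this is why the cell can fit the normalized product `i/sum(i)` directly.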
## Figure 9 - Mean $h_Z$ and $R_{\mathrm{flare}}$ vs. Age (with surface mass density weighted mean)
```
agebincent = (np.array(lagebins)[:-1]+np.array(lagebins)[1:])/2.
fehbincent = (np.array(lfehbins)[:-1]+np.array(lfehbins)[1:])/2.
cmap = plt.cm.coolwarm
lgrid = []
ldgrid = []
for j in range(0,len(fehbincent)):
vals = []
dvals = []
for i in range(0,len(agebincent)):
hRin = np.median(1/lsamples[j,i,0,::10])
dhRin = np.std(1/lsamples[j,i,0,::10])
hz = np.median(1/lsamples[j,i,1,::10])
dhz = np.std(1/lsamples[j,i,1,::10])
hRout = np.median(1/lsamples[j,i,2,::10])
dhRout = np.std(1/lsamples[j,i,2,::10])
rb = np.median(lsamples[j,i,3,::10])
drb = np.std(lsamples[j,i,3,::10])
rf = np.median(lsamples[j,i,4,::10])
drf = np.std(lsamples[j,i,4,::10])
mass = lmassgrid[j,i,0]
m_samp = lm_samplegrid[j,i,:]
m_samp = m_samp[np.isfinite(m_samp)]
if lmassgrid[j,i,3] >= 0.:
low = mass - lmassgrid[j,i,3]
else:
low = 0.
hi = lmassgrid[j,i,4] - mass
nsamples = 1000
val = [hRin, hz, hRout, np.exp(rb), rf, mass]
dval = [dhRin, dhz, dhRout, drb, drf, low, hi]
vals.append(val)
dvals.append(dval)
lgrid.append(vals)
ldgrid.append(dvals)
lgrid= np.array(lgrid)
ldgrid = np.array(ldgrid)
hgrid = []
hmgrid = []
hdgrid = []
for j in range(0,len(fehbincent)):
vals = []
dvals = []
for i in range(0,len(agebincent)):
hRin = np.median(1/hsamples[j,i,0,::10])
dhRin = np.std(1/hsamples[j,i,0,::10])
hz = np.median(1/hsamples[j,i,1,::10])
dhz = np.std(1/hsamples[j,i,1,::10])
hRout = np.median(1/hsamples[j,i,2,::10])
dhRout = np.std(1/hsamples[j,i,2,::10])
rb = np.median(hsamples[j,i,3,::10])
drb = np.std(hsamples[j,i,3,::10])
rf = np.median(hsamples[j,i,4,::10])
drf = np.std(hsamples[j,i,4,::10])
mass = hmassgrid[j,i,0]
m_samp = hm_samplegrid[j,i,:]
m_samp = m_samp[np.isfinite(m_samp)]
low = mass - np.percentile(m_samp, 16)
hi = np.percentile(m_samp, 84) - mass
nsamples = 1000
val = [hRin, hz, hRout, np.exp(rb), rf, mass]
dval = [dhRin, dhz, dhRout, drb, drf, low, hi]
vals.append(val)
dvals.append(dval)
hgrid.append(vals)
hdgrid.append(dvals)
hgrid= np.array(hgrid)
hdgrid = np.array(hdgrid)
fig,ax = plt.subplots(1,2)
cmap= plt.cm.viridis
ages = (np.ones_like(lgrid[:,:,4])*((np.array(lagebins)[:-1]+np.array(lagebins)[1:])/2)).ravel()
jitters = np.random.uniform(low=-0.5,high=0.5, size=len(ages))
ages += jitters
lmask = np.array(lnumbins).ravel() > 30
hmask = np.array(hnumbins).ravel() > 30
lmed = movingmean(lgrid[:,:,1].ravel()[lmask],ages[lmask],lmassgrid[:,:,0].ravel()[lmask], lagebins)
hmed = movingmean(hgrid[:,:,1].ravel()[hmask],ages[hmask],hmassgrid[:,:,0].ravel()[hmask], hagebins)
comb_p = np.hstack((lgrid[:,:,1].ravel()[lmask],hgrid[:,:,1].ravel()[hmask]))
comb_age = np.hstack((ages[lmask], ages[hmask]))
comb_w = np.hstack((lmassgrid[:,:,0].ravel()[lmask],hmassgrid[:,:,0].ravel()[hmask]))
comb = movingmean(comb_p, comb_age, comb_w, hagebins)
ax[0].plot(agebincent,comb[:,0], color='Black', linestyle='dashed', label='Total Mean')
#plt.fill_between(agebincent, comb[:,0]-comb[:,1], comb[:,0]+comb[:,1], color='Gray', alpha=0.5, lw=0.)
ax[0].errorbar(agebincent,l_hzs, color=cmap(0.2), yerr=l_hzerrs, fmt='.', label=r'$\mathrm{Low\ [\alpha/Fe]}$')
ax[0].errorbar(agebincent[1:],h_hzs, color=cmap(0.8), yerr=h_hzerrs, fmt='.', label=r'$\mathrm{High\ [\alpha/Fe]}$')
#ax[0].fill_between(agebincent, lmed[:,0]-lmed[:,1], lmed[:,0]+lmed[:,1], color = cmap(0.2), alpha=0.5, lw=0.)
#ax[0].fill_between(agebincent, hmed[:,0]-hmed[:,1], hmed[:,0]+hmed[:,1], color = cmap(0.8), alpha=0.5, lw=0.)
#ax[0].plot(agebincent,lmed[:,0], color=cmap(0.2))
#ax[0].plot(agebincent,hmed[:,0], color=cmap(0.8))
ax[0].set_xlim(0.,13.)
ax[0].set_ylim(0.1,0.9)
ax[0].legend(loc=2, fontsize='small')
ax[0].set_ylabel(r'$h_Z\ \mathrm{[kpc]}$')
ax[0].set_xlabel(r'$\mathrm{age\ [Gyr]}$')
#ages = (np.ones_like(lgrid[:,:,4])*((lagebins[:-1]+lagebins[1:])/2)).ravel()
lmask = np.array(lnumbins).ravel() > 30
hmask = np.array(hnumbins).ravel() > 30
lmed = movingmean(lgrid[:,:,4].ravel()[lmask],ages[lmask],lmassgrid[:,:,0].ravel()[lmask], lagebins)
hmed = movingmean(hgrid[:,:,4].ravel()[hmask],ages[hmask],hmassgrid[:,:,0].ravel()[hmask], hagebins)
comb_p = np.hstack((lgrid[:,:,4].ravel()[lmask],hgrid[:,:,4].ravel()[hmask]))
comb_age = np.hstack((ages[lmask], ages[hmask]))
comb_w = np.hstack((lmassgrid[:,:,0].ravel()[lmask],hmassgrid[:,:,0].ravel()[hmask]))
comb = movingmean(comb_p, comb_age, comb_w, hagebins)
#ax[1].plot(agebincent,comb[:,0], color='Black', label='Weighted Mean')
#plt.fill_between(agebincent, comb[:,0]-comb[:,1], comb[:,0]+comb[:,1], color='Gray', alpha=0.5, lw=0.)
ax[1].errorbar(agebincent,l_rfs, color=cmap(0.2), yerr=l_rferrs, fmt='.', label=r'$\mathrm{Low\ [\alpha/Fe]}$')
ax[1].errorbar(agebincent[1:],h_rfs, color=cmap(0.8), yerr=h_rferrs, fmt='.', label=r'$\mathrm{High\ [\alpha/Fe]}$')
#ax[1].fill_between(agebincent, lmed[:,0]-lmed[:,1], lmed[:,0]+lmed[:,1], color = cmap(0.2), alpha=0.5, lw=0.)
#ax[1].fill_between(agebincent, hmed[:,0]-hmed[:,1], hmed[:,0]+hmed[:,1], color = cmap(0.8), alpha=0.5, lw=0.)
#ax[1].plot(agebincent,lmed[:,0], color=cmap(0.2))
#ax[1].plot(agebincent,hmed[:,0], color=cmap(0.8))
ax[1].set_xlim(0.,13.)
ax[1].set_ylim(-0.2,0.05)
ax[1].legend(loc=1, fontsize='small')
ax[1].set_ylabel(r'$R_{\mathrm{flare}}^{-1}\ \mathrm{[kpc^{-1}]}$')
ax[1].set_xlabel(r'$\mathrm{age\ [Gyr]}$')
l_hrin = lgrid[:,:,0].ravel()[lmask]
d_hrin = ldgrid[:,:,0].ravel()[lmask]
l_hrout = lgrid[:,:,2].ravel()[lmask]
d_hrout = ldgrid[:,:,2].ravel()[lmask]
agemask = ages < 13
delta = 1/(1/l_hrout[agemask]-1/l_hrin[agemask])
errs = np.sqrt(1/d_hrin[agemask]**2+1/d_hrout[agemask]**2)
errs = 1/errs
mm = movingmean(delta, ages[lmask][agemask], lmassgrid[:,:,0].ravel()[lmask][agemask], lagebins)
colors = np.zeros((len(delta),4))
'''
alphas = (lmassgrid[:,:,0].ravel()[lmask][hrmask]-np.min(lmassgrid[:,:,0].ravel()[lmask]))/(np.max(lmassgrid[:,:,0].ravel()[lmask])-np.min(lmassgrid[:,:,0].ravel()[lmask]))
alphas[alphas <= 0.] = 0.000001
alphas[alphas >= 1.] = 0.999999
colors[:,3] = alphas
ax[2].errorbar(ages[lmask][agemask], delta, yerr=errs, c='Black', fmt='.')
ax[2].plot(agebincent, mm[:,0], color='black', alpha=0.5)
ax[2].fill_between(agebincent, mm[:,0]-mm[:,1], mm[:,0]+mm[:,1], color='Gray', alpha=0.5, lw=0., )
ax[2].set_xlabel(r'$\mathrm{Age\ [Gyr]}$')
ax[2].set_ylabel(r'$(h_{R,\mathrm{out}}^{-1}-h_{R,\mathrm{in}}^{-1})^{-1}\ \mathrm{[Kpc]}$')
ax[2].legend(frameon=False)
ax[2].set_xlim(0.,13.)
ax[2].set_ylim(0.19,2.01)
ax[2].text(1,1.8,r'$\mathrm{Low\ [\alpha/Fe]\ only}$')
'''
fig.set_size_inches(1*textwidth, 1*columnwidth)
fig.tight_layout()
```
## Figure 8 - Profile Width vs age
```
fig = plt.figure()
l_hrin = lgrid[:,:,0].ravel()[lmask]
d_hrin = ldgrid[:,:,0].ravel()[lmask]
l_hrout = lgrid[:,:,2].ravel()[lmask]
d_hrout = ldgrid[:,:,2].ravel()[lmask]
agemask = ages < 13
delta = 1/(1/l_hrout[agemask]-1/l_hrin[agemask])
errs = np.sqrt(1/d_hrin[agemask]**2+1/d_hrout[agemask]**2)
errs = 1/errs
mm = movingmean(delta, ages[lmask][agemask], lmassgrid[:,:,0].ravel()[lmask][agemask], lagebins)
'''
colors = np.zeros((len(delta),4))
alphas = (lmassgrid[:,:,0].ravel()[lmask][hrmask]-np.min(lmassgrid[:,:,0].ravel()[lmask]))/(np.max(lmassgrid[:,:,0].ravel()[lmask])-np.min(lmassgrid[:,:,0].ravel()[lmask]))
alphas[alphas <= 0.] = 0.000001
alphas[alphas >= 1.] = 0.999999
colors[:,3] = alphas
'''
plt.errorbar(ages[lmask][agemask], delta, yerr=errs, c='Black', fmt='.')
plt.plot(agebincent, mm[:,0], color='black', alpha=0.5)
plt.fill_between(agebincent, mm[:,0]-mm[:,1], mm[:,0]+mm[:,1], color='Gray', alpha=0.5, lw=0., )
plt.xlabel(r'$\mathrm{age\ [Gyr]}$')
plt.ylabel(r'$(h_{R,\mathrm{out}}^{-1}-h_{R,\mathrm{in}}^{-1})^{-1}\ \mathrm{[kpc]}$')
plt.legend(frameon=False)
plt.xlim(0.,13.)
plt.ylim(0.19,2.01)
plt.text(1,1.8,r'$\mathrm{Low\ [\alpha/Fe]\ only}$')
fig.set_size_inches(1*columnwidth, columnwidth)
fig.tight_layout()
```
## Figure 10 - mass weighted age-$\mathrm{[Fe/H]}$ distribution as a function of $\mathrm{[\alpha/Fe]}$
```
ntol=30
fig,ax = plt.subplots(1,2, sharex=True, sharey=True)
mgrids = lgrid[:,:,5].ravel()
#mgrids = np.array(numbins).ravel()
lmask = np.where(np.array(lnumbins).ravel() > 15)
ltot = np.nansum(mgrids[lmask])#/(2*np.pi)
llow = np.nansum(ldgrid[:,:,5].ravel()[lmask])
lhi = np.nansum(ldgrid[:,:,6].ravel()[lmask])
norm = mpl.colors.Normalize(vmin=0., vmax=max(mgrids[lmask]))
norm2 = mpl.colors.Normalize(vmin=0., vmax=max(hgrid[:,:,5].ravel()))
c_m = mpl.cm.gist_earth_r
s_m = mpl.cm.ScalarMappable(cmap=c_m, norm=norm)
s_m2 = mpl.cm.ScalarMappable(cmap=c_m, norm=norm2)
s_m.set_array([])
s_m2.set_array([])
for j in range(0, len(lfehbins)-1):
for i in range(0,len(lagebins)-1):
tmass = lmassgrid[j,i,0]#/(2*np.pi)
#tmass = np.array(numbins)[j,i]
xedges = [lfehbins[j], lfehbins[j+1]]
yedgelow = [lagebins[i], lagebins[i]]
yedgehigh = [lagebins[i+1], lagebins[i+1]]
if np.array(lnumbins)[j,i] > ntol:
ax[0].fill_between(xedges, yedgelow,yedgehigh, color=s_m.to_rgba(tmass))
ax[0].set_xlim(-0.6,0.2)
ax[0].set_xlabel(r'$\mathrm{[Fe/H]}$')
ax[0].set_ylabel(r'$\mathrm{age\ [Gyr]}$')
#plt.colorbar(s_m, label=r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{2}]}$')
fig.set_size_inches(5,5)
fig.subplots_adjust(wspace=0.2)
plt.ylim(1.,13.)
mgrids = hgrid[:,:,5].ravel()
#mgrids = np.array(numbins).ravel()
hmask = np.where(np.array(hnumbins).ravel() > 15)
htot = np.nansum(mgrids[hmask])#/(2*np.pi)
hlow = np.nansum(hdgrid[:,:,5].ravel()[hmask])
hhi = np.nansum(hdgrid[:,:,6].ravel()[hmask])
for j in range(0, len(hfehbins)-1):
for i in range(0,len(hagebins)-1):
tmass = hmassgrid[j,i,0]#/(2*np.pi)
#tmass = np.array(numbins)[j,i]
xedges = [hfehbins[j], hfehbins[j+1]]
yedgelow = [hagebins[i], hagebins[i]]
yedgehigh = [hagebins[i+1], hagebins[i+1]]
ax[1].fill_between(xedges, yedgelow,yedgehigh, color=s_m2.to_rgba(tmass))
ax[1].set_xlim(-0.6,0.2)
ax[1].set_xlabel(r'$\mathrm{[Fe/H]}$')
#ax[1].set_ylabel(r'Age $\mathrm{[Gyr]}$')
fulltot = ltot+htot
fulllow = llow+hlow
fullhi = lhi+hhi
'''
for j in range(0, len(hfehbins)-1):
for i in range(0,len(hagebins)-1):
tmass = hmassgrid[j,i,0]+lmassgrid[j,i,0]#/(2*np.pi)
#tmass = np.array(numbins)[j,i]
xedges = [hfehbins[j], hfehbins[j+1]]
yedgelow = [hagebins[i], hagebins[i]]
yedgehigh = [hagebins[i+1], hagebins[i+1]]
ax[2].fill_between(xedges, yedgelow,yedgehigh, color=s_m.to_rgba(tmass))
ax[2].set_xlim(-0.6,0.2)
ax[2].set_xlabel(r'$\mathrm{[Fe/H]}$')
#ax[2].set_ylabel(r'Age $\mathrm{[Gyr]}$')
'''
#fig.subplots_adjust(wspace=0.2)
#fig.subplots_adjust(wspace=0.2, right=0.8, bottom=0.15)
ax[0].set_ylim(1.,13.)
#cax = fig.add_axes([0.87,0.21,0.02,0.72])
#plt.colorbar(s_m, ax=ax[2], label=r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
plt.colorbar(s_m, ax=ax[0], label=r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
plt.colorbar(s_m2, ax=ax[1], label=r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
cplt = 'Black'
ax[0].text(-0.55,2,r'$\mathrm{Low\ [\alpha/Fe]}$', fontsize=14, color=cplt)
ax[0].text(-0.55,12,r'$\Sigma_{R_0,\mathrm{tot}} =$', fontsize=8)
ax[0].text(-0.55,11,r'$'+str(round(ltot,1))+'^{+'+str(round(lhi,1))+'}_{-'+str(round(llow,1))+'}\mathrm{(stat.)}_{-1.9}^{+4.4}\mathrm{(syst.)}\ \mathrm{M_{\odot}\ pc^{-2}}$',fontsize=8, color=cplt)
ax[1].text(-0.55,12,r'$\Sigma_{R_0,\mathrm{tot}} =$', fontsize=8, color='White')
ax[1].text(-0.55,11,r'$'+str(round(htot,1))+'^{+'+str(round(hhi,1))+'}_{-'+str(round(hlow,1))+'}\mathrm{(stat.)}_{-0.6}^{+0.6}\mathrm{(syst.)}\ \mathrm{M_{\odot}\ pc^{-2}}$',fontsize=8, color='White')
#ax[2].text(-0.55,12,r'$\Sigma_{R_0,tot} ='+str(round(fulltot,1))+'^{+'+str(round(fullhi,1))+'}_{-'+str(round(fulllow,1))+'}\ \mathrm{M_{\odot}\ pc^{-2}}$',fontsize=10, color=cplt)
ax[1].text(-0.55,2.,r'$\mathrm{High\ [\alpha/Fe]}$', fontsize=14, color=cplt)
#ax[2].text(-0.55,2.,r'$\mathrm{total}$', fontsize=14, color=cplt)
ax[0].set_xticks([-0.6,-0.4,-0.2,0.,0.2])
ax[1].set_xticks([-0.6,-0.4,-0.2,0.,0.2])
fig.set_size_inches(1*textwidth,0.8*columnwidth)
fig.tight_layout()
#fig.subplots_adjust(right=0.86, wspace=0.05)
```
## Figure 11 - total mass weighted age-$\mathrm{[Fe/H]}$ distribution
```
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig = plt.figure()
ax = plt.subplot()
mgrids = lgrid[:,:,5].ravel()+hgrid[:,:,5].ravel()
#mgrids = np.array(numbins).ravel()
lmask = np.where(np.array(lnumbins).ravel() > 15)
ltot = np.nansum(mgrids[lmask])#/(2*np.pi)
llow = np.nansum(ldgrid[:,:,5].ravel()[lmask])
lhi = np.nansum(ldgrid[:,:,6].ravel()[lmask])
norm = mpl.colors.Normalize(vmin=0., vmax=max(mgrids[lmask]))
c_m = mpl.cm.gist_earth_r
s_m = mpl.cm.ScalarMappable(cmap=c_m, norm=norm)
s_m.set_array([])
for j in range(0, len(hfehbins)-1):
for i in range(0,len(hagebins)-1):
tmass = hmassgrid[j,i,0]+lmassgrid[j,i,0]#/(2*np.pi)
#tmass = np.array(numbins)[j,i]
xedges = [hfehbins[j], hfehbins[j+1]]
yedgelow = [hagebins[i], hagebins[i]]
yedgehigh = [hagebins[i+1], hagebins[i+1]]
ax.fill_between(xedges, yedgelow,yedgehigh, color=s_m.to_rgba(tmass))
divider = make_axes_locatable(ax)
axHistx = divider.append_axes("top", size=1.2, pad=0.1)
axHisty = divider.append_axes("right", size=1.5, pad=0.1)
fehs = (np.ones_like(hmassgrid[:,:,0]).T*fehbincent).T
ages = (np.ones_like(hmassgrid[:,:,0])*agebincent)
axHistx.hist(fehs.ravel(), bins = lfehbins, weights = (hmassgrid[:,:,0]+lmassgrid[:,:,0]).ravel(), histtype='step', color='Black')
axHisty.hist(ages.ravel(), bins = lagebins, weights = (hmassgrid[:,:,0]+lmassgrid[:,:,0]).ravel(), histtype='step', color='Black', orientation='horizontal')
ax.set_xlim(-0.6,0.2)
ax.set_ylim(1,13.)
ax.set_xlabel(r'$\mathrm{[Fe/H]}$')
ax.set_ylabel(r'$\mathrm{age\ [Gyr]}$')
ax.set_xticks(np.arange(-0.6,0.2,0.1))
axHisty.set_yticks([])
axHistx.set_xticks([])
axHistx.set_ylim(0.,5.5)
axHisty.set_xticks([0.5,1.5,2.5,3.5,4.5,5.5])
axHisty.set_ylim(1.,13.)
axHistx.set_ylabel(r'$\Sigma_{R_0}\ \mathrm{[M_{\odot}\ pc^{-2}]}$')
axHisty.set_xlabel(r'$\Sigma_{R_0}\ \mathrm{[M_{\odot}\ pc^{-2}]}$')
cax = fig.add_axes([0.65,0.76,0.24,0.05])
cb = plt.colorbar(s_m, cax=cax, orientation='horizontal', label=r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
cb.set_ticks([0.,0.5,1.,1.5])
ax.text(-0.58,11,r'$\Sigma_{R_0,\mathrm{tot}} ='+str(round(fulltot,1))+'^{+'+str(round(fullhi,1))+'}_{-'+str(round(fulllow,1))+'}\mathrm{(stat.)}_{-2.4}^{+5.0}\mathrm{(syst.)}\ \mathrm{M_{\odot}\ pc^{-2}}$',fontsize=10, color=cplt)
fig.set_size_inches(0.8*textwidth,0.6*textwidth)
```
## Figure 12 - $h_Z$ PDF
```
lmask = (np.array(lnumbins).ravel() > 30)
hmask = (np.array(hnumbins).ravel() > 30) #& (hmassgrid.ravel() < 5.)
l_hz = lgrid[:,:,1].ravel()[lmask]
l_dhz = ldgrid[:,:,1].ravel()[lmask]
h_hz = hgrid[:,:,1].ravel()[hmask]
h_dhz = hdgrid[:,:,1].ravel()[hmask]
l_mass = lgrid[:,:,5].ravel()[lmask]
l_low = ldgrid[:,:,5].ravel()[lmask]
l_hi = ldgrid[:,:,6].ravel()[lmask]
l_merr = np.dstack((l_low,l_hi))[0].T
print(np.shape(l_merr))
h_mass = hgrid[:,:,5].ravel()[hmask]
h_low = hdgrid[:,:,5].ravel()[hmask]
h_hi = hdgrid[:,:,6].ravel()[hmask]
h_merr = np.dstack((h_low,h_hi))[0].T
ages = (np.ones_like(lgrid[:,:,1])*((np.array(lagebins)[:-1]+np.array(lagebins)[1:])/2)).ravel()
norm = mpl.colors.Normalize(vmin=0, vmax=13.)
c_m = mpl.cm.viridis
s_m = mpl.cm.ScalarMappable(cmap=c_m, norm=norm)
s_m.set_array([])
print(np.shape(ages))
print(np.shape(l_hz))
fig = plt.figure()
plt.errorbar(l_hz, l_mass, xerr=l_dhz, yerr=l_merr, fmt='none', linewidth=1, ecolor=s_m.to_rgba(ages[lmask]), capsize=0.)
plt.scatter(l_hz, l_mass, c=s_m.to_rgba(ages[lmask]), lw=0., s=5)
plt.errorbar(h_hz, h_mass, xerr=h_dhz, yerr=h_merr, fmt='none', ecolor=s_m.to_rgba(ages[hmask]), linewidth=1, capsize=0.)
plt.scatter(h_hz, h_mass, c=s_m.to_rgba(ages[hmask]), lw=0., s=5)
bins = np.linspace(0.1,1.2,10)
llabel = r'$\mathrm{Low\ [\alpha/Fe]}$'
hlabel = r'$\mathrm{High\ [\alpha/Fe]}$'
plt.hist(l_hz, weights=l_mass, bins=bins, color=cmap(0.2), histtype='step', label=llabel)
plt.hist(h_hz[np.isfinite(h_mass)], bins=bins,weights=h_mass[np.isfinite(h_mass)],color=cmap(0.8), histtype='step', label=hlabel)
stack_hz = np.hstack((l_hz,h_hz))
stack_mass = np.hstack((l_mass,h_mass))
plt.hist(stack_hz[np.isfinite(stack_mass)], bins=bins,weights=stack_mass[np.isfinite(stack_mass)],color='black', histtype='step', linestyle='dashed', alpha=0.5)
plt.yscale('log')
plt.ylim(1e-3,2e1)
plt.xlim(0.,1.4)
plt.ylabel(r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
plt.xlabel(r'$h_Z$ $\mathrm{[kpc]}$')
plt.legend(frameon=False)
plt.colorbar(s_m, label=r'$\mathrm{age\ [Gyr]}$')
fig.set_size_inches(1.5*columnwidth,columnwidth)
fig.tight_layout()
```
## Figure 13 - The surface-mass density profiles (combined in age and $\mathrm{[Fe/H]}$)
```
def ret_map_surfdens(samp, j, i, inv_ihRin=True, _SIGNIF=0.025):
tsamp= samp[j,i,:,::_SKIP]
nsamples= len(tsamp[0])
tRs= numpy.tile(Rs,(nsamples,1)).T
ldp= numpy.empty((len(Rs),nsamples))
Rb= numpy.tile(numpy.exp(tsamp[3]),(len(Rs),1))
ihRin= numpy.tile(tsamp[0],(len(Rs),1))
if inv_ihRin == True:
ihRin = -numpy.tile(tsamp[0],(len(Rs),1))
ihRout= numpy.tile(tsamp[2],(len(Rs),1))
# Rb >= R0
leRb= (tRs <= Rb)*(Rb >= densprofiles._R0)
ldp[leRb]= ihRin[leRb]*(tRs[leRb]-densprofiles._R0)
gtRb= (tRs > Rb)*(Rb >= densprofiles._R0)
ldp[gtRb]= -ihRout[gtRb]*(tRs[gtRb]-densprofiles._R0)\
+ihRout[gtRb]*(Rb[gtRb]-densprofiles._R0)\
+ihRin[gtRb]*(Rb[gtRb]-densprofiles._R0)
# Rb < R0, normalize outer at R0
leRb= (tRs <= Rb)*(Rb < densprofiles._R0)
ldp[leRb]= ihRin[leRb]*(tRs[leRb]-densprofiles._R0)\
-ihRout[leRb]*(Rb[leRb]-densprofiles._R0)\
-ihRin[leRb]*(Rb[leRb]-densprofiles._R0)
gtRb= (tRs > Rb)*(Rb < densprofiles._R0)
ldp[gtRb]= -ihRout[gtRb]*(tRs[gtRb]-densprofiles._R0)
# Label and relative normalization
norm= numpy.exp(numpy.median(ldp,axis=1))[numpy.argmin(numpy.fabs(Rs-densprofiles._R0))]
return Rs,numpy.exp(numpy.median(ldp,axis=1))/norm,numpy.exp(numpy.sort(ldp,axis=1)[:,int(round(_SIGNIF*nsamples))])/norm,numpy.exp(numpy.sort(ldp,axis=1)[:,int(round((1.-_SIGNIF)*nsamples))])/norm
_SKIP = 10
_SIGNIF = 0.025
Rs= numpy.linspace(3.,15.,1001)
lnumbins = np.array(lnumbins)
hnumbins = np.array(hnumbins)
ntol = 30.
densprofs = []
densprofs_lo = []
densprofs_hi = []
totalmass = []
for ii,age in enumerate(agebincent):
tdensprofs = []
tdensprofs_lo = []
tdensprofs_hi = []
totmass = 0
for jj,feh in enumerate(fehbincent):
mass = lgrid[jj,ii,5]
if lnumbins[jj,ii] < ntol:
mass = 0.
rs, ldp, ldp_lo, ldp_hi = ret_map_surfdens(lsamples, jj,ii, inv_ihRin=True)
ldp *= mass#/(2*np.pi)
ldp_lo *= mass
ldp_hi *= mass
totmass += mass#/(2*np.pi)
tdensprofs.append(ldp)
tdensprofs_lo.append(ldp_lo)
tdensprofs_hi.append(ldp_hi)
tdensprofs = np.array(tdensprofs)
tdensprofs_lo = np.array(tdensprofs_lo)
tdensprofs_hi = np.array(tdensprofs_hi)
if totmass != 0:
totaldensprof = np.nansum(tdensprofs, axis=0)/totmass
totaldensprof_lo = np.nansum(tdensprofs_lo, axis=0)/totmass
totaldensprof_hi = np.nansum(tdensprofs_hi, axis=0)/totmass
densprofs.append(totaldensprof)
densprofs_lo.append(totaldensprof_lo)
densprofs_hi.append(totaldensprof_hi)
totalmass.append(totmass)
if totmass == 0:
totaldensprof = np.nansum(tdensprofs, axis=0)*0.
totaldensprof_lo = np.nansum(tdensprofs_lo, axis=0)*0.
totaldensprof_hi = np.nansum(tdensprofs_hi, axis=0)*0.
densprofs.append(totaldensprof)
densprofs_lo.append(totaldensprof_lo)
densprofs_hi.append(totaldensprof_hi)
totalmass.append(totmass)
ldensprofs = np.array(densprofs)
ldensprofs_lo = np.array(densprofs_lo)
ldensprofs_hi = np.array(densprofs_hi)
ltotalmass = np.array(totalmass)
densprofs = []
densprofs_lo = []
densprofs_hi = []
totalmass = []
for ii,age in enumerate(agebincent):
tdensprofs = []
tdensprofs_lo = []
tdensprofs_hi = []
totmass = 0
for jj,feh in enumerate(fehbincent):
mass = hgrid[jj,ii,5]
if hnumbins[jj,ii] < ntol:
mass = 0.
rs, ldp, ldp_lo, ldp_hi = ret_map_surfdens(hsamples, jj,ii, inv_ihRin=True)
ldp *= mass#/(2*np.pi)
ldp_lo *= mass
ldp_hi *= mass
totmass += mass#/(2*np.pi)
tdensprofs.append(ldp)
tdensprofs_lo.append(ldp_lo)
tdensprofs_hi.append(ldp_hi)
tdensprofs = np.array(tdensprofs)
tdensprofs_lo = np.array(tdensprofs_lo)
tdensprofs_hi = np.array(tdensprofs_hi)
if totmass != 0:
totaldensprof = np.nansum(tdensprofs, axis=0)/totmass
totaldensprof_lo = np.nansum(tdensprofs_lo, axis=0)/totmass
totaldensprof_hi = np.nansum(tdensprofs_hi, axis=0)/totmass
densprofs.append(totaldensprof)
densprofs_lo.append(totaldensprof_lo)
densprofs_hi.append(totaldensprof_hi)
totalmass.append(totmass)
if totmass == 0:
totaldensprof = np.nansum(tdensprofs, axis=0)*0.
totaldensprof_lo = np.nansum(tdensprofs_lo, axis=0)*0.
totaldensprof_hi = np.nansum(tdensprofs_hi, axis=0)*0.
densprofs.append(totaldensprof)
densprofs_lo.append(totaldensprof_lo)
densprofs_hi.append(totaldensprof_hi)
totalmass.append(totmass)
print(densprofs)
hdensprofs = np.array(densprofs)
hdensprofs_lo = np.array(densprofs_lo)
hdensprofs_hi = np.array(densprofs_hi)
htotalmass = np.array(totalmass)
norm = mpl.colors.Normalize(vmin=0., vmax=13.)
cmap = plt.cm.viridis
s_m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
s_m.set_array([])
columnwidth = 240./72.27
textwidth = 504.0/72.27
fig, ax = plt.subplots(3,1,sharex=True, sharey=True)
ax[0].plot(rs,np.sum(ldensprofs.T*ltotalmass, axis=1), color=cmap(0.2), label=llabel)
ax[0].fill_between(rs, np.sum(ldensprofs_lo.T*(ltotalmass), axis=1), np.sum(ldensprofs_hi.T*(ltotalmass), axis=1), color=cmap(0.2), alpha=0.5)
ax[0].plot(rs,np.sum(hdensprofs.T*htotalmass, axis=1), color=cmap(0.8), label=hlabel)
ax[0].fill_between(rs, np.sum(hdensprofs_lo.T*(htotalmass), axis=1), np.sum(hdensprofs_hi.T*(htotalmass), axis=1), color=cmap(0.8), alpha=0.5)
alldensprofs = np.hstack((ldensprofs.T,hdensprofs.T))
alldensprofs_lo = np.hstack((ldensprofs_lo.T, hdensprofs_lo.T))
alldensprofs_hi = np.hstack((ldensprofs_hi.T, hdensprofs_hi.T))
alltotmass = np.hstack((ltotalmass,htotalmass))
ax[0].plot(rs,np.sum(alldensprofs*alltotmass, axis=1), color='black', linestyle='dashed', label='$\mathrm{total}$')
ax[0].fill_between(rs, np.sum(alldensprofs_lo*(alltotmass), axis=1), np.sum(alldensprofs_hi*(alltotmass), axis=1), color='Gray', alpha=0.5)
ax[0].set_yscale('log')
ax[0].legend(loc=3, frameon=False)
ax[0].set_xlabel(r'$R\ \mathrm{[kpc]}$')
ax[0].set_ylabel(r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
print(np.sum(ldensprofs_lo.T*ltotalmass, axis=1))
alldens = []
allmass = []
for ii,prof in enumerate(densprofs):
tdensprof = np.stack((ldensprofs[ii],hdensprofs[ii]))
ttotalmass = np.hstack((ltotalmass[ii], htotalmass[ii]))
ax[1].plot(rs,np.sum(tdensprof.T*ttotalmass, axis=1), color = s_m.to_rgba(agebincent[ii]))
alldens.extend(tdensprof)
allmass.extend(ttotalmass)
alldens = np.array(alldens)
allmass = np.array(allmass)
ax[1].plot(rs,np.sum(alldens.T*allmass, axis=1), color='black', linestyle='dashed', label='$\mathrm{total}$')
ax[1].legend(loc=1, frameon=False)
ax[1].set_yscale('log')
ax[1].set_ylim(1e-2,1.1e1)
ax[1].set_xlabel(r'$R\ \mathrm{[kpc]}$')
ax[1].set_ylabel(r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
densprofs = []
totalmass = []
for jj,feh in enumerate(fehbincent):
tdensprofs = []
totmass = 0
for ii,age in enumerate(agebincent):
mass = lmassgrid[jj,ii,0]
if lnumbins[jj,ii] < ntol:
mass = 0.
rs, ldp, ldp_lo, ldp_hi = ret_map_surfdens(lsamples, jj,ii, inv_ihRin=True)
ldp *= mass#/(2*np.pi)
totmass += mass#/(2*np.pi)
tdensprofs.append(ldp)
tdensprofs = np.array(tdensprofs)
if totmass != 0:
totaldensprof = np.nansum(tdensprofs, axis=0)/totmass
densprofs.append(totaldensprof)
totalmass.append(totmass)
if totmass == 0:
totaldensprof = np.nansum(tdensprofs, axis=0)*0.
densprofs.append(totaldensprof)
totalmass.append(totmass)
ldensprofs = np.array(densprofs)
ltotalmass = np.array(totalmass)
densprofs = []
totalmass = []
for jj,feh in enumerate(fehbincent):
tdensprofs = []
totmass = 0
for ii,age in enumerate(agebincent):
mass = hmassgrid[jj,ii,0]
if hnumbins[jj,ii] < ntol:
mass = 0.
rs, ldp, ldp_lo, ldp_hi = ret_map_surfdens(hsamples, jj,ii, inv_ihRin=True)
ldp *= mass#/(2*np.pi)
totmass += mass#/(2*np.pi)
tdensprofs.append(ldp)
tdensprofs = np.array(tdensprofs)
if totmass != 0:
totaldensprof = np.nansum(tdensprofs, axis=0)/totmass
densprofs.append(totaldensprof)
totalmass.append(totmass)
if totmass == 0:
totaldensprof = np.nansum(tdensprofs, axis=0)*0.
densprofs.append(totaldensprof)
totalmass.append(totmass)
hdensprofs = np.array(densprofs)
htotalmass = np.array(totalmass)
cax = fig.add_axes([0.77,0.42,0.02,0.23])
plt.colorbar(s_m, cax=cax,label=r'$\mathrm{age}\ \mathrm{[Gyr]}$',)
norm = mpl.colors.Normalize(vmin=-0.6, vmax=0.2)
cmap = plt.cm.viridis_r
s_m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
s_m.set_array([])
alldens = []
allmass = []
for ii,prof in enumerate(densprofs):
tdensprof = np.stack((ldensprofs[ii],hdensprofs[ii]))
ttotalmass = np.hstack((ltotalmass[ii], htotalmass[ii]))
ax[2].plot(rs,np.sum(tdensprof.T*ttotalmass, axis=1), color = s_m.to_rgba(fehbincent[ii]))
alldens.extend(tdensprof)
allmass.extend(ttotalmass)
alldens = np.array(alldens)
allmass = np.array(allmass)
ax[2].plot(rs,np.sum(alldens.T*allmass, axis=1), color='black', linestyle='dashed', label=r'$\mathrm{total}$')
ax[2].legend(loc=1, frameon=False)
ax[2].set_yscale('log')
ax[2].set_ylim(1e-2,4e1)
ax[2].set_xlabel(r'$R\ \mathrm{[kpc]}$')
ax[2].set_ylabel(r'$\Sigma_{R_0}$ $\mathrm{[M_{\odot}\ pc^{-2}]}$')
fig.set_size_inches(1.3*columnwidth,1*textwidth)
fig.subplots_adjust(left=0.2,bottom=0.1,right=0.75,top=0.97, hspace=0.3)
cax = fig.add_axes([0.77,0.11,0.02,0.23])
plt.colorbar(s_m, cax=cax,label=r'$\rm{[Fe/H]}$')
ax[1].text(5,4e-2, r'$\rm{f(age)}$', fontsize=14)
ax[2].text(5,4e-2, r'$\rm{f([Fe/H])}$', fontsize=14)
```
## Figure 14 - MDFs at different radii
```
def surfdensatR(R, samp, mass):
rs, prof, proflo, profhi = return_profile(samp,inv_ihRin=True)
prof = prof*mass
proflo = proflo*mass
profhi = profhi*mass
return prof[numpy.argmin(numpy.fabs(rs-R))], proflo[numpy.argmin(numpy.fabs(rs-R))], profhi[numpy.argmin(numpy.fabs(rs-R))]
radii = [5.,8.5,12.]
massR = []
massRlo = []
massRhi =[]
for R in radii:
masses = []
masseslo = []
masseshi = []
for i in range(0, len(fehbincent)):
fehmass = 0.
fehmasslo = 0.
fehmasshi = 0.
for j in range(0,len(agebincent)):
m, mlo, mhigh = surfdensatR(R, lsamples[i,j,:], lmassgrid[i,j,0])
#mlo = surfdensatR(R, lsamples[i,j,:], lmassgrid[i,j,3])
#mhigh = surfdensatR(R, lsamples[i,j,:], lmassgrid[i,j,4])
#m += surfdensatR(R, hsamples[i,j,:], hmassgrid[i,j,0])
fehmass += m
fehmasslo += mlo
fehmasshi += mhigh
masses.append(fehmass)
masseslo.append(fehmasslo)
masseshi.append(fehmasshi)
massR.append(masses/np.sum(masses))
massRlo.append(masseslo/np.sum(masseslo))
massRhi.append(masseshi/np.sum(masseshi))
fig = plt.figure()
for i in range(0,len(radii)):
plt.plot(fehbincent,np.array(massR)[i], c = plt.cm.viridis((radii[i]-4)/8.), label=r'$R = '+str(radii[i])+'\ \mathrm{kpc}$')
plt.fill_between(fehbincent, np.array(massRlo)[i], np.array(massRhi)[i], color = plt.cm.viridis((radii[i]-4)/8.), alpha=0.5)
plt.ylim(0.,0.35)
plt.legend(frameon=False, loc=2)
plt.ylabel(r'$\Sigma(R)/\Sigma_{R,\mathrm{tot}}$')
plt.xlabel(r'$\mathrm{[Fe/H]}$')
fig.set_size_inches(1.5*columnwidth, columnwidth)
fig.tight_layout()
```
## Figure 14 - $R_{\mathrm{peak}}$ as a function of age and $\mathrm{[Fe/H]}$
```
fig,ax = plt.subplots(2,1)
cmap= plt.cm.viridis
ages = (np.ones_like(lgrid[:,:,4])*((np.array(lagebins)[:-1]+np.array(lagebins)[1:])/2)).ravel()
fehs = (np.ones_like(lgrid[:,:,4]).T*((np.array(lfehbins)[:-1]+np.array(lfehbins)[1:])/2)).T.ravel()
agejitters = np.random.uniform(low=-0.5,high=0.5, size=len(ages))
fehjitters = np.random.uniform(low=-0.025, high=0.025, size=len(fehs))
ages += agejitters
fehs += fehjitters
lmask = np.array(lnumbins).ravel() > 30
hmask = np.array(hnumbins).ravel() > 30
lmed = movingmean(lgrid[:,:,3].ravel()[lmask],ages[lmask],lmassgrid[:,:,0].ravel()[lmask], lagebins)
hmed = movingmean(hgrid[:,:,3].ravel()[hmask],ages[hmask],hmassgrid[:,:,0].ravel()[hmask], hagebins)
comb_p = np.hstack((lgrid[:,:,3].ravel()[lmask],hgrid[:,:,3].ravel()[hmask]))
comb_age = np.hstack((ages[lmask], ages[hmask]))
comb_w = np.hstack((lmassgrid[:,:,0].ravel()[lmask],hmassgrid[:,:,0].ravel()[hmask]))
comb = movingmean(comb_p, comb_age, comb_w, hagebins)
#ax[0].plot(agebincent,comb[:,0], color='Black', linestyle='dashed', label='Total Mean')
#plt.fill_between(agebincent, comb[:,0]-comb[:,1], comb[:,0]+comb[:,1], color='Gray', alpha=0.5, lw=0.)
ax[0].errorbar(ages[lmask],lgrid[:,:,3].ravel()[lmask], color=cmap(0.2), yerr=np.exp(ldgrid[:,:,3].ravel()[lmask]), fmt='.', label=r'$\mathrm{Low\ [\alpha/Fe]}$')
#ax[0].errorbar(ages[hmask],hgrid[:,:,3].ravel()[hmask], color=cmap(0.8),yerr=np.exp(hdgrid[:,:,3].ravel()[hmask]), fmt='.', label=r'$\mathrm{High\ [\alpha/Fe]}$')
ax[0].fill_between(agebincent, lmed[:,0]-lmed[:,1], lmed[:,0]+lmed[:,1], color = cmap(0.2), alpha=0.5, lw=0.)
#ax[0].fill_between(agebincent, hmed[:,0]-hmed[:,1], hmed[:,0]+hmed[:,1], color = cmap(0.8), alpha=0.5, lw=0.)
ax[0].plot(agebincent,lmed[:,0], color=cmap(0.2))
#ax[0].plot(agebincent,hmed[:,0], color=cmap(0.8))
#ax[0].legend(loc=2, fontsize='small')
ax[0].set_ylabel(r'$R_{\mathrm{peak}}\ \mathrm{[kpc]}$')
ax[0].set_xlabel(r'$\mathrm{age\ [Gyr]}$')
lmed = movingmean(lgrid[:,:,3].ravel()[lmask],fehs[lmask],lmassgrid[:,:,0].ravel()[lmask], lfehbins)
hmed = movingmean(hgrid[:,:,3].ravel()[hmask],fehs[hmask],hmassgrid[:,:,0].ravel()[hmask], hfehbins)
comb_p = np.hstack((lgrid[:,:,3].ravel()[lmask],hgrid[:,:,3].ravel()[hmask]))
comb_age = np.hstack((fehs[lmask], fehs[hmask]))
comb_w = np.hstack((lmassgrid[:,:,0].ravel()[lmask],hmassgrid[:,:,0].ravel()[hmask]))
comb = movingmean(comb_p, comb_age, comb_w, hfehbins)
#ax[1].plot(fehbincent,comb[:,0], color='Black', linestyle='dashed', label='Total Mean')
#plt.fill_between(agebincent, comb[:,0]-comb[:,1], comb[:,0]+comb[:,1], color='Gray', alpha=0.5, lw=0.)
ax[1].errorbar(fehs[lmask],lgrid[:,:,3].ravel()[lmask], color=cmap(0.2), yerr=np.exp(ldgrid[:,:,3].ravel()[lmask]), fmt='.', label=r'$\mathrm{Low\ [\alpha/Fe]}$')
#ax[1].errorbar(fehs[hmask],hgrid[:,:,3].ravel()[hmask], color=cmap(0.8),yerr=np.exp(hdgrid[:,:,3].ravel()[hmask]), fmt='.', label=r'$\mathrm{High\ [\alpha/Fe]}$')
ax[1].fill_between(fehbincent, lmed[:,0]-lmed[:,1], lmed[:,0]+lmed[:,1], color = cmap(0.2), alpha=0.5, lw=0.)
#ax[1].fill_between(fehbincent, hmed[:,0]-hmed[:,1], hmed[:,0]+hmed[:,1], color = cmap(0.8), alpha=0.5, lw=0.)
ax[1].plot(fehbincent,lmed[:,0], color=cmap(0.2))
#ax[1].plot(fehbincent,hmed[:,0], color=cmap(0.8))
ax[0].text(1,12.5,r'$\mathrm{Low\ [\alpha/Fe]\ Only}$')
ax[1].text(-0.05,12.5,r'$\mathrm{Low\ [\alpha/Fe]\ Only}$')
ax[1].set_ylabel(r'$R_{\mathrm{peak}}\ \mathrm{[kpc]}$')
ax[1].set_xlabel(r'$\mathrm{[Fe/H]}$')
ax[1].set_xticks([-0.6,-0.4,-0.2,0.,0.2])
fig.set_size_inches(1.5*columnwidth, 2*columnwidth)
fig.tight_layout()
```
# TF Ranking
In this Notebook, we run through a simplified example to highlight some of the features of the TF Ranking library and demonstrate an end-to-end execution.
The general recipe is a short list of four main steps:
1. Compose a function to **read** input data and prepare a Tensorflow Dataset;
2. Define a **scoring** function that, given a (set of) query-document feature vector(s), produces a score indicating the query's level of relevance to the document;
3. Create a **loss** function that measures how far off the produced scores from step (2) are from the ground truth; and,
4. Define evaluation **metrics**.
A final step makes use of standard Tensorflow API to create, train, and evaluate a model.
We have included in the TF Ranking library a default implementation of data readers (in the `tensorflow_ranking.data` module), loss functions (in `tensorflow_ranking.losses`), and popular evaluation metrics (in `tensorflow_ranking.metrics`) that may be further tailored to your needs as we shall show later in this Notebook.
### Preparation
In what follows, we will assume the existence of a dataset that is split into training and test sets and that are stored at `data/train.txt` and `data/test.txt` respectively. We further assume that the dataset is in the LibSVM format and lines in the training and test files are sorted by query ID -- an assumption that holds for many popular learning-to-rank benchmark datasets.
We have included in our release a toy (randomly generated) dataset in the `data/` directory. However, to learn a more interesting model, you may copy your dataset of choice to the `data/` directory. Please ensure the format of your dataset conforms to the requirements above. Alternatively, you may edit this Notebook to plug in a customized input pipeline for a non-comformant dataset.
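For reference, each line of a LibSVM-formatted ranking file has the shape `<relevance> qid:<query_id> <feature_id>:<value> ...`. A minimal parsing sketch for one such line (the helper name and the sample line are illustrative, not part of TF-Ranking, which parses these files internally):

```python
def parse_libsvm_line(line):
    """Split one LibSVM ranking line into (relevance, query_id, features)."""
    tokens = line.strip().split()
    relevance = float(tokens[0])
    query_id = tokens[1].split(":")[1]          # "qid:42" -> "42"
    features = {}
    for tok in tokens[2:]:                      # remaining "idx:value" pairs
        idx, val = tok.split(":")
        features[int(idx)] = float(val)
    return relevance, query_id, features

rel, qid, feats = parse_libsvm_line("2 qid:42 1:0.5 3:1.0")
# rel == 2.0, qid == "42", feats == {1: 0.5, 3: 1.0}
```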
# Get Started with TF Ranking
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_libsvm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_libsvm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
### Dependencies and Global Variables
Let us start by importing libraries that will be used throughout this Notebook. We also enable the "eager execution" mode for convenience and demonstration purposes.
```
! pip install tensorflow_ranking
import tensorflow as tf
import tensorflow_ranking as tfr
tf.enable_eager_execution()
tf.executing_eagerly()
```
Next, we will download a dummy dataset in LibSVM format. Note that you can replace these datasets with public or custom datasets.
We also define some global parameters.
```
! wget -O "/tmp/train.txt" "https://raw.githubusercontent.com/tensorflow/ranking/master/tensorflow_ranking/examples/data/train.txt"
! wget -O "/tmp/test.txt" "https://raw.githubusercontent.com/tensorflow/ranking/master/tensorflow_ranking/examples/data/test.txt"
# Store the paths to files containing training and test instances.
# As noted above, we will assume the data is in the LibSVM format
# and that the content of each file is sorted by query ID.
_TRAIN_DATA_PATH="/tmp/train.txt"
_TEST_DATA_PATH="/tmp/test.txt"
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS="pairwise_logistic_loss"
# In the TF-Ranking framework, a training instance is represented
# by a Tensor that contains features from a list of documents
# associated with a single query. For simplicity, we fix the shape
# of these Tensors to a maximum list size and call it "list_size,"
# the maximum number of documents per query in the dataset.
# In this demo, we take the following approach:
# * If a query has fewer documents, its Tensor will be padded
# appropriately.
# * If a query has more documents, we shuffle its list of
# documents and trim the list down to the prescribed list_size.
_LIST_SIZE=100
# The total number of features per query-document pair.
# We set this number to the number of features in the MSLR-Web30K
# dataset.
_NUM_FEATURES=136
# Parameters to the scoring function.
_BATCH_SIZE=32
_HIDDEN_LAYER_DIMS=["20", "10"]
```
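As a rough intuition for the `pairwise_logistic_loss` chosen above: for every ordered pair of documents in a list where one carries a higher relevance label than the other, the loss penalizes scoring the lower-relevance document above the higher-relevance one, via log(1 + exp(-(s_i - s_j))). The NumPy sketch below is illustrative only; TF-Ranking's actual, vectorized implementation lives in `tensorflow_ranking.losses`:

```python
import numpy as np

def pairwise_logistic_loss(scores, labels):
    """Mean log-loss over all ordered pairs (i, j) with labels[i] > labels[j]."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    total, n_pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:
                total += np.log1p(np.exp(-(scores[i] - scores[j])))
                n_pairs += 1
    return total / n_pairs if n_pairs else 0.0

# A correctly ordered pair incurs less loss than an inverted one:
good = pairwise_logistic_loss([2.0, 0.0], [1, 0])
bad = pairwise_logistic_loss([0.0, 2.0], [1, 0])
# good < bad
```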
### Input Pipeline
The first step is to construct an input pipeline that reads your dataset and produces a `tensorflow.data.Dataset` object. In this example, we will invoke a LibSVM parser that is included in the `tensorflow_ranking.data` module to generate a `Dataset` from a given file.
We parameterize this function by a `path` argument so that the function can be used to read both training and test data files.
```
def input_fn(path):
train_dataset = tf.data.Dataset.from_generator(
tfr.data.libsvm_generator(path, _NUM_FEATURES, _LIST_SIZE),
output_types=(
{str(k): tf.float32 for k in range(1,_NUM_FEATURES+1)},
tf.float32
),
output_shapes=(
{str(k): tf.TensorShape([_LIST_SIZE, 1])
for k in range(1,_NUM_FEATURES+1)},
tf.TensorShape([_LIST_SIZE])
)
)
train_dataset = train_dataset.shuffle(1000).repeat().batch(_BATCH_SIZE)
return train_dataset.make_one_shot_iterator().get_next()
```
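The padding/trimming policy described in the parameter comments earlier (pad short lists with zero vectors, shuffle-and-trim long ones down to `_LIST_SIZE`) can be sketched in plain NumPy as follows. The function name is illustrative; `tfr.data.libsvm_generator` performs this normalization internally:

```python
import numpy as np

def fix_list_size(doc_features, list_size, rng=np.random):
    """Pad with zero rows or shuffle-and-trim so exactly list_size rows remain."""
    docs = np.asarray(doc_features, dtype=float)
    n, dim = docs.shape
    if n < list_size:                      # pad short lists with zero vectors
        pad = np.zeros((list_size - n, dim))
        return np.vstack([docs, pad])
    idx = rng.permutation(n)[:list_size]   # shuffle, then keep list_size docs
    return docs[idx]

short = fix_list_size(np.ones((3, 2)), list_size=5)
long = fix_list_size(np.ones((8, 2)), list_size=5)
# both now have shape (5, 2)
```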
### Scoring Function
Next, we turn to the scoring function which is arguably at the heart of a TF Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.
Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
```
def example_feature_columns():
"""Returns the example feature columns."""
feature_names = [
"%d" % (i + 1) for i in range(0, _NUM_FEATURES)
]
return {
name: tf.feature_column.numeric_column(
name, shape=(1,), default_value=0.0) for name in feature_names
}
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a documents."""
del params
del config
# Define input layer.
example_input = [
tf.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(example_input, 1)
cur_layer = input_layer
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.layers.dense(
cur_layer,
units=layer_width,
activation="tanh")
logits = tf.layers.dense(cur_layer, units=1)
return logits
return _score_fn
```
### Evaluation Metrics
We have provided an implementation of popular Information Retrieval evaluation metrics in the TF Ranking library.
```
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
```
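For intuition, NDCG@k (the metric family registered above) discounts each document's gain by the log of its rank position and normalizes by the best achievable ordering, so a perfect ranking scores 1.0. A pure-Python sketch, assuming the standard exponential-gain formulation; TF-Ranking's own implementation in `tensorflow_ranking.metrics` is the authoritative one:

```python
import math

def dcg_at_k(labels_in_ranked_order, k):
    """Discounted cumulative gain over the top-k ranked positions."""
    return sum((2 ** rel - 1) / math.log2(pos + 2)
               for pos, rel in enumerate(labels_in_ranked_order[:k]))

def ndcg_at_k(labels_in_ranked_order, k):
    """DCG normalized by the DCG of the ideal (sorted) ordering."""
    ideal = dcg_at_k(sorted(labels_in_ranked_order, reverse=True), k)
    return dcg_at_k(labels_in_ranked_order, k) / ideal if ideal > 0 else 0.0

# A perfect ranking scores 1.0; an inverted one scores less:
# ndcg_at_k([2, 1, 0], k=3) == 1.0
# ndcg_at_k([0, 1, 2], k=3) < 1.0
```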
### Putting It All Together
We are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
```
def get_estimator(hparams):
"""Create a ranking estimator.
Args:
hparams: (tf.contrib.training.HParams) a hyperparameters object.
Returns:
tf.learn `Estimator`.
"""
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
return tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.train.get_global_step(),
learning_rate=hparams.learning_rate,
optimizer="Adagrad")
ranking_head = tfr.head.create_ranking_head(
loss_fn=tfr.losses.make_loss_fn(_LOSS),
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
return tf.estimator.Estimator(
model_fn=tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
group_size=1,
transform_fn=None,
ranking_head=ranking_head),
params=hparams)
```
Let us instantiate and initialize the `Estimator` we defined above.
```
hparams = tf.contrib.training.HParams(learning_rate=0.05)
ranker = get_estimator(hparams)
```
Now that we have a correctly initialized `Estimator`, we will train a model using the training data. We encourage you to experiment with different numbers of steps here and below.
```
ranker.train(input_fn=lambda: input_fn(_TRAIN_DATA_PATH), steps=100)
```
Finally, let us evaluate our model on the test set.
```
ranker.evaluate(input_fn=lambda: input_fn(_TEST_DATA_PATH), steps=100)
```
### Visualization
The train and evaluation steps above by default store checkpoints, metrics, and other useful information about your network to a temporary directory on disk. We encourage you to visualize this data using [Tensorboard](http://www.tensorflow.org/guide/summaries_and_tensorboard). In particular, you can launch Tensorboard and point it to where your model data is stored as follows:
First, let's find out the path to the log directory created by the process above.
```
ranker.model_dir
```
Launch Tensorboard from a shell using:
```
$ tensorboard --logdir=<ranker.model_dir output>
```
[](https://github.com/awslabs/aws-data-wrangler)
# 32 - AWS Lake Formation - Glue Governed tables
### This tutorial assumes that your IAM user/role has the required Lake Formation permissions to create and read AWS Glue Governed tables
## Table of Contents
* [1. Read Governed table](#1.-Read-Governed-table)
* [1.1 Read PartiQL query](#1.1-Read-PartiQL-query)
* [1.1.1 Read within transaction](#1.1.1-Read-within-transaction)
* [1.1.2 Read within query as of time](#1.1.2-Read-within-query-as-of-time)
* [1.2 Read full table](#1.2-Read-full-table)
* [2. Write Governed table](#2.-Write-Governed-table)
* [2.1 Create new Governed table](#2.1-Create-new-Governed-table)
* [2.1.1 CSV table](#2.1.1-CSV-table)
* [2.1.2 Parquet table](#2.1.2-Parquet-table)
* [2.2 Overwrite operations](#2.2-Overwrite-operations)
* [2.2.1 Overwrite](#2.2.1-Overwrite)
* [2.2.2 Append](#2.2.2-Append)
* [2.2.3 Create partitioned Governed table](#2.2.3-Create-partitioned-Governed-table)
* [2.2.4 Overwrite partitions](#2.2.4-Overwrite-partitions)
* [3. Multiple read/write operations within a transaction](#3.-Multiple-read/write-operations-within-a-transaction)
## 1. Read Governed table
## 1.1 Read PartiQL query
```
import awswrangler as wr
database = "gov_db" # Assumes a Glue database registered with Lake Formation exists in the account
table = "gov_table" # Assumes a Governed table exists in the account
catalog_id = "111111111111" # AWS Account Id
# Note 1: If a transaction_id is not specified, a new transaction is started
df = wr.lakeformation.read_sql_query(
sql=f"SELECT * FROM {table};",
database=database,
catalog_id=catalog_id
)
```
### 1.1.1 Read within transaction
```
transaction_id = wr.lakeformation.start_transaction(read_only=True)
df = wr.lakeformation.read_sql_query(
sql=f"SELECT * FROM {table};",
database=database,
transaction_id=transaction_id
)
```
### 1.1.2 Read within query as of time
```
import calendar
import time
query_as_of_time = calendar.timegm(time.gmtime())
df = wr.lakeformation.read_sql_query(
sql=f"SELECT * FROM {table} WHERE id=:id; AND name=:name;",
database=database,
query_as_of_time=query_as_of_time,
params={"id": 1, "name": "Ayoub"}
)
```
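The `query_as_of_time` argument above is simply a Unix epoch timestamp (UTC seconds). To query the table state as of an earlier point in time, you can subtract an offset from the current epoch; the helper name and the ten-minute offset below are illustrative:

```python
import calendar
import time

def epoch_seconds_ago(seconds):
    """Unix timestamp `seconds` in the past, computed in UTC."""
    return calendar.timegm(time.gmtime()) - seconds

ten_minutes_ago = epoch_seconds_ago(600)
# e.g. wr.lakeformation.read_sql_query(..., query_as_of_time=ten_minutes_ago)
```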
## 1.2 Read full table
```
df = wr.lakeformation.read_sql_table(
table=table,
database=database
)
```
## 2. Write Governed table
## 2.1 Create new Governed table
### Enter your bucket name:
```
import getpass
bucket = getpass.getpass()
```
If a governed table does not exist, it can be created by passing an S3 `path` argument. Make sure your IAM user/role has enough permissions in the Lake Formation database
### 2.1.1 CSV table
```
import pandas as pd
table = "gov_table_csv"
df=pd.DataFrame({
"col": [1, 2, 3],
"col2": ["A", "A", "B"],
"col3": [None, "test", None]
})
# Note 1: If a transaction_id is not specified, a new transaction is started
# Note 2: When creating a new Governed table, `table_type="GOVERNED"` must be specified. Otherwise the default is to create an EXTERNAL_TABLE
wr.s3.to_csv(
df=df,
path=f"s3://{bucket}/{database}/{table}/", # S3 path
dataset=True,
database=database,
table=table,
table_type="GOVERNED"
)
```
### 2.1.2 Parquet table
```
table = "gov_table_parquet"
df = pd.DataFrame({"c0": [0, None]}, dtype="Int64")
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/{database}/{table}/",
dataset=True,
database=database,
table=table,
table_type="GOVERNED",
description="c0",
parameters={"num_cols": str(len(df.columns)), "num_rows": str(len(df.index))},
columns_comments={"c0": "0"}
)
```
## 2.2 Overwrite operations
### 2.2.1 Overwrite
```
df = pd.DataFrame({"c1": [None, 1, None]}, dtype="Int16")
wr.s3.to_parquet(
df=df,
dataset=True,
mode="overwrite",
database=database,
table=table,
description="c1",
parameters={"num_cols": str(len(df.columns)), "num_rows": str(len(df.index))},
columns_comments={"c1": "1"}
)
```
### 2.2.2 Append
```
df = pd.DataFrame({"c1": [None, 2, None]}, dtype="Int8")
wr.s3.to_parquet(
df=df,
dataset=True,
mode="append",
database=database,
table=table,
description="c1",
parameters={"num_cols": str(len(df.columns)), "num_rows": str(len(df.index) * 2)},
columns_comments={"c1": "1"}
)
```
### 2.2.3 Create partitioned Governed table
```
table = "gov_table_parquet_partitioned"
df = pd.DataFrame({"c0": ["foo", None], "c1": [0, 1]})
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/{database}/{table}/",
dataset=True,
database=database,
table=table,
table_type="GOVERNED",
partition_cols=["c1"],
description="c0+c1",
parameters={"num_cols": "2", "num_rows": "2"},
columns_comments={"c0": "zero", "c1": "one"}
)
```
### 2.2.4 Overwrite partitions
```
df = pd.DataFrame({"c0": [None, None], "c1": [0, 2]})
wr.s3.to_parquet(
df=df,
dataset=True,
mode="overwrite_partitions",
database=database,
table=table,
partition_cols=["c1"],
description="c0+c1",
parameters={"num_cols": "2", "num_rows": "3"},
columns_comments={"c0": "zero", "c1": "one"}
)
```
## 3. Multiple read/write operations within a transaction
```
read_table = "gov_table_parquet"
write_table = "gov_table_multi_parquet"
transaction_id = wr.lakeformation.start_transaction(read_only=False)
df = pd.DataFrame({"c0": [0, None]}, dtype="Int64")
wr.s3.to_parquet(
df=df,
path=f"s3://{bucket}/{database}/{write_table}_1",
dataset=True,
database=database,
table=f"{write_table}_1",
table_type="GOVERNED",
transaction_id=transaction_id,
)
df2 = wr.lakeformation.read_sql_table(
table=read_table,
database=database,
transaction_id=transaction_id,
use_threads=True
)
df3 = pd.DataFrame({"c1": [None, 1, None]}, dtype="Int16")
wr.s3.to_parquet(
    df=df3,
path=f"s3://{bucket}/{database}/{write_table}_2",
dataset=True,
mode="append",
database=database,
table=f"{write_table}_2",
table_type="GOVERNED",
transaction_id=transaction_id,
)
wr.lakeformation.commit_transaction(transaction_id=transaction_id)
```
```
from __future__ import division
import collections
import numpy as np
import matplotlib.pyplot as plt
## NN libs
import keras
from keras import backend as K
from keras import regularizers
from keras.utils import to_categorical
from keras.optimizers import SGD, Adam
from keras.layers import *
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.callbacks import TensorBoard
import config
import os, numpy as np, pandas, sklearn, scipy.signal as signal
import mido
import matplotlib.pyplot as plt
%matplotlib inline
# local libs
import config, models, setup, compression, ncd_evaluation
import midi
import midi.decode
from midi import generators as g
from utils import io, models_io, utils, plot, string
from capsule.layers import Capsule, Length
from capsule.capsulefunctions import squash, softmax, margin_loss
context = setup.init(max_bars=4)
c = context
n = 500 * 1
dim4 = True
multiTrack = True
reduce_dims = midi.ReduceDimsOptions.NONE # GLOBAL
dn = 'drum_midi/'
v = None # float | None
x_train, labels = setup.import_data(context, n, dim4=dim4, reduce_dims=reduce_dims,
dirname=dn, multiTrack=multiTrack, velocity=v, r=True)
genres = [string.extract_labels_from_filename(k) for k in labels]
# genre_dict = setup.build_label_dict(genres)
[(i,k) for i,k in enumerate(labels)]
unique_genres = set(genre[-1] for genre in genres)
len(unique_genres)
x_train.shape
# genres = [string.extract_labels_from_filename(k) for k in labels]
genre_dict = setup.build_label_dict(genres)
input_shape = x_train[0].shape
timesteps = input_shape[0]
notes = input_shape[1]
input_shape
latent_dim = 10
epsilon_std = 1.0
batch_size = 128
epochs = 500
name = 'non-functional_model.h5'
fn = config.model_dir + name
vae, encoder, generator = models.build(input_shape, latent_dim)
vae.load_weights(fn)
# utils.reload(plot)
i,j = 0, 10
m = 40
y = midi.decode.identity(c, vae.predict(x_train[:50]))
plot.single(x_train[i,:m])
plot.single(y[i,:m])
plot.single(x_train[j,:m])
plot.single(y[j,:m])
```
## Style transfer
```
m = 500
x_train_encoded = encoder.predict(x_train[:m], batch_size=batch_size)
x_train_encoded.shape
x_train.shape
fn = config.plots_dir + 'transformations-best_dims.pkl'
best_dims = io.load(fn)
fn = config.plots_dir + 'transformations.pkl'
transformations = io.load(fn)
fn = config.plots_dir + 'min_transformations.pkl'
min_transformations = io.load(fn)
list(transformations.keys())[:]
len(transformations.keys())
# select n random transformations
def choose_transformations(transformations, n=20, randomize=False, scale=2):
t_chosen = collections.defaultdict(dict)
k_a = np.random.choice(list(transformations.keys()), n)
for genre_a in k_a:
genre_b = np.random.choice(list(transformations[genre_a].keys()))
if randomize:
v = np.zeros_like(transformations[genre_a][genre_b])
# v[np.random.randint(0,v.shape[0])] = 0.001
v = np.random.random(v.shape[0])
t_chosen[genre_a][genre_b] = v
else:
v = transformations[genre_a][genre_b] * scale
if max(v) > 0:
v = np.clip(v, 0.2, 0.5)
print(max(v))
else:
v = np.clip(v, -0.5, 0.2)
print(min(v))
t_chosen[genre_a][genre_b] = v
return t_chosen
t_chosen = choose_transformations(min_transformations, randomize=False)
def choose_samples(genre_dict, transformations):
genre_dict2 = collections.defaultdict(dict)
for genre_a in transformations.keys():
for genre_b in transformations[genre_a].keys():
samples = transformations[genre_a][genre_b]
i = np.random.choice(np.arange(samples.shape[0]))
genre_dict2[genre_a][genre_b] = [i]
return genre_dict2
genre_dict2 = choose_samples(genre_dict, t_chosen)
utils.reload(ncd_evaluation)
grid = [0, 0.05, 0.1, 0.25, 0.5, 0.75, 1]
grid = np.linspace(0,1,20)
db, x_result, db2, meta = ncd_evaluation.transform(x_train_encoded, genre_dict2, t_chosen,
generator, grid, amt1=None, amt2=None, v=1)
# io.save(db, config.results_dir + 'applied_transformations-db.pkl')
# io.save(db2, config.results_dir + 'applied_transformations-db2.pkl')
# io.save(x_result, config.results_dir + 'applied_transformations-x_result.pkl')
# io.save(meta, config.results_dir + 'applied_transformations-meta.pkl')
db = io.load(config.results_dir + 'applied_transformations-db.pkl')
db2 = io.load(config.results_dir + 'applied_transformations-db2.pkl')
x_result = io.load(config.results_dir + 'applied_transformations-x_result.pkl')
meta = io.load(config.results_dir + 'applied_transformations-meta.pkl')
x_result[0][0].shape # (transformation, grid, samples, timesteps, notes, 1)
len(x_result), len(x_result[0]), x_result[0][0].shape #transformations, grid, n samples
len(x_result), len(x_result[0])
# genre_dict
# # utils.reload(plot)
# plot.single(x_result[0,1], figsize=(20,40))
# plot.single(x_result[0,1, :40])
# # utils.reload(midi.decode, midi.encode, midi)
# utils.reload(plot)
# c = context
# k = list(db.keys())[2]
# _,keys,i = utils.get(db[k], recursion_depth=0, i=0)
# x = midi.decode.identity(c, x_result[i])
# print(k, keys)
# print(x.shape)
# # utils.reload(plot)
# for i in range(x.shape[0]):
# plot.single(x[i,:80], figsize=(12, 8)) # 8, 12
# utils.reload(plot)
# # plot.single(x[0,:40], figsize=(8, 12))
# plot.multi(x, 80, figsize=(30,20), v=1)
io.reset_midis_dir()
# for i in range(5):
for j,d in enumerate(genre_dict2.values()):
for i_list in d.values():
i = i_list[0]
mid = midi.decode.track(context, x_train[i])
io.export_midifile(mid, config.export_dir + 'test-original-%i'%i, name='test-original-%i'%i)
mid = midi.decode.track(context, x_result[j][0][0])
io.export_midifile(mid, config.export_dir + 'test-original-copy-%i'%i, name='test-original-copy-%i'%i)
# plot.single(x_train[i,:80])
# plot.single(x_result[j][0][0][:80])
utils.reload(midi, io)
def save_sample_grid(x, i, grid_i, grid, v=0):
value = grid[grid_i]
name = 'incr-transform'
suffix = name + ('{id-%i-%i-%s}'%(i, grid_i, str(round(value,2))))
fn = config.export_dir + suffix
if v: print(fn)
mid = midi.decode.track(context, x, v=0)
io.export_midifile(mid, fn, name=suffix)
def save_sample(x, i, v=0):
name = 'incr-transform'
suffix = name + ('{id-%i}'%i)
fn = config.export_dir + suffix
if v: print(fn)
mid = midi.decode.track(context, x, v=0)
io.export_midifile(mid, fn, name=suffix)
# genre_dict2
db2
# t_chosen # values lower than 0.01
utils.reload(midi, midi.pitches, midi.decode)
io.reset_midis_dir()
for i, grid_samples in enumerate(x_result):
# for i, k in enumerate(keys):
# transformation = x_result[k]
seq = [grid_samples[i][0] for i in range(len(grid_samples))]
# for i in range(len(grid_samples)):
# plot.single(grid_samples[i][0][:40])
seq = midi.concatenate(seq)
save_sample(seq, i)
# a = grid_samples[0] - grid_samples[-1]
# plot.single(a[0][:40], figsize=(3,3))
# print(np.sum(a), seq.shape)
# for grid_i, grid_sample in enumerate(grid_samples):
# for sample in grid_sample:
# save_sample(sample, i, grid_i, grid)
dn = config.results_dir + 'plots/'
dn
utils.reload(plot, io, midi.decode, midi.encode)
figsize = (30,10)
figsize=(15,10)
# plot.single(x_train[0], figsize=figsize, fn=dn+'2')
def plot_(i=0, grid=[0,2,3,4], prefix='name'):
# x = np.array([x_result[i][grid_i][0] for grid_i in grid])
# plot.multi(x, fn=dn+'a')
for grid_i in grid:
x = x_result[3][grid_i][0]
x = midi.decode.identity(c,x)
print(x.shape)
print(i,grid_i)
fn = dn + prefix + '-%i-%i'%(i,grid_i)
# fn = None
x = np.array(x[:80])
plot.single(x, figsize=figsize, fn=fn)
plot_(3, prefix='country-jazz')
plot_(5, grid=[0,-3], prefix='-')
# x_result[0][0][0].shape
# utils.reload(plot, midi.decode)
for i in [0, 10, -1]:
plot.single(x_result[0][i][0], figsize=(20,20), transform=context)
labels[0]
# utils.reload(midi, midi.encode, midi.decode, plot)
# # x = vae.predict(x_train[:9])
# x = x_train[0]
# x_ = midi.decode.identity(context,x)
# for xx in [x, x_]:
# print(xx.shape)
# plot.single(xx[:40])
```
```
#!/usr/bin/python
import sys
import pickle
import pandas as pd
import numpy as np
from functools import partial
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import SimpleImputer#, \
# IterativeImputer # requires sklearn.experimental.enable_iterative_imputer
# from sklearn.feature_selection import SelectPercentile, SelectFromModel, f_classif, mutual_info_classif, chi2,\
# SelectFpr, SelectFdr, RFECV
from sklearn.feature_selection import SelectPercentile, f_classif, mutual_info_classif, chi2
# from sklearn.decomposition import FastICA, IncrementalPCA, KernelPCA, PCA, TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support
### My imports
sys.path.append('tools/')
from dos2unix import crlf_to_lf # Borrowed and modified from multiple sources.
from train_test import run_skl, get_base_perfs, search_em_all
from feature_engineering import set_all_ratios, quant_flag_all, out_flag_all, flag_signs, add_k_means_n
### Udacity imports (deprecated)
# from feature_format import featureFormat, targetFeatureSplit
# from tester import dump_classifier_and_data
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
##########################################################################################
### Udacity comments are in [].
### [Load the dictionary containing the dataset]
### Make the dict a dataframe because they're easier to work with.
data_df = None #pd.DataFrame()
fp = crlf_to_lf(f_in_path='data/final_project_dataset.pkl')
with open(fp, 'rb') as data_file:
data_df = pd.DataFrame(pickle.load(data_file)).T
##########################################################################################
### [Task 1: Select what features you'll use.]
### Task 1: Clean up and select what features and subsets *not* to use.
### (Further feature selection will happen after feature engineering.)
print('Cleaning data.')
### Drop email_address because it's a signature.
data_df.drop(columns='email_address', inplace=True)
### Drop the TOTAL row.
data_df.drop(labels=['TOTAL', 'THE TRAVEL AGENCY IN THE PARK'], inplace=True)
### Handle missing values here.
### Replacing 'NaN' with None had a weird result in which values from some
### rows were copied into the missing values of neighboring rows. No idea why.
### Using np.nan did not have that result as far as I can tell.
### But it is a float missing value and thus casts the column as float,
### or as object when other values are not floats.
data_df.replace(to_replace='NaN', value=np.nan, inplace=True)
### [features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".]
### (if using featureFormat(), which I don't).
### All units are in USD.
fin_features = ['salary', 'bonus', 'long_term_incentive', 'deferred_income', 'deferral_payments',
'loan_advances', 'other', 'expenses', 'director_fees', 'total_payments',
'exercised_stock_options', 'restricted_stock', 'restricted_stock_deferred', 'total_stock_value']
pay_features = fin_features[:10]
stock_features = fin_features[10:]
### Units are numbers of email messages.
email_features = ['to_messages', 'from_poi_to_this_person', 'from_messages', 'from_this_person_to_poi',
'shared_receipt_with_poi']
### Boolean, represented as integer.
POI_label = ['poi']
### The first feature must be "poi" if using featureFormat().
features_list = POI_label + fin_features + email_features
### Imputation recasts as float, but as object if left as bool, so set it to int for now.
data_df['poi'] = data_df['poi'].astype(dtype=int)
### Belfer's financial data is shifted one column to the right.
### Shift it one to the left, financial data only.
### Make total_stock_value np.nan for consistency until imputation, but could be 0.
### May remove this row for so many NaNs, but fix it now anyway.
data_df.loc[data_df.index == 'BELFER ROBERT', fin_features] \
= data_df.loc[data_df.index == 'BELFER ROBERT', fin_features].shift(periods=-1, axis='columns',
fill_value=np.nan)
### Bhatnagar's financial data is shifted one to the left.
### Shift it one to the right, financial data only.
### Make salary np.nan.
data_df.loc[data_df.index == 'BHATNAGAR SANJAY', fin_features] \
= data_df.loc[data_df.index == 'BHATNAGAR SANJAY', fin_features].shift(periods=1, axis='columns',
fill_value=np.nan)
### Set totals to sum of values where any values are not NaN.
### i.e. don't make 0 totals NaN, even though some NaN values may be included.
### Makes these rows consistent with other rows that include NaNs and numbers yet have a nonNaN total.
data_df.loc[~(data_df[pay_features].isna().all(axis='columns')), 'total_payments'] \
= data_df[pay_features[:-1]].sum(axis='columns')
data_df.loc[~(data_df[stock_features].isna().all(axis='columns')), 'total_stock_value'] \
= data_df[stock_features[:-1]].sum(axis='columns')
### Add one to Glisan's to_message to at least equal shared_receipt_with_poi.
data_df.loc['GLISAN JR BEN F', 'to_messages'] = 874
### Drop features that are too sparse.
drop_feats_lst = ['loan_advances']
data_df.drop(columns=drop_feats_lst, inplace=True)
fin_features = [feat for feat in fin_features if feat not in drop_feats_lst]
pay_features = [feat for feat in pay_features if feat not in drop_feats_lst]
stock_features = [feat for feat in stock_features if feat not in drop_feats_lst]
email_features = [feat for feat in email_features if feat not in drop_feats_lst]
features_list = [feat for feat in features_list if feat not in drop_feats_lst]
### Removed 'email' as signature upon loading.
### Drop persons who have NaN payment totals or NaN stock totals or NaN to_messages or NaN from_messages,
### and are missing 70% of their values.
### (Already made sure that all totals are not NaN if they have subvalues.)
nan_limit = 0.7 * len(data_df.columns)
sparse_records_idx_arr = \
data_df.loc[data_df['total_payments'].isna() \
| data_df['total_stock_value'].isna() \
| data_df['to_messages'].isna() \
| data_df['from_messages'].isna()]\
.loc[data_df.isna().sum(axis='columns') > nan_limit]\
.index.values
data_df.drop(labels=sparse_records_idx_arr, inplace=True)
### This leaves 123 records over 19 features.
### Make a quick baseline model for comparison.
### Alphabetize index before split for Udacity compatibility because that's what they'll do.
### I knew this, but missed it until the end. I'd decided from the start not to use their deprecated
### scripts based on dictionaries, specifically feature_format, and wrote my own using pandas.
### The rest of my cleaning and engineering are based on a different split.
### My mistake. Rookie lesson learned: pay closer attention to what the legacy code does, especially
### expected input/output structures.
data_df.sort_index(inplace=True)
### Split now for baseline model, but also before further processing, outlier removal, scaling, engineering,
### or else test set info leaks into training set.
### Even imputation could if using multivariate imputation or median.
### Decisions on how to treat the data should not be influenced by test set either.
X_train, X_test, y_train, y_test \
= train_test_split(data_df[features_list[1:]], data_df[['poi']], test_size=.3, random_state=42)
### Some algorithms want 1D y data.
y_train_1d = np.ravel(y_train.astype(bool))
y_test_1d = np.ravel(y_test.astype(bool))
### Split train set again for a baseline model that won't touch the final test set.
X_train_base, X_test_base, y_train_base, y_test_base \
= train_test_split(X_train, y_train, test_size=.3, random_state=42)
y_train_1d_base = np.ravel(y_train_base.astype(bool))
y_test_1d_base = np.ravel(y_test_base.astype(bool))
### Impute with 0.
imp_0 = SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=0, copy=False)
imp_0 = imp_0.fit(X=X_train_base)
X_train_base_imp0 = pd.DataFrame(data=imp_0.transform(X=X_train_base), columns=X_train_base.columns,
index=X_train_base.index)
X_test_base_imp0 = pd.DataFrame(data=imp_0.transform(X=X_test_base), columns=X_test_base.columns,
index=X_test_base.index)
### Column order for the metrics dataframe, in case you want to save and inspect it.
ordered_cols_lst = ['nonPOI_prec', 'POI_prec', 'nonPOI_rec', 'POI_rec', 'nonPOI_f', 'POI_f', 'nonPOI_sup',
'POI_sup', 't_neg', 'f_neg', 'f_pos', 't_pos', 'train_t', 'predict_t', 'model']
base_perf_df = pd.DataFrame(columns=ordered_cols_lst)
clf_dict = {'dt_clf': DecisionTreeClassifier, 'rf_clf': RandomForestClassifier, 'ab_clf': AdaBoostClassifier,
'kn_clf': KNeighborsClassifier, 'gnb_clf': GaussianNB, 'svc_clf': svm.SVC}
print('\nBaseline model performance metrics, no engineered features or tuning, imputed NaNs with 0, using a train-test split of the training set:\n')
for key, method in clf_dict.items():
_, _, _, _, perf_sr = run_skl(method=method, X_train=X_train_base_imp0,
y_train=y_train_1d_base,
X_test=X_test_base_imp0,
y_test=y_test_1d_base,
perf_series=key)
base_perf_df = base_perf_df.append(perf_sr)
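### Hedged aside, not part of the original pipeline: DataFrame.append is
### deprecated in newer pandas, and the accumulation pattern above can be
### rewritten with pd.concat. Demo names (demo_rows, demo_sr, demo_perf_df)
### are illustrative only.
import pandas as pd
demo_rows = []
for demo_key in ['dt_clf', 'rf_clf']:
    ### Each per-model metrics Series becomes a one-row frame named after the model.
    demo_sr = pd.Series({'POI_rec': 0.5, 'model': demo_key}, name=demo_key)
    demo_rows.append(demo_sr.to_frame().T)
demo_perf_df = pd.concat(objs=demo_rows)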
### Save a full copy of basic split sets for baseline comparison once final model is built.
X_train_base = X_train.copy()
X_train_base = pd.DataFrame(data=imp_0.transform(X=X_train_base), columns=X_train_base.columns,
index=X_train_base.index)
X_test_base = X_test.copy()
X_test_base = pd.DataFrame(data=imp_0.transform(X=X_test_base), columns=X_test_base.columns,
index=X_test_base.index)
y_train_base = np.ravel(y_train.astype(bool))
y_test_base = np.ravel(y_test.astype(bool))
##########################################################################################
### Task 2: Remove/handle outliers
print('Handling outliers.')
### Dropped ['TOTAL', 'THE TRAVEL AGENCY IN THE PARK'] row upon loading.
### Drop features that are too sparse.
### Drop 'other' because it's ill-defined and seems overly represented within important features.
### The nebulous nature of it seems like a good fit for fraud, but high gross 'other' amounts are more associated
### with nonPOIs than POIs if anything.
drop_feats_lst = ['director_fees', 'restricted_stock_deferred', 'other']
X_train.drop(columns=drop_feats_lst, inplace=True)
X_test.drop(columns=drop_feats_lst, inplace=True)
data_df.drop(columns=drop_feats_lst, inplace=True)
fin_features = [feat for feat in fin_features if feat not in drop_feats_lst]
pay_features = [feat for feat in pay_features if feat not in drop_feats_lst]
stock_features = [feat for feat in stock_features if feat not in drop_feats_lst]
email_features = [feat for feat in email_features if feat not in drop_feats_lst]
features_list = [feat for feat in features_list if feat not in drop_feats_lst]
del drop_feats_lst
### Don't drop records now because it will mess up the split for Udacity.
### Could drop earlier and resplit, but I've already done a lot of EDA behind the scenes.
### NaN Powers's financials instead.
X_train.loc[['POWERS WILLIAM'], pay_features] = np.nan
data_df.loc[['POWERS WILLIAM'], pay_features] = np.nan
### Bivariate linear regression of the ratios between to/from/shared with POIs and
### total to and from messages revealed that top coding to_messages and from_messages
### may slightly aid nonPOI precision.
### Only top coding the training set in order to bias the model,
### because I am less concerned with accuracy than I am with POI recall,
### and by extension, nonPOI precision.
X_train['to_messages'] = X_train['to_messages'].apply(lambda x: x if x < 12000 or np.isnan(x) else 12000)
X_train['from_messages'] = X_train['from_messages'].apply(lambda x: x if x < 8000 or np.isnan(x) else 8000)
data_df.loc[X_train.index, 'to_messages'] \
    = data_df.loc[X_train.index, 'to_messages'].apply(lambda x: x if x < 12000 or np.isnan(x) else 12000)
data_df.loc[X_train.index, 'from_messages'] \
    = data_df.loc[X_train.index, 'from_messages'].apply(lambda x: x if x < 8000 or np.isnan(x) else 8000)
### Not sure whether top coding these will really help or hinder, if anything at all.
### But, it appears to potentially aid POI recall in some cases
### when comparing payments to totals, and it's more in line with best practices.
### Only really affects Frevert.
top = X_train['total_payments'].dropna().sort_values()[-2]
X_train['total_payments'] = X_train['total_payments'].apply(lambda x : x if x < top or np.isnan(x) else top)
data_df.loc[X_train.index, 'total_payments'] \
    = data_df.loc[X_train.index, 'total_payments'].apply(lambda x: x if x < top or np.isnan(x) else top)
top = X_train['long_term_incentive'].dropna().sort_values()[-2]
X_train['long_term_incentive'] = \
X_train['long_term_incentive'].apply(lambda x : x if x < top or np.isnan(x) else top)
data_df.loc[X_train.index, 'long_term_incentive'] \
    = data_df.loc[X_train.index, 'long_term_incentive'].apply(lambda x: x if x < top or np.isnan(x) else top)
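### Hedged aside: pandas Series.clip achieves the same top coding in one call
### and leaves NaN untouched; shown on throwaway data (demo_sr is illustrative only).
import numpy as np
import pandas as pd
demo_sr = pd.Series([1.0, 5.0, 100.0, np.nan])
demo_capped = demo_sr.clip(upper=10.0)  ### values above 10 become 10; NaN stays NaN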
### Same story as Powers, NaN all of Belfer instead of simply dropping.
X_train.loc['BELFER ROBERT'] = np.nan
# belfers_poi = data_df.loc['BELFER ROBERT']['poi']
data_df.loc['BELFER ROBERT', features_list[1:]] = np.nan
# data_df.loc['BELFER ROBERT']['poi'] = belfers_poi
### After looking at the distributions of feature ratios, more top/bottom coding. ###
### NaN Bannantine's salary, and bottom code salary.
X_train.loc['BANNANTINE JAMES M', 'salary'] = np.nan
data_df.loc['BANNANTINE JAMES M', 'salary'] = np.nan
bottom = X_train['salary'].dropna().sort_values(ascending=False)[-2]
X_train['salary'] = X_train['salary'].apply(lambda x : x if x > bottom or np.isnan(x) else bottom)
data_df.loc[X_train.index, 'salary'] \
    = data_df.loc[X_train.index, 'salary'].apply(lambda x: x if x > bottom or np.isnan(x) else bottom)
### These two only have one, very low payment value.
# X_train.loc[['HAYES ROBERT E', 'HAUG DAVID L'], pay_features] = np.nan
# data_df.loc[['HAYES ROBERT E', 'HAUG DAVID L'], pay_features] = np.nan
X_train.loc[['HAYES ROBERT E'], pay_features] = np.nan
data_df.loc[['HAYES ROBERT E'], pay_features] = np.nan
### Top code deferred_income.
top = X_train['deferred_income'].dropna().sort_values(ascending=True)[-3]
X_train['deferred_income'] = X_train['deferred_income'].apply(lambda x : x if x < top or np.isnan(x) else top)
data_df.loc[X_train.index, 'deferred_income'] = \
    data_df.loc[X_train.index, 'deferred_income'].apply(lambda x: x if x < top or np.isnan(x) else top)
del top
del bottom
##########################################################################################
### Task 3: Create new feature(s)
print('Engineering features.')
### Start with all ratios, within respective subspaces (fin:fin, e:e).
### Add financial ratios within subspaces to data sets.
pay_feats_divby_df = set_all_ratios(df=X_train, denoms=pay_features, numers=pay_features)
stock_feats_divby_df = set_all_ratios(df=X_train, denoms=stock_features, numers=stock_features)
### Only plausible email ratios (all reciprocals still, to get the 0s to infs):
to_lst = ['to_messages', 'from_poi_to_this_person', 'shared_receipt_with_poi']
from_lst = ['from_messages', 'from_this_person_to_poi']
email_to_divby_df = set_all_ratios(df=X_train, denoms=to_lst, numers=to_lst)
email_from_divby_df = set_all_ratios(df=X_train, denoms=from_lst, numers=from_lst)
X_train = pd.concat(objs=[X_train, pay_feats_divby_df, stock_feats_divby_df, email_to_divby_df,
email_from_divby_df], axis=1)
### Do for test set.
pay_feats_divby_df = set_all_ratios(df=X_test, denoms=pay_features, numers=pay_features)
stock_feats_divby_df = set_all_ratios(df=X_test, denoms=stock_features, numers=stock_features)
email_to_divby_df = set_all_ratios(df=X_test, denoms=to_lst, numers=to_lst)
email_from_divby_df = set_all_ratios(df=X_test, denoms=from_lst, numers=from_lst)
X_test = pd.concat(objs=[X_test, pay_feats_divby_df, stock_feats_divby_df, email_to_divby_df,
email_from_divby_df], axis=1)
### Do for full set.
pay_feats_divby_df = set_all_ratios(df=data_df, denoms=pay_features, numers=pay_features)
stock_feats_divby_df = set_all_ratios(df=data_df, denoms=stock_features, numers=stock_features)
email_to_divby_df = set_all_ratios(df=data_df, denoms=to_lst, numers=to_lst)
email_from_divby_df = set_all_ratios(df=data_df, denoms=from_lst, numers=from_lst)
data_df = pd.concat(objs=[data_df, pay_feats_divby_df, stock_feats_divby_df, email_to_divby_df,
email_from_divby_df], axis=1)
del to_lst
del from_lst
### Set all np.inf to np.nan.
set_inf = lambda col: col.apply(func=(lambda x: np.nan if np.isinf(x) else x))
X_train = X_train.apply(func=set_inf)
X_test = X_test.apply(func=set_inf)
data_df = data_df.apply(func=set_inf)
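### Hedged aside: DataFrame.replace collapses the inf -> NaN sweep above into
### a single call; demo_inf_df is illustrative only.
import numpy as np
import pandas as pd
demo_inf_df = pd.DataFrame({'ratio': [1.0, np.inf, -np.inf]})
demo_inf_df = demo_inf_df.replace([np.inf, -np.inf], np.nan)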
### Remove all features containing less than 30% training observations.
drop_lst = list(X_train.count().loc[X_train.count() < .3 * len(X_train.index)].index)
X_train.drop(columns=drop_lst, inplace=True)
X_test.drop(columns=drop_lst, inplace=True)
data_df.drop(columns=drop_lst, inplace=True)
pay_feats_divby_lst = [feat for feat in list(pay_feats_divby_df.columns) if not feat in drop_lst]
stock_feats_divby_lst = [feat for feat in list(stock_feats_divby_df.columns) if not feat in drop_lst]
email_feats_divby_lst = [feat for feat in list(email_to_divby_df.columns) if not feat in drop_lst] \
+ [feat for feat in list(email_from_divby_df.columns) if not feat in drop_lst]
fin_features = [feat for feat in fin_features if feat not in drop_lst] + pay_feats_divby_lst \
+ stock_feats_divby_lst
pay_features = [feat for feat in pay_features if feat not in drop_lst]
stock_features = [feat for feat in stock_features if feat not in drop_lst]
email_features = [feat for feat in email_features if feat not in drop_lst] + email_feats_divby_lst
features_list = [feat for feat in features_list if feat not in drop_lst] + pay_feats_divby_lst \
+ stock_feats_divby_lst + email_feats_divby_lst
del drop_lst
# ### Create features that flag membership in various quantiles, outliership, and x > 0.
# ### Use multiple quantiles: quartiles, quintiles, and deciles.
# ### Retain np.nans.
# to_flag_lst = fin_features + email_features
# ### Could write a function, but I'll just paste and edit.
# ### Flag train set.
# fin_quant_flags_df = quant_flag_all(df=X_train[fin_features], quant_df=X_train[fin_features])
# email_quant_flags_df = quant_flag_all(df=X_train[email_features], quant_df=X_train[email_features])
# fin_out_flags_df = out_flag_all(df=X_train[fin_features], quant_df=X_train[fin_features])
# email_out_flags_df = out_flag_all(df=X_train[email_features], quant_df=X_train[email_features])
# sign_flags_df = flag_signs(df=X_train[to_flag_lst])
# X_train = pd.concat(objs=[X_train, fin_quant_flags_df, email_quant_flags_df, fin_out_flags_df,
# email_out_flags_df, sign_flags_df], axis=1)
# ### Flag test set.
# fin_quant_flags_df = quant_flag_all(df=X_test[fin_features], quant_df=X_train[fin_features])
# email_quant_flags_df = quant_flag_all(df=X_test[email_features], quant_df=X_train[email_features])
# fin_out_flags_df = out_flag_all(df=X_test[fin_features], quant_df=X_train[fin_features])
# email_out_flags_df = out_flag_all(df=X_test[email_features], quant_df=X_train[email_features])
# sign_flags_df = flag_signs(df=X_test[to_flag_lst])
# X_test = pd.concat(objs=[X_test, fin_quant_flags_df, email_quant_flags_df, fin_out_flags_df,
# email_out_flags_df, sign_flags_df], axis=1)
# ### Flag whole set.
# fin_quant_flags_df = quant_flag_all(df=data_df[fin_features], quant_df=X_train[fin_features])
# email_quant_flags_df = quant_flag_all(df=data_df[email_features], quant_df=X_train[email_features])
# fin_out_flags_df = out_flag_all(df=data_df[fin_features], quant_df=X_train[fin_features])
# email_out_flags_df = out_flag_all(df=data_df[email_features], quant_df=X_train[email_features])
# sign_flags_df = flag_signs(df=data_df[to_flag_lst])
# data_df = pd.concat(objs=[data_df, fin_quant_flags_df, email_quant_flags_df, fin_out_flags_df,
# email_out_flags_df, sign_flags_df], axis=1)
# ### Create and update feature lists.
# fin_quant_flags_lst = list(fin_quant_flags_df.columns)
# email_quant_flags_lst = list(email_quant_flags_df.columns)
# quant_flags_lst = fin_quant_flags_lst + email_quant_flags_lst
# fin_out_flags_lst = list(fin_out_flags_df.columns)
# email_out_flags_lst = list(email_out_flags_df.columns)
# out_flags_lst = fin_out_flags_lst + email_out_flags_lst
# fin_features += fin_quant_flags_lst + fin_out_flags_lst
# email_features += email_quant_flags_lst + email_out_flags_lst
# sign_flags_lst = list(sign_flags_df.columns)
# features_list = features_list + quant_flags_lst + out_flags_lst + sign_flags_lst
# del to_flag_lst
# del fin_quant_flags_df
# del email_quant_flags_df
# del fin_out_flags_df
# del email_out_flags_df
# del sign_flags_df
### Scale features.
### Would be ideal to scale in the pipeline, but I initially experimented with iterative imputation
### and feeding bools into sklearn. Not worth rewriting for this project.
### Just do min-max on floats, not bools (some are objects for now because np.nan)
float_feats_lst = fin_features + email_features
# bool_feats_lst = sign_flags_lst
scaler = MinMaxScaler()
train_floats = pd.DataFrame(data=scaler.fit_transform(X=X_train[float_feats_lst]),
columns=float_feats_lst, index=X_train.index)
X_train_scaled = pd.concat(objs=[train_floats], axis=1)#, X_train[bool_feats_lst]], axis=1)
test_floats = pd.DataFrame(data=scaler.transform(X=X_test[float_feats_lst]),
columns=float_feats_lst,index=X_test.index)
X_test_scaled = pd.concat(objs=[test_floats], axis=1)#, X_test[bool_feats_lst]], axis=1)
all_floats = pd.DataFrame(data=scaler.transform(X=data_df[float_feats_lst]),
columns=float_feats_lst, index=data_df.index)
data_df_scaled = pd.concat(objs=[data_df['poi'], all_floats], axis=1)#, data_df[bool_feats_lst]], axis=1)
del float_feats_lst
del scaler
del train_floats
del test_floats
del all_floats
del X_train
del X_test
del data_df
### Impute missing values:
### Financial features to 0, email features to median, and bools to mode.
### Restore bools to bool (from object because np.nan)
imp0 = SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=0)
imp_med = SimpleImputer(missing_values=np.nan, strategy='median')
imp_mod = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
### Financial features to 0.
fin_train_df = pd.DataFrame(data=imp0.fit_transform(X=X_train_scaled[fin_features]),
columns=fin_features, index=X_train_scaled.index)
fin_test_df = pd.DataFrame(data=imp0.transform(X=X_test_scaled[fin_features]),
columns=fin_features, index=X_test_scaled.index)
fin_all_df = pd.DataFrame(data=imp0.transform(X=data_df_scaled[fin_features]),
columns=fin_features, index=data_df_scaled.index)
### email features to median.
email_train_df = pd.DataFrame(data=imp_med.fit_transform(X=X_train_scaled[email_features]),
columns=email_features, index=X_train_scaled.index)
email_test_df = pd.DataFrame(data=imp_med.transform(X=X_test_scaled[email_features]),
columns=email_features, index=X_test_scaled.index)
email_all_df = pd.DataFrame(data=imp_med.transform(X=data_df_scaled[email_features]),
columns=email_features, index=data_df_scaled.index)
# ### Bools to mode.
# ### Restore bools to bool (from object because np.nan)
# bool_train_df = (pd.DataFrame(data=imp_mod.fit_transform(X=X_train_scaled[bool_feats_lst]),
# columns=bool_feats_lst, index=X_train_scaled.index)).astype(bool)
# bool_test_df = pd.DataFrame(data=imp_mod.transform(X=X_test_scaled[bool_feats_lst]),
# columns=bool_feats_lst, index=X_test_scaled.index).astype(bool)
# bool_all_df = pd.DataFrame(data=imp_mod.transform(X=data_df_scaled[bool_feats_lst]),
# columns=bool_feats_lst, index=data_df_scaled.index).astype(bool)
### Concat
X_train_scaled_imp = pd.concat(objs=[fin_train_df, email_train_df], axis=1)#, bool_train_df], axis=1)
X_test_scaled_imp = pd.concat(objs=[fin_test_df, email_test_df], axis=1)#, bool_test_df], axis=1)
data_df_scaled_imp = pd.concat(objs=[data_df_scaled['poi'], fin_all_df, email_all_df], axis=1)#, bool_all_df], axis=1)
del fin_train_df
del email_train_df
# del bool_train_df
del fin_test_df
del email_test_df
# del bool_test_df
del fin_all_df
del email_all_df
# del bool_all_df
# del bool_feats_lst
del X_train_scaled
del X_test_scaled
del data_df_scaled
# ### sklearn predictions as features
# # 1) Kmeans cluster.
# train_cluster_subspace, test_cluster_subspace \
# = add_k_means_n(X_train=X_train_scaled_imp, X_test=X_test_scaled_imp)
# X_train_scaled_imp_k = pd.concat(objs=[X_train_scaled_imp, train_cluster_subspace], axis=1)
# X_test_scaled_imp_k = pd.concat(objs=[X_test_scaled_imp, test_cluster_subspace], axis=1)
# train_cluster_subspace, test_cluster_subspace \
# = add_k_means_n(X_train=X_train_scaled_imp, X_test=data_df_scaled_imp[features_list[1:]])
# data_df_scaled_imp_k = pd.concat(objs=[data_df_scaled_imp, test_cluster_subspace], axis=1)
# k_means_feats_lst = list(train_cluster_subspace.columns)
# features_list += k_means_feats_lst
# del train_cluster_subspace
# del test_cluster_subspace
# del X_train_scaled_imp
# del X_test_scaled_imp
# del data_df_scaled_imp
################################################################################################
### [Task 4: Try a variety of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html]
# # [Provided to give you a starting point. Try a variety of classifiers.]
# from sklearn.naive_bayes import GaussianNB
# clf = GaussianNB()
### Construct baseline performance with all features before tuning/selection.
print('\nBaseline models using all engineered features, mixed imputation, train-test split of train set:\n')
### Split train set again for a baseline model that won't touch the final test set.
X_train_base, X_test_base, y_train_base, y_test_base \
= train_test_split(X_train_scaled_imp, y_train, test_size=.3, random_state=42)
# = train_test_split(X_train_scaled_imp_k, y_train, test_size=.3, random_state=42)
y_train_1d_base = np.ravel(y_train_base.astype(bool))
y_test_1d_base = np.ravel(y_test_base.astype(bool))
### Save metrics as a dataframe if you want to save the object and inspect it later.
base_perf_engineered_df = pd.DataFrame(columns=ordered_cols_lst)
base_perfs_dict = {'base_perf_engineered': base_perf_engineered_df}
imp_sets_dict = {'base_perf_engineered': [X_train_base, X_test_base]}
### Modifies base_perfs_dict in place, since dict.copy is only shallow.
get_base_perfs(base_perfs_dict=base_perfs_dict, imp_sets_dict=imp_sets_dict, clf_dict=clf_dict, y_train=y_train_1d_base,
y_test=y_test_1d_base)
base_perfs_dict['first_base'] = base_perf_df
################################################################################################
### [Task 5: Tune your classifier to achieve better than .3 precision and recall
### using our testing script. Check the tester.py script in the final project
### folder for details on the evaluation method, especially the test_classifier
### function. Because of the small size of the dataset, the script uses
### stratified shuffle split cross validation. For more info:
### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html]
### Because the proliferation of features has led to overfit
### (see gridsearch notebooks in the supplemental material folder),
### remove quantile flags, outlier flags, sign flags, and cluster flags,
### leaving the original base features (that were not removed) and the ratio features.
drop_lst = []  # quant_flags_lst + out_flags_lst + sign_flags_lst + k_means_feats_lst
keep_lst = [feat for feat in features_list[1:] if feat not in drop_lst]
# X_train_trimmed = X_train_scaled_imp_k[keep_lst]
# X_test_trimmed = X_test_scaled_imp_k[keep_lst]
# data_df_trimmed = data_df_scaled_imp_k[['poi'] + keep_lst]
X_train_trimmed = X_train_scaled_imp[keep_lst]
X_test_trimmed = X_test_scaled_imp[keep_lst]
data_df_trimmed = data_df_scaled_imp[['poi'] + keep_lst]
print('Tuning model.')
### GridSearchCV inputs:
n_jobs = -1
### Callables to pass into parameter grid:
mutual_info_classif_partial = partial(mutual_info_classif, random_state=42)
DecisionTreeClassifier_partial = partial(DecisionTreeClassifier, random_state=42)
RandomForestClassifier_partial = partial(RandomForestClassifier, random_state=42, n_jobs=n_jobs)
AdaBoostClassifier_partial = partial(AdaBoostClassifier, random_state=42)
svm_SVC_partial = partial(svm.SVC, random_state=42)
KNeighborsClassifier_partial = partial(KNeighborsClassifier, n_jobs=n_jobs)
### Would be ideal to scale in the pipeline, but I initially experimented with iterative imputation
### and feeding bools into sklearn, and so scaled first. Not worth rewriting for this project.
selectors = {
'sel_per': {
'sel': SelectPercentile(),
'params': {
'sel_per__score_func': [f_classif, chi2, mutual_info_classif_partial],
'sel_per__percentile': [2, 4, 6, 8, 10, 12, 14]
}
}
}
decomps = {
'empty' : None
# 'fica': {
# 'dec': FastICA(),
# 'params': {
# 'fica__algorithm': ['parallel', 'deflation'],
# 'fica__fun': ['logcosh', 'exp', 'cube'],
# 'fica__random_state': [42]
# }
# },
# 'ipca': {
# 'dec': IncrementalPCA(),
# 'params': {
# ### defaults
# }
# },
# 'kpca': {
# 'dec': KernelPCA(),
# 'params': {
# 'kpca__kernel': ['linear', 'poly', 'rbf', 'sigmoid', 'cosine',
# 'precomputed'],
# 'kpca__random_state': [42],
# 'kpca__n_jobs': [n_jobs]
# }
# },
### PCA kept throwing an error claiming the data contained NaNs, infs, or
### dtypes that were too large, even though no NaNs, infs, or wrong types
### showed up when I replicated the sklearn (and NumPy) condition checks that
### raise that error. Tracing (see the gridsearch notebook) suggests PCA's
### get_precision method (or a related method) raised it from within a NumPy
### routine, and that check does not involve dtype size.
### It may be a problem with the transformed data handed off by
### SelectPercentile, but I'm done messing around with it. Just need to finish. Skip PCA.
# 'pca': {
# 'dec': PCA(),
# 'params': {
# 'pca__random_state': [42]
# }
# },
# 'tsvd': {
# 'dec': TruncatedSVD(),
# 'params': {
# 'tsvd__n_components': [2, 4, 8, 16, 32, 64, 128],
# 'tsvd__algorithm': ['arpack', 'randomized'],
# 'tsvd__random_state': [42]
# }
# }
}
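### Hedged aside on the PCA error described above: a minimal pre-check that
### mirrors the finiteness condition sklearn enforces on its inputs
### (demo_all_finite is an illustrative helper, not part of any library).
import numpy as np
def demo_all_finite(arr):
    ### True only when the array has no NaN and no +/-inf entries.
    return bool(np.isfinite(arr).all())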
classifiers = {
# 'dt_clf': {
# 'clf': DecisionTreeClassifier(),
# 'params': {
# 'dt_clf__random_state': [42]
# }
# },
'rf_clf': {
'clf': RandomForestClassifier(),
'params': {
'rf_clf__n_estimators': [1, 2, 3, 4, 5, 6, 7],
'rf_clf__max_features': ['sqrt', 'log2'],
'rf_clf__max_depth': [8, 16, 24],
'rf_clf__min_samples_split': [2],
'rf_clf__min_samples_leaf': [1, 2, 3, 4],
'rf_clf__bootstrap': [True, False],
'rf_clf__random_state': [42],
'rf_clf__n_jobs': [n_jobs]
}
},
# 'ab_clf': {
# 'clf': AdaBoostClassifier(),
# 'params': {
# 'ab_clf__base_estimator': [
# DecisionTreeClassifier_partial(),
# RandomForestClassifier_partial(),
# AdaBoostClassifier_partial(),
# svm_SVC_partial(),
# KNeighborsClassifier_partial(),
# GaussianNB()
# ],
# 'ab_clf__n_estimators': [8, 16, 24, 32, 40, 48, 56],
# 'ab_clf__algorithm': ['SAMME', 'SAMME.R'],
# 'ab_clf__random_state': [42]
# }
# },
# 'kn_clf': {
# 'clf': KNeighborsClassifier(),
# 'params': {
# 'kn_clf__n_neighbors': [2, 3, 4, 5, 6, 7, 8, 9, 10],
# 'kn_clf__weights': ['uniform', 'distance'],
# 'kn_clf__algorithm': ['ball_tree', 'kd_tree', 'brute'],
# 'kn_clf__leaf_size': [4, 8, 12, 16, 20, 24, 30],
# 'kn_clf__n_jobs': [n_jobs]
# }
# },
# 'gnb_clf': {
# 'clf': GaussianNB(),
# 'params': {
# # Defaults
# }
# },
}
print('\nGrid search output:\n')
imp_gscvs_dict = {}
imp_gscvs_dict['mixed_impute_trimmed'] \
= search_em_all(X_train=X_train_trimmed, y_train=y_train_1d, selectors=selectors,
decomps=decomps, classifiers=classifiers, pipe_verbose=True,
scoring='recall_weighted', n_jobs=-1)
### Can try with multiple datasets for comparison.
# imp_gscvs_dict['other_set'] \
# = search_em_all(X_train=X_train_other_set, y_train=y_train_1d, selectors=selectors,
# decomps=decomps, classifiers=classifiers, pipe_verbose=True,
# scoring='recall_weighted', n_jobs=-1)
with open('data/imp_gscvs_dict_last.pkl', 'wb') as file:
pickle.dump(obj=imp_gscvs_dict, file=file)
### Check some performance metrics of final model.
get_f = lambda precision, recall: 2 * ((precision * recall) / (precision + recall))
for name, gscv in imp_gscvs_dict['mixed_impute_trimmed'].items():
print('Best model key:', name, '\n')
print('Best score:\n', gscv.best_score_, '\n')
print('Best estimator:\n', gscv.best_estimator_, '\n')
clf = gscv.best_estimator_
pred = clf.predict(X_test_trimmed)
conf = confusion_matrix(y_true=y_test_1d, y_pred=pred)
print('Confusion matrix:\n', conf, '\n')
prf = precision_recall_fscore_support(y_true=y_test_1d, y_pred=pred)
print('Precision, recall, f beta score, support:\n', prf, '\n')
print('Custom F beta using nonPOI precision and POI recall:\n', get_f(prf[0][0], prf[1][1]), '\n')
print('\n')
### Check final features.
print('\nAll features and their scores provided by scoring function mutual_info_classif:')
feature_scores_sr = pd.Series(data=clf.named_steps['sel_per'].scores_, index=X_train_trimmed.columns)
print(feature_scores_sr.sort_values(ascending=False))
print('\nSelected features and their scores provided by selecting function mutual_info_classif:')
num_selected = int((0.01 * clf.named_steps['sel_per'].percentile) * len(X_train_trimmed.columns))
print(feature_scores_sr.sort_values(ascending=False).head(num_selected))
print('y_train_base shape:', y_train_base.shape)
print('y_test_base shape:', y_test_base.shape)
### Compare to baseline using whole set.
### Store metrics in a dataframe in case you want to save and inspect it.
base_perf_full_df = pd.DataFrame(columns=ordered_cols_lst)
print('\nBaseline models, no engineered features, no tuning, impute with 0, full train-test split:\n')
for key, method in clf_dict.items():
_, _, _, prf, perf_sr = run_skl(method=method, X_train=X_train_base,
y_train=y_train_base,
X_test=X_test_base,
y_test=y_test_base,
perf_series=key)
print('Custom F beta using nonPOI precision and POI recall:\n', get_f(prf[0][0], prf[1][1]), '\n')
base_perf_full_df = base_perf_full_df.append(perf_sr)
print('\nBaseline models, cleaned set, human-selected and engineered features, scaled, mixed imputation, no tuning:\n')
base_perf_engineered_trimmed_df = pd.DataFrame(columns=ordered_cols_lst)
for key, method in clf_dict.items():
_, _, _, prf, perf_sr = run_skl(method=method, X_train=X_train_trimmed,
y_train=y_train_1d,
X_test=X_test_trimmed,
y_test=y_test_1d,
perf_series=key)
print('Custom F beta using nonPOI precision and POI recall:\n', get_f(prf[0][0], prf[1][1]), '\n')
    base_perf_engineered_trimmed_df = base_perf_engineered_trimmed_df.append(perf_sr)
### [Task 6: Dump your classifier, dataset, and features_list so anyone can
### check your results ...]
print('Saving classifier, data, and features list.')
features_list = keep_lst
data_df_trimmed['poi'] = data_df_trimmed['poi'].astype(bool)
my_dataset = data_df_trimmed.T.to_dict()
### Would be ideal to scale in the pipeline, but I initially experimented with iterative imputation
### and feeding bools into sklearn, and so scaled first. Not worth rewriting for this project.
clf = imp_gscvs_dict['mixed_impute_trimmed']['sel_per_empty_rf_clf'].best_estimator_
CLF_PICKLE_FILENAME = "my_classifier.pkl"
DATASET_PICKLE_FILENAME = "my_dataset.pkl"
FEATURE_LIST_FILENAME = "my_feature_list.pkl"
with open(CLF_PICKLE_FILENAME, "wb") as clf_outfile:
pickle.dump(clf, clf_outfile)
with open(DATASET_PICKLE_FILENAME, "wb") as dataset_outfile:
pickle.dump(my_dataset, dataset_outfile)
with open(FEATURE_LIST_FILENAME, "wb") as featurelist_outfile:
pickle.dump(features_list, featurelist_outfile)
### [... You do not need to change anything below, but make sure
### that the version of poi_id.py that you submit can be run on its own and
### generates the necessary .pkl files for validating your results.]
### Deprecated
# dump_classifier_and_data(clf, my_dataset, features_list)
unpickled_data_dict = {}
with open('my_dataset.pkl', 'rb') as file:
unpickled_data_dict = pickle.load(file=file)
unpickled_data_dict
df = pd.DataFrame(data=unpickled_data_dict).T
df['poi'].count()
df['poi'].sum()
set(df['poi'].values)
df.loc['LOCKHART EUGENE E']
```
<!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
*This notebook contains an excerpt from the book [Machine Learning for OpenCV](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv) by Michael Beyeler.
The code is released under the [MIT license](https://opensource.org/licenses/MIT),
and is available on [GitHub](https://github.com/mbeyeler/opencv-machine-learning).*
*Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
[buying the book](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv)!*
<!--NAVIGATION-->
< [Combining Decision Trees Into a Random Forest](10.02-Combining-Decision-Trees-Into-a-Random-Forest.ipynb) | [Contents](../README.md) | [Implementing AdaBoost](10.04-Implementing-AdaBoost.ipynb) >
# Using Random Forests for Face Recognition
A popular dataset that we haven't talked much about yet is the **Olivetti face dataset**.
The Olivetti face dataset was collected in 1990 by AT&T Laboratories Cambridge. The
dataset comprises facial images of 40 distinct subjects, taken at different times and under
different lighting conditions. In addition, subjects varied their facial expression
(open/closed eyes, smiling/not smiling) and their facial details (glasses/no glasses).
Images were then quantized to 256 grayscale levels and stored as unsigned 8-bit integers.
Because there are 40 distinct subjects, the dataset comes with 40 distinct target labels.
Recognizing faces thus constitutes an example of a **multiclass classification** task.
## Loading the dataset
Like many other classic datasets, the Olivetti face dataset can be loaded using scikit-learn:
```
from sklearn.datasets import fetch_olivetti_faces
dataset = fetch_olivetti_faces()
X = dataset.data
y = dataset.target
```
Although the original images were 92 x 112 pixels, the version available
through scikit-learn contains images downscaled to 64 x 64 pixels.
To get a sense of the dataset, we can plot some example images. Let's pick eight indices
from the dataset in a random order:
```
import numpy as np
np.random.seed(21)
idx_rand = np.random.randint(len(X), size=8)
```
We can plot these example images using Matplotlib, but we need to make sure we reshape
the column vectors to 64 x 64 pixel images before plotting:
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(14, 8))
for p, i in enumerate(idx_rand):
plt.subplot(2, 4, p + 1)
plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')
plt.axis('off')
```
You can see how all the faces are taken against a dark background and are upright. The
facial expression varies drastically from image to image, making this an interesting
classification problem. Try not to laugh at some of them!
## Preprocessing the dataset
Before we can pass the dataset to the classifier, we need to preprocess it following the best
practices from [Chapter 4](04.00-Representing-Data-and-Engineering-Features.ipynb), *Representing Data and Engineering Features*.
Specifically, we want to make sure that all example images have the same mean grayscale
level:
```
n_samples, n_features = X.shape
X -= X.mean(axis=0)
```
We repeat this procedure for every image to make sure the feature values of every data
point (that is, a row in `X`) are centered around zero:
```
X -= X.mean(axis=1).reshape(n_samples, -1)
```
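These two centering steps can be sanity-checked on a toy array (a minimal sketch; `A` stands in for `X`):

```python
import numpy as np

# Toy "dataset": 3 samples x 4 features, analogous to rows of X.
A = np.array([[1., 2., 3., 4.],
              [5., 6., 7., 8.],
              [2., 4., 6., 8.]])

# Step 1: center each feature (column) across the dataset.
A = A - A.mean(axis=0)

# Step 2: center each sample (row) around zero.
A = A - A.mean(axis=1).reshape(len(A), -1)

# After both steps, every row mean is (numerically) zero.
print(np.allclose(A.mean(axis=1), 0))  # True
```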
The preprocessed data can be visualized using the preceding code:
```
plt.figure(figsize=(14, 8))
for p, i in enumerate(idx_rand):
    plt.subplot(2, 4, p + 1)
    plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')
    plt.axis('off')
plt.savefig('olivetti-pre.png')
```
## Training and testing the random forest
We continue to follow our best practice to split the data into training and test sets:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=21
)
```
Then we are ready to apply a random forest to the data:
```
import cv2
rtree = cv2.ml.RTrees_create()
```
Here we want to create an ensemble with 50 decision trees:
```
num_trees = 50
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
            num_trees, eps)
rtree.setTermCriteria(criteria)
```
Because we have a large number of categories (that is, 40), we want to make sure the
random forest is set up to handle them accordingly:
```
rtree.setMaxCategories(len(np.unique(y)))
```
We can play with other optional arguments, such as the number of data points required in a
node before it can be split:
```
rtree.setMinSampleCount(2)
```
However, we might not want to limit the depth of each tree. This is, again, a parameter
we will have to experiment with eventually. But for now, let's set it to a large integer value,
making the depth effectively unconstrained:
```
rtree.setMaxDepth(1000)
```
Then we can fit the classifier to the training data:
```
rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
```
We can check the resulting depth of the tree using the following function:
```
rtree.getMaxDepth()
```
This means that although we allowed the tree to go up to depth 1000, in the end only 25
layers were needed.
The evaluation of the classifier is done once again by predicting the labels first (`y_hat`) and
then passing them to the `accuracy_score` function:
```
_, y_hat = rtree.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
```
We find 87% accuracy, which turns out to be much better than with a single decision tree:
```
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=21, max_depth=25)
tree.fit(X_train, y_train)
tree.score(X_test, y_test)
```
Not bad! We can play with the optional parameters to see if we can do better. The most
important one seems to be the number of trees in the forest. We can repeat the experiment
with a forest made from 100 trees:
```
num_trees = 100
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
            num_trees, eps)
rtree.setTermCriteria(criteria)
rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
_, y_hat = rtree.predict(X_test)
accuracy_score(y_test, y_hat)
```
With this configuration, we get 91% accuracy!
Another interesting use case of decision tree ensembles is Adaptive Boosting or AdaBoost.
<!--NAVIGATION-->
< [Combining Decision Trees Into a Random Forest](10.02-Combining-Decision-Trees-Into-a-Random-Forest.ipynb) | [Contents](../README.md) | [Implementing AdaBoost](10.04-Implementing-AdaBoost.ipynb) >
# Table of Contents
<p><div class="lev1 toc-item"><a href="#TensorFlow-Tutorial" data-toc-modified-id="TensorFlow-Tutorial-1"><span class="toc-item-num">1 </span>TensorFlow Tutorial</a></div><div class="lev2 toc-item"><a href="#1---Exploring-the-Tensorflow-Library" data-toc-modified-id="1---Exploring-the-Tensorflow-Library-11"><span class="toc-item-num">1.1 </span>1 - Exploring the Tensorflow Library</a></div><div class="lev3 toc-item"><a href="#1.1---Linear-function" data-toc-modified-id="1.1---Linear-function-111"><span class="toc-item-num">1.1.1 </span>1.1 - Linear function</a></div><div class="lev3 toc-item"><a href="#1.2---Computing-the-sigmoid" data-toc-modified-id="1.2---Computing-the-sigmoid-112"><span class="toc-item-num">1.1.2 </span>1.2 - Computing the sigmoid</a></div><div class="lev3 toc-item"><a href="#1.3----Computing-the-Cost" data-toc-modified-id="1.3----Computing-the-Cost-113"><span class="toc-item-num">1.1.3 </span>1.3 - Computing the Cost</a></div><div class="lev3 toc-item"><a href="#1.4---Using-One-Hot-encodings" data-toc-modified-id="1.4---Using-One-Hot-encodings-114"><span class="toc-item-num">1.1.4 </span>1.4 - Using One Hot encodings</a></div><div class="lev3 toc-item"><a href="#1.5---Initialize-with-zeros-and-ones" data-toc-modified-id="1.5---Initialize-with-zeros-and-ones-115"><span class="toc-item-num">1.1.5 </span>1.5 - Initialize with zeros and ones</a></div><div class="lev1 toc-item"><a href="#2---Building-your-first-neural-network-in-tensorflow" data-toc-modified-id="2---Building-your-first-neural-network-in-tensorflow-2"><span class="toc-item-num">2 </span>2 - Building your first neural network in tensorflow</a></div><div class="lev3 toc-item"><a href="#2.0---Problem-statement:-SIGNS-Dataset" data-toc-modified-id="2.0---Problem-statement:-SIGNS-Dataset-201"><span class="toc-item-num">2.0.1 </span>2.0 - Problem statement: SIGNS Dataset</a></div><div class="lev3 toc-item"><a href="#2.1---Create-placeholders" 
data-toc-modified-id="2.1---Create-placeholders-202"><span class="toc-item-num">2.0.2 </span>2.1 - Create placeholders</a></div><div class="lev3 toc-item"><a href="#2.2---Initializing-the-parameters" data-toc-modified-id="2.2---Initializing-the-parameters-203"><span class="toc-item-num">2.0.3 </span>2.2 - Initializing the parameters</a></div><div class="lev3 toc-item"><a href="#2.3---Forward-propagation-in-tensorflow" data-toc-modified-id="2.3---Forward-propagation-in-tensorflow-204"><span class="toc-item-num">2.0.4 </span>2.3 - Forward propagation in tensorflow</a></div><div class="lev3 toc-item"><a href="#2.4-Compute-cost" data-toc-modified-id="2.4-Compute-cost-205"><span class="toc-item-num">2.0.5 </span>2.4 Compute cost</a></div><div class="lev3 toc-item"><a href="#2.5---Backward-propagation-&-parameter-updates" data-toc-modified-id="2.5---Backward-propagation-&-parameter-updates-206"><span class="toc-item-num">2.0.6 </span>2.5 - Backward propagation & parameter updates</a></div><div class="lev3 toc-item"><a href="#2.6---Building-the-model" data-toc-modified-id="2.6---Building-the-model-207"><span class="toc-item-num">2.0.7 </span>2.6 - Building the model</a></div><div class="lev3 toc-item"><a href="#2.7---Test-with-your-own-image-(optional-/-ungraded-exercise)" data-toc-modified-id="2.7---Test-with-your-own-image-(optional-/-ungraded-exercise)-208"><span class="toc-item-num">2.0.8 </span>2.7 - Test with your own image (optional / ungraded exercise)</a></div>
# TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
## 1 - Exploring the Tensorflow Library
To start, you will import the library:
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
```
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
```
y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss
init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss
```
Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.
Now let us look at an easy example. Run the cell below:
```
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
```
As expected, you will not see 20! Instead, you got back a tensor object: it tells you that the result will be an "int32" tensor, but it carries no value yet. All you did was add the operation to the 'computation graph'; you have not run the computation. In order to actually multiply the two numbers, you will have to create a session and run it.
```
sess = tf.Session()
print(sess.run(c))
```
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
```
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
```
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
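The build-then-run pattern can be mimicked in plain Python with closures (an analogy only, not TensorFlow code): each node is a function that computes nothing until it is handed a feed dictionary.

```python
# Build phase: nodes are just functions of a feed dictionary; nothing runs yet.
def placeholder(name):
    return lambda feed: feed[name]

def multiply(a, b):
    return lambda feed: a(feed) * b(feed)

x = placeholder('x')
graph = multiply(x, lambda feed: 2)   # describes 2 * x, does not compute it

# Run phase: only now is the graph evaluated, with x fed in.
print(graph({'x': 3}))  # 6
```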
### 1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector.
**Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
```
# GRADED FUNCTION: linear_function
def linear_function():
    """
    Implements a linear function:
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- runs the session for Y = WX + b
    """
    np.random.seed(1)
    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3, 1), name='X')
    W = tf.constant(np.random.randn(4, 3), name='W')
    b = tf.constant(np.random.randn(4, 1), name='b')
    Y = tf.add(tf.matmul(W, X), b)
    ### END CODE HERE ###
    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
    ### START CODE HERE ###
    sess = tf.Session()
    result = sess.run(Y)
    ### END CODE HERE ###
    # close the session
    sess.close()
    return result

print("result = " + str(linear_function()))
```
**Expected Output**:
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
### 1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session.
**Exercise**: Implement the sigmoid function below. You should use the following:
- `tf.placeholder(tf.float32, name = "...")`
- `tf.sigmoid(...)`
- `sess.run(..., feed_dict = {x: z})`
Note that there are two typical ways to create and use sessions in tensorflow:
**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """
    ### START CODE HERE ### (approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(tf.float32, name='x')
    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)
    # Create a session, and run it. Please use the method 2 explained above.
    # You should use a feed_dict to pass z's value to x.
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict={x: z})
    ### END CODE HERE ###
    return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
```
**Expected Output**:
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
**To summarize, you now know how to**:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
### 1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
**Exercise**: Implement the cross entropy loss. The function you will use is:
- `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`
Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)})\large )\small\tag{2}$$
```
# GRADED FUNCTION: cost
def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """
    ### START CODE HERE ###
    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(tf.float32, name='z')
    y = tf.placeholder(tf.float32, name='y')
    # Use the loss function (approx. 1 line)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
    # Create a session (approx. 1 line). See method 1 above.
    sess = tf.Session()
    # Run the session (approx. 1 line).
    cost = sess.run(cost, feed_dict={z: logits, y: labels})
    # Close the session (approx. 1 line). See method 1 above.
    sess.close()
    ### END CODE HERE ###
    return cost

logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
cost = cost(logits, np.array([0, 0, 1, 1]))
print("cost = " + str(cost))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
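As a sanity check, the expected values can be reproduced with plain NumPy from the elementwise definition of the sigmoid cross entropy (a sketch; the logits here are the sigmoids computed above, exactly as in the call to `cost`):

```python
import numpy as np

def sigmoid_np(z):
    return 1.0 / (1.0 + np.exp(-z))

z = sigmoid_np(np.array([0.2, 0.4, 0.7, 0.9]))   # same logits as above
y = np.array([0., 0., 1., 1.])

# Per-element sigmoid cross entropy: -(y*log(sigma(z)) + (1-y)*log(1-sigma(z)))
a = sigmoid_np(z)
loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))
print(loss)  # approx [1.0054 1.0366 0.4139 0.3996]
```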
### 1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
- tf.one_hot(labels, depth, axis)
**Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
```
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had a label i, then entry (i,j)
    will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """
    ### START CODE HERE ###
    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(C, name='C')
    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0)
    # Create the session (approx. 1 line)
    sess = tf.Session()
    # Run the session (approx. 1 line)
    one_hot = sess.run(one_hot_matrix)
    # Close the session (approx. 1 line). See method 1 above.
    sess.close()
    ### END CODE HERE ###
    return one_hot

labels = np.array([1, 2, 3, 0, 2, 1])
one_hot = one_hot_matrix(labels, C=4)
print("one_hot = " + str(one_hot))
```
**Expected Output**:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
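For intuition, the same matrix can be reproduced in plain NumPy (a sketch of what `tf.one_hot(..., axis=0)` computes here, not the graded solution):

```python
import numpy as np

labels = np.array([1, 2, 3, 0, 2, 1])
C = 4

# np.eye(C)[labels] puts a 1 at column labels[j] of row j; transposing makes
# row i, column j equal 1 exactly when labels[j] == i, matching axis=0 above.
one_hot_np = np.eye(C)[labels].T
print(one_hot_np)
```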
### 1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
**Exercise:** Implement the function below to take in a shape and return an array of ones of that shape.
- tf.ones(shape)
```
# GRADED FUNCTION: ones
def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns:
    ones -- array containing only ones
    """
    ### START CODE HERE ###
    # Create "ones" tensor using tf.ones(...). (approx. 1 line)
    ones = tf.ones(shape)
    # Create the session (approx. 1 line)
    sess = tf.Session()
    # Run the session to compute 'ones' (approx. 1 line)
    ones = sess.run(ones)
    # Close the session (approx. 1 line). See method 1 above.
    sess.close()
    ### END CODE HERE ###
    return ones

print("ones = " + str(ones([3])))
```
**Expected Output:**
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
# 2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
- Create the computation graph
- Run the graph
Let's delve into the problem you'd like to solve!
### 2.0 - Problem statement: SIGNS Dataset
One afternoon, some friends and I decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, along with an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
```
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
```
Change the index below and run the cell to visualize some examples in the dataset.
```
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
```
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
```
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
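The shape bookkeeping of the flattening step can be checked on dummy data of the same per-image shape (a sketch with a tiny, made-up batch of 5 images):

```python
import numpy as np

# Five dummy RGB images of 64 x 64 pixels, values in [0, 255].
X_orig = np.random.randint(0, 256, size=(5, 64, 64, 3))

# Flatten each image into one column, as done for X_train above.
X_flat = X_orig.reshape(X_orig.shape[0], -1).T
print(X_flat.shape)        # (12288, 5): 64 * 64 * 3 features per example

# Normalizing brings every value into [0, 1].
X_norm = X_flat / 255.
print(X_norm.max() <= 1.0)  # True
```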
**Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
**The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
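That claim is easy to verify numerically: with two classes, the softmax over the logits [z, 0] assigns the first class exactly sigmoid(z) probability (a quick NumPy sketch):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

def sigmoid_np(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 1.7
# Two-class softmax over logits [z, 0] gives [sigmoid(z), 1 - sigmoid(z)].
p = softmax(np.array([z, 0.0]))
print(np.allclose(p[0], sigmoid_np(z)))  # True
```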
### 2.1 - Create placeholders
Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session.
**Exercise:** Implement the function below to create the placeholders in tensorflow.
```
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px * 3 = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - You will use None because it lets us be flexible about the number of examples for the placeholders.
      In fact, the number of examples during test/train is different.
    """
    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(tf.float32, shape=[n_x, None], name='X')
    Y = tf.placeholder(tf.float32, shape=[n_y, None], name='Y')
    ### END CODE HERE ###
    return X, Y

X, Y = create_placeholders(12288, 6)
print("X = " + str(X))
print("Y = " + str(Y))
```
**Expected Output**:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
### 2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use `seed = 1` to make sure your results match ours.
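For intuition, the uniform variant of Xavier (Glorot) initialization draws weights from ±sqrt(6 / (fan_in + fan_out)), which keeps activation variances roughly constant across layers. Here is a NumPy sketch of that rule (the TensorFlow initializer may differ in its exact sampling details):

```python
import numpy as np

def xavier_uniform(shape, rng=np.random.default_rng(1)):
    # Glorot/Xavier uniform: limit = sqrt(6 / (fan_in + fan_out)).
    fan_out, fan_in = shape
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=shape)

W1 = xavier_uniform((25, 12288))
print(W1.shape)  # (25, 12288)
# Every entry lies within the Glorot limit.
print(np.abs(W1).max() <= np.sqrt(6.0 / (25 + 12288)))  # True
```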
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
    """
    Initializes parameters to build a neural network with tensorflow. The shapes are:
                        W1 : [25, 12288]
                        b1 : [25, 1]
                        W2 : [12, 25]
                        b2 : [12, 1]
                        W3 : [6, 12]
                        b3 : [6, 1]

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """
    tf.set_random_seed(1)   # so that your "random" numbers match ours
    ### START CODE HERE ### (approx. 6 lines of code)
    W1 = tf.get_variable("W1", [25, 12288], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b1 = tf.get_variable("b1", [25, 1], initializer=tf.zeros_initializer())
    W2 = tf.get_variable("W2", [12, 25], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b2 = tf.get_variable("b2", [12, 1], initializer=tf.zeros_initializer())
    W3 = tf.get_variable("W3", [6, 12], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b3 = tf.get_variable("b3", [6, 1], initializer=tf.zeros_initializer())
    ### END CODE HERE ###
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}
    return parameters

tf.reset_default_graph()
with tf.Session() as sess:
    parameters = initialize_parameters()
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
### 2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
- `tf.add(...,...)` to do an addition
- `tf.matmul(...,...)` to do a matrix multiplication
- `tf.nn.relu(...)` to apply the ReLU activation
**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']
    ### START CODE HERE ### (approx. 5 lines)   # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)           # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                         # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)          # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                         # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)          # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###
    return Z3

tf.reset_default_graph()
with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))
```
**Expected Output**:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
### 2.4 Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
**Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, `tf.reduce_mean` basically does the summation over the examples.
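What that one line computes can be sketched in plain NumPy: a per-example softmax cross entropy followed by a mean over the batch (an illustration of the math with made-up logits and labels, not the graded solution):

```python
import numpy as np

def softmax_rows(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # stabilized row-wise softmax
    return e / e.sum(axis=1, keepdims=True)

# Two examples, three classes; rows are (example, class), as TF expects here.
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.2]])
labels = np.array([[1., 0., 0.],      # one-hot truth for each example
                   [0., 1., 0.]])

# Cross entropy per example, then the mean over examples (tf.reduce_mean).
ce = -(labels * np.log(softmax_rows(logits))).sum(axis=1)
cost_np = ce.mean()
print(cost_np)
```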
```
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """
    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)
    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                                                  labels=labels))
    ### END CODE HERE ###
    return cost

tf.reset_default_graph()
with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    print("cost = " + str(cost))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
### 2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All of backpropagation and the parameter update are taken care of in one line of code, and it is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
```
To make the optimization you would do:
```python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
```
This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs.
**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable).
### 2.6 - Building the model
Now, you will bring it all together!
**Exercise:** Implement the model. You will be calling the functions you had previously implemented.
```
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training set labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost],
feed_dict={X: minibatch_X,
Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
```
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
```
parameters = model(X_train, Y_train, X_test, Y_test)
```
**Expected Output**:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
**Insights**:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
### 2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
```
You indeed deserved a "thumbs-up" although, as you can see, the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up", so the model doesn't know how to deal with it! We call that a "mismatched data distribution" and it is one of the various topics covered in the next course on "Structuring Machine Learning Projects".
<font color='blue'>
**What you should remember**:
- Tensorflow is a programming framework used in deep learning
- The two main object classes in tensorflow are Tensors and Operators.
- When you code in tensorflow you have to take the following steps:
- Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session
- Initialize the session
- Run the session to execute the graph
- You can execute the graph multiple times as you've seen in model()
- The backpropagation and optimization is automatically done when running the session on the "optimizer" object.
```
%load_ext version_information
%version_information tensorflow, numpy, skimage, scipy, PIL
```
# Analog vs Digital Transmission
In this notebook we will explore the potential advantages of digital transmission over analog transmission. We will consider the case of transmission over a long (e.g. transoceanic) cable in which several repeaters are used to compensate for the attenuation introduced by the transmission.
Remember that if each cable segment introduces an attenuation of $1/G$, we can recover the original amplitude by boosting the signal with a repeater with gain $G$. However, if the signal has accumulated additive noise, the noise will be amplified as well so that, after $N$ repeaters, the noise will have been amplified $N$ times:
$$
\hat{x}_N(t) = x(t) + NG\sigma(t)
$$
If we use a digital signal, on the other hand, we can threshold the signal after each repeater and virtually eliminate the noise at each stage, so that even after several repeaters the transmission is still noise-free.
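As a minimal illustrative sketch (toy values, not part of the original experiment): as long as the accumulated noise on a sample stays below half the quantization step, rounding recovers the transmitted integer value exactly.

```python
import numpy as np

sent = np.array([3., -5., 7.])         # integer-valued "digital" samples
noise = np.array([0.3, -0.4, 0.2])     # per-sample noise, all below 0.5 in magnitude
received = sent + noise
# thresholding (here: rounding to the nearest integer) removes the noise completely
recovered = np.round(received)
assert np.array_equal(recovered, sent)
```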
Let's start with the standard initial bookkeeping...
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import IPython
from scipy.io import wavfile
plt.rcParams["figure.figsize"] = (14,4)
```
Now we can read in an audio file from disk; we can plot it and play it back. The `wavfile.read()` function returns the audio data and the playback rate, which we will need to pass to the playback functions.
```
rate, s = wavfile.read('speech.wav')
plt.plot(s);
IPython.display.Audio(s, rate=rate)
```
## The "Analog" and "Digital" Signals ##
We will now create two versions of the audio signal, an "analog" version and a "digital" version. Obviously the analog version is just a simulation, since we're using a digital computer; we will assume that, by using floating point values, we're in fact close enough to infinite precision. In the digital version of the signal, on the other hand, the audio samples will only take integer values between -100 and +100 (i.e. we will use approximately 8 bits per audio sample).
```
# the analog signal is simply rescaled between -100 and +100
# largest element in magnitude:
norm = 1.0 / max(np.absolute([min(s), max(s)]))
sA = 100.0 * s * norm
# the digital version is clamped to the integers
sD = np.round(sA)
```
Remember that there is no free lunch and quantization implies a loss of quality; this initial loss (that we can minimize by using more bits per sample) is the price to pay for digital transmission. We can plot the error and compute the Signal to Noise Ratio (SNR) of the quantized signal
```
plt.plot(sA-sD);
```
as expected, the error is between -0.5 and +0.5, since in the "analog" signal the values are real-valued, whereas in the "digital" version they can only take integer values. As for the SNR,
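A quick sanity check of this claim (a small sketch on random data, not part of the original notebook): rounding any real value moves it by at most half a quantization step.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-100, 100, 10_000)   # "analog" values
err = x - np.round(x)                # quantization error
assert np.max(np.abs(err)) <= 0.5    # never more than half a quantization step
```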
```
# we will be computing SNRs later as well, so let's define a function
def SNR(noisy, original):
# power of the error
err = np.linalg.norm(original-noisy)
# power of the signal
sig = np.linalg.norm(original)
# SNR in dBs
return 10 * np.log10(sig/err)
print ('SNR = %f dB' % SNR(sD, sA))
```
Can we hear the 17dB difference? A bit...
```
IPython.display.Audio(sA, rate=rate)
IPython.display.Audio(sD, rate=rate)
```
## Transmission ##
Let's now define a function that represents the net effect of transmitting audio over a cable segment terminated by a repeater:
* the signal is attenuated
* the signal accumulates additive noise as it propagates through the cable
* the signal is amplified to the original amplitude by the repeater
```
def repeater(x, noise_amplitude, attenuation):
# first, create the noise
noise = np.random.uniform(-noise_amplitude, noise_amplitude, len(x))
# attenuation
x = x * attenuation
# noise
x = x + noise
# gain compensation
return x / attenuation
```
we can use the repeater for both analog and digital signals. Transmission of the analog signal is simply a sequence of repeaters:
```
def analog_tx(x, num_repeaters, noise_amplitude, attenuation):
for n in range(0, num_repeaters):
x = repeater(x, noise_amplitude, attenuation)
return x
```
For digital signals, however, we can rectify the signal after each repeater, because we know that values should only be integer-valued:
```
def digital_tx(x, num_repeaters, noise_amplitude, attenuation):
for n in range(0, num_repeaters):
x = np.round(repeater(x, noise_amplitude, attenuation))
return x
```
Let's compare transmission schemes
```
NUM_REPEATERS = 70
NOISE_AMPLITUDE = 0.2
ATTENUATION = 0.5
yA = analog_tx(sA, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION)
print ('Analog transmission: SNR = %f dB' % SNR(yA, sA))
yD = digital_tx(sD, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION)
print ('Digital transmission: SNR = %f dB' % SNR(yD, sA))
```
As you can see, the SNR after digital transmission has not changed! Now the difference between audio clips should be easy to hear:
```
IPython.display.Audio(yA, rate=rate)
IPython.display.Audio(yD, rate=rate)
```
Note however that, if the noise amplitude exceeds a certain value, digital transmission degrades even less gracefully than analog transmission:
```
NOISE_AMPLITUDE = 0.3
yA = analog_tx(sA, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION)
print ('Analog transmission: SNR = %f dB' % SNR(yA, sA))
yD = digital_tx(sD, NUM_REPEATERS, NOISE_AMPLITUDE, ATTENUATION)
print ('Digital transmission: SNR = %f dB' % SNR(yD, sA))
```
# Introduction
In this notebook, we will do a comprehensive analysis of the Android app market by comparing thousands of apps in the Google Play store.
# About the Dataset of Google Play Store Apps & Reviews
**Data Source:** <br>
App and review data was scraped from the Google Play Store by Lavanya Gupta in 2018. Original files listed [here](https://www.kaggle.com/lava18/google-play-store-apps).
# Import Statements
```
import pandas as pd
import plotly.express as px
```
# Notebook Presentation
```
# Show numeric output in decimal format e.g., 2.15
pd.options.display.float_format = '{:,.2f}'.format
```
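For example (a small illustrative sketch on a toy DataFrame), with this option set, floats render with thousands separators and two decimal places:

```python
import pandas as pd

pd.options.display.float_format = '{:,.2f}'.format
df = pd.DataFrame({"price": [1234.5678, 2.149]})
print(df)   # the price column shows as 1,234.57 and 2.15
```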
# Read the Dataset
```
df_apps = pd.read_csv('apps.csv')
```
# Data Cleaning
**Challenge**: How many rows and columns does `df_apps` have? What are the column names? Look at a random sample of 5 different rows with [.sample()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html).
```
print(df_apps.shape)
print(df_apps.columns)
df_apps
print(df_apps[df_apps["App"] == "Subway Surfers"])
df_apps.groupby("Category").count()
df_apps.sample(5)
```
### Drop Unused Columns
**Challenge**: Remove the columns called `Last_Updated` and `Android_Version` from the DataFrame. We will not use these columns.
```
refined_df_apps = df_apps.drop(['Last_Updated', 'Android_Ver'], axis=1)
refined_df_apps
```
### Find and Remove NaN values in Ratings
**Challenge**: How many rows have a NaN value (not-a-number) in the Ratings column? Create a DataFrame called `df_apps_clean` that does not include these rows.
```
refined_df_apps.Rating.isna().values.sum()
df_apps_clean = refined_df_apps.dropna()
df_apps_clean
```
### Find and Remove Duplicates
**Challenge**: Are there any duplicates in the data? Check for duplicates using the [.duplicated()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html) function. How many entries can you find for the "Instagram" app? Use [.drop_duplicates()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html) to remove any duplicates from `df_apps_clean`.
```
print(df_apps_clean.duplicated())
print("---------------------------------")
print(df_apps_clean.duplicated().sum())
print("---------------------------------")
print(df_apps_clean.duplicated(subset="App"))
print("---------------------------------")
print(df_apps_clean.duplicated(subset="App").sum())
print(df_apps_clean[df_apps_clean.App == "Instagram"].count())
print("---------------------------------")
df_apps_clean[df_apps_clean.App == "Instagram"]
df_apps_clean.drop_duplicates(subset="App", keep="first", inplace=True)
print(df_apps_clean.shape)
print("---------------------------------")
df_apps_clean[df_apps_clean.App == "Instagram"]
```
# Find Highest Rated Apps
**Challenge**: Identify which apps are the highest rated. What problem might you encounter if you rely exclusively on ratings alone to determine the quality of an app?
```
print(df_apps_clean.Rating.sort_values(ascending=False))
print(df_apps_clean.sort_values(by=["Rating"], ascending=False))
df_apps_clean.sort_values(by=["Installs", "Rating"], ascending=False)
```
# Find 5 Largest Apps in terms of Size (MBs)
**Challenge**: What's the size in megabytes (MB) of the largest Android apps in the Google Play Store? Based on the data, do you think there could be a limit in place or can developers make apps as large as they please?
```
print(df_apps_clean.Size_MBs.sort_values(ascending=False))
print(df_apps_clean.sort_values(by=["Size_MBs"], ascending=False))
df_apps_clean.sort_values(by=["Size_MBs"], ascending=False).head(10)
```
# Find the 5 Apps with the Most Reviews
**Challenge**: Which apps have the highest number of reviews? Are there any paid apps among the top 50?
```
print(df_apps_clean.sort_values(by=["Reviews"], ascending=False))
print(df_apps_clean.sort_values(by=["Reviews"], ascending=False).head(50))
print("--------------------------------------------------------")
print(df_apps_clean.sort_values(by=["Reviews"], ascending=False).head(50)["Type"] == "Paid")
print("--------------------------------------------------------")
print(df_apps_clean.sort_values(by=["Reviews"], ascending=False).head(50)["Type"].isin(["Paid"]))
print("--------------------------------------------------------")
print(df_apps_clean.sort_values(by=["Reviews"], ascending=False).head(50)["Type"].isin(["Paid"]).values.any())
print("--------------------------------------------------------")
print(df_apps_clean.sort_values(by=["Reviews"], ascending=False).head(50)["Type"].isin(["Paid"]).values.sum())
print("--------------------------------------------------------")
df_apps_clean_top_50_by_review = df_apps_clean.sort_values(by=["Reviews"], ascending=False).head(50)
df_apps_clean_top_50_by_review[df_apps_clean_top_50_by_review["Type"] == "Paid"]
```
# Plotly Pie and Donut Charts - Visualise Categorical Data: Content Ratings
```
ratings = df_apps_clean.Content_Rating.value_counts()
ratings
fig = px.pie(labels=ratings.index, values=ratings.values, names=ratings.index, title="Apps by Content Rating")
fig.show()
fig = px.pie(labels=ratings.index, values=ratings.values, names=ratings.index, title="Apps by Content Rating")
fig.update_traces(textposition="outside", textinfo="percent+label")
fig.show()
fig = px.pie(
labels=ratings.index,
values=ratings.values,
names=ratings.index,
title="Apps by Content Rating",
hole=0.7,
)
fig.update_traces(textposition="outside", textinfo="percent+label")
fig.show()
```
# Numeric Type Conversion: Examine the Number of Installs
**Challenge**: How many apps had over 1 billion (that's right - BILLION) installations? How many apps just had a single install?
Check the datatype of the Installs column.
Count the number of apps at each level of installations.
Convert the number of installations (the Installs column) to a numeric data type. Hint: this is a 2-step process. You'll have to make sure you remove non-numeric characters first.
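On toy data, the two-step conversion might look like this (a minimal sketch; the actual column cleaning is done in the next cell):

```python
import pandas as pd

s = pd.Series(["1,000,000", "500", "10,000"])    # install counts stored as strings
cleaned = s.str.replace(",", "", regex=False)    # step 1: strip the thousands separators
numbers = pd.to_numeric(cleaned)                 # step 2: convert to a numeric dtype
print(numbers.tolist())                          # [1000000, 500, 10000]
```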
```
print(type(df_apps_clean.Installs[10744]))
print("---------------------------------------")
print(df_apps_clean.Installs.dtypes)
print("---------------------------------------")
print(df_apps_clean.describe())
print("---------------------------------------")
df_apps_clean.info()
print(df_apps_clean.Installs.sort_values(ascending=False).head(20))
print("---------------------------------------")
df_apps_clean[["App", "Installs"]].groupby("Installs").count()
df_apps_clean.Installs = df_apps_clean.Installs.astype("str").str.replace(",", "")
df_apps_clean.Installs = pd.to_numeric(df_apps_clean.Installs)
print(df_apps_clean[["App", "Installs"]].groupby("Installs").count())
print("---------------------------------------")
df_apps_clean.sort_values(by=["Installs"] ,ascending=False).head(30)
```
# Find the Most Expensive Apps, Filter out the Junk, and Calculate a (ballpark) Sales Revenue Estimate
Let's examine the Price column more closely.
**Challenge**: Convert the price column to numeric data. Then investigate the top 20 most expensive apps in the dataset.
Remove all apps that cost more than $250 from the `df_apps_clean` DataFrame.
Add a column called 'Revenue_Estimate' to the DataFrame. This column should hold the price of the app times the number of installs. What are the top 10 highest grossing paid apps according to this estimate? Out of the top 10 highest grossing paid apps, how many are games?
```
print(df_apps_clean.Price.describe())
print("---------------------------------------")
print(df_apps_clean.Price.sort_values(ascending=False).head(20))
print("---------------------------------------")
print(df_apps_clean.sort_values(by=["Price"], ascending=False).head(20))
print("---------------------------------------")
df_apps_clean.Price = df_apps_clean.Price.astype("str").str.replace("$", "")
df_apps_clean.Price = pd.to_numeric(df_apps_clean.Price)
df_apps_clean.sort_values(by=["Price"], ascending=False).head(20)
```
### The most expensive apps under $250
```
print(df_apps_clean[df_apps_clean.Price < 250].sort_values(by=["Price"], ascending=False).head(10))
df_apps_clean = df_apps_clean[df_apps_clean.Price < 250]
df_apps_clean.sort_values(by=["Price"], ascending=False).head(10)
```
### Highest Grossing Paid Apps (ballpark estimate)
```
df_apps_clean["Revenue_Estimate"] = df_apps_clean.Installs.mul(df_apps_clean.Price)
print(df_apps_clean.info())
df_apps_clean.sort_values("Revenue_Estimate", ascending=False)[:10]
```
# Plotly Bar Charts & Scatter Plots: Analysing App Categories
```
print(df_apps_clean.Category.nunique())
top_10_category = df_apps_clean.Category.value_counts()[:10]
top_10_category
category_revenue = df_apps_clean.groupby("Category").agg({"Installs": pd.Series.sum, "Revenue_Estimate": pd.Series.sum})
category_revenue.sort_values(by=["Revenue_Estimate"], ascending=False)
category_installs = df_apps_clean.groupby("Category").agg({"Installs": pd.Series.sum})
print(category_installs)
print(category_installs.sort_values(by=["Installs"], ascending=False))
```
### Vertical Bar Chart - Highest Competition (Number of Apps)
```
bar = px.bar(x=top_10_category.index, y=top_10_category.values)
bar.show()
```
### Horizontal Bar Chart - Most Popular Categories (Highest Downloads)
```
category_installs.sort_values(by=["Installs"], ascending=True, inplace=True)
h_bar = px.bar(
x = category_installs.Installs,
y = category_installs.index,
orientation = "h",
)
h_bar.show()
h_bar = px.bar(
x = category_installs.Installs,
y = category_installs.index,
orientation = "h",
title = "Category Popularity"
)
h_bar.update_layout(xaxis_title="Number of Downloads", yaxis_title="Category")
h_bar.show()
```
### Category Concentration - Downloads vs. Competition
**Challenge**:
* First, create a DataFrame that has the number of apps in one column and the number of installs in another:
<img src=https://imgur.com/uQRSlXi.png width="350">
* Then use the [plotly express examples from the documentation](https://plotly.com/python/line-and-scatter/) alongside the [.scatter() API reference](https://plotly.com/python-api-reference/generated/plotly.express.scatter.html) to create a scatter plot that looks like this.
<img src=https://imgur.com/cHsqh6a.png>
*Hint*: Use the size, hover_name and color parameters in .scatter(). To scale the yaxis, call .update_layout() and specify that the yaxis should be on a log-scale like so: yaxis=dict(type='log')
```
category_apps = df_apps_clean.groupby("Category").agg({"Installs": pd.Series.sum, "App": pd.Series.count})
print(category_apps.sort_values(by=["App"], ascending=False))
fig = px.scatter(
data_frame=category_apps,
title="Category Concentration",
x="App", y="Installs",
size="App",
color="Installs",
hover_name=category_apps.index)
fig.update_layout(yaxis=dict(type="log"))
fig.show()
```
# Extracting Nested Data from a Column
**Challenge**: How many different types of genres are there? Can an app belong to more than one genre? Check what happens when you use .value_counts() on a column with nested values? See if you can work around this problem by using the .split() function and the DataFrame's [.stack() method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html).
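The idea behind the split-and-stack workaround can be seen on a toy Series (an illustrative sketch; the real data is handled in the next cell):

```python
import pandas as pd

genres = pd.Series(["Art;Creativity", "Puzzle", "Art"])
# .value_counts() on the raw column treats "Art;Creativity" as its own category;
# splitting on ";" and stacking separates the nested genres first
counts = genres.str.split(";", expand=True).stack().value_counts()
print(counts["Art"])   # 2 -- "Art" is now counted in both rows that contain it
```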
```
print(df_apps_clean.Genres.unique())
print("----------------------------------")
print(f"unique genres: {len(df_apps_clean.Genres.unique())}")
print("----------------------------------")
print(df_apps_clean.Genres.value_counts().sort_values(ascending=True)[:10])
split = df_apps_clean.Genres.str.split(";", expand=True)
print("----------------------------------")
print(split[60:70])
print("----------------------------------")
stack = split.stack()
print(f"stack shape: {stack.shape}")
print("----------------------------------")
print(stack[60:70])
num_genres = stack.value_counts()
print("----------------------------------")
print(f"number of genres: {len(num_genres)}")
print("----------------------------------")
print(num_genres)
fig = px.bar(
x = num_genres.index,
y = num_genres.values,
title = "Top Genres",
hover_name = num_genres.index,
color = num_genres.values,
color_continuous_scale = "plotly3"
)
fig.update_layout(xaxis_title="Genre", yaxis_title="Apps", coloraxis_showscale=False)
fig.update_xaxes(tickangle=45)
fig.show()
```
# Colour Scales in Plotly Charts - Competition in Genres
**Challenge**: Can you create this chart with the Series containing the genre data?
<img src=https://imgur.com/DbcoQli.png width=400>
Try experimenting with the built in colour scales in Plotly. You can find a full list [here](https://plotly.com/python/builtin-colorscales/).
* Find a way to set the colour scale using the color_continuous_scale parameter.
* Find a way to make the color axis disappear by using coloraxis_showscale.
```
```
# Grouped Bar Charts: Free vs. Paid Apps per Category
```
print(df_apps_clean.Type.value_counts())
print("----------------------------------")
df_free_vs_paid = df_apps_clean.groupby(["Category", "Type"], as_index=False).agg({"App": pd.Series.count})
print(df_free_vs_paid.head(10))
print("----------------------------------")
print(df_free_vs_paid.shape)
print("----------------------------------")
df_free_vs_paid.sort_values("App", ascending=False).head(30)
```
**Challenge**: Use the plotly express bar [chart examples](https://plotly.com/python/bar-charts/#bar-chart-with-sorted-or-ordered-categories) and the [.bar() API reference](https://plotly.com/python-api-reference/generated/plotly.express.bar.html#plotly.express.bar) to create this bar chart:
<img src=https://imgur.com/LE0XCxA.png>
You'll want to use the `df_free_vs_paid` DataFrame that you created above that has the total number of free and paid apps per category.
See if you can figure out how to get the look above by changing the `categoryorder` to 'total descending' as outlined in the documentation here [here](https://plotly.com/python/categorical-axes/#automatically-sorting-categories-by-name-or-total-value).
```
fig = px.bar(
data_frame = df_free_vs_paid,
x = "Category",
y = "App",
color = "Type",
# hover_name = "Category"
)
fig.update_layout(barmode="group", yaxis=dict(type="log"))
fig.update_xaxes(categoryorder="total descending", tickangle=45)
fig.show()
```
# Plotly Box Plots: Lost Downloads for Paid Apps
**Challenge**: Create a box plot that shows the number of Installs for free versus paid apps. How does the median number of installations compare? Is the difference large or small?
Use the [Box Plots Guide](https://plotly.com/python/box-plots/) and the [.box API reference](https://plotly.com/python-api-reference/generated/plotly.express.box.html) to create the following chart.
<img src=https://imgur.com/uVsECT3.png>
```
fig = px.box(
data_frame = df_apps_clean,
title = "How Many Downloads are Paid Apps Giving Up?",
x = "Type",
y = "Installs",
color = "Type",
points = "all",
notched = True
)
fig.update_layout(yaxis=dict(type="log"))
fig.show()
```
# Plotly Box Plots: Revenue by App Category
**Challenge**: See if you can generate the chart below:
<img src=https://imgur.com/v4CiNqX.png>
Looking at the hover text, how much does the median app earn in the Tools category? If developing an Android app costs $30,000 or thereabouts, does the average photography app recoup its development costs?
Hint: I've used 'min ascending' to sort the categories.
```
df_paid_apps = df_apps_clean[df_apps_clean.Type == "Paid"]
fig = px.box(
data_frame = df_paid_apps,
title = "How Much Can Paid App Earn?",
x = "Category",
y = "Revenue_Estimate",
# points = "all",
# notched = True
)
fig.update_layout(yaxis=dict(type="log"))
fig.update_xaxes(categoryorder="min ascending", tickangle=45)
fig.show()
```
# How Much Can You Charge? Examine Paid App Pricing Strategies by Category
**Challenge**: What is the median price for a paid app? Then compare pricing by category by creating another box plot. But this time examine the prices (instead of the revenue estimates) of the paid apps. I recommend using `{'categoryorder': 'max descending'}` to sort the categories.
```
print(df_paid_apps.Price.median())
print(df_paid_apps[df_paid_apps.Category == "MEDICAL"].Price.median())
print(df_paid_apps[df_paid_apps.Category == "WEATHER"].Price.median())
print(df_paid_apps[df_paid_apps.Category == "BUSINESS"].Price.median())
print(df_paid_apps[df_paid_apps.Category == "DATING"].Price.median())
print(df_paid_apps[df_paid_apps.Category == "FAMILY"].Price.median())
df_paid_apps
fig = px.box(
data_frame = df_paid_apps,
title = "Price per Category",
x = "Category",
y = "Price"
)
fig.update_layout(
xaxis_title = "Category",
yaxis_title = "Paid App Price",
yaxis = dict(type="log"),
xaxis={"categoryorder": "max descending"}
)
fig.show()
```
# Data Engineering in Python with databolt - Quickly Load Any Type of CSV (d6tlib/d6tstack)
Vendors often send large datasets in multiple files. Often there are missing and misaligned columns between files that have to be manually cleaned. With DataBolt File Stack you can easily stack them together into one consistent dataset.
Features include:
* Quickly check column consistency across multiple files
* Fix added/missing columns
* Fix renamed columns
* Out of core functionality to process large files
* Export to pandas, CSV, SQL, parquet
* Fast export to postgres and mysql with out of core support
In this workbook we will demonstrate the usage of the d6tstack library.
```
import importlib
import pandas as pd
import glob
import d6tstack.combine_csv as d6tc
import d6tstack
```
## Get sample data
We've created some dummy sample data which you can download.
```
import urllib.request
cfg_fname_sample = 'test-data.zip'
urllib.request.urlretrieve("https://github.com/d6t/d6tstack/raw/master/"+cfg_fname_sample, cfg_fname_sample)
import zipfile
zip_ref = zipfile.ZipFile(cfg_fname_sample, 'r')
zip_ref.extractall('.')
zip_ref.close()
```
## Use Case: Checking Column Consistency
Let's say you receive a bunch of csv files you want to ingest them, say for example into pandas, dask, pyspark, database.
```
cfg_fnames = list(glob.glob('test-data/input/test-data-input-csv-clean-*.csv'))
print(cfg_fnames)
```
### Check column consistency across all files
Even if you think the files have a consistent column layout, it is worthwhile using `d6tstack` to assert that this is actually the case. It's very quick to do, even with many large files!
```
# get previews
c = d6tc.CombinerCSV(cfg_fnames) # all_strings=True makes reading faster
col_sniff = c.sniff_columns()
print('all columns equal?', c.is_all_equal())
print('')
print('which columns are present in which files?')
print('')
print(c.is_column_present())
print('')
print('in what order do columns appear in the files?')
print('')
print(col_sniff['df_columns_order'].reset_index(drop=True))
```
### Preview Combined Data
You can see a preview of what the combined data from all files will look like.
```
c.combine_preview()
```
### Read All Files to Pandas
You can quickly load the combined data into a pandas dataframe with a single command.
```
c.to_pandas().head()
```
## Use Case: Identifying and fixing inconsistent columns
The first case was clean: all files had the same columns. It happens very frequently that the data schema changes over time, with columns being added or deleted. Let's look at a case where an extra column got added.
```
cfg_fnames = list(glob.glob('test-data/input/test-data-input-csv-colmismatch-*.csv'))
print(cfg_fnames)
# get previews
c = d6tc.CombinerCSV(cfg_fnames) # all_strings=True makes reading faster
col_sniff = c.sniff_columns()
print('all columns equal?', c.is_all_equal())
print('')
print('which columns are unique?', col_sniff['columns_unique'])
print('')
print('which files have unique columns?')
print('')
print(c.is_column_present_unique())
c.to_pandas().head() # keep all columns
d6tc.CombinerCSV(cfg_fnames, columns_select_common=True).to_pandas().head()
```
## Use Case: Align renamed columns and select a subset of columns
Say a column has been renamed and now the data doesn't line up with the data from the old column name. You can easily fix such a situation by using `CombinerCSVAdvanced` which allows you to rename columns and automatically lines up the data. It also allows you to just load data from a subset of columns.
```
cfg_fnames = list(glob.glob('test-data/input/test-data-input-csv-renamed-*.csv'))
c = d6tc.CombinerCSV(cfg_fnames)
print(c.is_column_present_unique())
```
The column `sales` got renamed to `revenue` in the March file, which would cause problems when reading the files.
```
col_sniff = c.sniff_columns()
c.combine_preview()[['filename']+col_sniff['columns_unique']]
```
You can pass the columns you want to rename to `columns_rename` and it will rename and align those columns.
```
# only select particular columns
cfg_col_sel = ['date','sales','cost','profit'] # don't select profit2
# rename columns
cfg_col_rename = {'sales':'revenue'} # rename all instances of sales to revenue
c = d6tc.CombinerCSV(cfg_fnames, columns_rename = cfg_col_rename, columns_select = cfg_col_sel)
c.combine_preview()
```
## Use Case: Identify change in column order
If you read your files into a database, a change in column order is a real problem: it looks like the files are all the same whereas in fact they have changed. This is because programs like dask or SQL loaders assume the column order is the same in every file. With `d6tstack` you can easily identify and fix such a case.
```
cfg_fnames = list(glob.glob('test-data/input/test-data-input-csv-reorder-*.csv'))
print(cfg_fnames)
# get previews
c = d6tc.CombinerCSV(cfg_fnames) # all_strings=True makes reading faster
col_sniff = c.sniff_columns()
```
Here we can see that not all columns are equal:
```
print('all columns equal?', col_sniff['is_all_equal'])
print('')
print('in what order do columns appear in the files?')
print('')
print(col_sniff['df_columns_order'].reset_index(drop=True))
c.combine_preview() # automatically puts it in the right order
```
## Customize separator and pass pd.read_csv() params
You can pass additional parameters such as separators and any params for `pd.read_csv()` to the combiner.
```
c = d6tc.CombinerCSV(cfg_fnames, sep=',', read_csv_params={'header': None})
col_sniff = c.sniff_columns()
print(col_sniff)
```
## CSV out of core functionality
If your files are large you don't want to read them all into memory and then save. Instead you can write directly to the output file.
```
c.to_csv_combine('test-data/output/test.csv')
```
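For intuition, the out-of-core idea can be sketched with the standard library alone: stream each input row straight to the output file so memory use stays constant regardless of file size (the file names and contents here are made up):

```python
import csv
import os
import tempfile

# Create two small hypothetical input files to combine.
tmp = tempfile.mkdtemp()
files = []
for i, rows in enumerate([[('date', 'sales'), ('2019-01-01', '100')],
                          [('date', 'sales'), ('2019-02-01', '120')]]):
    path = os.path.join(tmp, f'part{i}.csv')
    with open(path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)
    files.append(path)

out_path = os.path.join(tmp, 'combined.csv')
with open(out_path, 'w', newline='') as out:
    writer = None
    for path in files:
        with open(path, newline='') as f:
            reader = csv.DictReader(f)
            if writer is None:  # write the header once
                writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
                writer.writeheader()
            for row in reader:  # stream row by row: constant memory
                writer.writerow(row)

with open(out_path) as f:
    print(sum(1 for _ in f))  # 3 lines: one header + two data rows
```

`d6tstack` adds column alignment and renaming on top of this streaming pattern.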
## Auto Detect pd.read_csv() settings
### Detect CSV settings for a single file
```
cfg_sniff = d6tstack.sniffer.sniff_settings_csv([cfg_fnames[0]])
print(cfg_sniff)
```
### Detect CSV settings across multiple files
```
# finds common csv across all files
cfg_sniff = d6tstack.sniffer.sniff_settings_csv(cfg_fnames)
print(cfg_sniff)
```
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import os, warnings, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras import Sequential, Model
from tensorflow.keras import layers as L
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
## Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
## Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
## Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/99-cassava-leaf-effnetb3-scl-cce-512x512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
## Model
```
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
model = Model(inputs=inputs, outputs=base_model.output)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output', dtype='float32')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy', dtype='float32')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd', dtype='float32')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=True)
model.summary()
```
## Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
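A note on the TTA averaging above: because the repeated dataset yields one full pass over all images per TTA step, the flat prediction array is pass-major, and the `order='F'` reshape regroups it so axis 0 is the image and axis 1 the pass. A toy NumPy sketch (sizes made up for illustration):

```python
import numpy as np

test_size, tta_steps, n_classes = 3, 2, 2  # hypothetical toy sizes

# Rows are ordered pass-major: pass 0 for images 0..2, then pass 1 for
# images 0..2. Encode each prediction as image_index + 10 * pass_index.
flat = np.array([[m + 10 * p, 0.0] for p in range(tta_steps)
                                   for m in range(test_size)])

# order='F' regroups the rows so cube[m, p, c] is image m on pass p.
cube = flat.reshape(test_size, tta_steps, n_classes, order='F')
avg = cube.mean(axis=1)  # average over the TTA passes

print(avg[:, 0])  # image m averaged over values m and m+10 -> [5. 6. 7.]
```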
Basis Pursuit DeNoising
=======================
This example demonstrates the use of class [admm.bpdn.BPDN](http://sporco.rtfd.org/en/latest/modules/sporco.admm.bpdn.html#sporco.admm.bpdn.BPDN) to solve the Basis Pursuit DeNoising (BPDN) problem [[16]](http://sporco.rtfd.org/en/latest/zreferences.html#id16)
$$\mathrm{argmin}_\mathbf{x} \; (1/2) \| D \mathbf{x} - \mathbf{s} \|_2^2 + \lambda \| \mathbf{x} \|_1 \;,$$
where $D$ is the dictionary, $\mathbf{x}$ is the sparse representation, and $\mathbf{s}$ is the signal to be represented. In this example the BPDN problem is used to estimate the reference sparse representation that generated a signal from a noisy version of the signal.
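For intuition, here is a minimal proximal-gradient (ISTA) sketch of the same objective — not SPORCO's ADMM solver, just an illustration that BPDN can be solved by alternating a gradient step on the data term with soft thresholding for the $\ell_1$ term (all sizes and parameters below are chosen for illustration only):

```python
import numpy as np

# Toy BPDN instance: random dictionary, sparse reference code, noiseless signal.
rng = np.random.default_rng(0)
N, M, K = 64, 128, 4
D = rng.standard_normal((N, M)) / np.sqrt(N)
x0 = np.zeros(M)
x0[rng.permutation(M)[:K]] = rng.standard_normal(K)
s = D @ x0

lmbda = 0.01
obj = lambda x: 0.5 * np.sum((D @ x - s)**2) + lmbda * np.abs(x).sum()
step = 1.0 / np.linalg.norm(D, 2)**2   # 1 / Lipschitz constant of the gradient
x = np.zeros(M)
for _ in range(500):
    z = x - step * (D.T @ (D @ x - s))                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lmbda, 0)  # soft threshold

print('objective: %.3f -> %.3f' % (obj(np.zeros(M)), obj(x)))
```

ADMM, as used by `admm.bpdn.BPDN` below, typically converges in far fewer iterations than this simple scheme.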
```
from __future__ import print_function
from builtins import input
import numpy as np
from sporco.admm import bpdn
from sporco import util
from sporco import plot
plot.config_notebook_plotting()
```
Configure problem size, sparsity, and noise level.
```
N = 512 # Signal size
M = 4*N # Dictionary size
L = 32 # Number of non-zero coefficients in generator
sigma = 0.5 # Noise level
```
Construct random dictionary, reference random sparse representation, and test signal consisting of the synthesis of the reference sparse representation with additive Gaussian noise.
```
# Construct random dictionary and random sparse coefficients
np.random.seed(12345)
D = np.random.randn(N, M)
x0 = np.zeros((M, 1))
si = np.random.permutation(list(range(0, M-1)))
x0[si[0:L]] = np.random.randn(L, 1)
# Construct reference and noisy signal
s0 = D.dot(x0)
s = s0 + sigma*np.random.randn(N,1)
```
Set BPDN solver class options.
```
opt = bpdn.BPDN.Options({'Verbose': False, 'MaxMainIter': 500,
'RelStopTol': 1e-3, 'AutoRho': {'RsdlTarget': 1.0}})
```
Select the regularization parameter $\lambda$ by evaluating the error in recovering the sparse representation over a logarithmically spaced grid. (The reference representation is assumed to be known, which is not realistic in a real application.) A function is defined that evaluates the BPDN recovery error for a specified $\lambda$, and this function is evaluated in parallel by [sporco.util.grid_search](http://sporco.rtfd.org/en/latest/modules/sporco.util.html#sporco.util.grid_search).
```
# Function computing reconstruction error at lmbda
def evalerr(prm):
lmbda = prm[0]
b = bpdn.BPDN(D, s, lmbda, opt)
x = b.solve()
return np.sum(np.abs(x-x0))
# Parallel evalution of error function on lmbda grid
lrng = np.logspace(1, 2, 20)
sprm, sfvl, fvmx, sidx = util.grid_search(evalerr, (lrng,))
lmbda = sprm[0]
print('Minimum ℓ1 error: %5.2f at 𝜆 = %.2e' % (sfvl, lmbda))
```
Once the best $\lambda$ has been determined, run BPDN with verbose display of ADMM iteration statistics.
```
# Initialise and run BPDN object for best lmbda
opt['Verbose'] = True
b = bpdn.BPDN(D, s, lmbda, opt)
x = b.solve()
print("BPDN solve time: %.2fs" % b.timer.elapsed('solve'))
```
Plot comparison of reference and recovered representations.
```
plot.plot(np.hstack((x0, x)), title='Sparse representation',
lgnd=['Reference', 'Reconstructed'])
```
Plot the $\lambda$ error curve, as well as the functional value, residuals, and penalty parameter $\rho$ over the ADMM iterations.
```
its = b.getitstat()
fig = plot.figure(figsize=(15, 10))
plot.subplot(2, 2, 1)
plot.plot(fvmx, x=lrng, ptyp='semilogx', xlbl='$\lambda$',
ylbl='Error', fig=fig)
plot.subplot(2, 2, 2)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(2, 2, 3)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'], fig=fig)
plot.subplot(2, 2, 4)
plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig)
fig.show()
```
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from collections import OrderedDict
import time
from sklearn.metrics import mean_squared_error,roc_auc_score,mean_absolute_error,log_loss
import sys
sys.path.append('benchmark/')
from gammli_test import gammli
from xgb_test import xgb
from svd_test import svd
from deepfm_fm_test import deepfm_fm
import sys
sys.path.append('../')
from gammli.GAMMLI import GAMMLI
from gammli.DataReader import data_initialize
from gammli.utils import local_visualize
from gammli.utils import global_visualize_density
from gammli.utils import feature_importance_visualize
from gammli.utils import plot_trajectory
from gammli.utils import plot_regularization
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
data= pd.read_csv('data/simulation/simulation_classification.csv')
task_type = "Classification"
meta_info = OrderedDict()
meta_info['uf_1']={'type': 'continues','source':'user'}
meta_info['uf_2']={'type': 'continues','source':'user'}
meta_info['uf_3']={'type': 'continues','source':'user'}
meta_info['uf_4']={'type': 'continues','source':'user'}
meta_info['uf_5']={'type': 'continues','source':'user'}
meta_info['if_1']={'type': 'continues','source':'item'}
meta_info['if_2']={'type': 'continues','source':'item'}
meta_info['if_3']={'type': 'continues','source':'item'}
meta_info['if_4']={'type': 'continues','source':'item'}
meta_info['if_5']={'type': 'continues','source':'item'}
meta_info['user_id']={"type":"id",'source':'user'}
meta_info['item_id']={"type":"id",'source':'item'}
meta_info['target']={"type":"target",'source':''}
#the best shrinkage is 0.840000
#the best combination is 0.768746
lx_params = {
"rank": 3,
"main_effect_epochs":500,
"interaction_epochs" : 500 ,
"tuning_epochs" : 50 ,
"mf_training_iters": 300,
"u_group_num":50,
"i_group_num":50,
"best_shrinkage":0.65,
"best_combination":0.4,
"auto_tune":False,
"verbose":False
}
deepfm_fm_params = {
"epochs":200,
"loss_type" : 'logloss' ,
"eval_metric" : log_loss ,
"greater_is_better": False,
"verbose":False,
"early_stopping":True
}
result_gammli = gammli('warm',data, meta_info,task_type , random_state=0, params=lx_params)
result_svd = svd('warm',data, meta_info, task_type , random_state=0)
result_deepfm, result_fm = deepfm_fm('warm',data, meta_info,task_type , random_state=0, params=deepfm_fm_params)
result_xgb = xgb('warm',data, meta_info, task_type , random_state=0)
result_sim_std = pd.concat([result_gammli,result_svd,result_xgb,result_deepfm,result_fm], axis=0)
result_sim_std.to_csv('result/simulation_classification/simulation_classification_result.csv',index=None)
```
## Explanation
```
data= pd.read_csv('data/simulation/simulation_classification.csv')
task_type = "Classification"
meta_info = OrderedDict()
meta_info['uf_1']={'type': 'continues','source':'user'}
meta_info['uf_2']={'type': 'continues','source':'user'}
meta_info['uf_3']={'type': 'continues','source':'user'}
meta_info['uf_4']={'type': 'continues','source':'user'}
meta_info['uf_5']={'type': 'continues','source':'user'}
meta_info['if_1']={'type': 'continues','source':'item'}
meta_info['if_2']={'type': 'continues','source':'item'}
meta_info['if_3']={'type': 'continues','source':'item'}
meta_info['if_4']={'type': 'continues','source':'item'}
meta_info['if_5']={'type': 'continues','source':'item'}
meta_info['user_id']={"type":"id",'source':'user'}
meta_info['item_id']={"type":"id",'source':'item'}
meta_info['target']={"type":"target",'source':''}
train , test = train_test_split(data,test_size=0.2 ,random_state=0)
tr_x, tr_Xi, tr_y, tr_idx, te_x, te_Xi, te_y, val_x, val_Xi, val_y, val_idx, meta_info, model_info, sy, sy_t = data_initialize(train,test,meta_info,task_type ,'warm', 0, True)
model = GAMMLI(model_info=model_info, meta_info=meta_info, subnet_arch=[20, 10],interact_arch=[20, 10],activation_func=tf.tanh, batch_size=1000, lr_bp=0.01, auto_tune=False,
interaction_epochs=500,main_effect_epochs=70,tuning_epochs=50,loss_threshold_main=0.01,loss_threshold_inter=0.01,combine_range=0.9,
verbose=True, early_stop_thres=100,interact_num=10,u_group_num=50,i_group_num=50,scale_ratio=0.8,n_power_iterations=5,n_oversamples=0,
mf_training_iters=300,change_mode=True,convergence_threshold=0.001,max_rank=3,wc='warm',interaction_restrict='intra')
model.fit(tr_x, val_x, tr_y, val_y, tr_Xi, val_Xi, tr_idx, val_idx)
simu_dir = 'result'
data_dict_logs = model.final_gam_model.summary_logs(save_dict=False)
data_dict_logs.update({"err_train_mf":model.final_mf_model.mf_mae,
"err_val_mf":model.final_mf_model.mf_valmae})
plot_trajectory(data_dict_logs, folder=simu_dir, name="s1_traj_plot", log_scale=True, save_png=False, save_eps=False)
plot_regularization(data_dict_logs, folder=simu_dir, name="s1_regu_plot", log_scale=True, save_png=False, save_eps=False)
data_dict = model.final_gam_model.global_explain(0,save_dict=False,)
feature_importance_visualize(data_dict, save_png=False, folder=simu_dir, name='s1_feature')
global_visualize_density(data_dict, save_png=False, folder=simu_dir, name='s1_global')
data_dict_local = model.local_explain(0,0,tr_x,tr_Xi,tr_y)
local_visualize(data_dict_local, save_png=False, folder=simu_dir, name='s1_local',task_type='Classification')
model.relation_plot(0.6,5,False)
new = te_x[91,:].reshape(1,-1)
_ = model.cold_start_analysis(new,'item',1.96)
```
## Tools that we will use
- **Pip**
Python's official package manager, most commonly used to install packages published on the Python Package Index (PyPI).
Run:
```bash
pip --version
```
- **Conda**
The Anaconda package manager, which automates the process of installing, updating, and removing packages.
`pip` is a general-purpose manager for Python packages; `conda` is a language-agnostic cross-platform environment manager.
Run:
```bash
conda info
conda env list
where conda # this output should be in your PATH variable
```
See *Conda_Cheatsheet.pdf* inside the Student_Resources for more useful commands.
- **Anaconda**
A free and open-source distribution that comes with a lot of tools right out of the box: either Python or R, a package manager called conda, and many other pre-installed libraries and packages, usually related to analytics and scientific computing, that help to plot and process large amounts of data.
Anaconda is an option that many people prefer because it simplifies issues related to installing different versions of Python, creating different environments, handling privileges, etc.
If you are on Windows, use Anaconda Prompt instead of Windows Command Prompt.
Run:
```bash
where anaconda
```
- **Jupyter**
*Project Jupyter* was created from *IPython Notebooks* with the intention to be robust and language agnostic (Julia, Python, and R notebooks). It is designed for creating reproducible computational narratives.
*JupyterLab* is an interactive development environment for working with notebooks, code and data. JupyterLab has full support for Jupyter notebooks. Additionally, JupyterLab enables you to use text editors, terminals, data file viewers, and other custom components side by side with notebooks in a tabbed work area.
- **Git**
Used to publish the workshop's sections (.ipynb files):
https://help.github.com/articles/working-with-jupyter-notebook-files-on-github/
- **Python**
An interpreted, cross-platform, high-level programming language for general-purpose programming.
Useful commands:
```bash
where python # this output should be in your PATH variable
python hello_world.py  # to run a script
python hello_human.py Victor
```
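For reference, `hello_human.py` presumably reads its argument from `sys.argv`; a possible version (the actual workshop script may differ):

```python
# hello_human.py -- a hypothetical sketch; greet the name passed on the command line.
import sys

def main(argv):
    # argv[0] is the script name; argv[1] is the first argument, if any.
    name = argv[1] if len(argv) > 1 else 'world'
    return f'Hello, {name}!'

if __name__ == '__main__':
    print(main(sys.argv))
```

Under this sketch, `python hello_human.py Victor` prints `Hello, Victor!`.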
## Tools that we will not be using
- **Miniconda**
Another software distribution. Miniconda is essentially an installer for an empty conda environment, containing only conda and Python, so that you can install what you need from scratch.
- **IPython**
Currently, IPython fulfills two roles: it is the Python backend to the Jupyter notebook (a.k.a. the kernel) and an interactive Python shell.
## To run a script from a Jupyter Notebook:
```
%run hello_world.py
%run hello_human.py Victor
```
# OGGM flowlines: where are they?
In this notebook we show how to access the OGGM flowlines location before, during, and after a run.
Some of the code shown here will make it to the OGGM codebase [eventually](https://github.com/OGGM/oggm/issues/1111).
```
from oggm import cfg, utils, workflow, tasks, graphics
from oggm.core import flowline
import salem
import xarray as xr
import pandas as pd
import numpy as np
import geopandas as gpd
import matplotlib.pyplot as plt
cfg.initialize(logging_level='WARNING')
```
## Get ready
```
# Where to store the data
cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-flowlines', reset=True)
# Which glaciers?
rgi_ids = ['RGI60-11.00897']
# We start from prepro level 3 with all data ready
gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=40)
gdir = gdirs[0]
gdir
```
## Where is the terminus of the RGI glacier?
There are several ways to get the terminus, depending on what you want. They do not necessarily give the exact same result:
### Terminus as the lowest point on the glacier
```
# Get the topo data and the glacier mask
with xr.open_dataset(gdir.get_filepath('gridded_data')) as ds:
topo = ds.topo
# Glacier outline raster
mask = ds.glacier_ext
topo.plot();
topo_ext = topo.where(mask==1)
topo_ext.plot();
# Get the terminus
terminus = topo_ext.where(topo_ext==topo_ext.min(), drop=True)
# Project its coordinates from the local UTM to WGS-84
t_lon, t_lat = salem.transform_proj(gdir.grid.proj, 'EPSG:4326', terminus.x[0], terminus.y[0])
print('lon, lat:', t_lon, t_lat)
print('google link:', f'https://www.google.com/maps/place/{t_lat},{t_lon}')
```
### Terminus as the lowest point on the main centerline
```
# Get the centerlines
cls = gdir.read_pickle('centerlines')
# Get the coord of the last point of the main centerline
cl = cls[-1]
i, j = cl.line.coords[-1]
# These coords are in glacier grid coordinates. Let's convert them to lon, lat:
t_lon, t_lat = gdir.grid.ij_to_crs(i, j, crs='EPSG:4326')
print('lon, lat:', t_lon, t_lat)
print('google link:', f'https://www.google.com/maps/place/{t_lat},{t_lon}')
```
### Terminus as the lowest point on the main flowline
A "centerline" in the OGGM jargon is not the same as a "flowline". Flowlines have a fixed dx, and their terminus is not necessarily exactly on the glacier outline. Code-wise it's very similar though:
```
# Get the flowlines
cls = gdir.read_pickle('inversion_flowlines')
# Get the coord of the last point of the main centerline
cl = cls[-1]
i, j = cl.line.coords[-1]
# These coords are in glacier grid coordinates. Let's convert them to lon, lat:
t_lon, t_lat = gdir.grid.ij_to_crs(i, j, crs='EPSG:4326')
print('lon, lat:', t_lon, t_lat)
print('google link:', f'https://www.google.com/maps/place/{t_lat},{t_lon}')
```
### Bonus: convert the centerlines to a shapefile
```
output_dir = utils.mkdir('outputs')
utils.write_centerlines_to_shape(gdirs, path=f'{output_dir}/centerlines.shp')
sh = gpd.read_file(f'{output_dir}/centerlines.shp')
sh.plot();
```
Remember: the "centerlines" are not the same things as "flowlines" in OGGM. The latter objects undergo further quality checks, such as the impossibility for ice to "climb", i.e. to have negative slopes. The flowlines are therefore sometimes shorter than the centerlines:
```
utils.write_centerlines_to_shape(gdirs, path=f'{output_dir}/flowlines.shp', flowlines_output=True)
sh = gpd.read_file(f'{output_dir}/flowlines.shp')
sh.plot();
```
## Flowline geometry after a run: with the new flowline diagnostics (new in v1.6.0!!)
Starting from OGGM version 1.6.0, the choice can be made to store the OGGM flowline diagnostics. This can
be done, either by using the `store_fl_diagnostics=True` keyword argument to the respective simulation task
or by setting `cfg.PARAMS['store_fl_diagnostics']=True` as a global parameter (*Note: the `store_fl_diagnostic_variables`
parameter allows you to control what variables are saved*). Here we use the keyword argument:
```
tasks.init_present_time_glacier(gdir)
tasks.run_constant_climate(gdir, nyears=100, y0=2000, store_fl_diagnostics=True);
```
We will now open the flowline diagnostics you just stored during the run. Each flowline is stored as one group in the main dataset, and there are as many groups as flowlines. "Elevation band flowlines" always results in one flowline (`fl_0`), but there might be more depending on the glacier and/or settings you used.
Here we have more than one flowline, we pick the last one:
```
f = gdir.get_filepath('fl_diagnostics')
with xr.open_dataset(f) as ds:
# We use the "base" grouped dataset to learn about the flowlines
fl_ids = ds.flowlines.data
# We pick the last flowline (the main one)
with xr.open_dataset(f, group=f'fl_{fl_ids[-1]}') as ds:
# The data is compressed - it's a good idea to store it to memory
# before playing with it
ds = ds.load()
ds
```
The graphic routines in OGGM can't (yet) be used to plot the flowline diagnostics, but here are some plots
made with xarray alone.
The following plot shows the volume timeseries of the simulation **for this one flowline, not the entire glacier!**:
```
ds.volume_m3.sum(dim='dis_along_flowline').plot();
```
The glacier surface height is needed for the next plot. It can be computed by adding up the height of the glacier bed and the glacier thickness.
```
surface_m = ds.bed_h + ds.thickness_m
```
Here we will plot the glacier at the start of the simulation and at year 50, but feel free to use other years that have been simulated.
```
# Here the glacier at year 0 and at year 50 of the run is plotted in transparent blue.
plt.fill_between(ds.dis_along_flowline, surface_m.sel(time=0), ds.bed_h, color='C0', alpha=0.30)
plt.fill_between(ds.dis_along_flowline, surface_m.sel(time=50), ds.bed_h, color='C1', alpha=0.30)
# Here we plot the glacier surface in both years and the glacier bed.
surface_m.sel(time=0).plot(label='Initial glacier surface', color='C0')
surface_m.sel(time=50).plot(label='Glacier surface at year 50', color='C1')
ds.bed_h.plot(label='glacier bed', color='k')
plt.legend(); plt.ylabel('Elevation [m]');
```
The flowline diagnostics also store velocities along the flowline:
```
# Note that velocity at the first step of the simulation is NaN
ds.ice_velocity_myr.sel(time=[1, 10, 20, 100]).plot(hue='time');
```
### Location of the terminus over time
Let's find the indices where the terminus is (i.e. the last point where ice is thicker than 1 m), and link these to the lon, lat positions along the flowlines. First, let's get the data we need into pandas dataframes:
```
# Convert the xarray data to pandas
df_coords = ds[['point_lons', 'point_lats']].to_dataframe()
df_thick = ds.thickness_m.to_pandas().T
df_thick[[0, 50, 100]].plot(title='Main flowline thickness');
```
The first method to locate the terminus uses fancy pandas functions but may be more cryptic for less experienced pandas users:
```
# Nice trick from https://stackoverflow.com/questions/34384349/find-index-of-last-true-value-in-pandas-series-or-dataframe
dis_term = (df_thick > 1)[::-1].idxmax()
# Select the terminus coordinates at these locations
loc_over_time = df_coords.loc[dis_term].set_index(dis_term.index)
# Plot them over time
loc_over_time.plot.scatter(x='point_lons', y='point_lats', c=loc_over_time.index, colormap='viridis');
plt.title('Location of the terminus over time');
# Plot them on a google image - you need an API key for this
# api_key = ''
# from motionless import DecoratedMap, LatLonMarker
# dmap = DecoratedMap(maptype='satellite', key=api_key)
# for y in [0, 20, 40, 60, 80, 100]:
# tmp = loc_over_time.loc[y]
# dmap.add_marker(LatLonMarker(tmp.lat, tmp.lon, ))
# print(dmap.generate_url())
```
<img src='https://maps.googleapis.com/maps/api/staticmap?key=AIzaSyDWG_aTgfU7CeErtIzWfdGxpStTlvDXV_o&maptype=satellite&format=png&scale=1&size=400x400&sensor=false&language=en&markers=%7C46.818796056851475%2C10.802746777546085%7C46.81537664036365%2C10.793672904092187%7C46.80792268953582%2C10.777563608554978%7C46.7953190811109%2C10.766412086223571%7C46.79236232808986%2C10.75236937607986%7C46.79236232808986%2C10.75236937607986'>
And now, method 2: less fancy but maybe easier to read?
```
for yr in [0, 20, 40, 60, 80, 100]:
# Find the last index of the terminus
p_term = np.nonzero(df_thick[yr].values > 1)[0][-1]
# Print the location of the terminus
print(f'Terminus pos at year {yr}', df_coords.iloc[p_term][['point_lons', 'point_lats']].values)
```
## Flowline geometry after a run: with `FileModel`
This method uses another way to output the model geometry (`model_geometry` files), which are less demanding in disk space but require more computations to restore the full geometry after a run (OGGM will help you to do that). This method is to be preferred to the above if you want to save disk space or if you don't have yet access to flowline diagnostics files.
Let's do a run first:
```
cfg.PARAMS['store_model_geometry'] = True # We want to get back to it later
tasks.init_present_time_glacier(gdir)
tasks.run_constant_climate(gdir, nyears=100, y0=2000);
```
We use a `FileModel` to read the model output:
```
fmod = flowline.FileModel(gdir.get_filepath('model_geometry'))
```
A FileModel behaves like a OGGM's `FlowlineModel`:
```
fmod.run_until(0) # Point the file model to year 0 in the output
graphics.plot_modeloutput_map(gdir, model=fmod) # plot it
fmod.run_until(100) # Point the file model to year 100 in the output
graphics.plot_modeloutput_map(gdir, model=fmod) # plot it
# Bonus - get back to e.g. the volume timeseries
fmod.volume_km3_ts().plot();
```
OK, now create a table of the main flowline's grid points location and bed altitude (this does not change with time):
```
fl = fmod.fls[-1] # Main flowline
i, j = fl.line.xy # xy flowline on grid
lons, lats = gdir.grid.ij_to_crs(i, j, crs='EPSG:4326') # to WGS84
df_coords = pd.DataFrame(index=fl.dis_on_line*gdir.grid.dx)
df_coords.index.name = 'Distance along flowline'
df_coords['lon'] = lons
df_coords['lat'] = lats
df_coords['bed_elevation'] = fl.bed_h
df_coords.plot(x='lon', y='lat');
df_coords['bed_elevation'].plot();
```
Now store a time varying array of ice thickness, surface elevation along this line:
```
years = np.arange(0, 101)
df_thick = pd.DataFrame(index=df_coords.index, columns=years, dtype=np.float64)
df_surf_h = pd.DataFrame(index=df_coords.index, columns=years, dtype=np.float64)
df_bed_h = pd.DataFrame()
for year in years:
fmod.run_until(year)
fl = fmod.fls[-1]
df_thick[year] = fl.thick
df_surf_h[year] = fl.surface_h
df_thick[[0, 50, 100]].plot();
plt.title('Ice thickness at three points in time');
f, ax = plt.subplots()
df_surf_h[[0, 50, 100]].plot(ax=ax);
df_coords['bed_elevation'].plot(ax=ax, color='k');
plt.title('Glacier elevation at three points in time');
```
### Location of the terminus over time
Let's find the indices where the terminus is (i.e. the last point where ice is thicker than 1m), and link these to the lon, lat positions along the flowlines.
The first method uses fancy pandas functions but may be more cryptic for less experienced pandas users:
```
# Nice trick from https://stackoverflow.com/questions/34384349/find-index-of-last-true-value-in-pandas-series-or-dataframe
dis_term = (df_thick > 1)[::-1].idxmax()
# Select the terminus coordinates at these locations
loc_over_time = df_coords.loc[dis_term].set_index(dis_term.index)
# Plot them over time
loc_over_time.plot.scatter(x='lon', y='lat', c=loc_over_time.index, colormap='viridis');
plt.title('Location of the terminus over time');
```
And now, method 2: less fancy but maybe easier to read?
```
for yr in [0, 20, 40, 60, 80, 100]:
# Find the last index of the terminus
p_term = np.nonzero(df_thick[yr].values > 1)[0][-1]
# Print the location of the terminus
print(f'Terminus pos at year {yr}', df_coords.iloc[p_term][['lon', 'lat']].values)
```
## Comments on "elevation band flowlines"
If you use elevation band flowlines, the location of the flowlines is not known: the glacier is an even more simplified representation of the real one. In this case, if you are interested in tracking the terminus position, you may need workarounds, such as tracking the terminus retreat along the flowline over time, or similar.
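One such workaround can be sketched with a toy thickness table (the 1 m threshold and the distance-indexed layout mirror `df_thick` above; the numbers here are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy ice-thickness table: rows = distance along flowline [m], columns = years.
dis = np.arange(0, 500, 100)
thick = pd.DataFrame({0: [50, 30, 10, 5, 2],
                      50: [45, 25, 8, 2, 0],
                      100: [40, 20, 3, 0, 0]}, index=dis)

# Terminus = last grid point thicker than 1 m; idxmax on the reversed
# boolean frame returns that distance for every year at once.
term_dis = (thick > 1)[::-1].idxmax()

# Retreat relative to year 0, usable even without lon/lat coordinates.
retreat = term_dis[0] - term_dis
print(retreat.to_dict())
```

The resulting series gives the retreat distance per year along the flowline, which is well defined even when the flowline has no geographical location.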
## What's next?
- return to the [OGGM documentation](https://docs.oggm.org)
- back to the [table of contents](welcome.ipynb)
# Web Scraping with APIs
### What is an API?
An **Application Programming Interface (API)** enables developers to build repetitive but highly sophisticated software with minimal code. APIs act as prepackaged functionality that developers can drop into their code. Apps with map-based location are a good example: almost none of them built their own mapping technology, because that would be a costly endeavor for any small company. Instead, they most likely use a pre-built map API, like Google's, and avoid the overhead cost of building that functionality themselves.
### How does an API work?
Basically, APIs act as an intermediary between two pieces of software. Implementing APIs in code requires two steps:
1. The script creates a **GET** request and sends it to the API with parameters.
2. After the request is sent, the API returns a response, typically encoded in JSON format.
Note: **JSON** is the primary format in which data is passed back and forth with APIs, and most API servers send their responses as JSON.
### Why use an API?
Since APIs give us direct access to the data on the web server, they are usually a better option than building a web scraper from scratch. A web scraper can be used to extract data when the web server does not provide an API to access it.
## Part I - Creating API Requests
In this section, we are going to explore the first step of using APIs: creating the request. To practice, we are using the free API on [upcitemdb.com](https://devs.upcitemdb.com/). Select the "Explorer FREE" service, which requires no sign-up.
<br>
<br>
<div>
<img src="images/upcitemdb.png"/>
</div>
<br>
This service translates barcodes into a whole host of information, including the product's name and brand. To see what an API call should look like, use their online GUI. For this demo, we use a Crosley Furniture barcode.
#### Sending API Request:
<br>
<div>
<img src="images/upcitemdb_request.png"/>
</div>
<br>
#### Return JSON Format:
<div>
<img src="images/upcitemdb_get.png"/>
</div>
<br>
This is a lot of information to break down, but for now let's focus on the request URL line at the top. This line contains two parts: the base URL, or everything preceding the question mark, and the parameters of the API call, or everything after it. The base URL is independent of our parameters and will be the same for any barcode we look up. The parameters, on the other hand, are specific to this barcode search. This may sound complex, but it breaks down very simply into code.
Let's start by constructing the base URL for the API call!
```
# Import the dependencies
import requests
# Assigning the base URL
baseURL = 'https://api.upcitemdb.com/prod/trial/lookup'
# Construct the parameter for the API call
# The parameter is a Python dictionary
parameters ={'upc': '710244229739'}
# Send the GET query containing the parameters and the base URL
response = requests.get(baseURL, params=parameters)
# Print the response URL attribute
print(response.url)
```
## Part II - Parsing through JSON
In the previous section, we built the API request to the UPCitemdb interface; now it's time to work with the API's response. Recall that APIs transfer information as JSON documents, which are not a native data structure in Python. To make the response more navigable, we use the standard-library **json** module. Its **loads** function converts JSON documents into dictionaries, making the API's response much more Python-friendly.
```
# Import the JSON package
import json
# Create the response content
content = response.content
# Print the content
print(content)
# Convert the JSON document into a dictionary
info = json.loads(content)
# Print the info dictionary
print(type(info))
print(info)
```
Now that we have a Python dictionary, we can extract the data we need by its keys. We want the item's title, brand name, highest price, and lowest price. All of these fields live under the "items" key, so we begin by extracting the item list from the dictionary.
```
# Extract the item from the dictionary
item = info['items']
# Extract the item info from item
itemInfo = item[0]
# Extract the title of the item
title = itemInfo['title']
# Extract the brand name of the item
brand = itemInfo['brand']
# Extract the highest price
high = itemInfo['highest_recorded_price']
# Extract the lowest price
low = itemInfo['lowest_recorded_price']
# Print the title, brand, and prices
print("Item Title: ", title)
print("Item Brand: ", brand)
print("Highest Price: ", high)
print("Lowest Price: ", low)
```
## Part III - Using API Keys
For most API services, an **API key** is required to access the software. Making a key mandatory lets the developers see who is calling their software and monitor how many calls each client makes. This matters because there is an overhead cost for the API developer, who has to constantly maintain the interface's software. By monitoring their clients' calls, they can price plans appropriately for each client's needs. As a result, some APIs are locked behind accounts or paywalls, so we need to create an account with the organization hosting the API to obtain a key for access.
In this demo, we are using OpenWeatherMap's API. You can create a free account [here](https://home.openweathermap.org/users/sign_up), which gives you access to the base features. Once you complete the registration process, they will send you a unique API key for your account that you must use when constructing requests.
<br>
<br>
<div>
<img src="images/OpenWeather.png"/>
</div>
<br>
Heading to their documentation for the three-hour, five-day forecast, we can see how the API is called under the API call header.
<br>
<br>
<div>
<img src="images/OpenWeather_doc.png"/>
</div>
<br>
We follow the same logic, creating the base URL and constructing the parameters for the API call. In this example, the API call needs a city name and a country code. We choose the city Seattle with the country code US.
### Attach the API Key:
To make sure the interface responds to us, we need to attach the API key to our GET request. To do this, we store the API key in a separate Python file (config.py). You can insert your own key into this file to follow the steps in this demo. We are going to import the API key directly from this file.
We store the API key in a separate file because we do not want to include it in our code, which could expose the private key to others. Exposing an API key publicly is dangerous because most of these services are linked to your payment account.
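A minimal `config.py` could look like this (the variable name matches the import below; the key value is a placeholder you replace with your own):

```python
# config.py -- keep this file out of version control,
# e.g. by adding "config.py" to your .gitignore.
api_key = "YOUR_OPENWEATHER_API_KEY"  # placeholder: paste your real key here
```

With this file next to your notebook, `from config import api_key` brings the key into scope without it ever appearing in the notebook itself.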
```
# Import the dependencies and API key
import requests
import json
from config import api_key
# Assign the base URL
# Remember to attach the http protocol to the front of the URL
baseURL = "http://api.openweathermap.org/data/2.5/forecast"
# Construct the parameter and API key for the request
parameters = {'APPID':api_key, 'q':'Seattle,US'}
# Create the request
response = requests.get(baseURL, params=parameters)
# Print the content of the response
print(response.content)
# Convert the JSON document into a dictionary
info = json.loads(response.content)
# Print the dictionary
print(info)
```
## Part IV - Linking API Calls
Now that we have seen how to make a single API call, let's briefly talk about linking API calls. This strategy revolves around chaining information between APIs to generate complex processes with minimal coding. With the countless APIs on the market today, creating such software is more feasible than ever. Here we have pulled up [RapidAPI.com](https://rapidapi.com/), a marketplace that contains hundreds of free APIs for software developers.
<br>
<br>
<div>
<img src="images/RapidAPI.png"/>
</div>
<br>
This can be a creative playground for cool projects. There are APIs for text messaging, weather info, recipes, text analysis, and facial recognition. For example, by linking APIs we could create a program that receives a text containing a food article via Twilio's API, reduces the article to key ingredient names with a text-analysis API, and finally generates a recipe based on the original article using a food-recipe API. This complex process takes just three API calls, which, as we have seen, is quite a simple thing to make. Hopefully, armed with this context, you can venture forth and build your own complex processes using multiple APIs.
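The chaining pattern itself can be sketched with stub functions standing in for the real messaging, text-analysis, and recipe APIs (no network calls are made; a real version would issue a `requests.get` per step):

```python
def receive_text():
    # Stub for an inbound-SMS API (e.g. a messaging provider webhook).
    return "Try this ramen: wheat noodles, pork broth, soft-boiled egg."

def extract_ingredients(text):
    # Stub for a text-analysis API: pull out known ingredient phrases.
    known = ["wheat noodles", "pork broth", "soft-boiled egg"]
    return [k for k in known if k in text]

def find_recipe(ingredients):
    # Stub for a recipe-search API keyed on the extracted ingredients.
    return {"title": "Quick ramen", "uses": ingredients}

# Chain the three calls: each output feeds the next request.
recipe = find_recipe(extract_ingredients(receive_text()))
print(recipe["title"], "-", ", ".join(recipe["uses"]))
```

Each stub would become one API request, but the data flow between them stays exactly the same.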
# 02.01 - BASIC STRUCTURES
```
!wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/20201.xai4eng/master/content/init.py
import init; init.init(force_download=False); init.get_weblink()
```
## Introduction to Python
Python is **interpreted** and **dynamically typed**. Observe in the following lines:
- variables `a` and `b` have specific data types even if they are not explicitly defined.
- python guesses their types by how they are being used (`//` is integer division, while `/` is floating-point division)
- python keeps executing statements until an error is found $\rightarrow$ python is interpreted
```
a = 4
print (a, type(a))
b = 9.
print (b, type(b))
c = 9/4
print (c, type(c))
d = 9//4
print (d, type(d))
print (x)
```
but types can be enforced, or used for data conversion
```
a = 1
b = float(a)
print (a, "::", b)
s = "el valor de 'a' es "+str(a)
print (s)
```
## Notebook cell outputs
```
a = 1
b = float(a)
a,b
a = 1
a
b = float(a)
b
a = 1
print (a)
b = float(a)
b
```
## Lists
ordered sequences of objects of **any** kind with cool indexing.
indexing **starts at zero**.
```
b = [1,10, 20.5, "hola", 10., 12, 13, 14., 15,16, 17., 18, 19, 20.]
print (len(b))
print (b[:3])
print (b[3:])
print (b[-3:])
print (b[4])
print (b[5:10])
print (b[-5:-2])
print (b[::2])
print (b[::-1])
b.append('elfin')
b
```
can use variables as indexes
```
import numpy as np
i = np.random.randint(len(b))
print (i, '-->', b[i])
```
truly **any** object
```
a = 32
b = 10
s = [1,2,3,"hola", [10, "nunca", 90], a==b, -32]
print (s)
print (len(s))
s[4]
s[4][1]
s[3][1]
s[2][1]
```
**some list operations**
```
a = ["hola", 2, "adios"]
b = [-10., 1, [3, 4]]
a + b
a + a
a*2
a - b
2 in a
"hola" not in b
[3,4] in b
```
## Strings
a string is like a special kind of list
```
a = "en un lugar de la mancha"
a
a[3:]
a[-10:]
a[::-1]
a[::2]
'lugar' in a
```
with special operations
```
words = a.split()
words
"::".join(words)
a.upper()
a.startswith("en")
a.endswith("lugar")
a.find('lugar')
a[a.find('lugar'):]
```
## `for` loops
unlike other languages, `for` loops are defined over **iterables**. A `list` is an iterable, and there are many other iterables.
Python syntax is **indented**.
Observe python determines the semantics of `*` according to the context
```
b = [1,10, 20.5, "hola", 10.]
for i in b:
print (i, "-->", type(i), "x2 -->", i*2)
```
another example of python as an interpreted language
```
b = [1,10, 20.5, "hola", 10.]
for i in b:
print (i, "-->", type(i), "x2 -->", i**2)
```
**iterators**
sometimes we do not need to hold all the contents of a list in memory; elements can be generated as they are asked for.
```
k = range(-3,10,2)
k
```
size in memory
```
k.__sizeof__()
for i in k:
print (i, end=", ")
big_range = range(0,100000,2)
big_range.__sizeof__()
s = 0
for i in big_range:
s += i
s
```
if converted to a list, then they are physically in memory
```
a = list(big_range)
a.__sizeof__()
```
zip iterators
```
a = ["nada", "adios", 10.]
b = [1,10, 20.5, "hola", 10.]
for i,j in zip(a,b):
print (i,j)
```
## Other control structures
string formatting / if / then / break / continue / while, etc.
```
b = [1,10, 20.5, "hola", 10., 12]
for i in b:
if i=='hola':
break
print (i, type(i))
b = [1,10, 20.5, "hola", 10., 12]
for i in b:
if i=='hola':
break
elif type(i)==int:
print ("INT", end=" ")
elif i>10:
print (">10", end=" ")
print (i, type(i))
for i in b:
if i=='hola':
continue
elif type(i)==int:
print ("INT", end=" ")
elif i>10:
print (">10", end=" ")
else:
print ("UNK", end=" ")
print (i, type(i))
for pos, i in enumerate(b):
if i=='hola':
continue
print ("%02d :: %5.1f :: %20s"%(pos,i,type(i)))
i=1
while i<len(b):
print (i, b[i], type(b[i]))
i += 1
```
## Dictionaries
```
d = {"i1": 16, "nombre": "haskel", "edad": 32, 20: "el numero 20"}
d['i1']
d[20]
d.values()
d.keys()
for i in d.keys():
print("la clave", i, "tiene el valor", d[i])
for k,v in d.items():
print("la clave", k, "tiene el valor", v)
```
and can be updated
```
d[48] = "otro numero"
d["nada"] = 0
for k,v in d.items():
print("la clave", k, "tiene el valor", v)
```
## Tuples
tuples are **immutable** lists
```
a = (10,3,4,5)
a[3], a[::-1], sum(a)
a[3]=1
```
and thus can be used for keys, indexing, etc.
```
d = {}
d["hola"] = 3
d[1] = "one"
d[(4,5)] = "a tuple"
d
```
however
```
d[[4,5]]
```
```
import pickle
import sys
import cv2
import numpy as np
import os
import os.path
import torch
import torch.utils.data as data
sys.path.append('/home/raymond/project/DOTA_PyTorch/DOTA_devkit') # key step: make DOTA_devkit importable
import torchvision.transforms as transforms
from PIL import Image
from DOTA_devkit import dota_utils as util
from DOTA_devkit import DOTA
# from .voc_eval import voc_eval # VOCdevkit
"""
VOC_CLASSES = ('__background__', # always index 0
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat', 'chair',
'cow', 'diningtable', 'dog', 'horse',
'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor')
"""
DOTA_CLASSES = ('plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court',
'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter')
class DotaAnnTrans:
"""Transforms a DOTA annotation into a Tensor of bbox coords and label index
Initialized with a dictionary lookup of classnames to indexes
Arguments:
class_to_ind (dict, optional): dictionary lookup of classnames -> indexes
(default: indexing of DOTA's 15 classes in the order above)
keep_difficult (bool, optional): keep difficult instances or not
(default: False)
height (int): height
width (int): width
"""
def __init__(self, class_to_ind=None, keep_difficult=True,parseMode = 'parse_dota_rec'):
self.class_to_ind = class_to_ind or dict(
zip(DOTA_CLASSES, range(len(DOTA_CLASSES))))
self.keep_difficult = keep_difficult
self.parseMode = parseMode
if self.parseMode == 'parse_dota_rec':
self.parsekw = 'bndbox'
else:
self.parsekw = 'poly'
def __call__(self, target):
"""
Arguments:
target (annotation) : the target annotation to be made usable
will be an DOTA.anns
Returns:
a list containing lists of bounding boxes [bbox coords, class name]
"""
res = np.empty((0, 5))
'''
Detailed label-file parsing (VOC-style reference implementation):
for obj in target.iter('object'):
difficult = int(obj.find('difficult').text) == 1
if not self.keep_difficult and difficult:
continue
name = obj.find('name').text.lower().strip()
bbox = obj.find('bndbox')
pts = ['xmin', 'ymin', 'xmax', 'ymax']
bndbox = []
# append the bounding-box coordinates
for i, pt in enumerate(pts):
cur_pt = int(bbox.find(pt).text) - 1
# scale height or width
# cur_pt = cur_pt / width if i % 2 == 0 else cur_pt / height
bndbox.append(cur_pt)
'''
labels = []
# parse the Anns returned by DOTA.loadAnns
if self.parseMode == 'parse_dota_rec':
for num,ann in enumerate(target):
labels.append([])
labels[num].extend(list(ann['bndbox']))
labels[num].append(self.class_to_ind[ann['name']])
# stack the arrays vertically with np.vstack
res = np.vstack((res, labels)) # [xmin, ymin, xmax, ymax, label_ind]
# img_id = target.find('filename').text[:-4]
#
return res # [[xmin, ymin, xmax, ymax, label_ind], ... ]
class DOTADetection(data.Dataset):
"""DOTA Detection Dataset Object
input is image, target is annotation
Arguments:
rootPath (string): filepath to DOTA dataset folder, will be '/media/raymond/MainDrive/Dataset/DOTA'
image_set (string): imageset to use (eg. 'train', 'val', 'test', 'train_test')
(default: 'train')
(None) transform (callable, optional): transformation to perform on the
input image
preproc : pre-procced of images(eg: data augment) and annotations
(default: None) (called in train.py; the preproc class is defined in data_augment.py)
target_transform (callable, optional): transformation to perform on the
target `annotation`
(eg: take in caption string, return tensor of word indices)
dataset_name (string, optional): which dataset to load
(default: 'DOTA')
parseMode: choose the format for parsing the annotation
(default:'parse_dota_rec', which anns will be parsed as [xmin, ymin, xmax, ymax]
catNms: choose the category of objects to be loaded
(default: [] , means all)
"""
def __init__(self, rootPath, image_sets='train', preproc=None, target_transform=None,
dataset_name='DOTA', parseMode='parse_dota_rec', catNms=[]):
self.rootPath = rootPath
self.image_set = image_sets
# build the DOTA loading path
self.path = os.path.join(self.rootPath, self.image_set)
# preprocessing, default None
self.preproc = preproc
# target_transform = AnnotationTransform
self.target_transform = target_transform
self.name = dataset_name
# parsing mode
self.parseMode = parseMode
# DOTA category filter
self.catNms = catNms
# load DOTA (imgIDs, anns)
self.dataset = DOTA.DOTA(self.path, parseMode=self.parseMode)
self.imgIDs = self.dataset.getImgIds(self.catNms)
# encode the classes as numbers
# self.class_to_ind = dict(zip(DOTA_CLASSES,range(len(DOTA_CLASSES))))
def __getitem__(self, index):
img_id = self.imgIDs[index]
# use the DOTA devkit method to load imgs (cv2.imread under the hood)
img = self.dataset.loadImgs(img_id)[0]
target = self.dataset.loadAnns(imgId=img_id)
# target is the label
#
# img = cv2.imread(self._imgpath % img_id, cv2.IMREAD_COLOR)
# height, width, _ = img.shape
'''
Parse the Anns in detail'''
if self.target_transform is not None:
target = self.target_transform(target)
'''
Data augmentation, image resizing, etc. all happen in data_augment's preproc'''
if self.preproc is not None:
# preproc
img, target = self.preproc(img, target)
# print(img.size())
# target = self.target_transform(target, width, height)
# print(target.shape)
return img, target
def __len__(self):
return len(self.imgIDs)
from data_augment import preproc
root = '/media/raymond/MainDrive/Dataset/DOTA'
img_dim = 512
rgb_means = (104,117,123)
rgb_std = (1,1,1)
p = 0.6
dota_test = DOTADetection(root,
image_sets='train_test',
# preproc=(img_dim, rgb_means, rgb_std, p),
target_transform=DotaAnnTrans()
)
dota_dataloader = data.DataLoader(dota_test,
batch_size=4,
shuffle=False,
num_workers=2,
pin_memory=True)
dataiter = iter(dota_dataloader)  # DataLoaderIter is internal/removed in modern PyTorch; iter() is the supported way
epoch_size = len(dota_test)
print(epoch_size)
print(next(dataiter))
```
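Note that the default `DataLoader` collate function expects every sample to have the same shape, while DOTA images carry a variable number of boxes per image, so detection pipelines usually pass a custom `collate_fn`. A minimal sketch (using numpy arrays for illustration; a real version would stack `torch` tensors and be passed as `collate_fn=detection_collate` to the `DataLoader`):

```python
import numpy as np

def detection_collate(batch):
    """Stack images into one array and keep the variable-length
    annotation arrays ([xmin, ymin, xmax, ymax, label] per row)
    in a plain Python list, one entry per image."""
    imgs, targets = [], []
    for img, target in batch:
        imgs.append(np.asarray(img))
        targets.append(np.asarray(target, dtype=np.float64))
    return np.stack(imgs), targets

# Toy batch: two 3x4x4 "images" with 1 and 2 boxes respectively.
batch = [(np.zeros((3, 4, 4)), [[0, 0, 1, 1, 2]]),
         (np.ones((3, 4, 4)), [[0, 0, 2, 2, 5], [1, 1, 3, 3, 7]])]
imgs, targets = detection_collate(batch)
print(imgs.shape, [t.shape for t in targets])
```

Keeping targets as a list sidesteps the shape mismatch that would otherwise make the default collate raise on this dataset.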
# Bring your own pipe-mode algorithm to Amazon SageMaker
_**Create a Docker container for training SageMaker algorithms using Pipe-mode**_
---
## Contents
1. [Overview](#Overview)
1. [Preparation](#Preparation)
1. [Permissions](#Permissions)
1. [Code](#Code)
1. [train.py](#train.py)
1. [Dockerfile](#Dockerfile)
1. [Customize](#Customize)
1. [Train](#Train)
1. [Conclusion](#Conclusion)
---
## Overview
SageMaker Training supports two different mechanisms with which to transfer training data to a training algorithm: File-mode and Pipe-mode.
In File-mode, training data is downloaded to an encrypted EBS volume prior to commencing training. Once downloaded, the training algorithm trains by reading the downloaded training data files.
On the other hand, in Pipe-mode, the input data is transferred to the algorithm while it is training. This offers a few significant advantages over File-mode:
* In File-mode, training startup time is proportional to size of the input data. In Pipe-mode, the startup delay is constant, independent of the size of the input data. This translates to much faster training startup for training jobs with large GB/PB-scale training datasets.
* You do not need to allocate (and pay for) a large disk volume to be able to download the dataset.
* Throughput on IO-bound Pipe-mode algorithms can be multiple times faster than on equivalent File-mode algorithms.
However, these advantages come at a cost - a more complicated programming model than simply reading from files on a disk. This notebook aims to clarify what you need to do in order to use Pipe-mode in your custom training algorithm.
---
## Preparation
_This notebook was created and tested on an ml.t2.medium notebook instance._
Let's start by specifying:
- S3 URIs `s3_training_input` and `s3_model_output` that you want to use for training input and model data respectively. These should be within the same region as the Notebook Instance, training, and hosting. Since the "algorithm" you're building here doesn't really have any specific data-format, feel free to point `s3_training_input` to any s3 dataset you have, the bigger the dataset the better to test the raw IO throughput performance. For this example, the Boston Housing dataset will be copied over to your s3 bucket.
- The `training_instance_type` to use for training. More powerful instance types have more CPU and bandwidth which would result in higher throughput.
- The IAM role arn used to give training access to your data.
### Permissions
Running this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. This is because you'll be creating a new repository in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this, the new permissions will be available immediately.
```
import boto3
import pandas as pd
import sagemaker
# to load the boston housing dataset
from sklearn.datasets import *
# Get SageMaker session & default S3 bucket
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
s3 = sagemaker_session.boto_session.resource("s3")
bucket = sagemaker_session.default_bucket() # replace with your own bucket name if you have one
# helper functions to upload data to s3
def write_to_s3(filename, bucket, prefix):
filename_key = filename.split(".")[0]
key = "{}/{}/{}".format(prefix, filename_key, filename)
return s3.Bucket(bucket).upload_file(filename, key)
def upload_to_s3(bucket, prefix, filename):
url = "s3://{}/{}/{}".format(bucket, prefix, filename)
print("Writing data to {}".format(url))
write_to_s3(filename, bucket, prefix)
```
If you have a larger dataset you want to try, here is the place to swap in your dataset.
```
filename = "boston_house.csv"
# Download files from sklearns.datasets
tabular_data = load_boston()
tabular_data_full = pd.DataFrame(tabular_data.data, columns=tabular_data.feature_names)
tabular_data_full["target"] = pd.DataFrame(tabular_data.target)
tabular_data_full.to_csv(filename, index=False)
```
Upload the dataset to your bucket. You'll find it with the 'pipe_bring_your_own/training' prefix.
```
prefix = "pipe_bring_your_own/training"
training_data = "s3://{}/{}".format(bucket, prefix)
print("Training data in {}".format(training_data))
upload_to_s3(bucket, prefix, filename)
```
## Code
For the purposes of this demo you're going to write an extremely simple “training” algorithm in Python. In essence it will conform to the specifications required by SageMaker Training and will read data in Pipe-mode but will do nothing with the data, simply reading it and throwing it away. You're doing it this way to be able to illustrate only exactly what's needed to support Pipe-mode without complicating the code with a real training algorithm.
In Pipe-mode, data is pre-fetched from S3 at high-concurrency and throughput and streamed into Unix Named Pipes (aka FIFOs) - one FIFO per Channel per epoch. The algorithm must open the FIFO for reading and read through to `EOF` (or optionally abort mid-stream) and close its end of the file descriptor when done. It can then optionally wait for the next epoch's FIFO to get created and commence reading, iterating through epochs until it has achieved its completion criteria.
For this example, you'll need two supporting files:
### train.py
`train.py` simply iterates through 5 epochs on the `training` Channel. Each epoch involves reading the training data stream from a FIFO named `/opt/ml/input/data/training_${epoch}`. At the end of the epoch the code simply iterates to the next epoch, waits for the new epoch's FIFO to get created and continues on.
A lot of the code in `train.py` is merely boilerplate code, dealing with printing log messages, trapping termination signals etc. The main code that iterates through reading each epoch's data through its corresponding FIFO is the following:
```
!pygmentize train.py
```
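The core of that loop can be sketched as follows (a simplified stand-in for the real `train.py`, with error handling and signal trapping omitted; the channel path layout follows the FIFO naming described above, and `read_epoch` works on regular files too, which makes it easy to test locally):

```python
import os

def read_epoch(path, buffer_size=1 << 20):
    """Read one epoch's FIFO (or file) through to EOF; return bytes read."""
    total = 0
    with open(path, "rb") as fifo:
        while True:
            chunk = fifo.read(buffer_size)
            if not chunk:  # EOF: the writer closed its end of the pipe
                break
            total += len(chunk)  # a real algorithm would train on `chunk` here
    return total

def run_training(channel_dir="/opt/ml/input/data", channel="training", epochs=5):
    # SageMaker creates one FIFO per epoch: <channel>_0, <channel>_1, ...
    for epoch in range(epochs):
        fifo_path = os.path.join(channel_dir, f"{channel}_{epoch}")
        n = read_epoch(fifo_path)
        print(f"epoch {epoch}: read {n} bytes")
```

Opening the next epoch's FIFO blocks until SageMaker creates it, which is why the loop can simply iterate from one epoch to the next.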
### Dockerfile
You can use any of the preconfigured Docker containers that SageMaker provides, or build one from scratch. This example uses the [PyTorch - AWS Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md), then adds `train.py`, and finally runs `train.py` when the entrypoint is launched. To learn more about bring your own container training options, see the [Amazon SageMaker Training Toolkit](https://github.com/aws/sagemaker-training-toolkit).
```
%cat Dockerfile
```
## Customize
To fetch the PyTorch AWS Deep Learning Container (DLC), first login to ECR.
```
%%sh
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
```
Next, build your custom docker container, tagging it with the name "pipe_bring_your_own".
```
%%sh
docker build -t pipe_bring_your_own .
```
With the container built, you can now tag it with the full name you will need when calling it for training (`ecr_image`). Then upload your custom container to ECR.
```
account = !aws sts get-caller-identity --query Account --output text
algorithm_name = "pipe_bring_your_own"
ecr_image = '{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(account[0], region, algorithm_name)
print('ecr_image: {}'.format(ecr_image))
ecr_client = boto3.client('ecr')
try:
response = ecr_client.describe_repositories(
repositoryNames=[
algorithm_name,
],
)
print("Repo exists...")
except Exception as e:
create_repo = ecr_client.create_repository(repositoryName=algorithm_name)
print("Created repo...")
!docker tag {algorithm_name} {ecr_image}
!docker push {ecr_image}
```
## Train
Now, you will use the `Estimator` function and pass in the information needed to run the training container in SageMaker.
Note that `input_mode` is the parameter required for you to set pipe mode for this training run. Also note that the `base_job_name` doesn't let you use underscores, so that's why you're using dashes.
```
from sagemaker.estimator import Estimator
estimator = Estimator(
image_uri=ecr_image,
role=role,
base_job_name="pipe-bring-your-own-test",
instance_count=1,
instance_type="ml.c4.xlarge",
input_mode="Pipe",
)
# Start training
estimator.fit(training_data)
```
Note the throughput logged by the training logs above. By way of comparison a File-mode algorithm will achieve at most approximately 150MB/s on a high-end `ml.c5.18xlarge` and approximately 75MB/s on a `ml.m4.xlarge`.
---
## Conclusion
There are a few situations where Pipe-mode may not be the optimum choice for training in which case you should stick to using File-mode:
* If your algorithm needs to backtrack or skip ahead within an epoch. This is simply not possible in Pipe-mode, since the underlying FIFO cannot support `lseek()` operations.
* If your training dataset is small enough to fit in memory and you need to run multiple epochs. In this case it may be quicker and easier just to load it all into memory and iterate.
* Your training dataset is not easily parse-able from a streaming source.
In all other scenarios, if you have an IO-bound training algorithm, switching to Pipe-mode may give you a significant throughput-boost and will reduce the size of the disk volume required. This should result in both saving you time and reducing training costs.
You can read more about building your own training algorithms in the [SageMaker Training documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html).
```
from __future__ import print_function
import numpy as np
import tensorflow as tf
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,confusion_matrix
# https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
np.random.seed(1)
tf.random.set_seed(2)
NGRAMS = 2
FEATURE_LEN = 128
EPOCHS = 15
SAMPLES = 50000
df = pd.read_csv('../train-test/data/phishtank_2016.csv.bz2')
df.dropna(subset=['domain'], inplace=True)
df
sdf = df.drop_duplicates('domain')
sdf
try:
sdf.groupby('target').agg({'domain': 'count'})
except:
pass
adf = pd.read_csv('../train-test/data/top-1m.csv.zip', header=None)
adf.columns = ['rank', 'domain']
adf
ldf = adf[['domain']].head(SAMPLES)
pdf = sdf[['domain']].sample(SAMPLES, random_state=21)
ldf['phishing'] = False
pdf['phishing'] = True
tdf = pd.concat([ldf, pdf])
tdf
```
## Preprocessing the input data
```
if True:
# build n-gram list
vect = CountVectorizer(analyzer='char', max_df=0.3, min_df=3, ngram_range=(NGRAMS, NGRAMS), lowercase=False)
#vect = CountVectorizer(analyzer='char', ngram_range=(NGRAMS, NGRAMS), lowercase=False)
a = vect.fit_transform(tdf.domain)
vocab = vect.vocabulary_
# sort n-gram by freq (highest -> lowest)
words = []
for b in vocab:
c = vocab[b]
#print(b, c, a[:, c].sum())
words.append((a[:, c].sum(), b))
#break
words = sorted(words, reverse=True)
words_list = [w[1] for w in words]
num_words = len(words_list)
print("num_words = %d" % num_words)
def find_ngrams(text, n):
a = zip(*[text[i:] for i in range(n)])
wi = []
for i in a:
w = ''.join(i)
try:
idx = words_list.index(w)
except:
idx = 0
wi.append(idx)
return wi
# build X from index of n-gram sequence
X = np.array(tdf.domain.apply(lambda c: find_ngrams(c, NGRAMS)))
else:
data = tdf.domain.str.cat()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('data has %d characters, %d unique.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
num_words = vocab_size
X = np.array(tdf.domain.apply(lambda c: [char_to_ix[a] for a in c]))
X
# check max/avg feature
X_len = []
for x in X:
X_len.append(len(x))
max_feature_len = max(X_len)
avg_feature_len = int(np.mean(X_len))
print("Max feature len = %d, Avg. feature len = %d" % (max_feature_len, avg_feature_len))
y = np.array(tdf.phishing.astype('category').cat.codes)
# Split train and test dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=21, stratify=y)
```
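The frequency-ranked n-gram encoding performed above can be sketched in miniature. The two-domain corpus and helper names below are illustrative stand-ins, not the notebook's actual vocabulary:

```python
from collections import Counter

def build_vocab(domains, n=2):
    """Rank character n-grams by corpus frequency (highest first)."""
    counts = Counter()
    for d in domains:
        counts.update(d[i:i + n] for i in range(len(d) - n + 1))
    # sort by frequency (descending), then alphabetically for a stable order
    return [g for g, _ in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))]

def encode(domain, vocab, n=2):
    """Map a domain to the vocabulary index of each of its n-grams."""
    index = {g: i for i, g in enumerate(vocab)}
    # unseen n-grams fall back to index 0, as in the notebook's find_ngrams
    return [index.get(domain[i:i + n], 0) for i in range(len(domain) - n + 1)]

vocab = build_vocab(["example.com", "example.org"])
print(encode("example.net", vocab))  # bigrams unseen in the corpus map to 0
```

One consequence worth noting: the out-of-vocabulary fallback index 0 collides with the index of the most frequent n-gram, so downstream models cannot distinguish the two.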
## Train an LSTM model
```
import keras
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding, Dropout, Activation
from keras.layers import LSTM
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.models import load_model
max_features = num_words # 20000
feature_len = FEATURE_LEN # avg_feature_len # cut texts after this number of words (among top max_features most common words)
batch_size = 32
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=feature_len)
X_test = sequence.pad_sequences(X_test, maxlen=feature_len)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
```
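`sequence.pad_sequences` defaults to `padding='pre'` and `truncating='pre'`: it left-pads short sequences with zeros and keeps the last `maxlen` items of long ones. A minimal pure-Python sketch of that behaviour:

```python
def pad_sequences_sketch(seqs, maxlen, value=0):
    """Keras-style 'pre' padding/truncation, list-of-lists in and out."""
    out = []
    for seq in seqs:
        trimmed = list(seq[-maxlen:])                            # 'pre' truncation
        out.append([value] * (maxlen - len(trimmed)) + trimmed)  # 'pre' padding
    return out

print(pad_sequences_sketch([[1, 2, 3], [4, 5, 6, 7, 8]], maxlen=4))
# [[0, 1, 2, 3], [5, 6, 7, 8]]
```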
```
from keras.models import load_model
vocab_path = 'models/phish_cat_vocab_2016.csv'
model_path = 'models/phish_cat_lstm_2016.h5'
vdf = pd.read_csv(vocab_path)
vocab = vdf.vocab.tolist()
print(len(vocab))
model = load_model(model_path)
```
## Confusion Matrix
```
y_pred = model.predict_classes(X_test, verbose=1)
y_probs = model.predict_proba(X_test, verbose=1) # to predict probability
target_names = list(tdf.phishing.astype('category').cat.categories)
target_names = [str(t) for t in target_names]
print(classification_report(y_test, y_pred, target_names=target_names))
print(confusion_matrix(y_test, y_pred))
```
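For reference, the confusion matrix printed above can be computed by hand. sklearn's convention is rows = true labels, columns = predicted labels:

```python
def confusion_matrix_sketch(y_true, y_pred, n_classes=2):
    """Rows are true labels, columns are predicted labels (sklearn's convention)."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# 2 true negatives, 1 false positive, 1 false negative, 3 true positives
print(confusion_matrix_sketch([0, 0, 0, 1, 1, 1, 1], [0, 0, 1, 0, 1, 1, 1]))
# [[2, 1], [1, 3]]
```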
## Train a Random Forest Model
```
from sklearn.ensemble import RandomForestClassifier
# Create the model with 100 trees
rf_model = RandomForestClassifier(n_estimators=100,
bootstrap = True,
max_features = 'sqrt')
# Fit on training data
rf_model.fit(X_train, y_train)
# Actual class predictions
rf_y_pred = rf_model.predict(X_test)
# Probabilities for each class
rf_y_probs = rf_model.predict_proba(X_test)[:, 1]
```
## Confusion Matrix
```
p = rf_y_probs
target_names = list(tdf.phishing.astype('category').cat.categories)
target_names = [str(t) for t in target_names]
print(classification_report(y_test, rf_y_pred, target_names=target_names))
print(confusion_matrix(y_test, rf_y_pred))
```
## Train an SVM Model
```
from sklearn import svm
svc_model = svm.SVC(probability=True)
# Fit on training data
svc_model.fit(X_train, y_train)
# Actual class predictions
svc_y_pred = svc_model.predict(X_test)
# Probabilities for each class
svc_y_probs = svc_model.predict_proba(X_test)[:, 1]
```
## Confusion Matrix
```
p = svc_y_probs
target_names = list(tdf.phishing.astype('category').cat.categories)
target_names = [str(t) for t in target_names]
print(classification_report(y_test, svc_y_pred, target_names=target_names))
print(confusion_matrix(y_test, svc_y_pred))
```
```
import matplotlib.pyplot as plt
#from sklearn.metrics import plot_roc_curve
from sklearn.metrics import roc_curve, roc_auc_score
lstm_fpr, lstm_tpr, lstm_thresholds = roc_curve ( y_test , y_probs)
rf_fpr, rf_tpr, rf_thresholds = roc_curve ( y_test , rf_y_probs)
svc_fpr, svc_tpr, svc_thresholds = roc_curve ( y_test , svc_y_probs)
lstm_auc = roc_auc_score(y_test, y_probs)
rf_auc = roc_auc_score(y_test, rf_y_probs)
svc_auc = roc_auc_score(y_test, svc_y_probs)
lstm_auc, rf_auc, svc_auc
```
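The AUC that `roc_auc_score` reports equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counted as half). A brute-force sketch that matches it on small arrays, for intuition only:

```python
def auc_sketch(y_true, scores):
    """AUC as a rank statistic: fraction of positive/negative pairs
    where the positive example gets the higher score."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_sketch([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```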
```
fig = plt.figure(1, figsize=(12, 8))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(lstm_fpr, lstm_tpr, label='LSTM (area = {:.3f})'.format(lstm_auc))
plt.plot(rf_fpr, rf_tpr, label='RF (area = {:.3f})'.format(rf_auc))
plt.plot(svc_fpr, svc_tpr, label='SVC (area = {:.3f})'.format(svc_auc))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
fig.savefig('./roc-phish-2016-lstm-rf-svc.eps', format='eps', dpi=300);
```
## Save model
```
model.save('./models/phish_cat_lstm_2016.h5')
words_df = pd.DataFrame(words_list, columns=['vocab'])
words_df.to_csv('./models/phish_cat_vocab_2016.csv', index=False, encoding='utf-8')
import pickle
pickle.dump(rf_model, open('./models/phish_cat_2016_rf.pickle', 'wb'))
pickle.dump(svc_model, open('./models/phish_cat_2016_svm.pickle', 'wb'))
```
---
# DNA complement and reverse
This notebook was used to benchmark different approaches to computing
the complement of a DNA sequence.
It takes about 15 ms to analyse a sequence of length 1e7.
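The operation under test is a per-character substitution, which Python expresses directly with translation tables; a standalone sketch:

```python
# Per-character substitution via a translation table: the approach the
# string-based variants in this notebook build on.
table = str.maketrans("ACGTacgt", "TGCAtgca")

def complement(seq):
    return seq.translate(table)

def reverse_complement(seq):
    return complement(seq)[::-1]

print(complement("ACGT"))            # TGCA
print(reverse_complement("AACG"))    # CGTT
```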
```
# First, let us create a sequence.
from biokit.sequence.benchmark import SequenceBenchmark
def create_sequence(expectedLength=1e6):
s = SequenceBenchmark()
return s.create_sequence(expectedLength)
sequence = create_sequence(1e7)
```
## BioPython
```
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC
from Bio.SeqUtils import GC
seq1 = Seq(sequence, IUPAC.unambiguous_dna)
GC(seq1)
%%timeit -n 3
seq1 = Seq(sequence, IUPAC.unambiguous_dna)
res = seq1.complement()
len(seq1)
```
## Python with lists
```
class DNAList(object):
bases = ["A", "C", "T", "G"]
cbases = {"T":"A", "G":"C", "A":"T", "C":"G"}
def __init__(self, data):
self.data = data[:]
def get_complement(self):
res = "".join((self.cbases[x] for x in self))
return res
def __getitem__(self, index):
if type(index) == slice:
return "".join(self.data[index])
return self.data[index]
seq2 = DNAList(sequence[0:1000000]) # slow
%%timeit -n 3
res = seq2.get_complement()
```
## Python with strings
This is a simplified version of what is done in BioPython and BioKit
```
import string
# with strings as ACGT / TGCA, there is no improvement
trans = bytes.maketrans(b'ACGTacgt', b'TGCAtgca')
class DNA(object):
def __init__(self, sequence):
self.sequence = sequence
def complement(self):
return self.sequence.translate(trans)
d = DNA(sequence)
%%timeit -n 3
res = d.complement()
#fh = open("test.fasta", "w")
#fh.write(res)
#fh.close()
```
## Another way with Pandas?
```
import pandas as pd
d = pd.Series(sequence)
# here we just store the sequence, so the Series is
# just a container. Instantiation is 50 times longer
# than a simple class storing the sequence as a string, though
trans = bytes.maketrans(b'ACGTacgt', b'TGCAtgca')
%%timeit -n 3
res = "".join(d).translate(trans)
```
## BioKit
```
from biokit.sequence import dna
d = dna.DNA(sequence)
%%timeit -n 10
# maketrans
d.get_complement()
```
## cython test
- Cythonising does not make it faster, probably because maketrans is already optimised.
- The code below provides a Cython version of the code implemented earlier (CyTransSeq) and a Cythonised version of a dictionary implementation.
- Neither is faster than maketrans.
- Lookup and LUT tables do not help either.
- Reference: http://nbviewer.ipython.org/urls/gist.github.com/Jorge-C/d51b48d3e18897c46ea2/raw/73d7e11e4b72d6ba90e0021931afa230e63031e9/cython+sequences.ipynb?create=1
```
%load_ext cython
%%cython --annotate
# source http://nbviewer.ipython.org/urls/gist.github.com/Jorge-C/d51b48d3e18897c46ea2/raw/73d7e11e4b72d6ba90e0021931afa230e63031e9/cython+sequences.ipynb?create=1
cimport cython
import numpy as np
cimport numpy as cnp
table = bytes.maketrans(b'ACGTacgt',
b'TGCAtgca')
cdef class CyTransSeq(object):
"""Simply defining the class as a cython one, uses translation table"""
cdef public str seq
def __cinit__(self, seq):
self.seq = seq
cpdef rc(self):
return list(reversed(self.seq.translate(table)))
cpdef complement(self):
return self.seq.translate(table)
cdef class CySeq(object):
"""Translation using a dict, cythonized"""
_complement_map = {
'A': 'T', 'C':'G', 'G':'C', 'T':'A',
'a': 't', 'c':'g', 'g':'c', 't':'a'}
cdef public str seq
def __cinit__(self, seq):
self.seq = seq
cdef _rc(self):
result = []
for base in reversed(self.seq):
result.append(self._complement_map[base])
return result
cdef _complement1(self):
result = []
for base in self.seq:
result.append(self._complement_map[base])
return result
def complement1(self):
return self._complement1()
def complement2(self):
return self._complement2()
cdef _complement2(self):
return [self._complement_map[base] for base in self.seq]
def rc(self):
return self._rc()
seq = CyTransSeq(sequence)
%%timeit -n 3
res = seq.complement()
```
## Plot
```
import matplotlib.pyplot as plt
%matplotlib
import pylab
N = [1e5, 1e6, 5e6, 1e7, 5e7, 1e8]
import time
timesBioKit = []
timesC = []
timesBio = []
timesCyt = []
for el in N:
# BioKit
    factor = int(el / 50.)  # 50 is the length of the small repeated string
subseq = "AGCTTTTCATTCTGACTGCAACGGGCAATATGTCAGTGTCTCGTTGCAAA"
sequence_in = "".join([subseq]*factor)
seq = dna.DNA(sequence_in)
t1 = time.time()
res = seq.get_complement()
t2 = time.time()
timesBioKit.append(t2-t1)
#t1 = time.time()
#res = seq.get_complement_c()
#t2 = time.time()
#timesC.append(t2-t1)
# biopython
seqbio = Seq(sequence_in, IUPAC.unambiguous_dna)
t1 = time.time()
seqbio.complement()
t2 = time.time()
timesBio.append(t2-t1)
# cython
seqcyt = CyTransSeq(sequence_in)
t1 = time.time()
seqcyt.complement()
t2 = time.time()
timesCyt.append(t2-t1)
print(el)
%pylab inline
pylab.clf()
pylab.loglog(N, timesBioKit, 'o-', N, timesBio, 'gx-',
N, timesCyt,
'ko-')
pylab.legend(['timesBioKit', 'timesBio', 'times Cython'],
loc='best')
```
---
```
%load_ext autoreload
%autoreload 2
import os
import pickle
from glob import glob
from concurrent.futures import ProcessPoolExecutor, as_completed
import numpy as np
import pandas as pd
from scipy import stats
import settings as conf
from utils import is_number, chunker
```
# Load S-PrediXcan results
## From Rapid GWAS project
```
from results.spredixcan import PhenoResults
_path = os.path.join(conf.SPREDIXCAN_RESULTS_DIR['RapidGWASProject'] + '/*')
display(_path)
all_spredixcan_results_dirs = glob(_path)
display(len(all_spredixcan_results_dirs))
assert len(all_spredixcan_results_dirs) == conf.SPREDIXCAN_EXPECTED_PHENOTYPES['RapidGWASProject']
all_spredixcan_phenotypes = [PhenoResults(p) for p in all_spredixcan_results_dirs]
display(len(all_spredixcan_phenotypes))
assert len(all_spredixcan_phenotypes) == conf.SPREDIXCAN_EXPECTED_PHENOTYPES['RapidGWASProject']
```
## From GTEx GWAS manuscript
```
_path = os.path.join(conf.SPREDIXCAN_RESULTS_DIR['GTEX_GWAS'] + '/*')
display(_path)
all_extra_results_dirs = glob(_path)
display(len(all_extra_results_dirs))
assert len(all_extra_results_dirs) == conf.SPREDIXCAN_EXPECTED_PHENOTYPES['GTEX_GWAS']
all_extra_results_dirs[:5]
_file_pattern = r'spredixcan_igwas_gtexmashrv8_(?P<code>[^/]+)__PM__(?P<tissue>.+)\.csv$'
all_extra_phenotypes = [PhenoResults(p, _file_pattern) for p in all_extra_results_dirs]
all_extra_phenotypes_plain_names = pd.Index([p.pheno_info.get_plain_name() for p in all_extra_phenotypes])
display(len(all_extra_phenotypes))
assert len(all_extra_phenotypes) == conf.SMULTIXCAN_EXPECTED_PHENOTYPES['GTEX_GWAS']
```
# S-PrediXcan: direction of effect
## Effect direction: most significant
### Compute results
```
def _get_combined_results(phenos):
return {
pheno.pheno_info.get_plain_name():
pheno.get_most_significant_effect_direction()
for pheno in phenos
}
def _run_all(phenotype_chunks, n_jobs=conf.N_JOBS_HIGH):
all_results = {}
with ProcessPoolExecutor(max_workers=n_jobs) as executor:
tasks = [executor.submit(_get_combined_results, chunk) for chunk in phenotype_chunks]
for future in as_completed(tasks):
res = future.result()
all_results.update(res)
return all_results
# phenotype_chunks = chunker(all_spredixcan_phenotypes[:5] + all_extra_phenotypes[:5], 2)
phenotype_chunks = chunker(all_spredixcan_phenotypes + all_extra_phenotypes, 25)
all_results = _run_all(phenotype_chunks, n_jobs=20)
len(all_results)
```
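The chunk-and-merge pattern used above (split the phenotype list into chunks, submit one task per chunk, merge the per-chunk dicts as futures complete) can be sketched with stdlib tools. A ThreadPoolExecutor is used here purely so the sketch runs anywhere; the notebook's ProcessPoolExecutor exposes the same interface, and `process_chunk` is an illustrative stand-in for `_get_combined_results`:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def chunker_sketch(seq, size):
    """Split a sequence into consecutive chunks of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def process_chunk(chunk):
    # stand-in for the real per-phenotype computation: one result per item
    return {name: len(name) for name in chunk}

def run_all(chunks, n_jobs=4):
    results = {}
    with ThreadPoolExecutor(max_workers=n_jobs) as executor:
        tasks = [executor.submit(process_chunk, c) for c in chunks]
        for future in as_completed(tasks):  # merge results as workers finish
            results.update(future.result())
    return results

phenos = ["height", "bmi", "asthma", "t2d", "ldl"]
print(run_all(chunker_sketch(phenos, 2)))
```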
### Create DataFrame
```
_n_expected_phenos = np.sum(list(conf.SMULTIXCAN_EXPECTED_PHENOTYPES.values()))
display(_n_expected_phenos)
assert len(all_results) == _n_expected_phenos, len(all_results)
# the category dtype is for efficiency in storage/loading
spredixcan_genes_effect_directions = pd.DataFrame(all_results, dtype='category')
spredixcan_genes_effect_directions.index.rename('gene_name', inplace=True)
assert spredixcan_genes_effect_directions.index.is_unique
display(spredixcan_genes_effect_directions.shape)
display(spredixcan_genes_effect_directions.head())
# Remove genes with no results
#spredixcan_genes_effect_directions = spredixcan_genes_effect_directions.dropna(axis=0, how='all')
# how many entries are nan
spredixcan_genes_effect_directions.isna().sum().sum()
pd.Series(spredixcan_genes_effect_directions.values.flatten()).dropna().astype(float).unique()
display(f'Results shape: {spredixcan_genes_effect_directions.shape}')
assert spredixcan_genes_effect_directions.shape == (22518, _n_expected_phenos), spredixcan_genes_effect_directions.shape
```
## Testing
```
spredixcan_genes_effect_directions.loc[
[
'ENSG00000000419',
'ENSG00000000457',
'ENSG00000000460',
'ENSG00000186090', # zero
'ENSG00000007202', # zero
],
[
'N02-Diagnoses_main_ICD10_N02_Recurrent_and_persistent_haematuria',
'Astle_et_al_2016_Reticulocyte_count',
'PGC_ADHD_EUR_2017',
'IMMUNOBASE_Systemic_lupus_erythematosus_hg19',
]
]
assert spredixcan_genes_effect_directions.loc['ENSG00000000419', 'N02-Diagnoses_main_ICD10_N02_Recurrent_and_persistent_haematuria'] == -1.0
assert spredixcan_genes_effect_directions.loc['ENSG00000000457', 'N02-Diagnoses_main_ICD10_N02_Recurrent_and_persistent_haematuria'] == 1.0
assert spredixcan_genes_effect_directions.loc['ENSG00000000460', 'N02-Diagnoses_main_ICD10_N02_Recurrent_and_persistent_haematuria'] == -1.0
assert spredixcan_genes_effect_directions.loc['ENSG00000000419', 'Astle_et_al_2016_Reticulocyte_count'] == -1.0
assert spredixcan_genes_effect_directions.loc['ENSG00000000457', 'Astle_et_al_2016_Reticulocyte_count'] == 1.0
assert spredixcan_genes_effect_directions.loc['ENSG00000000460', 'Astle_et_al_2016_Reticulocyte_count'] == 1.0
assert spredixcan_genes_effect_directions.loc['ENSG00000186090', 'PGC_ADHD_EUR_2017'] == 0.0
assert spredixcan_genes_effect_directions.loc['ENSG00000007202', 'PGC_ADHD_EUR_2017'] == 0.0
assert spredixcan_genes_effect_directions.loc['ENSG00000007202', 'IMMUNOBASE_Systemic_lupus_erythematosus_hg19'] == 0.0
```
The code below was used to write the asserts above: for each gene, it checks whether the first and last rows (the minimum and maximum z-scores across tissues) match the signs asserted above.
```
rapid_gwas_dir = conf.SPREDIXCAN_RESULTS_DIR['RapidGWASProject']
gtex_gwas_dir = conf.SPREDIXCAN_RESULTS_DIR['GTEX_GWAS']
%%bash -s "$rapid_gwas_dir"
cd $1/N02
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000000419"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
echo ""
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000000457"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
echo ""
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000000460"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
%%bash -s "$gtex_gwas_dir"
cd $1/Astle_et_al_2016_Reticulocyte_count
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000000419"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
echo ""
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000000457"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
echo ""
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000000460"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
%%bash -s "$gtex_gwas_dir"
cd $1/PGC_ADHD_EUR_2017
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000186090"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
echo ""
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000007202"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
%%bash -s "$gtex_gwas_dir"
cd $1/IMMUNOBASE_Systemic_lupus_erythematosus_hg19
parallel 'cat {} | cut -f1-3 -d, | column -t -s, | grep "ENSG00000007202"' ::: *.csv | sort -k3 -g | sed -e 1b -e '$!d'
```
### Save
```
spredixcan_genes_effect_directions.shape
spredixcan_genes_effect_directions.head()
# Save
spredixcan_genes_effect_directions_filename = os.path.join(conf.GENE_ASSOC_DIR, f'spredixcan-mashr-effect_direction-most_signif.pkl.xz')
display(spredixcan_genes_effect_directions_filename)
spredixcan_genes_effect_directions.to_pickle(spredixcan_genes_effect_directions_filename)
```
### Save in HDF5 format for webapp
```
spredixcan_genes_effect_directions = pd.read_pickle(spredixcan_genes_effect_directions_filename)
spredixcan_genes_effect_directions.shape
from utils import simplify_string_for_hdf5
os.makedirs(conf.GENE_ASSOC_DIR, exist_ok=True)
OUTPUT_HDF5_FILE = os.path.join(conf.GENE_ASSOC_DIR, 'spredixcan-mashr-effect_direction-most_signif.h5')
display(OUTPUT_HDF5_FILE)
with pd.HDFStore(OUTPUT_HDF5_FILE, mode='w', complevel=1) as store:
for col in spredixcan_genes_effect_directions.columns:
#print('.', flush=True, end='')
clean_col = simplify_string_for_hdf5(col)
store[clean_col] = spredixcan_genes_effect_directions[col].astype(float)
# testing
with pd.HDFStore(OUTPUT_HDF5_FILE, mode='r') as store:
store_keys = list(store.keys())
assert len(store_keys) == spredixcan_genes_effect_directions.shape[1]
display(store_keys[:5])
clean_col = simplify_string_for_hdf5('N02-Diagnoses_main_ICD10_N02_Recurrent_and_persistent_haematuria')
data = store[clean_col]
assert data.shape == (22518,), data.shape
assert data.loc['ENSG00000000419'] == -1.0
assert data.loc['ENSG00000000457'] == 1.0
assert data.loc['ENSG00000000460'] == -1.0
clean_col = simplify_string_for_hdf5('Astle_et_al_2016_Reticulocyte_count')
data = store[clean_col]
assert data.shape == (22518,), data.shape
assert data.loc['ENSG00000000419'] == -1.0
assert data.loc['ENSG00000000457'] == 1.0
assert data.loc['ENSG00000000460'] == 1.0
clean_col = simplify_string_for_hdf5('PGC_ADHD_EUR_2017')
data = store[clean_col]
assert data.shape == (22518,), data.shape
assert data.loc['ENSG00000186090'] == 0.0
assert data.loc['ENSG00000007202'] == 0.0
clean_col = simplify_string_for_hdf5('IMMUNOBASE_Systemic_lupus_erythematosus_hg19')
data = store[clean_col]
assert data.shape == (22518,), data.shape
assert data.loc['ENSG00000007202'] == 0.0
```
---
```
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn import functional as F
import torch.nn as nn
import torchvision.transforms as transforms
import torch.optim as optim
from torch.autograd import Function
from torchvision import models
from torchvision import utils
from matplotlib import pyplot as plt
from torchvision import datasets, models, transforms
#Create the dataset loader
input_path = 'Dataset/'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
data_transforms = {
'train':
transforms.Compose([
transforms.Resize((256,256)),
transforms.RandomCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
]),
'validation':
transforms.Compose([
transforms.Resize((256,256)),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
]),
'test':
transforms.Compose([
transforms.Resize((256,256)),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
])
}
image_datasets = {
'train':
datasets.ImageFolder(input_path + 'train', data_transforms['train']),
'validation':
datasets.ImageFolder(input_path + 'validation', data_transforms['validation']),
'test':
datasets.ImageFolder(input_path + 'test', data_transforms['test'])
}
dataloaders = {
'train':
torch.utils.data.DataLoader(image_datasets['train'],
batch_size=32,
shuffle=True,
num_workers=0),
'validation':
torch.utils.data.DataLoader(image_datasets['validation'],
batch_size=32,
shuffle=False,
num_workers=0),
'test':
torch.utils.data.DataLoader(image_datasets['test'],
batch_size=32,
shuffle=False,
num_workers=0)
}
#Load the weights of Resident Fellow (Teacher)
class DenseNet121(nn.Module):
def __init__(self, out_size):
super(DenseNet121, self).__init__()
self.densenet121 = models.densenet121(pretrained=True)
num_ftrs = self.densenet121.classifier.in_features
self.densenet121.classifier = nn.Sequential(
nn.Linear(num_ftrs, out_size),
)
def forward(self, x):
x = self.densenet121(x)
return x
model = DenseNet121(3).cuda()
model = torch.nn.DataParallel(model)
model = torch.load('RF_model/Pretrain_DenseNet')
print("model loaded")
device = torch.device("cuda:0")
model.to(device);
#Test the performance of Resident Fellow (Teacher)
running_corrects = 0
model.eval()
for inputs, labels in dataloaders['test']:
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
running_corrects += torch.sum(preds == labels.data)
epoch_acc = running_corrects.double() / len(image_datasets['test'])
print(epoch_acc.data.cpu().numpy())
#Set up parameters
from lsoftmax import AngularPenaltySMLoss
import copy
#ArcFace Loss
num_classes = 3
#Make sure the feature size is 1280; this number will change when the network changes
criterion1 = AngularPenaltySMLoss(1280,num_classes)
criterion2 = nn.KLDivLoss()
num_epochs = 50
#Training and save model weights
#Loop through turning parameter of "a" in paper
for a in [0.2,0.4,0.6,0.8]:
#Loop through turning parameter of "T" in paper
for T in [1,5,10]:
#Create the model, change the output layer to 3
best_acc = 0
num_classes = 3
mobilenet = models.mobilenet_v2(pretrained=True)
mobilenet.classifier[1] = nn.Linear(1280,num_classes)
#Load pretrained clean weights for training noisy network
#mobilenet = torch.load('mobilenet/clean')
mobilenet.to(device);
#Set up Adam optimizer
optimizer = optim.Adam(mobilenet.parameters(),lr=0.0002, betas=(0.9, 0.999))
#Fix the weights of Resident Fellow (Teacher)
for param in model.parameters():
param.requires_grad = False
model.eval()
for epoch in range(num_epochs):
#Extract the features for calculate the ArcFace Loss
feat = copy.deepcopy(mobilenet)
feat.classifier = feat.classifier[0]
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-' * 10)
for phase in ['train', 'validation']:
if phase == 'train':
mobilenet.train()
else:
mobilenet.eval()
running_loss = 0.0
running_corrects = 0
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
outputs = mobilenet(inputs)
feats = feat(inputs)
#ArcFace Loss
loss1 = criterion1(feats, labels)
soft_target = model(inputs)
outputs_S = F.log_softmax(outputs/T, dim=1)
outputs_T = F.softmax(soft_target/T, dim=1)
loss2 = criterion2(outputs_S, outputs_T) * T * T
loss = (1-a)*loss1 + a*loss2
if phase == 'train':
optimizer.zero_grad()
loss.backward()
optimizer.step()
_, preds = torch.max(outputs, 1)
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / len(image_datasets[phase])
epoch_acc = running_corrects.double() / len(image_datasets[phase])
if phase =='validation':
if epoch_acc>best_acc:
torch.save(mobilenet, 'MobileNet_weights/a='+str(a)+'T='+str(T))
best_acc = epoch_acc
print('{} loss: {:.4f}, acc: {:.4f}'.format(phase,
epoch_loss,
epoch_acc))
#Testing the learned Medical Student
num_classes = 3
mobilenet = models.mobilenet_v2(pretrained=True)
mobilenet.classifier[1] = nn.Linear(1280,num_classes)
mobilenet = torch.load('MobileNet_weights/best_validation_weights')
mobilenet.to(device)
mobilenet.eval();
running_corrects = 0
for inputs, labels in dataloaders['test']:
inputs = inputs.to(device)
labels = labels.to(device)
outputs = mobilenet(inputs)
_, preds = torch.max(outputs, 1)
running_corrects += torch.sum(preds == labels.data)
epoch_acc = running_corrects.double() / len(image_datasets['test'])
print(epoch_acc.data.cpu().numpy())
```
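For a single example, the distillation term in the loop above (KL divergence between temperature-softened teacher and student distributions, scaled by T*T) can be sketched in plain Python. Note that `nn.KLDivLoss` expects log-probabilities for its first argument, which is why the training loop applies `log_softmax` to the student:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student's softened prediction
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * T * T

# identical logits give zero distillation loss
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1], T=5))
```

The T*T factor compensates for the 1/T^2 shrinkage of the soft-target gradients, keeping the distillation term on the same scale as the hard-label loss as T varies.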
---
# Gentle Introduction to Pytorch Autograd with Linear Regression
#### By Michael Przystupa
**By the end of this tutorial, students will be able to:**
- Analytically solve a linear regression
- Explain what PyTorch automatically handles with its autograd library
- Construct a linear model using the pytorch module
## Introduction
Linear regression is a fairly common technique and a good starting point for building models with the PyTorch library. We'll start by showing how to solve linear regression analytically, which we can use as a baseline for comparing our PyTorch model.
First, let's import the libraries we'll need:
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
#just your typical imports in python
```
## The Data Set
We'll be working with a toy dataset, which is just a linear function with added Gaussian noise:
\begin{equation}
f(x) = ax + b + N(0, 1)
\end{equation}
In real machine learning, your data won't be so nice, but we want something we can easily look at to get some intuition about how the model works.
**Note:** in general, we do not *know* the function our data came from. That's why we build models: to approximate something that our data could plausibly have come from.
```
def f( x, a= 2.0, b = 1.0, add_noise=True):
if add_noise:
n = np.random.normal(0.0, 1.0)
else:
n = 0.0
return a * x + b + n
X = np.linspace(-5.0, 5.0, num= 200)
y = np.array([f(x) for x in X])
plt.plot(X, y, 'g^')
plt.show()
```
## Analytically
Clearly there is some relationship in this data, and we'd like to represent it somehow with a function of the form
\begin{equation}
w_{1} x + w_{2} = \hat y
\end{equation}
In machine learning, to find these weights we generally need an objective that measures how good they are. One example is minimizing the mean squared error, which for our equation looks like this:
\begin{equation}
Objective(w) = \sum_{i=1}^{n} (w_{1} x_{i} + w_{2} - y_{i})^{2}
\end{equation}
You'll have to trust me on this, but we can solve this directly by doing the following manipulation:
\begin{equation}
w = (X^{t} X)^{-1} (X^{t} y)
\end{equation}
where X is our data matrix, y is our labels, and w is the weight vector we want to find.
```
# Solving problem analytically:
X = np.stack((X, np.ones(X.shape[0])), axis=1) #this is so we have w_{2}
y = np.array([y]).transpose() # has to do with shape of data
X_t = X.transpose()
#Calculating each of the components
X_tX = np.matmul(X_t, X)
X_ty = np.matmul(X_t, y)
w = np.matmul(np.linalg.inv(X_tX) , (X_ty))
#this will show the best weights we can do with the data
print('w_0 = {}, w_1 = {}'.format(w[0], w[1]))
#Plotting our approximations of true values
#lambda is a key word in python for defining smaller functions
lin_model = lambda x : f(x, w[0], w[1], add_noise=False)
y_hat = [lin_model(x) for x in X[:,0]]
plt.plot(X[:,0], y, 'g^')
plt.plot(X[:,0], y_hat, 'r--')
plt.show()
```
This is all well and good... if you can compute the inverse. If you can't, you're going to have to get creative and do some optimization.
## Stochastic Gradient Descent (S.G.D)
This can be fairly math heavy, but here's the gist of what we're going to do:
\begin{equation}
w = w - \alpha * \nabla Objective(w)
\end{equation}
where $\alpha$ is the learning rate, the amount we update our current weights by at each step, and $\nabla Objective(w)$ is the gradient of the objective with respect to the weights. One minor point: in S.G.D. we make updates based on single examples from our data, usually for performance reasons (although there is also theory about why this works well).
Maybe this all seems a little scary, particularly if you're not sure what a gradient is. Thankfully, pytorch makes it so you don't need to pull out your old calculus textbook and handles calculating these for us as we'll see.
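To make the update rule concrete, here it is applied by hand to our toy problem, with the gradient of the single-example squared error derived manually (the step autograd handles for us). The learning rate and data here are illustrative:

```python
def sgd_step(w, x, y, alpha):
    """One SGD update for the single-example squared error (w[0]*x + w[1] - y)**2."""
    err = w[0] * x + w[1] - y
    grad = (2 * err * x, 2 * err)  # hand-derived gradient of the objective
    return (w[0] - alpha * grad[0], w[1] - alpha * grad[1])

# fit y = 2x + 1 from noiseless samples
w = (0.0, 0.0)
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-20, 21)]
for _ in range(200):
    for x, y in data:
        w = sgd_step(w, x, y, alpha=0.05)
print(w)  # approaches (2.0, 1.0)
```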
## Pytorch's autograd system
The most useful part of PyTorch is its autograd system. The gist is that it automatically calculates gradients for any operations performed on a PyTorch tensor, so we have to say goodbye to our good friend NumPy, which is easy to do:
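To see what "automatically calculate gradients" means, here is a toy reverse-mode autodiff in a few lines. This is only an illustration of the idea; it is not how PyTorch is actually implemented:

```python
class Scalar:
    """A toy reverse-mode autodiff value: an illustration of the idea
    behind autograd, not PyTorch's actual implementation."""

    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # (parent node, local gradient) pairs

    def __add__(self, other):
        other = other if isinstance(other, Scalar) else Scalar(other)
        return Scalar(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Scalar) else Scalar(other)
        return Scalar(self.value * other.value,
                      [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        """Chain rule: accumulate upstream * local gradient into each parent."""
        self.grad += upstream
        for node, local in self._parents:
            node.backward(upstream * local)

w, x = Scalar(3.0), Scalar(2.0)
loss = w * x + w       # d(loss)/dw = x + 1, d(loss)/dx = w
loss.backward()
print(w.grad, x.grad)  # 3.0 3.0
```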
```
X = torch.from_numpy(X).float()
y = torch.from_numpy(y).float()
# Now, to do SGD we'll need a learning rate and an initial set of weights:
alpha = 1e-3
w = torch.randn(2, requires_grad=True) #requires grad is key to do this
```
To use PyTorch's autograd feature, we calculate the prediction with our current weights and call .backward() on our loss. Calling .backward() on a tensor of interest computes the gradients, which we can then access through the tensor's .grad field to perform our update.
```
order = torch.randperm(X.size()[0])
#redefine our linear model to use the tensor version of w
for _ in range(0, 10):
total_loss = 0.0
for i in order:
x = X[i,:]
y_hat = torch.matmul(x, w) #ax + b esentially
loss = F.mse_loss(y_hat[0], y[i,0]) #our objective function
loss.backward() #where our gradients get calculated
total_loss += loss.item()
with torch.no_grad():
w -= alpha * w.grad
w.grad.zero_()
print('Loss on epoch {}: {}'.format(_, total_loss / X.size()[0]))
print(w)
X = X.numpy()
y = y.numpy()
w = w.detach().numpy()
lin_model = lambda x : f(x, w[0], w[1], add_noise=False)
y_hat = [lin_model(x) for x in X[:,0]]
plt.plot(X[:,0], y, 'g^')
plt.plot(X[:,0], y_hat, 'r--')
plt.show()
```
As we can see, our model now performs about the same as the analytical solution, but the advantage is that this approach is always possible and doesn't rely on the inverse existing.
## Going full Pytorch
The loop above was written to illustrate what PyTorch is doing. We cheated a little bit by using the .backward() call instead of calculating the gradients ourselves, but that's the reason WHY these deep learning frameworks are great: you don't HAVE to do this all by hand.
```
X = torch.from_numpy(X).float()
y = torch.from_numpy(y).float()
class LinRegression(nn.Module):
def __init__(self):
super(LinRegression, self).__init__()
self.linear = nn.Linear(2, 1, bias=False )
#the bias term is generally set to True, but we added the 1's column to x
def forward(self, x):
return self.linear(x)
model = LinRegression()
sgd = optim.SGD(model.parameters(), lr=1e-3)
order = torch.randperm(X.size()[0])
for _ in range(0, 10):
total_loss = 0.0
for i in order:
x = X[i,:]
y_hat = model(x)
loss = F.mse_loss(y_hat[0], y[i,0]) #our objective function
loss.backward() #where our gradients get calculated
total_loss += loss.item()
sgd.step()
sgd.zero_grad()
print('Loss on epoch {}: {}'.format(_, total_loss / X.size()[0]))
y_hat = model(X)
y_hat = y_hat.detach().numpy()
X = X.numpy()
y = y.numpy()
plt.plot(X[:,0], y, 'g^')
plt.plot(X[:,0], y_hat, 'r--')
plt.show()
```
## Conclusions
In this tutorial we've seen how to solve linear regression using pytorch's autograd system and how to build models with pytorch. We saw several pytorch packages including torch, torch.optim, torch.nn, and torch.nn.functional. We then saw how to use these modules together to train our model.
## Exercise: Logistic Regression
Logistic regression is a sort of extension of linear regression for the classification setting.
Using what you've seen in this tutorial and in class, build and train a logistic regression model on the following toy dataset:
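As a hint, the two ingredients that change relative to linear regression are the sigmoid output and the binary cross-entropy objective (torch.sigmoid and F.binary_cross_entropy in PyTorch). In plain Python they look like this:

```python
import math

def sigmoid(z):
    """Squash a real-valued score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    """Binary cross-entropy for a single prediction p in (0, 1) and label y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(sigmoid(0.0))              # 0.5
print(round(bce(0.9, 1.0), 4))   # -ln(0.9)
```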
```
def f(x):
if x > 0.0:
return 1.0
else:
return 0.0
X = np.linspace(-5.0, 5.0, num= 200)
y = np.array([f(x) for x in X])
plt.plot(X, y, 'g^')
plt.show()
#starter code:
X = torch.from_numpy(X)
y = torch.from_numpy(y)
class LogisticRegression(nn.Module):
def __init__(self):
super(LogisticRegression, self).__init__()
#insert code here
def forward(self, x):
#insert code
return x
# some set-up
order = torch.randperm(X.size()[0])
#training loop
for _ in range(0, 10):
total_loss = 0.0
for i in order:
x = X[i]
y_hat = model(x)
#insert call to loss function here
loss.backward() #where our gradients get calculated
total_loss += loss.item()
sgd.step()
sgd.zero_grad()
print('Loss on epoch {}: {}'.format(_, total_loss / X.size()[0]))
y_hat = model(X)
y_hat = y_hat.detach().numpy()
X = X.numpy()
y = y.numpy()
plt.plot(X, y, 'g^')
plt.plot(X, y_hat, 'r--')
plt.show()
```
In this demo, we use knowledge distillation to train a ResNet-18 model for image classification. We will show how to provide the teacher model, student model, data loaders, inference pipeline, and other arguments to the toolkit and start knowledge distillation training.
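For orientation, here is a rough sketch of the standard distillation objective: a temperature-softened cross-entropy against the teacher plus an ordinary cross-entropy against the labels. This is illustrative only; the toolkit's actual loss terms and coefficients are configured through `demo_config.yaml`, and the names `T` and `alpha` below are our own.

```python
import math

def softmax(logits, T=1.0):
    # temperature T > 1 flattens the distribution, exposing "dark knowledge"
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    # soft term: cross-entropy between softened teacher and student
    # distributions, scaled by T^2 to keep gradient magnitudes comparable
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student)) * T * T
    # hard term: ordinary cross-entropy against the true label
    hard = -math.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard
```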
```
!pip install pytorch-lightning
```
## Download the toolkit
```
!git clone https://github.com/georgian-io/Knowledge-Distillation-Toolkit.git
%cd Knowledge-Distillation-Toolkit/
import yaml
from collections import ChainMap
import torch
import torch.nn.functional as F
from torchvision.models.resnet import ResNet, BasicBlock
from torchvision import datasets, transforms
from knowledge_distillation.kd_training import KnowledgeDistillationTraining
```
## Define the student model and teacher model
```
class StudentModel(ResNet):
def __init__(self):
super(StudentModel, self).__init__(BasicBlock, [2, 2, 2, 2], num_classes=10) #ResNet18
self.conv1 = torch.nn.Conv2d(1, 64,
kernel_size=(7, 7),
stride=(2, 2),
padding=(3, 3), bias=False)
def forward(self, batch, temperature=1):
logits = super(StudentModel, self).forward(batch)
logits = logits / temperature
prob = F.softmax(logits, dim=1)  # normalize over classes, not over the batch
log_prob = F.log_softmax(logits, dim=1)
return {"logits":logits, "prob":prob, "log_prob":log_prob}
class TeacherModel(ResNet):
def __init__(self):
super(TeacherModel, self).__init__(BasicBlock, [3, 4, 6, 3], num_classes=10) #ResNet34
self.conv1 = torch.nn.Conv2d(1, 64,
kernel_size=(7, 7),
stride=(2, 2),
padding=(3, 3), bias=False)
def forward(self, batch, temperature=1):
logits = super(TeacherModel, self).forward(batch)
logits = logits / temperature
prob = F.softmax(logits, dim=1)  # normalize over classes, not over the batch
log_prob = F.log_softmax(logits, dim=1)
return {"logits":logits, "prob":prob, "log_prob":log_prob}
```
## Define the inference pipeline
```
class inference_pipeline:
def __init__(self, device):
self.device = device
def run_inference_pipeline(self, model, data_loader):
accuracy = 0
model.eval()
with torch.no_grad():
for i, data in enumerate(data_loader):
X, y = data[0].to(self.device), data[1].to(self.device)
outputs = model(X)
predicted = torch.max(outputs["prob"], 1)[1]
accuracy += predicted.eq(y.view_as(predicted)).sum().item()
accuracy = accuracy / len(data_loader.dataset)
return {"inference_result": accuracy}
def get_data_for_kd_training(batch):
data = torch.cat([sample[0] for sample in batch], dim=0)
data = data.unsqueeze(1)
return data,
```
## Read from demo_config.yaml, which contains all the argument setup
```
config = yaml.load(open('./examples/resnet_compression_demo/demo_config.yaml','r'), Loader=yaml.FullLoader)
device = torch.device("cuda")
```
## Create training and validation data loaders
We will use the MNIST dataset
```
# Create data loaders for training and validation
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
train_kwargs = {'batch_size': 16, 'num_workers': 0}
test_kwargs = {'batch_size': 1000, 'num_workers': 0}
train_dataset = datasets.MNIST('./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST('./data', train=False, transform=transform)
train_data_loader = torch.utils.data.DataLoader(train_dataset, collate_fn=get_data_for_kd_training, **train_kwargs)
test_loader = torch.utils.data.DataLoader(test_dataset, **test_kwargs)
val_data_loaders = {"accuracy_on_validation_set": test_loader}
```
## Create an instance of inference pipeline
```
# Create inference pipeline for validating the student model
inference_pipeline_example = inference_pipeline(device)
```
## Create an instance of student model and teacher model
```
# Create student and teacher model
student_model = StudentModel()
teacher_model = TeacherModel()
teacher_model.load_state_dict(torch.load("./examples/resnet_compression_demo/trained_model/resnet34_teacher.pt"))
```
## Pass the data loaders, student and teacher models, inference pipeline, and other argument setup into `KnowledgeDistillationTraining`
```
# Train a student model with knowledge distillation and get its performance on dev set
KD_resnet = KnowledgeDistillationTraining(train_data_loader = train_data_loader,
val_data_loaders = val_data_loaders,
inference_pipeline = inference_pipeline_example,
student_model = student_model,
teacher_model = teacher_model,
num_gpu_used = config["knowledge_distillation"]["general"]["num_gpu_used"],
final_loss_coeff_dict = config["knowledge_distillation"]["final_loss_coeff"],
logging_param = ChainMap(config["knowledge_distillation"]["general"],
config["knowledge_distillation"]["optimization"],
config["knowledge_distillation"]["final_loss_coeff"],
config["knowledge_distillation"]["pytorch_lightning_trainer"]),
**ChainMap(config["knowledge_distillation"]["optimization"],
config["knowledge_distillation"]["pytorch_lightning_trainer"],
config["knowledge_distillation"]["comet_info"])
)
```
## Start knowledge distillation training
```
KD_resnet.start_kd_training()
```
As the output above shows, the validation accuracy of the student model improves with every training epoch.
# NLP Techniques
## Input -> clean data, check if the data makes sense
## NLP Techniques -> specifically designed for text data
## Output -> plot can help to check if we have what we are looking for
### 1) Sentiment Analysis
### We use the Corpus(original text) to have all words
### We use TextBlob (nltk)
### We use Naive Bayes (statistical methods)
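TextBlob's scorer is lexicon-based under the hood. The toy version below only illustrates the averaging-plus-negation idea we see in the cells that follow; the lexicon values here are made up and are not TextBlob's.

```python
# toy lexicon-based polarity, in the spirit of TextBlob's averaging:
# polarity in [-1, 1], averaged over known words, flipped after "not"
lexicon = {"great": 0.8, "love": 0.5, "terrible": -1.0}

def polarity(text):
    words = text.lower().split()
    scores, negate = [], False
    for w in words:
        if w == "not":
            negate = True
            continue
        if w in lexicon:
            scores.append(-lexicon[w] if negate else lexicon[w])
        negate = False
    return sum(scores) / len(scores) if scores else 0.0
```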
```
# Polarity:      -1 (Negative)         <----------->  +1 (Positive)
# Subjectivity:   0 (Objective - fact) <----------->  +1 (Subjective - opinion)
from textblob import TextBlob
TextBlob("I love Python").sentiment
TextBlob("great").sentiment # it takes the average polarity of the word "great"
TextBlob("not great").sentiment # "not" in front of a word generally flips the polarity to negative
import pandas as pd
data = pd.read_pickle('files\\corpus.pkl')
#df_data_clean['transcript']['Dave']
pol = lambda x: TextBlob(x).sentiment.polarity
sub = lambda x: TextBlob(x).sentiment.subjectivity
data['Polarity'] = data['transcript'].apply(pol)
data['Subjectivity'] = data['transcript'].apply(sub)
data
# Lets plot the results
import matplotlib.pyplot as plt
plt.figure(figsize=(10,8))
for comedian in data.index:
x = data['Polarity'][comedian]
y = data['Subjectivity'][comedian]
plt.text(x+.001,y+.001, comedian, fontsize=8)
plt.xlim(-.001,.12)
plt.scatter(x,y)
plt.xlabel(f"Negative <-----------> Positive\nPolarity")
plt.ylabel(f"Fact <-----------> Opinion\nSubjectivity")
```
### 2) Topic Modeling
### We use the Document-Term Matrix(words) the order does not matter
### We use gensim
### We use Latent Dirichlet Allocation(LDA) L(hidden), D(type of probability distribution)
### We use nltk for some parts of speech tagging
#### Example
#### I like bananas and oranges Topic A - 100%
#### Frogs and fish live in ponds Topic B - 100%
#### Kittens and puppies are fluffy Topic B - 100%
#### I had a spinach and apple smoothie Topic A - 100%
#### My kitten loves kale Topic A - 60% and Topic B - 40%
#### Topic A -> 40% banana, 30% kale, 10% breakfast... Food
#### Topic B -> 30% kitten, 20% puppy, 10% frog, 5% cute... Animal
#### Every document is a mix of Topics
#### Every topic is a mix of words
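The mixture idea above can be written out directly. The numbers below are the made-up percentages from the example, not the output of a real model:

```python
# each document is a distribution over topics...
doc_topics = {"my kitten loves kale": {"food": 0.6, "animal": 0.4}}

# ...and each topic is a distribution over words
topic_words = {
    "food":   {"banana": 0.40, "kale": 0.30, "breakfast": 0.10},
    "animal": {"kitten": 0.30, "puppy": 0.20, "frog": 0.10},
}

def word_prob(doc, word):
    # probability of seeing `word` in `doc`, marginalizing over topics
    return sum(p * topic_words[t].get(word, 0.0)
               for t, p in doc_topics[doc].items())
```

LDA's job is to infer both tables from the corpus alone.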
```
# Using gensim to do all the work behind the separation of topics and words
# We need to inform the document-term matrix, number of topics and number of iterations
data = pd.read_pickle('files\\dtm_stop.pkl')
data
# to use gensim first install -> conda install -c conda-forge gensim
from gensim import matutils, models
import scipy.sparse
tdm = data.transpose()
tdm.head()
# we need to put the document-term matrix into a new gensim format
# from df -> sparse matrix
sparse_counts = scipy.sparse.csr_matrix(tdm)
corpus = matutils.Sparse2Corpus(sparse_counts)
corpus
# gensim requires a dictionary of all terms and their respective locations in the term-document matrix
import pickle
cv = pickle.load(open("files\\cv_stop.pkl","rb"))
id2word = dict((v,k) for k, v in cv.vocabulary_.items())
```
# Attempt #1 - Topics in general
```
# now we need to specify 2 other parameters as well
# the number of topics and the number of passes
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=4, passes=10)
lda.print_topics()
```
# Attempt #2 - Only Nouns
```
# create a function to pull out nouns from a string of text
from nltk import word_tokenize, pos_tag
def nouns(text):
'''Given a string of text, tokenize the text and pull out only the nouns'''
is_noun = lambda pos:pos[:2] == "NN"
tokenized = word_tokenize(text)
all_nouns = [word for word,pos in pos_tag(tokenized) if is_noun(pos)]
return ' '.join(all_nouns)
# read the cleaned data, before the CounterVectorizer step
data_clean = pd.read_pickle('files\\data_clean.pkl')
data_clean
# apply the nouns function to the transcript to filter only nouns
data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns))
data_nouns
# now we know some other stopwords related to the nouns
# Let's update our document-term matrix with the new list of stop words
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
# additional words for the stopword list
add_stop_words = ['like','im','know','just','thats','people','youre','think','yeah','said','year','years','yes']
# add new stop words
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)
# re-create document-term matrix
cvn = CountVectorizer(stop_words=stop_words) # using the new stopwords
data_cvn = cvn.fit_transform(data_nouns.transcript) # apply into column transcript
data_dtmn = pd.DataFrame(data_cvn.toarray(),columns=cvn.get_feature_names()) # create another df with the words and frequency
data_dtmn.index = data_nouns.index # use the index from df
data_dtmn
# create the gensim corpus
corpusn = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmn.transpose()))
# create the vocabulary dictionary
id2wordn = dict((v,k) for k,v in cvn.vocabulary_.items())
# lets start with 4 topics
ldan = models.LdaModel(corpus=corpusn, num_topics=4, id2word=id2wordn, passes=10)
ldan.print_topics()
```
# Attempt #3 - Only Nouns/Adjectives
```
def nouns_adj(text):
'''Given a string of text, tokenize the text and pull out only the nouns or adjectives'''
is_noun_adj = lambda pos:pos[:2] == "NN" or pos[:2] == "JJ"
tokenized = word_tokenize(text)
nouns_adj = [word for word,pos in pos_tag(tokenized) if is_noun_adj(pos)]
return ' '.join(nouns_adj)
# apply the nouns_adj function to the transcript to filter only nouns and adjectives
data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj))
data_nouns_adj
# create a new document-term matrix using only nouns and adjectives, remove common words
# re-create document-term matrix
cvna = CountVectorizer(stop_words=stop_words, max_df=.8) # using the new stopwords
data_cvna = cvna.fit_transform(data_nouns_adj.transcript) # apply into column transcript
data_dtmna = pd.DataFrame(data_cvna.toarray(),columns=cvna.get_feature_names()) # create another df with the words and frequency
data_dtmna.index = data_nouns_adj.index # use the index from df
data_dtmna
# create the gensim corpus
corpusna = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmna.transpose()))
# create the vocabulary dictionary
id2wordna = dict((v,k) for k,v in cvna.vocabulary_.items())
# lets start with 4 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=10)
ldana.print_topics()
```
# Identify Topics in Each Document
```
# Our final LDA model
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=80)
ldana.print_topics()
```
### Topics
#### Topic 0: train, wife, chinese, asian
#### Topic 1: indian, russel, asian, chinese
#### Topic 2: coronavirus, mask, helicopter, president, meeting, cold
#### Topic 3: indian, russel, nose, son, india
```
# Now, take a look at which topic each transcript contains
corpus_transformed = ldana[corpusna]
list(zip([a for [(a,b)] in corpus_transformed], data_dtmna.index))
```
### Topics
#### Topic 0: train, wife, chinese, asian -> Ronny
#### Topic 1: indian, russel, asian, chinese -> Nobody
#### Topic 2: coronavirus, mask, helicopter, president, meeting, cold -> Dave
#### Topic 3: indian, russel, nose, son, india -> Russel
# Text Similarity
<div class="alert alert-info">
This tutorial is available as an IPython notebook at [Malaya/example/similarity](https://github.com/huseinzol05/Malaya/tree/master/example/similarity).
</div>
<div class="alert alert-info">
This module was trained on both standard and local (including social media) language structures, so it is safe to use for both.
</div>
```
%%time
import malaya
string1 = 'Pemuda mogok lapar desak kerajaan prihatin isu iklim'
string2 = 'Perbincangan isu pembalakan perlu babit kerajaan negeri'
string3 = 'kerajaan perlu kisah isu iklim, pemuda mogok lapar'
string4 = 'Kerajaan dicadang tubuh jawatankuasa khas tangani isu alam sekitar'
news1 = 'Tun Dr Mahathir Mohamad mengakui pembubaran Parlimen bagi membolehkan pilihan raya diadakan tidak sesuai dilaksanakan pada masa ini berikutan isu COVID-19'
tweet1 = 'DrM sembang pilihan raya tak boleh buat sebab COVID 19'
```
### Calculate similarity using doc2vec
We can use any word vector interface provided by Malaya to use doc2vec similarity interface.
Important parameters,
1. `aggregation`, aggregation function to accumulate word vectors. Default is `mean`.
* ``'mean'`` - mean.
* ``'min'`` - min.
* ``'max'`` - max.
* ``'sum'`` - sum.
* ``'sqrt'`` - square root.
2. `similarity` distance function to calculate similarity. Default is `cosine`.
* ``'cosine'`` - cosine similarity.
* ``'euclidean'`` - euclidean similarity.
* ``'manhattan'`` - manhattan similarity.
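For intuition, the three distance options above can be sketched in plain Python (Malaya computes these internally over the aggregated word vectors; this sketch just shows what each metric measures):

```python
import math

def cosine(u, v):
    # angle-based similarity: dot(u, v) / (|u| * |v|), 1.0 for parallel vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def euclidean(u, v):
    # straight-line distance between the two vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    # sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(u, v))
```

Note that euclidean and manhattan are distances (lower means more similar), while cosine is a similarity.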
#### Using word2vec
I will use `load_news`; the word2vec trained on Wikipedia takes a very long time to load, although it is much more accurate.
```
vocab_news, embedded_news = malaya.wordvector.load_news()
w2v = malaya.wordvector.load(embedded_news, vocab_news)
doc2vec = malaya.similarity.doc2vec(w2v)
```
#### predict for 2 strings
```
doc2vec.predict_proba([string1], [string2], aggregation = 'mean', soft = False)
```
#### predict batch of strings
```
doc2vec.predict_proba([string1, string2], [string3, string4])
```
#### visualize heatmap
```
doc2vec.heatmap([string1, string2, string3, string4])
```
Different similarity functions yield different percentages.
### Calculate similarity using deep encoder
We can use any encoder model provided by Malaya with the encoder similarity interface, for example BERT, XLNET, or skip-thought. Note that these encoder models were not trained for similarity classification; they simply encode the strings into vector representations.
Important parameters,
1. `similarity` distance function to calculate similarity. Default is `cosine`.
* ``'cosine'`` - cosine similarity.
* ``'euclidean'`` - euclidean similarity.
* ``'manhattan'`` - manhattan similarity.
#### using xlnet
```
xlnet = malaya.transformer.load(model = 'xlnet')
encoder = malaya.similarity.encoder(xlnet)
```
#### predict for 2 strings
```
encoder.predict_proba([string1], [string2])
```
#### predict batch of strings
```
encoder.predict_proba([string1, string2, news1, news1], [string3, string4, tweet1, string1])
```
#### visualize heatmap
```
encoder.heatmap([string1, string2, string3, string4])
```
### List available Transformer models
```
malaya.similarity.available_transformer()
```
We trained on [Quora Question Pairs](https://github.com/huseinzol05/Malay-Dataset#quora), [translated SNLI](https://github.com/huseinzol05/Malay-Dataset#snli) and [translated MNLI](https://github.com/huseinzol05/Malay-Dataset#mnli)
Make sure you check the accuracy chart here before selecting a model: https://malaya.readthedocs.io/en/latest/Accuracy.html#similarity
**You might want to use ALXLNET: it is very small (49 MB), but its accuracy is still top-notch.**
### Load transformer model
In this example, I am going to load `alxlnet`, feel free to use any available models above.
```
model = malaya.similarity.transformer(model = 'alxlnet')
```
### Load Quantized model
To load 8-bit quantized model, simply pass `quantized = True`, default is `False`.
We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.
```
quantized_model = malaya.similarity.transformer(model = 'alxlnet', quantized = True)
```
#### predict batch
```python
def predict_proba(self, strings_left: List[str], strings_right: List[str]):
"""
calculate similarity for two different batch of texts.
Parameters
----------
string_left : List[str]
string_right : List[str]
Returns
-------
result : List[float]
"""
```
You need to give a list of left strings and a list of right strings; the first left string is compared with the first right string, and so on.
The similarity model only supports `predict_proba`.
```
model.predict_proba([string1, string2, news1, news1], [string3, string4, tweet1, string1])
quantized_model.predict_proba([string1, string2, news1, news1], [string3, string4, tweet1, string1])
```
#### visualize heatmap
```
model.heatmap([string1, string2, string3, string4])
```
### Vectorize
Let's say you want to visualize sentences in a lower dimension; you can use `model.vectorize`,
```python
def vectorize(self, strings: List[str]):
"""
Vectorize list of strings.
Parameters
----------
strings : List[str]
Returns
-------
result: np.array
"""
```
```
texts = [string1, string2, string3, string4, news1, tweet1]
r = quantized_model.vectorize(texts)
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE().fit_transform(r)
tsne.shape
plt.figure(figsize = (7, 7))
plt.scatter(tsne[:, 0], tsne[:, 1])
labels = texts
for label, x, y in zip(
labels, tsne[:, 0], tsne[:, 1]
):
label = (
'%s, %.3f' % (label[0], label[1])
if isinstance(label, list)
else label
)
plt.annotate(
label,
xy = (x, y),
xytext = (0, 0),
textcoords = 'offset points',
)
```
### Stacking models
More information, you can read at https://malaya.readthedocs.io/en/latest/Stack.html
If you want to stack similarity models, you need to pass the right-hand strings using a keyword parameter,
```python
malaya.stack.predict_stack([model1, model2], List[str], strings_right = List[str])
```
We pass `strings_right` via `**kwargs`.
```
alxlnet = malaya.similarity.transformer(model = 'alxlnet')
albert = malaya.similarity.transformer(model = 'albert')
tiny_bert = malaya.similarity.transformer(model = 'tiny-bert')
malaya.stack.predict_stack([alxlnet, albert, tiny_bert], [string1, string2, news1, news1],
strings_right = [string3, string4, tweet1, string1])
```
# Essential Extensions for Jupyter NB
## Introduction
#### Too long, didn't read:
- Table of Contents 2 - Dynamic table of contents for navigating notebooks
- Collapsible Headings - Easily collapse whole sections of your notebook
- Snippets Menu - Menu to insert common code snippets from the most popular libraries.
- Notify - When cell execution is taking a long time, get notified when it finishes.
- LiveMarkdownPreview - Preview your markdown cells in real time as you type
This notebook is meant to quickly introduce you to all the extensions. Going back and forth between the NBConfigurator tab and here, and refreshing the notebook will break the flow of that, so we are going to turn on/off all extensions introduced in the section at the same time, and then you'll only need to refresh the page once.
The path to turn each extension off and on via command line can be found by clicking on the extension in the Nbextensions tab and looking at require_path. In this case it's `snippets_menu/main` 
#### Enabling essential extensions (important)
```
!jupyter nbextension enable collapsible_headings/main
!jupyter nbextension enable toc2/main
!jupyter nbextension enable snippets_menu/main
!jupyter nbextension enable livemdpreview/livemdpreview
!jupyter nbextension enable notify/notify
```
Note: don't refresh the page yet, as we will use the refresh as an opportunity to show how to navigate the page using the toc2 extension.
### ToC2 (Table of Contents Extension)
<strong>If you work in long notebooks, and especially if you do [notebook development](https://nbdev.fast.ai/), this extension will change your life. </strong>
ToC2 provides a dynamically generated table of contents based on your notebook's header hierarchy. You can display it as a sidebar in the space that Jupyter normally wastes, which lets you easily jump to different sections of your notebook. Each header is also numbered by section and subsection, like 2.0, 2.1, 2.1.1, etc.
In a minute, when you refresh the page, you'll notice a few more options on your toolbar

The navigate button will show you a table of contents (you may need to resize the window by dragging from the lower right corner), but my preferred option is a dedicated sidebar. You get that by clicking the toc button the red arrow is pointing at.
So go ahead and refresh the page, click that button, and then come back here by finding and clicking the ToC2 (Table of Contents Extension) section in your new ToC sidebar
#### Tips and Customization
A great feature of ToC is that when you run large chunks of the notebook, it will visualize this by highlighting any sections of the ToC sidebar that have code cells, and changing their colors as they execute to show their progress.
You can customize ToC directly in the new sidebar by clicking on the settings gearbox next to "Contents". Below is an image with my recommended settings. I'd especially recommend "Display ToC window at startup", and "Leave h1 items out of ToC" as h1 is usually reserved for NB Titles, and will give your ToC an unnecessary layer of nesting

### Collapsible Headings
<strong>Long notebooks don't have to be long anymore. If you use code folding in your favorite editor, you should be using collapsible headings in Jupyter Notebook </strong>
The collapsible headings extension is exactly what it sounds like: it adds little arrows next to each section that, when clicked, collapse that section and all its subsections. Like code folding, it's great for staying focused on only what you need to see at that moment. It adds no new toolbar buttons, just little arrows next to your headings that collapse everything between that heading and the next heading of equal size. Go ahead and try it out.

#### Tips and Customization
There are a few things you can do to make this even better (all these settings can be adjusted by clicking on nbconfigurator -> Collapsible Headings -> then scrolling down).
- Keyboard shortcuts for folding/unfolding one/all sections

- Keyboard shortcuts to insert a new section below. When you hit 'b' on a collapsed cell, it will insert a new cell immediately below inside the subsection, but what you'll most often want is to insert a new header cell below the entire folded group.

- A setting so that collapsing your Table of Contents also collapses the corresponding nb section

One more tip: Make sure you keep your headings in their own cells. Any markdown content in the same cell as the heading won't be collapsible.
### Snippets Menu
<strong>Are you still struggling to remember code you use all the time? If these 3 lines just won't stick in your brain:</strong>
`%matplotlib inline`
`%load_ext autoreload`
`%autoreload 2`
<strong>There's an extension for that</strong>

Snippets Menu is an add on to the toolbar in the form of a dropdown menu with the code for common operations in the most common libraries. If opening a new window and googling the code/syntax of something is a routine part of your workflow, this extension will save you time and tabs.
#### Tips and Customization
Possibly the best feature of Snippets Menu is the ability to fully customize your snippets by both adding to and removing from the defaults. I am really excited to see what the community comes up with here, and am especially excited to see some experts build fastai, pytorch, Latex menus and more.
### Notify
<strong>Do you sometimes run code that takes a while to complete, so you do something else and interrupt your attention every 1-2 minutes to come back and check on it? Notify is a better way</strong>
Notify is an extension that starts a timer whenever jupyter is running a cell. If the execution exceeds a certain amount of time (user defined) you are notified when it's done.

Try setting it to 5 (seconds) and then run the code in the next cell to see what happens.
```
import time
for i in range(6):
print(i+1)
time.sleep(1)
```
This is awesome, but something is missing: Notify doesn't keep me updated on the progress or estimate how long it will take to complete. It's outside the scope of this tutorial, but this is what the [fastprogress library](https://github.com/fastai/fastprogress) is for. It is extremely easy to use in 99% of cases: just import it and wrap your iterable with `pbar`, then get your notification from Notify when it's all done.
```
# skip this cell if you don't have fastprogress, or better yet install it now by uncommenting the line below
# !pip install fastprogress
from fastprogress import progress_bar as pbar
for i in pbar(range(110)): #changed from for i in range(110):
time.sleep(0.05)
```
#### Tips and Customization
```
import time
time.sleep(15)  # try sleeps above and below your notify threshold to see when the notification fires
time.sleep(65)
```
### LiveMarkdownPreview
<strong>We love jupyter because it is dynamic and interactive. Why do I have to type my markdown and then execute the cell to see what it looks like? I should be able to preview it in real time.</strong>
This extension requires no new knowledge, it just works. You type in a markdown cell and it displays the rendered output as your readers will see it. No more repeatedly hitting shift-enter to see if you wrote the correct markdown. This plus a snippets menu for markdown and you'll be unstoppable.
#### Customization
There are just two options for LiveMarkdownPreview in the configurator. One for how often the preview is updated (default 500ms seems perfect), and the other is whether you want the output to appear on the side, or below the cell you're working in. The default is below and I'd recommend keeping it that way, but the choice is yours.
### Disabling Essential Extensions
My essential extensions might not be essential for you, so if you'd like to disable one or more of them, uncomment and run the relevant line, or toggle them in your nbconfigurator
```
# !jupyter nbextension disable collapsible_headings/main
# !jupyter nbextension disable toc2/main
# !jupyter nbextension disable snippets_menu/main
# !jupyter nbextension disable livemdpreview/livemdpreview
# !jupyter nbextension disable notify/notify
```
# Python Dictionaries
## Dictionaries
* Collection of Key - Value pairs
* also known as associative array
* also known as map
* HashMap is another alias
* historically unordered; insertion order is preserved since Python 3.7 (CPython 3.6 as an implementation detail)
* keys unique in one dictionary
* dictionaries can be nested (could have lists inside, dictionaries, and of course primitives such as strings)
* storing, extracting
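A quick sketch tying the properties above together:

```python
d = {"a": 1, "b": 2}                 # key-value pairs in curly braces
d["c"] = [3, {"nested": True}]       # values can be nested lists and dictionaries
d["a"] = 10                          # keys are unique: assigning again overwrites
assert list(d) == ["a", "b", "c"]    # insertion order preserved (Python 3.7+)
```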
```
emptyd = {} #shortest using curly braces
len(emptyd)
type(emptyd)
emptyd
empty_dict_2 = dict() # alternative syntax
empty_dict_2
tel = {'jack': 4098, 'sape': 4139} # so { key: value, key2:value2, and so on}
# there is no requirement for values to be of the same type
print(tel)
# so what can be keys ? what happens if we add second identical key
tel = {'jack': 4098, 'sape': 4139, 'jack': 9000}
# here i am overwriting old tel with new tel dictionary
# you generally should not do this, keys should be unique
print(tel)
# add a new key-value pair
tel['guido'] = 4127
print(tel.keys())
print(tel.values())
# add key 'valdis' with value 4127 to our tel dictionary
tel['valdis'] = 4127
tel # so values can be same
tel['Valdis'] = 4127 # so key Valdis is different from key valdis
tel
tel['Valdis']
tel['Liga']
# if we do not want to get errors when checking for nonexistant keys we will need some way of dealing with this
tel['valdis'] = [9000,2640,2911] # so values can be nested
tel
tel_list = [['jack',9000], ['valdis',9000]]
tel_list
#get value from key in dictionary
# very fast even in large dictionaries! O(1)
# this means even a huge dictionary will retrieve the values in constant time - ie very quickly
tel['jack']
tel['sape'] = 54545
# try adding your own name to tel dictionary
tel['valdis']
tel['nevaldis'] # this gives a KeyError because no such key exists
# check for key in our dictionary
'valdis' in tel
'peteris' in tel
# key = 'nnevaldis'
key = 'valdis'
if key in tel:
print(tel[key]) # we print the value of the key if there is such a key
else:
print("No such key")
def check_key(my_dict, key, default="No such key"):
if key in my_dict:
return my_dict[key]
else:
return default
check_key(tel, 'Valdis')
check_key(tel, "Liga")
check_key(tel, "Peter", "No peter here!")
# Turns out all that work on my nice function was not needed because Python already provides this :)
type(None)
tel.get('valdis') # gets value by key without errors
# so you can use get method on a dictionary and it will return value or None by default if no key exists
print(tel.get('nevaldis')) #by default on bad key we get None
tel.get('nevaldis', 555-1212) # careful: 555-1212 is arithmetic here and evaluates to -657
tel.get('nevaldis', '555-1212') # we can change the return on bad keys to our own
tel.get('Liga', "Liga where are you?")
tel['Liga'] = 911 # so we set key Liga to value 911
tel.get('Liga', "Liga where are you?")
# remove key value pair # rarer
tel['sape'] = 665453 # create key or overwrite if it existed
del tel['sape']
tel
tel.get('sape', 'Sorry no such key')
'valdis' in tel.keys()
# this will be slower going through all the key:value pairs
4127 in tel.values()
112 in tel.values()
type(tel.values())
tel.values()
dir(tel.values())
tel.values()
value_list = list(tel.values())
value_list # now this list lives separately from our tel dictionary
key_list = list(tel.keys())
key_list # again are living their lives separate from our dictionary
tel['irv'] = 4127
tel
# tel.values().count(4127) will not work we need list
# tel.values.count(4127) dict_values has no count
telvalues = list(tel.values())
telvalues.count(4127)
list(tel.values()).count(4127)
list(tel.values()).count(9004127)
tel['jack'] = 9999
def set_key_value(my_dict, key, value):
if key not in my_dict:
my_dict[key] = value
print(f"Added new key {key} value {value} pair")
else:
print("You already have key", key, "value", tel[key])
set_key_value(tel, "police", 911)
tel
set_key_value(tel, "police", 911)
# the above code can be written using setdefault method
tel.setdefault("rtu", 777700) # so this will only work once
tel
type(tel)
# the above code can be written using setdefault method
tel.setdefault("rtu", 9635678) # so this will only work once, if key exists nothing will happen
tel
tel.get('jack'), tel['jack'] # the diffence being that get doesnt throw errors, but gives None for no key
%%timeit
tel.get('jack')
# so that was about 0.4 millionth of a second operation, quite fast
%%timeit
tel['jack']
# so get is about 50% slower due to the extra check for key existence
tel[1] = 5555 # we can use numbers as keys in our dictionary but generally best avoided
tel
tel[1]
# the problem is that numeric keys make dictionary access look like list indexing, but dictionaries work differently
myphonelist=[3432432,242342,24,234,2432]
myphonelist[1] # hard to distinguish without type or a smart IDE (development environment)
valdisphones = tel.get('valdis')
valdisphones # this is just a reference to a list
# how to get the last phone number from valdisphones?
valdisphones[-1]
# since Python 3.7 the original insertion order of keys is preserved
print(tel)
del tel[1]
tel
sorted(tel.keys())
sorted_tel = {}
for key in sorted(tel.keys()):
sorted_tel[key] = tel[key]
# this looping is not free - it visits the whole dictionary, and the sorting also takes time
sorted_tel
telkeys = list(tel.keys())
telkeys
newkeys = []
for key in telkeys:
newkeys.append(str(key))
newkeys
shopdict = {"potatoes":8, "carrots": 5, "beets": 3, "pumpkins":2, "chocolate":10}
shopdict
for key,value in shopdict.items(): # key and value are just names we made up also common is k,v
print(key,value)
print(f"Buying {value} kgs of {key}")
# another method of looping is to just use key
for key in shopdict: # so only key is provided
print(key, shopdict[key]) # no need for shopdict.get since key is guaranteed here
# it is normal to adjust values in a dictionary inside loop
for key in shopdict:
shopdict[key] += 10 # same as shopdict[key] = shopdict[key] + 10
shopdict
# however you should not add new keys to the dictionary we are looping through - that will lead to no good !
newlist = []
newdict = {}
for index,(key,value) in enumerate(shopdict.items(), start=101):
print(index,key,value)
newlist.append([index,key,value])
newdict[key] = (index,value)
print(newlist)
print(newdict)
newdict[102] # looks like list indexing but it's a dictionary key lookup
newdict2 = {}
for item in newlist:
newdict2[item[1]] = {'id':item[0],'quantity':item[2]}
newdict2 # so this will be dictionary with values being separate dictionaries
# so we created a new dictionary which holds dictionaries inside, so nested structure
# working with JSON type of data this is quite common
# can you get me quantity of beets needed from newdict2 ?
newdict2['beets'] # this would work as well newdict2.get('beets')
newdict2['beets']['quantity']
newdict2.get('beets').get('quantity') # somewhat less common, mostly due to laziness i think :)
# there are other ways of creating dictionaries
t3 = dict([['a', 1],['b',2],[3, 'c'], ['3', '3c']]) # so passing 2-D list (tuple would work as well)
t3
t3[3],t3['3'] # two different keys integer 3 and string 3
# alternative way of creating a dictionary using list of tuples ()
t2=dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
print(t2)
names = ['Valdis', 'valdis', 'Antons', 'Anna', 'Kārlis', 'karlis']
names
sorted(names)
```
* `globals()` always returns the dictionary of the module namespace
* `locals()` always returns a dictionary of the current namespace
* `vars()` returns either a dictionary of the current namespace (if called with no argument) or the dictionary of the argument.
```
globals()
'print(a,b)' in globals()['In']
vars().keys()
_61
_i1 # so this was my first command today
```
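A small illustration of the `vars()` behavior described above — the `Phone` class and its attribute are made up for the example:

```python
# vars() with no argument behaves like locals(); with an object
# argument it returns that object's attribute dictionary (__dict__)
class Phone:
    pass

p = Phone()
p.number = 4127               # set an attribute dynamically
print(vars(p))                # the instance's attribute dictionary
print(vars(p) is p.__dict__)  # True - it is literally the same dict
```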
## Let's look at some more dictionary methods
https://docs.python.org/3/tutorial/datastructures.html#dictionaries
```
tel
tel['jack']
poppedvalue = tel.pop('jack', "No KEY Found")
poppedvalue
tel # so key jack is gone only its value is saved in poppedvalue
# while lists often use pop, dictionaries use pop less
poppedvalue = tel.pop('jack', "No KEY Found") # so i set default return if no key found
poppedvalue
poppedvalue = tel.pop('jack') # we get key error if there is no key and no default
poppedvalue
type(tel)
mytuple = tel.popitem() # so this removes and returns both the key AND value which was LAST inserted
mytuple
tel
tel.clear() # IN PLACE deletion of dictionary meaning dictionary remains empty...
tel
tel.update({"Valdis":2640, "Liga":2911}) # IN PLACE so update takes another dictionary to update ours
tel
tel.update([("key1","val1"), ("key2", "val2")])
tel
tel.update([("key1","val100"), ("key3", "val300")]) # this should UPDATE key1 -> somevalue here val100
tel
# be very very careful when changing dictionary when looping
# you should NOT mutate dictionary when looping
# two solutions to this
# one solution go through copy and modify original sort of TOP-DOWN approach
# so let's see about removing key value pairs with 00 in value
needle = "00"
for key, value in tel.copy().items(): # so we loop through a copy
    if needle in str(value):
        print(f"Found {needle} in {value}, key is {key}")
        tel.pop(key)
tel
tel.update([("key1","val100"), ("key3", "val300")]) # this should UPDATE key1 -> somevalue here val100
tel
# another approach is to build up a new dictionary so BOTTOM-UP approach
new_dict = {}
needle = "00"
for key, value in tel.items(): # we could use tel.copy().items() but no need here since we are not changing dict size
    if needle not in str(value):
        print(f"Did not find {needle} in {value}, key is {key} -> adding to new dict")
        new_dict[key] = value
new_dict
# this approach is so common that there is a shorter way of building dictionaries
# called Dictionary Comprehension (remember List Comprehensions?)
new_dict_2 = {k:v for k,v in tel.items() if needle not in str(v)} # not all values were strings
new_dict_2
new_dict3 = {k:str(v) for k,v in new_dict_2.items()}
new_dict3
square_dict = {f"{n} squared:":n*n for n in range(10)}
square_dict
square_dict['8 squared:'] # in this case list with indexes would be just as good
new_dict4 = dict.fromkeys("kartupelis", 500) # so a way of quickly initializing dictionary counter
new_dict4
mytext = "abracadabra my magic"
# how do i count all letter frequency?
my_counter = {} # for storing our buckets (keys)
for char in mytext:
if char in my_counter:
my_counter[char] += 1 # add 1 to existing value from key char
else:
my_counter[char] = 1 # initialize counter
my_counter
# this is so common that Python has a special dictionary just for counting
from collections import Counter
my_count_2 = Counter(mytext)
my_count_2.most_common()
my_count_2
# we can store anything in dictionaries
# including other dictionaries and lists
mydict = {'mylist':[1,2,6,6,"Badac"], 55:165, 'innerd':{'a':100,'b':[1,2,6]}}
mydict
# get 6 out of mydict
mydict['innerd']
mydict['innerd']['b']
# get 6 out of mydict
mydict['innerd']['b'][-1]
#sum all values under mydict['innerd']['b']
sum(mydict['innerd']['b']),min(mydict['innerd']['b']),max(mydict['innerd']['b'])
mydict.keys()
# we can use numeric keys as well!
mydict[55]
mydict['55'] = 330
mydict
mlist = mydict['mylist']
mlist
mytext = mlist[-1]
mytext
mychar = mytext[-3]
mychar
mydict
# get letter d
mydict['mylist'][-1][-3]
# get letter d
mydict['mylist'][-1][2]
mydict
mydict['mylist'][-1][2]
mlist[-1][2]
mydict['real55'] = mydict[55]
del mydict[55]
mydict
sorted(mydict.keys())
myresult = mydict.get('Vadfadfafd')
type(myresult)
mydict.keys()
mydict.get(55)
mydict.get('innerd')
mydict.get('55')
# we get None on nonexisting key instead of KeyError
mydict.get('53253242452')
# here we will get KeyError on nonexisting key
mydict['53253242452']
mydict.get("badkey") == None
mydict
# we can check if dictionary has any items without checking len(mydict)
if mydict:
print(mydict)
print(mydict)
key,value = mydict.popitem()
key, value
mydict['mykey'] ='myvalue'
mydict
mydict
tel
del tel[1]
tel
tel.update({'valdis':1800, 'peteris':900, 'liga':911}) # in place addition meaning tel is modified
tel
tel.fromkeys(['Val','Baiba','Cālis'], 25) # fromkeys returns a NEW dict - tel itself is not modified
{}.fromkeys(['Val','Baiba','Cālis'], 25) # so we could just as well call it on an empty {}
tel.fromkeys(['jack', 'valdis']) # again tel was not really needed; with no value given, keys default to None
tel
newdictwithdefaultvalues = {}.fromkeys(['Val','Baiba','Cālis'], 25)
newdictwithdefaultvalues
# what setdefault does
key= "Valdis"
defaultvalue = 9000
if key in tel:
    print("Key exists, returning existing value", tel[key])
else:
    tel[key] = defaultvalue
tel
tel.setdefault('baiba', 4524352455000)
tel
# notice that value of 'baiba' does not change after the first set
# same as
# if not 'b' in mydict:
# mydict['b'] = 3333
mydict = {}
mydict.setdefault('keyb', 53543543333)
mydict
mydict.setdefault('emptykey')
mydict
# change dictionary key value pair ONLY if key does not exist
mydict.setdefault('a', 'aaaaaaaa')
mydict
# here we overwrite no matter what
mydict['a'] = 'changed a value'
mydict
# and we clear our dictionary
mydict.clear() # clear removes all key-value pairs from dictionary IN PLACE
mydict
type(mydict)
# better not change mydict to int but you could do it
mydict = 5
type(mydict)
import random # this is the standard python library for randomness
random.randint(1,6) # randint includes BOTH start and end
```
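The warnings above about not adding or deleting keys while looping can be shown directly: mutating a dict during iteration raises a RuntimeError, while looping over a copy is safe (the `prices` data is made up for the example):

```python
prices = {"apple": 3, "banana": 5, "cherry": 100}
caught = False
try:
    for key in prices:
        if prices[key] > 10:
            del prices[key]   # changes dict size mid-iteration
except RuntimeError as err:
    caught = True
    print("Got RuntimeError:", err)

# safe pattern: iterate over a copy, mutate the original
prices = {"apple": 3, "banana": 5, "cherry": 100}
for key, value in prices.copy().items():
    if value > 10:
        del prices[key]
print(prices)  # {'apple': 3, 'banana': 5}
```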

```
# generate 100 random dice throws and save in myrandoms
random.seed(42) # so always get same pseudo-random numbers
# myrandoms = []
# for _ in range(100):
# myrandoms.append(random.randint(1,6))
# list comprehension same as above
myrandoms = [random.randint(1,6) for _ in range(1_000)]
myrandoms[:15]
mycounter = {}
# count how many time each digit appears in myrandoms
# loop through all myrandoms and count
for num in myrandoms:
# print(num)
    # check key for existence
if num in mycounter:
mycounter[num] += 1 # mycounter[num] = mycounter[num]+1
else:
mycounter[num] = 1
mycounter
from collections import Counter
# https://docs.python.org/3/library/collections.html#collections.Counter
pycounter = Counter(myrandoms)
pycounter
type(pycounter) # like a dictionary but with some extra benefits
pycounter.most_common(3)
dict(pycounter) # so you can remove the extra benefits
dicounter = dict(pycounter)
dicounter
type(mycounter)
# idiom for looping through dictionary each key - value pair
for key, value in mycounter.items():
print("key", key, "value", value)
# squaredict = {"1 squared":1, "2 squared":4} # up to 10
squaredict = {}
for x in range(1,11):
print("Working with x",x)
# print("My key should look like:")
print(f"{x} squared")
squaredict[f"{x} squared"] = x**2
squaredict
squares = {f"{x} squared":x**2 for x in range(1,11)} # dictionary comprehension
squares
squaredict = {x:x**2 for x in range(1,6)}
squaredict
squarelist = [x**2 for x in range(1,6)] #list comprehension x is a name we came up with
squarelist
# so we can create a dictionary from our string by enumerating it and using index as key
kdict = {f"Letter {i} is":c for i,c in enumerate(list("kartupelis"), start=1)}
kdict
kdict['Letter 1 is']
ldict = {f"{i}" : c for i,c in enumerate(list("kartupelis"), start = 101)}
ldict
word = "kartupelis"
char_codes = {c:ord(c) for c in word}
char_codes
tel
# One last important thing: when looping through a dictionary we must NOT change its set of keys
# we can modify values, but we must NOT add or delete keys of the dictionary we loop through
value_to_delete = 911
for k,v in tel.copy().items(): # you need to create a copy to iterate over
if v == value_to_delete:
del tel[k]
tel
tel.update({'police':911, 'liga':911})
tel
# Using a dictionary comprehension to create a new dictionary and overwrite the old one is fine too
value_to_delete = 911
tel = {k:v for k,v in tel.items() if v != value_to_delete}
tel
# Dictionary keys have to be hashable (immutable) - so lists and dictionaries can NOT be keys, but tuples CAN
tel[("Valdis",180)] = 2910 # this is fine
tel
# Again the whole purpose of dictionary is to get values very quickly from keys
# you can use a dictionary as an in-memory database - temporarily
```
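Since tuples are hashable, a composite key like the `("Valdis",180)` example above works as a tiny in-memory lookup table — the phonebook data here is invented for illustration:

```python
# composite immutable keys: (name, area_code) -> phone number
phonebook = {
    ("Valdis", 371): 2910,
    ("Valdis", 180): 2640,
    ("Liga", 371): 2911,
}
print(phonebook[("Valdis", 371)])  # exact-match lookup, O(1) on average

# lists can NOT be keys because they are mutable (unhashable)
hashable = True
try:
    phonebook[["Valdis", 371]] = 0
except TypeError:
    hashable = False
print("list key allowed:", hashable)  # False
```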
```
import json
import pandas as pd
try:
import requests
except:
!pip install requests
import requests
try:
from tqdm import tqdm
except:
!pip install tqdm
from tqdm import tqdm
# Playlists
df_spotify = pd.read_csv('../data/2000_spotify_sample.csv.gz')
df_spotify
# Build the queries to get responses from the API.
# https://developer.spotify.com/documentation/web-api/reference/#
set_uris = df_spotify.artist_uri.unique()
querys = []
spotify_ids = ""
max_ids = 50 # Maximum number of ids per query (depends on the endpoint)
for idx, track in enumerate(set_uris, start=1):
spotify_ids += track.split(":")[-1]
if idx%max_ids == 0:
querys.append(spotify_ids)
spotify_ids = ""
continue
spotify_ids += ","
if spotify_ids != '':
querys.append(spotify_ids[:-1])
len(querys)
```
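The id-batching loop above can also be written with plain slicing, which avoids the trailing-comma bookkeeping — a sketch assuming the same 50-id limit; `batch_ids` is a helper name invented here:

```python
def batch_ids(ids, max_ids=50):
    """Join ids into comma-separated query strings of at most max_ids each."""
    return [",".join(ids[i:i + max_ids]) for i in range(0, len(ids), max_ids)]

ids = [str(n) for n in range(7)]
print(batch_ids(ids, max_ids=3))  # ['0,1,2', '3,4,5', '6']
```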
# Credentials
```
# https://developer.spotify.com/documentation/general/guides/authorization/client-credentials/
# Requests a 'Basic' authorization with <client_id + : + client_secret> encoded in base64.
import base64
client_id = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
client_secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
usr_pass = client_id + ':' + client_secret
auth_b64 = base64.b64encode(usr_pass.encode()).decode()
url = "https://accounts.spotify.com/api/token"
payload='grant_type=client_credentials'
headers = {
'Content-Type': 'application/x-www-form-urlencoded',
'Authorization': 'Basic ' + auth_b64
}
response = requests.request("POST", url, headers=headers, data=payload)
credentials = response.json()
credentials
```
# Using the API
```
headers = {
'Content-Type': 'application/json',
'Authorization': credentials['token_type'] + ' ' + credentials['access_token']
}
file_name = "artist_sample_test_2"
URL = "https://api.spotify.com/v1/artists?ids="
# save the responses to a .json file
with open('../data/{}.json'.format(file_name), 'w') as f:
f.write('[ \n')
for idx, query in enumerate(tqdm(querys), start=1):
try:
response = requests.request("GET", URL + query, headers=headers)
json.dump(response.json(), f, indent=2)
if idx < len(querys):
f.write(', \n')
except Exception as err:
            print('Error:', err)
f.write('] \n')
# Careful with the path.
with open('../data/{}.json'.format(file_name), 'r') as f:
json_file = json.load(f)
df_spotify = pd.DataFrame(json_file)
df_spotify
df_explode = df_spotify.explode('artists').reset_index()
df_explode
df_json = df_explode.to_json(orient='records')
df_final = pd.json_normalize(json.loads(df_json), meta=['artists'])
df_final
```
# Serialization
```
df_final.to_csv("../data/{}.csv.gz".format(file_name),
index=False,
compression="gzip")
```
### Machine Learning for Engineers: [DecisionTree](https://www.apmonitor.com/pds/index.php/Main/DecisionTree)
- [Decision Tree](https://www.apmonitor.com/pds/index.php/Main/DecisionTree)
- Source Blocks: 4
- Description: Introduction to Decision Tree
- [Course Overview](https://apmonitor.com/pds)
- [Course Schedule](https://apmonitor.com/pds/index.php/Main/CourseSchedule)
```
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier(max_depth=10,random_state=101,\
max_features=None,min_samples_leaf=5)
dtree.fit(XA,yA)
yP = dtree.predict(XB)
from sklearn import datasets
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(max_depth=10,random_state=101,\
max_features=None,min_samples_leaf=5)
# The digits dataset
digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Split into train and test subsets (50% each)
X_train, X_test, y_train, y_test = train_test_split(
data, digits.target, test_size=0.5, shuffle=False)
# Learn the digits on the first half of the digits
classifier.fit(X_train, y_train)
# Test on second half of data
n = np.random.randint(int(n_samples/2),n_samples)
print('Predicted: ' + str(classifier.predict(digits.data[n:n+1])[0]))
# Show number
plt.imshow(digits.images[n], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
# Split a dataset based on an attribute and an attribute value
def test_split(index, value, dataset):
left, right = list(), list()
for row in dataset:
if row[index] < value:
left.append(row)
else:
right.append(row)
return left, right
# Calculate the Gini index for a split dataset
def gini_index(groups, classes):
# count all samples at split point
n_instances = float(sum([len(group) for group in groups]))
# sum weighted Gini index for each group
gini = 0.0
for group in groups:
size = float(len(group))
# avoid divide by zero
if size == 0:
continue
score = 0.0
# score the group based on the score for each class
for class_val in classes:
p = [row[-1] for row in group].count(class_val) / size
score += p * p
# weight the group score by its relative size
gini += (1.0 - score) * (size / n_instances)
return gini
# Select the best split point for a dataset
def get_split(dataset):
class_values = list(set(row[-1] for row in dataset))
b_index, b_value, b_score, b_groups = 999, 999, 999, None
for index in range(len(dataset[0])-1):
for row in dataset:
groups = test_split(index, row[index], dataset)
gini = gini_index(groups, class_values)
if gini < b_score:
b_index, b_value, b_score, b_groups = index, row[index], gini, groups
return {'index':b_index, 'value':b_value, 'groups':b_groups}
# Create a terminal node value
def to_terminal(group):
outcomes = [row[-1] for row in group]
return max(set(outcomes), key=outcomes.count)
# Create child splits for a node or make terminal
def split(node, max_depth, min_size, depth):
left, right = node['groups']
del(node['groups'])
# check for a no split
if not left or not right:
node['left'] = node['right'] = to_terminal(left + right)
return
# check for max depth
if depth >= max_depth:
node['left'], node['right'] = to_terminal(left), to_terminal(right)
return
# process left child
if len(left) <= min_size:
node['left'] = to_terminal(left)
else:
node['left'] = get_split(left)
split(node['left'], max_depth, min_size, depth+1)
# process right child
if len(right) <= min_size:
node['right'] = to_terminal(right)
else:
node['right'] = get_split(right)
split(node['right'], max_depth, min_size, depth+1)
# Build a decision tree
def build_tree(train, max_depth, min_size):
root = get_split(train)
split(root, max_depth, min_size, 1)
return root
# Print a decision tree
def print_tree(node, depth=0):
if isinstance(node, dict):
print('%s[X%d < %.3f]' % ((depth*' ', (node['index']+1), node['value'])))
print_tree(node['left'], depth+1)
print_tree(node['right'], depth+1)
else:
print('%s[%s]' % ((depth*' ', node)))
dataset = [[2.771244718,1.784783929,0],
[1.728571309,1.169761413,0],
[3.678319846,2.81281357,0],
[3.961043357,2.61995032,0],
[2.999208922,2.209014212,0],
[7.497545867,3.162953546,1],
[9.00220326,3.339047188,1],
[7.444542326,0.476683375,1],
[10.12493903,3.234550982,1],
[6.642287351,3.319983761,1]]
tree = build_tree(dataset, 1, 1)
print_tree(tree)
# Make a prediction with a decision tree
def predict(node, row):
if row[node['index']] < node['value']:
if isinstance(node['left'], dict):
return predict(node['left'], row)
else:
return node['left']
else:
if isinstance(node['right'], dict):
return predict(node['right'], row)
else:
return node['right']
# predict with a stump
stump = {'index': 0, 'right': 1, 'value': 6.642287351, 'left': 0}
for row in dataset:
prediction = predict(stump, row)
print('Expected=%d, Got=%d' % (row[-1], prediction))
```
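As a sanity check on `gini_index`, the weighted Gini for a pure split of a toy two-class dataset is 0.0 and for a 50/50 mix in each group it is 0.5 — a small standalone recomputation (the function body mirrors the one above):

```python
def gini_index(groups, classes):
    # count all samples at the split point
    n_instances = float(sum(len(group) for group in groups))
    gini = 0.0
    for group in groups:
        size = float(len(group))
        if size == 0:  # avoid divide by zero
            continue
        # sum of squared class proportions within the group
        score = sum(([row[-1] for row in group].count(c) / size) ** 2
                    for c in classes)
        gini += (1.0 - score) * (size / n_instances)
    return gini

# worst case: each group is a 50/50 mix of the two classes
print(gini_index([[[0], [1]], [[0], [1]]], [0, 1]))  # 0.5
# best case: each group is pure
print(gini_index([[[0], [0]], [[1], [1]]], [0, 1]))  # 0.0
```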
### Load Input Data
```
PROCESSED_DATA = "./processed-data"
import torch
import pandas as pd
import numpy as np
dataset = pd.read_csv('{}/latestSequence.csv'.format(PROCESSED_DATA), header = 0)
dataset.set_index(dataset.columns[0], inplace=True)
print(dataset[-24:])
input_data = np.array(dataset)
mean = np.mean(input_data, axis=0)
std = np.std(input_data, axis=0)
input_data = (input_data - mean)/std
input_data = torch.Tensor(input_data).unsqueeze(0)
print(input_data.shape)
```
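The normalization above is a per-column z-score, and the model outputs are later mapped back with `pred * std + mean`. The round trip can be checked in isolation with a toy array:

```python
import numpy as np

data = np.array([[10.0, 1.0], [20.0, 3.0], [30.0, 5.0]])
mean = np.mean(data, axis=0)        # per-column mean
std = np.std(data, axis=0)          # per-column standard deviation
normalized = (data - mean) / std    # zero mean, unit variance per column

restored = normalized * std + mean  # the de-normalization step
print(np.allclose(restored, data))  # True
```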
### Set Model
```
import torch
import torch.nn as nn
class SimpleLSTM(nn.Module):
def __init__(self, input_size = 7, output_size = 1, hidden_size=100, num_layers=1):
super().__init__()
self.input_size = input_size
self.output_size = output_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm = nn.LSTM(self.input_size, self.hidden_size, self.num_layers, batch_first=True)
self.fc = nn.Linear(self.hidden_size, self.output_size)
def init_hidden(self, batch_size):
hidden = torch.zeros(self.num_layers, batch_size, self.hidden_size)
cell = torch.zeros(self.num_layers, batch_size, self.hidden_size)
return hidden, cell
def forward(self, x):
# hidden, cell state init
h, c = self.init_hidden(x.size(0))
h, c = h.to(x.device), c.to(x.device)
out, (h, c) = self.lstm(x, (h, c))
final_output = self.fc(out[:, -1:, :])
final_output = torch.squeeze(final_output, dim = 1) # shape (100,1)
return final_output
```
### Load Model and Predict
The input data are 24 sets of 480 hours of data, ending at hour 00, 01, ..., 23 on Aug. 21st (yesterday).
The predictions are the temperatures at hour 00, 01, ..., 23 on Aug. 22nd (today).
```
model_path = './saved/LSTM/best_model.pt'
model = SimpleLSTM()
checkpoint = torch.load(model_path)
state_dict = checkpoint['net']
model.load_state_dict(state_dict)
preds = []
for count in range(0,24):
pred = model(input_data[:, count:480+count, :])
pred = pred.item() * std[0] + mean[0] # de-normalization
preds.append(pred)
count = 0
for pred in preds:
print("{:02d} - {:2.2f}".format(count, pred))
count = count+1
# The actual temp was 17.0
# http://climate.weather.gc.ca/climate_data/hourly_data_e.html?hlyRange=2008-07-15%7C2019-08-07&dlyRange=2008-07-15%7C2019-08-07&mlyRange=%7C&StationID=47267&Prov=ON&urlExtension=_e.html&searchType=stnProv&optLimit=specDate&StartYear=1840&EndYear=2019&selRowPerPage=25&Line=74&Month=8&Day=1&lstProvince=ON&timeframe=1&Year=2019
```
### Summary
1) The LSTM model was trained on Kingston Climate Station's data between 2015.1.1 and 2019.8.21.
Weakness - the prediction depends on only one station in Kingston.
2) The model takes 480 hours of data as input and predicts the temperature 24 hours later.
### Stock Market Prediction And Forecasting Using Stacked LSTM
### Import the Libraries
```
import numpy as np
import pandas as pd
from pandas_datareader import data, wb
from pandas.util.testing import assert_frame_equal
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import math
import datetime
import plotly
import cufflinks as cf
cf.go_offline()
%matplotlib inline
```
### Set Duration
```
start = datetime.datetime(2015, 7, 11)
end = datetime.datetime(2020, 7, 11)
```
### Import the data using DataReader
```
df = data.DataReader("GOOG",'yahoo',start,end)
df.head()
df.tail()
```
### Exploratory Data Analysis
#### Maximum Closing Rate
```
df.xs(key='Close',axis=1).max()
```
#### Visualization (Closing Rate)
```
df.xs(key='Close',axis=1).iplot()
```
#### 30-day Moving Average for Close Price
```
plt.figure(figsize=(12,5))
df['Close'].loc['2019-07-10':'2020-07-10'].rolling(window=30).mean().plot(label='30 Day Moving Avg.')
df['Close'].loc['2019-07-10':'2020-07-10'].plot(label='Close')
plt.legend()
df0 = df[['Open','High','Low','Close']].loc['2019-07-10':'2020-07-10']
df0.iplot(kind='candle')
df['Close'].loc['2019-07-10':'2020-07-10'].ta_plot(study='sma',periods=[9,18,27])
```
#### Let's Reset the Index to Close
```
df1=df.reset_index()['Close']
df1
```
#### Using MinMaxScaler
```
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler(feature_range=(0,1))
df1=scaler.fit_transform(np.array(df1).reshape(-1,1))
print(df1)
```
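`MinMaxScaler(feature_range=(0,1))` simply applies `(x - min) / (max - min)` column-wise; a hand-rolled numpy version on toy data gives the same result, which also clarifies the `inverse_transform` calls used later:

```python
import numpy as np

x = np.array([[100.0], [150.0], [200.0]])
x_min, x_max = x.min(axis=0), x.max(axis=0)
scaled = (x - x_min) / (x_max - x_min)       # maps the column into [0, 1]
print(scaled.ravel())                        # [0.  0.5 1. ]

unscaled = scaled * (x_max - x_min) + x_min  # the inverse transform
print(np.allclose(unscaled, x))              # True
```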
#### Splitting the Close data into Train and Test sets
```
training_size=int(len(df1)*0.70)
test_size=len(df1)-training_size
train_data,test_data=df1[0:training_size,:],df1[training_size:len(df1),:1]
training_size,test_size
train_data
# convert an array of values into a dataset matrix
def create_dataset(dataset, time_step=1):
dataX, dataY = [], []
for i in range(len(dataset)-time_step-1):
a = dataset[i:(i+time_step), 0] ###i=0, 0,1,2,3-----99 100
dataX.append(a)
dataY.append(dataset[i + time_step, 0])
return np.array(dataX), np.array(dataY)
# reshape into X=t,t+1,t+2,t+3 and Y=t+4
time_step = 100
X_train, y_train = create_dataset(train_data, time_step)
X_test, y_test = create_dataset(test_data, time_step)
print(X_train.shape), print(y_train.shape)
print(X_test.shape), print(y_test.shape)
# reshape input to be [samples, time steps, features] which is required for LSTM
X_train =X_train.reshape(X_train.shape[0],X_train.shape[1] , 1)
X_test = X_test.reshape(X_test.shape[0],X_test.shape[1] , 1)
```
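`create_dataset` builds overlapping windows: each row of X holds `time_step` consecutive values and y is the value right after that window. A tiny run makes the shapes concrete (the function is copied from above):

```python
import numpy as np

def create_dataset(dataset, time_step=1):
    # slide a window of length time_step over the series;
    # the target is the element immediately after each window
    dataX, dataY = [], []
    for i in range(len(dataset) - time_step - 1):
        dataX.append(dataset[i:(i + time_step), 0])
        dataY.append(dataset[i + time_step, 0])
    return np.array(dataX), np.array(dataY)

series = np.arange(10, dtype=float).reshape(-1, 1)  # [[0],[1],...,[9]]
X, y = create_dataset(series, time_step=3)
print(X.shape, y.shape)  # (6, 3) (6,)
print(X[0], y[0])        # [0. 1. 2.] 3.0
```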
### Stacked LSTM Model
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
model=Sequential()
model.add(LSTM(50,return_sequences=True,input_shape=(100,1)))
model.add(LSTM(50,return_sequences=True))
model.add(LSTM(50))
model.add(Dense(1))
model.compile(loss='mean_squared_error',optimizer='adam')
model.summary()
model.fit(X_train,y_train,validation_data=(X_test,y_test),epochs=100,batch_size=64,verbose=1)
```
### Lets Predict
```
train_predict=model.predict(X_train)
test_predict=model.predict(X_test)
# Transform back to original form
train_predict=scaler.inverse_transform(train_predict)
test_predict=scaler.inverse_transform(test_predict)
### Calculate RMSE performance metrics
from sklearn.metrics import mean_squared_error
math.sqrt(mean_squared_error(y_train,train_predict))
### Test Data RMSE
math.sqrt(mean_squared_error(y_test,test_predict))
```
### Let's Visualize the Predictions
```
# shift train predictions for plotting
look_back=100
trainPredictPlot = np.empty_like(df1)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(train_predict)+look_back, :] = train_predict
# shift test predictions for plotting
testPredictPlot = np.empty_like(df1)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(train_predict)+(look_back*2)+1:len(df1)-1, :] = test_predict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(df1))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
len(test_data)
x_input=test_data[278:].reshape(1,-1)
x_input.shape
temp_input=list(x_input)
temp_input=temp_input[0].tolist()
temp_input
# demonstrate prediction for next 10 days
from numpy import array
lst_output=[]
n_steps=100
i=0
while(i<30):
if(len(temp_input)>100):
#print(temp_input)
x_input=np.array(temp_input[1:])
print("{} day input {}".format(i,x_input))
x_input=x_input.reshape(1,-1)
x_input = x_input.reshape((1, n_steps, 1))
#print(x_input)
yhat = model.predict(x_input, verbose=0)
print("{} day output {}".format(i,yhat))
temp_input.extend(yhat[0].tolist())
temp_input=temp_input[1:]
#print(temp_input)
lst_output.extend(yhat.tolist())
i=i+1
else:
x_input = x_input.reshape((1, n_steps,1))
yhat = model.predict(x_input, verbose=0)
print(yhat[0])
temp_input.extend(yhat[0].tolist())
print(len(temp_input))
lst_output.extend(yhat.tolist())
i=i+1
print(lst_output)
```
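Stripped of the Keras calls, the 30-day loop above is a rolling-window recursion: predict one step, append the prediction to the window, drop the oldest value, repeat. A minimal sketch with a stand-in `model` (here just the mean of the window, invented for illustration) shows the mechanics:

```python
def forecast(history, steps, n_steps, model):
    """Recursive multi-step forecast: feed each prediction back in."""
    window = list(history[-n_steps:])  # last n_steps observations
    output = []
    for _ in range(steps):
        yhat = model(window)           # one-step-ahead prediction
        output.append(yhat)
        window = window[1:] + [yhat]   # slide the window forward
    return output

# stand-in model: predicts the mean of the current window
mean_model = lambda w: sum(w) / len(w)
preds = forecast([1.0, 2.0, 3.0, 4.0], steps=3, n_steps=2, model=mean_model)
print(preds)  # [3.5, 3.75, 3.625]
```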
### Predictions for Next 30 Days
```
day_new=np.arange(1,101)
day_pred=np.arange(101,131)
len(df1)
plt.plot(day_new,scaler.inverse_transform(df1[1160:]))
plt.plot(day_pred,scaler.inverse_transform(lst_output))
df3=df1.tolist()
df3.extend(lst_output)
plt.plot(df3[1200:])
df3=scaler.inverse_transform(df3).tolist()
plt.plot(df3)
```
**CONCLUSION:** Here, we can see that the predictions seem to be close to perfect. The error rates are quite low, which is a good sign for our model.
# Classify Genres and Emotions in Songs Using Deep Learning
## Description:
The goal of this lab is to recognize the genre and extract the emotions from spectrograms of music songs. We are given 2 datasets:
- Free Music Archive (FMA) genre that contains 3834 samples from 20 music genres.
- Multitask music dataset that contains 1497 samples with labels about the emotions such as valence, energy and danceability.
All samples are spectrograms that have been extracted from 30-second clips of different songs.
We will analyze the spectrograms using deep learning architectures such as Recurrent Neural Networks and Convolutional Neural Networks. The exercise is divided into 5 parts:
1. Data analysis and familiarize with spectrograms.
2. Implement classifiers about the music genre using the FMA dataset.
3. Implement regression models for predicting valence, energy and danceability.
4. Use of modern training techniques, such as transfer and multitask learning, to improve the previous results.
5. Submit results in the Kaggle competition of the exercise.
## Implementation
In this preparatory lab, we will classify music genres using the spectrograms.
```
# Import necessary libraries
import numpy as np
import copy
import re
import os
import pandas as pd
import random
import librosa.display
import matplotlib.pyplot as plt
# sklearn
from sklearn.metrics import f1_score, accuracy_score, recall_score, classification_report
from sklearn.preprocessing import LabelEncoder
# Pytorch
import torch
from torch import nn
from torch import optim
from torch.utils.data import Dataset
from torch.utils.data import SubsetRandomSampler, DataLoader
```
### Step 0: Familiarize with kaggle kernels
Open a private kernel in kaggle and load the data. Run the command **os.listdir("../input/patreco3-multitask-affective-music/data/")** to check the subfolders of the dataset. Try enabling and disabling the GPU and commit your changes.
```
os.listdir("../input/patreco3-multitask-affective-music/data/")
```
## Step 1: Familiarize with spectrograms in mel scale
1. Choose two files randomly
```
fixed = True
if not fixed:
# Open train_labels.txt file and choose two random lines (with different labels).
with open('../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/train_labels.txt', 'r') as f:
lines = f.readlines()
# The first element is the headers.
train_size = len(lines) - 1
idx_1 = random.randint(1,train_size)
filename_1, label_1 = lines[idx_1].split()
label_2 = label_1
while (label_2 == label_1):
idx_2 = random.randint(1,train_size)
filename_2, label_2 = lines[idx_2].split()
else:
filename_1 = '63227.fused.full.npy.gz'
label_1 = 'Blues'
filename_2 = '28466.fused.full.npy.gz'
label_2 = 'Jazz'
print('1st file: %s %s' %(filename_1, label_1))
print('2nd file: %s %s' %(filename_2, label_2))
```
- Load the spectrograms and keep the mel-spectrograms.
```
spec_1 = np.load('../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/train/' + filename_1.strip(".gz"))
spec_2 = np.load('../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/train/' + filename_2.strip(".gz"))
print('Shape of 1st spectrogram:')
print(spec_1.shape)
print('Shape of 2nd spectrogram:')
print(spec_2.shape)
mel_1 = spec_1[:128]
mel_2 = spec_2[:128]
```
- Plot the spectrograms.
```
plt.rcParams['figure.figsize'] = [30, 10]
plt.subplot(1, 2, 1)
plt.title(label_1, fontsize=30)
librosa.display.specshow(mel_1)
plt.subplot(1, 2, 2)
plt.title(label_2, fontsize=30)
librosa.display.specshow(mel_2)
```
In the above spectrograms, the horizontal axis represents time, and the vertical axis represents frequency. A third dimension indicating the amplitude of a particular frequency at a particular time is represented by the intensity of color of each point in the image. If we compare the above spectrograms, we can see that the Blues music has lower frequencies the entire time, while the Jazz music has many changes in the frequency over time. More generally, songs from the same genre will have similar frequency changes over time.
## Step 2: beat-synced spectrograms
- Print the shape of the mel-spectrograms.
```
print('Shape of 1st mel-spectrogram:')
print(mel_1.shape)
print('Shape of 2nd mel-spectrogram:')
print(mel_2.shape)
```
We can see that the timesteps of the mel-spectrograms are 1291 and 1291 respectively. If we trained our LSTM on these data, our model would not be efficient enough, since the length of each sample would be very large. In order to resolve this problem, we should synchronize the spectrogram with the beat of the music. This is done by taking the median of the frames between consecutive beat points. The final files are placed in '/kaggle/input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/'.
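The beat-synchronization described above — taking the median of the spectrogram frames between consecutive beat positions — can be sketched with numpy. The beat indices here are made up; in practice they would come from a beat tracker such as `librosa.beat.beat_track`:

```python
import numpy as np

def beat_sync(spec, beat_frames):
    """Collapse the frames between consecutive beats to their median.

    spec: (n_bands, n_frames) array; beat_frames: increasing frame indices.
    """
    cols = [np.median(spec[:, s:e], axis=1)
            for s, e in zip(beat_frames[:-1], beat_frames[1:])]
    return np.stack(cols, axis=1)

spec = np.arange(24, dtype=float).reshape(2, 12)  # toy 2-band "spectrogram"
synced = beat_sync(spec, [0, 4, 8, 12])           # 3 inter-beat segments
print(synced.shape)  # (2, 3) - far fewer timesteps than the original
```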
```
spec_1_beat = np.load('../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/train/' + filename_1.strip(".gz"))
spec_2_beat = np.load('../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/train/' + filename_2.strip(".gz"))
mel_1_beat = spec_1_beat[:128]
mel_2_beat = spec_2_beat[:128]
print('Shape of 1st mel-spectrogram after beat-sync:')
print(mel_1_beat.shape)
print('Shape of 2nd mel-spectrogram after beat-sync:')
print(mel_2_beat.shape)
plt.subplot(1, 2, 1)
plt.title(label_1, fontsize=30)
librosa.display.specshow(mel_1_beat)
plt.subplot(1, 2, 2)
plt.title(label_2, fontsize=30)
librosa.display.specshow(mel_2_beat)
```
We observe that the changes in the frequency are almost the same, even though there are far fewer timesteps.
## Step 3: Familiarize with chromagrams
```
chroma_1 = spec_1[128:]
chroma_2 = spec_2[128:]
print('Shape of 1st chromagram:')
print(chroma_1.shape)
print('Shape of 2nd chromagram:')
print(chroma_2.shape)
chroma_1_beat = spec_1_beat[128:]
chroma_2_beat = spec_2_beat[128:]
print('Shape of 1st chromagram after beat-sync:')
print(chroma_1_beat.shape)
print('Shape of 2nd chromagram after beat-sync:')
print(chroma_2_beat.shape)
plt.subplot(1, 2, 1)
plt.title(label_1, fontsize=30)
librosa.display.specshow(chroma_1)
plt.subplot(1, 2, 2)
plt.title(label_2, fontsize=30)
librosa.display.specshow(chroma_2)
plt.subplot(1, 2, 1)
plt.title(label_1, fontsize=30)
librosa.display.specshow(chroma_1_beat)
plt.subplot(1, 2, 2)
plt.title(label_2, fontsize=30)
librosa.display.specshow(chroma_2_beat)
```
## Step 4: Load and Analyze data
- Combine similar classes and remove underrepresented classes. Similar classes are merged because our classifier would find them difficult to distinguish. Underrepresented classes, on the other hand, would not be recognized correctly because there is too little training data for them.
```
class_mapping = {
'Rock': 'Rock',
'Psych-Rock': 'Rock',
'Indie-Rock': None,
'Post-Rock': 'Rock',
'Psych-Folk': 'Folk',
'Folk': 'Folk',
'Metal': 'Metal',
'Punk': 'Metal',
'Post-Punk': None,
'Trip-Hop': 'Trip-Hop',
'Pop': 'Pop',
'Electronic': 'Electronic',
'Hip-Hop': 'Hip-Hop',
'Classical': 'Classical',
'Blues': 'Blues',
'Chiptune': 'Electronic',
'Jazz': 'Jazz',
'Soundtrack': None,
'International': None,
'Old-Time': None
}
```
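A self-contained sketch of how such a mapping merges classes and drops the ones mapped to `None` (using a subset of the dictionary above; as a simplification, labels absent from the mapping are kept unchanged here):

```python
class_mapping = {'Punk': 'Metal', 'Indie-Rock': None, 'Chiptune': 'Electronic'}

def map_labels(raw_labels, mapping):
    # Replace each label by its mapped class; labels mapped to None
    # are dropped from the dataset entirely.
    mapped = (mapping.get(label, label) for label in raw_labels)
    return [label for label in mapped if label is not None]

labels = map_labels(['Punk', 'Indie-Rock', 'Chiptune', 'Jazz'], class_mapping)
# -> ['Metal', 'Electronic', 'Jazz']
```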
- Split dataset in train and validation set.
```
def torch_train_val_split(dataset, batch_train, batch_eval, val_size=.2, shuffle=True, seed=None):
# Creating data indices for training and validation splits:
dataset_size = len(dataset)
indices = list(range(dataset_size))
val_split = int(np.floor(val_size * dataset_size))
if shuffle:
np.random.seed(seed)
np.random.shuffle(indices)
# Rearrange train and validation set
train_indices = indices[val_split:]
val_indices = indices[:val_split]
# Creating PT data samplers and loaders:
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = DataLoader(dataset,
batch_size=batch_train,
sampler=train_sampler)
val_loader = DataLoader(dataset,
batch_size=batch_eval,
sampler=val_sampler)
return train_loader, val_loader
```
- Define some useful functions for loading spectrograms and chromagrams
```
def read_fused_spectrogram(spectrogram_file):
spectrogram = np.load(spectrogram_file)
return spectrogram.T
def read_mel_spectrogram(spectrogram_file):
spectrogram = np.load(spectrogram_file)[:128]
return spectrogram.T
def read_chromagram(spectrogram_file):
spectrogram = np.load(spectrogram_file)[128:]
return spectrogram.T
```
- Define an encoder for the labels.
```
class LabelTransformer(LabelEncoder):
def inverse(self, y):
try:
return super(LabelTransformer, self).inverse_transform(y)
except:
return super(LabelTransformer, self).inverse_transform([y])
def transform(self, y):
try:
return super(LabelTransformer, self).transform(y)
except:
return super(LabelTransformer, self).transform([y])
```
- Define a PaddingTransformer in order to convert all input sequences to the same length.
```
class PaddingTransform(object):
def __init__(self, max_length, padding_value=0):
self.max_length = max_length
self.padding_value = padding_value
def __call__(self, s):
if len(s) == self.max_length:
return s
if len(s) > self.max_length:
return s[:self.max_length]
if len(s) < self.max_length:
s1 = copy.deepcopy(s)
pad = np.zeros((self.max_length - s.shape[0], s.shape[1]), dtype=np.float32)
s1 = np.vstack((s1, pad))
return s1
```
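A quick check of the transform's behaviour, written as a standalone function mirroring `PaddingTransform.__call__` so it runs without the class:

```python
import numpy as np

def pad_or_truncate(s, max_length, padding_value=0.0):
    # Truncate sequences longer than max_length; zero-pad shorter ones,
    # so every sample ends up with exactly max_length timesteps.
    if s.shape[0] >= max_length:
        return s[:max_length]
    pad = np.full((max_length - s.shape[0], s.shape[1]), padding_value,
                  dtype=s.dtype)
    return np.vstack((s, pad))

padded = pad_or_truncate(np.ones((2, 3)), 5)   # (5, 3): rows 2..4 are zeros
trimmed = pad_or_truncate(np.ones((7, 3)), 5)  # (5, 3)
```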
- Define Pytorch dataset
```
class SpectrogramDataset(Dataset):
def __init__(self, path, class_mapping=None, train=True, max_length=-1, read_spec_fn=read_fused_spectrogram):
t = 'train' if train else 'test'
p = os.path.join(path, t)
self.index = os.path.join(path, "{}_labels.txt".format(t))
self.files, labels = self.get_files_labels(self.index, class_mapping)
self.feats = [read_spec_fn(os.path.join(p, f)) for f in self.files]
self.feat_dim = self.feats[0].shape[1]
self.lengths = [len(i) for i in self.feats]
self.max_length = max(self.lengths) if max_length <= 0 else max_length
self.zero_pad_and_stack = PaddingTransform(self.max_length)
self.label_transformer = LabelTransformer()
if isinstance(labels, (list, tuple)):
self.labels = np.array(self.label_transformer.fit_transform(labels)).astype('int64')
def get_files_labels(self, txt, class_mapping):
with open(txt, 'r') as fd:
lines = [l.rstrip().split('\t') for l in fd.readlines()[1:]]
files, labels = [], []
for l in lines:
label = l[1]
if class_mapping:
label = class_mapping[l[1]]
if not label:
continue
# Kaggle automatically unzips the npy.gz format so this hack is needed
_id = l[0].split('.')[0]
npy_file = '{}.fused.full.npy'.format(_id)
files.append(npy_file)
labels.append(label)
return files, labels
def __getitem__(self, item):
# Return a tuple in the form (padded_feats, label, length)
l = min(self.lengths[item], self.max_length)
return self.zero_pad_and_stack(self.feats[item]), self.labels[item], l
def __len__(self):
return len(self.labels)
```
- Load mel spectrograms
```
mel_specs = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/',
train=True,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_mel_spectrogram)
train_loader_mel, val_loader_mel = torch_train_val_split(mel_specs, 32, 32, val_size=.33)
test_mel = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/',
train=False,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_mel_spectrogram)
test_loader_mel = DataLoader(test_mel, batch_size=32)
```
- Load beat synced mel spectrograms
```
beat_mel_specs = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/',
train=True,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_mel_spectrogram)
train_loader_beat_mel, val_loader_beat_mel = torch_train_val_split(beat_mel_specs, 32, 32, val_size=.33)
test_beat_mel = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/',
train=False,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_mel_spectrogram)
test_loader_beat_mel = DataLoader(test_beat_mel, batch_size=32)
```
- Load beat synced chromagrams
```
beat_chroma = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/',
train=True,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_chromagram)
train_loader_beat_chroma, val_loader_beat_chroma = torch_train_val_split(beat_chroma, 32, 32, val_size=.33)
test_beat_chroma = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/',
train=False,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_chromagram)
test_loader_beat_chroma = DataLoader(test_beat_chroma, batch_size=32)
```
- Load fused spectrogram + chromagram for the full (non-beat-synced) data
```
specs_fused = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/',
train=True,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_fused_spectrogram)
train_loader, val_loader = torch_train_val_split(specs_fused, 32, 32, val_size=.33)
test = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms/',
train=False,
class_mapping=class_mapping,
max_length=-1,
read_spec_fn=read_fused_spectrogram)
test_loader = DataLoader(test, batch_size=32)
```
- Display 2 histograms, one before class_mapping and one after.
```
# Load the beat sync mel-spectrograms without class mapping.
beat_mel_specs_nomap = SpectrogramDataset(
'../input/patreco3-multitask-affective-music/data/fma_genre_spectrograms_beat/',
train=True,
max_length=-1,
read_spec_fn=read_mel_spectrogram)
# Keep all the train labels before the mapping.
labels_before = []
for i in range(len(beat_mel_specs_nomap)):
_, label, _ = beat_mel_specs_nomap[i]
labels_before.append(label)
# Keep all the train labels after the mapping.
labels_after = []
for i in range(len(beat_mel_specs)):
_, label, _ = beat_mel_specs[i]
labels_after.append(label)
# Plot the histograms side by side.
plt.rcParams['figure.figsize'] = [25, 10]
plt.subplot(1, 2, 1)
plt.title('Before class mapping', color='w', fontsize=30)
plt.hist(labels_before, bins=20)
plt.subplot(1, 2, 2)
plt.hist(labels_after, bins=10)
plt.title('After class mapping', color='w', fontsize=30)
```
## Step 5: Music Genre Classification using LSTM
- Define LSTM
```
class BasicLSTM(nn.Module):
def __init__(self, input_dim, rnn_size, output_dim, num_layers, bidirectional=False, dropout=0):
super(BasicLSTM, self).__init__()
self.bidirectional = bidirectional
self.rnn_size = rnn_size
self.feature_size = rnn_size * 2 if self.bidirectional else rnn_size
self.num_layers = num_layers
self.dropout = dropout
# --------------- Insert your code here ---------------- #
# Initialize the LSTM, Dropout, Output layers
self.lstm = nn.LSTM(input_dim, self.rnn_size, self.num_layers, bidirectional=self.bidirectional, batch_first=True, dropout=self.dropout)
self.linear = nn.Linear(self.feature_size, output_dim)
def forward(self, x, lengths):
"""
x : 3D numpy array of dimension N x L x D
N: batch index
L: sequence index
D: feature index
lengths: N x 1
"""
# --------------- Insert your code here ---------------- #
# Obtain the model's device ID
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# You must have all of the outputs of the LSTM, but you need only the last one (that does not exceed the sequence length)
# To get it use the last_timestep method
# Then pass it through the remaining network
if self.bidirectional:
h0 = torch.zeros(self.num_layers*2, x.size(0), self.rnn_size).double().to(DEVICE)
c0 = torch.zeros(self.num_layers*2, x.size(0), self.rnn_size).double().to(DEVICE)
else:
h0 = torch.zeros(self.num_layers, x.size(0), self.rnn_size).double().to(DEVICE)
c0 = torch.zeros(self.num_layers, x.size(0), self.rnn_size).double().to(DEVICE)
# Forward propagate LSTM
lstm_out, _ = self.lstm(x, (h0, c0))
# Forward propagate Linear
last_outputs = self.linear(self.last_timestep(lstm_out, lengths, self.bidirectional))
return last_outputs
def last_timestep(self, outputs, lengths, bidirectional=False):
"""
Returns the last output of the LSTM taking into account the zero padding
"""
if bidirectional:
forward, backward = self.split_directions(outputs)
last_forward = self.last_by_index(forward, lengths)
last_backward = backward[:, 0, :]
# Concatenate and return - maybe add more functionalities like average
return torch.cat((last_forward, last_backward), dim=-1)
else:
return self.last_by_index(outputs, lengths)
@staticmethod
def split_directions(outputs):
direction_size = int(outputs.size(-1) / 2)
forward = outputs[:, :, :direction_size]
backward = outputs[:, :, direction_size:]
return forward, backward
@staticmethod
def last_by_index(outputs, lengths):
# Obtain the model's device ID
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Index of the last output for each sequence.
idx = (lengths - 1).view(-1, 1).expand(outputs.size(0),
outputs.size(2)).unsqueeze(1).to(DEVICE)
return outputs.gather(1, idx).squeeze()
```
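The gather trick in `last_by_index` selects, for each padded sequence, the output at its true final timestep. The same selection in plain NumPy, with toy shapes for illustration:

```python
import numpy as np

def last_by_index(outputs, lengths):
    # outputs: (N, L, D) padded batch; return outputs[i, lengths[i]-1, :]
    # for every sequence i, i.e. the last non-padded timestep.
    idx = np.asarray(lengths) - 1
    return outputs[np.arange(outputs.shape[0]), idx, :]

outputs = np.arange(24, dtype=float).reshape(2, 4, 3)  # N=2, L=4, D=3
last = last_by_index(outputs, lengths=[2, 4])
# sample 0 ends at t=1 -> [3, 4, 5]; sample 1 at t=3 -> [21, 22, 23]
```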
- Define a function that trains the model for an epoch.
```
def train_dataset(_epoch, dataloader, model, loss_function, optimizer):
# IMPORTANT: switch to train mode
# Enable regularization layers, such as Dropout
model.train()
running_loss = 0.0
# Obtain the model's device ID
device = next(model.parameters()).device
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 1 - zero the gradients
# Remember that PyTorch accumulates gradients.
# We need to clear them out before each batch!
optimizer.zero_grad()
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths)
# Step 3 - compute loss: L = loss_function(y, y')
loss = loss_function(y_preds, labels)
# Step 4 - backward pass: compute gradient wrt model parameters
loss.backward()
# Step 5 - update weights
optimizer.step()
# Accumulate loss in a variable.
running_loss += loss.data.item()
return running_loss / index
```
- Define a function that evaluates the model in an epoch.
```
def eval_dataset(dataloader, model, loss_function):
# IMPORTANT: switch to eval mode
# Disable regularization layers, such as Dropout
model.eval()
running_loss = 0.0
y_pred = [] # the predicted labels
y = [] # the gold labels
# Obtain the model's device ID
device = next(model.parameters()).device
# IMPORTANT: in evaluation mode, we don't want to keep the gradients
# so we do everything under torch.no_grad()
with torch.no_grad():
for index, batch in enumerate(dataloader, 1):
# Get the inputs (batch)
inputs, labels, lengths = batch
# Step 1 - move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths) # EX9
# Step 3 - compute loss: L = loss_function(y, y')
# We compute the loss only for inspection (compare train/test loss)
# because we do not actually backpropagate in test time
loss = loss_function(y_preds, labels)
# Step 4 - make predictions (class = argmax of posteriors)
y_preds_arg = torch.argmax(y_preds, dim=1)
# Step 5 - collect the predictions, gold labels and batch loss
y_pred.append(y_preds_arg.cpu().numpy())
y.append(labels.cpu().numpy())
# Accumulate loss in a variable
running_loss += loss.data.item()
return running_loss / index, (y, y_pred)
```
### Test the model by training it on a single batch to make it overfit
For convenience, we save the models and load them back whenever we want to analyze them.
```
# Define useful parameters that are the same for all the models.
num_mel = 128
num_chroma = 12
n_classes = 10
train_batch = next(iter(train_loader_beat_mel))
RNN_SIZE = 32
EPOCHS = 5000
model = BasicLSTM(num_mel, RNN_SIZE, n_classes, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# IMPORTANT: switch to train mode
# Enable regularization layers, such as Dropout
model.train()
running_loss = 0.0
# Obtain the model's device ID
device = next(model.parameters()).device
# Get the inputs (batch)
inputs, labels, lengths = train_batch
# Move the batch tensors to the right device
inputs = inputs.to(device)
labels = labels.to(device)
# Step 1 - zero the gradients
# Remember that PyTorch accumulates gradients.
# We need to clear them out before each batch!
optimizer.zero_grad()
# Step 2 - forward pass: y' = model(x)
y_preds = model(inputs, lengths)
# Step 3 - compute loss: L = loss_function(y, y')
loss = loss_function(y_preds, labels)
# Step 4 - backward pass: compute gradient wrt model parameters
loss.backward()
# Step 5 - update weights
optimizer.step()
# Accumulate loss in a variable.
running_loss = loss.data.item()
if epoch%100 == 0:
print("Epoch %d with loss: %f" %(epoch, running_loss))
torch.save(model, './overtrained_model')
```
### Train the model on the mel spectrograms
```
RNN_SIZE = 32
EPOCHS = 500
model = BasicLSTM(num_mel, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader_mel, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader_mel, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader_mel, model, loss_function)
if epoch%(100) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './mel_32_500')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader_mel, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
```
### Train the model on the beat-synced mel spectrograms
```
RNN_SIZE = 32
EPOCHS = 500
model = BasicLSTM(num_mel, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader_beat_mel, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader_beat_mel, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader_beat_mel, model, loss_function)
if epoch%(100) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './beat_mel_32_500')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader_beat_mel, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
```
As we can see, all the metrics are higher due to the beat synchronization.
```
RNN_SIZE = 64
EPOCHS = 1000
model = BasicLSTM(num_mel, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader_beat_mel, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader_beat_mel, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader_beat_mel, model, loss_function)
if epoch%(100) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './beat_mel_32_1000')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader_beat_mel, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
```
### Train the model on the beat-synced chromagrams
```
RNN_SIZE = 16
EPOCHS = 300
model = BasicLSTM(num_chroma, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader_beat_chroma, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader_beat_chroma, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader_beat_chroma, model, loss_function)
if epoch%(50) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './beat_chroma_16_300')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader_beat_chroma, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
RNN_SIZE = 32
EPOCHS = 500
model = BasicLSTM(num_chroma, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader_beat_chroma, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader_beat_chroma, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader_beat_chroma, model, loss_function)
if epoch%(50) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './beat_chroma_32_500_l2reg')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader_beat_chroma, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
```
Here, our model overfits because we trained it for too many epochs.
### Train the model on the fused spectrograms
```
RNN_SIZE = 32
EPOCHS = 500
model = BasicLSTM(num_chroma+num_mel, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader, model, loss_function)
if epoch%(100) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './beat_fused_32_500')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
RNN_SIZE = 64
EPOCHS = 1000
model = BasicLSTM(num_chroma+num_mel, RNN_SIZE, 10, 1, bidirectional=True)
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.double()
model.to(DEVICE)
loss_function = nn.CrossEntropyLoss().to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5 )
for epoch in range(EPOCHS):
# Train the model for one epoch
train_dataset(epoch, train_loader, model, loss_function, optimizer)
# Evaluate the performance of the model, on both data sets
train_loss, (y_train_gold, y_train_pred) = eval_dataset(train_loader, model, loss_function)
val_loss, (y_val_gold, y_val_pred) = eval_dataset(val_loader, model, loss_function)
if epoch%(100) == 0:
y_train_true = np.concatenate( y_train_gold, axis=0 )
y_val_true = np.concatenate( y_val_gold, axis=0 )
y_train_pred = np.concatenate( y_train_pred, axis=0 )
y_val_pred = np.concatenate( y_val_pred, axis=0 )
print("Epoch %d " %epoch)
print("Train loss: %f" %train_loss)
print("Validation loss: %f" %val_loss)
print("Accuracy for train:" , accuracy_score(y_train_true, y_train_pred))
print("Accuracy for validation:" , accuracy_score(y_val_true, y_val_pred))
print()
torch.save(model, './beat_fused_64_1000')
test_loss, (y_test_gold, y_test_pred) = eval_dataset(test_loader, model, loss_function)
y_test_true = np.concatenate( y_test_gold, axis=0 )
y_test_pred = np.concatenate( y_test_pred, axis=0 )
print(classification_report(y_test_true, y_test_pred))
```
## Step 6: Model evaluation
First some definitions:
- Accuracy: the percentage of all items classified correctly. It is a good measure when the target classes in the data are nearly balanced.
- Precision: the number of items correctly identified as positive out of all items identified as positive; it measures how precise the positive predictions are.
- Recall: the number of items correctly identified as positive out of all true positives.
- F1-score: the harmonic mean of precision and recall.
- Macro-averaged metrics: the arithmetic mean of the per-class metrics, weighing every class equally.
- Micro-averaged metrics: computed globally by pooling the true positives, false positives and false negatives of all classes, so larger classes contribute more.
1. When the difference between the accuracy and the F1-score is big, the dataset is likely imbalanced.
2. When the difference between the macro and micro F1-score is big, the dataset is imbalanced, because the micro F1 is dominated by the larger classes while the macro F1 weighs all classes equally.
3. When it matters more to capture all cases of a certain class than to avoid false alarms, recall is a better metric than precision; a classical example is the cancer prediction problem. When we treat false positives and false negatives the same, accuracy is a suitable classification metric.
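A tiny worked example of point 2 (pure Python; the labels are made up): a majority-class classifier on an imbalanced set gets a high micro F1 but a poor macro F1.

```python
def prf1(tp, fp, fn):
    # precision, recall and F1 from raw counts (0 when undefined)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def counts(y_true, y_pred, cls):
    tp = sum(t == cls == p for t, p in zip(y_true, y_pred))
    fp = sum(p == cls != t for t, p in zip(y_true, y_pred))
    fn = sum(t == cls != p for t, p in zip(y_true, y_pred))
    return tp, fp, fn

y_true = [0] * 9 + [1]
y_pred = [0] * 10                  # always predict the majority class
per_class = [counts(y_true, y_pred, c) for c in (0, 1)]
macro_f1 = sum(prf1(*c) for c in per_class) / 2          # ~0.47
micro_f1 = prf1(*[sum(col) for col in zip(*per_class)])  # 0.9 (= accuracy here)
```

The gap between the two averages is exactly the imbalance signal described above.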
```
%matplotlib inline
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
```
### Skewness
---
The <font color='red'>skewness</font> of a random variable is defined as
\begin{equation*}
\beta_1 = \mathrm{E}\left[\left(\frac{X-\mu}{\sigma}\right)^3\right],
\end{equation*}
where $\mu=\mathrm{E}[X]$ and $\sigma^2=\mathrm{Var}[X]$.
The skewness $\beta_1$ tells whether the distribution is symmetric around the mean $\mu$ or not.
+ If $\beta_1>0$, the distribution has a longer tail on the right.
+ If $\beta_1<0$, the distribution has a longer tail on the left.
+ If $\beta_1=0$, the distribution is symmetric around the mean $\mu$.
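A quick numerical sanity check: the exponential distribution is right-skewed with theoretical skewness $\beta_1 = 2$, and the sample statistic on a large draw should land close to that.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)   # right-skewed sample
z = (x - x.mean()) / x.std()        # standardize
beta1 = np.mean(z ** 3)             # sample skewness, close to 2
```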
```
fig1 = plt.figure(num=1, facecolor='w')
x = np.linspace(-5.0, 5.0, 201)
plt.plot(x, st.skewnorm.pdf(x, 8.0, loc=-3.0, scale=2.0), 'b-', label='Positive Skew')
plt.plot(x, st.skewnorm.pdf(x, -8.0, loc=3.0, scale=2.0), 'r-', label='Negative Skew')
plt.plot(x, st.norm.pdf(x), 'g-', label='Symmetric')
plt.xlim((-5.0, 7.0))
plt.ylim((0.0, 0.42))
plt.xlabel('x')
plt.ylabel('Probability Density')
plt.legend(loc='upper right', frameon=False)
# plt.savefig('ms_fig_skewness.eps', dpi=600)
plt.show()
```
### Skew Normal Distribution
$$
f(x|\alpha,\mu,\sigma) = \frac{2}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\Phi\left(\frac{\alpha(x-\mu)}{\sigma}\right),
$$
where
$$
\phi(x) = \frac1{\sqrt{2\pi}}e^{-\frac{x^2}2},\quad
\Phi(x) = \int_{-\infty}^x\phi(z)\,dz.
$$
Reference:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html
### Kurtosis
---
The <font color='red'>kurtosis</font> of a random variable is defined as
$$
\beta_2 = \mathrm{E}\left[\left(\frac{X-\mu}{\sigma}\right)^4\right],
$$
where $\mu=\mathrm{E}[X]$ and $\sigma^2=\mathrm{Var}[X]$.
The kurtosis $\beta_2$ is a measure of the thickness/heaviness of the tail.
The kurtosis of the normal distribution is 3, so $\beta_2-3$ is often used to measure whether a distribution has a thicker tail than the normal distribution; this quantity is called the <font color='red'>excess kurtosis</font>.
+ If $\beta_2>3$, the distribution has a thicker tail (<font color='red'>leptokurtic</font>).
+ If $\beta_2<3$, the distribution has a thinner tail (<font color='red'>platykurtic</font>).
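And a numerical check of the two cases: a normal sample should give $\beta_2 \approx 3$, while a Laplace sample (theoretical kurtosis 6) is clearly leptokurtic.

```python
import numpy as np

def kurtosis(x):
    # fourth standardized moment
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4)

rng = np.random.default_rng(1)
b2_normal = kurtosis(rng.standard_normal(200_000))   # close to 3
b2_laplace = kurtosis(rng.laplace(size=200_000))     # close to 6
```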
```
from scipy.special import gamma
fig2 = plt.figure(num=2, facecolor='w')
x = np.linspace(-5.0, 5.0, 201)
plt.plot(x, st.gennorm.pdf(x, 1.0, scale=np.sqrt(gamma(1.0)/gamma(3.0))),
'b-', label='Leptokurtic')
plt.plot(x, st.gennorm.pdf(x, 8.0, scale=np.sqrt(gamma(1.0/8.0)/gamma(3.0/8.0))),
'r-', label='Platykurtic')
plt.plot(x, st.norm.pdf(x), 'g-', label='Normal')
plt.xlim((-5.0, 5.0))
plt.ylim((0.0, 0.73))
plt.xlabel('x')
plt.ylabel('Probability Density')
plt.legend(loc='upper right', frameon=False)
# plt.savefig('ms_fig_kurtosis.eps', dpi=600)
plt.show()
```
### Generalized Normal (Error) Distribution
$$
f(x|\beta,\mu,\sigma) = \frac{\beta}{2\sigma\Gamma(1/\beta)}\exp\left[-\left|\frac{x-\mu}{\sigma}\right|^\beta\right].
$$
Reference:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gennorm.html
# Composite
Can a composite pattern make templating easier?
## Templating
There are three template targets:
* plain text
* PDF
* Word
That's because some of the uses for this are lawyers that use Word documents.
I'm building the templating system up from first principles, because everyone tries to start in the middle, and I've never found or built a templating system that I've been satisfied with. That's partly because it's an end-user product that's quite finicky: in a tool where you can do anything, anything we do to it can have brittle consequences.
Still, the goal is to provide merged documents in a safe way.
## Workflow
A good system will have:
* a valid source document
* a valid set of instructions to merge
* validation on inputs, even partial ones
* conditional sections of documents, based on the dataset
* incremental control over merging (partial merges OK)
* clear instructions when something failed
* document lifecycle, including versions
* the ability to target text, PDF, and Word
* support from the document and inputs to clean up and guide the process
If that's what I want generally, a composite only adds a tree to the system.
## Embedments
Embedments are basically fields, with foreign code and some smart collections of operations. Using a composite of embedments, we are saying there's a tree of instructions, one instruction per field. That may be the wrong way to think of the problem, considering the real problem defined in Workflow.
The Composite comes from the GoF Composite Pattern. The Embedment comes from Martin Fowler's Domain Specific Languages.
## Basic Model
The basic model uses a Component, Composite, and Leaf to create an operational workflow.
The original source on this can be [found here](https://sourcemaking.com/design_patterns/composite/python/1)
```
import abc
class Component(metaclass=abc.ABCMeta):
@abc.abstractmethod
def operation(self):
pass
class Composite(Component):
def __init__(self):
self._children = set()
def operation(self):
for child in self._children:
child.operation()
def add(self, component):
self._children.add(component)
def remove(self, component):
self._children.discard(component)
class Leaf(Component):
def operation(self):
pass
class L1(Leaf):
def operation(self):
print(id(self))
composite = Composite()
composite.add(L1())
composite.add(L1())
composite.operation()
root = Composite()
b1 = Composite()
l1 = L1()
l2 = L1()
b1.add(l1)
b1.add(l2)
root.add(b1)
b2 = Composite()
b2.add(l2)
root.add(b2)
root.operation()
```
### Making Sense
Strengths:
* a leaf can work with state
* Component can be state smart
Modifications:
* can pass/share state so leaves can work from a collective state
* work with a tree and a builder of some sort
```
import re
import abc
from collections.abc import Iterable
from functools import partial
def listify(o):
if o is None: return []
if isinstance(o, list): return o
if isinstance(o, str): return [o]
if isinstance(o, dict): return [o]
if isinstance(o, Iterable): return list(o)
return [o]
class Component(metaclass=abc.ABCMeta):
@abc.abstractmethod
def __call__(self):
pass
class Composite(Component):
@classmethod
def build(cls, components, **kw):
return cls(**kw).add(components)
def __init__(self, **kw):
self._children = []
self.sort_key = kw.get('key')
@property
def sorter(self):
if self.sort_key is None: return sorted
return partial(sorted, key=self.sort_key)
def add(self, components):
if not isinstance(components, Iterable): components = [components]
self._children = list(self.sorter(self._children + components))
return self
def remove(self, components):
if not isinstance(components, Iterable): components = [components]
for component in components:
self._children.remove(component)
return self
@property
def length(self):
return sum([child.length for child in self._children])
def __call__(self, *a, **kw):
for child in self._children:
child(*a, **kw)
def __repr__(self):
return f"{self.__class__.__name__}: {self.length} leaves"
class Leaf(Composite):
@classmethod
def hydrate(cls, item):
if isinstance(item, cls): return item
if isinstance(item, dict): return cls(**item)
return cls(item)
@classmethod
def build(cls, items):
items = listify(items)
leaves = [cls.hydrate(item) for item in items]
return Composite.build(leaves)
length = 1
def __call__(self, *a, **kw):
pass
class NaiveEmbedment(Leaf):
def __init__(self, name=None, pattern=None, **kw):
self.name = name
self.pattern = re.compile(name if pattern is None else pattern)
self.kw = kw
def __call__(self, document, replacement, *a, **kw):
return re.sub(self.pattern, replacement, document)
def __repr__(self):
        return f"{self.__class__.__name__}: {self.name} {self.pattern}"
class InsertEmbedment(Leaf):
@classmethod
def build(cls, items):
items = listify(items)
leaves = [cls.hydrate(item) for item in items]
return Composite.build(leaves, key=lambda e: e.location)
def __init__(self, location, name=None, **kw):
self.location = location
self.name = name
self.offset = kw.get('offset', 0)
self.kw = kw
def __call__(self, document, text, *a, **kw):
offset = kw.get('offset', self.offset)
position = self.location + offset
self.offset = offset + len(text)
result = document[:position] + text + document[position:]
print(result)
return result
def __repr__(self):
return f"{self.__class__.__name__} {self.location}"
r = re.compile('foo')
doc = "foo bar baz foo bar foo"
NaiveEmbedment('foo')(doc, 'ccc')
s = InsertEmbedment.build([15, 10, 5])
s(doc, 'abc')
s = InsertEmbedment(12, name='incomplete')
print(s(doc, 'xxx '))
s.offset
```
### Making Sense
So, the embedment composite is different:
* uses call
* has some builders and a hydration mechanism
* addresses a sort
This makes it easier to assemble, but it's still lacking clarity:
* all the steps?
* error control?
* executable/valid?
* state/incremental process?
Also, it's hard to say what an embedment should be doing. There's the possibility of knowing a location, or a slice that should be replaced, but that's difficult. It might be right to have a difficult embedment if the environment needs to learn those kinds of things, and it's easier to get a slice or a location from another tool. Compared to a Jinja template, though, it's opaque. The other tool needs confidence that this is the right way to address the source document.
This makes sense for use cases like:
* Given a text document, I want to programmatically build a template rather than pre-build it.
* Given a PDF document, I want to create an overlay set of fields.
* Given a document and an ML model, I want to see if I can build a fieldset and survey pair.
## Embedments Composite
An embedment is named for the verb describing what a field does: it applies code to embed itself.
* All embedments work on documents.
* Allow embedments to have an order.
* A simple embedment can apply text to a position in the document.
* The position is relative to the original location, or something easier to use.
```
class TextEmbedment:
def __init__(self, *a, **kw):
self.kw = kw
@property
def prior_offset(self):
return self.kw.get('offset', 0)
@property
def posterior_offset(self):
        if hasattr(self, 'text'):
            length = len(self.text)
        else:
            length = 0
        return self.prior_offset + length
def _get(self, name):
return getattr(self, name) + self.prior_offset
class PlainEmbedment(TextEmbedment):
def __init__(self, position, text, **kw):
self.position = position
self.text = text
self.kw = kw
def __call__(self, document):
position = self._get('position')
return document[:position] + self.text + document[position:]
class ReplacingEmbedment(TextEmbedment):
def __init__(self, begin, end, text, **kw):
self.begin = begin
self.end = end
self.text = text
self.kw = kw
def __call__(self, document):
begin = self._get('begin')
end = self._get('end')
return document[:begin] + self.text + document[end:]
document = "This is a document."
embedment = PlainEmbedment(10, 'nice ')
assert embedment(document) == "This is a nice document."
embedment = ReplacingEmbedment(8, 9, 'one fine')
assert embedment(document) == 'This is one fine document.'
```
### Making Sense
If I know where something goes, I can apply it. Therefore:
* Learn where something goes by an easier standard (encode what I know when I decide to add a field).
* Create a chain of embedments.
* Store and use the incrementing offset to apply the embedment to the right location.
* Use validation and error control throughout.
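That chain can be sketched in a few lines (`apply_chain` is an assumed helper, not one of the classes above): apply `(position, text)` pairs in ascending position order, shifting each later position by the text already inserted.

```python
def apply_chain(document, embedments):
    """Apply (position, text) embedments in ascending position order,
    keeping a running offset so later positions stay valid."""
    offset = 0
    for position, text in sorted(embedments):
        at = position + offset
        document = document[:at] + text + document[at:]
        offset += len(text)
    return document

doc = "This is a document."
print(apply_chain(doc, [(10, 'nice '), (8, 'really ')]))
# -> This is really a nice document.
```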
## Regular Expression Field Identity
I'm thinking that a hint could be used in a document. Say, field1 is a document marker. I can create a regular expression with a function and then ensure it's unique. If it's a stable identifier, great. If not, use the keywords to make a better regular expression. Default keywords make it obvious what I'm looking for.
I need an obvious way to see how these locations are being developed. Transparency.
```
import re
def match_count(r, doc, **kw):
if not isinstance(r, re.Pattern): r = re.compile(r)
return len(r.findall(doc))
def is_stable(r, docs, **kw):
if not isinstance(docs, list): docs = [docs]
counts = [match_count(r, doc) == 1 for doc in docs]
return all(counts)
def location_for(r, doc, **kw):
if not isinstance(r, re.Pattern): r = re.compile(r)
if not is_stable(r, doc, **kw): return (0, 0)
return r.search(doc).span()
def insertion_point_for(r, doc, **kw):
begin, _end = location_for(r, doc, **kw)
return begin
def replacing_embedment(r, value, doc, **kw):
begin, end = location_for(r, doc, **kw)
return ReplacingEmbedment(begin, end, value)
doc = "uber super duper doc"
field = "super"
r = re.compile(field)
assert match_count(r, doc) == 1
assert match_count(field, doc) == 1
assert is_stable(field, doc)
assert is_stable('uper', doc) is False
assert location_for(r, doc) == (5, 10)
e = replacing_embedment(r, 'fantastic', doc)
# e('fantastic', doc)
e(doc)
```
### FIXME
Fix this: an embedment without the replacement? I don't like this, but it's almost there.
## Jinja Templates
I can create templates out of Jinja instead.
Benefits:
* no guesswork on fields
* loops, conditions, logic
* stable, well-executed
Costs:
* larger framework
* not a stepping stone to PDF models
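For contrast, even the standard library's `string.Template` already delivers the "no guesswork on fields" benefit without the larger framework (a minimal sketch, stdlib only):

```python
from string import Template

t = Template("This is $quality document.")
print(t.substitute(quality='a nice'))  # -> This is a nice document.
print(t.safe_substitute())             # missing fields are left intact
```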
### Not Here
I'm not going to fully explore Jinja templates here. I don't want to leave that stuff in this lab right now. Better to come back to this, but leaving notes here. The template I threw away came from the [Jinja documentation](https://jinja.palletsprojects.com/en/2.11.x/).
```
# from jinja2 import Environment, PackageLoader, select_autoescape
# title = "Some Title"
# class User:
# def __init__(self, url, username):
# self.url = url
# self.username = username
# users = [User('http://example.com', 'Some User')]
# env = Environment(
# loader=PackageLoader('slip_box', 'templates'),
# autoescape=select_autoescape(['html', 'xml'])
# )
# template = env.get_template('test.html')
# print(template.render(title=title, users=users))
```
## Pyspark
```
import pyspark
from pyspark.context import SparkContext
# sc = SparkContext('yarn-client', 'pyspark')
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
df = spark.read.csv("../data/titanic-train.csv",header=True)
# df = spark.read.option("header", "true").csv("/Volumes/Transcend/Data Engineering School/titanic/train.csv") \
# .withColumn("Survived", col("Survived").cast("double")) \
# .withColumn("label", col("Survived")) \
# .withColumn("Pclass", col("Pclass").cast("double"))\
# .withColumn("SibSp", col("SibSp").cast("double"))\
# .withColumn("Parch", col("Parch").cast("double"))\
# .na.fill("S", "Embarked")
# In Zeppelin:
#%pyspark
#df = spark.read.option("header", "true").csv("../data/titanic-train.csv")
df.show()
type(df)
df
from pyspark.sql.functions import *
df.select(count("PassengerId"), sum("Survived")).show()
df.select(count("PassengerId"), sum("Survived"), sum("Survived")/count("PassengerId")).show()
```
- A 38% survival rate
```
df.collect()
df.columns
dir(df)
df.groupBy("Survived").count().show()
df.groupBy("Pclass","Survived").count().orderBy("Pclass","Survived").show()
df.groupBy("Sex","Survived").count().orderBy("Sex","Survived").show()
df.createOrReplaceTempView("titanic")
# In Zeppelin, you could then query the view with:
# %sql
# select * from titanic
df.groupBy("Parch","Survived").count().orderBy("Parch","Survived").show()
```
- Let's try predicting survivors
```
from pyspark.sql.types import *
from pyspark.sql.functions import *
```
- udf
```
def predict1_func():
return 0.0 # all dead
predict1 = udf(predict1_func, returnType=DoubleType())
def predict2_func(gender):
'''
    Classify by gender
'''
if gender == "female":
return 1.0
else:
return 0.0 # all dead
predict2 = udf(predict2_func, returnType=DoubleType())
df.select(predict1()).show()
df.select(predict1() == col("Survived")).show()
```
- An evaluator is already implemented!
```
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.evaluation import BinaryClassificationMetrics
from pyspark.mllib.util import MLUtils
from pyspark.ml.evaluation import BinaryClassificationEvaluator
prediction1result = df.select(predict1().alias("prediction"), col("Survived").cast("double").alias("label"))
prediction2result = df.select(predict2("Sex").alias("prediction"), col("Survived").cast("double").alias("label"))
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator()
evaluator.setRawPredictionCol("prediction").setLabelCol("label")
# evaluation
evaluator.setMetricName("areaUnderROC")
print(evaluator.evaluate(prediction1result))
print(evaluator.evaluate(prediction2result))
evaluator.setMetricName("areaUnderPR")
print(evaluator.evaluate(prediction1result))
print(evaluator.evaluate(prediction2result))
```
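To see what `areaUnderROC` is measuring, here is a tiny pure-Python sketch (no Spark; `area_under_roc` is our own toy helper): it is the probability that a random positive outranks a random negative, with ties counted as half, so a constant predictor scores 0.5.

```python
def area_under_roc(labels, scores):
    pos = [s for s, y in zip(scores, labels) if y == 1.0]
    neg = [s for s, y in zip(scores, labels) if y == 0.0]
    # P(random positive outscores random negative), ties count as 1/2
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1.0, 0.0, 1.0, 0.0]
print(area_under_roc(labels, [0.0, 0.0, 0.0, 0.0]))  # constant predictor -> 0.5
print(area_under_roc(labels, [0.9, 0.1, 0.8, 0.2]))  # perfect ranking -> 1.0
```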
### Now let's build it with a machine learning approach!
- See the [documentation](http://spark.apache.org/docs/2.2.0/api/python/pyspark.ml.html)
- The ml feature module also offers useful tools; definitely check it out!
```
from pyspark.ml.classification import *
# first, logistic regression!
```
- MLlib main guide: for DataFrames
- MLlib RDD guide: for RDDs; no longer actively upgraded
```
from pyspark.ml.feature import VectorAssembler

lr = LogisticRegression()
assembler = VectorAssembler() \
    .setInputCols(["Pclass", "SibSp"]) \
    .setOutputCol("features")
data2 = assembler.transform(df)
lrModel = lr.fit(data2)
lrModel.coefficients
```
- A "prediction" column has been added. Whatever input we give, it comes out 0
- On to a better model!
<h1> Preprocessing using tf.transform and Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using tf.transform and Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
<p>
Only specific combinations of TensorFlow and Beam are supported by tf.transform, so make sure to get a combination that is:
* TFT 0.4.0
* TF 1.4 or higher
* Apache Beam [GCP] 2.2.0 or higher
```
%bash
pip uninstall -y google-cloud-dataflow
pip install --upgrade --force tensorflow_transform==0.4.0 apache-beam[gcp]
%bash
pip freeze | grep -e 'flow\|beam'
```
You need to restart your kernel to register the new installs before running the cells below.
```
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
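The idea behind the hash split can be sketched without BigQuery (this uses `hashlib` rather than `FARM_FINGERPRINT`, and the bucket sizes are illustrative): hashing the year-month means a given month always lands in the same bucket, so the train/eval split is stable across runs.

```python
import hashlib

def hash_bucket(year, month, n_buckets=4):
    key = '{}{}'.format(year, month).encode('utf-8')
    return int(hashlib.sha256(key).hexdigest(), 16) % n_buckets

# like MOD(ABS(hashmonth), 4) < 3: buckets 0-2 train (~75%), bucket 3 eval
months = [(y, m) for y in range(2001, 2009) for m in range(1, 13)]
train = [ym for ym in months if hash_bucket(*ym) < 3]
print(len(train), 'of', len(months), 'year-months go to training')
```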
```
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
```
<h2> Create ML dataset using tf.transform and Dataflow </h2>
<p>
Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
<p>
Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about <b>30 minutes</b> for me. If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
</pre>
```
%writefile requirements.txt
tensorflow-transform==0.4.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                     'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print 'Launching local job ... hang on'
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print 'Launching Dataflow job {} ... hang on'.format(job_name)
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print 'Processing {} data from {}'.format(step, selquery)
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print "Done!"
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
```
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
# Render our plots inline
%matplotlib inline
# Import modules for reading and plotting data
import pandas as pd
import matplotlib.pyplot as plt
```
# Reading data from a csv file
You can read data from a CSV file using the `read_csv` function. By default, it assumes that the fields are comma-separated.
We're going to be looking at some earthquake data from the Southern California Earthquake Data Center (see [this page](http://service.scedc.caltech.edu/eq-catalogs/date_mag_loc.php) for more details or to retrieve your own earthquake catalog).
This dataset is a list of earthquakes occurring in Southern California with magnitude 4 or higher from 2002 to 2017.
```
broken_df = pd.read_csv('data/scedc.csv')
# Look at the first 3 rows
broken_df[:3]
```
You'll notice that this is totally broken! `read_csv` has a bunch of options that will let us fix that, though. Here we'll
* Change the column separator (in this case, it's whitespace)
* Change the first 2 column names to 'Date' and 'Time'
* Set the index to be the 'Date' column
```
# set the column separator (delimiter)
fixed_df = pd.read_csv('data/scedc.csv'
, delim_whitespace=True # could use sep=' ' instead for delimiter
, header=0) # says the column information is in line 0
# change the first 2 columns (index 0 and 1) name to Date and Time by
# adding list of ['Date','Time'] to list of columns in position 2 and on
fixed_df.columns = ['Date','Time'] + fixed_df.columns.tolist()[2:]
# set the index to be the Date
fixed_df.index = pd.to_datetime(fixed_df['Date'])
fixed_df[:3]
```
# Selecting a column
When you read a CSV, you get a kind of object called a `DataFrame`, which is made up of rows and columns. You get columns out of a DataFrame the same way you get elements out of a dictionary.
Here's an example:
```
fixed_df['MAG']
```
We can also look at some basic statistics of each column.
```
fixed_df.describe()
fixed_df.MAG.mean() # mean magnitude
fixed_df.DEPTH.max() # maximum depth
fixed_df.NPH.std() # standard deviation of number of phases used
```
# Plotting a column
Just add `.plot()` to the end! How could it be easier? =)
We can see the 2010 El-Mayor Cucapah earthquake has the highest magnitude.
```
fixed_df['MAG'].plot()
```
Let's make the plot bigger.
```
fixed_df['MAG'].plot(figsize=(10,4))
```
How about changing the linestyle and symbol opacity (alpha)?
```
fixed_df['MAG'].plot(figsize=(10,4),linestyle='',marker='o', alpha=0.5)
```
Can we add a legend? Yes!
```
fixed_df['MAG'].plot(figsize=(10,4),linestyle='',marker='o', alpha=0.5,legend=True)
```
We can also plot a rolling mean of the data. Let's show the rolling mean of depth for windows of 15 events.
```
fixed_df['DEPTH'].rolling(window=15).mean().plot(figsize=(10,4),linestyle='',marker='o', alpha=0.5,legend=True)
```
What if we want to know the cumulative number of events?
```
import numpy as np
cumulative_df = fixed_df.apply(np.cumsum).copy()
cumulative_df['DEPTH'].plot(figsize=(10,4))
```
What about number of events each day?
```
fixed_df['Date'].groupby([fixed_df.index]).count().plot(figsize=(10,4),linestyle='',marker='o', alpha=0.5,legend=True)
```
What if we wanted to know the distribution of each earthquake characteristic?
```
fixed_df.hist(figsize=(13,13))
```
We see here that most depths are around 8-10 km and most events have about magnitude 4.
# Crossentropy method
This notebook will teach you to solve reinforcement learning problems with the crossentropy method. We'll follow up by scaling everything up and using a neural network policy.
```
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
    %env DISPLAY=:1
import gym
import numpy as np
import pandas as pd
env = gym.make("Taxi-v2")
env.reset()
env.render()
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
```
# Create stochastic policy
This time our policy should be a probability distribution.
```policy[s,a] = P(take action a | in state s)```
Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.
Please initialize the policy __uniformly__, that is, the probabilities of all actions should be equal.
```
policy = <your code here! Create an array to store action probabilities >
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
```
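One possible way to fill in the initialization (shown separately so the cell above stays an exercise; the Taxi-v2 sizes are hard-coded here instead of being read from `env`):

```python
import numpy as np

n_states, n_actions = 500, 6  # Taxi-v2 observation/action space sizes
policy = np.full((n_states, n_actions), 1.0 / n_actions)

assert np.allclose(policy, 1.0 / n_actions)
assert np.allclose(policy.sum(axis=1), 1)
```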
# Play the game
Just like before, but we also record all states and actions we took.
```
def generate_session(policy, t_max=10**4):
"""
Play game until end or for t_max ticks.
:param policy: an array of shape [n_states,n_actions] with action probabilities
:returns: list of states, list of actions and sum of rewards
"""
states, actions = [], []
total_reward = 0.
s = env.reset()
for t in range(t_max):
a = <sample action from policy(hint: use np.random.choice) >
new_s, r, done, info = env.step(a)
# Record state, action and add up reward to states,actions and total_reward accordingly.
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert type(r) in [float, np.float]
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [
100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [
100], label="90'th percentile", color='red')
plt.legend()
```
### Crossentropy method steps (2pts)
```
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
    If you're confused, see examples below. Please don't assume that states are integers (they'll be different later).
"""
reward_threshold = <Compute minimum reward for elite sessions. Hint: use np.percentile >
elite_states = <your code here >
elite_actions = <your code here >
return elite_states, elite_actions
states_batch = [
[1, 2, 3], # game1
[4, 2, 0, 2], # game2
[3, 1] # game3
]
actions_batch = [
[0, 2, 4], # game1
[3, 2, 0, 1], # game2
[3, 3] # game3
]
rewards_batch = [
3, # game1
4, # game2
5, # game3
]
test_result_0 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=0)
test_result_40 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=30)
test_result_90 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=90)
test_result_100 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=100)
assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \
and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\
"For percentile 0 you should return all states and actions in chronological order"
assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \
np.all(test_result_40[1] == [3, 2, 0, 1, 3, 3]),\
    "For percentile 30 you should only select states/actions from the first two games"
assert np.all(test_result_90[0] == [3, 1]) and \
np.all(test_result_90[1] == [3, 3]),\
"For percentile 90 you should only select states/actions from one game"
assert np.all(test_result_100[0] == [3, 1]) and\
np.all(test_result_100[1] == [3, 3]),\
"Please make sure you use >=, not >. Also double-check how you compute percentile."
print("Ok!")
def update_policy(elite_states, elite_actions):
"""
Given old policy and a list of elite states/actions from select_elites,
return new updated policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurences of si and ai in elite states/actions]
Don't forget to normalize policy to get valid probabilities and handle 0/0 case.
In case you never visited a state, set probabilities for all actions to 1./n_actions
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
<Your code here: update probabilities for actions given elite states & actions >
# Don't forget to set 1/n_actions for all actions in unvisited states.
return new_policy
elite_states, elite_actions = ([1, 2, 3, 4, 2, 0, 2, 3, 1], [
0, 2, 4, 3, 2, 0, 1, 3, 3])
new_policy = update_policy(elite_states, elite_actions)
assert np.isfinite(new_policy).all(
), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero."
assert np.all(
new_policy >= 0), "Your new policy can't have negative action probabilities"
assert np.allclose(new_policy.sum(
axis=-1), 1), "Your new policy should be a valid probability distribution over actions"
reference_answer = np.array([
[1., 0., 0., 0., 0.],
[0.5, 0., 0., 0.5, 0.],
[0., 0.33333333, 0.66666667, 0., 0.],
[0., 0., 0., 0.5, 0.5]])
assert np.allclose(new_policy[:4, :5], reference_answer)
print("Ok!")
```
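One way the two exercises above could be filled in (a sketch; `n_states` and `n_actions` are passed as arguments here so it runs standalone, whereas the notebook reads them from the environment). It satisfies the notebook's own test cases:

```python
import numpy as np

def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
    # minimum reward an elite session must reach
    reward_threshold = np.percentile(rewards_batch, percentile)
    elite_states = [s for states, r in zip(states_batch, rewards_batch)
                    if r >= reward_threshold for s in states]
    elite_actions = [a for actions, r in zip(actions_batch, rewards_batch)
                     if r >= reward_threshold for a in actions]
    return elite_states, elite_actions

def update_policy(elite_states, elite_actions, n_states=6, n_actions=5):
    # count (state, action) occurrences, then normalize each row
    new_policy = np.zeros([n_states, n_actions])
    for s, a in zip(elite_states, elite_actions):
        new_policy[s, a] += 1
    for s in range(n_states):
        total = new_policy[s].sum()
        if total == 0:
            new_policy[s] = 1.0 / n_actions  # unvisited state -> uniform
        else:
            new_policy[s] /= total
    return new_policy

states, actions = select_elites([[1, 2, 3], [4, 2, 0, 2], [3, 1]],
                                [[0, 2, 4], [3, 2, 0, 1], [3, 3]],
                                [3, 4, 5], percentile=90)
print(states, actions)  # -> [3, 1] [3, 3]
```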
# Training loop
Generate sessions, select N best and fit to those.
```
from IPython.display import clear_output
def show_progress(rewards_batch, log, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# reset policy just in case
policy = np.ones([n_states, n_actions])/n_actions
n_sessions = 250 # sample this many sessions
percentile = 50 # take this percent of sessions with the highest rewards
learning_rate = 0.5 # smoothing: policy = lr*new_policy + (1-lr)*old_policy
log = []
for i in range(100):
%time sessions = [ < generate a list of n_sessions new sessions > ]
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = <select elite states/actions >
new_policy = <compute new policy >
policy = learning_rate*new_policy + (1-learning_rate)*policy
# display results on chart
show_progress(rewards_batch, log)
```
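The `<select elite states/actions >` placeholder in the loop above can be sketched as follows. The signature and name are illustrative assumptions, not the course's exact API:

```python
import numpy as np

def select_elites_example(states_batch, actions_batch, rewards_batch, percentile=50):
    """Concatenate states/actions from sessions whose total reward
    reaches the given percentile threshold."""
    reward_threshold = np.percentile(rewards_batch, percentile)
    elite_states, elite_actions = [], []
    for states, actions, reward in zip(states_batch, actions_batch, rewards_batch):
        if reward >= reward_threshold:
            elite_states.extend(states)
            elite_actions.extend(actions)
    return elite_states, elite_actions
```

With a helper like this, the loop body becomes `elite_states, elite_actions = select_elites_example(states_batch, actions_batch, rewards_batch, percentile)`.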
# Digging deeper: approximate crossentropy with neural nets

In this section we will train a neural network policy for continuous state space game
```
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
plt.imshow(env.render("rgb_array"))
# create agent
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(hidden_layer_sizes=(20, 20),
activation='tanh',
warm_start=True, # keep progress between .fit(...) calls
max_iter=1 # make only 1 iteration on each .fit(...)
)
# initialize agent to the dimension of the state space and the number of actions
agent.fit([env.reset()]*n_actions, range(n_actions))
def generate_session(t_max=1000):
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# predict array of action probabilities
probs = agent.predict_proba([s])[0]
a = <sample action with such probabilities >
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
sessions = [ < generate a list of n_sessions new sessions > ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = <select elite actions just like before >
<fit agent to predict elite_actions(y) from elite_states(X) >
show_progress(rewards_batch, log, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
```
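The `<sample action with such probabilities >` placeholder can be filled with `np.random.choice`; a minimal sketch (the commented lines mirror the other placeholders, assuming `generate_session` and the elite-selection helper from earlier in the notebook):

```python
import numpy as np

def sample_action(probs):
    """Draw an action index i with probability probs[i]."""
    return np.random.choice(len(probs), p=probs)

# The remaining placeholders, sketched:
#   sessions = [generate_session() for _ in range(n_sessions)]
#   agent.fit(elite_states, elite_actions)  # predict elite actions (y) from states (X)
```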
# Results
```
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# upload to gym
# gym.upload("./videos/",api_key="<your_api_key>") #you'll need me later
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices
```
# Homework part I
### Tabular crossentropy method
You may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
### Tasks
- __1.1__ (1 pts) Find out how the algorithm's performance changes if you use a different percentile and a different n_samples.
- __1.2__ (2 pts) Tune the algorithm to end up with positive average score.
It's okay to modify the existing code.
```<Describe what you did here. Preferably with plot/report to support it.>```
# Homework part II
### Deep crossentropy method
By this moment you should have earned a high enough score on [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) to consider it solved (see the link). It's time to upload the result and get to something harder.
* if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help.
### Tasks
* __2.1__ (3 pts) Pick one of environments: MountainCar-v0 or LunarLander-v2.
* For MountainCar, get average reward of __at least -150__
* For LunarLander, get average reward of __at least +50__
* For any environment, upload it to gym and post url in your anytask form.
See the tips section below, it's kinda important.
__Note:__ If your agent is below the target score, you'll still get most of the points depending on the result, so don't be afraid to submit it.
* __2.2__ (bonus: 4++ pt) Devise a way to speed up training at least 2x against the default version
* Obvious improvement: use [joblib](https://www.google.com/search?client=ubuntu&channel=fs&q=joblib&ie=utf-8&oe=utf-8)
* Try re-using samples from 3-5 last iterations when computing threshold and training
* Experiment with amount of training iterations and learning rate of the neural network (see params)
* __Please list what you did in anytask submission form__
### Tips
* Gym page: [mountaincar](https://gym.openai.com/envs/MountainCar-v0), [lunarlander](https://gym.openai.com/envs/LunarLander-v2)
* Sessions for MountainCar may last for 10k+ ticks. Make sure ```t_max``` param is at least 10k.
* Also it may be a good idea to cut off rewards via ">" and not ">=". If 90% of your sessions get a reward of -10k and 10% are better, then with percentile 20 as the threshold, R >= threshold __fails to cut off bad sessions__ while R > threshold works alright.
* _issue with gym_: Some versions of gym limit game time to 200 ticks. This will prevent CEM training in most cases. Make sure your agent is able to play for the specified __t_max__, and if it isn't, try `env = gym.make("MountainCar-v0").env` or otherwise get rid of the TimeLimit wrapper.
* If you use old _swig_ lib for LunarLander-v2, you may get an error. See this [issue](https://github.com/openai/gym/issues/100) for solution.
* If it doesn't train, it's a good idea to plot the reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)
* 20-neuron network is probably not enough, feel free to experiment.
* __Please upload the results to openai gym and send links to all submissions in the e-mail__
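The `>` vs `>=` caveat from the tips is easy to verify on toy rewards:

```python
import numpy as np

rewards = np.array([-10_000] * 9 + [0])  # 90% bad sessions, 10% better
threshold = np.percentile(rewards, 20)   # the 20th percentile is still -10000
survivors_ge = int((rewards >= threshold).sum())  # every bad session survives
survivors_gt = int((rewards > threshold).sum())   # only the good one survives
```

Here `survivors_ge` is 10 while `survivors_gt` is 1, so the strict cut actually discards the bad sessions.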
### Bonus tasks
* __2.3 bonus__ Try to find a network architecture and training params that solve __both__ environments above (_Points depend on implementation. If you attempted this task, please mention it in anytask submission._)
* __2.4 bonus__ Solve continuous action space task with `MLPRegressor` or similar.
* Start with ["Pendulum-v0"](https://github.com/openai/gym/wiki/Pendulum-v0).
* Since your agent only predicts the "expected" action, you will have to add noise to ensure exploration.
* [MountainCarContinuous-v0](https://gym.openai.com/envs/MountainCarContinuous-v0), [LunarLanderContinuous-v2](https://gym.openai.com/envs/LunarLanderContinuous-v2)
* 4 points for solving. Slightly less for getting some results below solution threshold. Note that discrete and continuous environments may have slightly different rules aside from action spaces.
If you're still feeling unchallenged, consider the project (see other notebook in this folder).
# R: Cluster Robust Double Machine Learning
## Motivation
In many empirical applications, errors exhibit a clustered structure such that the usual i.i.d. assumption does not hold anymore. In order to perform valid statistical inference, researchers have to account for clustering. In this notebook, we will shortly emphasize the consequences of clustered data on inference based on the double machine learning (DML) approach as has been considered in [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815). We will demonstrate how users of the [DoubleML](https://docs.doubleml.org/stable/index.html) package can account for one- and two-way clustering in their analysis.
Clustered errors in terms of one or multiple dimensions might arise in many empirical applications. For example, in a cross-sectional study, errors might be correlated (i) within regions (one-way clustering) or (ii) within regions and industries at the same time (two-way clustering). Another example of two-way clustering, discussed in [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815), refers to market share data with market shares being subject to shocks on the market and product level at the same time. We refer to [Cameron et al. (2011)](https://doi.org/10.1198/jbes.2010.07136) for an introduction to multiway clustering and an illustrative list of empirical examples.
## Clustering and double machine learning
Clustering creates a challenge to the double machine learning (DML) approach in terms of
1. a necessary adjustment of the formulae used for estimation of the variance covariance matrix, standard errors, p-values etc., and,
2. an adjusted resampling scheme for the cross-fitting algorithm.
The first point equally applies to classical statistical models, for example a linear regression model (see, for example, [Cameron et al. 2011](https://doi.org/10.1198/jbes.2010.07136)). The second point arises because clustering implies a correlation of errors from the train and test samples if the standard cross-fitting procedure suggested in [Chernozhukov et al. (2018)](https://doi.org/10.1111/ectj.12097) were employed. The DML approach builds on independent sample splits into partitions that are used for training the machine learning (ML) learners and for generating the predictions that are eventually used for solving the score function. For a motivation of the necessity of sample splitting, we refer to the illustrative example in the [user guide](https://docs.doubleml.org/stable/guide/basics.html#sample-splitting-to-remove-bias-induced-by-overfitting) as well as to the explanation in [Chernozhukov et al. (2018)](https://doi.org/10.1111/ectj.12097).
In order to achieve independent data splits in a setting with one-way or multi-way clustering, [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) develop an updated $K$-fold sample splitting procedure that ensures independent sample splits: The data set is split into disjoint partitions in terms of all clustering dimensions. For example, in a situation with two-way clustering, the data is split into $K^2$ folds. The machine learning models are then trained on a specific fold and used for generation of predictions in hold-out samples. Thereby, the sample splitting procedure ensures that the hold-out samples do not contain observations of the same clusters as used for training.
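The rest of this notebook uses R, but the splitting rule itself is language-agnostic. The following Python sketch (an illustration only, not the DoubleML implementation) shows how the $K^2$ train/test partitions avoid shared clusters between training and hold-out data:

```python
import numpy as np

def twoway_cluster_folds(n_clusters_i, n_clusters_j, k, seed=0):
    """Partition both cluster dimensions into k folds and form the k*k splits:
    each test fold is I_k x J_l, and its training data excludes every
    observation that shares a cluster (in either dimension) with the test fold."""
    rng = np.random.default_rng(seed)
    folds_i = np.array_split(rng.permutation(n_clusters_i), k)
    folds_j = np.array_split(rng.permutation(n_clusters_j), k)
    splits = []
    for test_i in folds_i:
        for test_j in folds_j:
            train_i = np.setdiff1d(np.arange(n_clusters_i), test_i)
            train_j = np.setdiff1d(np.arange(n_clusters_j), test_j)
            splits.append((set(train_i), set(train_j), set(test_i), set(test_j)))
    return splits
```

For `k = 3` this yields 9 splits, and in every split the training clusters are disjoint from the test clusters in both dimensions.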
```
library('hdm')
library('DoubleML')
library('mlr3')
library('mlr3learners')
# suppress messages from mlr3 package during fitting
lgr::get_logger("mlr3")$set_threshold("warn")
library('ggplot2')
library('reshape2')
library('gridExtra')
```
## A Motivating Example: Two-Way Cluster Robust DML
In a first part, we show how the two-way cluster robust double machine learning (DML) ([Chiang et al. 2021](https://doi.org/10.1080/07350015.2021.1895815)) can be implemented with the [DoubleML](https://docs.doubleml.org/stable/index.html) package.
[Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) consider double-indexed data
\begin{equation}
\lbrace W_{ij}: i \in \lbrace 1, \ldots, N \rbrace, j \in \lbrace 1, \ldots, M \rbrace \rbrace
\end{equation}
and the partially linear IV regression model (PLIV)
$$\begin{aligned}
Y_{ij} = D_{ij} \theta_0 + g_0(X_{ij}) + \epsilon_{ij}, & &\mathbb{E}(\epsilon_{ij} | X_{ij}, Z_{ij}) = 0, \\
Z_{ij} = m_0(X_{ij}) + v_{ij}, & &\mathbb{E}(v_{ij} | X_{ij}) = 0.
\end{aligned}$$
### Simulate two-way cluster data
We use the PLIV data generating process described in Section 4.1 of [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815).
The DGP is defined as
$$\begin{aligned}
Z_{ij} &= X_{ij}' \xi_0 + V_{ij}, \\
D_{ij} &= Z_{ij}' \pi_{10} + X_{ij}' \pi_{20} + v_{ij}, \\
Y_{ij} &= D_{ij} \theta + X_{ij}' \zeta_0 + \varepsilon_{ij},
\end{aligned}$$
with
$$\begin{aligned}
X_{ij} &= (1 - \omega_1^X - \omega_2^X) \alpha_{ij}^X
+ \omega_1^X \alpha_{i}^X + \omega_2^X \alpha_{j}^X, \\
\varepsilon_{ij} &= (1 - \omega_1^\varepsilon - \omega_2^\varepsilon) \alpha_{ij}^\varepsilon
+ \omega_1^\varepsilon \alpha_{i}^\varepsilon + \omega_2^\varepsilon \alpha_{j}^\varepsilon, \\
v_{ij} &= (1 - \omega_1^v - \omega_2^v) \alpha_{ij}^v
+ \omega_1^v \alpha_{i}^v + \omega_2^v \alpha_{j}^v, \\
V_{ij} &= (1 - \omega_1^V - \omega_2^V) \alpha_{ij}^V
+ \omega_1^V \alpha_{i}^V + \omega_2^V \alpha_{j}^V,
\end{aligned}$$
and $\alpha_{ij}^X, \alpha_{i}^X, \alpha_{j}^X \sim \mathcal{N}(0, \Sigma)$
where $\Sigma$ is a $p_x \times p_x$ matrix with entries
$\Sigma_{kj} = s_X^{|j-k|}$.
Further
$$\begin{aligned}
\left(\begin{matrix} \alpha_{ij}^\varepsilon \\ \alpha_{ij}^v \end{matrix}\right),
\left(\begin{matrix} \alpha_{i}^\varepsilon \\ \alpha_{i}^v \end{matrix}\right),
\left(\begin{matrix} \alpha_{j}^\varepsilon \\ \alpha_{j}^v \end{matrix}\right)
\sim \mathcal{N}\left(0, \left(\begin{matrix} 1 & s_{\varepsilon v} \\
s_{\varepsilon v} & 1 \end{matrix} \right) \right)
\end{aligned}$$
and $\alpha_{ij}^V, \alpha_{i}^V, \alpha_{j}^V \sim \mathcal{N}(0, 1)$.
Data from this DGP can be generated with the [make_pliv_multiway_cluster_CKMS2021()](https://docs.doubleml.org/r/stable/reference/make_pliv_multiway_cluster_CKMS2021.html) function from [DoubleML](https://docs.doubleml.org/stable/index.html).
Analogously to [Chiang et al. (2021, Section 5)](https://doi.org/10.1080/07350015.2021.1895815)
we use the following parameter setting:
$\theta=1.0$, $N=M=25$, $p_x=100$, $\pi_{10}=1.0$, $\omega_X = \omega_{\varepsilon} = \omega_V = \omega_v = (0.25, 0.25)$, $s_X = s_{\varepsilon v} = 0.25$ and the $j$-th entries of the $p_x$-vectors $\zeta_0 = \pi_{20} = \xi_0$ are $(\zeta_{0})_j = 0.5^j$.
These are also the default values of [make_pliv_multiway_cluster_CKMS2021()](https://docs.doubleml.org/r/stable/reference/make_pliv_multiway_cluster_CKMS2021.html).
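To see what the $\omega$ weights do, here is a small Python sketch (illustrative only; the actual data is generated by the R function below) of the two-way component structure shared by $X_{ij}$, $\varepsilon_{ij}$, $v_{ij}$ and $V_{ij}$:

```python
import numpy as np

def twoway_component(n, m, w1, w2, rng=None):
    """e_ij = (1 - w1 - w2)*a_ij + w1*a_i + w2*a_j with independent
    standard-normal alphas: w1 induces correlation within rows (clusters i),
    w2 within columns (clusters j)."""
    rng = rng or np.random.default_rng(0)
    a_ij = rng.standard_normal((n, m))   # idiosyncratic part
    a_i = rng.standard_normal((n, 1))    # first-dimension cluster effect
    a_j = rng.standard_normal((1, m))    # second-dimension cluster effect
    return (1 - w1 - w2) * a_ij + w1 * a_i + w2 * a_j
```

Setting the second weight to zero, as in the one-way clustering section of this notebook, removes the second-dimension cluster effect entirely.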
```
# Set the simulation parameters
N = 25 # number of observations (first dimension)
M = 25 # number of observations (second dimension)
dim_X = 100 # dimension of X
set.seed(3141) # set seed
obj_dml_data = make_pliv_multiway_cluster_CKMS2021(N, M, dim_X)
```
### Data-Backend for Cluster Data
The implementation of cluster robust double machine learning is based on a special data-backend called [DoubleMLClusterData](https://docs.doubleml.org/r/stable/reference/DoubleMLClusterData.html). As compared to the standard data-backend [DoubleMLData](https://docs.doubleml.org/r/stable/reference/DoubleMLData.html), users can specify the clustering variables during instantiation of a [DoubleMLClusterData](https://docs.doubleml.org/r/stable/reference/DoubleMLClusterData.html) object. The estimation framework will subsequently account for the provided clustering options.
```
# The simulated data is of type DoubleMLClusterData
print(obj_dml_data)
# The cluster variables are part of the DataFrame
head(obj_dml_data$data)
```
### Initialize the objects of class `DoubleMLPLIV`
```
# Set machine learning methods for m, g & r
ml_g = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
ml_m = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
ml_r = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
# initialize the DoubleMLPLIV object
dml_pliv_obj = DoubleMLPLIV$new(obj_dml_data,
ml_g, ml_m, ml_r,
n_folds=3)
print(dml_pliv_obj)
```
### Cluster Robust Cross Fitting
A key element of cluster robust DML ([Chiang et al. 2021](https://doi.org/10.1080/07350015.2021.1895815)) is a special sample splitting used for the cross-fitting.
In case of two-way clustering, we assume $N$ clusters in the first dimension and $M$ clusters in the second dimension.
For $K$-fold cross-fitting, [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) propose to randomly partition $[N]:=\{1,\ldots,N\}$ into $K$ subsets $\{I_1, \ldots, I_K\}$ and $[M]:=\{1,\ldots,M\}$ into $K$ subsets $\{J_1, \ldots, J_K\}$.
Effectively, one then considers $K^2$ folds.
Basically for each $(k, \ell) \in \{1, \ldots, K\} \times \{1, \ldots, K\}$, the nuisance functions are estimated for all double-indexed observations in $([N]\setminus I_k) \times ([M]\setminus J_\ell)$, i.e.,
$$
\hat{\eta}_{k\ell} = \hat{\eta}\left((W_{ij})_{(i,j)\in ([N]\setminus I_k) \times ([M]\setminus J_\ell)}\right)
$$
The causal parameter is then estimated as usual by solving a moment condition with a Neyman orthogonal score function.
For two-way cluster robust double machine learning with algorithm [DML2](https://docs.doubleml.org/stable/guide/algorithms.html#algorithm-dml2) this results in solving
$$
\frac{1}{K^2} \sum_{k=1}^{K} \sum_{\ell=1}^{K} \frac{1}{|I_k| |J_\ell|} \sum_{(i,j) \in I_k \times J_\ell}
\psi(W_{ij}, \tilde{\theta}_0, \hat{\eta}_{k\ell}) = 0
$$
for $\tilde{\theta}_0$.
Here $|I_k|$ denotes the cardinality, i.e., the number of clusters in the $k$-th fold for the first cluster variable.
We can visualize the sample splitting of the $N \cdot M = 625$ observations into $K \cdot K = 9$ folds. The following heat map illustrates the partitioned data set. The horizontal axis corresponds to the fold indices and the vertical axis to the indices of the observations. A blue field indicates that the observation $i$ is used for fitting the nuisance part, red indicates that the observation is used for generating predictions in that fold, and white means that the observation is left out from the sample splitting.
For example, the first observation as displayed on the very bottom of the figure is used for training of the nuisance parts in the first, second, fourth and fifth fold and used for generation of the predictions in fold nine. At the same time the observation is left out from the sample splitting procedure in folds three, six, seven and eight.
```
# The function plt_smpls is defined at the end of the Notebook
plt_smpls(dml_pliv_obj$smpls[[1]], dml_pliv_obj$n_folds)
```
If we visualize the sample splitting in terms of the cluster variables, the partitioning of the data into $9$ folds $I_k \times J_\ell$ becomes clear.
The identifiers for the first cluster variable $[N]:=\{1,\ldots,N\}$ have been randomly partitioned into $K=3$ folds denoted by $\{I_1, I_2, I_3\}$ and the identifiers for the second cluster variable $[M]:=\{1,\ldots,M\}$ have also been randomly partitioned into $K=3$ folds denoted by $\{J_1, J_2, J_3\}$.
By considering every combination $I_k \times J_\ell$ for $1 \leq k, \ell \leq K = 3$ we effectively base the cross-fitting on $9$ folds.
We now want to focus on the top-left sub-plot showing the partitioning of the cluster data for the first fold.
The $x$-axis corresponds to the first cluster variable and the $y$-axis to the second cluster variable.
Observations with cluster variables $(i,j) \in I_k \times J_\ell$ are used for estimation of the target parameter $\tilde{\theta}_0$ by solving a Neyman orthogonal score function.
For estimation of the nuisance functions, we only use observations where neither the first cluster variable is in $I_k$ nor the second cluster variable is in $J_\ell$, i.e., we use observations indexed by $(i,j)\in ([N]\setminus I_k) \times ([M]\setminus J_\ell)$ to estimate the nuisance functions
$$
\hat{\eta}_{k\ell} = \hat{\eta}\left((W_{ij})_{(i,j)\in ([N]\setminus I_k) \times ([M]\setminus J_\ell)}\right).
$$
This way we guarantee that there are never observations from the same cluster (first and/or second cluster dimension) in the sample for the nuisance function estimation (blue) and at the same time in the sample for solving the score function (red). As a result of this special sample splitting proposed by [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815), the observations in the score (red) and nuisance (blue) sample can be considered independent and the standard cross-fitting approach for double machine learning can be applied.
```
# The function plt_smpls_cluster is defined at the end of the Notebook
options(repr.plot.width = 12, repr.plot.height = 10)
plots = plt_smpls_cluster(dml_pliv_obj$smpls_cluster[[1]],
dml_pliv_obj$n_folds,
sqrt(dml_pliv_obj$n_folds))
grid.arrange(grobs=plots, ncol = 3, nrow = 3)
```
### Cluster Robust Standard Errors
In the abstract base class `DoubleML` the estimation of cluster robust standard errors is implemented for all supported double machine learning models.
It is based on the assumption of a linear Neyman orthogonal score function.
We use the notation $n \wedge m := \min\{n,m\}$.
For the asymptotic variance of
$\sqrt{\underline{C}}(\tilde{\theta}_0 - \theta_0)$ with
$\underline{C} := N \wedge M$
[Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815) then propose the following estimator
$$
\hat{\sigma}^2 = \hat{J}^{-1} \hat{\Gamma} \hat{J}^{-1}
$$
where
$$
\begin{aligned}
\hat{\Gamma} = \frac{1}{K^2} \sum_{(k, \ell) \in[K]^2}
\Bigg[ \frac{|I_k| \wedge |J_\ell|}{(|I_k||J_\ell|)^2}
\bigg(&\sum_{i \in I_k} \sum_{j \in J_\ell} \sum_{j' \in J_\ell}
\psi(W_{ij}; \tilde{\theta}_0, \hat{\eta}_{k \ell}) \psi(W_{ij'}; \tilde{\theta}_0, \hat{\eta}_{k \ell}) \\
&+ \sum_{i \in I_k} \sum_{i' \in I_k} \sum_{j \in J_\ell}
\psi(W_{ij}; \tilde{\theta}_0, \hat{\eta}_{k \ell}) \psi(W_{i'j}; \tilde{\theta}_0, \hat{\eta}_{k \ell})
\bigg)
\Bigg]
\end{aligned}$$
and
$$
\begin{aligned}
\hat{J} = \frac{1}{K^2} \sum_{(k, \ell) \in[K]^2} \frac{1}{|I_k||J_\ell|}
\sum_{i \in I_k} \sum_{j \in J_\ell}
\psi_a(W_{ij}; \tilde{\theta}_0, \hat{\eta}_{k \ell}).
\end{aligned}
$$
A $(1-\alpha)$ confidence interval is then given by ([Chiang et al. 2021](https://doi.org/10.1080/07350015.2021.1895815))
$$\begin{aligned}
\left[
\tilde{\theta}_0 \pm \Phi^{-1}(1-\alpha/2) \sqrt{\hat{\sigma}^2 / \underline{C}}
\right]
\end{aligned}
$$
with $\underline{C} = N \wedge M$.
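As a numeric illustration of the interval above, here is a plain-Python sketch (stdlib only; the input numbers are made up for illustration, not taken from the fit below):

```python
from statistics import NormalDist

def cluster_robust_ci(theta_tilde, sigma2_hat, n_clusters, m_clusters, alpha=0.05):
    """(1 - alpha) confidence interval with C_ = min(N, M) effective clusters."""
    c_underbar = min(n_clusters, m_clusters)
    z = NormalDist().inv_cdf(1 - alpha / 2)       # ~1.96 for alpha = 0.05
    half_width = z * (sigma2_hat / c_underbar) ** 0.5
    return theta_tilde - half_width, theta_tilde + half_width
```

Note that the interval scales with $\sqrt{\hat{\sigma}^2 / \underline{C}}$: with only $\min(N, M) = 25$ effective clusters, intervals are much wider than an i.i.d. formula based on all $N \cdot M = 625$ observations would suggest.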
```
# Estimate the PLIV model with cluster robust double machine learning
dml_pliv_obj$fit()
dml_pliv_obj$summary()
```
## (One-Way) Cluster Robust Double Machine Learning
We again use the PLIV data generating process described in Section 4.1 of [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815).
To obtain one-way clustered data, we set the following weights to zero
$$
\omega_2^X = \omega_2^\varepsilon = \omega_2^v = \omega_2^V = 0.
$$
Again we can simulate this data with [make_pliv_multiway_cluster_CKMS2021()](https://docs.doubleml.org/r/stable/reference/make_pliv_multiway_cluster_CKMS2021.html). To prepare the data-backend for one-way clustering, we only have to alter the `cluster_cols` to be `'cluster_var_i'`.
```
obj_dml_data = make_pliv_multiway_cluster_CKMS2021(N, M, dim_X,
omega_X = c(0.25, 0),
omega_epsilon = c(0.25, 0),
omega_v = c(0.25, 0),
omega_V = c(0.25, 0))
obj_dml_data$cluster_cols = 'cluster_var_i'
print(obj_dml_data)
# Set machine learning methods for m, g & r
ml_g = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
ml_m = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
ml_r = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
# initialize the DoubleMLPLIV object
dml_pliv_obj = DoubleMLPLIV$new(obj_dml_data,
ml_g, ml_m, ml_r,
n_folds=3)
dml_pliv_obj$fit()
dml_pliv_obj$summary()
```
## Real-Data Application
As a real-data application we revisit the consumer demand example from [Chiang et al. (2021)](https://doi.org/10.1080/07350015.2021.1895815).
The U.S. automobile data of [Berry, Levinsohn, and Pakes (1995)](https://doi.org/10.2307/2171802) is obtained from the `R` package [hdm](https://cran.r-project.org/web/packages/hdm/index.html). In this example, we consider different specifications for the cluster dimensions.
### Load and Process Data
```
## Prepare the BLP data
data(BLP);
blp_data <- BLP$BLP;
blp_data$price <- blp_data$price + 11.761
blp_data$log_p = log(blp_data$price)
x_cols = c('hpwt', 'air', 'mpd', 'space')
head(blp_data[x_cols])
iv_vars = as.data.frame(hdm:::constructIV(blp_data$firm.id,
blp_data$cdid,
blp_data$id,
blp_data[x_cols]))
formula = formula(paste0(" ~ -1 + (hpwt + air + mpd + space)^2",
"+ I(hpwt^2)*(air + mpd + space)",
"+ I(air^2)*(hpwt + mpd + space)",
"+ I(mpd^2)*(hpwt + air + space)",
"+ I(space^2)*(hpwt + air + mpd)",
"+ I(space^2) + I(hpwt^3) + I(air^3) + I(mpd^3) + I(space^3)"))
data_transf = data.frame(model.matrix(formula, blp_data))
names(data_transf)
y_col = 'y'
d_col = 'log_p'
cluster_cols = c('model.id', 'cdid')
all_z_cols = c('sum.other.hpwt', 'sum.other.mpd', 'sum.other.space')
z_col = all_z_cols[1]
dml_df = cbind(blp_data[c(y_col, d_col, cluster_cols)],
data_transf,
iv_vars[all_z_cols])
```
### Initialize `DoubleMLClusterData` object
```
dml_data = DoubleMLClusterData$new(dml_df,
y_col=y_col,
d_cols=d_col,
z_cols=z_col,
cluster_cols=cluster_cols,
x_cols=names(data_transf))
print(dml_data)
lasso = lrn("regr.cv_glmnet", nfolds = 10, s = "lambda.min")
coef_df = data.frame(matrix(NA_real_, ncol = 4, nrow = 1))
colnames(coef_df) = c('zero-way', 'one-way-product', 'one-way-market', 'two-way')
rownames(coef_df) = all_z_cols[1]
se_df = coef_df
n_rep = 10
```
### Two-Way Clustering with Respect to Product and Market
```
set.seed(1111)
dml_data$z_cols = z_col
dml_data$cluster_cols = c('model.id', 'cdid')
dml_pliv = DoubleMLPLIV$new(dml_data,
lasso, lasso, lasso,
n_folds=2, n_rep=n_rep)
dml_pliv$fit()
coef_df[1, 4] = dml_pliv$coef
se_df[1, 4] = dml_pliv$se
```
### One-Way Clustering with Respect to the Product
```
set.seed(2222)
dml_data$z_cols = z_col
dml_data$cluster_cols = 'model.id'
dml_pliv = DoubleMLPLIV$new(dml_data,
lasso, lasso, lasso,
n_folds=4, n_rep=n_rep)
dml_pliv$fit()
coef_df[1, 2] = dml_pliv$coef
se_df[1, 2] = dml_pliv$se
```
### One-Way Clustering with Respect to the Market
```
set.seed(3333)
dml_data$z_cols = z_col
dml_data$cluster_cols = 'cdid'
dml_pliv = DoubleMLPLIV$new(dml_data,
lasso, lasso, lasso,
n_folds=4, n_rep=n_rep)
dml_pliv$fit()
coef_df[1, 3] = dml_pliv$coef
se_df[1, 3] = dml_pliv$se
```
### No Clustering / Zero-Way Clustering
```
dml_data = DoubleMLData$new(dml_df,
y_col=y_col,
d_cols=d_col,
z_cols=z_col,
x_cols=names(data_transf))
print(dml_data)
set.seed(4444)
dml_data$z_cols = z_col
dml_pliv = DoubleMLPLIV$new(dml_data,
lasso, lasso, lasso,
n_folds=4, n_rep=n_rep)
dml_pliv$fit()
coef_df[1, 1] = dml_pliv$coef
se_df[1, 1] = dml_pliv$se
```
### Application Results
```
coef_df
se_df
```
## References
Berry, S., Levinsohn, J., and Pakes, A. (1995), Automobile Prices in Market
Equilibrium, Econometrica: Journal of the Econometric Society, 63, 841-890, doi: [10.2307/2171802](https://doi.org/10.2307/2171802).
Cameron, A. C., Gelbach, J. B. and Miller, D. L. (2011), Robust Inference with Multiway Clustering, Journal of Business & Economic Statistics, 29:2, 238-249, doi: [10.1198/jbes.2010.07136](https://doi.org/10.1198/jbes.2010.07136).
Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W. and Robins, J. (2018), Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21: C1-C68, doi: [10.1111/ectj.12097](https://doi.org/10.1111/ectj.12097).
Chiang, H. D., Kato K., Ma, Y. and Sasaki, Y. (2021), Multiway Cluster Robust Double/Debiased Machine Learning, Journal of Business & Economic Statistics, doi: [10.1080/07350015.2021.1895815](https://doi.org/10.1080/07350015.2021.1895815), arXiv: [1909.03489](https://arxiv.org/abs/1909.03489).
## Define Helper Functions for Plotting
```
library(RColorBrewer)
coul <- rev(colorRampPalette(brewer.pal(8, "RdBu"))(3))
options(repr.plot.width = 10, repr.plot.height = 10)
plt_smpls = function(smpls, n_folds) {
df = matrix(0, nrow = N*M, ncol = n_folds)
for (i_fold in 1:n_folds){
df[smpls$train_ids[[i_fold]], i_fold] = -1
df[smpls$test_ids[[i_fold]], i_fold] = 1
}
heatmap(df, Rowv=NA, Colv=NA, col=coul, cexRow=1.5, cexCol=1.5, scale='none')
}
plt_smpls_cluster = function(smpls_cluster, n_folds, n_folds_per_cluster) {
#options(repr.plot.width = 6, repr.plot.height = 6)
plots = list()
for (i_fold in 1:n_folds){
mat = matrix(0, nrow = M, ncol = N)
for (k in smpls_cluster$train_ids[[i_fold]][[1]]) {
for (l in smpls_cluster$train_ids[[i_fold]][[2]]) {
mat[k, l] = -1
}
}
for (k in smpls_cluster$test_ids[[i_fold]][[1]]) {
for (l in smpls_cluster$test_ids[[i_fold]][[2]]) {
mat[k, l] = 1
}
}
l = (i_fold-1) %% n_folds_per_cluster + 1
k = ((i_fold-1) %/% n_folds_per_cluster)+1
df = data.frame(mat)
cols = names(df)
names(df) = 1:N
df$id = 1:N
df_plot = melt(df, id.var = 'id')
df_plot$value = factor(df_plot$value)
plots[[i_fold]] = ggplot(data = df_plot, aes(x=id, y=variable)) +
geom_tile(aes(fill=value), colour = "grey50") +
scale_fill_manual(values = c("darkblue", "white", "darkred")) +
theme(text = element_text(size=15))
# ToDo: Add Subplot titles
if (k == 3) {
plots[[i_fold]] = plots[[i_fold]] + xlab(expression(paste('Second Cluster Variable ', l)))
} else {
plots[[i_fold]] = plots[[i_fold]] + xlab('')
}
if (l == 1) {
plots[[i_fold]] = plots[[i_fold]] + ylab(expression(paste('First Cluster Variable ', k)))
} else {
plots[[i_fold]] = plots[[i_fold]] + ylab('')
}
}
return(plots)
}
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Transformer model for language understanding
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/text/transformer">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/tutorials/text/transformer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial trains a <a href="https://arxiv.org/abs/1706.03762" class="external">Transformer model</a> to translate Portuguese to English. This is an advanced example that assumes knowledge of [text generation](text_generation.ipynb) and [attention](nmt_with_attention.ipynb).
The core idea behind the Transformer model is *self-attention*—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections *Scaled dot product attention* and *Multi-head attention*.
A transformer model handles variable-sized input using stacks of self-attention layers instead of [RNNs](text_classification_rnn.ipynb) or [CNNs](../images/intro_to_cnns.ipynb). This general architecture has a number of advantages:
* It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, [StarCraft units](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/#block-8)).
* Layer outputs can be calculated in parallel, instead of in series as in an RNN.
* Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see [Scene Memory Transformer](https://arxiv.org/pdf/1903.03878.pdf) for example).
* It can learn long-range dependencies. This is a challenge in many sequence tasks.
The downsides of this architecture are:
* For a time-series, the output for a time-step is calculated from the *entire history* instead of only the inputs and current hidden-state. This _may_ be less efficient.
* If the input *does* have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.
After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.
<img src="https://www.tensorflow.org/images/tutorials/transformer/attention_map_portuguese.png" width="800" alt="Attention heatmap">
```
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
```
## Setup input pipeline
Use [TFDS](https://www.tensorflow.org/datasets) to load the [Portuguese-English translation dataset](https://github.com/neulab/word-embeddings-for-nmt) from the [TED Talks Open Translation Project](https://www.ted.com/participate/translate).
This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.
```
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
```
Create a custom subwords tokenizer from the training dataset.
```
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'
tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
```
The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.
```
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
BUFFER_SIZE = 20000
BATCH_SIZE = 64
```
Add a start and end token to the input and target.
```
def encode(lang1, lang2):
lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
lang1.numpy()) + [tokenizer_pt.vocab_size+1]
lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
lang2.numpy()) + [tokenizer_en.vocab_size+1]
return lang1, lang2
```
Note: To keep this example small and relatively fast, drop examples with a length of over 40 tokens.
```
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
return tf.logical_and(tf.size(x) <= max_length,
tf.size(y) <= max_length)
```
Operations inside `.map()` run in graph mode and receive a graph tensor that does not have a numpy attribute. The `tokenizer` expects a string or Unicode symbol to encode into integers. Hence, you need to run the encoding inside a `tf.py_function`, which receives an eager tensor having a numpy attribute that contains the string value.
```
def tf_encode(pt, en):
return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(
BATCH_SIZE, padded_shapes=([-1], [-1]))
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(
BATCH_SIZE, padded_shapes=([-1], [-1]))
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
```
## Positional encoding
Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.
The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the *similarity of their meaning and their position in the sentence*, in the d-dimensional space.
See the notebook on [positional encoding](https://github.com/tensorflow/examples/blob/master/community/en/position_encoding.ipynb) to learn more about it. The formula for calculating the positional encoding is as follows:
$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
```
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# apply sin to even indices in the array; 2i
sines = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
cosines = np.cos(angle_rads[:, 1::2])
pos_encoding = np.concatenate([sines, cosines], axis=-1)
pos_encoding = pos_encoding[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
```
## Masking
Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value `0` is present: it outputs a `1` at those locations, and a `0` otherwise.
```
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
# add extra dimensions to add the padding
# to the attention logits.
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
```
The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.
This means that to predict the third word, only the first and second word will be used. Similarly to predict the fourth word, only the first, second and the third word will be used and so on.
```
def create_look_ahead_mask(size):
mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
return mask # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
```
## Scaled dot product attention
<img src="https://www.tensorflow.org/images/tutorials/transformer/scaled_attention.png" width="500" alt="scaled_dot_product_attention">
The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:
$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$
The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients and produces a very hard (peaked) softmax.
For example, consider that `Q` and `K` have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of `dk`. Hence, *square root of `dk`* is used for scaling (and not any other number) because the matmul of `Q` and `K` should have a mean of 0 and variance of 1, and you get a gentler softmax.
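This scaling argument can be checked numerically. The sketch below is plain NumPy, independent of the tutorial code: it draws random query and key vectors with zero-mean, unit-variance entries and confirms that their dot products have variance close to `dk`, and close to 1 after dividing by the square root of `dk`:

```python
import numpy as np

rng = np.random.default_rng(0)
dk = 512                                # depth of the key vectors
q = rng.standard_normal((10_000, dk))   # entries: mean 0, variance 1
k = rng.standard_normal((10_000, dk))
dots = np.sum(q * k, axis=-1)           # 10,000 sample dot products
print(np.var(dots))                     # close to dk
print(np.var(dots / np.sqrt(dk)))       # close to 1 after scaling
```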
The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
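A minimal NumPy sketch of this masking trick (the logits below are made up for illustration) shows that a position shifted by `-1e9` contributes essentially zero weight after the softmax, while the remaining weights still sum to 1:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5])
mask = np.array([0.0, 0.0, 1.0])              # 1 marks a masked position
masked_logits = logits + mask * -1e9
weights = np.exp(masked_logits) / np.exp(masked_logits).sum()
print(weights)                                # last entry is effectively 0
```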
```
def scaled_dot_product_attention(q, k, v, mask):
"""Calculate the attention weights.
q, k, v must have matching leading dimensions.
k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
The mask has different shapes depending on its type(padding or look ahead)
but it must be broadcastable for addition.
Args:
q: query shape == (..., seq_len_q, depth)
k: key shape == (..., seq_len_k, depth)
v: value shape == (..., seq_len_v, depth_v)
mask: Float tensor with shape broadcastable
to (..., seq_len_q, seq_len_k). Defaults to None.
Returns:
output, attention_weights
"""
matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
# scale matmul_qk
dk = tf.cast(tf.shape(k)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
# add the mask to the scaled tensor.
if mask is not None:
scaled_attention_logits += (mask * -1e9)
# softmax is normalized on the last axis (seq_len_k) so that the scores
# add up to 1.
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v)
return output, attention_weights
```
As the softmax normalization is done over the keys (the last axis), the resulting weights decide how much importance each key, and hence each value, receives for a given query.
The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.
```
def print_out(q, k, v):
temp_out, temp_attn = scaled_dot_product_attention(
q, k, v, None)
print ('Attention weights are:')
print (temp_attn)
print ('Output is:')
print (temp_out)
np.set_printoptions(suppress=True)
temp_k = tf.constant([[10,0,0],
[0,10,0],
[0,0,10],
[0,0,10]], dtype=tf.float32) # (4, 3)
temp_v = tf.constant([[ 1,0],
[ 10,0],
[ 100,5],
[1000,6]], dtype=tf.float32) # (4, 2)
# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
```
Pass all the queries together.
```
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3)
print_out(temp_q, temp_k, temp_v)
```
## Multi-head attention
<img src="https://www.tensorflow.org/images/tutorials/transformer/multi_head_attention.png" width="500" alt="multi-head attention">
Multi-head attention consists of four parts:
* Linear layers and split into heads.
* Scaled dot-product attention.
* Concatenation of heads.
* Final linear layer.
Each multi-head attention block gets three inputs; Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.
The `scaled_dot_product_attention` defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using `tf.transpose`, and `tf.reshape`) and put through a final `Dense` layer.
Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.
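A quick back-of-envelope check of the cost claim. The numbers below match this tutorial's configuration, but the identity holds for any `d_model` divisible by `num_heads`:

```python
d_model, num_heads = 512, 8
depth = d_model // num_heads          # 64 dimensions per head
seq_len = 60
# The attention-matrix matmul cost per head scales as seq_len^2 * depth;
# summed over all heads it equals the cost of one full-width head.
multi_head = num_heads * seq_len ** 2 * depth
single_head = seq_len ** 2 * d_model
print(multi_head == single_head)      # True
```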
```
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
"""Split the last dimension into (num_heads, depth).
Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
"""
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
# scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
# attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
return output, attention_weights
```
Create a `MultiHeadAttention` layer to try out. At each location in the sequence, `y`, the `MultiHeadAttention` runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.
```
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
```
## Point wise feed forward network
Point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.
```
def point_wise_feed_forward_network(d_model, dff):
return tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)
tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)
])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
```
## Encoder and decoder
<img src="https://www.tensorflow.org/images/tutorials/transformer/transformer.png" width="600" alt="transformer">
The transformer model follows the same general pattern as a standard [sequence to sequence with attention model](nmt_with_attention.ipynb).
* The input sentence is passed through `N` encoder layers that generates an output for each word/token in the sequence.
* The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.
### Encoder layer
Each encoder layer consists of sublayers:
1. Multi-head attention (with padding mask)
2. Point wise feed forward networks.
Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.
The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. There are N encoder layers in the transformer.
```
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model)
ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)
return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(
tf.random.uniform((64, 43, 512)), False, None)
sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model)
```
### Decoder layer
Each decoder layer consists of sublayers:
1. Masked multi-head attention (with look ahead mask and padding mask)
2. Multi-head attention (with padding mask). V (value) and K (key) receive the *encoder output* as inputs. Q (query) receives the *output from the masked multi-head attention sublayer.*
3. Point wise feed forward networks
Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis.
There are N decoder layers in the transformer.
As Q receives the output from decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.
```
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
# enc_output.shape == (batch_size, input_seq_len, d_model)
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model)
ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model)
return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
False, None, None)
sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model)
```
### Encoder
The `Encoder` consists of:
1. Input Embedding
2. Positional Encoding
3. N encoder layers
The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.
```
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(input_vocab_size, self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
# adding embedding and position encoding.
x = self.embedding(x) # (batch_size, input_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, input_vocab_size=8500)
sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)),
training=False, mask=None)
print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model)
```
### Decoder
The `Decoder` consists of:
1. Output Embedding
2. Positional Encoding
3. N decoder layers
The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.
```
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(target_vocab_size, d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x) # (batch_size, target_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
# x.shape == (batch_size, target_seq_len, d_model)
return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, target_vocab_size=8000)
output, attn = sample_decoder(tf.random.uniform((64, 26)),
enc_output=sample_encoder_output,
training=False, look_ahead_mask=None,
padding_mask=None)
output.shape, attn['decoder_layer2_block2'].shape
```
## Create the Transformer
Transformer consists of the encoder, decoder and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned.
```
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask) # (batch_size, inp_seq_len, d_model)
# dec_output.shape == (batch_size, tar_seq_len, d_model)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output) # (batch_size, tar_seq_len, target_vocab_size)
return final_output, attention_weights
sample_transformer = Transformer(
num_layers=2, d_model=512, num_heads=8, dff=2048,
input_vocab_size=8500, target_vocab_size=8000)
temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))
fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
enc_padding_mask=None,
look_ahead_mask=None,
dec_padding_mask=None)
fn_out.shape # (batch_size, tar_seq_len, target_vocab_size)
```
## Set hyperparameters
To keep this example small and relatively fast, the values for *num_layers, d_model, and dff* have been reduced.
The values used in the base model of transformer were; *num_layers=6*, *d_model = 512*, *dff = 2048*. See the [paper](https://arxiv.org/abs/1706.03762) for all the other versions of the transformer.
Note: By changing the values below, you can recover the model that achieved state-of-the-art results on many tasks.
```
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
```
## Optimizer
Use the Adam optimizer with a custom learning rate scheduler according to the formula in the [paper](https://arxiv.org/abs/1706.03762).
$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
```
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)
plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
```
## Loss and metrics
Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.
```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
```
## Training and checkpointing
```
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size, dropout_rate)
def create_masks(inp, tar):
# Encoder padding mask
enc_padding_mask = create_padding_mask(inp)
# Used in the 2nd attention block in the decoder.
# This padding mask is used to mask the encoder outputs.
dec_padding_mask = create_padding_mask(inp)
# Used in the 1st attention block in the decoder.
# It is used to pad and mask future tokens in the input received by
# the decoder.
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
```
Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every `n` epochs.
```
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
```
The target is divided into `tar_inp` and `tar_real`. `tar_inp` is passed as an input to the decoder. `tar_real` is that same input shifted by 1: at each location in `tar_inp`, `tar_real` contains the next token that should be predicted.
For example, `sentence` = "SOS A lion in the jungle is sleeping EOS"
`tar_inp` = "SOS A lion in the jungle is sleeping"
`tar_real` = "A lion in the jungle is sleeping EOS"
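The shift can be sketched with plain Python lists; the token ids below are made up for illustration (in the notebook the two sequences are produced by slicing the batch tensor inside `train_step`):

```python
# Hypothetical token ids for "SOS A lion in the jungle is sleeping EOS"
sentence = [8000, 12, 431, 7, 4, 902, 11, 655, 8001]
tar_inp = sentence[:-1]   # keeps SOS, drops EOS
tar_real = sentence[1:]   # drops SOS, keeps EOS
# At every position i, tar_real[i] is the token the model should
# predict after reading tar_inp[: i + 1].
print(tar_inp[1:] == tar_real[:-1])   # True: same tokens, shifted by one
```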
The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.
During training this example uses teacher-forcing (like in the [text generation tutorial](./text_generation.ipynb)). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.
As the transformer predicts each word, *self-attention* allows it to look at the previous words in the input sequence to better predict the next word.
To prevent the model from peeking at the expected output, the model uses a look-ahead mask.
```
EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.
train_step_signature = [
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
```
Portuguese is used as the input language and English is the target language.
```
for epoch in range(EPOCHS):
    start = time.time()

    train_loss.reset_states()
    train_accuracy.reset_states()

    # inp -> portuguese, tar -> english
    for (batch, (inp, tar)) in enumerate(train_dataset):
        train_step(inp, tar)

        if batch % 50 == 0:
            print('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
                epoch + 1, batch, train_loss.result(), train_accuracy.result()))

    if (epoch + 1) % 5 == 0:
        ckpt_save_path = ckpt_manager.save()
        print('Saving checkpoint for epoch {} at {}'.format(epoch + 1,
                                                            ckpt_save_path))

    print('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
                                                        train_loss.result(),
                                                        train_accuracy.result()))

    print('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
```
## Evaluate
The following steps are used for evaluation:
* Encode the input sentence using the Portuguese tokenizer (`tokenizer_pt`), and add the start and end tokens so the input matches what the model was trained with. This is the encoder input.
* The decoder input is the English start token, `tokenizer_en.vocab_size`.
* Calculate the padding masks and the look-ahead masks.
* The `decoder` then outputs the predictions by looking at the `encoder output` and its own output (self-attention).
* Select the last word and calculate its argmax.
* Concatenate the predicted word to the decoder input and pass it to the decoder.
* In this approach, the decoder predicts the next word based on the previous words it predicted.
Note: The model used here has less capacity to keep the example relatively fast, so the predictions may be less accurate. To reproduce the results in the paper, use the entire dataset and the base Transformer model or Transformer XL by changing the hyperparameters above.
```
def evaluate(inp_sentence):
    start_token = [tokenizer_pt.vocab_size]
    end_token = [tokenizer_pt.vocab_size + 1]

    # inp sentence is portuguese, hence adding the start and end token
    inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
    encoder_input = tf.expand_dims(inp_sentence, 0)

    # as the target is english, the first word to the transformer should be the
    # english start token.
    decoder_input = [tokenizer_en.vocab_size]
    output = tf.expand_dims(decoder_input, 0)

    for i in range(MAX_LENGTH):
        enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
            encoder_input, output)

        # predictions.shape == (batch_size, seq_len, vocab_size)
        predictions, attention_weights = transformer(encoder_input,
                                                     output,
                                                     False,
                                                     enc_padding_mask,
                                                     combined_mask,
                                                     dec_padding_mask)

        # select the last word from the seq_len dimension
        predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)
        predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)

        # return the result if the predicted_id is equal to the end token
        if tf.equal(predicted_id, tokenizer_en.vocab_size + 1):
            return tf.squeeze(output, axis=0), attention_weights

        # concatenate the predicted_id to the output which is given to the decoder
        # as its input.
        output = tf.concat([output, predicted_id], axis=-1)

    return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
    fig = plt.figure(figsize=(16, 8))

    sentence = tokenizer_pt.encode(sentence)
    attention = tf.squeeze(attention[layer], axis=0)

    for head in range(attention.shape[0]):
        ax = fig.add_subplot(2, 4, head+1)

        # plot the attention weights
        ax.matshow(attention[head][:-1, :], cmap='viridis')

        fontdict = {'fontsize': 10}

        ax.set_xticks(range(len(sentence)+2))
        ax.set_yticks(range(len(result)))

        ax.set_ylim(len(result)-1.5, -0.5)

        ax.set_xticklabels(
            ['<start>'] + [tokenizer_pt.decode([i]) for i in sentence] + ['<end>'],
            fontdict=fontdict, rotation=90)
        ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
                            if i < tokenizer_en.vocab_size],
                           fontdict=fontdict)

        ax.set_xlabel('Head {}'.format(head+1))

    plt.tight_layout()
    plt.show()
def translate(sentence, plot=''):
    result, attention_weights = evaluate(sentence)

    predicted_sentence = tokenizer_en.decode([i for i in result
                                              if i < tokenizer_en.vocab_size])

    print('Input: {}'.format(sentence))
    print('Predicted translation: {}'.format(predicted_sentence))

    if plot:
        plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
```
You can pass different layers and attention blocks of the decoder to the `plot` parameter.
```
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
```
## Summary
In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.
Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create [BERT](https://arxiv.org/abs/1810.04805) and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
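To illustrate the beam-search idea mentioned above (this toy sketch is not part of the tutorial code; the per-step probability tables are made up), instead of greedily taking the argmax at each step, we keep the `beam_width` best partial sequences by summed log-probability:

```python
import math

def beam_search(step_probs, beam_width=2):
    """Toy beam search over a fixed table of per-step token probabilities.

    step_probs: list of dicts mapping token -> probability at that step.
    Returns the highest-scoring token sequence.
    """
    beams = [([], 0.0)]  # (tokens, cumulative log-probability)
    for probs in step_probs:
        candidates = []
        for tokens, score in beams:
            for tok, p in probs.items():
                candidates.append((tokens + [tok], score + math.log(p)))
        # keep only the best `beam_width` partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

best = beam_search([{'a': 0.6, 'b': 0.4}, {'a': 0.1, 'b': 0.9}])
# best == ['a', 'b']: greedy decoding would also pick 'a' first here, but with
# a beam the choice is revisited in light of the second step's probabilities.
```

In a real decoder, `step_probs` would come from re-running the model on each candidate prefix rather than from a fixed table.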
<a href="https://colab.research.google.com/github/rim-yu/Keras-GAN/blob/master/team_project_dcgan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Based on https://blog.naver.com/dong961015/221839396386.
```
from __future__ import print_function, division
from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam
from keras.preprocessing import image
import glob
import matplotlib.pyplot as plt
import sys
import numpy as np
# after mounting Google Drive
path = glob.glob("/content/drive/Shared drives/team-project/team-project_rim/texts_jpg_original/*.jpg")

def load_image(path):
    image_list = np.zeros((len(path), 28, 28, 1))
    for i, fig in enumerate(path):
        # target_size = (height, width)
        img = image.load_img(fig, color_mode='grayscale', target_size=(28, 28))
        x = image.img_to_array(img).astype('float32')
        image_list[i] = x
    return image_list

a = image.load_img("/content/drive/Shared drives/team-project/team-project_rim/texts_jpg_original/GimhaeGayaR.jpg", color_mode='grayscale', target_size=(28, 28))
b = image.img_to_array(a).astype('float32')
print(b.shape)
a
class DCGAN():
    def __init__(self):
        # Input shape
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates imgs
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)

        # For the combined model we will only train the generator
        self.discriminator.trainable = False

        # The discriminator takes generated images as input and determines validity
        valid = self.discriminator(img)

        # The combined model (stacked generator and discriminator)
        # Trains the generator to fool the discriminator
        self.combined = Model(z, valid)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):
        model = Sequential()

        model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
        model.add(Reshape((7, 7, 128)))
        model.add(UpSampling2D())
        model.add(Conv2D(128, kernel_size=3, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Activation("relu"))
        model.add(UpSampling2D())
        model.add(Conv2D(64, kernel_size=3, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Activation("relu"))
        model.add(Conv2D(self.channels, kernel_size=3, padding="same"))
        model.add(Activation("tanh"))
        model.summary()

        noise = Input(shape=(self.latent_dim,))
        img = model(noise)
        return Model(noise, img)

    def build_discriminator(self):
        model = Sequential()

        model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=self.img_shape, padding="same"))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
        model.add(ZeroPadding2D(padding=((0,1),(0,1))))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(1, activation='sigmoid'))
        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)
        return Model(img, validity)

    def train(self, epochs, batch_size=128, save_interval=50):
        # Load the dataset; load_image already returns shape (N, 28, 28, 1),
        # so no extra np.expand_dims is needed here.
        X_train = load_image(path)

        # Rescale -1 to 1
        X_train = X_train / 127.5 - 1.

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):
            # ---------------------
            #  Train Discriminator
            # ---------------------
            # Select a random half of images
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]

            # Sample noise and generate a batch of new images
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            gen_imgs = self.generator.predict(noise)

            # Train the discriminator (real classified as ones and generated as zeros)
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------
            # Train the generator (wants discriminator to mistake images as real)
            g_loss = self.combined.train_on_batch(noise, valid)

            # Plot the progress
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100*d_loss[1], g_loss))

            # If at save interval => save generated image samples
            if epoch % save_interval == 0:
                self.save_imgs(epoch)

    def save_imgs(self, epoch):
        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)

        # Rescale images 0 - 1
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        #fig.savefig("images/mnist_%d.png" % epoch)
        fig.savefig("/content/drive/My Drive/team-project/images/picture_%d.png" % epoch)
        plt.close()

if __name__ == '__main__':
    dcgan = DCGAN()
    dcgan.train(epochs=4000, batch_size=32, save_interval=50)
class DCGAN():
    def __init__(self):
        # Set the size of the incoming images
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates imgs
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)

        # For the combined model we will only train the generator
        self.discriminator.trainable = False

        # The discriminator takes generated images as input and determines validity
        valid = self.discriminator(img)

        # The combined model (stacked generator and discriminator)
        # Trains the generator to fool the discriminator
        self.combined = Model(z, valid)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):
        model = Sequential()

        # NOTE: starting from (64, 64) feature maps, the two UpSampling2D layers
        # below produce 256x256 images, which does not match self.img_shape
        # (28x28); use (7, 7, 128) as in the version above to keep the shapes
        # consistent.
        model.add(Dense(128 * 64 * 64, activation="relu", input_dim=self.latent_dim))
        model.add(Reshape((64, 64, 128)))
        model.add(UpSampling2D())
        model.add(Conv2D(128, kernel_size=3, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Activation("relu"))
        model.add(UpSampling2D())
        model.add(Conv2D(64, kernel_size=3, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Activation("relu"))
        model.add(Conv2D(self.channels, kernel_size=3, padding="same"))
        model.add(Activation("tanh"))
        model.summary()

        noise = Input(shape=(self.latent_dim,))
        img = model(noise)
        return Model(noise, img)

    def build_discriminator(self):
        model = Sequential()

        model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=self.img_shape, padding="same"))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
        model.add(ZeroPadding2D(padding=((0,1),(0,1))))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
        model.add(BatchNormalization(momentum=0.8))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(1, activation='sigmoid'))
        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)
        return Model(img, validity)

    def train(self, epochs, batch_size=128, save_interval=50):
        # Load the dataset
        X_train = load_image(path)

        # Rescale -1 to 1
        X_train = X_train / 127.5 - 1.

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):
            # ---------------------
            #  Train Discriminator
            # ---------------------
            # Select a random half of images
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]

            # Sample noise and generate a batch of new images
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            gen_imgs = self.generator.predict(noise)

            # Train the discriminator (real classified as ones and generated as zeros)
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------
            # Train the generator (wants discriminator to mistake images as real)
            g_loss = self.combined.train_on_batch(noise, valid)

            # Plot the progress
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100*d_loss[1], g_loss))

            # If at save interval => save generated image samples
            if epoch % save_interval == 0:
                self.save_imgs(epoch)

    # Save the generated image samples every save_interval (50) epochs
    def save_imgs(self, epoch):
        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)

        # Rescale images 0 - 1
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        plt.savefig("/content/drive/My Drive/team-project/images/picture_%d.png" % epoch)
        plt.close()

if __name__ == '__main__':
    dcgan = DCGAN()
    dcgan.train(epochs=4000, batch_size=32, save_interval=50)
```
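The pixel rescaling used in `train` and `save_imgs` above is easy to check in isolation; the round trip below mirrors those two lines:

```python
import numpy as np

# [0, 255] pixels are mapped to [-1, 1] to match the generator's tanh output,
# and back to [0, 1] for saving with matplotlib.
x = np.array([0.0, 127.5, 255.0])
scaled = x / 127.5 - 1.0        # -> [-1., 0., 1.]
restored = 0.5 * scaled + 0.5   # -> [0., 0.5, 1.]
```

Keeping the real images in the same [-1, 1] range as the generated ones is important, since the discriminator sees both.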
# Artificial test set preparation
This notebook prepares the synthetic test set based on the Lakh MIDI Dataset (LMD). Run `../note_seq/prepare.ipynb` and `../audio_train/prepare.ipynb` first.
Copyright 2020 InterDigital R&D and Télécom Paris.
Author: Ondřej Cífka
```
import collections
import concurrent.futures as cf
import copy
import glob
import hashlib
import itertools
import json
import os
import pickle
import random
import re
import shutil
import sys
import IPython.display as ipd
import librosa
from natsort import natsorted, ns
import note_seq
import numpy as np
import matplotlib.pyplot as plt
import pretty_midi
import pysndfx
import soundfile as sf
from sklearn.model_selection import train_test_split
from tqdm.auto import tqdm
INPUT_DIR = '../note_seq/data/'
INPUT_SECTION = 'test' # 'val'
OUTPUT_SUFFIX = '' # '_val'
OUTPUT_DIR = 'wav_16kHz' + OUTPUT_SUFFIX
TOTAL_FILES = 169556
SR = 16000
SF_PATH = '/data/ocifka/id_bkp/data/soundfonts/TimbresOfHeaven/Timbres Of Heaven (XGM) 3.94.sf2'
def filter_sequence(sequence, instrument_re=None, instrument_ids=None, programs=None, drums=None,
                    copy=False):
    if copy:
        sequence, original_sequence = note_seq.NoteSequence(), sequence
        sequence.CopyFrom(original_sequence)
    if isinstance(instrument_re, str):
        instrument_re = re.compile(instrument_re)

    # Filter the instruments based on name and ID
    deleted_ids = set()
    if instrument_re is not None:
        deleted_ids.update(i.instrument for i in sequence.instrument_infos
                           if not instrument_re.search(i.name))
    if instrument_ids is not None:
        deleted_ids.update(i.instrument for i in sequence.instrument_infos
                           if i.instrument not in instrument_ids)
    new_infos = [i for i in sequence.instrument_infos if i.instrument not in deleted_ids]
    del sequence.instrument_infos[:]
    sequence.instrument_infos.extend(new_infos)

    # Filter the event collections
    for collection in [sequence.notes, sequence.pitch_bends, sequence.control_changes]:
        collection_copy = list(collection)
        del collection[:]
        for event in collection_copy:
            if event.instrument in deleted_ids:
                continue
            if instrument_ids is not None and event.instrument not in instrument_ids:
                continue
            if programs is not None and event.program not in programs:
                continue
            if drums is not None and event.is_drum != drums:
                continue
            collection.add().CopyFrom(event)
    return sequence
def compute_stats(pm):
    # `pm` can be a pretty_midi.PrettyMIDI object or a single Instrument;
    # both provide get_piano_roll().
    piano_roll = ~np.isclose(pm.get_piano_roll(fs=100), 0)
    num_voices = piano_roll.sum(axis=0)
    pitches, _ = np.where(piano_roll)
    return {
        'voices': num_voices.sum() / ((num_voices > 0).sum() + 1e-9),
        'pitch.mean': np.mean(pitches),
        'pitch.low': np.percentile(pitches, 5),
        'pitch.high': np.percentile(pitches, 95),
        'program': instr.program  # NOTE: relies on the enclosing loop variable `instr`
    }
with open('../audio_train/metadata.json') as f:
    src_metadata = json.load(f)[INPUT_SECTION]
src_paths = [item['src_path'] for item in src_metadata.values()]
all_stats = collections.defaultdict(list)
for path in tqdm(src_paths):
    with open(os.path.join(INPUT_DIR, path), 'rb') as f:
        ns = pickle.load(f)
    pm = note_seq.midi_io.note_sequence_to_pretty_midi(ns)
    for instr in pm.instruments:
        if instr.is_drum:
            continue
        for k, v in compute_stats(instr).items():
            all_stats[k].append(v)
plt.hist(all_stats['voices'], bins=np.linspace(0.5, 10.5, 101))
plt.xticks(np.linspace(0, 10, 11))
plt.show()
np.median(all_stats['voices']), np.median(all_stats['pitch.mean'])
plt.hist(all_stats['pitch.mean'], bins=np.linspace(0, 128, 129))
plt.show()
hist, _, _, _ = plt.hist2d(all_stats['voices'], all_stats['pitch.mean'], bins=[np.linspace(0.5, 5.5, 6), np.linspace(0, 128, 17)])
plt.colorbar()
plt.show()
hist.round().astype('int')
hist, _, _, _ = plt.hist2d(all_stats['program'], all_stats['pitch.mean'], bins=[np.linspace(0, 128, 129), np.linspace(0, 128, 17)])
plt.colorbar()
plt.show()
hist, _, _, _ = plt.hist2d(all_stats['voices'], all_stats['pitch.mean'],
bins=[np.quantile(all_stats['voices'], [0., 0.5, 1.]),
np.quantile(all_stats['pitch.mean'], [0., 0.5, 1.])],
vmin=0)
plt.colorbar()
plt.show()
hist.round().astype('int')
voices_bins = np.quantile(all_stats['voices'], [0., 0.5, 1.])
pitch_bins = np.quantile(all_stats['pitch.mean'], [0., 0.5, 1.])
def assign_bin(a, bins):
    n = np.digitize(a, bins)
    return np.clip(n, a_min=1, a_max=len(bins) - 1)
binned = collections.defaultdict(list)
for path in tqdm(src_paths):
with open(os.path.join(INPUT_DIR, path), 'rb') as f:
ns_full = pickle.load(f)
ns_full.filename = path
if not ns_full.instrument_infos:
continue
max_instrument = max(ii.instrument for ii in ns_full.instrument_infos)
for instrument, program in sorted(set((n.instrument, n.program) for n in ns_full.notes if not n.is_drum)):
ns = filter_sequence(ns_full, instrument_ids={instrument}, copy=True)
meta = {
'src_path': path,
'instrument': instrument,
'src_program': program
}
stats = compute_stats(note_seq.midi_io.note_sequence_to_pretty_midi(ns))
bin_key = 'voices{}_pitch{}'.format(assign_bin(stats['voices'], voices_bins),
assign_bin(stats['pitch.mean'], pitch_bins))
binned[bin_key].append((ns, meta))
binned = dict(binned)
def synth_ns(ns, out_path):
    audio = note_seq.midi_synth.fluidsynth(ns, sf2_path=SF_PATH, sample_rate=SR)
    if len(audio) < 1 * SR:  # Skip if shorter than 1 second
        return False
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    sf.write(out_path, audio, SR, subtype='PCM_24')
    return True
MAX_PER_PROGRAM = 4
metadata = {}
for bin_key in tqdm(sorted(binned.keys())):
shuffled = np.random.default_rng(seed=0).choice(binned[bin_key], len(binned[bin_key]), replace=False)
# Generate segments
segments_by_program = collections.defaultdict(list)
for ns, meta in shuffled:
if len(segments_by_program[meta['src_program']]) >= MAX_PER_PROGRAM:
continue
# Use filename as seed
seed = meta['src_path'].encode()
seed = int.from_bytes(hashlib.sha512(seed).digest(), 'big')
rng = np.random.default_rng(seed=seed)
# Pick a non-silent segment
boundaries = np.arange(0., ns.total_time, 1.) # 1-second chunks
onset_counts, _ = np.histogram([n.start_time for n in ns.notes], bins=boundaries)
cumsum = np.cumsum(onset_counts > 0)
[candidates] = np.nonzero(
(onset_counts[:-7] > 0) # The first second needs to have an onset
& (cumsum[7:] - np.pad(cumsum[:-8], (1, 0)) >= 4)) # & at least 4 of 8 seconds need to have an onset
if len(candidates) == 0:
continue
index = rng.choice(candidates)
segment = note_seq.sequences_lib.extract_subsequence(ns, index, index + 8)
segment_meta = {
'src_path': meta['src_path'],
'program': meta['src_program'],
'instrument': meta['instrument'],
'index': index
}
segments_by_program[meta['src_program']].append((segment, segment_meta))
# Pair up segments so that we have two different programs in each pair. Synthesize audio.
rng = np.random.default_rng(seed=0)
with cf.ProcessPoolExecutor(16) as pool:
futures = {}
num_pairs = sum(len(x) for x in segments_by_program.values()) // 2
for _ in range(num_pairs):
avail_programs = [p for p, v in segments_by_program.items() if len(v) > 0]
if len(avail_programs) < 2:
break
p1, p2 = rng.choice(avail_programs, 2, replace=False)
(ns1, meta1), (ns2, meta2) = segments_by_program[p1].pop(), segments_by_program[p2].pop()
# Create style transfer target (ground truth)
ns3 = copy.deepcopy(ns1)
for note in ns3.notes:
note.program = meta2['program']
meta = {
'content': meta1,
'style': meta2,
'target': {
'src_path': meta1['src_path'],
'index': meta1['index'],
'program': meta2['program']
}
}
digest = hashlib.md5(repr(meta).encode()).hexdigest()
meta['content']['path'] = f'{bin_key}/{digest}.content.wav'
meta['style']['path'] = f'{bin_key}/{digest}.style.wav'
meta['target']['path'] = f'{bin_key}/{digest}.target.wav'
assert digest not in metadata
metadata[digest] = meta
for ns, name in [(ns1, 'content'), (ns2, 'style'), (ns3, 'target')]:
out_path = os.path.join(OUTPUT_DIR, meta[name]['path'])
futures[pool.submit(synth_ns, ns, out_path)] = digest
for future in tqdm(cf.as_completed(futures), total=len(futures), desc=bin_key):
if not future.result():
digest = futures[future]
for name in metadata[digest]:
path = metadata[digest][name]['path']
if os.path.exists(path):
os.remove(path)
del metadata[digest]
print(f'Saved {len(metadata)} triplets')
class NumPyJSONEncoder(json.JSONEncoder):
    def default(self, x):
        if isinstance(x, (np.ndarray, np.generic)):
            return x.tolist()
        else:
            return super().default(x)
with open(f'metadata{OUTPUT_SUFFIX}.json', 'w') as f:
    json.dump(metadata, f, cls=NumPyJSONEncoder)

with open(f'triplets{OUTPUT_SUFFIX}', 'w') as f:
    for _, meta in sorted(metadata.items()):
        print(os.path.join(OUTPUT_DIR, meta['content']['path']),
              os.path.join(OUTPUT_DIR, meta['style']['path']),
              os.path.join(OUTPUT_DIR, meta['target']['path']),
              sep='\t', file=f)
!cut -f 1,2 <triplets{OUTPUT_SUFFIX} >pairs{OUTPUT_SUFFIX}
!wc -l triplets* pairs*
```
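The segment picker above derives a reproducible RNG from each file path by hashing the path into an integer seed. The same idea, sketched here with the standard library's `random` module instead of NumPy (the path below is made up):

```python
import hashlib
import random

def rng_for_path(path):
    # Hash the path into a big integer and use it as a deterministic seed.
    seed = int.from_bytes(hashlib.sha512(path.encode()).digest(), 'big')
    return random.Random(seed)

# Same path -> same draws, so segment selection is reproducible across runs
# without storing any per-file state.
a = rng_for_path('lmd/track_001.mid').randrange(10**6)
b = rng_for_path('lmd/track_001.mid').randrange(10**6)
```

This is why the generated test set is stable even if the files are processed in a different order.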
# Variational Monte Carlo with Neural Networks
In this tutorial we will use NetKet to obtain the ground state of the J1-J2 model in one-dimension with periodic boundary conditions, using a Neural Network variational wave-function. The Hamiltonian of the model is given by:
$$ H = \sum_{i=1}^{L} J_{1}\vec{\sigma}_{i} \cdot \vec{\sigma}_{i+1} + J_{2} \vec{\sigma}_{i} \cdot \vec{\sigma}_{i+2} $$
where the sum is over sites of the 1-D chain. Here $\vec{\sigma}=(\sigma^x,\sigma^y,\sigma^z)$ is the vector of Pauli matrices.
We will also explore some useful functionalities provided by the package.
## Objectives:
1. Defining custom Hamiltonians
2. Defining the machine (variational ansatz)
3. Variational Monte Carlo Optimisation
4. Measuring observables
5. Data Visualisation
6. Sanity Check: Exact Diagonalisation
Let's start.
```
# ensure we run on the CPU
import os
os.environ["JAX_PLATFORM_NAME"] = "cpu"
# Import netket library
import netket as nk
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
```
## 1) Defining a Custom Hamiltonian
The first thing to do is to define the graph (lattice) on which to specify the Hamiltonian. Here we would like to build a one-dimensional graph with both nearest- and next-nearest-neighbour bonds. The graph is created with the ``nk.graph.Graph`` class. To initialise it, we simply provide a list of edges in the ``[[site_i, site_j, edge_color], ...]`` format.
```
#Couplings J1 and J2
J = [1, 0.2]
L = 14
# Define custom graph
edge_colors = []
for i in range(L):
    edge_colors.append([i, (i+1)%L, 1])
    edge_colors.append([i, (i+2)%L, 2])

# Define the netket graph object
g = nk.graph.Graph(edges=edge_colors)
```
We specify a different ``color`` for each type of bond so as to define a different operator for each of them.
Next, we define the relevant bond operators.
```
#Sigma^z*Sigma^z interactions
sigmaz = [[1, 0], [0, -1]]
mszsz = (np.kron(sigmaz, sigmaz))
#Exchange interactions
exchange = np.asarray([[0, 0, 0, 0], [0, 0, 2, 0], [0, 2, 0, 0], [0, 0, 0, 0]])
bond_operator = [
(J[0] * mszsz).tolist(),
(J[1] * mszsz).tolist(),
(-J[0] * exchange).tolist(),
(J[1] * exchange).tolist(),
]
bond_color = [1, 2, 1, 2]
```
**Side Remark**: Notice the minus sign in front of the exchange. This is simply a basis rotation corresponding to the Marshall sign rule (as an exercise, change the sign of this exchange and observe that the exact diagonalization results in Section 6 do not change). The goal of this basis rotation is to speed up the convergence of the Monte Carlo simulations of the wave-function (by providing a good variational sign structure to start with), but in principle the converged results should be identical in both bases. Note further that this sign change is useful at low frustration (such as here $J_2=0.2$), but may actually be not optimal at stronger frustration. As a bonus exercise, repeat the calculation with $J_2=0.8$, and see which basis (*i.e.* which sign in front of the exchange) leads to faster convergence.
Before defining the Hamiltonian, we also need to specify the Hilbert space. For our case, this would be the chain spin-half degrees of freedom.
```
# Spin based Hilbert Space
hi = nk.hilbert.Spin(s=0.5, total_sz=0.0, N=g.n_nodes)
```
Note that we impose here the total magnetization to be zero (it turns out to be the correct magnetization for the ground-state). As an exercise, check that the energy of the lowest state in other magnetization sectors is larger.
Next, we define the custom graph Hamiltonian using the ``nk.operator.GraphOperator`` class, by providing the Hilbert space ``hi``, the graph ``g``, the bond operators ``bond_ops=bond_operator`` and the corresponding bond colors ``bond_ops_colors=bond_color``.
```
# Custom Hamiltonian operator
op = nk.operator.GraphOperator(hi, graph=g, bond_ops=bond_operator, bond_ops_colors=bond_color)
```
## 2) Defining the Machine
For this tutorial, we shall use the most common type of neural network: a fully connected feedforward neural network. Other types of neural networks available will be discussed in other tutorials.
```
import netket.nn as nknn
import jax.numpy as jnp
class FFNN(nknn.Module):
    @nknn.compact
    def __call__(self, x):
        x = nknn.Dense(features=2*x.shape[-1], use_bias=True, dtype=np.complex128, kernel_init=nknn.initializers.normal(stddev=0.01), bias_init=nknn.initializers.normal(stddev=0.01))(x)
        x = nknn.log_cosh(x)
        x = jnp.sum(x, axis=-1)
        return x
model = FFNN()
```
## 3) Variational Monte Carlo Optimisation
We have now set up our model (Hamiltonian, graph, Hilbert space) and can proceed to optimise the variational ansatz we chose, namely the ``FFNN`` model.
To set up the variational Monte Carlo optimisation, we have to provide a sampler ``nk.sampler`` and an optimizer ``nk.optimizer``.
```
# We shall use an exchange Sampler which preserves the global magnetization (as this is a conserved quantity in the model)
sa = nk.sampler.MetropolisExchange(hilbert=hi, graph=g, d_max = 2)
# Construct the variational state
vs = nk.variational.MCState(sa, model, n_samples=1000)
# We choose a basic, albeit important, Optimizer: the Stochastic Gradient Descent
opt = nk.optimizer.Sgd(learning_rate=0.01)
# Stochastic Reconfiguration
sr = nk.optimizer.SR(diag_shift=0.01)
# We can then specify a Variational Monte Carlo object, using the Hamiltonian, sampler and optimizers chosen.
# Note that we also specify the method to learn the parameters of the wave-function: here we choose the efficient
# Stochastic reconfiguration (Sr), here in an iterative setup
gs = nk.VMC(hamiltonian=op, optimizer=opt, variational_state=vs, preconditioner=sr)
```
## 4) Measuring Observables
Before running the optimization, it can be helpful to add some observables to keep track of during the optimization. For our purpose, let us measure the antiferromagnetic structure factor, defined as:
$$ \frac{1}{L} \sum_{ij} \langle \hat{\sigma}_{i}^z \hat{\sigma}_{j}^z \rangle e^{i\pi(i-j)}. $$
```
# We need to specify the local operators as a matrix acting on a local Hilbert space
sf = []
sites = []
structure_factor = nk.operator.LocalOperator(hi, dtype=complex)
for i in range(0, L):
    for j in range(0, L):
        structure_factor += (nk.operator.spin.sigmaz(hi, i)*nk.operator.spin.sigmaz(hi, j))*((-1)**(i-j))/L
```
Once again, notice that we had to multiply the exchange operator (matrix) by some factor. This is to account for the Marshall basis rotation we made in our model.
We can now optimize our variational ansatz. The optimization data for each iteration will be stored in a log file, which contains the following information:
1. Mean, variance and uncertainty in the Energy $ \langle \hat{H} \rangle $
2. Mean, variance and uncertainty in the Energy Variance, $ \langle\hat{H}^{2}\rangle-\langle \hat{H}\rangle^{2}$.
3. Acceptance rates of the sampler
4. Mean, variance and uncertainty of observables (if specified)
Now let's learn the ground-state!
```
# Run the optimization protocol
gs.run(out='test', n_iter=600, obs={'Structure Factor': structure_factor})
```
## 5) Data Visualisation
Now that we have optimized our machine to find the ground state of the $J_1-J_2$ model, let's look at what we have.
The relevant data are stored in the ".log" file while the optimized parameters are in the ".wf" file. The files are all in json format.
We shall extract the energy as well as specified observables (antiferromagnetic structure factor in our case) from the ".log" file.
```
# Load the data from the .log file
import json
data=json.load(open("test.log"))
iters = data['Energy']['iters']
energy=data['Energy']['Mean']['real']
sf=data['Structure Factor']['Mean']['real']
fig, ax1 = plt.subplots()
ax1.plot(iters, energy, color='blue', label='Energy')
ax1.set_ylabel('Energy')
ax1.set_xlabel('Iteration')
ax2 = ax1.twinx()
ax2.plot(iters, np.array(sf), color='green', label='Structure Factor')
ax2.set_ylabel('Structure Factor')
ax1.legend(loc=2)
ax2.legend(loc=1)
plt.show()
```
Let's also compute the average of those quantities (energy and Néel order) over the last 50 iterations, where the optimization seems to have converged.
```
print(r"Structure factor = {0:.3f}({1:.3f})".format(np.mean(sf[-50:]),
np.std(np.array(sf[-50:]))/np.sqrt(50)))
print(r"Energy = {0:.3f}({1:.3f})".format(np.mean(energy[-50:]), np.std(energy[-50:])/(np.sqrt(50))))
```
## 6) Sanity Check: Exact Diagonalisation
Now that we have obtained some results using VMC, it is a good time to check their quality (at least for small system sizes). For this purpose, NetKet provides exact diagonalisation tools.
```
E_gs, ket_gs = nk.exact.lanczos_ed(op, compute_eigenvectors=True)
structure_factor_gs = (ket_gs.T.conj()@structure_factor.to_linear_operator()@ket_gs).real[0,0]
```
Here we have specified that we want the corresponding eigenvector (in order to compute observables).
```
print("Exact Ground-state Structure Factor: {0:.3f}".format(structure_factor_gs))
print("Exact ground state energy = {0:.3f}".format(E_gs[0]))
```
So we see that both the energy and the structure factor we obtained are in agreement with the values obtained via exact diagonalisation.
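As a quick way to quantify that agreement, one can compute the relative error between the VMC estimates and the exact values. The numbers below are hypothetical placeholders, not the notebook's actual output; in the notebook you would use `np.mean(energy[-50:])`, `np.mean(sf[-50:])`, `E_gs[0]` and `structure_factor_gs` instead.

```python
# Hypothetical stand-in values; replace with the notebook's VMC averages
# and exact-diagonalisation results.
vmc_energy, ed_energy = -35.58, -35.62
vmc_sf, ed_sf = 0.324, 0.318

rel_err_energy = abs(vmc_energy - ed_energy) / abs(ed_energy)
rel_err_sf = abs(vmc_sf - ed_sf) / abs(ed_sf)
print(f"Relative error in energy: {rel_err_energy:.2%}")
print(f"Relative error in structure factor: {rel_err_sf:.2%}")
```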
# Pandas and Plotting Basics
Author: 杨岱川
Written: September 2019
Last edited: May 2020
GitHub: https://github.com/DrDavidS/basic_Machine_Learning
License: [MIT](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/LICENSE)
## Importing pandas
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Pandas' two main data structures
### Series
A Series is a one-dimensional array with axis labels that can hold any data type.
```
nay = np.random.rand(5)
ser = pd.Series(nay)
print(nay)
print(ser)
print(type(nay))
print(type(ser))
```
### DataFrame
A DataFrame is a tabular data structure, somewhat like the Excel spreadsheets we use every day.
We will use the famous iris dataset to illustrate how DataFrames work.
Reference: [This notebook demos Python data visualizations on the Iris dataset](https://www.kaggle.com/benhamner/python-data-visualizations)
>The Iris dataset is a classic classification benchmark collected and organised by Fisher in 1936. Also known as the iris flower dataset, it is a multivariate dataset containing 150 samples in 3 classes of 50 samples each, where every sample has 4 attributes. From the sepal length, sepal width, petal length and petal width, one can predict which of the three species (Setosa, Versicolour, Virginica) an iris flower belongs to.
### Reading files into a DataFrame
How do we read a txt, csv or similar file into a DataFrame?
Usually we use pandas' built-in read_csv() method; for xlsx files, use read_excel() instead.
Pandas can read a great many file types; see [IO tools](https://www.pypandas.cn/docs/user_guide/io.html#csv-文本文件) for details.
```
iris = pd.read_csv("Iris.csv")
iris
```
### A quick look at the data
The first ten rows:
```
iris.head(10)
```
Count how many samples there are of each species, and look at summary statistics:
```
iris['Species'].value_counts()
iris.describe()
```
### Viewing the class distribution
To inspect the species distribution with matplotlib, we first split the three iris species into three DataFrames.
In a DataFrame, rows are usually filtered with iloc or loc, which differ slightly: iloc selects by integer position, while loc selects by label (and accepts boolean masks or string labels).
The code below performs the same filtering in two ways: loc with a boolean mask, and direct boolean indexing. Note that the second style can be unsafe when modifying data (for example, it may overwrite a previous DataFrame) and can trigger a warning.
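To make the positional/label distinction concrete, here is a tiny standalone example; the small frame below is a hypothetical stand-in for the Iris data:

```python
import pandas as pd

# A small stand-in frame; the notebook applies the same idea to the Iris data
df = pd.DataFrame({'Species': ['Iris-setosa', 'Iris-virginica', 'Iris-setosa'],
                   'SepalLengthCm': [5.1, 6.3, 4.9]})

# iloc: purely positional -- first two rows, first column
print(df.iloc[0:2, 0])

# loc: label-based -- here combined with a boolean mask on a column
setosa = df.loc[df['Species'] == 'Iris-setosa', 'SepalLengthCm']
print(setosa.tolist())
```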
```
iris_virginica = iris.loc[iris['Species'] == 'Iris-virginica']
iris_versicolor = iris.loc[iris['Species'] == 'Iris-versicolor']
iris_setosa = iris.loc[iris['Species'] == 'Iris-setosa']
iris_virginica
iris_virginica = iris[iris['Species'] == 'Iris-virginica']
iris_versicolor = iris[iris['Species'] == 'Iris-versicolor']
iris_setosa = iris[iris['Species'] == 'Iris-setosa']
iris_virginica
```
Next, plot a scatter chart with sepal length on the $ x $-axis and sepal width on the $ y $-axis, marking each species with a different color.
In matplotlib, the scatter method draws scatter plots; we call scatter three times, once for each iris species, and the three layers are overlaid in the final figure.
```
# Figure size
plt.figure(figsize=(10,10))
# Use a CJK-capable font so the (originally Chinese) title renders correctly
plt.rcParams['font.sans-serif']=['SimHei']
plt.rcParams['axes.unicode_minus'] = False
plt.title('Iris sepal length/width by species')
# Scatter plots of the data
plt.scatter(iris_virginica['SepalLengthCm'], iris_virginica['SepalWidthCm'], c='r', label='Iris-virginica')
plt.scatter(iris_versicolor['SepalLengthCm'], iris_versicolor['SepalWidthCm'], c='g', label='Iris-versicolor')
plt.scatter(iris_setosa['SepalLengthCm'], iris_setosa['SepalWidthCm'], c='b', label='Iris-setosa')
# Other elements
plt.legend()    # show the legend
plt.grid(True)  # show the grid
plt.show()      # display the figure
```
Let's also try seaborn, another popular Python plotting library. Seaborn usage is not the focus of this section; readers who want to learn seaborn should consult the official documentation and tutorials.
```
import seaborn as sns
sns.jointplot(x="SepalLengthCm", y="SepalWidthCm", data=iris, height=10)
plt.show()
sns.FacetGrid(iris, hue="Species", height=10) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
   .add_legend() # legend
plt.show()
```
PyTorch Introduction 8. Quickstart
===============================================================
[Original title] Learn the Basics
[Original authors]
[Suraj Subramanian](https://github.com/suraj813), [Seth Juarez](https://github.com/sethjuarez/), [Cassie Breviu](https://github.com/cassieview/), [Dmitry Soshnikov](https://soshnikov.com/), [Ari Bornstein](https://github.com/aribornstein/)
[Original URL] https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html
[Translated by] 小川 雄太郎, ISID AI Transformation Center
[Date] March 20, 2021
[Tutorial overview]
This tutorial walks through the PyTorch APIs used to run machine-learning tasks. For more detail on each topic, follow the links in the corresponding section.
---
[Note]
If you already have experience with deep-learning frameworks and are comfortable implementing deep learning, work through this tutorial first to get familiar with the PyTorch API.
<br>
If this is your first time implementing models with a deep-learning framework, do not start here; instead, begin with
[PyTorch Introduction 1. Tensors](https://colab.research.google.com/github/YutaroOgawa/pytorch_tutorials_jp/blob/main/notebook/0_Learn%20the%20Basics/1_1_tensor_tutorial_jp.ipynb) and proceed one step at a time.
---
Working with data
-----------------
PyTorch has two basic primitives for working with data:
``torch.utils.data.Dataset`` and ``torch.utils.data.DataLoader``.
<br>
``Dataset`` stores the samples and their corresponding labels.
``DataLoader`` wraps an iterable around the ``Dataset`` so it can be iterated over.
<br>
(Details)
https://pytorch.org/docs/stable/data.html
```
%matplotlib inline
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt
```
PyTorch offers domain-specific libraries such as the following, each of which includes its own datasets.
This tutorial uses a TorchVision dataset.
- [TorchText](https://pytorch.org/text/stable/index.html)
- [TorchVision](https://pytorch.org/vision/stable/index.html)
- [TorchAudio](https://pytorch.org/audio/stable/index.html)
The ``torchvision.datasets`` module contains ``Dataset`` objects for many image datasets,
such as CIFAR and COCO ([full list here](https://pytorch.org/docs/stable/torchvision/datasets.html)).
This tutorial uses the FashionMNIST dataset.
Every TorchVision ``Dataset`` takes two arguments,
``transform`` and ``target_transform``, which transform the samples and the labels respectively.
```
# Download the training data from datasets
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
)
# Download the test data from datasets
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor(),
)
```
We pass the Dataset as an argument to DataLoader.
DataLoader wraps an iterable around the Dataset and
supports automatic batching, sampling, shuffling and multiprocess data loading.
Here we define a batch size of 64,
i.e., each element in the dataloader iterable will return a batch of 64 features and labels.
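Conceptually, the batching that ``DataLoader`` performs is just iterating the dataset in fixed-size chunks. A minimal pure-Python sketch of that idea (not the actual ``DataLoader`` implementation, which also handles sampling, shuffling and worker processes):

```python
def batches(data, batch_size):
    # Yield consecutive chunks of size batch_size; the last chunk may be smaller
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

chunks = list(batches(list(range(10)), 4))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```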
```
batch_size = 64
# Create the data loaders
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataloader:
print("Shape of X [N, C, H, W]: ", X.shape)
print("Shape of y: ", y.shape, y.dtype)
break
```
For more detail, see [PyTorch Introduction 2. Datasets & DataLoaders](https://colab.research.google.com/github/YutaroOgawa/pytorch_tutorials_jp/blob/main/notebook/0_Learn%20the%20Basics/0_2_data_tutorial_jp.ipynb).
--------------
Creating models
------------------
To define a neural network in PyTorch, we create a class that inherits from [nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html).
We define the network's layers in the ``__init__`` function and specify how data flows through the network in the ``forward`` function.
To accelerate computation, we move the neural network to the GPU when one is available.
```
# Use a GPU (cuda) for training if one is available; otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
# Define the model
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
print(model)
```
For more detail, see [PyTorch Introduction 4. Build the Neural Network](https://colab.research.google.com/github/YutaroOgawa/pytorch_tutorials_jp/blob/main/notebook/0_Learn%20the%20Basics/0_4_buildmodel_tutorial_js.ipynb).
--------------
Optimizing the model parameters
----------------------------------------
To train a neural network model, we need
a [loss function](https://pytorch.org/docs/stable/nn.html#loss-functions) and an [optimizer](https://pytorch.org/docs/stable/optim.html).
```
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```
In a single training loop, the model first makes predictions on a batch of training data and computes the loss,
then backpropagates the loss to update the model parameters.
```
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
# Compute the prediction loss
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
```
We also check the model's performance on the test dataset to confirm that it is learning well.
```
def test(dataloader, model):
size = len(dataloader.dataset)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= size
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
```
The training process runs over several iterations (epochs).
During each epoch, the model learns parameters so that it makes better predictions.
We print the model's accuracy and loss at every epoch to confirm that the accuracy is rising and the loss is falling.
```
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
test(test_dataloader, model)
print("Done!")
```
For more detail, see [PyTorch Introduction 6. Optimization](https://colab.research.google.com/github/YutaroOgawa/pytorch_tutorials_jp/blob/main/notebook/0_Learn%20the%20Basics/0_6_optimization_tutorial_js.ipynb).
--------------
Saving models
-------------
A common way to save a model is to serialize the internal state dictionary (which contains the model parameters).
```
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
```
---
Loading models
----------------------------
To load a model, we first re-create the model structure and then load the saved state dictionary into that instance.
```
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
```
The model can now be used for inference.
```
classes = [
"T-shirt/top",
"Trouser",
"Pullover",
"Dress",
"Coat",
"Sandal",
"Shirt",
"Sneaker",
"Bag",
"Ankle boot",
]
model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
pred = model(x)
predicted, actual = classes[pred[0].argmax(0)], classes[y]
print(f'Predicted: "{predicted}", Actual: "{actual}"')
```
For more detail, see [PyTorch Introduction 7. Save & Load the Model](https://colab.research.google.com/github/YutaroOgawa/pytorch_tutorials_jp/blob/main/notebook/0_Learn%20the%20Basics/0_7_saveloadrun_tutorial_js.ipynb).
That concludes this tutorial.
# Midterm report: Heart disease
Shuangyi Tan s1889983
Shuangyi Tan's dataset concerns heart disease and comes from [Kaggle](https://www.kaggle.com/dileep070/heart-disease-prediction-using-logistic-regression/kernels).
The source dataset is publicly available on the Kaggle website and comes from an ongoing cardiovascular study of residents of Framingham, Massachusetts. The classification goal is to predict whether a patient has a 10-year risk of future coronary heart disease (CHD).[[1]](https://www.kaggle.com/dileep070/heart-disease-prediction-using-logistic-regression)
## Data processing and analysis
### 1. Import libraries and suppress warnings
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.tools import add_constant as add_constant
import seaborn as sns
%matplotlib inline
sns.set()
```
### 2. Read the data, drop some information, rename columns
```
heart_df=pd.read_csv("framingham_heart_disease.csv")
heart_df.head()
heart_df.describe()
```
From the output above, we can see that the original dataset has 14 variables, a binary (2-class) target, and 4238 samples.
From the original dataset description, we have:
**Demographic:**
Sex: male or female(Nominal)
Age: Age of the patient;(Continuous - Although the recorded ages have been truncated to whole numbers, the concept of age is continuous)
**Behavioral:**
Current Smoker: whether or not the patient is a current smoker (Nominal)
Cigs Per Day: the number of cigarettes that the person smoked on average in one day.(can be considered continuous as one can have any number of cigarettes, even half a cigarette.)
**Medical(history):**
BP Meds: whether or not the patient was on blood pressure medication (Nominal)
Prevalent Stroke: whether or not the patient had previously had a stroke (Nominal)
Prevalent Hyp: whether or not the patient was hypertensive (Nominal)
Diabetes: whether or not the patient had diabetes (Nominal)
Tot Chol: total cholesterol level (Continuous)
Sys BP: systolic blood pressure (Continuous)
Dia BP: diastolic blood pressure (Continuous)
BMI: Body Mass Index (Continuous)
Heart Rate: heart rate (Continuous - In medical research, variables such as heart rate though in fact discrete, yet are considered continuous because of large number of possible values.)
Glucose: glucose level (Continuous) Predict variable (desired target)
**Class:**
10 year risk of coronary heart disease CHD (binary: “1”, means “Yes”, “0” means “No”)
```
heart_df.drop(['education'],axis=1,inplace=True)
heart_df.rename(columns={'male':'Sex_male'},inplace=True)
```
From basic background knowledge, education has almost no relationship with heart disease, so we drop it here.
### 3. Handle the missing values
```
count=0
for i in heart_df.isnull().sum(axis=1):
if i>0:
count=count+1
print('Total number of rows with missing values is ', count)
print('since it is only',round((count/len(heart_df.index))*100), 'percent of the entire dataset the rows with missing values are excluded.')
heart_df.dropna(axis=0,inplace=True)
pd.set_option('display.max_columns', 1000)
pd.set_option('display.width', 1000)
heart_df.describe()
```
First, we check the proportion of missing values and find that 489 rows contain missing values, about 12 percent of the entire dataset.
We therefore perform a complete-case analysis, dropping all samples with missing values.
This leaves a dataset with all values observed, containing 3749 samples.
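The complete-case step can be illustrated on a toy frame (hypothetical data, not the Framingham set; the notebook applies the same `dropna` call to go from 4238 to 3749 rows):

```python
import numpy as np
import pandas as pd

# Three rows, two of which contain a missing value
toy = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                    'b': [4.0, 5.0, np.nan]})
complete = toy.dropna(axis=0)   # keep only fully observed rows
print(len(toy), '->', len(complete))  # 3 -> 1
```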
### 4. Add a constant
Here we add a constant column to each sample, which makes it more convenient to fit several of the models below.
```
heart_df_constant = add_constant(heart_df)
heart_df_constant.head()
```
### 5. Choose relative features we need to use
```
st.chisqprob = lambda chisq, df: st.chi2.sf(chisq, df)
cols = heart_df_constant.columns[:-1]
model = sm.Logit(heart_df.TenYearCHD,heart_df_constant[cols])
result = model.fit()
print(result.summary())
# Define the backward-elimination helper
def back_feature_elem(data_frame, dep_var, col_list):
    while len(col_list) > 0:
        model = sm.Logit(dep_var, data_frame[col_list])
        result = model.fit(disp=0)
        largest_pvalue = round(result.pvalues, 3).nlargest(1)
        if largest_pvalue[0] < 0.05:
            return result
        else:
            col_list = col_list.drop(largest_pvalue.index)
# Run backward elimination: repeatedly drop the feature with the largest p-value until all p-values are < 0.05
result=back_feature_elem(heart_df_constant,heart_df.TenYearCHD,cols)
#Interpreting the results: Odds Ratio, Confidence Intervals and Pvalues
params = np.exp(result.params)
conf = np.exp(result.conf_int())
conf['OR'] = params
pvalue=round(result.pvalues,3)
conf['pvalue']=pvalue
conf.columns = ['CI 95%(2.5%)', 'CI 95%(97.5%)', 'Odds Ratio','pvalue']
print (conf)
new_features=heart_df[['age','Sex_male','cigsPerDay','totChol','sysBP','glucose','TenYearCHD']]
```
Feature selection: backward elimination (p-value approach)
Here we use backward elimination to select the relevant features.
Statistically speaking, an attribute whose p-value is higher than the chosen alpha (5%) has a very weak statistically significant relationship with the probability of heart disease.
The backward elimination method deletes the attribute with the highest p-value, one at a time, then re-fits the model repeatedly until the p-values of all remaining attributes are below 0.05.
In the end we keep the six features most related to heart disease, thanks to their low p-values: 'age', 'Sex_male', 'cigsPerDay', 'totChol', 'sysBP' and 'glucose'. These six features plus the class variable form a new dataset named 'new_features'.
### 6. Plot basis features of new dataset
```
import seaborn as sns
plt.figure(figsize=(8,6))
sns.countplot(x='TenYearCHD',data=new_features)
plt.title('Number of people in each class')
plt.ylabel('Number of People')
plt.savefig('Class.png')
plt.show()
print()
count = new_features.TenYearCHD.value_counts()
print(count)
print()
```
First, as stated above, there are 2 classes: 0 indicates that the participant has no 10-year risk of future coronary heart disease (CHD), and 1 indicates otherwise. From the result above, we can see that the dataset is extremely imbalanced.
```
plt.figure(figsize=(10,6))
sns.heatmap(new_features.corr(),cmap='YlGn',annot=True)
plt.savefig('Corr.png')
plt.show()
```
From the heatmap above, we can see that the correlations between the features are small, which means the features chosen here describe the dataset with little redundancy.
```
# Plot out the distribution, which will take a long time
y_target = new_features['TenYearCHD']
new_features['target'] = new_features['TenYearCHD'].map({0:'NH',1:'H'})
g = sns.pairplot(new_features.drop('TenYearCHD', axis = 1), hue="target", palette='prism');
plt.savefig('Distribution.png')
plt.show()
```
From the pairplot above, we can see that the classes are thoroughly mixed in every dimension. In other words, this is a hard classification problem, and no classifier will easily reach high accuracy.
### 7. Split training data and testing data
```
new_features=heart_df[['age','Sex_male','cigsPerDay','totChol','sysBP','glucose','TenYearCHD']]
x = new_features.iloc[:,:-1]
y = new_features.iloc[:,-1]
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=.20,random_state=5)
```
This step generates two sets, one for training and one for testing, using a training-to-testing ratio of 4:1.
### 8. Training different classifier and test result analysis
First we consider the random-guess classifier, which will also serve as a baseline.
Its AUROC is 0.5, and its AUPRC equals the positive-class prevalence, $\frac{572}{3177+572} \approx 0.15257$.
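That baseline can be checked directly, since the expected AUPRC of a random classifier is just the fraction of positive samples; the counts 572 and 3177 are taken from the class-count output earlier in the report:

```python
# Class counts reported earlier in the notebook (positives / negatives)
pos, neg = 572, 3177
baseline_auprc = pos / (pos + neg)
print(round(baseline_auprc, 5))  # 0.15257
```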
```
# Create a list to show the Acc, AUROC and AUPRC for different classifiers
table = pd.DataFrame(index=["Logistic Regression","SVM","KNN","Naive Bayes","Desicion Tree","Random Forest"], columns=["AUROC", "AUPRC", "ACC", "log_loss", "F1"])
# Fit the model and predict y with it
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
log_y_score = logreg.fit(x_train, y_train).decision_function(x_test)
log_y_pred = logreg.predict(x_test)
from sklearn import svm
SvM = svm.SVC()
SVM_y_score = SvM.fit(x_train, y_train).decision_function(x_test)
SVM_y_pred = SvM.predict(x_test)
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=10)
KNN_y_score = neigh.fit(x_train, y_train).predict_proba(x_test)[:, 1]
KNN_y_pred = neigh.predict(x_test)
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
NB_y_score = gnb.fit(x_train, y_train).predict_proba(x_test)[:, 1]
NB_y_pred = gnb.predict(x_test)
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
DT_y_score = dt.fit(x_train, y_train).predict_proba(x_test)[:, 1]
DT_y_pred = dt.predict(x_test)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)
RF_y_score = rf.fit(x_train, y_train).predict_proba(x_test)[:, 1]
RF_y_pred = rf.predict(x_test)
# AUROC
from sklearn.metrics import roc_auc_score
log_AUROC = roc_auc_score(y_test,log_y_score)
table.values[0][0] = log_AUROC
SVM_AUROC = roc_auc_score(y_test,SVM_y_score)
table.values[1][0] = SVM_AUROC
KNN_AUROC = roc_auc_score(y_test,KNN_y_score)
table.values[2][0] = KNN_AUROC
NB_AUROC = roc_auc_score(y_test,NB_y_score)
table.values[3][0] = NB_AUROC
DT_AUROC = roc_auc_score(y_test,DT_y_score)
table.values[4][0] = DT_AUROC
RF_AUROC = roc_auc_score(y_test,RF_y_score)
table.values[5][0] = RF_AUROC
# AUPRC
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import auc
def AUPRC(y_test,y_pred_proba):
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
return auprc
log_AUPRC = AUPRC(y_test,log_y_score)
table.values[0][1] = log_AUPRC
SVM_AUPRC = AUPRC(y_test,SVM_y_score)
table.values[1][1] = SVM_AUPRC
KNN_AUPRC = AUPRC(y_test,KNN_y_score)
table.values[2][1] = KNN_AUPRC
NB_AUPRC = AUPRC(y_test,NB_y_score)
table.values[3][1] = NB_AUPRC
DT_AUPRC = AUPRC(y_test,DT_y_score)
table.values[4][1] = DT_AUPRC
RF_AUPRC = AUPRC(y_test,RF_y_score)
table.values[5][1] = RF_AUPRC
# Model accuracy
from sklearn.metrics import accuracy_score
log_Acc = accuracy_score(y_test, log_y_pred)
table.values[0][2] = log_Acc
SVM_Acc = accuracy_score(y_test, SVM_y_pred)
table.values[1][2] = SVM_Acc
KNN_Acc = accuracy_score(y_test, KNN_y_pred)
table.values[2][2] = KNN_Acc
NB_Acc = accuracy_score(y_test, NB_y_pred)
table.values[3][2] = NB_Acc
DT_Acc = accuracy_score(y_test, DT_y_pred)
table.values[4][2] = DT_Acc
RF_Acc = accuracy_score(y_test, RF_y_pred)
table.values[5][2] = RF_Acc
#neg_log loss
from sklearn.metrics import log_loss
log_neg_log = log_loss(y_test, log_y_pred)
table.values[0][3] = log_neg_log
SVM_neg_log = log_loss(y_test, SVM_y_pred)
table.values[1][3] = SVM_neg_log
KNN_neg_log = log_loss(y_test, KNN_y_pred)
table.values[2][3] = KNN_neg_log
NB_neg_log = log_loss(y_test, NB_y_pred)
table.values[3][3] = NB_neg_log
DT_neg_log = log_loss(y_test, DT_y_pred)
table.values[4][3] = DT_neg_log
RF_neg_log = log_loss(y_test, RF_y_pred)
table.values[5][3] = RF_neg_log
#F1
from sklearn.metrics import f1_score
log_f1 = f1_score(y_test, log_y_pred, average='weighted')
table.values[0][4] = log_f1
SVM_f1 = f1_score(y_test, SVM_y_pred, average='weighted')
table.values[1][4] = SVM_f1
KNN_f1 = f1_score(y_test, KNN_y_pred, average='weighted')
table.values[2][4] = KNN_f1
NB_f1 = f1_score(y_test, NB_y_pred, average='weighted')
table.values[3][4] = NB_f1
DT_f1 = f1_score(y_test, DT_y_pred, average='weighted')
table.values[4][4] = DT_f1
RF_f1 = f1_score(y_test, RF_y_pred, average='weighted')
table.values[5][4] = RF_f1
# Show the conclu table
print(table.head(7))
```
Here we choose six classifiers for the experiments: Logistic Regression, Support Vector Machine (SVM), KNN, Naive Bayes, Decision Tree and Random Forest. We use AUROC, AUPRC, accuracy, log loss and the F1 score as measurements.
The first is Logistic Regression, using the library's default parameters. Its ROC and PRC curves are shown below.
```
from sklearn.metrics import roc_curve
y_pred_proba = log_y_score ###############
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
#Area under ROC curve
from sklearn.metrics import roc_auc_score
auroc = roc_auc_score(y_test,y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='log') ##############
plt.xlabel('fpr')
plt.ylabel('tpr')
title_name = 'log ROC curve, AUROC ='+str(auroc) ############
plt.title(title_name)
plt.savefig('log ROC curve.png') ####################
plt.show()
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
plt.hlines(572/(3177+572), 0, 1, colors = "c", linestyles = "dashed")
plt.plot(recall,precision, label='log') ####################
plt.xlabel('recall')
plt.ylabel('precision')
title_name = 'log PRC curve, AUPRC ='+str(auprc) ###############
plt.title(title_name)
plt.savefig('log PRC curve.png') #######################
plt.show()
```
The second is the Support Vector Machine, using the library's default parameters. Its ROC and PRC curves are shown below.
```
from sklearn.metrics import roc_curve
y_pred_proba = SVM_y_score ###############
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
#Area under ROC curve
from sklearn.metrics import roc_auc_score
auroc = roc_auc_score(y_test,y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='SVM') ##############
plt.xlabel('fpr')
plt.ylabel('tpr')
title_name = 'SVM ROC curve, AUROC ='+str(auroc) ############
plt.title(title_name)
plt.savefig('SVM ROC curve.png') ####################
plt.show()
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
plt.hlines(572/(3177+572), 0, 1, colors = "c", linestyles = "dashed")
plt.plot(recall,precision, label='SVM') ####################
plt.xlabel('recall')
plt.ylabel('precision')
title_name = 'SVM PRC curve, AUPRC ='+str(auprc) ###############
plt.title(title_name)
plt.savefig('SVM PRC curve.png') #######################
plt.show()
```
The third is KNN (K = 10); the remaining parameters are the library defaults. Its ROC and PRC curves are shown below.
```
from sklearn.metrics import roc_curve
y_pred_proba = KNN_y_score ###############
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
#Area under ROC curve
from sklearn.metrics import roc_auc_score
auroc = roc_auc_score(y_test,y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='KNN') ##############
plt.xlabel('fpr')
plt.ylabel('tpr')
title_name = 'KNN ROC curve, AUROC ='+str(auroc) ############
plt.title(title_name)
plt.savefig('KNN ROC curve.png') ####################
plt.show()
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
plt.hlines(572/(3177+572), 0, 1, colors = "c", linestyles = "dashed")
plt.plot(recall,precision, label='KNN') ####################
plt.xlabel('recall')
plt.ylabel('precision')
title_name = 'KNN PRC curve, AUPRC ='+str(auprc) ###############
plt.title(title_name)
plt.savefig('KNN PRC curve.png') #######################
plt.show()
```
The fourth is Naive Bayes, using the library's default parameters. Its ROC and PRC curves are shown below.
```
from sklearn.metrics import roc_curve
y_pred_proba = NB_y_score ###############
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
#Area under ROC curve
from sklearn.metrics import roc_auc_score
auroc = roc_auc_score(y_test,y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='NB') ##############
plt.xlabel('fpr')
plt.ylabel('tpr')
title_name = 'NB ROC curve, AUROC ='+str(auroc) ############
plt.title(title_name)
plt.savefig('NB ROC curve.png') ####################
plt.show()
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
plt.hlines(572/(3177+572), 0, 1, colors = "c", linestyles = "dashed")
plt.plot(recall,precision, label='NB') ####################
plt.xlabel('recall')
plt.ylabel('precision')
title_name = 'NB PRC curve, AUPRC ='+str(auprc) ###############
plt.title(title_name)
plt.savefig('NB PRC curve.png') #######################
plt.show()
```
The fifth is Decision Tree, using the library's default parameters. Its ROC and PRC curves are shown below.
```
from sklearn.metrics import roc_curve
y_pred_proba = DT_y_score ###############
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
#Area under ROC curve
from sklearn.metrics import roc_auc_score
auroc = roc_auc_score(y_test,y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='DT') ##############
plt.xlabel('fpr')
plt.ylabel('tpr')
title_name = 'DT ROC curve, AUROC ='+str(auroc) ############
plt.title(title_name)
plt.savefig('DT ROC curve.png') ####################
plt.show()
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
plt.hlines(572/(3177+572), 0, 1, colors = "c", linestyles = "dashed")
plt.plot(recall,precision, label='DT') ####################
plt.xlabel('recall')
plt.ylabel('precision')
title_name = 'DT PRC curve, AUPRC ='+str(auprc) ###############
plt.title(title_name)
plt.savefig('DT PRC curve.png') #######################
plt.show()
```
The sixth is Random Forest, with n_estimators=100, max_depth=2 and random_state=0. Its ROC and PRC curves are shown below.
```
from sklearn.metrics import roc_curve
y_pred_proba = RF_y_score ###############
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
#Area under ROC curve
from sklearn.metrics import roc_auc_score
auroc = roc_auc_score(y_test,y_pred_proba)
plt.plot([0,1],[0,1],'k--')
plt.plot(fpr,tpr, label='RF') ##############
plt.xlabel('fpr')
plt.ylabel('tpr')
title_name = 'RF ROC curve, AUROC ='+str(auroc) ############
plt.title(title_name)
plt.savefig('RF ROC curve.png') ####################
plt.show()
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
auprc = auc(recall, precision)
plt.hlines(572/(3177+572), 0, 1, colors = "c", linestyles = "dashed")
plt.plot(recall,precision, label='RF') ####################
plt.xlabel('recall')
plt.ylabel('precision')
title_name = 'RF PRC curve, AUPRC ='+str(auprc) ###############
plt.title(title_name)
plt.savefig('RF PRC curve.png') #######################
plt.show()
```
**Overall analysis**:
From the table above, we can see that every classifier used here achieves higher accuracy than random guessing.
As for which classifier is better, AUROC votes for:
Naive Bayes $\approx$ Logistic Regression > Random Forest >> KNN > SVM > Decision Tree.
AUPRC votes for:
Logistic Regression > Naive Bayes > Random Forest >> Decision Tree > KNN > SVM.
Accuracy votes for:
Logistic Regression $\approx$ SVM = Random Forest > Naive Bayes > KNN > Decision Tree.
Log_loss votes for:
Logistic Regression > SVM = Random Forest > Naive Bayes > KNN >> Decision Tree.
F1 score votes for:
Naive Bayes > Logistic Regression > KNN > SVM > Random Forest = Decision Tree.
By majority vote, Logistic Regression is the best classifier and Naive Bayes the second best. Among these measurements, AUPRC shows a pattern most similar to the majority-vote result. The majority vote also shows that Decision Tree performs worst and KNN second worst; AUROC gives a similar result. One more point about accuracy and the F1 score: across the different classifiers they show no significant differences, which means they are not suitable for measuring classifier performance on this dataset.
From the graphs above, we can see that the PRC works well for "good" classifiers (ones that perform well), but for "bad" ones the curve becomes unstable. Meanwhile, the ROC looks similar for the "good" ones but clearly exposes the poor performance of the "bad" classifiers.
So for this dataset, AUPRC is a better measurement than AUROC for evaluating "good" classifiers, while AUROC works well for evaluating "bad" ones; both work much better than accuracy and the F1 score. As for log loss, its ranking differs considerably from the majority vote, especially for Naive Bayes and SVM. In conclusion, AUPRC and AUROC should be used together when choosing classifiers: AUPRC to evaluate how accurate a classifier is, and AUROC to measure how inaccurate one is.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import (roc_auc_score, average_precision_score, f1_score, log_loss,
                             recall_score, precision_recall_curve, auc,
                             roc_curve, accuracy_score)
%matplotlib inline
sns.set()
X_train = x_train
X_test = x_test
seed=7
models = [] # Here I will append all the algorithms that I will use. Each one will run in all the created datasets.
models.append(('LR', LogisticRegression()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('NB', GaussianNB()))
models.append(('RF', RandomForestClassifier()))
models.append(('DecisionTree', DecisionTreeClassifier(random_state=42)))
models.append(('SVM',SVC(random_state=42,probability=True)))
test_ratio = y_test.sum()/len(y_test)
# compare different classifiers
results_accuracy=[]
results_auroc=[]
results_average_precision=[]
results_neg_log_loss=[]
results_f1 = []
results_recall =[]
names=[]
fpr_full = []
tpr_full = []
thresholds_roc_full = []
precision_full = []
recall_full = []
thresholds_prc_full = []
measures = ['AUROC','AUPRC','accuracy','log loss','F1']
scores_table = np.zeros([8,5])
roc_cut = np.zeros([8,]).astype(int) # cut points for fpr, tpr, thresholds for ROC curve of each model
prc_cut = np.zeros([8,]).astype(int) # cut points for precision, recall, thresholds for PRC curve of each model
i = 0 # looping index
for name, model in models:
y_pred_proba = model.fit(X_train, y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
fpr_full = np.concatenate((fpr_full, fpr))
tpr_full = np.concatenate((tpr_full, tpr))
thresholds_roc_full = np.concatenate((thresholds_roc_full, thresholds))
roc_cut[i + 1] = roc_cut[i] + fpr.shape[0]
#Area under ROC curve
auroc = roc_auc_score(y_test,y_pred_proba)
precision, recall, thresholds = precision_recall_curve(y_test,y_pred_proba)
precision_full = np.concatenate((precision_full, precision))
recall_full = np.concatenate((recall_full, recall))
thresholds_prc_full = np.concatenate((thresholds_prc_full, thresholds))
prc_cut[i + 1] = prc_cut[i] + recall.shape[0]
# area under PRC curve
auprc = auc(recall, precision)
accuracy = accuracy_score(y_test, model.predict(X_test))
average_precision = average_precision_score(y_test, model.predict(X_test))
f1 = f1_score(y_test, model.predict(X_test))
log_loss_score = log_loss(y_test, model.predict(X_test))
recall = recall_score(y_test, model.predict(X_test))
names.append(name)
# report of scores
scores_table[i, 0] = auroc
scores_table[i, 1] = auprc
scores_table[i, 2] = accuracy
scores_table[i, 3] = log_loss_score
scores_table[i, 4] = f1
print(name,': AUROC = {:.3f}, AUPRC = {:.3f}, '.format(auroc,auprc),
'. \nAccuracy = {:.3f}, log loss = {:.3f},'.format(accuracy, log_loss_score))
print ("--"*30)
i = i + 1
scores_table[6, 0] = 0.5        # random-guess AUROC baseline (extra row, not a model)
scores_table[6, 1] = test_ratio # random-guess AUPRC baseline
#plot ROC
plt.rcParams["figure.figsize"] = (8,5)
plt.plot([0,1],[0,1],'k--')
for i in range(6):  # one curve per fitted model
    plt.plot(fpr_full[roc_cut[i]:roc_cut[i + 1]], tpr_full[roc_cut[i]:roc_cut[i + 1]])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(['random guess','Logistic Regression','KNN','Naive Bayes','Random Forest','Decision Tree','SVM'])
save_name = 'heart_disease_ROC_curve_new.png'
plt.savefig(save_name)
plt.show()
# plot PRC
plt.rcParams["figure.figsize"] = (8,5)
plt.axhline(y=test_ratio, xmin=0, xmax=1,color='k', linestyle = '--')
for i in range(6):  # one curve per fitted model
    plt.plot(recall_full[prc_cut[i]:prc_cut[i + 1]], precision_full[prc_cut[i]:prc_cut[i + 1]])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(['random guess','Logistic Regression','KNN','Naive Bayes','Random Forest','Decision Tree','SVM'])
save_name = 'heart_disease_PRcurve_new.png'
plt.savefig(save_name)
plt.show()
```
# Submitting calculations
#### Create a structure, kpoints, and input parameters and submit a Quantum ESPRESSO (PW) calculation
Time: 3 mins
<div class="alert alert-box alert-info">
This example expects that you have already imported the sample data provided with the demos (to have the SSSP pseudopotential library).
</div>
```
from aiida.backends.utils import load_dbenv, is_dbenv_loaded
if not is_dbenv_loaded():
load_dbenv()
from aiida.orm.utils import load_node, CalculationFactory
from aiida.work.run import run
from aiida.orm.code import Code
from aiida.orm.data.base import Bool, Str
from aiida.orm.data.parameter import ParameterData
from aiida.orm.data.array.kpoints import KpointsData
from aiida.orm.data.structure import StructureData
from aiida_quantumespresso.utils.pseudopotential import validate_and_prepare_pseudos_inputs
from aiida_quantumespresso.utils.resources import get_default_options
from ase.spacegroup import crystal
import nglview
#import warnings
#warnings.filterwarnings('ignore')
```
#### Create a diamond cubic crystal structure
```
# Define a structure, employing the Atomic Structure Environment library
alat = 3.568
ase_definition = crystal('C', [(0,0,0)], spacegroup=227, cellpar=[alat, alat, alat, 90, 90, 90], primitive_cell=True)*2
structure = StructureData(ase=ase_definition)
structure.store()
```
#### Show the already stored structure
```
view = nglview.show_ase(structure.get_ase())
view
```
#### Create the _k_-points mesh
```
# Define the k-points mesh
kpoints = KpointsData()
kpoints_mesh = [1, 1, 1]
kpoints.set_kpoints_mesh(kpoints_mesh)
kpoints.store()
```
#### List existing pseudopotential families
```
!verdi data upf listfamilies
```
#### Define the calculation input parameters
```
# Define the pseudo potential family and input parameters for pw.x
pseudo_family = 'SSSP'
parameters = {
'CONTROL': {
"calculation": "scf",
'tstress': True,
},
'SYSTEM': {
'ecutwfc': 40.,
'ecutrho': 320.,
},
'ELECTRONS': {
'conv_thr': 1.e-10,
}
}
```
#### Select the code to use
```
#### Select the code to use
from notebook_helpers import get_code_pwonly_dropdown
from IPython.display import display
code_group = get_code_pwonly_dropdown()
display(code_group)
# Dictionary, where values are the code labels of each type of code required
# Here we require only PW
code_names = code_group.children[1].value
if code_names:
code_name = code_names['pw']
    print("I will use the code '{}'".format(code_names['pw']))
else:
    print("No code found: the rest will crash. Select a code, or configure one first!")
code_name = None
code = Code.get_from_string(code_name)
```
#### Run a PwCalculation with the inputs we define
<div class="alert alert-box alert-info">
Remember at this stage to check if the daemon is started, otherwise the calculation will never run<br>
<br>
To check the daemon status, run in a terminal `verdi daemon status`<br>
To start the daemon, run in a terminal `verdi daemon start`
</div>
```
PwCalculation = CalculationFactory('quantumespresso.pw')
inputs = {
'code': code,
'structure': structure,
'kpoints': kpoints,
'parameters': ParameterData(dict=parameters),
'pseudo': validate_and_prepare_pseudos_inputs(structure, pseudo_family=Str(pseudo_family)),
'_options': get_default_options()
}
print('Running a {} pw.x calculation... '.format('scf'))
results, pk = run(PwCalculation.process(), _return_pid=True, **inputs)
calc = load_node(pk)
print('PwCalculation<{}> terminated with state: {}'.format(calc.pk, calc.get_state()))
print('\n{link:25s} {node}'.format(link='Output link', node='Node pk and type'))
print('{s}'.format(s='-'*60))
for link, node in sorted(calc.get_outputs(also_labels=True)):
print('{:25s} <{}> {}'.format(link, node.pk, node.__class__.__name__))
```
#### There are convenient methods to directly access the results
```
print("Total energy = {} {}".format(calc.res.energy, calc.res.energy_units))
```
```
import numpy as np
from scipy.integrate import solve_ivp
import autograd
from autograd.numpy import cos,sin
import math
import matplotlib.pyplot as plt
%matplotlib inline
from new_data_builder import get_system
def get_hamiltonian(sys):
if sys == 'pendulum':
def hamiltonian_fn(coords):
q, p = coords[:,0],coords[:,1]
H = 9 * (1 - cos(q)) + p ** 2 / 2
return H
elif sys == 'mass_spring':
def hamiltonian_fn(coords):
q, p = coords[:,0],coords[:,1]
H = q**2/2 + p ** 2/2
return H
elif sys == 'heinon':
def hamiltonian_fn(coords):
x, y, px, py = coords[:,0],coords[:,1],coords[:,2],coords[:,3]
lambda_ = 1
H = 0.5 * px ** 2 + 0.5 * py ** 2 + 0.5 * (x ** 2 + y ** 2) + lambda_ * (
(x ** 2) * y - (y ** 3) / 3)
return H
elif sys == 'nspring':
def hamiltonian_fn(vecs, m=[1]*5, k=[1]*5, num_particles=5):
energies = []
for vec in vecs:
num_particles = num_particles
x = vec[:num_particles * 2]
p = vec[2 * num_particles:]
xs = x.reshape(-1, 2)
ps = p.reshape(-1, 2)
U1 = 0
K = 0
for i in range(num_particles):
for j in range(i + 1, num_particles):
U1 += .5 * k[i] * k[j] * ((xs[i] - xs[j]) ** 2).sum()
K += 0.5 * ((ps[i] ** 2).sum()) / m[i]
energies.append(K+U1)
return np.array(energies)
elif sys == 'ngrav':
def hamiltonian_fn(states):
def potential_energy(state):
tot_energy = np.zeros((1, 1, state.shape[2]))
for i in range(state.shape[0]):
for j in range(i + 1, state.shape[0]):
r_ij = ((state[i:i + 1, 1:3] - state[j:j + 1, 1:3]) ** 2).sum(1, keepdims=True) ** .5
m_i = state[i:i + 1, 0:1]
m_j = state[j:j + 1, 0:1]
tot_energy += m_i * m_j / r_ij
U = -tot_energy.sum(0).squeeze()
return U
def kinetic_energy(state):
energies = .5 * state[:, 0:1] * (state[:, 3:5] ** 2).sum(1, keepdims=True)
T = energies.sum(0).squeeze()
return T
energies = []
for state in states:
qs=state[1:5].reshape(-1,2)
ps=state[6:].reshape(-1,2)
ms = np.array([1,1]).reshape(2,1)
newstate = np.concatenate([ms,qs,ps],1).reshape(2,5,1)
energies.append( potential_energy(newstate) + kinetic_energy(newstate))
return np.array(energies)
elif sys == 'three_body':
def hamiltonian_fn(states):
def potential_energy(state):
'''U=\sum_i,j>i G m_i m_j / r_ij'''
tot_energy = np.zeros((1, 1, state.shape[2]))
for i in range(state.shape[0]):
for j in range(i + 1, state.shape[0]):
r_ij = ((state[i:i + 1, 1:3] - state[j:j + 1, 1:3]) ** 2).sum(1, keepdims=True) ** .5
m_i = state[i:i + 1, 0:1]
m_j = state[j:j + 1, 0:1]
tot_energy += m_i * m_j / r_ij
U = -tot_energy.sum(0).squeeze()
return U
def kinetic_energy(state):
'''T=\sum_i .5*m*v^2'''
energies = .5 * state[:, 0:1] * (state[:, 3:5] ** 2).sum(1, keepdims=True)
T = energies.sum(0).squeeze()
return T
def total_energy(state):
return potential_energy(state) + kinetic_energy(state)
energies = []
for state in states:
qs=state[:int(3*2)].reshape(-1,2)
ps=state[int(3*2):].reshape(-1,2)
ms = np.array([1,1,1]).reshape(3,1)
newstate = np.concatenate([ms,qs,ps],1).reshape(3,5,1)
energies.append( potential_energy(newstate) + kinetic_energy(newstate))
return np.array(energies)
return hamiltonian_fn
NSAMPLES = 25
SYSTEMS = ['mass_spring','pendulum','ngrav','nspring']
nparts = [1,1,2,5,3,1]
INTEGRATORS = ['rk3','rk4','vi1','vi2','vi3','vi4']
Tmax = 20
dtvals = [0.01,0.05,0.1,0.2,0.3,0.4,0.5]
seed = 20
TABLE_VALUES = np.zeros((NSAMPLES,len(dtvals),len(SYSTEMS),len(INTEGRATORS),2))
for seed_val in range(NSAMPLES):
for dt_index,dt in enumerate(dtvals):
for sys_index,system in enumerate(SYSTEMS):
gt = get_system(system,'gt',1,nparts[sys_index], Tmax, dt,dt,seed=seed_val)
ham_fn = get_hamiltonian(system)
for methdex,method in enumerate(INTEGRATORS):
res = get_system(system,method,1,nparts[sys_index], Tmax, dt,dt,seed=seed_val)
res = np.array(res)
state_error= ((gt - res)**2).mean()
energy_error= ((ham_fn(gt) - ham_fn(res))**2).mean()
# energy = hamiltonian_eval(res)
# energy_error = ((gt_energy-energy)**2)
TABLE_VALUES[seed_val,dt_index,sys_index,methdex,0] = state_error
TABLE_VALUES[seed_val,dt_index,sys_index,methdex,1] = energy_error
import seaborn as sns
sns.axes_style(style='ticks')
sns.set_context("paper",font_scale=2, rc={'figure.figsize':(5,5),"font.size":20,"axes.titlesize":20,"axes.labelsize":20,'lines.linewidth':3})
labels = ['MASS SPRING','PENDULUM','2-BODY GRAVITATIONAL','5 SPRING PARTICLE','3-BODY GRAVITATIONAL','HENON-HEILES']
for sysdex,sys in enumerate(SYSTEMS):
fig,ax = plt.subplots(1,2,figsize=(12,7))
axs = ax.ravel()
fig.suptitle(labels[sysdex])
for integ_index,integ in enumerate(INTEGRATORS):
axs[0].plot(dtvals,TABLE_VALUES[:,:,sysdex,integ_index,0].mean(0),label=f'{integ}')
axs[0+1].plot(dtvals,TABLE_VALUES[:,:,sysdex,integ_index,1].mean(0),label=f'{integ}')
axs[0].set_title('State Error')
axs[0+1].set_title('Energy Error')
axs[0].set_ylabel('Average MSE')
axs[0].set_xlabel(r'$\Delta t$')
axs[1].set_ylabel('Average MSE')
axs[1].set_xlabel(r'$\Delta t$')
axs[0].set_yscale('log')
axs[1].set_yscale('log')
axs[0].set_xscale('log')
axs[1].set_xscale('log')
axs[0].legend()
axs[1].legend()
# plt.tight_layout()
plt.savefig(f'{sys}_{Tmax}.pdf',bbox_inches='tight')
NSAMPLES = 25
SYSTEMS = ['three_body','heinon']
nparts = [3,1]
INTEGRATORS = ['rk3','rk4','vi1','vi2','vi3','vi4']
Tmax = 20
dtvals = [0.01,0.05,0.1,0.5]
seed = 20
TABLE_VALUES = np.zeros((NSAMPLES,len(dtvals),len(SYSTEMS),len(INTEGRATORS),2))
for seed_val in range(NSAMPLES):
for dt_index,dt in enumerate(dtvals):
for sys_index,system in enumerate(SYSTEMS):
gt = get_system(system,'gt',1,nparts[sys_index], Tmax, dt,dt,seed=seed_val)
ham_fn = get_hamiltonian(system)
for methdex,method in enumerate(INTEGRATORS):
res = get_system(system,method,1,nparts[sys_index], Tmax, dt,dt,seed=seed_val)
res = np.array(res)
state_error= ((gt - res)**2).mean()
energy_error= ((ham_fn(gt) - ham_fn(res))**2).mean()
# energy = hamiltonian_eval(res)
# energy_error = ((gt_energy-energy)**2)
TABLE_VALUES[seed_val,dt_index,sys_index,methdex,0] = state_error
TABLE_VALUES[seed_val,dt_index,sys_index,methdex,1] = energy_error
import seaborn as sns
sns.axes_style(style='ticks')
sns.set_context("paper",font_scale=2, rc={'figure.figsize':(5,5),"font.size":20,"axes.titlesize":20,"axes.labelsize":20,'lines.linewidth':3})
for sysdex,sys in enumerate(SYSTEMS):
fig,axs = plt.subplots(1,2,figsize=(20,10))
axs[0].set_yscale('log')
axs[1].set_yscale('log')
axs[0].set_xscale('log')
axs[1].set_xscale('log')
fig.suptitle(sys)
for integ_index,integ in enumerate(INTEGRATORS):
axs[0].plot(dtvals,TABLE_VALUES[:,:,sysdex,integ_index,0].mean(0),label=f'{integ}')
axs[1].plot(dtvals,TABLE_VALUES[:,:,sysdex,integ_index,1].mean(0),label=f'{integ}')
axs[0].set_title('State Error')
axs[1].set_title('Energy Error')
axs[0].set_ylabel('Average MSE')
axs[0].set_xlabel(r'$\Delta t$')
axs[1].set_ylabel('Average MSE')
axs[1].set_xlabel(r'$\Delta t$')
axs[0].legend()
axs[1].legend()
plt.savefig(f'{sys}_{Tmax}.pdf',bbox_inches='tight')
# fig,axs = plt.subplots(4,5,figsize=(20,20))
# ax = axs.ravel()
# methods_list = ['rk1','rk2','rk3','rk4','gt','vi1','vi2','vi3','vi4','gt']
# for methdex,method in enumerate(methods_list):
# res = pendulum(method,1, Tmax, dt,dt,seed=seed)
# if method !='gt':
# res = np.array(res)
# ax[methdex].scatter(res[:,0],res[:,1],label=method)
# ax[methdex].set_title(method)
# energy = hamiltonian_eval(res)
# ax[methdex+10].scatter(np.arange(len(energy)),energy,label=method)
# ax[methdex+10].set_title(f'{method}_energy')
# plt.legend()
# fig,axs = plt.subplots(4,5,figsize=(20,20))
# ax = axs.ravel()
# methods_list = ['rk1','rk2','rk3','rk4','gt','vi1','vi2','vi3','vi4','gt']
# gt = pendulum('gt',1, Tmax, dt,dt,seed=seed)
# gt_energy = hamiltonian_eval(gt)
# for methdex,method in enumerate(methods_list):
# res = pendulum(method,1, Tmax, dt,dt,seed=seed)
# if method !='gt':
# res = np.array(res)
# state_error= ((gt - res)**2).mean(1)
# ax[methdex].scatter(np.arange(len(state_error)),state_error,label=method)
# energy = hamiltonian_eval(res)
# energy_error = ((gt_energy-energy)**2)
# ax[methdex+10].scatter(np.arange(len(energy_error)),energy_error,label=method)
# plt.legend()
##### ENERGY #####
def potential_energy(state):
'''U=\sum_i,j>i G m_i m_j / r_ij'''
tot_energy = np.zeros((1,1,state.shape[2]))
for i in range(state.shape[0]):
for j in range(i+1,state.shape[0]):
r_ij = ((state[i:i+1,1:3] - state[j:j+1,1:3])**2).sum(1, keepdims=True)**.5
m_i = state[i:i+1,0:1]
m_j = state[j:j+1,0:1]
tot_energy += m_i * m_j / r_ij
U = -tot_energy.sum(0).squeeze()
return U
def kinetic_energy(state):
'''T=\sum_i .5*m*v^2'''
energies = .5 * state[:,0:1] * (state[:,3:5]**2).sum(1, keepdims=True)
T = energies.sum(0).squeeze()
return T
def total_energy(state):
return potential_energy(state) + kinetic_energy(state)
def update(t, state):
qs = state[3:int(3+2*3)].reshape(-1, 2)
ps = state[int(3+2*3):].reshape(-1, 2)
ms = np.array([1, 1,1]).reshape(3, 1)
state = np.concatenate([ms, qs, ps], 1)
deriv = np.zeros_like(state)
deriv[:, 1:3] = state[:, 3:5] # dx, dy = vx, vy
deriv[:, 3:5] = get_accelerations(state)
qd = deriv[:, 1:3].ravel()
pd = deriv[:, 3:5].ravel()
return np.hstack([0,0,0, qd, pd])
##### DYNAMICS #####
def get_accelerations(state, epsilon=0):
# shape of state is [bodies x properties]
net_accs = [] # [nbodies x 2]
for i in range(state.shape[0]): # number of bodies
other_bodies = np.concatenate([state[:i, :], state[i+1:, :]], axis=0)
displacements = other_bodies[:, 1:3] - state[i, 1:3] # indexes 1:3 -> pxs, pys
distances = (displacements**2).sum(1, keepdims=True)**0.5
masses = other_bodies[:, 0:1] # index 0 -> mass
pointwise_accs = masses * displacements / (distances**3 + epsilon) # G=1
net_acc = pointwise_accs.sum(0, keepdims=True)
net_accs.append(net_acc)
net_accs = np.concatenate(net_accs, axis=0)
return net_accs
##### INITIALIZE THE TWO BODIES #####
def rotate2d(p, theta):
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],[s, c]])
return (R @ p.reshape(2,1)).squeeze()
def random_config(nu=2e-1, min_radius=0.9, max_radius=1.2):
'''This is not principled at all yet'''
state = np.zeros((3,5))
state[:,0] = 1
p1 = 2*np.random.rand(2) - 1
r = np.random.rand() * (max_radius-min_radius) + min_radius
p1 *= r/np.sqrt( np.sum((p1**2)) )
p2 = rotate2d(p1, theta=2*np.pi/3)
p3 = rotate2d(p2, theta=2*np.pi/3)
# # velocity that yields a circular orbit
v1 = rotate2d(p1, theta=np.pi/2)
v1 = v1 / r**1.5
v1 = v1 * np.sqrt(np.sin(np.pi/3)/(2*np.cos(np.pi/6)**2)) # scale factor to get circular trajectories
v2 = rotate2d(v1, theta=2*np.pi/3)
v3 = rotate2d(v2, theta=2*np.pi/3)
# make the circular orbits slightly chaotic
v1 *= 1 + nu*(2*np.random.rand(2) - 1)
v2 *= 1 + nu*(2*np.random.rand(2) - 1)
v3 *= 1 + nu*(2*np.random.rand(2) - 1)
state[0,1:3], state[0,3:5] = p1, v1
state[1,1:3], state[1,3:5] = p2, v2
state[2,1:3], state[2,3:5] = p3, v3
return state
##### INTEGRATE AN ORBIT OR TWO #####
def sample_orbits(timesteps=20, trials=5000, nbodies=3, orbit_noise=2e-1,
min_radius=0.9, max_radius=1.2, t_span=[0, 5], verbose=False, **kwargs):
orbit_settings = locals()
if verbose:
print("Making a dataset of near-circular 3-body orbits:")
state = random_config(nu=orbit_noise, min_radius=min_radius, max_radius=max_radius)
orbit, settings = get_orbit(state, t_points=timesteps, t_span=t_span, nbodies=nbodies, **kwargs)
# print(orbit.shape)
# batch = orbit.transpose(2,0,1).reshape(-1,nbodies*5)
# for state in batch:
# dstate = update(None, state)
# # reshape from [nbodies, state] where state=[m, qx, qy, px, py]
# # to [canonical_coords] = [qx1, qx2, qy1, qy2, px1,px2,....]
# coords = state.reshape(nbodies,5).T[1:].flatten()
# dcoords = dstate.reshape(nbodies,5).T[1:].flatten()
# x.append(coords)
# dx.append(dcoords)
# shaped_state = state.copy().reshape(nbodies,5,1)
# e.append(total_energy(shaped_state))
# data = {'coords': np.stack(x)[:N],
# 'dcoords': np.stack(dx)[:N],
# 'energy': np.stack(e)[:N] }
return orbit
def get_orbit(state, update_fn=update, t_points=100, t_span=[0, 2],integrator_type='gt', **kwargs):
if not 'rtol' in kwargs.keys():
kwargs['rtol'] = 1e-12
# kwargs['atol'] = 1e-12
# kwargs['atol'] = 1e-9
orbit_settings = locals()
nbodies = state.shape[0]
t_eval = np.arange(t_span[0], t_span[1], dt)
if len(t_eval) != t_points:
t_eval = t_eval[:-1]
orbit_settings['t_eval'] = t_eval
# print(state)
qs = state[:, 1:3].ravel()
ps = state[:, 3:5].ravel()
if integrator_type == 'gt':
path = solve_ivp(fun=update_fn, t_span=t_span, y0=np.hstack([1,1,1,qs, ps]),
t_eval=t_eval, method='DOP853', **kwargs)
orbit = path['y'].T
# elif 'vi' in integrator_type:
# orbit = rk(dynamics_fn, np.arange(0, T_max, dt), state.flatten(), dt)
else:
orbit = rk(dynamics_fn, np.arange(0, T_max, dt), np.hstack([1,1,1, qs, ps]), dt)
return orbit, orbit_settings
def dynamics_fn(state):
return update(0, state)
def rk(dx_dt_fn, t, y0, dt):
single_step = choose_integrator_nongraph(integrator_type)
store = []
store.append(y0)
for i in range(len(t)):
# print(type(y0))
ynext = single_step(dx_dt_fn, y0, dt)
store.append(ynext)
y0 = ynext
return store[:-1]
##### LOAD OR SAVE THE DATASET #####
def get_dataset(experiment_name, save_dir, **kwargs):
'''Returns an orbital dataset. Also constructs
the dataset if no saved version is available.'''
return sample_orbits(timesteps=int(np.ceil(Tmax / dt)), trials=1, nbodies=2, orbit_noise=5e-2,
min_radius=0.9, max_radius=1.2, t_span=[0, Tmax], verbose=False)
Tmax = 20
dt =0.01
res = get_dataset('hh','jj')
plt.figure()
plt.scatter(res[:,3],res[:,4])
plt.scatter(res[:,5],res[:,6])
plt.scatter(res[:,7],res[:,8])
```
```
# helper functions
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
import matplotlib.pyplot as plt
import numpy as np
```
### Linear SVM Classification
You can think of an SVM classifier as fitting the widest possible street (represented by the parallel dashed lines) between the classes. This is called **large margin classification**.
Adding more training instances "off the street" will not affect the decision boundary at all: it is fully determined (or "supported") by the instances located on the edge of the street. These instances are called the *support vectors*.
Note that SVMs are sensitive to feature scales.
### Soft Margin Classification
If we strictly impose that all instances must be off the street and on the correct side, this is called **hard margin classification**. There are two main issues with it: first, it only works if the data is linearly separable; second, it is quite sensitive to outliers.
The objective is to find a good balance between keeping the street as wide as possible and limiting the *margin violations* (i.e., instances that end up in the middle of the street or even on the wrong side). This is called **soft margin classification**.
In Scikit-Learn’s SVM classes, you can control this balance using the C hyperparameter: a smaller C value leads to a wider street but more margin violations.
Using a high C value the classifier makes fewer margin violations but ends up with a smaller margin.
```
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)]
y = (iris["target"] == 2).astype(np.float64)
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge")),
])
svm_clf.fit(X, y)
svm_clf.predict([[5.5, 1.7]])
```
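The pipeline above fixes `C=1`. As a quick, hedged illustration of the C trade-off described earlier (the toy data and variable names here are our own, not from the book), the street width is $2/\lVert \mathbf{w} \rVert$, so it can be read off the fitted coefficients:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Two well-separated blobs, labels 0/1 (illustrative data of our own).
rng = np.random.RandomState(0)
X_toy = np.r_[rng.randn(20, 2) - 2, rng.randn(20, 2) + 2]
y_toy = np.r_[np.zeros(20), np.ones(20)]

margins = {}
for C in (0.01, 100):
    clf = LinearSVC(C=C, loss="hinge", max_iter=100_000).fit(X_toy, y_toy)
    margins[C] = 2 / np.linalg.norm(clf.coef_)  # street width = 2 / ||w||

print(margins)
```

A smaller C regularizes harder, so the street comes out wider (at the cost of more margin violations).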
### Nonlinear SVM Classification
Although linear SVM classifiers are efficient and work surprisingly well in many cases, many datasets are not even close to being linearly separable. One approach to handling nonlinear datasets is to add more features, such as polynomial features (as you did in Chapter 4); in some cases this can result in a linearly separable dataset.
```
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", max_iter=5000, random_state=42))
])
polynomial_svm_clf.fit(X, y)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
```
### Polynomial kernel
Adding polynomial features is simple to implement and can work great with all sorts of Machine Learning algorithms (not just SVMs), but at a low polynomial degree it cannot deal with very complex datasets, and with a high polynomial degree it creates a huge number of features, making the model too slow.
Fortunately, when using SVMs you can apply an almost miraculous mathematical technique called the kernel trick.
It makes it possible to get the same result as if you had added many polynomial features, even with very high-degree polynomials, without actually having to add them. So there is no combinatorial explosion of the number of features, since you don't actually add any. This trick is implemented by the SVC class.
```
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
```
The hyperparameter **coef0** controls how much the model is influenced by high-degree polynomials versus low-degree polynomials.
```
poly_kernel_svm_clf.fit(X, y)
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
fig, axes = plt.subplots(ncols=2, figsize=(11, 4), sharey=True)
plt.sca(axes[0])
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.45, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.4, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.sca(axes[1])
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.45, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.4, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
plt.ylabel("")
```
### Adding Similarity Features
Another technique to tackle nonlinear problems is to add features computed using a ***similarity function*** that measures how much each instance resembles a particular ***landmark***
let’s define the similarity function to be the ***Gaussian Radial Basis Function (RBF)***
$\phi_{\gamma}(\mathbf{x}, \ell) = \exp\left(-\gamma \left\lVert \mathbf{x} - \ell \right\rVert^2\right)$
It is a bell-shaped function varying from 0 (very far away from the landmark) to 1 (at
the landmark).
```
def gaussian_rbf(x, landmark, gamma):
return np.exp(-gamma * np.linalg.norm(x - landmark)**2)
print(gaussian_rbf(-1, -2, 0.3))
print(gaussian_rbf(-1, 1, 0.3))
```
Selecting the landmarks: The simplest approach is to create a
landmark at the location of each and every instance in the dataset. This creates many dimensions and thus increases the chances that the transformed training set will be linearly separable. The downside is that a training set with $m$ instances and $n$ features gets transformed into a training set with $m$ instances and $m$ features (assuming you drop the original features). If your training set is very large, you end up with an equally large number of features.
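As a hedged sketch of that transformation (the helper name is our own, not a Scikit-Learn API): taking every instance as a landmark maps an $m \times n$ training set to an $m \times m$ similarity matrix, with ones on the diagonal since each instance is at distance zero from itself.

```python
import numpy as np

def rbf_features(X, landmarks, gamma=0.3):
    # feature[i, j] = exp(-gamma * ||X[i] - landmarks[j]||^2)
    sq_dists = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

X = np.array([[-4.0], [-1.0], [1.0], [3.0]])  # m=4 instances, n=1 feature
X_sim = rbf_features(X, landmarks=X)          # every instance is a landmark
print(X_sim.shape)  # (4, 4): now m features per instance
```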
### Gaussian RBF Kernel
Just like the polynomial features method, the similarity features method can be useful with any Machine Learning algorithm, but it may be computationally expensive to compute all the additional features, especially on large training sets. However, once again the kernel trick does its SVM magic: it makes it possible to obtain a similar result as if you had added many similarity features, without actually having to add them.
```
rbf_kernel_svm_clf = Pipeline([
("scalar", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10.5, 7), sharex=True, sharey=True)
for i, svm_clf in enumerate(svm_clfs):
plt.sca(axes[i // 2, i % 2])
plot_predictions(svm_clf, [-1.5, 2.45, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.45, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
if i in (0, 1):
plt.xlabel("")
if i in (1, 3):
plt.ylabel("")
```
### Regression
The SVM algorithm is quite versatile: not only does it support linear and nonlinear classification, but it also supports linear and nonlinear regression. The trick is to reverse the objective: instead of trying to fit the largest possible street between two classes while limiting margin violations, SVM Regression tries to fit as many instances as possible *on* the street while limiting margin violations (i.e., instances *off* the street). The width of the street is controlled by the hyperparameter $\epsilon$.
```
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
from sklearn.svm import LinearSVR
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)
def find_support_vectors(svm_reg, X, y):
y_pred = svm_reg.predict(X)
off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
return np.argwhere(off_margin)
svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)
eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True)
plt.sca(axes[0])
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.annotate(
'', xy=(eps_x1, eps_y_pred), xycoords='data',
xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5}
)
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)
plt.sca(axes[1])
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
plt.show()
```
Adding more training instances within the margin does not affect the model's predictions; thus, the model is said to be *$\epsilon$-insensitive*.
```
# Non-linear regression tasks
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
from sklearn.svm import SVR
svm_poly_reg = SVR(kernel="poly", degree=2, C=100,
epsilon=0.1, gamma="scale")
svm_poly_reg.fit(X, y)
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="scale")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="scale")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True)
plt.sca(axes[0])
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.sca(axes[1])
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
plt.show()
```
# Data Analysis in Python
### Seminar 5. Sets and Dictionaries
# What is a hash table?
You are a sales clerk in a shop. When a customer buys something from you, you look up the item's price in a book.
```
book = [('яйца', 60), ('чай', 16), ('кофе', 35), ('лён', 20),
('петрушка', 15), ('торт', 10), ('арбуз', 60), ('йогурт', 35),
('соя', 20), ('ролтон', 42), ('бобы', 10), ('глаз дракона', 2)]
```
How do we find out how much the beans ('бобы') cost? Flip through the book, reading every line until we find the answer.
```
x = 'бобы'
for item in book:
if item[0] == x:
print(item[1])
```
__Question:__ If we have $n$ products in total, how many operations will we need in the worst case?
Rather slow. Let's try to speed things up. One idea is sorting: if we sort all the products by name, searching becomes easier.
```
book = sorted(book, key=lambda w: w[0])
book
```
Open the book in the middle: there is the letter "п". We need the letter "б", which comes before "п", so we open the middle of the left half of the book; there is the letter "й", we need to go further left, so we again take the middle. We keep doing this until we find the beans. This procedure runs faster; it is called __binary search__.
```
# At your leisure, try writing such a search yourself :)
# ┬─┬ ノ( ゜-゜ノ) Write the code myself?
# (╯° □°)╯︵ ┻━┻
```
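For reference, the binary search just described can be sketched like this over the sorted book (a sketch — do try writing your own version first):

```python
book = sorted([('яйца', 60), ('чай', 16), ('кофе', 35), ('лён', 20),
               ('петрушка', 15), ('торт', 10), ('арбуз', 60), ('йогурт', 35),
               ('соя', 20), ('ролтон', 42), ('бобы', 10), ('глаз дракона', 2)])

def binary_search(book, name):
    lo, hi = 0, len(book) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if book[mid][0] == name:
            return book[mid][1]  # found: return the price
        elif book[mid][0] < name:
            lo = mid + 1         # the name is in the right half
        else:
            hi = mid - 1         # the name is in the left half
    return None                  # not in the book

print(binary_search(book, 'бобы'))  # 10
```

Each step halves the remaining part of the book, which is exactly why this beats the line-by-line scan.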
__Question:__ If we have $n$ products in total, how many operations will we need in the worst case?
Still too slow. Can we go even faster? Of course we can. Let's hire an assistant named Alice and have her memorize the entire book of products and prices. Then we can ask her a question and instantly get the answer. A miracle of an assistant! Where would one find such a person...
Let's try to build her out of the data structures we already know, namely arrays. For this we will use a __hash function__ — a function that takes a string as input and returns a number. It will help us create our own Alice.
```
x = [0]*33 # create an empty array (33 letters in the Russian alphabet)
x[:10]
```
Let our hash function return the index of the word's first letter in the alphabet.
```
def simple_hash(x):
alph = 'абвгдеёжзийклмнопрстуфхцчшщъыьэюя'
return alph.index(x[0])
simple_hash('яйца')
```
Let's put the price of eggs ('яйца') at position $32$ of the array $x$, and do the same for every product and its price.
```
for food, price in book:
x[simple_hash(food)] = price
x
```
The hash function in our example maps each name to a single index; the required price is stored at that index in the array $x$. __Congratulations, we have created our own Alice!__
Now a customer comes in and asks: "How much is the cake?" We can easily answer:
```
ind = simple_hash('торт')
x[ind]
```
And we do this instantly, with no scanning as in the first two approaches.
__Questions:__
- Real hash functions are, of course, more complex; the one we used here has a whole pile of problems. What are these problems?
- How could we try to solve them?
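One of those problems is collisions: for example, 'бобы' and 'банан' both start with 'б', so they would land in the same slot and overwrite each other. A standard fix is chaining — each slot holds a list of key-value pairs. A sketch (`put` and `get` are illustrative helper names, not standard functions):

```python
alph = 'абвгдеёжзийклмнопрстуфхцчшщъыьэюя'

def simple_hash(word):
    return alph.index(word[0])

table = [[] for _ in range(33)]  # each slot is a list of (key, value) pairs

def put(table, key, value):
    slot = table[simple_hash(key)]
    for i, (k, _) in enumerate(slot):
        if k == key:              # the key is already here: overwrite its value
            slot[i] = (key, value)
            return
    slot.append((key, value))     # empty slot or a collision: append to the chain

def get(table, key):
    for k, v in table[simple_hash(key)]:
        if k == key:
            return v
    return None

put(table, 'бобы', 10)
put(table, 'банан', 20)  # same first letter -> same slot, but nothing is overwritten
print(get(table, 'бобы'), get(table, 'банан'))  # 10 20
```

With a good hash function the chains stay short, so lookups remain close to constant time.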
In Python, hash tables are implemented as dictionaries and sets. Let's get acquainted with them.
# Sets (set)
We already know lists and tuples — ordered structures that can store objects of any type and whose elements we can access by index. Now let's talk about unordered structures: sets and dictionaries.
A set stores some number of objects, but, unlike a list, each object can occur in a set at most once. Moreover, the order of a set's elements is arbitrary and cannot be controlled.
The type is called set, and this name is also the type's constructor: you can pass an arbitrary sequence to the set function, and a set will be built from that sequence:
```
print(set([10, 20, 30])) # pass a list
print(set((4, 5, 6)))    # pass a tuple
print(set(range(10)))    # the sequence from 0 to 9 (10 not included)
print(set())             # an empty set
```
Another way to create a set is to list its elements in curly braces (a list uses square brackets, a tuple round ones, and a set curly ones):
```
primes = {2, 3, 5, 7}
animals = {"cat", "dog", "horse", 'cat'}
print(primes)
print(animals)
```
By the way, note that a set can contain only unique objects: above, the animals set includes just one cat even though we passed 'cat' to the constructor twice. Converting a list to a set is the easiest way to find the number of unique objects.
Almost everything that works with sequences also works with sets (but indexing does not, because the elements are not stored in order).
```
print(len(primes))  # length
print(11 in primes) # membership test; `in` is fast for sets
print("cow" in animals)
```
All possible set operations: https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset
Separately, let's look at the operations on sets proper. If you know Euler (Venn) diagrams, you will recall how sets relate: intersection, the objects that belong to set a but not to b, and so on. Let's see how these operations are implemented in Python.
```
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
c = {2, 3}
print(c <= a) # subset test (c is a subset of a)
print(c <= b) # not a subset, since b has no 2
print(a >= c) # a is a superset of c
print(a | b)  # union
print(a & b)  # intersection
print(a - b)  # set difference (everything in a that is not in b)
print(a ^ b)  # symmetric difference (the union minus the intersection)
c = a.copy()  # copy the set; set(a) also works
print(c)
```
The previous operations did not modify the sets; they created new ones. Here is how to modify a set in place:
```
s = {1, 2, 3}
s.add(10)   # add an element
print(s)    # note that the order of the elements is unpredictable
s.remove(1) # remove an element
s.discard(1) # the same, but no error if the element happens to be missing
print(s)
x = s.pop() # removes and returns an arbitrary element of the set (we can store it in a variable)
print(s)
print(x)
s.clear()   # empty the set
print(s)
```
Just as we shortened arithmetic operations earlier (e.g. +=), set operations can be shortened too.
```
s |= {10, 20} # s = s | {10, 20}: union of s with {10, 20}
print(s)
# likewise s ^=, s &=, etc.
```
### Task 1 (actually very simple)
Given a list that may contain up to 100000 numbers, determine how many distinct numbers it contains.
#### Example 1
##### Input
1 2 3 2 1
##### Output
3
#### Example 2
##### Input
1 2 3 4 5 1 2 1 2 7 3
##### Output
6
```
# your solution here
```
### Task 2 (electives)
A group of 3 students submits requests for desired electives from the list: английский (English), немецкий (German), право (law), математика (mathematics), сольфеджио (solfeggio). An elective opens only if every student signs up for it. Each student may choose at least one and at most three electives. Count the number of electives that will open.
*Example*
**Input:**
английский сольфеджио право
математика сольфеджио
немецкий право
**Output:**
0
**Input:**
математика немецкий право
математика немецкий
немецкий право математика
**Output:**
2
```
# your solution here
```
Now let's modify the task: only two students choose electives, and an elective opens if at least one student picked it. But then it turns out that one of the electives on the list was cancelled (the third input line below). Print, separated by commas, the electives that will open.
*Example*
**Input:**
математика немецкий право
математика английский
право
**Output:**
математика, немецкий, английский
```
# your solution here
```
## Task 3: [jewels and stones](https://leetcode.com/problems/jewels-and-stones/)
Deep in the ocean, Dory keeps a pile of stones, and some of them are jewels. Recently she counted all the jewels and then forgot how many there were. So as not to forget again, Dory decided to write a Python function that will count the stones for her.
Write a Python function that takes the list of jewel stones $J$ and the list of stones Dory owns $S$, and returns the number of jewels among Dory's stones.
__Examples:__
> Input: J = "aA", S = "aAAbbbb" <br />
Output: 3
Here the stones a and A count as jewels. Dory has the stones aAAbbbb; three of them, aAA, are jewels.
>Input: J = "z", S = "ZZ" <br />
Output: 0
Only the stone z counts as a jewel. Dory has two stones, both ordinary.
<img src="https://steemitimages.com/0x0/https://media.makeameme.org/created/repeat-repeat-repeat-5984a6.jpg" height="400" width="400">
```
def numJewelsInStones(J, S):
### ╰( ͡° ͜ʖ ͡° )つ▬▬ι═══════ bzzzzzzzzzz
# will the code be with you
return
def test_problem_13(func, test_data):
for inputs, true_answer in test_data:
answer = func(*inputs)
assert answer == true_answer, f'Expected {true_answer}, got {answer}. Input: {inputs}'
print("OK!")
NUM_JEWELS_IN_STONES_TESTS_DATA = [
(("aA", "aAAbbbb"), 3),
(("z","ZZ"),0)
]
test_problem_13(numJewelsInStones, NUM_JEWELS_IN_STONES_TESTS_DATA)
```
__A few words about efficiency:__
```
from random import random
n_obs = 10**6
mylist = [random() for _ in range(n_obs)]
myset = set(mylist)
%%timeit
0.5 in mylist # list
%%timeit
0.5 in myset # set
```
# Dictionaries (dict)
An ordinary array (in Python, a list) can be viewed as a function that maps an initial segment of the natural numbers to values.
In other words, a list is a mapping that sends the numbers 0, 1, 2, 3, ... to objects:
```
l = [10, 20, 30, 'a']
print(l[0])
print(l[1])
print(l[2])
print(l[3])
```
A dictionary can map not just an initial segment of the natural numbers but arbitrary objects. Think of a real dictionary or a phone book: a person's name maps to a phone number.
A classic use of dictionaries in data analysis is storing word frequencies in a text:
кот $\rightarrow$ 10
и $\rightarrow$ 100
Тейлора $\rightarrow$ 2
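Such a frequency table can be built in a few lines (a sketch):

```python
text = "кот и кот и Тейлора и кот"
freq = {}
for word in text.split():
    freq[word] = freq.get(word, 0) + 1  # start from 0 the first time a word is seen
print(freq)  # {'кот': 3, 'и': 3, 'Тейлора': 1}
```

Each word is a key and its count is the value, which is exactly the mapping drawn above.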
A dictionary consists of a set of keys and the values associated with them. The values can be any objects (as in a list, you can store arbitrary things). The keys can be almost any objects, but only immutable ones — in particular numbers, strings, and tuples. A list or a set cannot be a key.
Each key maps to exactly one value, but the same value can, in principle, be associated with several different keys.
```
a = dict()
a[(2,3)] = [2,3] # a tuple can be a key because it is immutable
a
b = dict()
b[[2,3]] = [2,3] # but a list cannot; this raises an error
print(b)
```
### Creating a dictionary
In curly braces (like a set), with a colon between each key and value:
```
d1 = {"кот": 10, "и": 100, "Тейлора": 2}
print(d1)
```
Via the dict() function. Note that the key-value pairs are then written not with a colon but with the assignment sign, and string keys go without quotes: in effect we create keyword arguments with those names and assign them values (the dict() function then turns the names into strings).
```
d2 = dict(кот=10, и=100, Тейлора=2)
print(d2) # the same result as above
```
And a third way: pass the dict() function a list of lists or tuples with key-value pairs.
```
d3 = dict([("кот", 10), ("и", 100), ("Тейлора", 2)]) # a sequence (e.g. a list) of tuples
print(d3)
```
Remember, when we talked about lists, we discussed why it is important to make an actual copy of an object in order to preserve the original. A copy of a dictionary can be made like this:
```
d4 = dict(d3) # effectively copies the dict from the line above
print(d4)
d1 == d2 == d3 == d4 # all four dictionaries have the same contents
```
An empty dictionary can be created in two ways.
```
d2 = {} # this is an empty dictionary (not an empty set)
d4 = dict()
print(d2, d4)
```
### Dictionary operations
As we said, dictionaries are unordered structures, so you can no longer access an element by index.
```
d1[1] # raises an error in every case except when your dictionary happens to have the key 1
```
But you can access a value by its key.
```
print(d1['кот'])
```
You can create a new key-value pair: simply put the name of the new key in square brackets.
```
d1[1] = 'test'
print(d1[1]) # now it works!
```
Note: if an element with the given key already exists, a second element with the same key will not be added! A key is a unique identifier of an element. If we add an element with an already existing key, we simply modify the old one — dictionaries are mutable objects.
```
d1["кот"] = 11 # just like assigning by index in a list, we can assign a new value by key
print(d1['кот'])
d1["кот"] += 1 # or even change it with an arithmetic operation
print(d1['кот'])
```
Duplicate values, on the other hand, are allowed in a dictionary.
```
d1['собака'] = 12
print(d1)
```
Besides key lookup, you can fetch a value with the .get() method. The difference is that if the key is not yet in the dictionary, .get() does not raise an error but returns the object None ("nothing"). This is very useful in some tasks.
```
print(d1.get("кот"))  # returns the value stored under "кот"
print(d1.get("ктоо")) # returns None
print(d1['ктоо'])     # error
```
The convenience of .get() is that we can decide ourselves what is returned when the requested key is not in the dictionary. For example, instead of None we can return the string 'Not found', and nothing will break:
```
print(d1.get("ктоо", 'Not found')) # the second argument is the fallback to return
print(d1.get("ктоо", False))       # the second argument is the fallback to return
```
Operations we already know also work with dictionaries: counting the number of elements and membership tests.
```
print(d1)
print("кот" in d1)      # test that a key is present
print("ктоо" not in d1) # test that a key is absent
```
You can delete an individual key or clear the whole dictionary with dedicated operations.
```
del d1["кот"] # delete the key together with its value
print(d1)
d1.clear()    # delete everything
print(d1)
```
Dictionaries have three methods that produce the keys only, the values only, and the key-value pairs (strictly speaking these are view objects rather than lists, but they behave much like lists).
```
print(d1.values()) # values only
print(d1.keys())   # keys only
print(d1.items())  # key-value pairs
```
## Example
```
product = ['яйца', 'чай', 'кофе', 'банан', 'петрушка', 'сода',
'яблочко', 'йогурт', 'соя', 'беозар', 'бобы', 'печень дракона']
price = [60, 16, 35, 20, 15, 10, 60, 35, 20, 42, 10, 2]
zip(product, price)
book_dict = dict(zip(product, price))
book_dict
book_dict.keys()
book_dict.values()
book_dict.items()
for k, v in book_dict.items():
print(f"Продукт {k} стоит {v} рублей")
{i: i**3 for i in range(2, 20, 3)} # a dict comprehension
```
And since Python dictionaries are so similar to real ones, let's imagine a dictionary in which every word has several meanings. The key is a word, and the value is a whole list.
```
my_dict = {'swear' : ['клясться', 'ругаться'], 'dream' : ['спать', 'мечтать']}
```
By key we get the value, which is a list:
```
my_dict['swear']
```
Since the value is a list, we can access its elements individually:
```
my_dict['swear'][0] # the first element
```
We can go further and create a dictionary whose values are dictionaries! For example, imagine that a community is holding an election and each member may vote for any number of candidates. The data is stored as a dictionary whose keys are usernames and whose values are *candidate-vote* pairs.
```
votes = {'user1': {'cand1': '+', 'cand2': '-'},
         'user2' : {'cand1': 0, 'cand3' : '+'}} # '+' means for, '-' against, 0 abstained
votes
```
By analogy with nested lists, we can chain keys to reach a value inside the dictionary that is itself a value in `votes` (yes, this sentence takes a moment to digest):
```
votes['user1']['cand1'] # take the value for the key user1, and within it the value for the key cand1
```
Dictionaries can be merged with the update() method. Note that if the dictionaries have keys in common, the values are overwritten by those of the dictionary being added.
```
votes.update(my_dict)
votes
votes['swear'] = 1 # add the key swear to votes to check what happens when we merge with my_dict, which has the same key
votes.update(my_dict)
print(votes) # swear in votes was overwritten by swear from my_dict
```
## Task 4: words
Write a function `stats(s)` that takes a string `s` of words separated by spaces and finds the most frequent word. If there is a single such word, return it; if there are several, return them as a list sorted in lexicographic order.
For example: `stats("hello hello world")` should return the string `"hello"`, and `stats("a a b b c")` should return the list `['a','b']`.
```
def stats(s):
### ╰( ͡° ͜ʖ ͡° )つ▬▬ι═══════ bzzzzzzzzzz
# will the code be with you
return
def test_problem(func, test_data):
for inputs, true_answer in test_data:
answer = func(inputs)
assert answer == true_answer, f'Expected {true_answer}, got {answer}. Input: {inputs}'
print("OK!")
STATS_TESTS_DATA = [
("hello hello world", "hello"),
("a a b b c", ['a','b'])
]
test_problem(stats, STATS_TESTS_DATA)
```
## Task 5: two sum
Given an array of integers `nums` and one more integer `target`, find all pairs of numbers in `nums` that sum to `target` and print their indices. The same number may not be used twice in one sum. Try to solve this task as efficiently as possible.
```
def two_sum_fast(nums, target):
# ┬─┬ ノ( ゜-゜ノ)
# (╯° □°)╯︵ ┻━┻
return ...
TWO_SUM_TESTS_DATA = [
(([2, 7, 11, 15], 9), [(0, 1)]),
]
test_problem_13(two_sum_fast, TWO_SUM_TESTS_DATA)
```
# Hypothesis Testing
In this project we'll be exploring basic hypothesis testing. What is a hypothesis test? It's a way to check the likelihood of a proposed statistical outcome. What follows are some examples of hypothesis tests and the ways we can characterise the evidence we have to support a statistical conjecture. So let's get started!
The basic idea of a hypothesis test is to:
1. Find a way to measure the size of the effect. This is called the *test statistic*.
2. Next, define a *null hypothesis*: that there is no effect.
3. Compute a `p-value`: the probability of measuring an effect of the size witnessed if the null hypothesis is *true*.
4. Finally, interpret the result. If the p-value is low, the effect is *statistically significant*.
This is the entire point. There are many different tests and approaches to performing these four steps in different circumstances, but this is how you can think about it.
Let's see how this works in a [concrete example](http://greenteapress.com/thinkstats2/html/thinkstats2010.html#sec90). Suppose I have a coin that I flip 250 times. Suppose I see 140 heads (and 110 tails). Is the coin fair? Let's go through the steps:
1. How do we measure the effect?
Easy! Let's take the difference between the number of heads and the number of tails: x = 140-110 = 30
2. The null hypothesis is that "there is no effect", or the coin is fair.
In other words we expect x to be zero.
3. How do we compute the p-value?
This is the probability of seeing a difference of 30 if the coin is *fair*.
This is the fun part! Let's throw a coin 250 times:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
N=250
TrueFalseFlipArray = np.random.rand(N)<0.5
print(TrueFalseFlipArray[:10])
print('-----')
# remember "True" is like 1 and False" is like "0" so "count_nonzero" counts the True elements only.
numHeads = TrueFalseFlipArray.sum()
print("num heads:", numHeads)
```
So we can simulate 250 coin flips and count the number of heads. We have to do this *many* times and then estimate the chance of getting an absolute difference between heads and tails of 30 or more. Let's use a 2D array for that. To make it more manageable, let's start with 5 trials (columns) of 10 flips (rows) each:
```
N = 10
iters = 5
data = np.random.rand(N,iters)<0.5
data
```
Now let's sum the columns (trials):
```
heads = data.sum(axis=0)
heads
tails = N-heads
tails
tails+heads
print("So the first column had %d heads, while the second had %d heads, and so on." % (heads[0], heads[1]))
# Let's bump it up to 1000 iterations (trials) of 250 flips:
N = 250
iters = 1000
data = np.random.rand(N,iters)<0.5
heads = data.sum(axis=0)
tails = N - heads
plt.hist(heads-tails)
(abs(heads-tails)>=30).sum()/iters
```
So the chance of getting a difference of 30 or more is about 7%. That's the `p-value`.
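The whole computation above can be wrapped into a single function (a sketch; `coin_pvalue` is an illustrative name, not part of any library):

```python
import numpy as np

def coin_pvalue(n_flips=250, observed_diff=30, iters=20000, seed=0):
    """Estimate P(|heads - tails| >= observed_diff) for a fair coin by simulation."""
    rng = np.random.default_rng(seed)
    # each column is one trial of n_flips coin tosses
    heads = (rng.random((n_flips, iters)) < 0.5).sum(axis=0)
    tails = n_flips - heads
    return (np.abs(heads - tails) >= observed_diff).mean()

print(coin_pvalue())  # around 0.07
```

More iterations give a more precise estimate; the simulated p-value settles near the exact binomial answer.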
## Differences between two data sets
What if I make a series of 20 measurements:
```
x1 = np.array([3.25466863, 2.97370402, 2.91113498, 3.4574893 , 3.17937048,
3.03048094, 3.21812428, 2.81350504, 2.9976349 , 2.97788408,
3.1813029 , 2.87498481, 2.90372449, 3.46095383, 3.11570786,
2.69100383, 2.97142051, 2.72968174, 2.48244642, 2.8584929 ])
x1
```
Then I make another series of measurements:
```
x2 = np.array([3.58365047, 3.04506491, 3.35190893, 2.76485786, 3.8494015 ,
3.17593123, 3.03499338, 2.31533078, 2.58647626, 3.47397813,
2.9985396 , 3.46170964, 3.23908075, 2.78904992, 3.000179 ,
3.23386923, 3.10856455, 3.24167989, 2.92353227, 3.09131427])
x2
x1_mean = x1.mean()
x2_mean = x2.mean()
print("They have different means: <x1>=%4.3f <x2>=%4.3f -> <x2-x1> = %4.3f" % (x1_mean, x2_mean, x2_mean-x1_mean))
plt.plot(range(len(x1)), x1, 'b.')
plt.plot(np.arange(len(x2))+len(x1), x2, 'r+')
plt.plot((0,len(x1)),(x1_mean,x1_mean),'b-')
plt.plot((len(x1),len(x1)+len(x2)),(x2_mean,x2_mean),'r-')
plt.title("x1 data followed by x2 data")
plt.grid()
```
Now, the question is: Is this difference *real*, or just a statistical fluctuation?
1. What's my test statistic?
It appears that `<x2> > <x1>`, so maybe I should use `<x2> - <x1>`. Make sense?
2. The null hypothesis would be that `<x2> - <x1>` is zero.
How do we compute the `p-value`? One way is to put all the data into a pot, stir it up, and pull out two sets at random that have the same size as the original sets. For these two random sets, compute the difference of their means. Let's try that!
```
iters = 10000
all_data = np.append(x1, x2) # put all the data in one pot
mean_diffs = []
for i in range(iters):
np.random.shuffle(all_data) # stir the pot
x1_mean_test = np.mean(all_data[:len(x1)]) # get mean of the first lot, like the original x1
x2_mean_test = np.mean(all_data[len(x1):]) # get the mean of the second lot, like the original x2
mean_diffs.append(x2_mean_test - x1_mean_test) # compute the difference of means and collect
mean_diffs = np.array(mean_diffs) # convert to numpy array
plt.hist(mean_diffs,20)
plt.grid()
plt.title("mean differences of shuffled data")
```
# Exercises
1. Now it's your turn. You have a numpy array of differences of means. What's the `p-value` of finding a difference greater than or equal to the observed difference in your data of 0.109? (Answer: approximately 0.127)
2. How do you interpret this `p-value`? Is the measured difference statistically significant?
3. For your lab data, carry out a hypothesis test to support or refute your measurement of an effect.
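For exercise 1, the one-sided p-value is just the fraction of shuffled mean differences at least as large as the observed one. A sketch with a small stand-in array (in the notebook you would use the `mean_diffs` computed above):

```python
import numpy as np

mean_diffs = np.array([-0.2, -0.05, 0.0, 0.08, 0.11, 0.15, 0.3, -0.12])  # stand-in data
observed = 0.109
p_value = (mean_diffs >= observed).mean()
print(p_value)  # 3 of the 8 stand-in values are >= 0.109, so 0.375 here
```

On the real 10000-sample array, this same expression should land near the quoted answer of 0.127.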
```
import database_tables as tables
import pandas as pd
import os
import dautil as dl
import ch7util
import sqlite3
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import HTML
def populate_date_dim(session):
for d in pd.date_range(start='19000101', end='20250101'):
adate = tables.DateDim(date=d.date(), day_of_month=d.day,
day_of_week=d.dayofweek, month=d.month,
quarter=d.quarter, year=d.year)
session.add(adate)
session.commit()
def populate_asset_dim(session):
asset = tables.AssetDim(symbol='AAPL', name='Apple Inc.',
category='Common Stock', country='USA',
sector='Consumer Goods')
session.add(asset)
asset = tables.AssetDim(symbol='INTC', name='Intel Corporation',
category='Common Stock', country='USA',
sector='Technology')
session.add(asset)
asset = tables.AssetDim(symbol='MSFT', name='Microsoft Corporation',
category='Common Stock', country='USA',
sector='Technology')
session.add(asset)
asset = tables.AssetDim(symbol='KO', name='The Coca-Cola Company',
category='Common Stock', country='USA',
sector='Consumer Goods')
session.add(asset)
asset = tables.AssetDim(symbol='DIS', name='The Walt Disney Company',
category='Common Stock', country='USA',
sector='Services')
session.add(asset)
asset = tables.AssetDim(symbol='MCD', name='McDonald\'s Corp.',
category='Common Stock', country='USA',
sector='Services')
session.add(asset)
asset = tables.AssetDim(symbol='NKE', name='NIKE, Inc.',
category='Common Stock', country='USA',
sector='Consumer Goods')
session.add(asset)
asset = tables.AssetDim(symbol='IBM',
name='International Business Machines Corporation',
category='Common Stock', country='USA',
sector='Technology')
session.add(asset)
session.commit()
def populate_source_dim(session):
session.add(tables.SourceDim(name='Yahoo Finance',
url='https://finance.yahoo.com'))
session.commit()
def populate_prices(session):
symbols = dl.db.map_to_id(session, tables.AssetDim.symbol)
dates = dl.db.map_to_id(session, tables.DateDim.date)
source_id = session.query(tables.SourceDim).first().id
ohlc = dl.data.OHLC()
conn = sqlite3.connect(dbname)
c = conn.cursor()
insert = '''INSERT INTO stock_price (id, date_id,
asset_id, source_id, open_price, high_price, low_price,
close_price, adjusted_close, volume) VALUES({id}, {date_id},
{asset_id}, {source_id}, {open_price}, {high_price},
{low_price}, {close_price}, {adj_close}, {volume})'''
logger = dl.log_api.conf_logger(__name__)
for symbol in ch7util.STOCKS:
df = ohlc.get(symbol)
i = 0
for index, row in df.iterrows():
date_id = dates[index.date()]
asset_id = symbols[symbol]
i += 1
stmt = insert.format(id=i, date_id=date_id,
asset_id=asset_id,
source_id=source_id,
open_price=dl.data.centify(row['Open']),
high_price=dl.data.centify(row['High']),
low_price=dl.data.centify(row['Low']),
close_price=dl.data.centify(row['Close']),
adj_close=dl.data.centify(row['Adj Close']),
volume=int(row['Volume']))
c.execute(stmt)
if i % 1000 == 0:
logger.info("Progress %s %s", symbol, i)
conn.commit()
conn.commit()
c.close()
conn.close()
def populate(session):
if session.query(tables.SourceDim).count() == 0:
populate_source_dim(session)
populate_asset_dim(session)
populate_date_dim(session)
populate_prices(session)
def plot_volume(col, ax):
df = pd.read_sql(sql.format(col=col), conn)
sns.barplot(x=col, y='AVG(P.Volume/1000)', data=df,
hue='sector', ax=ax)
ax.legend(loc='best')
dbname = os.path.join(dl.data.get_data_dir(), 'stock_prices.db')
session = dl.db.create_session(dbname, tables.Base)
populate(session)
sql = '''
SELECT
A.sector,
D.{col},
AVG(P.Volume/1000)
FROM stock_price P
INNER JOIN date_dim D ON (P.Date_Id = D.Id)
INNER JOIN asset_dim A ON (P.asset_id = a.Id)
GROUP BY
A.sector,
D.{col}
'''
conn = sqlite3.connect(dbname)
%matplotlib inline
context = dl.nb.Context('populate_database')
dl.nb.RcWidget(context)
sp = dl.plotting.Subplotter(2, 2, context)
plot_volume('day_of_week', sp.ax)
sp.ax.set_xticklabels(['Mon', 'Tue', 'Wed', 'Thu', 'Fri'])
plot_volume('month', sp.next_ax())
sp.ax.set_xticklabels(dl.ts.short_months())
plot_volume('day_of_month', sp.next_ax())
plot_volume('quarter', sp.next_ax())
HTML(sp.exit())
```
# 1. Description
This notebook performs the subgrouping algorithm shown in Figure S1 of the paper "Prediction of the ICU mortality based on the missing events.".
# 2. Before running...
Before proceeding with the steps below, please set up the Python environment first. This program requires the following libraries.
```
import pandas as pd # 1.2.1
import itertools
```
Then place the related files in the appropriate directories so that the program can find the input files. To check that the input files are set up correctly, run the cell below; if they are, it will finish without errors.
```
input_ids = set(pd.read_csv("ids/004_sepsis_aps_4226.csv", header=None).iloc[:,0].tolist())
len(input_ids) # 4226
eICU_file = {}
eICU_file['apacheApsVar'] = pd.read_csv('data/apacheApsVar.csv')
eICU_file['apachePatientResult'] = pd.read_csv('data/apachePatientResult.csv')
eICU_file['apachePredVar'] = pd.read_csv('data/apachePredVar.csv')
eICU_file['patient'] = pd.read_csv('data/patient.csv')
```
For preparing the file "004_sepsis_aps_4226.csv", please follow this notebook (1_the_sepsis_group_and_non_sepsis_group.ipynb).<br>
<br>
For getting the eICU files, please see "https://www.usa.philips.com/healthcare/solutions/enterprise-telehealth/eri" or "https://eicu-crd.mit.edu/gettingstarted/access/".
# 3. Class definition
```
class status():
def __init__(self, df):
# df
self.df = df
self.df_thistime = pd.DataFrame([], columns=df.columns)
self.df_next = df
# ID
self.ids_all = set(df.patientunitstayid)
self.ids_thistime = set([])
self.ids_next = self.ids_all
# parameter
self.target = []
def remove(self, target):
self.target += target
# Update ID
tmp = set(self.df_next.drop(target, axis=1).where(self.df_next>=0).dropna().patientunitstayid)
self.ids_thistime |= tmp
self.ids_next -= tmp
# df thistime
self.df_thistime = self.df.drop(self.target, axis=1)
self.df_thistime = self.df_thistime.query("patientunitstayid in @ self.ids_thistime")
# df_next
self.df_next = self.df_next.drop(target, axis=1)
self.df_next = self.df_next.query("patientunitstayid in @ self.ids_next")
def get_next(self, depth):
# 1st and Last columns are not paramters
parameters = self.df_next.columns[1:-1]
# depth : the number of parameters to be excluded at once
combinations = [list(i) for i in itertools.combinations(parameters, depth)]
# # of non-NaN-records
num_non_nan = pd.DataFrame({
"__".join(comb) : [len(self.df_next.drop(comb, axis=1).where(self.df_next>=0).dropna())]
for comb in combinations
}).T
# "1 <= # of non-NaN-records < # of pids" is ideal.
tf = num_non_nan.applymap(lambda x : 1 <= x <= len(self.df_next) - 1)
if tf.any().any():
tmp = num_non_nan[tf]
return tmp.idxmax()[0]
else:
# "# of non-NaN-records == # of pids" is acceptable.
tf = num_non_nan.applymap(lambda x : 1 <= x <= len(self.df_next))
if tf.any().any():
tmp = num_non_nan[tf]
return tmp.idxmax()[0]
else:
# If there's no more parameteres, return nan
if len(parameters) == 1:
return "nan"
# If there's only NaN records, return ""
else:
return ""
```
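To see what `get_next`/`remove` are doing, here is the core greedy step — excluding the single column whose removal exposes the most complete records (the depth = 1 case) — on a toy DataFrame. The data here is hypothetical, not the eICU tables, and the class additionally treats negative values as missing via `where(df >= 0)`, which plain `dropna` ignores in this sketch:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'patientunitstayid': [1, 2, 3, 4],
    'a': [1.0, np.nan, 1.0, 1.0],
    'b': [1.0, 1.0, np.nan, np.nan],
    'outcome': [0, 1, 0, 1],
})
params = ['a', 'b']  # candidate parameters (first and last columns are not parameters)

# number of complete rows after excluding each candidate column
complete = {c: len(toy.drop(columns=[c]).dropna()) for c in params}
print(complete)  # {'a': 2, 'b': 3}

best = max(complete, key=complete.get)
print(best)      # excluding 'b' exposes the most complete records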
# 4. Prepare DataFrame
## 4.1. Definition of variables used in this study
```
len(eICU_file['apachePatientResult'][eICU_file['apachePatientResult']["apacheversion"]=="IV"])
eICU_parm = {}
eICU_parm['apacheApsVar'] = [
'patientunitstayid',
'intubated',
'vent',
'dialysis',
'eyes',
'motor',
'verbal',
'meds',
'urine',
'wbc',
'temperature',
'respiratoryrate',
'sodium',
'heartrate',
'meanbp',
'ph',
'hematocrit',
'creatinine',
'albumin',
'pao2',
'pco2',
'bun',
'glucose',
'bilirubin',
'fio2'
]
eICU_parm['apachePatientResult'] = [
'patientunitstayid',
'apachescore',
'predictedicumortality',
'predictediculos',
'predictedhospitalmortality',
'predictedhospitallos',
'preopmi',
'preopcardiaccath',
'ptcawithin24h',
'predventdays'
]
eICU_parm['apachePredVar'] = [
'patientunitstayid',
'gender',
'teachtype',
'bedcount',
'graftcount',
'age',
'thrombolytics',
'aids',
'hepaticfailure',
'lymphoma',
'metastaticcancer',
'leukemia',
'immunosuppression',
'cirrhosis',
'ima',
'midur',
'ventday1',
'oobventday1',
'oobintubday1',
'diabetes'
]
eICU_parm['patient'] = [
'patientunitstayid',
'hospitalid',
'admissionheight',
'hospitaladmitoffset',
'admissionweight'
]
```
## 4.2. DataFrame
```
#========================================
# select columns and ids for each file
#========================================
eICU_df = {}
eICU_df['apacheApsVar'] = eICU_file['apacheApsVar'][eICU_parm['apacheApsVar']].query("patientunitstayid in @input_ids")
eICU_df['apachePatientResult'] = eICU_file['apachePatientResult'][eICU_parm['apachePatientResult']].query("patientunitstayid in @input_ids")
eICU_df['apachePredVar'] = eICU_file['apachePredVar'][eICU_parm['apachePredVar']].query("patientunitstayid in @input_ids")
eICU_df['patient'] = eICU_file['patient'][eICU_parm['patient']].query("patientunitstayid in @input_ids")
#========================================
# make column names unique
#========================================
# (column name -> filename + column name)
eICU_df['apacheApsVar'].columns = [
'apacheApsVar_' + parm if not parm=="patientunitstayid" else "patientunitstayid"
for parm in eICU_df['apacheApsVar'].columns
]
eICU_df['apachePatientResult'].columns = [
'apachePatientResult_' + parm if not parm=="patientunitstayid" else "patientunitstayid"
for parm in eICU_df['apachePatientResult'].columns
]
eICU_df['apachePredVar'].columns = [
'apachePredVar_' + parm if not parm=="patientunitstayid" else "patientunitstayid"
for parm in eICU_df['apachePredVar'].columns
]
eICU_df['patient'].columns = [
'patient_' + parm if not parm=="patientunitstayid" else "patientunitstayid"
for parm in eICU_df['patient'].columns
]
#========================================
# Make X
#========================================
# 1st column : key (patientunitstayid)
key = pd.DataFrame(list(input_ids), columns=["patientunitstayid"])
# 2nd~ column : parameters
key_X = pd.merge(key, eICU_df['apacheApsVar'], on="patientunitstayid")
key_X = pd.merge(key_X, eICU_df['apachePatientResult'], on="patientunitstayid")
key_X = pd.merge(key_X, eICU_df['apachePredVar'], on="patientunitstayid")
key_X = pd.merge(key_X, eICU_df['patient'], on="patientunitstayid")
#========================================
# Make X_y (df)
#========================================
# Last column : DEAD(=1) or ALIVE(=0)
y = eICU_file["apachePatientResult"][['patientunitstayid', 'actualicumortality']].replace('ALIVE',0).replace('EXPIRED',1)
key_X_y = pd.merge(key_X, y, on="patientunitstayid")
#========================================
# Rename
#========================================
df = key_X_y
len(df)
```
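The prefix-and-merge pattern above can be illustrated on two toy frames (the frame contents and the `demo_` prefix are made up for illustration only):

```python
import pandas as pd

left = pd.DataFrame({"patientunitstayid": [1, 2], "score": [10, 20]})
right = pd.DataFrame({"patientunitstayid": [1, 2], "age": [60, 70]})

# make column names unique: add a file prefix to everything except the join key
left.columns = ["demo_" + c if c != "patientunitstayid" else c
                for c in left.columns]

merged = pd.merge(left, right, on="patientunitstayid")
print(merged.columns.tolist())  # ['patientunitstayid', 'demo_score', 'age']
```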
# 5. Subgrouping
```
df_status = status(df)
k=1
print("# of INPUT : ", len(df_status.ids_all), " patientunitstayids", "\n\n")
while 1:
print("##################################")
print(" Subgroup ",k)
print("##################################")
print("\n")
print("Checking the inputs...")
print("\n")
parms_tobe_excluded = []
#========================================
# Get Subgroup
#========================================
while not(500 <= len(df_status.ids_thistime) <= 2000):
parms = ""
# up to 3 parameters taken into account
for i in range(1,4):
parms = df_status.get_next(i)
# if parameters are found
if parms != "":
break
# If no parameters are found, output "Time Out" and stop.
if parms == "":
print("Time Out\n")
df_status.df_next = pd.DataFrame()
break
# If there are too many NaNs, output "Interpolation needed" and stop
if parms == "nan":
print("Interpolation needed\n")
df_status.df_next = pd.DataFrame()
break
# Change format
parms = parms.split("__")
# Update
df_status.remove(parms)
parms_tobe_excluded += parms
print("--> ", [i.split("_")[1] for i in parms], " is/are selected to be excluded.\n")
print("--> ", ", ".join([i.split("_")[1] for i in parms_tobe_excluded]), " was/were excluded in the end.\n")
parms_tobe_excluded = []
df_A = pd.DataFrame()
if 500 <= len(df_status.ids_next):
print("--> ", len(df_status.ids_thistime), " patientunitstayids survived.\n")
df_A = df_status.df_thistime
df_status = status(df_status.df_next)
else:
# The remaining pids are picked up and merged into this batch.
print("--> ", len(df_status.ids_thistime)+len(df_status.ids_next), " patientunitstayids survived.\n")
df_A = pd.concat([df_status.df_thistime, df_status.df_next])
df_status.ids_next = set([])
df_status.df_next = df_status.df_next.query("patientunitstayid in @df_status.ids_next")
df_A = df_A.where(df_A>=0).dropna()
k+=1
if len(df_status.df_next) == 0:
break
```
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provides various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = None
shape_Y = None
m = None # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
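As a standalone sanity check of the layout described above (the zero arrays here are hypothetical stand-ins, not the assignment data or the graded solution), `shape` behaves as follows:

```python
import numpy as np

X_demo = np.zeros((2, 400))   # 2 features (rows), 400 examples (columns)
Y_demo = np.zeros((1, 400))   # 1 label per example

shape_X = X_demo.shape        # (2, 400)
shape_Y = Y_demo.shape        # (1, 400)
m = X_demo.shape[1]           # number of training examples

print(shape_X, shape_Y, m)
```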
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = None # size of input layer
n_h = None
n_y = None # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
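A minimal standalone illustration of the two calls mentioned above (the shapes `(4, 2)` and `(4, 1)` are examples, not the graded answer):

```python
import numpy as np

np.random.seed(0)                  # fixed seed so the check is reproducible
W = np.random.randn(4, 2) * 0.01   # small random weights, shape (4, 2)
b = np.zeros((4, 1))               # zero bias vector, shape (4, 1)

print(W.shape, b.shape)
```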
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = None
A1 = None
Z2 = None
A2 = None
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
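A quick check on made-up probabilities and labels confirms that the two routes agree:

```python
import numpy as np

A2 = np.array([[0.8, 0.4, 0.9]])   # made-up predicted probabilities
Y  = np.array([[1,   0,   1  ]])   # made-up labels

via_multiply = -np.sum(np.multiply(np.log(A2), Y))
via_dot      = -np.dot(np.log(A2), Y.T).item()

print(via_multiply, via_dot)       # identical values
```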
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = None
cost = None
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
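The identity $g^{[1]'}(z) = 1 - a^2$ can be verified numerically with a central-difference approximation (the sample points are arbitrary):

```python
import numpy as np

Z1 = np.array([[-1.0, 0.0, 0.5, 2.0]])
A1 = np.tanh(Z1)

analytic = 1 - np.power(A1, 2)                                 # 1 - tanh(z)^2
eps = 1e-6
numeric = (np.tanh(Z1 + eps) - np.tanh(Z1 - eps)) / (2 * eps)  # finite difference

print(np.max(np.abs(analytic - numeric)))                      # close to zero
```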
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = None
W2 = None
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = None
A2 = None
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = None
dW2 = None
db2 = None
dZ1 = None
dW1 = None
db1 = None
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
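For a single parameter, the rule translates directly into code (a generic sketch with illustrative numbers, not the graded `update_parameters` solution):

```python
import numpy as np

learning_rate = 1.2                 # alpha (illustrative value)
W = np.array([[0.5, -0.3]])         # current parameter theta
dW = np.array([[0.1, 0.2]])         # gradient dJ/dtheta

W = W - learning_rate * dW          # theta = theta - alpha * dJ/dtheta
print(W)                            # [[ 0.38 -0.54]]
```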
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = None
db1 = None
dW2 = None
db2 = None
### END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = None
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = None
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = None
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = None
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = None
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb{1}\{activation > 0.5\} = \begin{cases}
1 & \text{if}\ activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
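For instance, on a toy array (`astype(int)` turns the boolean mask into 0/1 values):

```python
import numpy as np

A = np.array([[0.1, 0.7, 0.5, 0.92]])
predictions = (A > 0.5).astype(int)   # strict inequality: 0.5 maps to 0
print(predictions)                    # [[0 1 0 1]]
```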
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = None
predictions = None
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Optional questions**:
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Reference:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
### TCLab Overview

### Generate Step Test Data
```
import numpy as np
import pandas as pd
import tclab
import time
import os.path
# generate step test data on Arduino
filename = 'data.csv'
# redo data collection?
redo = False
# check if file already exists
if os.path.isfile(filename) and (not redo):
print('File: '+filename+' already exists.')
print('Change redo=True to collect data again')
print('TCLab should be at room temperature at start')
else:
# heater steps
Q1d = np.zeros(601)
Q1d[10:200] = 80
Q1d[200:280] = 20
Q1d[280:400] = 70
Q1d[400:] = 50
Q2d = np.zeros(601)
Q2d[120:320] = 100
Q2d[320:520] = 10
Q2d[520:] = 80
# Connect to Arduino
a = tclab.TCLab()
fid = open(filename,'w')
fid.write('Time,Q1,Q2,T1,T2\n')
fid.close()
# run step test (10 min)
for i in range(601):
# set heater values
a.Q1(Q1d[i])
a.Q2(Q2d[i])
print('Time: ' + str(i) + \
' Q1: ' + str(Q1d[i]) + \
' Q2: ' + str(Q2d[i]) + \
' T1: ' + str(a.T1) + \
' T2: ' + str(a.T2))
# wait 1 second
time.sleep(1)
fid = open(filename,'a')
fid.write(str(i)+','+str(Q1d[i])+','+str(Q2d[i])+',' \
+str(a.T1)+','+str(a.T2)+'\n')
# close connection to Arduino
a.close()
fid.close()
```
### Plot Step Test Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# read data file
filename = 'data.csv'
data = pd.read_csv(filename)
# plot measurements
plt.figure(figsize=(10,7))
plt.subplot(2,1,1)
plt.plot(data['Time'],data['Q1'],'r-',label='Heater 1')
plt.plot(data['Time'],data['Q2'],'b--',label='Heater 2')
plt.ylabel('Heater (%)')
plt.legend(loc='best')
plt.subplot(2,1,2)
plt.plot(data['Time'],data['T1'],'r.',label='Temperature 1')
plt.plot(data['Time'],data['T2'],'b.',label='Temperature 2')
plt.ylabel(r'Temperature ($^oC$)')
plt.legend(loc='best')
plt.xlabel('Time (sec)')
plt.savefig('data.png')
plt.show()
```
### Physics-based Model Prediction
```
!pip install gekko
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from gekko import GEKKO
from scipy.integrate import odeint
from scipy.interpolate import interp1d
# Import data
try:
# try to read local data file first
filename = 'data.csv'
data = pd.read_csv(filename)
except:
filename = 'http://apmonitor.com/pdc/uploads/Main/tclab_data2.txt'
data = pd.read_csv(filename)
# Fit Parameters of Energy Balance
m = GEKKO() # Create GEKKO Model
# Parameters to Estimate
U = m.FV(value=10,lb=1,ub=20)
Us = m.FV(value=20,lb=5,ub=40)
alpha1 = m.FV(value=0.01,lb=0.001,ub=0.03) # W / % heater
alpha2 = m.FV(value=0.005,lb=0.001,ub=0.02) # W / % heater
tau = m.FV(value=10.0,lb=5.0,ub=60.0)
# Measured inputs
Q1 = m.Param()
Q2 = m.Param()
Ta =23.0+273.15 # K
mass = 4.0/1000.0 # kg
Cp = 0.5*1000.0 # J/kg-K
A = 10.0/100.0**2 # Area not between heaters in m^2
As = 2.0/100.0**2 # Area between heaters in m^2
eps = 0.9 # Emissivity
sigma = 5.67e-8 # Stefan-Boltzmann
TH1 = m.SV()
TH2 = m.SV()
TC1 = m.CV()
TC2 = m.CV()
# Heater Temperatures in Kelvin
T1 = m.Intermediate(TH1+273.15)
T2 = m.Intermediate(TH2+273.15)
# Heat transfer between two heaters
Q_C12 = m.Intermediate(Us*As*(T2-T1)) # Convective
Q_R12 = m.Intermediate(eps*sigma*As*(T2**4-T1**4)) # Radiative
# Energy balances
m.Equation(TH1.dt() == (1.0/(mass*Cp))*(U*A*(Ta-T1) \
+ eps * sigma * A * (Ta**4 - T1**4) \
+ Q_C12 + Q_R12 \
+ alpha1*Q1))
m.Equation(TH2.dt() == (1.0/(mass*Cp))*(U*A*(Ta-T2) \
+ eps * sigma * A * (Ta**4 - T2**4) \
- Q_C12 - Q_R12 \
+ alpha2*Q2))
# Conduction to temperature sensors
m.Equation(tau*TC1.dt() == TH1-TC1)
m.Equation(tau*TC2.dt() == TH2-TC2)
# Options
# STATUS=1 allows solver to adjust parameter
U.STATUS = 1
Us.STATUS = 1
alpha1.STATUS = 1
alpha2.STATUS = 1
tau.STATUS = 1
Q1.value=data['Q1'].values
Q2.value=data['Q2'].values
TH1.value=data['T1'].values[0]
TH2.value=data['T2'].values[0]
TC1.value=data['T1'].values
TC1.FSTATUS = 1 # minimize fstatus * (meas-pred)^2
TC2.value=data['T2'].values
TC2.FSTATUS = 1 # minimize fstatus * (meas-pred)^2
m.time = data['Time'].values
m.options.IMODE = 5 # MHE
m.options.EV_TYPE = 2 # Objective type
m.options.NODES = 2 # Collocation nodes
m.options.SOLVER = 3 # IPOPT
m.solve(disp=False) # Solve
# Parameter values
print('U : ' + str(U.value[0]))
print('Us : ' + str(Us.value[0]))
print('alpha1: ' + str(alpha1.value[0]))
print('alpha2: ' + str(alpha2.value[0]))
print('tau: ' + str(tau.value[0]))
sae = 0.0
for i in range(len(data)):
sae += np.abs(data['T1'][i]-TC1.value[i])
sae += np.abs(data['T2'][i]-TC2.value[i])
print('SAE Energy Balance: ' + str(sae))
# Create plot
plt.figure(figsize=(10,7))
ax=plt.subplot(2,1,1)
ax.grid()
plt.plot(data['Time'],data['T1'],'r.',label=r'$T_1$ measured')
plt.plot(m.time,TC1.value,color='black',linestyle='--',\
linewidth=2,label=r'$T_1$ energy balance')
plt.plot(data['Time'],data['T2'],'b.',label=r'$T_2$ measured')
plt.plot(m.time,TC2.value,color='orange',linestyle='--',\
linewidth=2,label=r'$T_2$ energy balance')
plt.ylabel(r'T ($^oC$)')
plt.legend(loc=2)
ax=plt.subplot(2,1,2)
ax.grid()
plt.plot(data['Time'],data['Q1'],'r-',\
linewidth=3,label=r'$Q_1$')
plt.plot(data['Time'],data['Q2'],'b:',\
linewidth=3,label=r'$Q_2$')
plt.ylabel('Heaters')
plt.xlabel('Time (sec)')
plt.legend(loc='best')
plt.savefig('Physics_fit.png')
```
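The GEKKO cell above estimates U, Us, alpha1, alpha2, and tau by regression against the measured data. The same energy balance can be forward-simulated with plain scipy for a quick sanity check; the sketch below uses the initial parameter guesses (not fitted values) and a constant 80% step on heater 1.

```
import numpy as np
from scipy.integrate import odeint

# Nominal (initial-guess) parameter values; the GEKKO cell fits these from data
U, Us, alpha1, alpha2, tau = 10.0, 20.0, 0.01, 0.005, 10.0
Ta = 23.0 + 273.15          # ambient (K)
mass, Cp = 4.0/1000.0, 0.5*1000.0
A, As = 10.0/100.0**2, 2.0/100.0**2
eps, sigma = 0.9, 5.67e-8

def energy_balance(x, t, Q1, Q2):
    TH1, TH2, TC1, TC2 = x
    T1, T2 = TH1 + 273.15, TH2 + 273.15
    QC12 = Us*As*(T2 - T1)                      # convective coupling
    QR12 = eps*sigma*As*(T2**4 - T1**4)         # radiative coupling
    dTH1 = (U*A*(Ta-T1) + eps*sigma*A*(Ta**4-T1**4) + QC12 + QR12 + alpha1*Q1)/(mass*Cp)
    dTH2 = (U*A*(Ta-T2) + eps*sigma*A*(Ta**4-T2**4) - QC12 - QR12 + alpha2*Q2)/(mass*Cp)
    dTC1 = (TH1 - TC1)/tau                      # sensor lag
    dTC2 = (TH2 - TC2)/tau
    return [dTH1, dTH2, dTC1, dTC2]

# 5 minutes with Q1=80%, Q2=0%
x = odeint(energy_balance, [23.0]*4, np.linspace(0, 300, 301), args=(80.0, 0.0))
```

With heater 1 on and heater 2 off, TH1 should rise well above ambient while TH2 rises only through the coupling terms.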
### Determine FOPDT Parameters with Graphical Fit Widget
```
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.integrate import odeint
import ipywidgets as wg
from IPython.display import display
# try to read local data file first
try:
filename = 'data.csv'
data = pd.read_csv(filename)
except:
filename = 'http://apmonitor.com/pdc/uploads/Main/tclab_data2.txt'
data = pd.read_csv(filename)
n = 601 # time points to plot
tf = 600.0 # final time
# Use expected room temperature for initial condition
#y0 = [23.0,23.0]
# Use initial condition
y0d = [data['T1'].values[0],data['T2'].values[0]]
# load data
Q1 = data['Q1'].values
Q2 = data['Q2'].values
T1 = data['T1'].values
T2 = data['T2'].values
T1p = np.ones(n)*y0d[0]
T2p = np.ones(n)*y0d[1]
def process(y,t,u1,u2,Kp,Kd,taup):
y1,y2 = y
dy1dt = (1.0/taup) * (-(y1-y0d[0]) + Kp * u1 + Kd * (y2-y1))
dy2dt = (1.0/taup) * (-(y2-y0d[1]) + (Kp/2.0) * u2 + Kd * (y1-y2))
return [dy1dt,dy2dt]
def fopdtPlot(Kp,Kd,taup,thetap):
y0 = y0d
t = np.linspace(0,tf,n) # create time vector
iae = 0.0
# loop through all time steps
for i in range(1,n):
# simulate process for one time step
ts = [t[i-1],t[i]] # time interval
inputs = (Q1[max(0,i-int(thetap))],Q2[max(0,i-int(thetap))],Kp,Kd,taup)
y = odeint(process,y0,ts,args=inputs)
y0 = y[1] # record new initial condition
T1p[i] = y0[0]
T2p[i] = y0[1]
iae += np.abs(T1[i]-T1p[i]) + np.abs(T2[i]-T2p[i])
# plot FOPDT response
plt.figure(1,figsize=(15,7))
plt.subplot(2,1,1)
plt.plot(t,T1,'r.',linewidth=2,label='Temperature 1 (meas)')
plt.plot(t,T2,'b.',linewidth=2,label='Temperature 2 (meas)')
plt.plot(t,T1p,'r--',linewidth=2,label='Temperature 1 (pred)')
plt.plot(t,T2p,'b--',linewidth=2,label='Temperature 2 (pred)')
plt.ylabel(r'T $(^oC)$')
plt.text(200,20,'Integral Abs Error: ' + str(np.round(iae,2)))
plt.text(400,35,r'$K_p$: ' + str(np.round(Kp,0)))
plt.text(400,30,r'$K_d$: ' + str(np.round(Kd,0)))
plt.text(400,25,r'$\tau_p$: ' + str(np.round(taup,0)) + ' sec')
plt.text(400,20,r'$\theta_p$: ' + str(np.round(thetap,0)) + ' sec')
plt.legend(loc=2)
plt.subplot(2,1,2)
plt.plot(t,Q1,'b--',linewidth=2,label=r'Heater 1 ($Q_1$)')
plt.plot(t,Q2,'r:',linewidth=2,label=r'Heater 2 ($Q_2$)')
plt.legend(loc='best')
plt.xlabel('time (sec)')
Kp_slide = wg.FloatSlider(value=0.5,min=0.1,max=1.5,step=0.05)
Kd_slide = wg.FloatSlider(value=0.0,min=0.0,max=1.0,step=0.05)
taup_slide = wg.FloatSlider(value=50.0,min=50.0,max=250.0,step=5.0)
thetap_slide = wg.FloatSlider(value=0,min=0,max=30,step=1)
wg.interact(fopdtPlot, Kp=Kp_slide, Kd=Kd_slide, taup=taup_slide,thetap=thetap_slide)
print('FOPDT Simulator: Adjust Kp, Kd, taup, and thetap ' + \
'to achieve lowest Integral Abs Error')
```
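The widget above fits a first-order-plus-dead-time (FOPDT) model by eye. For reference, the analytic step response being matched is y(t) = y0 + Kp Δu (1 − e^{−(t−θp)/τp}) for t ≥ θp, and y(t) = y0 before the dead time elapses. A minimal sketch (the parameter values are illustrative, not fitted):

```
import numpy as np

def fopdt_step(t, Kp, taup, thetap, du, y0=0.0):
    """Analytic FOPDT response to a step of size du applied at t = 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t < thetap, y0,
                    y0 + Kp*du*(1.0 - np.exp(-np.maximum(t - thetap, 0.0)/taup)))

# illustrative values: Kp=0.5 degC/%, taup=150 s, thetap=10 s, 80% heater step
resp = fopdt_step(np.array([0.0, 10.0, 1e6]), Kp=0.5, taup=150.0, thetap=10.0, du=80.0)
```

At t → ∞ the response settles at Kp·Δu above the initial value, which is the gain the sliders adjust.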
### Determine FOPDT Parameters with Optimization
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.optimize import minimize
from scipy.interpolate import interp1d
# initial guesses
x0 = np.zeros(4)
x0[0] = 0.8 # Kp
x0[1] = 0.2 # Kd
x0[2] = 150.0 # taup
x0[3] = 10.0 # thetap
# Import CSV data file
# try to read local data file first
try:
filename = 'data.csv'
data = pd.read_csv(filename)
except:
filename = 'http://apmonitor.com/pdc/uploads/Main/tclab_data2.txt'
data = pd.read_csv(filename)
Q1_0 = data['Q1'].values[0]
Q2_0 = data['Q2'].values[0]
T1_0 = data['T1'].values[0]
T2_0 = data['T2'].values[0]
t = data['Time'].values - data['Time'].values[0]
Q1 = data['Q1'].values
Q2 = data['Q2'].values
T1 = data['T1'].values
T2 = data['T2'].values
# specify number of steps
ns = len(t)
delta_t = t[1]-t[0]
# create linear interpolation of the u data versus time
Qf1 = interp1d(t,Q1)
Qf2 = interp1d(t,Q2)
# define first-order plus dead-time approximation
def fopdt(T,t,Qf1,Qf2,Kp,Kd,taup,thetap):
# T = states
# t = time
# Qf1 = input linear function (for time shift)
# Qf2 = input linear function (for time shift)
# Kp = model gain
# Kd = disturbance gain
# taup = model time constant
    # thetap = model dead time
# time-shift Q
try:
if (t-thetap) <= 0:
Qm1 = Qf1(0.0)
Qm2 = Qf2(0.0)
else:
Qm1 = Qf1(t-thetap)
Qm2 = Qf2(t-thetap)
except:
Qm1 = Q1_0
Qm2 = Q2_0
# calculate derivative
dT1dt = (-(T[0]-T1_0) + Kp*(Qm1-Q1_0) + Kd*(T[1]-T[0]))/taup
dT2dt = (-(T[1]-T2_0) + (Kp/2.0)*(Qm2-Q2_0) + Kd*(T[0]-T[1]))/taup
return [dT1dt,dT2dt]
# simulate FOPDT model
def sim_model(x):
# input arguments
Kp,Kd,taup,thetap = x
# storage for model values
T1p = np.ones(ns) * T1_0
T2p = np.ones(ns) * T2_0
# loop through time steps
for i in range(0,ns-1):
ts = [t[i],t[i+1]]
T = odeint(fopdt,[T1p[i],T2p[i]],ts,args=(Qf1,Qf2,Kp,Kd,taup,thetap))
T1p[i+1] = T[-1,0]
T2p[i+1] = T[-1,1]
return T1p,T2p
# define objective
def objective(x):
# simulate model
T1p,T2p = sim_model(x)
# return objective
return sum(np.abs(T1p-T1)+np.abs(T2p-T2))
# show initial objective
print('Initial SAE Objective: ' + str(objective(x0)))
print('Optimizing Values...')
# optimize without parameter constraints
#solution = minimize(objective,x0)
# optimize with bounds on variables
bnds = ((0.4, 1.5), (0.1, 0.5), (50.0, 200.0), (0.0, 30.0))
solution = minimize(objective,x0,bounds=bnds,method='SLSQP')
# show final objective
x = solution.x
iae = objective(x)
Kp,Kd,taup,thetap = x
print('Final SAE Objective: ' + str(objective(x)))
print('Kp: ' + str(Kp))
print('Kd: ' + str(Kd))
print('taup: ' + str(taup))
print('thetap: ' + str(thetap))
# save fopdt.txt file
fid = open('fopdt.txt','w')
fid.write(str(Kp)+'\n')
fid.write(str(Kd)+'\n')
fid.write(str(taup)+'\n')
fid.write(str(thetap)+'\n')
fid.write(str(T1_0)+'\n')
fid.write(str(T2_0)+'\n')
fid.close()
# calculate model with updated parameters
T1p,T2p = sim_model(x)
plt.figure(1,figsize=(15,7))
plt.subplot(2,1,1)
plt.plot(t,T1,'r.',linewidth=2,label='Temperature 1 (meas)')
plt.plot(t,T2,'b.',linewidth=2,label='Temperature 2 (meas)')
plt.plot(t,T1p,'r--',linewidth=2,label='Temperature 1 (pred)')
plt.plot(t,T2p,'b--',linewidth=2,label='Temperature 2 (pred)')
plt.ylabel(r'T $(^oC)$')
plt.text(200,20,'Integral Abs Error: ' + str(np.round(iae,2)))
plt.text(400,35,r'$K_p$: ' + str(np.round(Kp,1)))
plt.text(400,30,r'$K_d$: ' + str(np.round(Kd,1)))
plt.text(400,25,r'$\tau_p$: ' + str(np.round(taup,0)) + ' sec')
plt.text(400,20,r'$\theta_p$: ' + str(np.round(thetap,0)) + ' sec')
plt.legend(loc=2)
plt.subplot(2,1,2)
plt.plot(t,Q1,'b--',linewidth=2,label=r'Heater 1 ($Q_1$)')
plt.plot(t,Q2,'r:',linewidth=2,label=r'Heater 2 ($Q_2$)')
plt.legend(loc='best')
plt.xlabel('time (sec)')
plt.savefig('fopdt_opt.png')
plt.show()
```
### Design 2 PID Controllers for T1 and T2
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.integrate import odeint
import ipywidgets as wg
from IPython.display import display
n = 601 # time points to plot
tf = 600.0 # final time
# Load TCLab FOPDT
fid = open('fopdt.txt','r')
Kp = float(fid.readline())
Kd = float(fid.readline())
taup = float(fid.readline())
thetap = float(fid.readline())
T1_0 = float(fid.readline())
T2_0 = float(fid.readline())
fid.close()
y0 = [T1_0,T2_0]
Kff = -Kd/Kp
def process(y,t,u1,u2):
y1,y2 = y
dy1dt = (1.0/taup) * (-(y1-y0[0]) + Kp * u1 + Kd * (y2-y1))
dy2dt = (1.0/taup) * (-(y2-y0[1]) + (Kp/2.0) * u2 + Kd * (y1-y2))
return [dy1dt,dy2dt]
def pidPlot(Kc,tauI,tauD,Kff):
y0 = [23.0,23.0]
t = np.linspace(0,tf,n) # create time vector
P1 = np.zeros(n) # initialize proportional term
I1 = np.zeros(n) # initialize integral term
D1 = np.zeros(n) # initialize derivative term
FF1 = np.zeros(n) # initialize feedforward term
e1 = np.zeros(n) # initialize error
P2 = np.zeros(n) # initialize proportional term
I2 = np.zeros(n) # initialize integral term
D2 = np.zeros(n) # initialize derivative term
FF2 = np.zeros(n) # initialize feedforward term
e2 = np.zeros(n) # initialize error
OP1 = np.zeros(n) # initialize controller output
OP2 = np.zeros(n) # initialize disturbance
PV1 = np.ones(n)*y0[0] # initialize process variable
PV2 = np.ones(n)*y0[1] # initialize process variable
SP1 = np.ones(n)*y0[0] # initialize setpoint
SP2 = np.ones(n)*y0[1] # initialize setpoint
    SP1[10:] = 60.0        # step up
    SP1[400:] = 40.0       # step down
    SP2[150:] = 50.0       # step up
    SP2[350:] = 35.0       # step down
Kc1 = Kc
Kc2 = Kc*2.0
Kff1 = Kff
Kff2 = Kff*2.0
iae = 0.0
# loop through all time steps
for i in range(1,n):
# simulate process for one time step
ts = [t[i-1],t[i]] # time interval
heaters = (OP1[max(0,i-int(thetap))],OP2[max(0,i-int(thetap))])
y = odeint(process,y0,ts,args=heaters)
y0 = y[1] # record new initial condition
# calculate new OP with PID
PV1[i] = y[1][0] # record T1 PV
PV2[i] = y[1][1] # record T2 PV
iae += np.abs(SP1[i]-PV1[i]) + np.abs(SP2[i]-PV2[i])
dt = t[i] - t[i-1] # calculate time step
# PID for loop 1
e1[i] = SP1[i] - PV1[i] # calculate error = SP - PV
P1[i] = Kc1 * e1[i] # calculate proportional term
I1[i] = I1[i-1] + (Kc1/tauI) * e1[i] * dt # calculate integral term
        D1[i] = -Kc1 * tauD * (PV1[i]-PV1[i-1])/dt # calculate derivative
FF1[i] = Kff1 * (PV2[i]-PV1[i])
OP1[i] = P1[i] + I1[i] + D1[i] + FF1[i] # calculate new output
if OP1[i]>=100:
OP1[i] = 100.0
I1[i] = I1[i-1] # reset integral
if OP1[i]<=0:
OP1[i] = 0.0
I1[i] = I1[i-1] # reset integral
# PID for loop 2
e2[i] = SP2[i] - PV2[i] # calculate error = SP - PV
P2[i] = Kc2 * e2[i] # calculate proportional term
I2[i] = I2[i-1] + (Kc2/tauI) * e2[i] * dt # calculate integral term
D2[i] = -Kc2 * tauD * (PV2[i]-PV2[i-1])/dt # calculate derivative
FF2[i] = Kff2 * (PV1[i]-PV2[i])
OP2[i] = P2[i] + I2[i] + D2[i] + FF2[i] # calculate new output
if OP2[i]>=100:
OP2[i] = 100.0
I2[i] = I2[i-1] # reset integral
if OP2[i]<=0:
OP2[i] = 0.0
I2[i] = I2[i-1] # reset integral
# plot PID response
plt.figure(1,figsize=(15,7))
plt.subplot(2,2,1)
plt.plot(t,SP1,'k-',linewidth=2,label='Setpoint 1 (SP)')
plt.plot(t,PV1,'r:',linewidth=2,label='Temperature 1 (PV)')
plt.ylabel(r'T $(^oC)$')
plt.text(100,35,'Integral Abs Error: ' + str(np.round(iae,2)))
plt.text(400,35,r'$K_{c1}$: ' + str(np.round(Kc,1)))
plt.text(400,30,r'$\tau_I$: ' + str(np.round(tauI,0)) + ' sec')
plt.text(400,25,r'$\tau_D$: ' + str(np.round(tauD,0)) + ' sec')
plt.text(400,20,r'$K_{ff}$: ' + str(np.round(Kff1,2)))
plt.ylim([15,70])
plt.legend(loc=1)
plt.subplot(2,2,2)
plt.plot(t,SP2,'k-',linewidth=2,label='Setpoint 2 (SP)')
plt.plot(t,PV2,'r:',linewidth=2,label='Temperature 2 (PV)')
plt.ylabel(r'T $(^oC)$')
plt.text(20,65,r'$K_{c2}$: ' + str(np.round(Kc*2,1)))
plt.text(20,60,r'$\tau_I$: ' + str(np.round(tauI,0)) + ' sec')
plt.text(20,55,r'$\tau_D$: ' + str(np.round(tauD,0)) + ' sec')
plt.text(20,50,r'$K_{ff}$: ' + str(np.round(Kff2,2)))
plt.ylim([15,70])
plt.legend(loc=1)
plt.subplot(2,2,3)
plt.plot(t,OP1,'b--',linewidth=2,label='Heater 1 (OP)')
plt.legend(loc='best')
plt.xlabel('time (sec)')
plt.subplot(2,2,4)
plt.plot(t,OP2,'b--',linewidth=2,label='Heater 2 (OP)')
plt.legend(loc='best')
plt.xlabel('time (sec)')
print('PID with Feedforward Simulator: Adjust Kc, tauI, tauD, and Kff ' + \
'to achieve lowest Integral Abs Error')
# ITAE Setpoint Tracking PI Tuning
Kc = (0.586/Kp)*(thetap/taup)**(-0.916); tauI = taup/(1.03-0.165*(thetap/taup))
print(f'ITAE Recommended: Kc={Kc:4.2f}, tauI={tauI:5.1f}, tauD=0, Kff={Kff:4.2f}')
# IMC Aggressive PID Tuning
tauc = max(0.1*taup,0.8*thetap); Kc = (1/Kp)*(taup+0.5*thetap)/(tauc+0.5*thetap)
tauI = taup+0.5*thetap; tauD = taup*thetap/(2*taup+thetap); Kff=-Kd/Kp
print(f'IMC Recommended: Kc={Kc:4.2f}, tauI={tauI:5.1f}, tauD={tauD:4.2f}, Kff={Kff:4.2f}')
Kc_slide = wg.FloatSlider(value=Kc,min=0.0,max=50.0,step=1.0)
tauI_slide = wg.FloatSlider(value=tauI,min=20.0,max=250.0,step=1.0)
tauD_slide = wg.FloatSlider(value=tauD,min=0.0,max=20.0,step=1.0)
Kff_slide = wg.FloatSlider(value=Kff,min=-0.5,max=0.5,step=0.1)
wg.interact(pidPlot, Kc=Kc_slide, tauI=tauI_slide, tauD=tauD_slide,Kff=Kff_slide)
print('')
```
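The ITAE setpoint-tracking correlations used in the cell above are easy to wrap in a helper for reuse; a short sketch (the Kp, taup, thetap values in the usage line are illustrative, not the fitted results):

```
def itae_pi(Kp, taup, thetap):
    """ITAE setpoint-tracking PI tuning correlations (same form as above)."""
    Kc = (0.586/Kp) * (thetap/taup)**(-0.916)
    tauI = taup / (1.03 - 0.165*(thetap/taup))
    return Kc, tauI

Kc, tauI = itae_pi(Kp=0.8, taup=150.0, thetap=10.0)
```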
### Test Interacting PID Controllers with TCLab
```
import tclab
import numpy as np
import time
import matplotlib.pyplot as plt
from scipy.integrate import odeint
#-----------------------------------------
# PID controller performance for the TCLab
#-----------------------------------------
# PID Parameters
Kc = 5.0
tauI = 120.0 # sec
tauD = 2.0 # sec
Kff = -0.3
# Animate Plot?
animate = True
if animate:
try:
from IPython import get_ipython
from IPython.display import display,clear_output
get_ipython().run_line_magic('matplotlib', 'inline')
ipython = True
print('IPython Notebook')
except:
ipython = False
print('Not IPython Notebook')
#-----------------------------------------
# PID Controller with Feedforward
#-----------------------------------------
# inputs ---------------------------------
# sp = setpoint
# pv = current temperature
# pv_last = prior temperature
# ierr = integral error
# dt = time increment between measurements
# outputs --------------------------------
# op = output of the PID controller
# P = proportional contribution
# I = integral contribution
# D = derivative contribution
def pid(sp,pv,pv_last,ierr,dt,d,cid):
# Parameters in terms of PID coefficients
if cid==1:
# controller 1
KP = Kc
Kf = Kff
else:
# controller 2
KP = Kc * 2.0
Kf = Kff * 2.0
KI = Kc/tauI
KD = Kc*tauD
# ubias for controller (initial heater)
op0 = 0
# upper and lower bounds on heater level
ophi = 100
oplo = 0
# calculate the error
error = sp-pv
# calculate the integral error
ierr = ierr + KI * error * dt
# calculate the measurement derivative
dpv = (pv - pv_last) / dt
# calculate the PID output
P = KP * error
I = ierr
D = -KD * dpv
    FF = Kf * d   # use the controller-specific feedforward gain set above
op = op0 + P + I + D + FF
# implement anti-reset windup
if op < oplo or op > ophi:
I = I - KI * error * dt
# clip output
op = max(oplo,min(ophi,op))
# return the controller output and PID terms
return [op,P,I,D,FF]
# save txt file with data and set point
# t = time
# u1,u2 = heaters
# y1,y2 = temperatures
# sp1,sp2 = setpoints
def save_txt(t, u1, u2, y1, y2, sp1, sp2):
data = np.vstack((t, u1, u2, y1, y2, sp1, sp2)) # vertical stack
data = data.T # transpose data
top = ('Time,Q1,Q2,T1,T2,TSP1,TSP2')
np.savetxt('validate.txt', data, delimiter=',',\
header=top, comments='')
# Connect to Arduino
a = tclab.TCLab()
# Wait until temperature is below 25 degC
print('Check that temperatures are < 25 degC before starting')
i = 0
while a.T1>=25.0 or a.T2>=25.0:
print(f'Time: {i} T1: {a.T1} T2: {a.T2}')
i += 10
time.sleep(10)
# Turn LED on
print('LED On')
a.LED(100)
# Run time in minutes
run_time = 10.0
# Number of cycles
loops = int(60.0*run_time)
tm = np.zeros(loops)
# Heater set point steps
Tsp1 = np.ones(loops) * a.T1
Tsp2 = np.ones(loops) * a.T2 # set point (degC)
Tsp1[10:] = 60.0 # step up
Tsp1[400:] = 40.0 # step down
Tsp2[150:] = 50.0 # step up
Tsp2[350:] = 35.0 # step down
T1 = np.ones(loops) * a.T1 # measured T (degC)
T2 = np.ones(loops) * a.T2 # measured T (degC)
# heater values (0 - 100%), set by the PID loop below
Q1 = np.ones(loops) * 0.0
Q2 = np.ones(loops) * 0.0
if not animate:
print('Running Main Loop. Ctrl-C to end.')
print(' Time SP1 PV1 Q1 SP2 PV2 Q2 IAE')
print(('{:6.1f} {:6.2f} {:6.2f} ' + \
'{:6.2f} {:6.2f} {:6.2f} {:6.2f} {:6.2f}').format( \
tm[0],Tsp1[0],T1[0],Q1[0],Tsp2[0],T2[0],Q2[0],0.0))
# Main Loop
start_time = time.time()
prev_time = start_time
dt_error = 0.0
# Integral error
ierr1 = 0.0
ierr2 = 0.0
# Integral absolute error
iae = 0.0
if not ipython:
plt.figure(figsize=(10,7))
plt.ion()
plt.show()
try:
for i in range(1,loops):
# Sleep time
sleep_max = 1.0
sleep = sleep_max - (time.time() - prev_time) - dt_error
if sleep>=1e-4:
time.sleep(sleep-1e-4)
else:
print('exceeded max cycle time by ' + str(abs(sleep)) + ' sec')
time.sleep(1e-4)
# Record time and change in time
t = time.time()
dt = t - prev_time
if (sleep>=1e-4):
dt_error = dt-sleep_max+0.009
else:
dt_error = 0.0
prev_time = t
tm[i] = t - start_time
        # Read temperatures in degC
T1[i] = a.T1
T2[i] = a.T2
# Disturbances
d1 = T1[i] - 23.0
d2 = T2[i] - 23.0
# Integral absolute error
iae += np.abs(Tsp1[i]-T1[i]) + np.abs(Tsp2[i]-T2[i])
# Calculate PID output
[Q1[i],P,ierr1,D,FF] = pid(Tsp1[i],T1[i],T1[i-1],ierr1,dt,d2,1)
[Q2[i],P,ierr2,D,FF] = pid(Tsp2[i],T2[i],T2[i-1],ierr2,dt,d1,2)
# Write output (0-100)
a.Q1(Q1[i])
a.Q2(Q2[i])
if not animate:
# Print line of data
print(('{:6.1f} {:6.2f} {:6.2f} ' + \
'{:6.2f} {:6.2f} {:6.2f} {:6.2f} {:6.2f}').format( \
tm[i],Tsp1[i],T1[i],Q1[i],Tsp2[i],T2[i],Q2[i],iae))
else:
if ipython:
plt.figure(figsize=(10,7))
else:
plt.clf()
# Update plot
ax=plt.subplot(2,1,1)
ax.grid()
plt.plot(tm[0:i],Tsp1[0:i],'k--',label=r'$T_1$ set point')
plt.plot(tm[0:i],T1[0:i],'b.',label=r'$T_1$ measured')
plt.plot(tm[0:i],Tsp2[0:i],'k-',label=r'$T_2$ set point')
plt.plot(tm[0:i],T2[0:i],'r.',label=r'$T_2$ measured')
plt.ylabel(r'Temperature ($^oC$)')
plt.text(0,65,'IAE: ' + str(np.round(iae,2)))
plt.legend(loc=4)
plt.ylim([15,70])
ax=plt.subplot(2,1,2)
ax.grid()
plt.plot(tm[0:i],Q1[0:i],'b-',label=r'$Q_1$')
plt.plot(tm[0:i],Q2[0:i],'r:',label=r'$Q_2$')
plt.ylim([-10,110])
plt.ylabel('Heater (%)')
plt.legend(loc=1)
plt.xlabel('Time (sec)')
if ipython:
clear_output(wait=True)
display(plt.gcf())
else:
plt.draw()
plt.pause(0.05)
# Turn off heaters
a.Q1(0)
a.Q2(0)
a.close()
# Save text file
save_txt(tm,Q1,Q2,T1,T2,Tsp1,Tsp2)
# Save Plot
if not animate:
plt.figure(figsize=(10,7))
ax=plt.subplot(2,1,1)
ax.grid()
plt.plot(tm,Tsp1,'k--',label=r'$T_1$ set point')
plt.plot(tm,T1,'b.',label=r'$T_1$ measured')
plt.plot(tm,Tsp2,'k-',label=r'$T_2$ set point')
plt.plot(tm,T2,'r.',label=r'$T_2$ measured')
plt.ylabel(r'Temperature ($^oC$)')
plt.text(0,65,'IAE: ' + str(np.round(iae,2)))
plt.legend(loc=4)
ax=plt.subplot(2,1,2)
ax.grid()
plt.plot(tm,Q1,'b-',label=r'$Q_1$')
plt.plot(tm,Q2,'r:',label=r'$Q_2$')
plt.ylabel('Heater (%)')
plt.legend(loc=1)
plt.xlabel('Time (sec)')
plt.savefig('PID_Control.png')
# Allow user to end loop with Ctrl-C
except KeyboardInterrupt:
# Disconnect from Arduino
a.Q1(0)
a.Q2(0)
print('Shutting down')
a.close()
save_txt(tm[0:i],Q1[0:i],Q2[0:i],T1[0:i],T2[0:i],Tsp1[0:i],Tsp2[0:i])
plt.savefig('PID_Control.png')
# Make sure serial connection closes even with an error
except:
# Disconnect from Arduino
a.Q1(0)
a.Q2(0)
print('Error: Shutting down')
a.close()
save_txt(tm[0:i],Q1[0:i],Q2[0:i],T1[0:i],T2[0:i],Tsp1[0:i],Tsp2[0:i])
plt.savefig('PID_Control.png')
raise
print('PID test complete')
print('Kc: ' + str(Kc))
print('tauI: ' + str(tauI))
print('tauD: ' + str(tauD))
print('Kff: ' + str(Kff))
```
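The `pid()` function used on hardware above can be unit-tested offline before connecting to the TCLab. The sketch below reproduces the same update law (anti-reset windup included, feedforward omitted) as a standalone function with the notebook's tuning values as defaults:

```
def pid_step(sp, pv, pv_last, ierr, dt, Kc=5.0, tauI=120.0, tauD=2.0,
             oplo=0.0, ophi=100.0):
    """One discrete PID update with anti-reset windup (mirrors pid() above,
    without the feedforward term)."""
    KI, KD = Kc/tauI, Kc*tauD
    error = sp - pv
    ierr = ierr + KI*error*dt          # accumulate integral
    P = Kc*error
    D = -KD*(pv - pv_last)/dt          # derivative on measurement
    op = P + ierr + D
    if op < oplo or op > ophi:
        ierr = ierr - KI*error*dt      # undo accumulation when saturated
    op = max(oplo, min(ophi, op))      # clip output
    return op, ierr

# large initial error: output saturates at 100% and the integral is held back
op, ierr = pid_step(sp=60.0, pv=23.0, pv_last=23.0, ierr=0.0, dt=1.0)
```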
```
# Multimodal model for the preprocessed data
%cd /content/drive/MyDrive
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pandas as pd
from sklearn.model_selection import train_test_split
synth_df=pd.read_csv("nlpGendata.csv")
X=synth_df.values[:,:-1]
y=synth_df.values[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.20, random_state=42)
geo_X_train=X_train[:,0:4]
geo_X_test=X_test[:,0:4]
url_X_train=X_train[:,4:461]
url_X_test=X_test[:,4:461]
origurl_X_train=X_train[:,461:956]
origurl_X_test=X_test[:,461:956]
ancurl_X_train=X_train[:,956:1428]
ancurl_X_test=X_test[:,956:1428]
nlp_X_train=X_train[:,1428:1429]
nlp_X_test=X_test[:,1428:1429]
geo_input=keras.Input(shape=(4,),name='geo')
geo_x=layers.Dense(10,activation='sigmoid')(geo_input)
# geo_x=layers.Dense(16,activation='sigmoid')(geo_x)
geo_output=layers.Dense(1,activation='sigmoid')(geo_x)
url_input=keras.Input(shape=(457,),name='url')
url_x=layers.Dense(500,activation='sigmoid')(url_input)
url_x=layers.Dense(50,activation='sigmoid')(url_x)
url_output=layers.Dense(1,activation='sigmoid')(url_x)
origurl_input=keras.Input(shape=(495,),name='origurl')
origurl_x=layers.Dense(550,activation='sigmoid')(origurl_input)
# origurl_x=layers.Dense(55,activation='sigmoid')(origurl_x)
origurl_output=layers.Dense(1,activation='sigmoid')(origurl_x)
ancurl_input=keras.Input(shape=(472,),name='ancurl')
ancurl_x=layers.Dense(550,activation='sigmoid')(ancurl_input)
# ancurl_x=layers.Dense(60,activation='sigmoid')(ancurl_x)
ancurl_output=layers.Dense(1,activation='sigmoid')(ancurl_x)
nlp_input=keras.Input(shape=(1,),name='nlp-score')
# nlp_output = nlp_input
# alt_x=layers.Dense(64,activation='sigmoid')(alt_input)
# alt_x=layers.Dense(25,activation='sigmoid')(alt_x)
nlp_output=nlp_input
# cap_input=keras.Input(shape=(19,),name='cap')
# cap_x=layers.Dense(16,activation='sigmoid')(cap_input)
# cap_x=layers.Dense(8,activation='sigmoid')(cap_x)
# cap_output=layers.Dense(1,activation='sigmoid')(cap_x)
xf1=layers.concatenate([geo_output,url_output,origurl_output,ancurl_output,nlp_output])
xf1=layers.Dense(20,activation='sigmoid')(xf1)
result=layers.Dense(1, name="ad/nonad")(xf1)
model=keras.Model(inputs=[geo_input,url_input,origurl_input,ancurl_input,nlp_input],outputs=[result])
keras.utils.plot_model(model, "dbz_super_2.png", show_shapes=True)
model.summary()
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
# keras.losses.CategoricalCrossentropy(from_logits=True),
],
metrics=['accuracy'],
)
# scratch helper: print the per-modality test-set variable names
s='geo_X_test'
print(s.replace("geo","geo"),",")
print(s.replace("geo","url"),",",)
print(s.replace("geo","origurl"),",",)
print(s.replace("geo","ancurl"),",",)
print(s.replace("geo","alt"),",",)
print(s.replace("geo","cap"),"",)
model.fit(
{"geo": geo_X_train, "url": url_X_train ,
"origurl": origurl_X_train ,
"ancurl": ancurl_X_train ,
"nlp-score": nlp_X_train },
{"ad/nonad": y_train},
epochs=200,
batch_size=64,
validation_split=0.125,
verbose=1,
)
test_score= model.evaluate([geo_X_test ,
url_X_test ,
origurl_X_test ,
ancurl_X_test ,
nlp_X_test ],y_test,verbose=2)
model.save('dbz_super_1.h5')
from keras.models import load_model
prevModel = load_model('nlp_model_5.h5')
train_score= prevModel.evaluate([geo_X_train ,
url_X_train ,
origurl_X_train ,
ancurl_X_train ,
                        nlp_X_train],y_train,verbose=1)
print(train_score)
test_score= prevModel.evaluate([geo_X_test ,
url_X_test ,
origurl_X_test ,
ancurl_X_test ,
                        nlp_X_test ],y_test,verbose=2)
print(test_score)
# print('Test accuracy:', score[1])
# score = prevModel.evaluate(X_val,y_val, verbose=1)
# print(score)
# print('Validation accuracy:', score[1])
prevModel.summary()
keras.utils.plot_model(prevModel, "multiModal_zwei.png", show_shapes=True)
# import matplotlib.pyplot as plt
# x=[0.9777, 0.9774, 0.9767, 0.9777, 0.9763, 0.9777, 0.9767, 0.9767, 0.9763, 0.9781, 0.977, 0.9767, 0.9767, 0.9763, 0.9777, 0.9777, 0.9763, 0.9774, 0.9767, 0.976, 0.9763, 0.9777, 0.9774, 0.9767, 0.9763, 0.977, 0.976, 0.9777, 0.9774, 0.9763, 0.9777, 0.9763, 0.9767, 0.9774, 0.9767, 0.9763, 0.9767, 0.976, 0.9774, 0.9774, 0.9781, 0.977, 0.977, 0.977, 0.9767, 0.977, 0.977, 0.9774, 0.9774, 0.9781, 0.9777, 0.976, 0.9781, 0.9777, 0.9774, 0.9763, 0.9774, 0.9774, 0.977, 0.9763, 0.9774, 0.9777, 0.9777, 0.976, 0.9781, 0.9774, 0.9781, 0.9767, 0.977, 0.976, 0.9777, 0.9774, 0.9756, 0.9767, 0.9767, 0.9774, 0.977, 0.9777, 0.9763, 0.977, 0.977, 0.9774, 0.9774, 0.9774, 0.9781, 0.9781, 0.9774, 0.977, 0.9774, 0.9781, 0.9774, 0.9763, 0.977, 0.9777, 0.9774, 0.9767, 0.9774, 0.9777, 0.977, 0.9774, 0.9781, 0.9774, 0.977, 0.9784, 0.977, 0.9763, 0.9781, 0.9763, 0.9784, 0.976, 0.9781, 0.9767, 0.9763, 0.9774, 0.9777, 0.977, 0.9777, 0.9767, 0.9781, 0.976, 0.9774, 0.9756, 0.9767, 0.9767, 0.977, 0.9767, 0.9777, 0.9767, 0.9781, 0.9777, 0.9777, 0.977, 0.9774, 0.9781, 0.9763, 0.9777, 0.9777, 0.9784, 0.9777, 0.9767, 0.9781, 0.9784, 0.9777, 0.977, 0.9767, 0.9767, 0.9784, 0.9774, 0.977, 0.9774, 0.9777, 0.9781, 0.9774, 0.977, 0.9774, 0.977, 0.9777, 0.9781, 0.9774, 0.977, 0.9784, 0.9781, 0.9784, 0.9774, 0.9777, 0.9777, 0.9777, 0.977, 0.9777, 0.9774, 0.9763, 0.9774, 0.9777, 0.977, 0.977, 0.9774, 0.9781, 0.9788, 0.977, 0.9777, 0.9774, 0.977, 0.9767, 0.9767, 0.977, 0.9781, 0.9777, 0.9774, 0.9784, 0.9774, 0.9781, 0.9777, 0.9774, 0.9781, 0.9781, 0.9777, 0.9763, 0.9774, 0.9777, 0.9763, 0.9781, 0.9767, 0.9774, 0.9784, 0.977, 0.9777, 0.9774, 0.9781, 0.9767, 0.9763, 0.9774, 0.9777, 0.9777, 0.977, 0.977, 0.977, 0.977, 0.9781, 0.977, 0.9774, 0.9781, 0.9774, 0.9774, 0.9781, 0.976]
# yy=[0.9805, 0.9781, 0.9732, 0.9757, 0.9781, 0.9732, 0.9781, 0.9805, 0.9757, 0.9732, 0.9781, 0.9757, 0.9805, 0.9732, 0.9757, 0.9732, 0.9781, 0.9732, 0.9732, 0.9805, 0.9805, 0.9781, 0.9757, 0.9757, 0.9781, 0.9781, 0.9757, 0.9757, 0.9805, 0.9805, 0.9684, 0.9757, 0.9732, 0.9781, 0.9781, 0.9708, 0.9757, 0.9757, 0.9805, 0.9805, 0.9805, 0.9781, 0.9732, 0.9781, 0.9757, 0.9732, 0.9805, 0.9708, 0.9781, 0.9757, 0.9684, 0.9781, 0.9757, 0.9781, 0.9781, 0.9805, 0.9805, 0.9805, 0.9781, 0.9781, 0.9805, 0.9805, 0.9684, 0.9757, 0.9757, 0.9781, 0.9781, 0.9757, 0.9684, 0.9757, 0.9781, 0.9757, 0.9781, 0.9781, 0.9732, 0.9732, 0.9781, 0.9757, 0.9805, 0.9805, 0.9757, 0.9757, 0.9781, 0.9781, 0.9757, 0.9732, 0.9757, 0.9781, 0.9781, 0.9757, 0.9805, 0.9757, 0.9805, 0.9781, 0.9757, 0.9805, 0.9708, 0.9805, 0.9805, 0.9757, 0.9757, 0.9805, 0.9805, 0.9757, 0.9805, 0.9805, 0.9732, 0.9805, 0.9805, 0.9781, 0.9781, 0.9781, 0.9781, 0.9757, 0.9805, 0.9805, 0.9781, 0.9781, 0.9781, 0.9781, 0.9708, 0.9781, 0.9805, 0.9805, 0.9757, 0.9805, 0.9805, 0.9805, 0.9805, 0.9805, 0.9805, 0.9805, 0.9805, 0.9757, 0.9805, 0.9757, 0.9781, 0.9781, 0.9757, 0.9781, 0.9805, 0.9781, 0.9781, 0.9732, 0.9781, 0.9805, 0.9659, 0.9805, 0.9708, 0.9781, 0.9708, 0.9781, 0.9781, 0.9805, 0.9805, 0.9805, 0.9781, 0.9781, 0.9805, 0.9781, 0.9757, 0.9805, 0.9781, 0.9805, 0.9781, 0.9757, 0.9757, 0.9805, 0.9805, 0.9805, 0.9781, 0.9805, 0.9805, 0.9781, 0.9781, 0.9781, 0.9781, 0.9781, 0.9781, 0.9781, 0.9805, 0.9781, 0.9805, 0.9805, 0.9781, 0.9781, 0.9757, 0.9805, 0.9757, 0.9781, 0.9805, 0.9781, 0.9781, 0.9805, 0.9781, 0.9732, 0.9781, 0.9781, 0.9805, 0.9781, 0.9781, 0.9805, 0.9781, 0.9781, 0.9781, 0.9757, 0.9805, 0.9781, 0.9781, 0.9805, 0.9781, 0.9805, 0.9781, 0.9805, 0.9781, 0.9781, 0.9805, 0.9805, 0.9805, 0.9781, 0.9781, 0.9757, 0.9805, 0.9781, 0.9781]
# rr=range(1,226)
# # plt.plot(rr,yy)
# plt.plot(rr,x)
```
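The multimodal cell slices one wide feature matrix into per-modality blocks by fixed column ranges before feeding each block to its own input branch. A small synthetic sketch of the same pattern (the column boundaries mirror the slices above; the data here is random placeholder):

```
import numpy as np

# column ranges mirroring the notebook's geo/url/origurl/ancurl/nlp slices
bounds = {'geo': (0, 4), 'url': (4, 461), 'origurl': (461, 956),
          'ancurl': (956, 1428), 'nlp': (1428, 1429)}
X = np.random.rand(8, 1429)
blocks = {name: X[:, a:b] for name, (a, b) in bounds.items()}
widths = {name: blk.shape[1] for name, blk in blocks.items()}
```

The resulting block widths (4, 457, 495, 472, 1) are exactly the `shape` arguments of the `keras.Input` layers in the model.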
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
train = pd.read_csv('Train.csv')
test = pd.read_csv('Test.csv')
train.head()
test.head()
train.info()
def points(row):
return row.replace("." , ":")
train.STA = train.STA.apply(points)
test.STA = test.STA.apply(points)
# Note: Series.replace with plain strings only matches entire cell values,
# so the four calls below are effectively no-ops for substring matches; the
# apply(points) calls above do the actual "." -> ":" substitution.
train.DATOP = train.DATOP.replace("." , ":")
train.STD = train.STD.replace("." , ":")
train.STA = train.STA.replace("." , ":")
test.STA = test.STA.replace("." , ":")
test.DATOP = pd.to_datetime(test.DATOP)
test.STD = pd.to_datetime(test.STD)
test.STA = pd.to_datetime(test.STA)
train.DATOP = pd.to_datetime(train.DATOP)
train.STD = pd.to_datetime(train.STD)
train.STA = pd.to_datetime(train.STA)
train['day'] = train.DATOP.dt.day
train['month'] = train.DATOP.dt.month
train['year'] = train.DATOP.dt.year
train['daydep'] = train.STD.dt.day
train['dayarr'] = train.STA.dt.day
test['day'] = test.DATOP.dt.day
test['month'] = test.DATOP.dt.month
test['year'] = test.DATOP.dt.year
test['daydep'] = test.STD.dt.day
test['dayarr'] = test.STA.dt.day
test['sta_hour'] = test.STA.dt.hour
test['sta_minute'] = test.STA.dt.minute
train['sta_hour'] = train.STA.dt.hour
train['sta_minute'] = train.STA.dt.minute
test['std_hour'] = test.STD.dt.hour
test['std_minute'] = test.STD.dt.minute
train['std_hour'] = train.STD.dt.hour
train['std_minute'] = train.STD.dt.minute
train['hr_sin_sta'] = np.sin(train['sta_hour']*(2.*np.pi/24))
train['hr_cos_sta'] = np.cos(train['sta_hour']*(2.*np.pi/24))
train['mn_sin_sta'] = np.sin(train['sta_minute']*(2.*np.pi/60))
train['mn_cos_sta'] = np.cos(train['sta_minute']*(2.*np.pi/60))
train['hr_sin_std'] = np.sin(train['std_hour']*(2.*np.pi/24))
train['hr_cos_std'] = np.cos(train['std_hour']*(2.*np.pi/24))
train['mn_sin_std'] = np.sin(train['std_minute']*(2.*np.pi/60))
train['mn_cos_std'] = np.cos(train['std_minute']*(2.*np.pi/60))
test['hr_sin_sta'] = np.sin(test['sta_hour']*(2.*np.pi/24))
test['hr_cos_sta'] = np.cos(test['sta_hour']*(2.*np.pi/24))
test['mn_sin_sta'] = np.sin(test['sta_minute']*(2.*np.pi/60))
test['mn_cos_sta'] = np.cos(test['sta_minute']*(2.*np.pi/60))
test['hr_sin_std'] = np.sin(test['std_hour']*(2.*np.pi/24))
test['hr_cos_std'] = np.cos(test['std_hour']*(2.*np.pi/24))
test['mn_sin_std'] = np.sin(test['std_minute']*(2.*np.pi/60))
test['mn_cos_std'] = np.cos(test['std_minute']*(2.*np.pi/60))
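# Sanity check for the cyclic time encoding above: hour 0 and hour 24 land on
# the same (sin, cos) point, so the feature is continuous across midnight,
# unlike the raw hour column.
assert np.isclose(np.sin(0*(2.*np.pi/24)), np.sin(24*(2.*np.pi/24)))
assert np.isclose(np.cos(0*(2.*np.pi/24)), np.cos(24*(2.*np.pi/24)))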
train.STATUS.value_counts().plot(kind = 'bar')
def status_rare(row):
if row != "ATA" and row != "SCH":
return "Other"
else:
return row
train.STATUS = train.STATUS.apply(status_rare)
test.STATUS = test.STATUS.apply(status_rare)
"""train.STATUS = np.where(train.STATUS == "ATA" , 1, 0)
test.STATUS = np.where(test.STATUS == "ATA" , 1, 0)"""
# Scheduled flight duration in minutes, computed directly from the timedelta
# (avoids passing a Timedelta series to pd.to_datetime)
train['total_time'] = (train.STA - train.STD).dt.total_seconds() // 60
test['total_time'] = (test.STA - test.STD).dt.total_seconds() // 60
train.head()
# ten most frequent stations; rarer ones are grouped as "Other" below
top_stations = ['TUN', 'DJE', 'ORY', 'MIR', 'MRS', 'LYS', 'NCE', 'ALG', 'MXP', 'IST']
train['same_city'] = np.where(train['DEPSTN'] == train['ARRSTN'] ,1 ,0)
test['same_city'] = np.where(test['DEPSTN'] == test['ARRSTN'] ,1 ,0)
def extract_plan_hi(row):
    return row.split(" ")[1][:3]

def extract_plan(row):
    return row.split(" ")[0]
train['airplane_type'] = train['AC'].apply(extract_plan_hi)
test['airplane_type'] = test['AC'].apply(extract_plan_hi)
train['airplane_city'] = train['AC'].apply(extract_plan)
test['airplane_city'] = test['AC'].apply(extract_plan)
train['airplane_city'] = np.where(train['airplane_city'] == 'TU', 1, 0)
test['airplane_city'] = np.where(test['airplane_city'] == 'TU', 1, 0)
train['airplane_type'].value_counts().index
plt.figure(figsize = (15 , 8))
sns.scatterplot(x = 'target' , y = 'total_time' , data = train)
sns.boxplot(train.target)
train.corrwith(train['target']).plot.bar(figsize = (20,10), grid = True, title = 'Correlation with ETA')
def city_other(row):
    if row not in ['TUN', 'DJE', 'ORY', 'MIR', 'MRS', 'LYS', 'NCE', 'ALG', 'MXP', 'IST']:
        return "Other"
    else:
        return row
test.DEPSTN = test.DEPSTN.apply(city_other)
train.DEPSTN = train.DEPSTN.apply(city_other)
test.ARRSTN = test.ARRSTN.apply(city_other)
train.ARRSTN = train.ARRSTN.apply(city_other)
test.DEPSTN.value_counts()
train.isnull().sum()
sns.distplot(train.target.transform(np.sqrt))
(train.target == 0).sum()
trainset = train.drop(['ID' , 'DATOP' , 'FLTID' , 'STD' , 'STA' , 'AC'] , axis = 1)
testset = test.drop(['ID' , 'DATOP' , 'FLTID' , 'STD' , 'STA', 'AC'] , axis = 1)
#trainset = trainset[trainset['target'] != 0]
all_ = pd.concat([trainset.drop(['target'], axis = 1), testset])
all_data = pd.get_dummies(all_)
Xt = all_data.iloc[:trainset.shape[0],:]
Xts = all_data.iloc[trainset.shape[0]:,:]
trainset.head()
X = Xt.copy()
y = trainset.target
"""X = trainset.drop(['target'], axis = 1)
y = trainset.target"""
for c in X.columns:
    col_type = X[c].dtype
    if col_type == 'object' or col_type.name == 'category':
        X[c] = X[c].astype('category')
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.3 , random_state = 70)
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(x_train, y_train)
from sklearn.metrics import mean_squared_error
y_pred = reg.predict(x_test)
np.sqrt(mean_squared_error(y_pred , y_test))
from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor()
reg.fit(x_train, y_train)
from sklearn.metrics import mean_squared_error
y_pred = reg.predict(x_test)
np.sqrt(mean_squared_error(y_pred , y_test))
x_train.columns
from xgboost import XGBRegressor
reg = XGBRegressor(objective = 'reg:squarederror')
reg.fit(x_train, y_train)
from sklearn.metrics import mean_squared_error
y_pred = reg.predict(x_test)
np.sqrt(mean_squared_error(y_pred , y_test))
params = {
'application': 'regression', # the target (delay in minutes) is continuous, so this is a regression task
# 'num_class' : 1, # used for multi-class problems
'boosting': 'gbdt', # traditional gradient boosting decision tree
'num_iterations': 100,
'learning_rate': 0.05,
'num_leaves': 62,
'device': 'cpu', # you can use GPU to achieve faster learning
'max_depth': -1, # <0 means no limit
'max_bin': 510, # Small number of bins may reduce training accuracy but can deal with over-fitting
'lambda_l1': 5, # L1 regularization
'lambda_l2': 10, # L2 regularization
'metric' : 'rmse', # evaluation metric matching the regression objective
'subsample_for_bin': 200, # number of samples for constructing bins
'subsample': 1, # subsample ratio of the training instance
'colsample_bytree': 0.8, # subsample ratio of columns when constructing the tree
'min_split_gain': 0.5, # minimum loss reduction required to make further partition on a leaf node of the tree
'min_child_weight': 1, # minimum sum of instance weight (hessian) needed in a leaf
'min_child_samples': 5# minimum number of data needed in a leaf
}
from lightgbm import LGBMRegressor
reg = LGBMRegressor(boosting_type= 'gbdt',
objective = 'regression',
n_jobs = 5,
silent = True,
max_depth = params['max_depth'],
max_bin = params['max_bin'],
subsample_for_bin = params['subsample_for_bin'],
subsample = params['subsample'],
min_split_gain = params['min_split_gain'],
min_child_weight = params['min_child_weight'],
min_child_samples = params['min_child_samples'])
reg.fit(x_train, y_train)
from sklearn.metrics import mean_squared_error
y_pred = reg.predict(x_test)
np.sqrt(mean_squared_error(y_pred , y_test))
from sklearn.model_selection import cross_val_score
result = cross_val_score(LGBMRegressor(), X, y, cv=10 , scoring = 'neg_mean_squared_error')
result
np.sqrt(np.abs(result.mean()))
from sklearn.model_selection import GridSearchCV
params = {'n_estimators': [10, 50, 100, 150, 200, 400, 600, 1000]}
# Initialize LGBM and GridSearch
grid = GridSearchCV(LGBMRegressor(), params ,n_jobs = -1, verbose=True, cv = 5,scoring = 'neg_mean_squared_error' )
grid.fit(X, y)
print(grid.best_score_)
print(grid.best_params_)
def above(x):
    if x < 20:
        return 0
    else:
        return x
resu = pd.DataFrame({"Pred" : y_pred , "Test":y_test})
aa = resu.Pred.apply(above).values
resu[resu['Test'] < 10].sort_values("Pred")
for c in testset.columns:
    col_type = testset[c].dtype
    if col_type == 'object' or col_type.name == 'category':
        testset[c] = testset[c].astype('category')
result = reg.predict(Xts)
orders = test.ID
all_data = list(zip(orders, result.reshape(1,-1)[0].astype('float32')))
final_result = pd.DataFrame(all_data, columns=['ID', 'target'])
final_result.head()
sns.distplot(final_result.target)
final_result.describe()
final_result['target'] = final_result['target'].apply(above)
final_result[final_result['target']< 0]
def relu(x):
    return x * (x > 0)

final_result['target'] = final_result['target'].transform(relu) # negative predicted delays are clipped to zero
final_result['target'] = final_result['target'].astype(float)
final_result.to_csv('first_tunis_air_3_lgbm_15.csv' , index = False)
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten,Dropout, BatchNormalization
NN_model = Sequential()
# The Input Layer :
NN_model.add(Dense(64, kernel_initializer='normal',input_dim = x_train.shape[1], activation='relu'))
NN_model.add(BatchNormalization())
NN_model.add(Dropout(0.2))
NN_model.add(Dense(64, kernel_initializer='normal',activation='relu'))
NN_model.add(BatchNormalization())
NN_model.add(Dropout(0.2))
NN_model.add(Dense(1, kernel_initializer='normal', activation='relu'))
# Compile the network :
NN_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mean_absolute_error'])
NN_model.summary()
NN_model.fit(x_train, y_train, epochs=50, batch_size=256, validation_split = 0.2)
NN_model.evaluate(x_test, y_test)
```
| github_jupyter |
# Project 3: Implement SLAM
---
## Project Overview
In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!
SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.
Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world
You can implement helper functions as you see fit, but your function must return `mu`. The vector `mu` should have (x, y) coordinates interlaced; for example, if there were 2 poses and 2 landmarks, `mu` would look like the following, where `P` is the robot position and `L` the landmark position:
```
mu = matrix([[Px0],
[Py0],
[Px1],
[Py1],
[Lx0],
[Ly0],
[Lx1],
[Ly1]])
```
You can see that `mu` holds the poses first, `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider an `n x 1` matrix to be a vector.
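With this interlaced layout, pose `i` occupies rows `2*i` and `2*i+1`, and landmark `j` occupies rows `2*(N+j)` and `2*(N+j)+1`. A minimal sketch of unpacking such a vector (the numbers are made up for illustration):

```python
import numpy as np

# hypothetical result for N = 2 poses and 2 landmarks:
# [Px0, Py0, Px1, Py1, Lx0, Ly0, Lx1, Ly1]
mu = np.array([[50.0], [50.0], [62.1], [48.7], [35.2], [80.9], [10.4], [22.6]])
N = 2

poses = [(mu[2*i].item(), mu[2*i + 1].item()) for i in range(N)]
landmarks = [(mu[2*(N + j)].item(), mu[2*(N + j) + 1].item()) for j in range(2)]

print(poses)      # [(50.0, 50.0), (62.1, 48.7)]
print(landmarks)  # [(35.2, 80.9), (10.4, 22.6)]
```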
## Generating an environment
In a real SLAM problem, you may be given a map that contains information about landmark locations; in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.
---
## Create the world
Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds!
`data` holds the sensor measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.
#### Helper functions
You will be working with the `robot` class that may look familiar from the first notebook.
In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
```
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which the robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
```
### A note on `make_data`
The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:
1. Instantiating a robot (using the robot class)
2. Creating a grid world with landmarks in it
**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**
The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.
In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:
```
measurement = data[i][0]
motion = data[i][1]
```
```
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
```
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far, and with a certain amount of accuracy in the measured distance between its location and the location of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of `slam`.
## Initialize Constraints
One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />
In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.
<img src='images/constraints2D.png' width=50% height=50% />
You may also choose to create two of each omega and xi (one for x and one for y positions).
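Before tackling the 2D version, it can help to see the whole pipeline on a toy 1D case. The sketch below (an illustration, not project code) anchors one pose at 50, adds a single motion constraint of +5 between two poses, and solves for both:

```python
import numpy as np

# two 1D poses: x0 (anchored at 50) and x1, related by the motion x1 - x0 = 5
omega = np.zeros((2, 2))
xi = np.zeros((2, 1))

# initial-position constraint: x0 = 50, with full confidence (weight 1)
omega[0][0] += 1
xi[0] += 50

# motion constraint x1 - x0 = 5, weight 1/noise with noise = 1
omega[0][0] += 1
omega[0][1] += -1
omega[1][0] += -1
omega[1][1] += 1
xi[0] += -5
xi[1] += 5

mu = np.linalg.inv(omega) @ xi
print(mu.ravel())  # [50. 55.]
```

The 2D project version follows exactly this pattern, just with twice as many rows per pose (x and y) and additional landmark constraints.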
### TODO: Write a function that initializes omega and xi
Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
```
def initialize_constraints(N, num_landmarks, world_size):
    ''' This function takes in a number of time steps N, number of landmarks, and a world_size,
        and returns initialized constraint matrices, omega and xi.'''

    ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
    size = 2*N + 2*num_landmarks

    ## TODO: Define the constraint matrix, Omega, with two initial "strength" values
    ## for the initial x, y location of our robot
    omega = np.zeros((size, size))
    omega[0][0] = 1
    omega[1][1] = 1

    ## TODO: Define the constraint *vector*, xi
    ## you can assume that the robot starts out in the middle of the world with 100% confidence
    xi = np.zeros((size, 1))
    xi[0] = world_size / 2.
    xi[1] = world_size / 2.

    return omega, xi
```
### Test as you go
It's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.
Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.
**Please change the test values of N, landmarks, and world_size and see the results.** Be careful not to use these values as input into your final `slam` function.
This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
```
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
```
---
## SLAM inputs
In addition to `data`, your slam function will also take in:
* N - The number of time steps that a robot will be moving and sensing
* num_landmarks - The number of landmarks in the world
* world_size - The size (w/h) of your world
* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`
* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`
#### A note on noise
Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.
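To make the indexing concrete, here is a sketch (with illustrative values only) of a single x-direction motion update in the interlaced 2D layout: pose `t`'s x-value lives at row `2*t`, pose `t+1`'s at `2*(t+1)`, and each entry is adjusted by `1/motion_noise`:

```python
import numpy as np

N, num_landmarks = 3, 1
size = 2*N + 2*num_landmarks
omega = np.zeros((size, size))
xi = np.zeros((size, 1))

# one motion of dx = 4.0 between pose 0 and pose 1, with motion noise 2.0
t, dx, motion_noise = 0, 4.0, 2.0

# x-rows of pose t and pose t+1 in the interlaced layout
r0, r1 = 2*t, 2*(t + 1)

omega[r0][r0] += 1/motion_noise
omega[r0][r1] += -1/motion_noise
omega[r1][r0] += -1/motion_noise
omega[r1][r1] += 1/motion_noise
xi[r0] += -dx/motion_noise
xi[r1] += dx/motion_noise

print(omega[r0][r0], omega[r0][r1])   # 0.5 -0.5
print(xi[r0].item(), xi[r1].item())   # -2.0 2.0
```

The y-direction update is identical with every row index shifted by one, and measurement updates follow the same pattern between a pose row and a landmark row.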
### TODO: Implement Graph SLAM
Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!
#### Updating with motion and measurements
With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$
**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
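One implementation note: the formula uses the explicit inverse, which is fine for these small worlds, but `np.linalg.solve(omega, xi)` computes the same `mu` without ever forming the inverse, which is generally faster and more numerically stable. A quick sketch on a small hand-built system:

```python
import numpy as np

omega = np.array([[2.0, -1.0],
                  [-1.0,  1.0]])
xi = np.array([[45.0],
               [5.0]])

mu_inv = np.linalg.inv(omega) @ xi     # explicit inverse, as in the formula above
mu_solve = np.linalg.solve(omega, xi)  # solves omega @ mu = xi directly

print(np.allclose(mu_inv, mu_solve))  # True
```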
```
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmark locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):

    ## TODO: Use your initialization to create constraint matrices, omega and xi
    omega, xi = initialize_constraints(N, num_landmarks, world_size)

    ## TODO: Iterate through each time step in the data
    ## get all the motion and measurement data as you iterate
    for time_step in range(N-1):

        ## TODO: update the constraint matrix/vector to account for all *measurements*
        ## this should be a series of additions that take into account the measurement noise
        measurements = data[time_step][0]
        for L_measurement in measurements:
            L_index, measurement_x, measurement_y = L_measurement
            # x-coordinate constraints between pose and landmark
            omega[2*time_step][2*time_step] += 1/measurement_noise
            omega[2*time_step][2*(N+L_index)] += -1/measurement_noise
            omega[2*(N+L_index)][2*time_step] += -1/measurement_noise
            omega[2*(N+L_index)][2*(N+L_index)] += 1/measurement_noise
            # y-coordinate constraints
            omega[2*time_step+1][2*time_step+1] += 1/measurement_noise
            omega[2*time_step+1][2*(N+L_index)+1] += -1/measurement_noise
            omega[2*(N+L_index)+1][2*time_step+1] += -1/measurement_noise
            omega[2*(N+L_index)+1][2*(N+L_index)+1] += 1/measurement_noise
            xi[2*time_step] += -measurement_x/measurement_noise
            xi[2*(N+L_index)] += measurement_x/measurement_noise
            xi[2*time_step+1] += -measurement_y/measurement_noise
            xi[2*(N+L_index)+1] += measurement_y/measurement_noise

        ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
        dx, dy = data[time_step][1]
        # update omega for x (motion constraints are weighted by motion_noise, not measurement_noise)
        omega[2*time_step][2*time_step] += 1/motion_noise
        omega[2*time_step][2*(time_step+1)] += -1/motion_noise
        omega[2*(time_step+1)][2*time_step] += -1/motion_noise
        omega[2*(time_step+1)][2*(time_step+1)] += 1/motion_noise
        # update omega for y
        omega[2*time_step+1][2*time_step+1] += 1/motion_noise
        omega[2*time_step+1][2*time_step+3] += -1/motion_noise
        omega[2*time_step+3][2*time_step+1] += -1/motion_noise
        omega[2*time_step+3][2*time_step+3] += 1/motion_noise
        # update xi
        xi[2*time_step] += -dx/motion_noise
        xi[2*(time_step+1)] += dx/motion_noise
        xi[2*time_step+1] += -dy/motion_noise
        xi[2*time_step+3] += dy/motion_noise

    ## TODO: After iterating through all the data
    ## Compute the best estimate of poses and landmark positions
    ## using the formula, omega_inverse * Xi
    omega_inv = np.linalg.inv(np.matrix(omega))
    mu = omega_inv*xi

    return mu
```
## Helper functions
To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists.
Then, we define a function that nicely prints out these lists; we will call both of these in the next step.
```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
    # create a list of poses
    poses = []
    for i in range(N):
        poses.append((mu[2*i].item(), mu[2*i+1].item()))

    # create a list of landmarks (note: num_landmarks is read from the notebook's global scope)
    landmarks = []
    for i in range(num_landmarks):
        landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))

    # return the completed lists
    return poses, landmarks

def print_all(poses, landmarks):
    print('\n')
    print('Estimated Poses:')
    for i in range(len(poses)):
        print('['+', '.join('%.3f'%p for p in poses[i])+']')
    print('\n')
    print('Estimated Landmarks:')
    for i in range(len(landmarks)):
        print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```
## Run SLAM
Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!
### What to Expect
The `data` that is generated is random, but you did specify the number of time steps, `N`, that the robot was expected to move, and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate positions for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.
With these values in mind, you should expect to see a result that displays two lists:
1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.
2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.
#### Landmark Locations
If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
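A quick way to quantify that comparison is the Euclidean distance between each true and estimated landmark. The sketch below uses the true landmarks from this notebook's run together with the estimates reported later as placeholder values; substitute the printout from your own `make_data` call and your own `slam` output:

```python
import numpy as np

true_landmarks = [[33, 90], [33, 48], [1, 58], [51, 57], [6, 19]]
# placeholder estimates from one run; replace with your own results
est_landmarks = [[31.751, 89.241], [32.115, 47.626], [0.212, 57.660],
                 [49.661, 56.108], [4.924, 18.469]]

for i, (t, e) in enumerate(zip(true_landmarks, est_landmarks)):
    err = np.hypot(t[0] - e[0], t[1] - e[1])
    print('landmark %d: error = %.3f' % (i, err))
```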
```
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if(mu is not None):
    # get the lists of poses and landmarks
    # and print them out
    poses, landmarks = get_poses_landmarks(mu, N)
    print_all(poses, landmarks)
```
## Visualize the constructed world
Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the position of landmarks, created from only motion and measurement data!
**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
```
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
    # print out the last pose
    print('Last pose: ', poses[-1])
    # display the last position of the robot *and* the landmark positions
    display_world(int(world_size), poses[-1], landmarks)
```
### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?
You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower or higher noise parameters?
**Answer**: (Write your answer here.)
- According to the first cell, the robot's true final pose is [x=45.62829, y=51.61174], which is very close to the last estimated pose ([43.42201470380701, 51.40250976015007]). The true locations of the landmarks are [[33, 90], [33, 48], [1, 58], [51, 57], [6, 19]], while the estimates computed in the cell above are [[31.751, 89.241], [32.115, 47.626], [0.212, 57.660], [49.661, 56.108], [4.924, 18.469]], which are very similar to the true locations. The SLAM algorithm is able to determine a good approximation of the locations of the robot and the landmarks. The values computed with SLAM differ slightly from the true values because of the measurement and motion errors.
- The greater the measurement and motion errors, the less reliable the measurements are, and the less precise the location estimates become (for both the robot and the landmarks).
- If we increased N, we would refine the location estimates, as more time steps yield more constraints. The higher N is, the more accurate the last pose and the landmarks' locations become.
## Testing
To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases in total); your output should be **close to or exactly** identical to the given results. If there are minor discrepancies, they could be a matter of floating-point accuracy or of the inverse-matrix calculation.
### Submit your project
If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
```
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, 
-18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 
10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Run the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
```
---
```
import os
import cv2
import math
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, fbeta_score
from keras import optimizers
from keras import backend as K
from keras.models import Sequential, Model
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, EarlyStopping
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation, BatchNormalization, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(0)
seed(0)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
train = pd.read_csv('../input/imet-2019-fgvc6/train.csv')
labels = pd.read_csv('../input/imet-2019-fgvc6/labels.csv')
test = pd.read_csv('../input/imet-2019-fgvc6/sample_submission.csv')
train["attribute_ids"] = train["attribute_ids"].apply(lambda x:x.split(" "))
train["id"] = train["id"].apply(lambda x: x + ".png")
test["id"] = test["id"].apply(lambda x: x + ".png")
print('Number of train samples: ', train.shape[0])
print('Number of test samples: ', test.shape[0])
print('Number of labels: ', labels.shape[0])
display(train.head())
display(labels.head())
```
### Model parameters
```
# Model parameters
BATCH_SIZE = 64
EPOCHS = 30
LEARNING_RATE = 0.0001
HEIGHT = 64
WIDTH = 64
CANAL = 3
N_CLASSES = labels.shape[0]
ES_PATIENCE = 5
DECAY_DROP = 0.5
DECAY_EPOCHS = 10
classes = list(map(str, range(N_CLASSES)))
def f2_score_thr(threshold=0.5):
    def f2_score(y_true, y_pred):
        beta = 2
        y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())
        true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)
        predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)
        possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)
        precision = true_positives / (predicted_positives + K.epsilon())
        recall = true_positives / (possible_positives + K.epsilon())
        return K.mean(((1+beta**2)*precision*recall) / ((beta**2)*precision+recall+K.epsilon()))
    return f2_score

def custom_f2(y_true, y_pred):
    beta = 2
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())
    f2 = (1+beta**2)*p*r / (p*beta**2 + r + 1e-15)
    return f2

def step_decay(epoch):
    initial_lrate = LEARNING_RATE
    drop = DECAY_DROP
    epochs_drop = DECAY_EPOCHS
    lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate
train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.25)
train_generator = train_datagen.flow_from_dataframe(
    dataframe=train,
    directory="../input/imet-2019-fgvc6/train",
    x_col="id",
    y_col="attribute_ids",
    batch_size=BATCH_SIZE,
    shuffle=True,
    class_mode="categorical",
    classes=classes,
    target_size=(HEIGHT, WIDTH),
    subset='training')

valid_generator = train_datagen.flow_from_dataframe(
    dataframe=train,
    directory="../input/imet-2019-fgvc6/train",
    x_col="id",
    y_col="attribute_ids",
    batch_size=BATCH_SIZE,
    shuffle=True,
    class_mode="categorical",
    classes=classes,
    target_size=(HEIGHT, WIDTH),
    subset='validation')

test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
    dataframe=test,
    directory="../input/imet-2019-fgvc6/test",
    x_col="id",
    target_size=(HEIGHT, WIDTH),
    batch_size=1,
    shuffle=False,
    class_mode=None)
```
### Model
```
def create_model(input_shape, n_out):
    input_tensor = Input(shape=input_shape)
    base_model = VGG16(weights=None, include_top=False,
                       input_tensor=input_tensor)
    base_model.load_weights('../input/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')
    x = GlobalAveragePooling2D()(base_model.output)
    x = Dropout(0.5)(x)
    x = Dense(1024, activation='relu')(x)
    x = Dropout(0.5)(x)
    final_output = Dense(n_out, activation='sigmoid', name='final_output')(x)
    model = Model(input_tensor, final_output)
    return model

# warm up model
# first: train only the top layers (which were randomly initialized)
model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)
for layer in model.layers:
    layer.trainable = False
for i in range(-5, 0):
    model.layers[i].trainable = True
optimizer = optimizers.Adam(lr=LEARNING_RATE)
metrics = ["accuracy", "categorical_accuracy"]
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=ES_PATIENCE)
callbacks = [es]
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
```
#### Train top layers
```
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=STEP_SIZE_TRAIN,
                              validation_data=valid_generator,
                              validation_steps=STEP_SIZE_VALID,
                              epochs=5,
                              callbacks=callbacks,
                              verbose=2,
                              max_queue_size=16, workers=3, use_multiprocessing=True)
```
#### Fine-tune the complete model
```
# train all layers
for i in range(-9, 0):
    model.layers[i].trainable = True
metrics = ["accuracy", "categorical_accuracy"]
lrate = LearningRateScheduler(step_decay)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=(ES_PATIENCE))
callbacks = [es, lrate]
# optimizer = optimizers.Adam(lr=0.0003)
# model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
# model.summary()
# STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
# STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
# history = model.fit_generator(generator=train_generator,
# steps_per_epoch=STEP_SIZE_TRAIN,
# validation_data=valid_generator,
# validation_steps=STEP_SIZE_VALID,
# epochs=(EPOCHS * 0.9),
# callbacks=callbacks,
# verbose=2,
# max_queue_size=16, workers=3, use_multiprocessing=True)
optimizer = optimizers.SGD(lr=0.0001, momentum=0.9)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=STEP_SIZE_TRAIN,
                              validation_data=valid_generator,
                              validation_steps=STEP_SIZE_VALID,
                              # epochs=(EPOCHS * 0.1),
                              epochs=EPOCHS,
                              callbacks=callbacks,
                              verbose=2,
                              max_queue_size=16, workers=3, use_multiprocessing=True)
```
### Build complete model
### Complete model graph loss
```
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))
ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history.history['acc'], label='Train Accuracy')
ax2.plot(history.history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')
ax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')
ax3.legend(loc='best')
ax3.set_title('Cat Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### Find best threshold value
```
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_VALID+1):
    im, lbl = next(valid_generator)
    scores = model.predict(im, batch_size=valid_generator.batch_size)
    lastFullValPred = np.append(lastFullValPred, scores, axis=0)
    lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
print(lastFullValPred.shape, lastFullValLabels.shape)
def find_best_fixed_threshold(preds, targs, do_plot=True):
    score = []
    thrs = np.arange(0, 0.5, 0.01)
    for thr in thrs:
        score.append(custom_f2(targs, (preds > thr).astype(int)))
    score = np.array(score)
    pm = score.argmax()
    best_thr, best_score = thrs[pm], score[pm].item()
    print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')
    if do_plot:
        plt.plot(thrs, score)
        plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
        plt.text(best_thr+0.03, best_score-0.01, f'$F_{2}=${best_score:.3f}', fontsize=14);
        plt.show()
    return best_thr, best_score
threshold, best_score = find_best_fixed_threshold(lastFullValPred, lastFullValLabels, do_plot=True)
```
### Apply model to test set and output predictions
```
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)
predictions = []
for pred_ar in preds:
    valid = []
    for idx, pred in enumerate(pred_ar):
        if pred > threshold:
            valid.append(idx)
    if len(valid) == 0:
        valid.append(np.argmax(pred_ar))
    predictions.append(valid)
filenames = test_generator.filenames
label_map = {valid_generator.class_indices[k] : k for k in valid_generator.class_indices}
results = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})
results['id'] = results['id'].map(lambda x: str(x)[:-4])
results['attribute_ids'] = results['attribute_ids'].apply(lambda x: list(map(label_map.get, x)))
results["attribute_ids"] = results["attribute_ids"].apply(lambda x: ' '.join(x))
results.to_csv('submission.csv',index=False)
results.head(10)
```
---
# <font color='orange'>Word2Vec Introduction </font>
The Natural Language Processing lecture for data science introduced in class discusses topics such as bag-of-words and N-grams, with a key note by the professor mentioning that "this lecture may be subject to change in the upcoming years, as massive improvements in 'off-the-shelf' language understanding are ongoing." This tutorial will introduce you to one such deep learning method, known as Word2Vec, which aims to learn the meaning of words rather than relying on heuristic approaches.
Limiting the scope of the tutorial and not going into deeper mathematics, word2vec can be explained as a model where each word is represented as a vector in N-dimensional space. During the initialization stage, the vocabulary of the corpus is randomly initialized as vectors. Then, using vector addition, if we add a tiny bit of one vector to another, the vectors move closer to each other; similarly, subtraction moves them further apart. So while training the model we can pull a vector closer to the words it co-occurs with within a specified window, and away from all the other words. By the end of training, words that are similar in meaning end up clustering near each other, and many more patterns arise automatically.
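As a quick illustrative sketch (my own toy example, not part of this tutorial's pipeline), the "pulling together" idea can be seen directly with numpy: nudging one randomly initialized vector a small step toward another strictly increases their cosine similarity.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.RandomState(0)
target = rng.randn(50)    # randomly initialized word vector
context = rng.randn(50)   # vector of a word it co-occurs with

before = cosine(target, context)
target = target + 0.1 * context   # pull the target a tiny bit toward the context
after = cosine(target, context)
print(before, after)   # the similarity increases after the pull
```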
## <font color='orange'>Technique</font>
Word2Vec uses a technique called <b>skip-gram with negative sampling</b> to map the semantic space of a text corpus. I have tried to explain this using a passage taken from the dataset used in our tutorial.
“This system is sometimes known as a presidential system because the government is answerable<font color='green'> <i>solely and exclusively to a </i></font><font color='red'><i><b> 'presiding' </b></i></font><font color='green'><i>activist head of state, and</i></font> is selected by and on occasion dismissed by the head of state without reference to the legislature.”
<b>Step 1</b>
Let’s take a word from the above passage as the target, and the words occurring close to the target as the context (five words on either side of the target).
<i>Target</i> = presiding
<i>Context</i> = solely and exclusively to a; activist head of state, and
<b>Step 2</b>
Each word in the above passage is represented as a vector in n-dimensional space. In the beginning, these vector values can be randomly initialized.
Note: the dimensionality is decided at the time of model creation, given as one of the model's parameters.
<b>Step 3</b>
Our goal is to move the target and context vectors closer to each other in vector space. This is done by pulling the target and context vectors together by a small amount, maximizing the log-sigmoid of the dot product of the target and context vectors.
<b>Step 4</b>
Another goal is to move the target vector away from words that are not in its context. To achieve this, words are randomly sampled from the rest of the corpus and pushed away from the target vector, minimizing the log-sigmoid of the dot product of the target and the non-context sample words.
The above four steps are repeated for each target word. In the end, vectors for words that are often used together will be pulled towards each other, and vectors for words that are rarely used together will be pushed apart.
In our example, if the notion of ‘presiding’ often resembles ‘activist head’ in the corpus, then the vectors for these two concepts will be close to one another.
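The four steps above can be sketched as a tiny, purely pedagogical numpy implementation of one skip-gram negative-sampling update (an illustrative sketch of mine, not gensim's actual code): gradient ascent on the log-sigmoid of the target-context dot product, and on the log-sigmoid of the negated target-negative dot products.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_update(target, context, negatives, lr=0.1):
    """One skip-gram negative-sampling step (pedagogical sketch).

    Pulls the target toward its context (ascend log sigmoid(t.c)) and
    pushes it away from sampled non-context words (ascend log sigmoid(-t.n)).
    """
    # d/dt of log sigmoid(t.c) is (1 - sigmoid(t.c)) * c
    grad = (1.0 - sigmoid(target @ context)) * context
    # d/dt of log sigmoid(-t.n) is -sigmoid(t.n) * n
    for n in negatives:
        grad -= sigmoid(target @ n) * n
    return target + lr * grad

rng = np.random.RandomState(1)
t = rng.randn(20) * 0.1                         # target vector ('presiding')
c = rng.randn(20) * 0.1                         # context vector ('activist')
negs = [rng.randn(20) * 0.1 for _ in range(5)]  # random non-context samples

t_new = sgns_update(t, c, negs)   # one full update
t_pos = sgns_update(t, c, [])     # positive pair only: t.c strictly increases
print(t_pos @ c > t @ c)          # True
```

In a real trainer this update is applied over every (target, context) pair in the corpus, which is what produces the clustering behaviour described above.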
## <font color='blue'>Data Collection</font>
I used the [ClueWeb datasets](http://lemurproject.org/clueweb12/), which consist of around 50 million Wikipedia documents, and indexed them using [Apache Lucene](https://lucene.apache.org/core/). After indexing, I used the Lucene search engine to extract the top 1000 documents for the query "president united states". These documents are then stored in a local file "wikidocs".
I am using these 1000 documents to build the word2vec model and find interesting relationships between words.
Please note that the entire process of indexing and retrieving the documents is beyond the scope of the tutorial.
The file wikidocs can be downloaded from https://www.dropbox.com/s/rnu33c4j6ywnu6z/wikidocs?dl=0. Make sure to save the file in same directory as the notebook.
```
# Load the documents extracted from the Wikipedia corpus;
# errors='replace' substitutes any undecodable bytes (Python 3)
with open("wikidocs", errors='replace') as f:
    html = f.read()
# Print the length of the corpus
print("The length of the corpus is " + str(len(html)) + ".")
```
## <font color='blue'> Install Libraries</font>
The data collected is in XML form, which needs to be cleaned to plain-text format. I am using the BeautifulSoup library to parse the Wikipedia documents and the NLTK tokenizer to convert them to tokens.
Please note that these libraries have already been introduced in class and used in homeworks. Hence, running the below commands should work for you.
```
import urllib
import re
import pandas as pd
import nltk.data
from nltk.corpus import stopwords
from bs4 import BeautifulSoup
nltk.download('punkt')
```
## <font color='blue'> Parsing </font>
<b>Convert the docs into list of sentences</b>
The Word2Vec toolkit in Python expects input in the form of a list of sentences, each of which is a list of words.
<b>Remove punctuation and lowercase all words</b>
The words are converted to lowercase and punctuation is removed using regular expressions.
<b>Stopwords</b>
Removal of stopwords is optional. It is better not to remove stopwords for the first training run of the word2vec algorithm, as the algorithm relies on the broader context of the sentence to produce high-quality word vectors. However, if the results are not satisfactory, the model can be trained again with stop words removed.
```
# Function to convert a document to a sequence of words,
# optionally removing stop words.
# Returns a list of words.
def review_to_wordlist(review, remove_stopwords=False):
    # 1. Remove HTML (Python 3 strings are already unicode, so no re-encoding is needed)
    review_text = BeautifulSoup(review, "lxml").get_text()
    # 2. Remove non-letters
    review_text = re.sub("[^a-zA-Z]", " ", review_text)
    # 3. Convert words to lower case and split them
    words = review_text.lower().split()
    # 4. Optionally remove stop words (False by default)
    if remove_stopwords:
        stops = set(stopwords.words("english"))
        words = [w for w in words if w not in stops]
    # 5. Return a list of words
    return words
```
Function to split a document into parsed sentences using NLTK tokenizer.
```
# Function to split a review into parsed sentences. Returns a
# list of sentences, where each sentence is a list of words
def review_to_sentences(review, tokenizer, remove_stopwords=False):
    # 1. Use the NLTK tokenizer to split the paragraph into sentences
    raw_sentences = tokenizer.tokenize(review.strip())
    # 2. Loop over each sentence
    sentences = []
    for raw_sentence in raw_sentences:
        # If a sentence is empty, skip it
        if len(raw_sentence) > 0:
            # Otherwise, call review_to_wordlist to get a list of words
            sentences.append(review_to_wordlist(raw_sentence, remove_stopwords))
    # Return the list of sentences (each sentence is a list of words,
    # so this returns a list of lists)
    return sentences
```
Now we apply these functions to our Wikipedia Corpus
```
#Parse documents to create list of sentences
tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')
sentences = review_to_sentences(html, tokenizer)
print("There are " + str(len(sentences)) + " sentences in our corpus.")
```
Let's observe what a sentence looks like after parsing it through above functions.
```
sentences[100]
```
# <font color='green'>Generating Word2Vec Model</font>
## <font color='red'> Installing the libraries</font>
You can install gensim using pip:
$ pip install --upgrade gensim
If this fails, make sure you’re installing into a writeable location (or use sudo), and have the following dependencies:
Python >= 2.6
NumPy >= 1.3
SciPy >= 0.7
Alternatively, you can use conda package to install gensim, which takes care of all the dependencies.
$ conda install -c anaconda gensim=0.12.4
## <font color='red'>Load libraries for word2vec</font>
After you run all the installs, make sure the following commands work for you:
```
import gensim
from gensim.models import word2vec
from gensim.models import Phrases
from gensim.models import Word2Vec
import logging
```
## <font color='red'>Training the Model </font>
## Logging
Import the built-in logging module and configure it so that Word2Vec creates nice output messages
```
#Using built in logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',\
level=logging.INFO)
```
## Parameters for model
The Word2Vec model requires some parameters for initialization.
<b>size</b>
Size is the number of dimensions you want for the word vectors. If you have an idea of how many topics the corpus covers, you can use that as the size here. For Wikipedia documents I use around 50-100. Usually you will need to experiment with this value and pick the one that gives the best result.
<b>min_count</b>
Terms that occur fewer than min_count times are ignored in calculations. This reduces noise in the vector space. I have used 10 for my experiment. Usually, for a bigger corpus you can experiment with higher values.
<b>window</b>
The maximum distance between the current and predicted word within a sentence. This is explained in the technique section of the tutorial.
<b>downsampling</b>
Threshold for configuring which higher-frequency words are randomly downsampled. Useful range is (0, 1e-5)
```
# Set values for various parameters
num_features = 200 # Word vector dimensionality
min_word_count = 10 # Minimum word count
num_workers = 4 # Number of threads to run in parallel
context = 10 # Context window size
downsampling = 1e-3 # Downsample setting for frequent words
```
## Initialize and train the model
Train the model using the above parameters. This might take some time
```
print("Training model...")
model = word2vec.Word2Vec(sentences, workers=num_workers,
                          size=num_features, min_count=min_word_count,
                          window=context, sample=downsampling)
```
If you don’t plan to train the model any further, calling init_sims will be better for memory.
```
model.init_sims(replace=True)
```
## <font color='red'>Storing & Loading Models</font>
It can be helpful to create a meaningful model name and save the model for later use.
You can load it later using Word2Vec.load()
```
#You can save the model using meaningful name
model_name = "wiki_100features_15word_count"
model.save(model_name)
#Loading the saved model
word2vec_model = gensim.models.Word2Vec.load("wiki_100features_15word_count")
```
## <font color='red'>Investigate the vocabulary</font>
You can either use model.index2word, which gives a list of all the terms in the vocabulary, or model.vocab.keys(), which gives the keys of all the terms used in the model.
```
# List of all the vocabulary terms
vocab = model.index2word
print("Length of vocabulary =", len(vocab))
print(vocab[20:69])
```
Check if the word ‘obama’ exists in the vocabulary:
```
'obama' in model.vocab
```
Check if the word ‘beyonce’ exists in the vocabulary:
```
'beyonce' in model.vocab
```
The vector representation of the word ‘obama’ looks like this:
```
model['obama']
```
Let's test the words similar to "obama"
```
model.most_similar('obama', topn=10)
```
## <font color='red'>Phrases</font>
We can use gensim models.phrases in order to detect common phrases from sentences. For example two single words "new" and "york" can be combined as one word "new york".
```
bigram = gensim.models.Phrases(sentences)
```
Generate the model using the above bigram.
```
new_model = Word2Vec(bigram[sentences], workers=num_workers, \
size=num_features, min_count = min_word_count, \
window = context, sample = downsampling)
```
Let's access the new vocabulary.
```
vocab = list(new_model.vocab.keys())
print("Length of new vocabulary =", len(vocab))
print(vocab[15:55])
```
Check if the phrase ‘dominican republic’ exists in the vocabulary:
```
'dominican_republic' in new_model.vocab
```
Let’s assess the relationship of words in our semantic vector space. For example, which words are most similar to the word ‘republic’?
```
new_model.most_similar('republic', topn=10)
```
What about the phrase "dominican republic"?
```
new_model.most_similar('dominican_republic', topn=10)
```
Do the results differ if we exclude the relationship between republic and dominican_republic?
```
new_model.most_similar(positive=['republic'], negative=['dominican_republic'], topn=10)
```
## <font color='red'>Query Expansion</font>
One application of word embeddings is query expansion: a search engine can expand the query terms to produce better results.
If you recall, the Wikipedia documents were extracted from Lucene search using the query "president united states". Now let's use these three query terms to obtain the expanded terms closest to the query.
Note: This idea is taken from the paper: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/acl2016.pdf
The function below takes each term in the vocabulary and compares it to each query term using a similarity score. The similarity scores are summed over all query terms and the top k terms are returned.
```
# Function to expand a query with its k closest terms
def expand_term(query, k):
    # Get the vocab of the model
    vocab = set(new_model.index2word)
    term_score = {}
    # Split the query terms and convert them to lower case
    query_list = [element.lower() for element in query.split()]
    # Remove stop words from the query
    stops = set(stopwords.words("english"))
    query = [word for word in query_list if word not in stops]
    # Filter the vocab to remove stopwords
    filter_vocab = [word for word in vocab if word not in stops]
    # Sum the similarity scores over all query terms for each vocab term
    for term in filter_vocab:
        term_score[term] = 0.0
        for q in query:
            term_score[term] += new_model.similarity(q, term)
    # Sort by score and keep the top k terms (Python 3: items() instead of iteritems())
    sorted_k_terms = sorted(term_score.items(), key=lambda x: -x[1])[:k]
    # Return the expanded terms of the query
    return [term for term, score in sorted_k_terms]
```
Now, let's test our function to check the result for query "president united states"
```
query = "president united states"
k = 15 #k defines number of expanded terms
result = expand_term(query,k)
print(result)
```
Now let's try for "republic constitution"
```
query = "republic constitution"
k = 15 #k defines number of expanded terms
result = expand_term(query,k)
print(result)
```
## <font color='orange'>Applications</font>
The Word2Vec model as described in this tutorial captures semantic and syntactic relationships among words in a corpus. Hence, it can be used in search engines for synonyms and query expansion, as well as for recommendations (for example, recommending similar movies).
In our experiments, word embeddings do not seem to provide enough discriminative power between related but distinct concepts. This could be due to the smaller corpus size, and also because word embeddings are still at an early stage of development. Hence, there is huge scope for improving the above technique before it can be fully utilized in commercial applications.
That being said, word2vec is extremely interesting, and it's a lot of fun to explore the relationships among different words.
## <font color='orange'>References</font>
[Google's code, writeup, and the accompanying papers](https://code.google.com/archive/p/word2vec/)
[Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf)
[Distributed Representations of Words and Phrases and their Compositionality](https://arxiv.org/pdf/1310.4546.pdf)
[Presentation on word2vec by Tomas Mikolov from Google](https://docs.google.com/file/d/0B7XkCwpI5KDYRWRnd1RzWXQ2TWc/edit)
---
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D4_Calculus/student/W0D4_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 1: Differentiation and Integration
**Week 0, Day 4: Calculus**
**By Neuromatch Academy**
__Content creators:__ John S Butler, Arvind Kumar with help from Ella Batty
__Content reviewers:__ Aderogba Bayo, Tessy Tom, Matt McCann
__Production editors:__ Matthew McCann, Ella Batty
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
In this tutorial, we will cover aspects of calculus that will be frequently used in the main NMA course. We assume that you have some familiarity with calculus but may be a bit rusty or may not have done much practice. Specifically, the objectives of this tutorial are:
* Get an intuitive understanding of derivative and integration operations
* Learn to calculate the derivatives of 1- and 2-dimensional functions/signals numerically
* Become familiar with the concept of the neuron transfer function in 1 and 2 dimensions
* Become familiar with the idea of numerical integration using the Riemann sum
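As a small preview of the numerical approach used in these objectives (a sketch of mine, not part of the tutorial's own code), a derivative can be approximated by finite differences and an integral by a Riemann sum:

```python
import numpy as np

dt = 0.001
t = np.arange(0, np.pi, dt)
f = np.sin(t)

# Forward-difference approximation of df/dt; should be close to cos(t)
df_dt = np.diff(f) / dt

# Left Riemann sum of sin(t) over [0, pi]; the exact integral is 2
riemann = np.sum(f * dt)
print(riemann)  # close to 2.0
```

Shrinking `dt` makes both approximations more accurate, which is the central idea behind the numerical methods in this tutorial.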
```
# @title Video 1: Why do we care about calculus?
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
    from IPython.display import IFrame
    class BiliVideo(IFrame):
        def __init__(self, id, page=1, width=400, height=300, **kwargs):
            self.id = id
            src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
            super(BiliVideo, self).__init__(src, width, height, **kwargs)
    video = BiliVideo(id="BV1F44y1z7Uk", width=854, height=480, fs=1)
    print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
    display(video)

out1 = widgets.Output()
with out1:
    from IPython.display import YouTubeVideo
    video = YouTubeVideo(id="NZwfH_dG2wI", width=854, height=480, fs=1, rel=0)
    print('Video available at https://youtube.com/watch?v=' + video.id)
    display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
---
# Setup
```
# Imports
!pip install sympy --quiet
import numpy as np
import scipy.optimize as opt # import root-finding algorithm
import sympy as sp # Python toolbox for symbolic maths
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # Toolbox for rendering 3D figures
from mpl_toolkits import mplot3d  # Toolbox for rendering 3D figures
# @title Figure Settings
import ipywidgets as widgets # interactive display
from ipywidgets import interact
%config InlineBackend.figure_format = 'retina'
# use NMA plot style
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
my_layout = widgets.Layout()
fig_w, fig_h = 12, 4.5
my_fontsize = 16
my_params = {'axes.labelsize': my_fontsize,
'axes.titlesize': my_fontsize,
'figure.figsize': [fig_w, fig_h],
'font.size': my_fontsize,
'legend.fontsize': my_fontsize-4,
'lines.markersize': 8.,
'lines.linewidth': 2.,
'xtick.labelsize': my_fontsize-2,
'ytick.labelsize': my_fontsize-2}
plt.rcParams.update(my_params)
# @title Plotting Functions
def move_sympyplot_to_axes(p, ax):
    backend = p.backend(p)
    backend.ax = ax
    backend.process_series()
    backend.ax.spines['right'].set_color('none')
    backend.ax.spines['bottom'].set_position('zero')
    backend.ax.spines['top'].set_color('none')
    plt.close(backend.fig)

def plot_functions(function, show_derivative, show_integral):
    # For sympy we first define our symbolic variable
    x, y, z, t, f = sp.symbols('x y z t f')

    # We define our function
    if function == 'Linear':
        f = -2*t
        name = r'$-2t$'
    elif function == 'Parabolic':
        f = t**2
        name = r'$t^2$'
    elif function == 'Exponential':
        f = sp.exp(t)
        name = r'$e^t$'
    elif function == 'Sine':
        f = sp.sin(t)
        name = r'$sin(t)$'
    elif function == 'Sigmoid':
        f = 1/(1 + sp.exp(-(t-5)))
        name = r'$\frac{1}{1+e^{-(t-5)}}$'

    if show_derivative and not show_integral:
        # Calculate the derivative of the function of t
        diff_f = sp.diff(f)
        print('Derivative of', f, 'is ', diff_f)
        p1 = sp.plot(f, diff_f, show=False)
        p1[0].line_color = 'r'
        p1[1].line_color = 'b'
        p1[0].label = 'Function'
        p1[1].label = 'Derivative'
        p1.legend = True
        p1.title = 'Function = ' + name + '\n'
        p1.show()
    elif show_integral and not show_derivative:
        int_f = sp.integrate(f)
        int_f = int_f - int_f.subs(t, -10)
        print('Integral of', f, 'is ', int_f)
        p1 = sp.plot(f, int_f, show=False)
        p1[0].line_color = 'r'
        p1[1].line_color = 'g'
        p1[0].label = 'Function'
        p1[1].label = 'Integral'
        p1.legend = True
        p1.title = 'Function = ' + name + '\n'
        p1.show()
    elif show_integral and show_derivative:
        diff_f = sp.diff(f)
        print('Derivative of', f, 'is ', diff_f)
        int_f = sp.integrate(f)
        int_f = int_f - int_f.subs(t, -10)
        print('Integral of', f, 'is ', int_f)
        p1 = sp.plot(f, diff_f, int_f, show=False)
        p1[0].line_color = 'r'
        p1[1].line_color = 'b'
        p1[2].line_color = 'g'
        p1[0].label = 'Function'
        p1[1].label = 'Derivative'
        p1[2].label = 'Integral'
        p1.legend = True
        p1.title = 'Function = ' + name + '\n'
        p1.show()
    else:
        p1 = sp.plot(f, show=False)
        p1[0].line_color = 'r'
        p1[0].label = 'Function'
        p1.legend = True
        p1.title = 'Function = ' + name + '\n'
        p1.show()

def plot_alpha_func(t, f, df_dt):
    plt.figure()
    plt.subplot(2, 1, 1)
    plt.plot(t, f, 'r', label='Alpha function')
    plt.xlabel('Time (au)')
    plt.ylabel('Voltage')
    plt.title('Alpha function (f(t))')
    # plt.legend()
    plt.subplot(2, 1, 2)
    plt.plot(t, df_dt, 'b', label='Derivative')
    plt.title('Derivative of alpha function')
    plt.xlabel('Time (au)')
    plt.ylabel('df/dt')
    # plt.legend()

def plot_rate_and_gain(I, rate, gain):
    plt.figure()
    plt.subplot(1, 2, 1)
    plt.plot(I, rate)
    plt.xlabel('Injected current (au)')
    plt.ylabel('Output firing rate (normalized)')
    plt.title('Transfer function')
    plt.subplot(1, 2, 2)
    plt.plot(I[0:-1], gain)
    plt.xlabel('Injected current (au)')
    plt.ylabel('Gain')
    plt.title('Gain')

def plot_charge_transfer(t, PSP, numerical_integral):
    fig, axes = plt.subplots(1, 2)
    axes[0].plot(t, PSP)
    axes[0].set(xlabel='t', ylabel='PSP')
    axes[1].plot(t, numerical_integral)
    axes[1].set(xlabel='t', ylabel='Charge Transferred')
```
---
# Section 1: What is differentiation and integration?
```
# @title Video 2: A geometrical interpretation of differentiation and integration
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1sU4y1G7Ru", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="uQjwr9RQaEs", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
This video covers the definition of differentiation and integration, highlights the geometrical interpretation of each, and introduces the idea of eigenfunctions.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
Calculus is a part of mathematics concerned with **continuous change**. There are two branches of calculus: differential calculus and integral calculus.
Differentiation of a function $f(t)$ gives you the derivative of that function $\frac{d(f(t))}{dt}$. A derivative captures how sensitive a function is to slight changes in the input for different ranges of inputs. Geometrically, the derivative of a function at a certain input is the slope of the function at that input. For example, as you drive, the distance traveled changes continuously with time. The derivative of the distance traveled with respect to time is the velocity of the vehicle at each point in time. The velocity tells you the rate of change of the distance traveled at different points in time. If you have slow velocity (a small derivative), the distance traveled doesn't change much for small changes in time. A high velocity (big derivative) means that the distance traveled changes a lot for small changes in time.
The sign of the derivative of a function (or signal) tells whether the signal is increasing or decreasing. For a signal going through changes as a function of time, the derivative will become zero when the signal changes its direction of change (e.g. from increasing to decreasing). That is, at local minimum or maximum values, the slope of the signal will be zero. This property is used in optimizing problems. But we can also use it to find peaks in a signal.
Integration can be thought of as the reverse of differentiation. If we integrate the velocity with respect to time, we can calculate the distance traveled. By integrating a function, we are basically trying to find functions that would have the original one as their derivative. When we integrate a function, our integral will have an added unknown scalar constant, $C$.
For example, if $$ g(t) = 1.5t^2 + 4t - 1$$
our integral function $f(t)$ will be:
$$ f(t) = \int g(t) dt = 0.5t^3 + 2t^2 - t + C$$.
This constant exists because the derivative of a constant is 0, so we cannot know what the constant should be. This is an indefinite integral. If we compute a definite integral, that is, the integral between two limits of the input, we will not have this unknown constant, and the integral of a function will capture the area under the curve of that function between those two limits.
</details>
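As a quick check of the example above (a sketch added here, not part of the original notebook), sympy can integrate $g(t)$, and differentiating the result recovers $g(t)$:

```python
import sympy as sp

t = sp.symbols('t')
g = 1.5*t**2 + 4*t - 1

# Indefinite integral; sympy omits the unknown constant C
f = sp.integrate(g, t)
print('Integral of', g, 'is', f)

# Differentiating the integral recovers the original function
assert sp.simplify(sp.diff(f, t) - g) == 0
```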
### Interactive Demo 1: Geometrical understanding
In the interactive demo below, you can pick different functions to examine in the drop down menu. You can then choose to show the derivative function and/or the integral function.
For the integral, we have chosen the unknown constant $C$ such that the integral function at the left x-axis limit is 0 ($f(t = -10) = 0$). So the integral will reflect the area under the curve starting from that position.
For each function:
* Examine just the function first. Discuss and predict what the derivative and integral will look like. Remember that derivative = slope of function, integral = area under curve from t = -10 to that t.
* Check the derivative - does it match your expectations?
* Check the integral - does it match your expectations?
```
# @markdown Execute this cell to enable the widget
function_options = widgets.Dropdown(
options=['Linear', 'Parabolic', 'Exponential', 'Sine', 'Sigmoid'],
description='Function',
disabled=False,
)
derivative = widgets.Checkbox(
value=False,
description='Show derivative',
disabled=False,
indent=False
)
integral = widgets.Checkbox(
value=False,
description='Show integral',
disabled=False,
indent=False
)
def on_value_change(change):
derivative.value = False
integral.value = False
function_options.observe(on_value_change, names='value')
interact(plot_functions, function = function_options, show_derivative = derivative, show_integral = integral);
```
In the demo above, you may have noticed that the derivative and the integral of the exponential function are the same as the exponential function itself.
Some functions, like the exponential function, when differentiated or integrated, equal a scalar times the same function. This is a similar idea to eigenvectors of a matrix being those that, when multiplied by the matrix, equal a scalar times themselves, as you saw yesterday!
When
\begin{align*}
\frac{d(f(t))}{dt} = a\cdot f(t),
\end{align*}
we say that $f(t)$ is an **eigenfunction** of the derivative operator, where $a$ is a scaling factor. Similarly, when
\begin{align*}
\int f(t)dt = a\cdot f(t),
\end{align*}
we say that $f(t)$ is an **eigenfunction** of the integral operator.
As you can imagine, working with eigenfunctions can make mathematical analysis easy.
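As a small illustration (a sketch added here, not part of the original notebook), $e^{2t}$ is an eigenfunction of the derivative operator with scaling factor $a = 2$:

```python
import sympy as sp

t = sp.symbols('t')
f = sp.exp(2*t)

# d/dt e^{2t} = 2 e^{2t}: the derivative equals the function scaled by a = 2
assert sp.simplify(sp.diff(f, t) - 2*f) == 0
```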
---
# Section 2: Analytical & Numerical Differentiation
```
# @title Video 3: Differentiation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV14g41137d5", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="sHogZISXGuQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
In this section, we will delve into how we actually find the derivative of a function, both analytically and numerically.
## Section 2.1: Analytical Differentiation
When we find the derivative analytically, we are finding the exact formula for the derivative function.
To do this, instead of having to do some fancy math every time, we can often consult [an online resource](https://en.wikipedia.org/wiki/Differentiation_rules) for a list of common derivatives, in this case our trusty friend Wikipedia.
If I told you to find the derivative of $f(t) = t^3$, you could consult that site and find in Section 2.1, that if $f(t) = t^n$, then $\frac{d(f(t))}{dt} = nt^{n-1}$. So you would be able to tell me that the derivative of $f(t) = t^3$ is $\frac{d(f(t))}{dt} = 3t^{2}$.
This list of common derivatives often contains only very simple functions. Luckily, as we'll see in the next two sections, we can often break the derivative of a complex function down into the derivatives of more simple components.
### Section 2.1.1: Product Rule
Sometimes we encounter functions which are the product of two functions that both depend on the variable.
How do we take the derivative of such functions? For this we use the [Product Rule](https://en.wikipedia.org/wiki/Product_rule).
\begin{align}
f(t) = u(t)\cdot v(t)\\
\frac{d(f(t))}{dt} = v\cdot \frac{du}{dt} + u\cdot \frac{dv}{dt}\\
\end{align}
#### Coding Exercise 2.1.1: Derivative of the postsynaptic potential alpha function
Let's use the product rule to get the derivative of the post-synaptic potential alpha function. As we saw in Video 3, the shape of the postsynaptic potential is given by the so called alpha function:
\begin{align*}
f(t) = t \cdot exp(-\frac{t}{\tau})
\end{align*}
Here $f(t)$ is a product of $t$ and $exp(-\frac{t}{\tau})$. The variable $\tau$ is the time constant of the synapse.
We have defined $u(t)$ and $v(t)$ in the code below, in terms of the variable $t$ which is an array of time steps from 0 to 10. Define $\frac{du}{dt}$ and $\frac{dv}{dt}$, then compute the full derivative of the alpha function using the product rule. You can always consult Wikipedia to figure out $\frac{du}{dt}$ and $\frac{dv}{dt}$!
```
########################################################################
## TODO for students
## Complete all ... in code below and remove
raise NotImplementedError("Calculate the derivatives")
########################################################################
# Define time, time constant
t = np.arange(0, 10, .1)
tau = 0.5
# Compute alpha function
f = t * np.exp(-t/tau)
# Define u(t), v(t)
u_t = t
v_t = np.exp(-t/tau)
# Define du/dt, dv/dt
du_dt = ...
dv_dt = ...
# Define full derivative
df_dt = ...
# Visualize
plot_alpha_func(t, f, df_dt)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_636667ff.py)
*Example output:*
<img alt='Solution hint' align='left' width=843 height=303 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/static/W0D4_Tutorial1_Solution_636667ff_0.png>
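For reference, one way to fill in the blanks above, using $\frac{du}{dt} = 1$ and $\frac{dv}{dt} = -\frac{1}{\tau}e^{-t/\tau}$ (reconstructed here as a self-contained sketch; the linked solution is authoritative):

```python
import numpy as np

# Same setup as the exercise
t = np.arange(0, 10, .1)
tau = 0.5
u_t = t
v_t = np.exp(-t/tau)

# Product-rule pieces
du_dt = np.ones_like(t)          # d(t)/dt = 1
dv_dt = -np.exp(-t/tau) / tau    # d(e^{-t/tau})/dt = -(1/tau) e^{-t/tau}

# df/dt = v du/dt + u dv/dt = e^{-t/tau} (1 - t/tau)
df_dt = v_t * du_dt + u_t * dv_dt
```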
### Section 2.1.2: Chain Rule
Many times we encounter situations in which the variable $a$ is changing with time ($t$) and affecting another variable $r$. How can we estimate the derivative of $r$ with respect to $a$ i.e. $\frac{dr}{da} = ?$
To calculate $\frac{dr}{da}$ we use the [Chain Rule](https://en.wikipedia.org/wiki/Chain_rule).
\begin{align}
\frac{dr}{da} = \frac{dr}{dt}\cdot\frac{dt}{da}
\end{align}
That is, we calculate the derivative of both variables with respect to time and divide the time derivative of $r$ by the time derivative of $a$.
We will step back from applications for a second: we can use this to simplify taking derivatives of complex functions, as you will see in the next exercise.
#### Math Exercise 2.1.2: Chain Rule
Let's say that:
$$ r(a) = e^{a^4 + 1} $$
What is $\frac{dr}{da}$? This is a more complex function so we can't simply consult a table of common derivatives. Can you use the chain rule to help?
Hint: we didn't define t but you could set t equal to the function in the exponent
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_a0e42694.py)
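You can check your chain-rule answer with sympy (a sketch added here; the linked solution is authoritative). The result should be $\frac{dr}{da} = 4a^3 e^{a^4+1}$:

```python
import sympy as sp

a = sp.symbols('a')
r = sp.exp(a**4 + 1)

# Chain rule: outer derivative e^t evaluated at t = a^4 + 1, times inner derivative 4a^3
dr_da = sp.diff(r, a)
assert sp.simplify(dr_da - 4*a**3*sp.exp(a**4 + 1)) == 0
```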
### Section 2.1.3: Derivatives in Python using Sympy
There is a useful Python library for getting the analytical derivatives of functions: Sympy. We actually used this in Interactive Demo 1, under the hood.
See the following cell for an example of setting up a sympy function and finding the derivative.
```
# For sympy we first define our symbolic variables
f, t = sp.symbols('f, t')
# Function definition (sigmoid)
f = 1/(1 + sp.exp(-(t-5)))
# Get the derivative
diff_f = sp.diff(f)
# Print the resulting function
print('Derivative of', f, 'is ', diff_f)
```
## Section 2.2: Numerical Differentiation
Formally, the derivative of a function $\mathcal{f}(x)$ at any value $a$ can be approximated by the finite difference formula (FD):
\begin{align*}
FD = \frac{f(a+h) - f(a)}{h}
\end{align*}
As $h\rightarrow 0$, the FD approaches the actual value of the derivative. Let's check this.
*Note that the numerical estimate of the derivative will result
in a time series whose length is one short of the original time series.*
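As a minimal sketch of this convergence (not part of the original notebook), here is the forward difference of $\sin(t)$ at $t = 1$ for shrinking $h$, compared to the analytical derivative $\cos(1)$:

```python
import numpy as np

def finite_difference(f, a, h):
    """Forward finite-difference estimate of f'(a) with step h."""
    return (f(a + h) - f(a)) / h

exact = np.cos(1.0)  # analytical derivative of sin(t) at t = 1
for h in (0.5, 0.1, 0.01):
    est = finite_difference(np.sin, 1.0, h)
    print(f'h = {h:4.2f}  FD = {est:.5f}  |error| = {abs(est - exact):.5f}')
```

The error shrinks roughly in proportion to $h$, which is why smaller step sizes give more accurate derivatives in the demo below.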
### Interactive Demo 2.2: Numerical Differentiation of the Sine Function
Below, we find the numerical derivative of the sine function for different values of $h$ and compare the result to the analytical solution.
- What values of h result in more accurate numerical derivatives?
```
# @markdown *Execute this cell to enable the widget.*
def numerical_derivative_demo(h = 0.2):
# Now lets create a sequence of numbers which change according to the sine function
dt = 0.01
tx = np.arange(-10, 10, dt)
sine_fun = np.sin(tx)
# symbolic differentiation tells us that the derivative of sin(t) is cos(t)
cos_fun = np.cos(tx)
# Numerical derivative using difference formula
n_tx = np.arange(-10,10,h) # create new time axis
n_sine_fun = np.sin(n_tx) # calculate the sine function on the new time axis
sine_diff = (n_sine_fun[1:] - n_sine_fun[0:-1]) / h
fig = plt.figure()
ax = plt.subplot(111)
plt.plot(tx, sine_fun, label='sine function')
plt.plot(tx, cos_fun, label='analytical derivative of sine')
with plt.xkcd():
# notice that numerical derivative will have one element less
plt.plot(n_tx[0:-1], sine_diff, label='numerical derivative of sine')
plt.xlim([-10, 10])
plt.xlabel('Time (au)')
plt.ylabel('f(x) or df(x)/dt')
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ncol=3, fancybox=True)
plt.show()
_ = widgets.interact(numerical_derivative_demo, h = (0.01, 0.5, .02))
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_36cd3b93.py)
## Section 2.3: Transfer Function and Gain of a Neuron
When we inject a constant current (DC) in a neuron, its firing rate changes as a function of strength of the injected current. This is called the **input-output transfer function** or just the *transfer function* or *I/O Curve* of the neuron. For most neurons this can be approximated by a sigmoid function e.g.
\begin{align}
rate(I) = \frac{1}{1+\text{e}^{-a(I-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} + \eta
\end{align}
where $I$ is injected current, $rate$ is the neuron firing rate and $\eta$ is noise (Gaussian noise with zero mean and $\sigma$ standard deviation).
*You will visit this equation in a different context in Week 3*
### Coding Exercise 2.3: Calculating the Transfer Function and Gain of a Neuron
The slope of a neuron's input-output transfer function ($\frac{d(rate(I))}{dI}$) is called the **gain** of the neuron, as it tells you how the neuron output will change if the input is changed.
Estimate the gain of the following neuron transfer function using numerical differentiation. We will use our timestep as $h$.
```
# @markdown *Execute this cell to enable the numerical differentiation function: `numerical_derivative`*
def numerical_derivative(x, h):
'''Numerical derivative calculation
Args:
x: array of numbers
h: time step for differentiation
Returns:
Numerical derivative of f for a time step of h
'''
dxdt = np.zeros(len(x)-1)
dxdt = (x[1:] - x[0:-1])/h
return dxdt
help(numerical_derivative)
def compute_rate_and_gain(I, a, theta, current_timestep):
""" Compute rate and gain of neuron based on parameters
Args:
I (ndarray): different possible values of the current
a (scalar): parameter of the transfer function
theta (scalar): parameter of the transfer function
current_timestep (scalar): the time we're using to take steps
Returns:
(ndarray, ndarray): rate and gain for each possible value of I
"""
########################################################################
## TODO for students
## Complete all ... in code below and remove
raise NotImplementedError("Calculate the gain")
########################################################################
# Compute rate
rate = (1+np.exp(-a*(I-theta)))**-1 - (1+np.exp(a*theta))**-1
# Compute gain
gain = ...
return rate, gain
current_timestep = 0.1
I = np.arange(0, 8, current_timestep)
# Neuron transfer function
a = 1.2 # You can change this value
theta = 5 # You can change this value
# Compute rate and gain
rate, gain = compute_rate_and_gain(I, a, theta, current_timestep)
# Visualize rate and gain
plot_rate_and_gain(I, rate, gain)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_9fc5d678.py)
*Example output:*
<img alt='Solution hint' align='left' width=843 height=303 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/static/W0D4_Tutorial1_Solution_9fc5d678_0.png>
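For reference, the missing `gain` line is a finite difference of the rate with step `current_timestep` (a reconstruction, shown as a self-contained sketch; compare with the linked solution):

```python
import numpy as np

def compute_rate_and_gain(I, a, theta, current_timestep):
    # Sigmoid transfer function, as defined in the exercise
    rate = (1 + np.exp(-a*(I - theta)))**-1 - (1 + np.exp(a*theta))**-1
    # Gain: numerical derivative of the rate with respect to I
    gain = (rate[1:] - rate[:-1]) / current_timestep
    return rate, gain

current_timestep = 0.1
I = np.arange(0, 8, current_timestep)
rate, gain = compute_rate_and_gain(I, a=1.2, theta=5,
                                   current_timestep=current_timestep)
# The gain peaks where the transfer function is steepest, near I = theta
print('Gain peaks near I =', I[np.argmax(gain)])
```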
The slope of the transfer function tells us in which range of inputs the neuron output is most sensitive to changes in its input. Change the parameters of the neuron transfer function (i.e. $a$ and $\theta$) and see if you can predict the value of $I$ for which the neuron has maximal slope and which parameter determines the peak value of the gain.
# Section 3: Functions of Multiple Variables
```
# @title Video 4: Functions of multiple variables
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Ly4y1M77D", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Mp_uNNNiQAI", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
This video covers what partial derivatives are.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
In the previous section, you looked at functions of a single variable $t$ or $x$. In most cases, we encounter functions of multiple variables. For example, in the brain, the firing rate of a neuron is a function of both excitatory and inhibitory input rates. In the following, we will look into how to calculate derivatives of such functions.
When we take the derivative of a multivariable function with respect to one of the variables, it is called the **partial derivative**. For example, if we have a function:
\begin{align}
f(x,y) = x^2 + 2xy + y^2
\end{align}
Then we can define the partial derivatives as
\begin{align}
\frac{\partial(f(x,y))}{\partial x} = 2x + 2y + 0 \\\\
\frac{\partial(f(x,y))}{\partial y} = 0 + 2x + 2y
\end{align}
In the above, the derivative of the last term ($y^2$) with respect to $x$ is zero because it does not change with respect to $x$. Similarly, the derivative of $x^2$ with respect to $y$ is also zero.
</details>
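The partial derivatives in the recap above can be verified with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 2*x*y + y**2

# Treat the other variable as a constant when differentiating
assert sp.simplify(sp.diff(f, x) - (2*x + 2*y)) == 0
assert sp.simplify(sp.diff(f, y) - (2*x + 2*y)) == 0
```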
## Section 3.1: Analytical partial derivatives
Just as with the derivatives we saw earlier, you can get partial derivatives through either an analytical method (finding an exact equation) or a numerical method (approximating).
### Interactive Demo 3.1: Visualize partial derivatives
In the demo below, you can input any function of x and y and then visualize both the function and partial derivatives.
We visualized the 2-dimensional function as a surface plot in which the values of the function are rendered as color. Yellow represents a high value and blue represents a low value. The height of the surface also shows the numerical value of the function. The first plot is that of our function. And the two bottom plots are the derivative surfaces with respect to $x$ and $y$ variables.
1. Ensure you understand how the plots relate to each other - if not, review the above material
2. Can you come up with a function where the partial derivative with respect to x will be a linear plane and the derivative with respect to y will be more curvy?
3. What happens to the partial derivatives if there are no terms involving multiplying x and y together?
```
# @markdown Execute this widget to enable the demo
# Let's use sympy to calculate Partial derivatives of a function of 2-variables
@interact(f2d_string = 'x**2 + 2*x*y + y**2')
def plot_partial_derivs(f2d_string):
f, x, y = sp.symbols('f, x, y')
f2d = eval(f2d_string)
f2d_dx = sp.diff(f2d,x)
f2d_dy = sp.diff(f2d,y)
print('Partial derivative of ', f2d, 'with respect to x is', f2d_dx)
print('Partial derivative of ', f2d, 'with respect to y is', f2d_dy)
p1 = sp.plotting.plot3d(f2d, (x, -5, 5), (y, -5, 5),show=True,xlabel='x', ylabel='y', zlabel='f(x,y)',title='Our function')
p2 = sp.plotting.plot3d(f2d_dx, (x, -5, 5), (y, -5, 5),show=True,xlabel='x', ylabel='y', zlabel='df(x,y)/dx',title='Derivative w.r.t. x')
p3 = sp.plotting.plot3d(f2d_dy, (x, -5, 5), (y, -5, 5),show=True,xlabel='x', ylabel='y', zlabel='df(x,y)/dy',title='Derivative w.r.t. y')
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_5deca1d0.py)
## Section 3.2: Numerical calculation of partial derivatives
Let's take the example of a neuron driven by excitatory and inhibitory inputs. Because this is for illustrative purposes, we will not go in the details of the numerical range of the input and output variables.
In the function below, we assume that the firing rate of a neuron increases monotonically with an increase in excitation and decreases monotonically with an increase in inhibition. The inhibition is modelled as a subtraction. As with the 1-dimensional transfer function, here we assume that we can approximate the transfer function as a sigmoid function.
To evaluate the partial derivatives we can use the same numerical differentiation as before but now we apply it to each row and column separately.
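A minimal standalone sketch of that row/column scheme (a hypothetical example, not the notebook's plotting code): for $f(x,y) = x^2 y$ tabulated on a grid, differencing along each axis approximates $\frac{\partial f}{\partial x} = 2xy$ and $\frac{\partial f}{\partial y} = x^2$:

```python
import numpy as np

h = 0.01
x = np.arange(0, 2, h)
y = np.arange(0, 2, h)
X, Y = np.meshgrid(x, y, indexing='ij')  # rows index x, columns index y
F = X**2 * Y

# Difference along rows (axis 0): partial derivative with respect to x
dF_dx = (F[1:, :] - F[:-1, :]) / h
# Difference along columns (axis 1): partial derivative with respect to y
dF_dy = (F[:, 1:] - F[:, :-1]) / h

# Compare with the analytical partials 2xy and x^2
assert np.allclose(dF_dx, 2*X[:-1, :]*Y[:-1, :], atol=0.05)
assert np.allclose(dF_dy, X[:, :-1]**2, atol=1e-9)
```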
```
# @markdown Execute this cell to visualize the neuron firing rate surface
def sigmoid_function(x,a,theta):
'''
Population activation function.
Expects:
x : the population input
a : the gain of the function
theta : the threshold of the function
Returns:
the population activation response F(x) for input x
'''
# add the expression of f = F(x)
f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1
return f
# Neuron Transfer function
step_size = 0.1
exc_input = np.arange(2,9,step_size)
inh_input = np.arange(0,7,step_size)
exc_a = 1.2
exc_theta = 2.4
inh_a = 1.
inh_theta = 4.
rate = np.zeros((len(exc_input),len(inh_input)))
for ii in range(len(exc_input)):
for jj in range(len(inh_input)):
rate[ii,jj] = sigmoid_function(exc_input[ii],exc_a,exc_theta) - sigmoid_function(inh_input[jj],inh_a,inh_theta)*0.5
with plt.xkcd():
X, Y = np.meshgrid(exc_input, inh_input)
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(2,2,1)
lg_txt = 'Inhibition = ' + str(inh_input[0])
ax1.plot(exc_input,rate[:,0],label=lg_txt)
lg_txt = 'Inhibition = ' + str(inh_input[20])
ax1.plot(exc_input,rate[:,20],label=lg_txt)
lg_txt = 'Inhibition = ' + str(inh_input[40])
ax1.plot(exc_input,rate[:,40],label=lg_txt)
ax1.legend()
ax1.set_xlabel('Excitatory input (au)')
ax1.set_ylabel('Neuron output rate (au)');
ax2 = fig.add_subplot(2,2,2)
lg_txt = 'Excitation = ' + str(exc_input[0])
ax2.plot(inh_input,rate[0,:],label=lg_txt)
lg_txt = 'Excitation = ' + str(exc_input[20])
ax2.plot(inh_input,rate[20,:],label=lg_txt)
lg_txt = 'Excitation = ' + str(exc_input[40])
ax2.plot(inh_input,rate[40,:],label=lg_txt)
ax2.legend()
ax2.set_xlabel('Inhibitory input (au)')
ax2.set_ylabel('Neuron output rate (au)');
ax3 = fig.add_subplot(2, 1, 2, projection='3d')
surf= ax3.plot_surface(Y.T, X.T, rate, rstride=1, cstride=1,
cmap='viridis', edgecolor='none')
ax3.set_xlabel('Inhibitory input (au)')
ax3.set_ylabel('Excitatory input (au)')
ax3.set_zlabel('Neuron output rate (au)');
fig.colorbar(surf)
```
In the **Top-Left** plot, we see how the neuron output rate increases as a function of excitatory input (e.g. the blue trace). However, as we increase inhibition, expectedly the neuron output decreases and the curve is shifted downwards. This constant shift in the curve suggests that the effect of inhibition is subtractive, and the amount of subtraction does not depend on the neuron output.
We can alternatively see how the neuron output changes with respect to inhibition and study how excitation affects that. This is visualized in the **Top-Right** plot.
This type of plotting is very intuitive, but it becomes very tedious to visualize when there are larger numbers of lines to be plotted. A nice solution to this visualization problem is to render the data as color, as surfaces, or both.
This is what we have done in the plot on the bottom. The colormap on the right shows the output of the neuron as a function of inhibitory input and excitatory input. The output rate is shown both as height along the z-axis and as the color. Blue means low firing rate and yellow means high firing rate (see the color bar).
In the above plot, the output rate of the neuron goes below zero. This is of course not physiological as neurons cannot have negative firing rates. In models, we either choose the operating point such that the output does not go below zero, or else we clamp the neuron output to zero if it goes below zero. You will learn about it more in Week 3.
```
# @markdown Execute this cell to visualize the transfer function and its partial derivative
# Neuron Transfer Function
step_size = 0.1
exc_input = np.arange(1,10,step_size)
inh_input = np.arange(0,7,step_size)
exc_a = 1.2
exc_theta = 2.4
inh_a = 1.
inh_theta = 4.
rate = np.zeros((len(exc_input),len(inh_input)))
for ii in range(len(exc_input)):
for jj in range(len(inh_input)):
rate[ii,jj] = sigmoid_function(exc_input[ii],exc_a,exc_theta) - sigmoid_function(inh_input[jj],inh_a,inh_theta)*0.5
# Derivative with respect to excitatory input rate
rate_de = np.zeros((len(exc_input)-1,len(inh_input)))# this will have one row less than the rate matrix
for ii in range(len(inh_input)):
rate_de[:,ii] = (rate[1:,ii] - rate[0:-1,ii])/step_size
# Derivative with respect to inhibitory input rate
rate_di = np.zeros((len(exc_input),len(inh_input)-1))# this will have one column less than the rate matrix
for ii in range(len(exc_input)):
rate_di[ii,:] = (rate[ii,1:] - rate[ii,0:-1])/step_size
with plt.xkcd():
X, Y = np.meshgrid(exc_input, inh_input)
fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(1, 3, 1, projection='3d')
surf1 = ax1.plot_surface(Y.T, X.T, rate, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
ax1.set_xlabel('Inhibitory input (au)')
ax1.set_ylabel('Excitatory input (au)')
ax1.set_zlabel('Neuron output rate (au)')
ax1.set_title('Rate as a function of Exc. and Inh');
ax1.view_init(45, 10)
fig.colorbar(surf1)
Xde, Yde = np.meshgrid(exc_input[0:-1], inh_input)
ax2 = fig.add_subplot(1, 3, 2, projection='3d')
surf2 = ax2.plot_surface(Yde.T, Xde.T, rate_de, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
ax2.set_xlabel('Inhibitory input (au)')
ax2.set_ylabel('Excitatory input (au)')
ax2.set_zlabel('Neuron output rate (au)');
ax2.set_title('Derivative wrt Excitation');
ax2.view_init(45, 10)
fig.colorbar(surf2)
Xdi, Ydi = np.meshgrid(exc_input, inh_input[:-1])
ax3 = fig.add_subplot(1, 3, 3, projection='3d')
surf3 = ax3.plot_surface(Ydi.T, Xdi.T, rate_di, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
ax3.set_xlabel('Inhibitory input (au)')
ax3.set_ylabel('Excitatory input (au)')
ax3.set_zlabel('Neuron output rate (au)');
ax3.set_title('Derivative wrt Inhibition');
ax3.view_init(15, -115)
fig.colorbar(surf3)
```
Is this what you expected? Change the code to generate the 2-d transfer function of the neuron and test your intuitions -- unhide the code in the cell just below Section 3.2, edit, and rerun.
Can you relate this shape of the partial derivative surface to the gain of the 1-d transfer function of a neuron (Section 2)?
We will use partial derivatives several times in the course. For example, partial derivatives are used to calculate the Jacobian of a system of differential equations. The Jacobian is used to determine the dynamics and stability of a system. This will be introduced in the second week while studying the dynamics of excitatory and inhibitory population interactions.
---
# Section 4: Numerical Integration
```
# @title Video 5: Numerical Integration
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1p54y1H7zt", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="cT0_CbD_h9Q", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
This video covers numerical integration and specifically Riemann sums.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
Geometrically, integration is the area under the curve. This interpretation gives two formal ways to calculate the integral of a function numerically.
**[Riemann sum](https://en.wikipedia.org/wiki/Riemann_sum)**:
If we wish to integrate a function $f(t)$ with respect to $t$, we first divide the domain into $n$ intervals of width $dt$. Each interval starting at $a$ gives a rectangle with height $f(a)$ and width $dt$. By summing the areas of all the rectangles, we can approximate the area under the curve. As the width $dt$ approaches zero, our estimate of the integral approaches the analytical calculation. Essentially, the Riemann sum cuts the region under the curve into vertical stripes, calculates the area of each stripe, and sums them up.
</details>
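A minimal left-Riemann-sum sketch (added here for illustration): approximate $\int_0^1 t^2\,dt = \frac{1}{3}$ and watch the estimate improve as $dt$ shrinks:

```python
import numpy as np

def riemann_sum(f, t_min, t_max, dt):
    """Left Riemann sum: sum of f(a) * dt over intervals starting at a."""
    a = np.arange(t_min, t_max, dt)
    return np.sum(f(a)) * dt

# The analytical value of the integral of t^2 over [0, 1] is 1/3
for dt in (0.1, 0.01, 0.001):
    print(dt, riemann_sum(lambda t: t**2, 0, 1, dt))
```

Because $t^2$ is increasing on $[0, 1]$, each left rectangle sits below the curve, so the sum underestimates the integral and converges from below.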
## Section 4.1: Demonstration of the Riemann Sum
### Interactive Demo 4.1: Riemann Sum vs. Analytical Integral with changing step size
Below, we will compare numerical integration using the Riemann Sum with the analytical solution. You can change the interval size $dt$ using the slider.
1. What values of dt result in the best numerical integration?
2. What is the downside of choosing that value of dt?
3. With large dt, why are we underestimating the integral (as opposed to overestimating)?
```
# @markdown Run this cell to enable the widget!
def riemann_sum_demo(dt = 0.5):
step_size = 0.1
min_val = 0.
max_val = 10.
tx = np.arange(min_val, max_val, step_size)
# Our function
ftn = tx**2 - tx + 1
# The analytical integral of f(t)
int_ftn = tx**3/3 - tx**2/2 + tx
# Numerical integration of f(t) using Riemann Sum
n = int((max_val-min_val)/dt)
r_tx = np.zeros(n)
fun_value = np.zeros(n)
for ii in range(n):
a = min_val+ii*dt
fun_value[ii] = a**2 - a + 1
r_tx[ii] = a;
# Riemann sum is just the cumulative sum of fun_value multiplied by the interval width dt
r_sum = np.cumsum(fun_value)*dt
with plt.xkcd():
plt.figure(figsize=(20,5))
ax = plt.subplot(1,2,1)
plt.plot(tx,ftn,label='Function')
for ii in range(n):
plt.plot([r_tx[ii], r_tx[ii], r_tx[ii]+dt, r_tx[ii]+dt], [0, fun_value[ii], fun_value[ii], 0] ,color='r')
plt.xlabel('Time (au)')
plt.ylabel('f(t)')
plt.title('f(t)')
plt.grid()
plt.subplot(1,2,2)
plt.plot(tx,int_ftn,label='Analytical')
plt.plot(r_tx+dt,r_sum,color = 'r',label='Riemann Sum')
plt.xlabel('Time (au)')
plt.ylabel('int(f(t))')
plt.title('Integral of f(t)')
plt.grid()
plt.legend()
plt.show()
_ = widgets.interact(riemann_sum_demo, dt = (0.1, 1., .02))
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_fd942e45.py)
There are other methods of numerical integration, such as
**[Lebesgue integral](https://en.wikipedia.org/wiki/Lebesgue_integral)** and **Runge-Kutta** methods. In the Lebesgue integral, we divide the area under the curve into horizontal stripes. That is, instead of the independent variable, the range of the function $f(t)$ is divided into small intervals. In any case, the Riemann sum is the basis of Euler's method of integration for solving ordinary differential equations - something you will do in a later tutorial today.
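As a preview of how the Riemann-sum idea turns into Euler's method, here is a minimal forward-Euler sketch (a sketch only, not the later tutorial's implementation):

```python
import numpy as np

def euler(f, y0, t0, t1, dt):
    """Forward Euler: y accumulates f(t, y)*dt steps, a Riemann sum of dy/dt."""
    y, t = y0, t0
    for _ in range(int((t1 - t0) / dt)):
        y = y + f(t, y) * dt
        t = t + dt
    return y

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t)
y_num = euler(lambda t, y: -y, 1.0, 0.0, 2.0, 0.001)
print(y_num, np.exp(-2.0))  # numerical vs. analytical, both ~0.135
```

Just as with the Riemann sum, shrinking `dt` improves the estimate at the cost of more steps.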
## Section 4.2: Neural Applications of Numerical Integration
### Coding Exercise 4.2: Calculating Charge Transfer with Excitatory Input
An incoming spike elicits a change in the post-synaptic membrane potential ($PSP(t)$) which can be captured by the following function
\begin{align}
PSP(t) = J\times t\times \exp\big(-\frac{t-t_{sp}}{\tau_{s}}\big)
\end{align}
where $J$ is the synaptic amplitude, $t_{sp}$ is the spike time and $\tau_s$ is the synaptic time constant.
Estimate the total charge transferred to the postsynaptic neuron during a PSP with amplitude $J=1.0$, $\tau_s = 1.0$ and $t_{sp} = 1$ (that is, the spike occurred at 1 ms). The total charge is the integral of the PSP function.
```
########################################################################
## TODO for students
## Complete all ... in code below and remove
raise NotImplementedError("Calculate the charge transfer")
########################################################################
# Set up parameters
J = 1
tau_s = 1
t_sp = 1
dt = .1
t = np.arange(0, 10, dt)
# Code PSP formula
PSP = ...
# Compute numerical integral
# We already have PSP at every time step (height of rectangles). We need to
#. multiply by width of rectangles (dt) to get areas
rectangle_areas = ...
# Cumulatively sum rectangles (hint: use np.cumsum)
numerical_integral = ...
# Visualize
plot_charge_transfer(t, PSP, numerical_integral)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D4_Calculus/solutions/W0D4_Tutorial1_Solution_200c1e98.py)
*Example output:*
<img alt='Solution hint' align='left' width=843 height=303 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D4_Calculus/static/W0D4_Tutorial1_Solution_200c1e98_0.png>
You can see from the figure that the total charge transferred is a little over 2.5.
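For reference, here is a self-contained sketch of the computation (one possible completion consistent with the hints above; see the linked solution for the official version, which also produces the plot):

```python
import numpy as np

# Parameters from the exercise
J, tau_s, t_sp = 1.0, 1.0, 1.0
dt = 0.1
t = np.arange(0, 10, dt)

# PSP(t) = J * t * exp(-(t - t_sp) / tau_s)
PSP = J * t * np.exp(-(t - t_sp) / tau_s)

# Heights of the rectangles (PSP) times their width (dt), cumulatively summed
rectangle_areas = PSP * dt
numerical_integral = np.cumsum(rectangle_areas)
print(numerical_integral[-1])  # total charge transferred, a little over 2.5
```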
---
# Section 5: Differentiation and Integration as Filtering Operations
```
# @title Video 6: Filtering Operations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Vy4y1M7oT", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="7_ZjlT2d174", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
This video covers a different interpretation of differentiation and integration: viewing them as filtering operations.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
In the above, we used the notions that geometrically integration is the area under the curve and differentiation is the slope of the curve. There is another interpretation of these two operations.
As we calculate the derivative of a function, we take the difference of adjacent values of the function. This removes the part common to the two values, i.e., the unchanging part of the signal. Thinking in terms of frequencies, differentiation removes low frequencies, or slow changes; that is, it acts as a high-pass filter.
Integration does the opposite: in estimating an integral we keep adding adjacent values of the signal. Thinking again in terms of frequencies, integration removes high frequencies, or fast changes, acting as a low-pass filter. The shock absorbers in your bike are an example of integrators.
We can see this behavior in the demo below. Here we work with signals rather than functions, but for our purposes they are the same thing: in most cases our signals are simply measurements made over time.
```
# @markdown Execute this cell to see visualization
h = 0.01
tx = np.arange(0,2,h)
noise_signal = np.random.uniform(0, 1, (len(tx)))*0.5
x1 = np.sin(0.5*np.pi*tx) + noise_signal # This will generate a 0.25 Hz sine wave
# In the signal x1 we have added random noise which contributes the high frequencies
# Take the derivative equivalent of the signal i.e. subtract the adjacent values
x1_diff = (x1[1:] - x1[:-1])
# Take the integration equivalent of the signal i.e. sum the adjacent values. And divide by 2 (take average essentially)
x1_integrate = (x1[1:] + x1[:-1])/2
# Plotting code
plt.figure(figsize=(15,10))
plt.subplot(3,1,1)
plt.plot(tx,x1,label='Original Signal')
#plt.xlabel('Time (sec)')
plt.ylabel('Signal Value(au)')
plt.legend()
plt.subplot(3,1,2)
plt.plot(tx[0:-1],x1_diff,label='Differentiated Signal')
# plt.xlabel('Time (sec)')
plt.ylabel('Differentiated Value(au)')
plt.legend()
plt.subplot(3,1,3)
plt.plot(tx,x1,label='Original Signal')
plt.plot(tx[0:-1],x1_integrate,label='Integrate Signal')
plt.xlabel('Time (sec)')
plt.ylabel('Integrate Value(au)')
plt.legend()
```
Notice how the differentiation operation amplifies the fast changes contributed by noise. By contrast, the integration operation suppresses the fast-changing noise. If we perform the same averaging of adjacent samples on the orange trace, we smooth the signal even further. Such sums and differences form the basis of digital filters.
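We can check this high-pass/low-pass behavior directly by evaluating the frequency response of the two operations used above (a sketch; $f$ is in cycles per sample):

```python
import numpy as np

f = np.linspace(0, 0.5, 6)  # frequency in cycles per sample (0.5 = Nyquist)
H_diff = np.abs(1 - np.exp(-2j * np.pi * f))       # first difference: x[n] - x[n-1]
H_avg = np.abs((1 + np.exp(-2j * np.pi * f)) / 2)  # two-point average: (x[n] + x[n-1]) / 2

for fi, hd, ha in zip(f, H_diff, H_avg):
    print(f"f={fi:.1f}  |H_diff|={hd:.2f}  |H_avg|={ha:.2f}")
# |H_diff| grows from 0 to 2 with frequency (high-pass);
# |H_avg| falls from 1 to 0 (low-pass)
```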
---
# Summary
* Geometrically, integration is the area under the curve and differentiation is the slope of the function
* The concepts of slope and area can be easily extended to higher dimensions. We saw this when we took the derivative of a 2-dimensional transfer function of a neuron
* Numerical estimates of both derivatives and integrals require us to choose a time step $h$. The smaller the $h$, the better the estimate, but for small values of $h$, more computations are needed. So there is always some tradeoff.
* Partial derivatives are just the estimate of the slope along one of the many dimensions of the function. We can combine the slopes in different directions using vector sum to find the direction of the slope.
* Because the derivative of a function is zero at the local peak or trough, derivatives are used to solve optimization problems.
* When thinking of signals, integration is equivalent to smoothing (i.e., removing fast changes)
* Differentiation removes slow changes and enhances the high-frequency content of a signal
###### The probability of an eccentricity GIVEN that a planet is transiting (P(e|b)) and the probability of a longitude of periastron GIVEN that a planet is transiting (P(w|b)) are different than P(e) and P(w).
https://academic.oup.com/mnras/article/444/3/2263/1053015
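A quick Monte Carlo sketch of this selection effect, assuming the standard scaling of the transit probability with eccentricity, $P(\mathrm{transit}) \propto (1 + e\sin\omega)/(1-e^2)$ at fixed $a/R_\star$ (an illustration only; the population numbers here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# An intrinsic (prior) population of eccentricities and periastron longitudes
n = 200_000
e = rng.uniform(0, 0.9, n)
w = rng.uniform(-90, 270, n)  # degrees, matching the convention in this notebook

# At fixed a/R*, the transit probability scales as (1 + e sin w) / (1 - e^2),
# so rejection-sample the transiting subset with that weight
g = (1 + e * np.sin(np.radians(w))) / (1 - e**2)
keep = rng.uniform(0, g.max(), n) < g

print(e.mean(), e[keep].mean())  # transiting planets are biased toward higher e
```

This is why P(e|b) and P(w|b) must be treated as distinct from the intrinsic P(e) and P(w).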
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm
from astropy.table import Table
import astropy.units as u
import matplotlib
# Using `batman` to create & fit fake transit
import batman
# Using astropy BLS and scipy curve_fit to fit transit
from astropy.timeseries import BoxLeastSquares
from scipy.optimize import curve_fit
import scipy.optimize as opt
# Using emcee & corner to find and plot (e, w) distribution
import emcee
import corner
# And importing `photoeccentric`
import photoeccentric as ph
# Random stuff
import scipy.constants as c
from scipy.stats import rayleigh
import os
from pathlib import Path
%load_ext autoreload
%autoreload 2
%matplotlib inline
plt.rcParams['figure.figsize'] = [12, 8]
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return array[idx]
def mode(dist):
"""Gets mode of a histogram.
Parameters
----------
dist: array
Distribution
Returns
-------
mode: float
Mode
"""
#n, bins = np.histogram(dist, bins=np.linspace(np.nanmin(dist), np.nanmax(dist), 100))
n, bins = np.histogram(dist, bins=np.linspace(np.nanmin(dist), np.nanmax(dist), 500))
mode = bins[np.nanargmax(n)]
return mode
```
# UNIFORM DISTRIBUTION
```
distpath_uniform = "/Users/ssagear/Dropbox (UFL)/Research/MetallicityProject/photoeccentric/notebooks/hpgresults/uniform_dist/results_w_-90to270/"
truee = []
truew = []
edist_uniform = []
for subdir, dirs, files in os.walk(distpath_uniform):
try:
trueparams = subdir.split("/Users/ssagear/Dropbox (UFL)/Research/MetallicityProject/photoeccentric/notebooks/hpgresults/uniform_dist/results_w_-90to270/e_",1)[1]
truee.append(float(trueparams.split('_w_')[0]))
truew.append(float(trueparams.split('_w_')[1]))
except IndexError:
continue
for file in files:
if 'distributions' in file:
distpath = os.path.join(subdir, file)
edist_uniform.append(np.genfromtxt(distpath, delimiter=','))
truee = np.array(truee)
truew = np.array(truew)
edist_uniform = np.array(edist_uniform)
fite = []
fitw = []
fitg = []
es_uniform = []
ws_uniform = []
gs_uniform = []
for i in range(len(edist_uniform)):
es_uniform.append(edist_uniform[i][:,0])
fite.append(ph.mode(edist_uniform[i][:,0]))
ws_uniform.append(edist_uniform[i][:,1])
fitw.append(ph.mode(edist_uniform[i][:,1]))
gs_uniform.append(edist_uniform[i][:,2])
fitg.append(ph.mode(edist_uniform[i][:,2]))
fite = np.array(fite)
es_uniform = np.array(es_uniform)
fitw = np.array(fitw)
ws_uniform = np.array(ws_uniform)
fitg = np.array(fitg)
gs_uniform = np.array(gs_uniform)
# for i in range(len(es_uniform)):
# plt.hist(es_uniform[i], alpha=0.3, density=True, stacked=True);
# plt.xlabel('Fit E Distributions')
# plt.title('All E distributions (sample ~500)')
# plt.cla()
# for i in range(len(ws_uniform)):
# plt.hist(ws_uniform[i], alpha=0.3, density=True, stacked=True);
# plt.xlabel('Fit w Distributions (deg)')
# plt.title('All w distributions (sample ~800)')
plt.cla()
fz = 15
plt.hist2d(ws_uniform.flatten(), es_uniform.flatten(), bins=[20,20], cmap='Blues', density=True);
plt.colorbar(label='Rel. Density')
plt.scatter(truew, truee, marker='o', s=15, c='palevioletred', label='True Params')
plt.scatter(fitw, fite, marker='X', s=15, c='mediumaquamarine', label='Fit Params (Mode)')
plt.legend(fontsize=fz)
plt.xlabel('$\omega$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('Fit $(e,w)$ Heatmap From Uniform Distribution', fontsize=fz)
plt.savefig('Fit_EW_heatmap_uniform_new.png')
randint = int(np.random.randint(0, len(es_uniform), 1))
for i in range(len(es_uniform)):
plt.cla()
plt.hist2d(ws_uniform[i], es_uniform[i], cmap='Blues', bins=[13,10]);
plt.scatter(truew[i], truee[i], c='r', s=100, label='True')
plt.scatter(fitw[i], fite[i], c='g', marker='X', s=100, label='Fit')
plt.xlim(-90, 270)
plt.ylim(0, 1)
plt.xlabel(r'$\omega$', fontsize=20)
plt.ylabel('$e$', fontsize=20)
plt.title('$e$ = ' + str(truee[i]) + '; $\omega$ = ' + str(truew[i]), fontsize=20)
plt.legend()
plt.savefig('all_new_heatmaps/e_' + str(truee[i]) + '_w_' + str(truew[i]) + 'singleheatmap.png')
wuniform = np.random.uniform(-90, 270, 50)
inds = []
for i in range(len(wuniform)):
inds.append(int(np.argwhere(fitw == find_nearest(fitw, wuniform[i]))[0]))
euni = truee[inds].flatten()
wuni = truew[inds].flatten()
efuni = fite[inds].flatten()
wfuni = fitw[inds].flatten()
edistuni = np.array(es_uniform[inds]).flatten()
wdistuni = np.array(ws_uniform[inds]).flatten()
def deltallike(g, gerr, truee, truew, fite, fitw):
model_fit = (1+fite*np.sin(fitw*(np.pi/180.)))/np.sqrt(1-fite**2)
sigma2_fit = gerr ** 2
loglike_fit = -0.5 * np.sum((g - model_fit) ** 2 / sigma2_fit + np.log(sigma2_fit))
model_true = (1+truee*np.sin(truew*(np.pi/180.)))/np.sqrt(1-truee**2)
sigma2_true = gerr ** 2
loglike_true = -0.5 * np.sum((g - model_true) ** 2 / sigma2_true + np.log(sigma2_true))
llike = np.abs(loglike_fit-loglike_true)
return llike
llike = []
for i in range(len(truee)):
g = ph.mode(gs_uniform[i])
e = ph.mode(es_uniform[i])
w = ph.mode(ws_uniform[i])
gerr = np.nanstd(gs_uniform[i])
llike.append(deltallike(g, gerr, truee[i], truew[i], e, w))
llike = np.array(llike)
llikeuni = llike[inds]
plt.cla()
fz = 15
plt.hist2d(wdistuni, edistuni, bins=[20,20], cmap='Blues')#, norm=matplotlib.colors.LogNorm());
#plt.clim(vmax=4000.0)
plt.colorbar(label='Rel. Density')
plt.scatter(wuni, euni, marker='o', s=80, c='palevioletred', label='True Params')
plt.scatter(wfuni, efuni, marker='X', s=80, c='mediumaquamarine', label='Fit Params')
plt.legend(fontsize=fz)
plt.xlabel('$w$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('50 Fit $w$s Pulled Uniformly', fontsize=fz)
plt.savefig('heatmap_truee_uniformwfit_50_611.png')
#euni,wuni
plt.cla()
fz = 15
plt.hist2d(wdistuni, edistuni, bins=[20,10], cmap='Blues')#, norm=matplotlib.colors.LogNorm());
#plt.clim(vmax=4000.0)
plt.colorbar(label='Rel. Density')
plt.scatter(wuni, euni, marker='o', s=80, c=llikeuni, label='True Params', cmap='Reds')#, norm=matplotlib.colors.LogNorm())
plt.colorbar(label='"log Likelihood"')
#plt.scatter(wfuni, efuni, marker='X', s=80, c='mediumaquamarine', label='Fit Params')
plt.legend(fontsize=fz)
plt.xlabel('$w$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('50 Fit $w$s Pulled Uniformly', fontsize=fz)
plt.savefig('heatmap_truee_uniformwfit50_llike_new_611.png')
plt.cla()
fz = 15
plt.hist2d(wdistuni, edistuni, bins=[20,10], cmap='Blues')#, norm=matplotlib.colors.LogNorm());
#plt.clim(vmax=4000.0)
plt.colorbar(label='Rel. Density')
plt.scatter(wfuni, efuni, marker='X', s=80, c=llikeuni, label='Fit Params', cmap='Greens')#, norm=matplotlib.colors.LogNorm())
plt.colorbar(label='DELTA Log Like')
plt.legend(fontsize=fz)
plt.xlabel('$w$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('50 Fit $w$s Pulled Uniformly', fontsize=fz)
plt.savefig('heatmap_truee_uniformwfit50_lfitslike_new_611.png')
plt.cla()
fz = 15
cmap = plt.cm.Reds
plt.hist2d(ws_uniform.flatten(), es_uniform.flatten(), bins=[20,20], cmap='Blues', density=True)#norm=matplotlib.colors.LogNorm());
plt.colorbar(label='Rel. Density')
plt.scatter(truew, truee, marker='o', s=15, c=llike, label='True Params', cmap=cmap, norm=matplotlib.colors.LogNorm())
plt.colorbar(label='$\Delta$ Ln Likelihood')
#plt.clim(vmin=0, vmax=700)
#plt.scatter(fitw, fite, marker='x', s=15, c='mediumaquamarine', label='Fit Params')
plt.legend(fontsize=fz)
plt.xlabel('$\omega$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('Fit $(e,w)$ Heatmap From Uniform Distribution', fontsize=fz)
#plt.savefig('heatmap_llike_colorbar_new.png')
plt.cla()
fz = 15
cmap = plt.cm.Greens
plt.hist2d(ws_uniform.flatten(), es_uniform.flatten(), bins=[20,20], cmap='Blues', density=True)#, norm=matplotlib.colors.LogNorm());
plt.colorbar(label='Rel. Density')
plt.scatter(fitw, fite, marker='X', s=15, c=llike, label='Fit Params', cmap=cmap)
plt.colorbar(label='$\Delta$ Ln Likelihood')
plt.clim(vmin=0, vmax=700)
plt.legend(fontsize=fz)
plt.xlabel('$\omega$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('Fit $(e,w)$ Heatmap From Uniform Distribution', fontsize=fz)
plt.savefig('heatmap_llike_colorbar_fit_new.png')
plt.hist(edistuni, color='cornflowerblue', bins=np.arange(0, 1, 0.1), density=True, stacked=True)
plt.xlabel('$e$', fontsize=fz)
plt.ylabel('Rel. Density')
plt.title('e distribution')
```
# GAUSSIAN DISTRIBUTION
```
# e_rand = np.random.normal(0.4, 0.1, size=n)
# w_rand = np.random.normal(0.0, 45.0, size=n)
trueew_gaussian = pd.read_csv('/Users/ssagear/Dropbox (UFL)/Research/MetallicityProject/photoeccentric/notebooks/plots_hpg/gaussian/fitew.txt', index_col=False)
distpath_gaussian = "/Users/ssagear/Dropbox (UFL)/Research/MetallicityProject/photoeccentric/notebooks/plots_hpg/gaussian/edists/"
paths = sorted(Path(distpath_gaussian).iterdir(), key=os.path.getmtime)
paths.reverse()
edist_gaussian = []
for file in paths:
fname = os.path.join(distpath_gaussian, file)
try:
edist_gaussian.append(np.genfromtxt(fname, delimiter=','))
except UnicodeDecodeError:
pass
es_gaussian = []
ws_gaussian = []
gs_gaussian = []
for i in range(len(edist_gaussian)):
es_gaussian.append(edist_gaussian[i][:,0])
ws_gaussian.append(edist_gaussian[i][:,1])
gs_gaussian.append(edist_gaussian[i][:,2])
es_gaussian = np.array(es_gaussian)
fite = []
plt.cla()
for i in range(len(es_gaussian)):
fite.append(mode(es_gaussian[i]))
plt.hist(es_gaussian[i], alpha=0.3, density=True, stacked=True);
fite = np.array(fite)
plt.xlabel('Fit E Distributions')
plt.title('All E distributions (sample ~800)')
ws_gaussian = np.array(ws_gaussian)
fitw = []
plt.cla()
for i in range(len(ws_gaussian)):
fitw.append(mode(ws_gaussian[i]))
plt.hist(ws_gaussian[i], alpha=0.3, density=True, stacked=True);
fitw = np.array(fitw)
plt.xlabel('Fit w Distributions (deg)')
plt.title('All w distributions (sample ~800)')
truew = np.array(trueew_gaussian['truew'])
truee = np.array(trueew_gaussian['truee'])
len(truee)
for i in range(len(fitw)):
if fitw[i] > 90:
fitw[i] = fitw[i]-180
plt.cla()
fz = 15
cmap = plt.cm.Blues
cmap.set_bad(color='white')
plt.hist2d(ws_gaussian.flatten(), es_gaussian.flatten(), bins=[22,10], cmap=cmap, norm=matplotlib.colors.LogNorm());
plt.colorbar(label='Rel. Density')
plt.scatter(truew, truee, marker='o', s=15, c='palevioletred', label='True Params')
plt.scatter(fitw, fite, marker='x', s=15, c='mediumaquamarine', label='Fit Params (Mode)')
plt.legend(fontsize=fz)
plt.xlim(-90, 90)
plt.xlabel('$\omega$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('Fit $(e,w)$ Heatmap From Gaussian Distribution', fontsize=fz)
plt.savefig('Fit_EW_heatmap_gaussian.png')
wgaussian = np.random.uniform(-90, 90, 2)
inds = []
for i in range(len(wgaussian)):
inds.append(int(np.argwhere(fitw == find_nearest(fitw, wgaussian[i]))[0]))
euni = truee[inds].flatten()
wuni = truew[inds].flatten()
efuni = fite[inds].flatten()
wfuni = fitw[inds].flatten()
edistuni = np.array(es_gaussian[inds]).flatten()
wdistuni = np.array(ws_gaussian[inds]).flatten()
llike = []
for i in range(len(fite)):
counts, wbins, ebins = np.histogram2d(ws_gaussian[i], es_gaussian[i], bins=15);
llike.append(counts[np.digitize(truew[i], wbins)-1, np.digitize(truee[i], ebins)-1])
llike = np.array(llike)
llikeuni = llike[inds]
plt.cla()
fz = 15
plt.hist2d(wdistuni, edistuni, bins=[25,10], cmap='Blues', norm=matplotlib.colors.LogNorm());
plt.xlim(-90, 90)
#plt.clim(vmax=4000.0)
plt.colorbar(label='Rel. Density')
plt.scatter(wuni, euni, marker='o', s=80, c='palevioletred', label='True Params')
plt.scatter(wfuni, efuni, marker='X', s=80, c='mediumaquamarine', label='Fit Params')
plt.legend(fontsize=fz)
plt.xlabel('$w$', fontsize=fz)
plt.ylabel('$e$', fontsize=fz)
plt.title('50 Fit $w$s Pulled From a Gaussian', fontsize=fz)
plt.savefig('heatmap_truee_gaussianwfit_50.png')
```
### Meshwork: linking meshes, skeletons, and annotations
There are many ways to describe neuroanatomy. We often work with three of them:
* Meshes. Meshes provide high-resolution 3d structure of the surface of a cell. This is important for understanding the fine details of a neuron, like how long spines are or the shape of a bouton.
* Skeletons. Skeletons provide a tree-like topological structure of a cell. They make it easy to ask questions about proximal/distalness or branch points, are much faster to compute on, and are the typical way neurons are described in most of neuroscience.
* Annotations. Annotations decorate the structure of a neuron and can indicate all sorts of things, from synapses to soma location to the base of an axon.
The `Meshwork` class helps work interoperably with these three views of the same neuron and gets rid of a lot of the tedious computation one often needs to do when navigating back and forth between these representations.
#### 1. Building a Meshwork object
The core of a Meshwork object is, as the name suggests, the mesh. Meshes are one of the fundamental objects in the Connectome Annotation Versioning Engine, and every root id has a unique mesh and can be downloaded through cloudvolume. We call this global version, which is the same for everyone, the *Base Mesh*.
However, the Base Mesh often has artifacts that cause problems for analysis. For example, internal structures like vesicles or the nucleus can be segmented separately and leave internal mesh vertices, while artifacts across several sections of imagery can cause a continuous branch to have a gap. While these don't usually look problematic, they cause technical problems: internal mesh vertices can 'grab' annotations like synapses, and gaps make it impossible to naively compute path distances.
Because of this, we need to define a derived mesh that will be used to build continuous skeletonizations and anchor annotations. We call this the *Anchor Mesh*. The Anchor Mesh usually has internal vertices removed and gaps bridged. However, it's important to note that there are choices that go into defining this mesh, and thus it is not a universal object.
Building a Meshwork object starts with creating an Anchor Mesh. Here, we filter the original mesh to keep only the vertices in the largest connected component, which is sufficient to clean up the example mesh.
```
from meshparty import meshwork
from meshparty import trimesh_io, trimesh_vtk, skeletonize, mesh_filters
import numpy as np
import pandas as pd
import os
# Specify the base mesh:
oid = 648518346349539789
mm = trimesh_io.MeshMeta()
mesh_base = mm.mesh(filename=f'meshes/{oid}.h5')
# Filter out mesh vertices that are not in the largest connected component
in_comp = mesh_filters.filter_largest_component(mesh_base)
mesh_anchor = mesh_base.apply_mask(in_comp)
# The basic Meshwork takes the anchor mesh
nrn = meshwork.Meshwork(mesh_anchor, seg_id=oid)
```
#### Annotations
The first useful feature of a Meshwork object is dynamic annotation management. Annotations must be in the form of pandas dataframes. A given annotation has a name, a dataframe, and can be either 'anchored' to the mesh or not. For anchored dataframes, there must be a specified point column name. Each row gets associated with the closest mesh vertex to its point in this column. Unanchored annotations don't have attached mesh vertices, but can be useful for tracking information.
```
syn_in_df = pd.read_hdf('syn.h5', 'post')
nrn.add_annotations('syn_in', syn_in_df, point_column='ctr_pt_position')
```
Annotations are found under the `.anno` property. Each annotation can be accessed either as a property or as a dictionary with the name as a key.
An annotation has a number of different properties:
* .df : A length-n dataframe. Note that an anchored annotation has a new column, `mesh_index`, that specifies the index of the mesh vertex it is anchored to.
```
nrn.anno.syn_in.df.head()
```
* .voxels : An n x 3 array of voxel locations for each annotation point.
```
nrn.anno.syn_in.voxels
```
* .points : An n x 3 array of point locations for each annotation point in the same units as the mesh vertices.
```
nrn.anno.syn_in.points
```
* .mesh_index : An array of mesh indices for each annotation point. (In fact, this array is a MeshIndex, but it can be used like a standard numpy array.)
```
nrn.anno.syn_in.mesh_index
```
For example, we can use the point property to easily make a visualization of the synapses.
```
syn_actor = trimesh_vtk.point_cloud_actor(nrn.anno.syn_in.points, size=800, color=(0.2, 0.9, 0.9))
mesh_actor = trimesh_vtk.mesh_actor(nrn.mesh, opacity=1, color=(0.7, 0.7, 0.7))
trimesh_vtk.render_actors([mesh_actor, syn_actor])
```
Additional annotations can keep being added.
Here, we want the `soma_pt` annotation to only have the approximate center of the soma, but don't want to link it to any particular mesh index. We thus set `anchored=False`. This keeps a mesh index from being computed and prevents any filtering, although voxel and point locations can still be accessed.
```
syn_out_df = pd.read_hdf('syn.h5', 'pre')
soma_df = pd.read_hdf('syn.h5', 'soma')
nrn.add_annotations('syn_out', syn_out_df, point_column='ctr_pt_position')
nrn.add_annotations('soma_pt', soma_df.query('pt_root_id == @oid').copy(), point_column='pt_position', anchored=False)
nrn.anno
```
#### Masks
Boolean masks work on meshwork objects, similar to meshes alone, but in this case they apply to the mesh vertices and annotations together.
```
# Make a mask object for mesh vertices within 30000 nm of a particular synapse
close_to_point_mask = mesh_filters.filter_spatial_distance_from_points(nrn.mesh,
nrn.anno.syn_in.points[10],
30000)
# Apply the mask and visualize the mesh as before (we also show the anchor mesh for comparison)
nrn.apply_mask(close_to_point_mask)
syn_actor = trimesh_vtk.point_cloud_actor(nrn.anno.syn_in.points, size=800, color=(0.2, 0.9, 0.9))
mesh_actor = trimesh_vtk.mesh_actor(nrn.mesh, opacity=1, color=(0.7, 0.7, 0.7))
mesh_base_actor = trimesh_vtk.mesh_actor(mesh_anchor, opacity=0.2, color=(0.7, 0.7, 0.7))
trimesh_vtk.render_actors([mesh_actor, syn_actor, mesh_base_actor])
```
Unlike meshes, meshwork objects retain the memory of the Anchor Mesh and can always be reset.
```
print(f'There are {len(nrn.anno.syn_in)} input synapses before reset.')
nrn.reset_mask()
print(f'There are {len(nrn.anno.syn_in)} input synapses after reset.')
```
Sometimes you want to query what annotations are within a given node mask and not change the whole object. We can accomplish this with the `filter_query` function like so:
```
len(nrn.anno.syn_in.filter_query(close_to_point_mask).df)
```
#### Skeletons : Adding directed topology
A meshwork class can also build a skeleton from their Anchor Mesh. Much of the utility of a meshwork object comes from being able to link extremely fast skeleton operations like finding a path between two points to mesh structures and annotations.
```
nrn.skeletonize_mesh(soma_pt=nrn.anno.soma_pt.points[0], soma_thresh_distance=8500)
```
The mesh now has a `.skeleton` property. For example, we can quickly compute the total cable length of the mesh in microns:
```
nrn.skeleton.path_length()/1000
```
However, it would be confusing for us to always go back and forth between meshes and skeletons. In general, we will stick to dealing with mesh vertices. Inside the Meshwork object the conversion from mesh to skeleton vertices and back again is handled automatically. For example, `nrn.root` returns a mesh index associated with the skeleton root, which here is set to the soma position.
Here, let's look at the child nodes of the root, which should be the base of each dendritic and axonal process. For each index, we can ask for the skeleton downstream of that point.
```
# Get all children of the root node, which is the soma.
branch_starts = nrn.child_index(nrn.root)
# Let's just pick one of these
mesh_downstream = nrn.downstream_of(branch_starts[2])
mesh_downstream
```
Functions like `path_length` exist at the meshwork level where they expect mesh indices.
```
nrn.path_length(mesh_downstream) / 1000
```
### JointMeshIndex and JointSkeletonIndex
You'll note that the type of `mesh_downstream` is not an array, but a `JointMeshIndex`. This is a class that contains methods to convert mesh indices to other representations, such as boolean masks and skeleton indices.
MeshIndices have a number of conversion functions for different situations.
```
# to_mesh_index just returns the same values
mesh_downstream.to_mesh_index
# to_mesh_mask returns a boolean mask with True at the location of the indices
mesh_downstream.to_mesh_mask
# to_skel_index returns the skeleton indices for each. Note: This does not give a 1-1 correspondence between indices.
mesh_downstream.to_skel_index
# to_skel_index_padded returns the skeleton index for each mesh index, with a -1 if no map is available.
mesh_downstream.to_skel_index_padded
```
Skeleton indices are like mesh indices, but because there is a 1-to-many mapping between skeleton vertices and mesh vertices they have a few more options.
```
# SkeletonIndexes are similar, but for skeletons
# Let's make one from the downstream values.
skinds = np.unique( mesh_downstream.to_skel_index )
# to_mesh_index returns the exact mesh index
skinds.to_mesh_index
# to_mesh_mask returns the mesh mask for the skeleton indices
skinds.to_mesh_mask
# to_mesh_region returns a list of MeshIndex arrays for each element of the skeleton indices
skinds.to_mesh_region
```
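The 1-to-many bookkeeping behind these conversions can be illustrated with plain numpy (an illustration of the idea only, not meshparty's implementation):

```python
import numpy as np

# Toy map: each mesh vertex maps to one skeleton vertex (-1 = no correspondence),
# so each skeleton vertex maps back to a *region* of mesh vertices.
mesh_to_skel = np.array([0, 0, 1, 1, 1, 2, -1, 2])  # 8 mesh verts -> 3 skel verts

def skel_to_mesh_region(mesh_to_skel, skel_ind):
    """All mesh vertex indices anchored to a given skeleton vertex."""
    return np.flatnonzero(mesh_to_skel == skel_ind)

print(skel_to_mesh_region(mesh_to_skel, 1))  # [2 3 4]
print(skel_to_mesh_region(mesh_to_skel, 2))  # [5 7]
```

Mesh vertex 6 here plays the role of an index with no skeleton map, which is where the `-1` padding in `to_skel_index_padded` comes from.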
A list of indices can be converted into a MeshIndex or SkeletonIndex through
`nrn.MeshIndex` or `nrn.SkeletonIndex` methods.
```
minds = nrn.MeshIndex([1,2,3])
minds.to_skel_index
```
#### Putting operations together
As an example, let's look at the synapses per unit length for each branch by going through each branch off the root and computing its total path length and the number of synaptic inputs and outputs on it.
```
syn_in_on_branch = []
syn_out_on_branch = []
len_of_branch = []
branch_starts = nrn.child_index(nrn.root)
for bp in branch_starts:
mesh_downstream = nrn.downstream_of(bp)
syn_in_on_branch.append(len(nrn.anno.syn_in.filter_query(mesh_downstream.to_mesh_mask).df))
syn_out_on_branch.append(len(nrn.anno.syn_out.filter_query(mesh_downstream.to_mesh_mask).df))
len_of_branch.append(nrn.path_length(mesh_downstream))
syn_in_on_branch = np.array(syn_in_on_branch)
syn_out_on_branch = np.array(syn_out_on_branch)
len_of_branch = np.array(len_of_branch)/1000 # in microns
syn_in_per_micron = syn_in_on_branch/len_of_branch
syn_out_per_micron = syn_out_on_branch/len_of_branch
pd.DataFrame({'Input density': syn_in_per_micron, 'Output density': syn_out_per_micron})
```
Let's visualize the branch with the highest density of inputs:
```
nrn.reset_mask()
branch_ind = np.argmax(syn_in_per_micron)
mesh_downstream = nrn.downstream_of(branch_starts[branch_ind])
syn_points_downstream = nrn.anno.syn_in.filter_query(mesh_downstream.to_mesh_mask).points
syn_actor = trimesh_vtk.point_cloud_actor(syn_points_downstream, size=800, color=(0.2, 0.8, 0.8))
mesh_actor = trimesh_vtk.mesh_actor(nrn.mesh, opacity=1, color=(0.7, 0.7, 0.7))
trimesh_vtk.render_actors([syn_actor, mesh_actor])
```
#### Loading and saving
We can load and save the whole meshwork object. Note that it saves the original anchor mesh (*not* the base mesh) and all annotations, but does apply the same mask.
```
filename = f"{nrn.seg_id}_meshwork.h5"
nrn.save_meshwork(filename, overwrite=True)
nrn2 = meshwork.load_meshwork(filename)
np.all( nrn.branch_points == nrn2.branch_points )
```
#### More skeleton operations
In general, operations that follow paths along the neurite or depend on a notion of distal versus proximal are exposed through these skeleton-like meshwork functions.
End points can be accessed like branch points:
```
nrn.end_points
```
Segments are regions between branch and end points. Here, let's compute them and highlight the segment with the most input synapses.
```
# compute_all_segments and look at two of them
mesh_segs = nrn.segments()
mesh_segs[0:2]
# Count inputs
nsyn_in = []
for seg in mesh_segs:
nsyn_in.append(len(nrn.anno.syn_in.filter_query(seg.to_mesh_mask).df))
clrs = np.array( [(0.3, 0.3, 0.3), (0.8, 0.2, 0.2)] )
seg = mesh_segs[np.argmax(nsyn_in)]
mesh_colors = clrs[seg.to_mesh_mask.astype(int)]
ma = trimesh_vtk.mesh_actor(nrn.mesh, vertex_colors=mesh_colors, opacity=1)
trimesh_vtk.render_actors([ma])
```
The geodesic distance between mesh vertices along the skeleton is computed with `distance_between`. Here, we compute the distribution of closest distances between output synapses. The `distance_to_root` function is a convenient wrapper for finding the distance to the root node.
```
d_out = nrn.distance_between(nrn.anno.syn_out.mesh_index, nrn.anno.syn_out.mesh_index)
d_out
# Remove the trivial 0 along the diagonal
d_out_nodiag = d_out + np.diag(np.inf*np.ones(len(d_out)))
intersynapse_d = np.min(d_out_nodiag, axis=1)
import matplotlib.pyplot as plt
_ = plt.hist(intersynapse_d/1000, bins=np.arange(0,40, 2))
```
The mesh points along the path between mesh points can be computed with `path_between`. Here, we plot synapse sizes as a function of distance to root along a single path from one end point to root.
```
path_to_root = nrn.path_between(nrn.end_points[5], nrn.root)
syn_on_path = nrn.anno.syn_in.filter_query(path_to_root.to_mesh_mask)
fig, ax = plt.subplots(figsize=(5, 3), dpi=100)
ax.plot(nrn.distance_to_root(syn_on_path.mesh_index)/1000, syn_on_path.df['size'], '.')
_ = ax.set_xlabel('Dist to root ($\mu m$)')
_ = ax.set_ylabel('Synapse size')
```
### Using the skeleton to split axon/dendrite
```
from meshparty.meshwork import algorithms
is_axon, qual = algorithms.split_axon_by_synapses(nrn,
nrn.anno.syn_in.mesh_index,
nrn.anno.syn_out.mesh_index,
)
# split_axon_by_synapses returns a skeleton mask, which we want to convert to a SkeletonIndex
is_axon_skel = nrn.SkeletonIndex(np.flatnonzero(is_axon))
clrs = np.array([[0.7, 0.7, 0.7], [0.8, 0.2, 0.3]])
mesh_color = clrs[is_axon_skel.to_mesh_mask.astype(int)]
ma = trimesh_vtk.mesh_actor(nrn.mesh, vertex_colors=mesh_color, opacity=1)
trimesh_vtk.render_actors([ma])
```
### Linear density
This function computes the density of annotations (e.g. synapses) along a moving window across the branches of a neuron, normalized by cable length.
```
# Let us look at linear synapse density only on the dendrites
nrn.apply_mask(np.invert(is_axon_skel.to_mesh_mask))
rho = nrn.linear_density(nrn.anno.syn_in.mesh_index, 2500, normalize=True, exclude_root=True)
ma = trimesh_vtk.mesh_actor(nrn.mesh, vertex_colors=(1000*rho-0.5), opacity=1)
trimesh_vtk.render_actors([ma])
```
# Chapter 5 - Resampling Methods
- [Load dataset](#Load-dataset)
- [Cross-Validation](#5.1-Cross-Validation)
```
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.linear_model as skl_lm
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split, LeaveOneOut, KFold, cross_val_score
from sklearn.preprocessing import PolynomialFeatures
%matplotlib inline
plt.style.use('seaborn-white')
```
### Load dataset
Dataset available on http://www-bcf.usc.edu/~gareth/ISL/data.html
```
df1 = pd.read_csv('Data/Auto.csv', na_values='?').dropna()
df1.info()
```
## 5.1 Cross-Validation
### Figure 5.2 - Validation Set Approach
Using Polynomial feature generation in scikit-learn<BR>
http://scikit-learn.org/dev/modules/preprocessing.html#generating-polynomial-features
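As a quick illustration of what `PolynomialFeatures` produces (a toy two-row array, independent of the Auto data): with `degree=2` on a single column x, the output columns are [1, x, x^2].

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Toy single-feature input; degree-2 expansion yields columns [1, x, x^2]
x = np.array([[2.0], [3.0]])
x_poly = PolynomialFeatures(degree=2).fit_transform(x)
print(x_poly)
# [[1. 2. 4.]
#  [1. 3. 9.]]
```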
```
t_prop = 0.5
p_order = np.arange(1,11)
r_state = np.arange(0,10)
X, Y = np.meshgrid(p_order, r_state, indexing='ij')
Z = np.zeros((p_order.size,r_state.size))
regr = skl_lm.LinearRegression()
# Generate 10 random splits of the dataset
for (i,j),v in np.ndenumerate(Z):
poly = PolynomialFeatures(int(X[i,j]))
X_poly = poly.fit_transform(df1.horsepower.values.reshape(-1,1))
X_train, X_test, y_train, y_test = train_test_split(X_poly, df1.mpg.ravel(),
test_size=t_prop, random_state=Y[i,j])
regr.fit(X_train, y_train)
pred = regr.predict(X_test)
Z[i,j]= mean_squared_error(y_test, pred)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,4))
# Left plot (first split)
ax1.plot(X.T[0],Z.T[0], '-o')
ax1.set_title('Random split of the data set')
# Right plot (all splits)
ax2.plot(X,Z)
ax2.set_title('10 random splits of the data set')
for ax in fig.axes:
ax.set_ylabel('Mean Squared Error')
ax.set_ylim(15,30)
ax.set_xlabel('Degree of Polynomial')
ax.set_xlim(0.5,10.5)
ax.set_xticks(range(2,11,2));
```
### Figure 5.4
```
p_order = np.arange(1,11)
r_state = np.arange(0,10)
# LeaveOneOut CV
regr = skl_lm.LinearRegression()
loo = LeaveOneOut()
loo.get_n_splits(df1)
scores = list()
for i in p_order:
poly = PolynomialFeatures(i)
X_poly = poly.fit_transform(df1.horsepower.values.reshape(-1,1))
score = cross_val_score(regr, X_poly, df1.mpg, cv=loo, scoring='neg_mean_squared_error').mean()
scores.append(score)
# k-fold CV
folds = 10
elements = len(df1.index)
X, Y = np.meshgrid(p_order, r_state, indexing='ij')
Z = np.zeros((p_order.size,r_state.size))
regr = skl_lm.LinearRegression()
for (i,j),v in np.ndenumerate(Z):
    poly = PolynomialFeatures(int(X[i,j]))
    X_poly = poly.fit_transform(df1.horsepower.values.reshape(-1,1))
    kf_10 = KFold(n_splits=folds, shuffle=True, random_state=Y[i,j])
Z[i,j] = cross_val_score(regr, X_poly, df1.mpg, cv=kf_10, scoring='neg_mean_squared_error').mean()
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,4))
# Note: the cross_val_score() method returns negative values for the scores.
# https://github.com/scikit-learn/scikit-learn/issues/2439
# Left plot
ax1.plot(p_order, np.array(scores)*-1, '-o')
ax1.set_title('LOOCV')
# Right plot
ax2.plot(X,Z*-1)
ax2.set_title('10-fold CV')
for ax in fig.axes:
ax.set_ylabel('Mean Squared Error')
ax.set_ylim(15,30)
ax.set_xlabel('Degree of Polynomial')
ax.set_xlim(0.5,10.5)
ax.set_xticks(range(2,11,2));
```
<a href="https://colab.research.google.com/github/pragmatizt/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/IRA_E_LS_DS_131_Statistics_Probability_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>
## *Data Science Unit 1 Sprint 3 Assignment 1*
# Apply the t-test to real data
Your assignment is to determine which issues have "statistically significant" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values!
Your goals:
1. Load and clean the data (or determine the best method to drop observations when running tests)
2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.01
3. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.01
4. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)
Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.
Stretch goals:
1. Refactor your code into functions so it's easy to rerun with arbitrary variables
2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested)
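A minimal sketch of the core tool for this assignment, `scipy.stats.ttest_ind`, on synthetic vote-like data (not the congressional dataset): group A votes mostly yes, group B mostly no, so we expect a large t-statistic and a tiny p-value.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
# Synthetic "vote" samples: group A mostly yes (1), group B mostly no (0)
group_a = rng.binomial(1, 0.85, size=200).astype(float)
group_b = rng.binomial(1, 0.15, size=200).astype(float)

# nan_policy='omit' matters on the real data, which has missing values
t_stat, p_val = ttest_ind(group_a, group_b, nan_policy='omit')
print(f"t = {t_stat:.2f}, p = {p_val:.2e}")  # large t, tiny p -> reject the null
```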
## Answers to Questions:
(you can refer to the work below -- but for ease of navigation, I included the answers here)
- 1) Done. See Section 1) Loading and Cleaning Data.
- 2) An example of this is the "Adoption of the Budget Resolution". Looking at the mean scores between the Republican and Democrat votes, 88% of Dems voted yes, and only 13% of Republicans did. The p-value is vanishingly small, on the order of 2x(10^-77). *The code is shown below on #3*
 - **Null Hypothesis:** There is no difference between how Dems and Reps voted for the "Adoption of the Budget Resolution" bill.
 - **Alternative Hypothesis:** There _is_ a difference between how D and R voted for the "Adoption of the Budget Resolution" bill.
 - **Conclusion:** Due to the extremely low p-value (2x(10^-77)), we reject the null hypothesis.
- 3) An example for this is the "Religious Groups in Schools" bill. Mean score for Rep: 89%, Dem: 47%. A t-statistic of about -9 supports this, and the p-value is also very low at about 2x(10^-20).
 - **Null Hypothesis:** There is no difference between how Dems and Reps voted for the "Religious Groups in Schools" bill.
 - **Alternative Hypothesis:** There _is_ a difference between how D and R voted for the "Religious Groups in Schools" bill.
 - **Conclusion:** Due to the very low p-value (2x(10^-20)), we reject the null hypothesis.
- 4) An example for this is the "Water Cost Sharing" bill. The means for R and D are nearly identical at about 50% each. The t-statistic is +/- 0.08, and the p-value is high at .929.
 - **Null Hypothesis:** There is no difference between how Dems and Reps voted for the "Water Cost Sharing" bill.
 - **Alternative Hypothesis:** There _is_ a difference between how D and R voted for the "Water Cost Sharing" bill.
 - **Conclusion:** Due to the high p-value of .929, we fail to reject the null hypothesis.
## Section 1) Loading and Cleaning Data
```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data
voting_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data'
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Also importing scipy.stats
from scipy.stats import ttest_ind, ttest_1samp, ttest_ind_from_stats, ttest_rel
voting_data = pd.read_csv(voting_data_url)
voting_data.head()
column_headers = ['party', 'handicapped-infants', 'water-project-cost-sharing',
'adoption-of-the-budget-resolution', 'physician-fee-freeze', 'el-salvador-aid',
'religious-groups-in-schools', 'anti-satellite-test-ban', 'aid-to-nicaraguan-contras',
'mx-missile', 'immigration', 'synfuels-corporation-cutback',
'education-spending', 'superfund-right-to-sue', 'crime',
'duty-free-exports', 'export-administration-act-south-africa']
# name our new variable "df"
df = pd.read_csv(voting_data_url, names=column_headers, na_values="?")
"""
1. party: 2 (democrat, republican)
2. handicapped-infants: 2 (y,n)
3. water-project-cost-sharing: 2 (y,n)
4. adoption-of-the-budget-resolution: 2 (y,n)
5. physician-fee-freeze: 2 (y,n)
6. el-salvador-aid: 2 (y,n)
7. religious-groups-in-schools: 2 (y,n)
8. anti-satellite-test-ban: 2 (y,n)
9. aid-to-nicaraguan-contras: 2 (y,n)
10. mx-missile: 2 (y,n)
11. immigration: 2 (y,n)
12. synfuels-corporation-cutback: 2 (y,n)
13. education-spending: 2 (y,n)
14. superfund-right-to-sue: 2 (y,n)
15. crime: 2 (y,n)
16. duty-free-exports: 2 (y,n)
17. export-administration-act-south-africa: 2 (y,n)
"""
# Headers now loaded. NaN values present.
df.head()
```
## In a 1-sample T-test I get to choose my null hypothesis, which allows me to frame the question that I want to ask.
```
# Null Hypothesis: There is 0 Support for this bill among Republicans in the house
# (remember, you choose this)
# stats.ttest_1samp(rep['handicapped-infants'], 0, nan_policy='omit')
# Which means your Alternative = "There is non-0 support (some support) for this bill"
## CONCLUSION: Due to a T-Statistic of 6.16 and a p-value of .0000000005434,
# we reject the null hypothesis that there is 0 support for the handicapped-infants bill
# among Republicans in congress, and suggest the alternative that there is some support
#
# Example of R's and D's being split:
# Null Hypothesis: Republican support is evenly divided
# Alternative: Republican support is not evenly divided
# stats.ttest_1samp(rep['handicapped-infants'], 0.5, nan_policy='omit')
# Conclusion: due to a t-statistic of -6.16 and a p-value near 0,
# we reject the null hypothesis that there is 50/50 support for the H-I bill
# among Reps in congress, and suggest the alternative that there is non-50/50 support
# for the bill among Republicans
```
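The commented recipe above can be made concrete with a small self-contained sketch. The data here are synthetic 0/1 votes (not a real bill), used only to show the mechanics of choosing a null value for `ttest_1samp`.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
votes = rng.binomial(1, 0.3, size=150).astype(float)  # synthetic yes/no votes

# Null: support is evenly divided (mean 0.5); alternative: it is not
t_stat, p_val = ttest_1samp(votes, 0.5)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```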
## Above was in-class -- let's do the assignment below:
```
# STEP 1: converting y's to 1's, and n's to 0's.
df = df.replace(to_replace = "y", value= 1)
df = df.replace(to_replace = "n", value= 0)
# STEP 1: create extract dataframes for reps and dems.
# starting with republicans
rep = df[df['party'] == 'republican']
rep.head()
# One for the democrats.
dem = df[df['party'] == 'democrat']
dem.head()
# Let's do another 1-sample test for this assignment:
ttest_1samp(rep['water-project-cost-sharing'], 0, nan_policy='omit')
# And another just for practice's sake:
ttest_1samp(dem['el-salvador-aid'], 0, nan_policy='omit')
```
## 2 Sample T-test
(for this we want to compare Reps to Dems)
```
ttest_ind(rep['handicapped-infants'], dem['handicapped-infants'], nan_policy='omit')
"""
Null Hypothesis: support among the two parties is equal
Alternative: Support among the two parties is different
Due to a p-value of 0, I reject the null hypothesis and suggest the alternative.
"""
## WRITE A FUNCTION THAT LOOPS OVER EVERY ISSUE;
## Use the same null hypothesis for all of them
## Run a t-test for each issue
## you'll end up with a whole bunch of T-stats and P-values.
## I Could then make a plot or histogram of all my t-values
##
#t_values = [2,4,1.5]
## t_values = pd.Series(t_values, ('issue1', 'issue2', 'issue3') <--- RYan will fix this.
# t_values.hist()
## What does this show???
# 2 water project sharing
print("Dem mean is", dem['water-project-cost-sharing'].mean())
print("Rep mean is", rep['water-project-cost-sharing'].mean())
ttest_ind(dem['water-project-cost-sharing'], rep['water-project-cost-sharing'], nan_policy='omit')
# 3 adoption of the budget resolution
print("Dem mean is", dem['adoption-of-the-budget-resolution'].mean())
print("Rep mean is", rep['adoption-of-the-budget-resolution'].mean())
ttest_ind(dem['adoption-of-the-budget-resolution'], rep['adoption-of-the-budget-resolution'], nan_policy='omit')
# 3 adoption of the budget resolution
print("Rep mean is", rep['adoption-of-the-budget-resolution'].mean())
print("Dem mean is", dem['adoption-of-the-budget-resolution'].mean())
ttest_ind(rep['adoption-of-the-budget-resolution'], dem['adoption-of-the-budget-resolution'], nan_policy='omit')
# 4 physician fee freeze
print("Dem mean is", dem['physician-fee-freeze'].mean())
print("Rep mean is", rep['physician-fee-freeze'].mean())
ttest_ind(dem['physician-fee-freeze'], rep['physician-fee-freeze'], nan_policy='omit')
# 5 El Salvador aid
print("Dem mean is", dem['el-salvador-aid'].mean())
print("Rep mean is", rep['el-salvador-aid'].mean())
ttest_ind(dem['el-salvador-aid'], rep['el-salvador-aid'], nan_policy='omit')
# 6 religious groups in schools
print("Dem mean is", dem['religious-groups-in-schools'].mean())
print("Rep mean is", rep['religious-groups-in-schools'].mean())
ttest_ind(dem['religious-groups-in-schools'], rep['religious-groups-in-schools'], nan_policy='omit')
# 7 Anti-Satellite-Test-Ban
print("Dem mean is", dem['anti-satellite-test-ban'].mean())
print("Rep mean is", rep['anti-satellite-test-ban'].mean())
ttest_ind(dem['anti-satellite-test-ban'], rep['anti-satellite-test-ban'], nan_policy='omit')
# 8 Aid to Nicaraguan Contras
print("Dem mean is", dem['aid-to-nicaraguan-contras'].mean())
print("Rep mean is", rep['aid-to-nicaraguan-contras'].mean())
ttest_ind(dem['aid-to-nicaraguan-contras'], rep['aid-to-nicaraguan-contras'], nan_policy='omit')
# 9 mx missile
print("Dem mean is", dem['mx-missile'].mean())
print("Rep mean is", rep['mx-missile'].mean())
ttest_ind(dem['mx-missile'], rep['mx-missile'], nan_policy='omit')
# 10 immigration
print("Dem mean is", dem['immigration'].mean())
print("Rep mean is", rep['immigration'].mean())
ttest_ind(dem['immigration'], rep['immigration'], nan_policy='omit')
# 10 again -- just seeing what happens when I reverse reps and dems
print("Dem mean is", rep['immigration'].mean())
print("Rep mean is", dem['immigration'].mean())
ttest_ind(rep['immigration'], dem['immigration'], nan_policy='omit')
## OH COOL. So the p-value doesn't change, but the T-statistic flips sign.
# 11 SynFuels Corporation Cutback
print("Dem mean is", dem['synfuels-corporation-cutback'].mean())
print("Rep mean is", rep['synfuels-corporation-cutback'].mean())
ttest_ind(dem['synfuels-corporation-cutback'], rep['synfuels-corporation-cutback'], nan_policy='omit')
# 12 Education Spending
print("Dem mean is", dem['education-spending'].mean())
print("Rep mean is", rep['education-spending'].mean())
ttest_ind(dem['education-spending'], rep['education-spending'], nan_policy='omit')
# 13 Superfund Right to Sue
print("Dem mean is", dem['superfund-right-to-sue'].mean())
print("Rep mean is", rep['superfund-right-to-sue'].mean())
ttest_ind(dem['superfund-right-to-sue'], rep['superfund-right-to-sue'], nan_policy='omit')
# 14 Crime
print("Dem mean is", dem['crime'].mean())
print("Rep mean is", rep['crime'].mean())
ttest_ind(dem['crime'], rep['crime'], nan_policy='omit')
# 15 Duty Free Exports
print("Dem mean is", dem['duty-free-exports'].mean())
print("Rep mean is", rep['duty-free-exports'].mean())
ttest_ind(dem['duty-free-exports'], rep['duty-free-exports'], nan_policy='omit')
# 16 Export Administration Act South Africa
print("Dem mean is", dem['export-administration-act-south-africa'].mean())
print("Rep mean is", rep['export-administration-act-south-africa'].mean())
ttest_ind(dem['export-administration-act-south-africa'], rep['export-administration-act-south-africa'], nan_policy='omit')
```
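The looping idea sketched in the comments above could look like the following helper. This is a sketch on a toy frame, not the original notebook's code: `ttest_all_issues` and the synthetic columns are illustrative names.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def ttest_all_issues(frame, issue_cols, group_col='party',
                     g1='democrat', g2='republican'):
    """Run a 2-sample t-test per issue column; return t-stats and p-values."""
    rows = []
    for col in issue_cols:
        a = frame.loc[frame[group_col] == g1, col]
        b = frame.loc[frame[group_col] == g2, col]
        t, p = ttest_ind(a, b, nan_policy='omit')
        rows.append({'issue': col, 't_stat': t, 'p_value': p})
    return pd.DataFrame(rows)

# Toy frame: one clearly partisan issue, one evenly split issue
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    'party': ['democrat'] * 60 + ['republican'] * 60,
    'partisan_issue': np.r_[rng.binomial(1, 0.9, 60),
                            rng.binomial(1, 0.1, 60)].astype(float),
    'split_issue': rng.binomial(1, 0.5, 120).astype(float),
})
print(ttest_all_issues(toy, ['partisan_issue', 'split_issue']))
```

The resulting t-statistics could then be plotted, as the next cell does with hand-collected values.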
## Graphing the T-Values
```
t_values = [-0.08896538137868286, 23.21277691701378, -49.36708157301406, -21.13669261173219, -9.737575825219457,
12.526187929077842, 18.052093200819733, 12.526187929077842, ]
t_values = pd.Series(t_values)
t_values.plot(kind='bar');
```
## Describe the Rep and Dem dataframes
```
dem.describe()
rep.describe()
```
This is a notebook with all experiments in the DEDPUL paper on benchmark data sets: UCI, MNIST, CIFAR-10.
At the end of the notebook you can play with DEDPUL on specific data sets.
```
import numpy as np
import pandas as pd
from scipy.stats import norm, laplace
import torch.nn as nn
import torch.optim as optim
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
import pickle
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib.lines import Line2D
%matplotlib inline
from IPython import display
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import random
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, train_test_split
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score, log_loss, mean_absolute_error, mean_squared_error, brier_score_loss
from sklearn.metrics import precision_score, recall_score, roc_auc_score, balanced_accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM
from scipy.stats import linregress
from scipy.optimize import minimize
from scipy.stats import t
from statsmodels.stats.multitest import multipletests
from keras.layers import Dense, Dropout, Input, BatchNormalization
from keras.models import Model, Sequential
from keras.optimizers import Adam
from keras.losses import binary_crossentropy
from keras.callbacks import EarlyStopping
from algorithms import *
from utils import *
from KMPE import *
from NN_functions import *
from torchvision.datasets import MNIST
from tqdm import tqdm_notebook as tqdm
# from tqdm import tqdm as tqdm
import warnings
warnings.filterwarnings('ignore')
```
# Data
```
def read_data(data_mode, truncate=None, random_state=None):
if data_mode == 'bank':
df = pd.read_csv('UCI//bank//bank-full.csv', sep=';')
df['balance'] = normalize_col(df['balance'])
df = dummy_encode(df)
df.rename(columns={'y': 'target'}, inplace=True)
elif data_mode == 'concrete':
df = pd.read_excel('UCI//concrete//Concrete_Data.xls')
df = normalize_cols(df)
df.rename(columns={'Concrete compressive strength(MPa, megapascals) ': 'target'}, inplace=True)
df['target'] = reg_to_class(df['target'])
elif data_mode == 'housing':
df = pd.read_fwf('UCI//housing//housing.data.txt', header=None)
df = normalize_cols(df)
df.rename(columns={13: 'target'}, inplace=True)
df['target'] = reg_to_class(df['target'])
elif data_mode == 'landsat':
df = pd.read_csv('UCI//landsat//sat.trn.txt', header=None, sep=' ')
df = pd.concat([df, pd.read_csv('UCI//landsat//sat.tst.txt', header=None, sep=' ')])
df = normalize_cols(df, columns=[x for x in range(36)])
df.rename(columns={36: 'target'}, inplace=True)
df['target'] = mul_to_bin(df['target'])
elif data_mode == 'mushroom':
df = pd.read_csv('UCI//mushroom//agaricus-lepiota.data.txt', header=None)
df = dummy_encode(df)
df.rename(columns={0: 'target'}, inplace=True)
elif data_mode == 'pageblock':
df = pd.read_fwf('UCI//pageblock//page-blocks.data', header=None)
df = normalize_cols(df, columns=[x for x in range(10)])
df.rename(columns={10: 'target'}, inplace=True)
df['target'] = mul_to_bin(df['target'], 1)
elif data_mode == 'shuttle':
df = pd.read_csv('UCI//shuttle//shuttle.trn', header=None, sep=' ')
df = pd.concat([df, pd.read_csv('UCI//shuttle//shuttle.tst.txt', header=None, sep=' ')])
df = normalize_cols(df, columns=[x for x in range(9)])
df.rename(columns={9: 'target'}, inplace=True)
df['target'] = mul_to_bin(df['target'], 1)
elif data_mode == 'spambase':
df = pd.read_csv('UCI//spambase//spambase.data.txt', header=None, sep=',')
df = normalize_cols(df, columns=[x for x in range(57)])
df.rename(columns={57: 'target'}, inplace=True)
elif data_mode == 'wine':
df = pd.read_csv('UCI//wine//winequality-red.csv', sep=';')
df_w = pd.read_csv('UCI//wine//winequality-white.csv', sep=';')
df['target'] = 1
df_w['target'] = 0
df = pd.concat([df, df_w])
df = normalize_cols(df, [x for x in df.columns if x != 'target'])
elif data_mode.startswith('mnist'):
data = MNIST('mnist', download=True, train=True)
data_test = MNIST('mnist', download=True, train=False)
df = data.train_data
target = data.train_labels
df_test = data_test.test_data
target_test = data_test.test_labels
df = pd.DataFrame(torch.flatten(df, start_dim=1).detach().numpy())
df_test = pd.DataFrame(torch.flatten(df_test, start_dim=1).detach().numpy())
df = pd.concat([df, df_test])
df = normalize_cols(df)
target = pd.Series(target.detach().numpy())
target_test = pd.Series(target_test.detach().numpy())
target = pd.concat([target, target_test])
if data_mode == 'mnist_1':
target[target % 2 == 0] = 0
target[target != 0] = 1
elif data_mode == 'mnist_2':
target[target < 5] = 0
target[target >= 5] = 1
elif data_mode == 'mnist_3':
target[target.isin({0, 3, 5, 6, 7})] = 0
target[target.isin({1, 2, 4, 8, 9})] = 1
df['target'] = target
elif data_mode.startswith('cifar10'):
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
# trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
# shuffle=True, num_workers=2)
data = trainset.data
target = trainset.targets
# if truncate is not None and truncate < trainset.data.shape[0]:
# np.random.seed(random_state)
# mask = np.random.choice(np.arange(trainset.data.shape[0]), truncate, replace=False)
# np.random.seed(None)
# data = trainset.data[mask]
# target = trainset.targets[mask]
data = data / 128 - 1
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
target = pd.Series(target)
target[target.isin([0, 1, 8, 9])] = 1
target[target != 1] = 0
df = pd.DataFrame(data.reshape(data.shape[0], -1))
df['target'] = target
# 1 = N, 0 = P
df['target'] = 1 - df['target']
if truncate is not None and truncate < df.shape[0]:
if truncate > 1:
df = df.sample(n=truncate, random_state=random_state)
elif truncate > 0:
df = df.sample(frac=truncate, random_state=random_state)
return df
def make_pu(df, data_mode, alpha=0.5, random_state=None):
df['target_pu'] = df['target']
n_pos, n_pos_to_mix, n_neg_to_mix = shapes[data_mode][alpha]
df_pos = df[df['target'] == 0].sample(n=n_pos+n_pos_to_mix, random_state=random_state, replace=False).reset_index(drop=True)
df_neg = df[df['target'] == 1].sample(n=n_neg_to_mix, random_state=random_state, replace=False).reset_index(drop=True)
df_pos.loc[df_pos.sample(n=n_pos_to_mix, random_state=random_state, replace=False).index, 'target_pu'] = 1
return pd.concat([df_pos, df_neg]).sample(frac=1).reset_index(drop=True)
shapes = {'bank': {0.95: (1000, int(39922 / 19), 39922),
0.75: (1000, 4289, 4289*3),
0.50: (1000, 4289, 4289),
0.25: (1000, 4289, int(4289 / 3)),
0.05: (1000, 4289, int(4289 / 19))},
'concrete': {0.95: (100, int(540 / 19), 540),
0.75: (100, 180, 540),
0.50: (100, 390, 390),
0.25: (100, 390, 130),
0.05: (100, 380, 20)},
'housing': {0.95: (194, 15, 297),
0.75: (110, 99, 297),
0.50: (50, 159, 159),
0.25: (50, 159, 53),
0.05: (57, 152, 8)},
'landsat': {0.95: (1000, 189, 3594),
0.75: (1000, 1000, 3000),
0.50: (1000, 1841, 1841),
0.25: (1000, 1841, int(1841 / 3)),
0.05: (1000, 1841, int(1841 / 19))},
'mushroom': {0.95: (1000, 200, 3800),
0.75: (1000, 1000, 3000),
0.50: (1000, 2000, 2000),
0.25: (1000, 2916, 972),
0.05: (990, 2926, 154)},
'pageblock': {0.95: (100, 234, 4446),
0.75: (100, 460, 1380),
0.50: (100, 460, 460),
0.25: (101, 459, 153),
0.05: (104, 456, 24)},
'shuttle': {0.95: (1000, int(45586 / 19), 45586),
0.75: (1000, 11414, 11414 * 3),
0.50: (1000, 11414, 11414),
0.25: (1000, 11414, int(11414 / 3)),
0.05: (1000, 11414, int(11414 / 19))},
'spambase': {0.95: (400, 147, 2788),
0.75: (400, 929, 2788),
0.50: (400, 1413, 1413),
0.25: (400, 1413, 471),
0.05: (407, 1406, 74)},
'wine': {0.95: (500, int(4898 / 19), 4898),
0.75: (500, 1099, 1099 * 3),
0.50: (500, 1099, 1099),
0.25: (500, 1099, int(1099 / 3)),
0.05: (500, 1099, int(1099 / 19))},
'mnist_1': {0.95: (1000, int(34418 / 19), 34418),
0.75: (1000, int(34418 / 3), 34418),
0.50: (1000, 34418, 34418),
0.25: (1000, 34582, int(34582 / 3)),
0.05: (1000, 34582, int(34582 / 19))},
'cifar10': {0.95: (1000, int(30000 / 19), 30000),
0.75: (1000, 10000, 30000),
0.50: (1000, 19000, 19000),
0.25: (1000, 19000, int(19000 / 3)),
0.05: (1000, 19000, int(19000 / 19))}}
LRS = {
'bank': 5e-4,
'concrete': 1e-4,
'housing': 1e-4,
'landsat': 1e-5,
'mushroom': 1e-4,
'pageblock': 1e-4,
'shuttle': 1e-4,
'spambase': 1e-5,
'wine': 5e-5,
'mnist_1': 1e-4,
'mnist_2': 1e-4,
'mnist_3': 1e-4,
'cifar10': 1e-4,
}
```
# DEDPUL, EN, and nnPU
```
def experiment_uci(datasets=None, alphas=None, n_networks=1, n_rep=10, find_alpha=False):
if alphas is None:
alphas = [0.05, 0.25, 0.5, 0.75, 0.95]
if datasets is None:
datasets = [
'bank', 'concrete', 'mushroom', 'pageblock', 'landsat', 'shuttle', 'spambase', 'wine',
'mnist_1',
'cifar10',
]
results = []
fixed_alpha = None
for dataset in tqdm(datasets):
for alpha in tqdm(shapes[dataset].keys()):
if alpha not in alphas:
continue
n_pos, n_pos_to_mix, n_neg_to_mix = shapes[dataset][alpha]
n_mix = n_pos_to_mix + n_neg_to_mix
real_alpha = n_neg_to_mix / n_mix
for i in tqdm(range(n_rep)):
df = read_data(dataset, truncate=None, random_state=i)
df = make_pu(df, dataset, alpha, random_state=i)
data = df.drop(['target', 'target_pu'], axis=1).values
all_conv = False
n_neurons = 512
if dataset == 'cifar10':
data = np.swapaxes(data.reshape(data.shape[0], 32, 32, 3), 1, 3)
all_conv = True
n_neurons = 128
target_pu = df['target_pu'].values
target_mix = df.loc[df['target_pu'] == 1, 'target'].values
if not find_alpha:
fixed_alpha = real_alpha
res = dict()
try:
res = estimate_poster_cv(data, target_pu, estimator='ntc_methods', alpha=fixed_alpha,
estimate_poster_options={'disp': False, 'alpha_as_mean_poster': True},
estimate_diff_options={
'MT': False, 'MT_coef': 0.25, 'decay_MT_coef': False, 'tune': False,
'bw_mix': 0.05, 'bw_pos': 0.1, 'threshold': 'mid',
'n_gauss_mix': 20, 'n_gauss_pos': 10,
'bins_mix': 20, 'bins_pos': 20, 'k_neighbours': None,},
estimate_preds_cv_options={
'n_networks': n_networks,
'cv': 5,
'random_state': i*n_networks,
'hid_dim': n_neurons,
'n_hid_layers': 1,
'lr': LRS[dataset],
'l2': 1e-4,
'bn': True,
'all_conv': all_conv,
},
train_nn_options = {
'n_epochs': 250, 'loss_function': 'log', 'batch_size': 64,
'n_batches': None, 'n_early_stop': 7, 'disp': False,
},
### catboost
# estimate_preds_cv_options={
# 'n_networks': n_networks,
# 'cv': 5,
# 'random_state': i*n_networks,
# 'n_early_stop': 25,
# 'catboost_params': {
# 'iterations': 500, 'depth': 8, 'learning_rate': 0.02,
# 'l2_leaf_reg': 10,
# },
# },
)
res['nnre_sigmoid'] = estimate_poster_cv(
data, target_pu, estimator='nnre', alpha=real_alpha,
estimate_preds_cv_options={
'n_networks': n_networks,
'cv': 5,
'random_state': i*n_networks,
'hid_dim': n_neurons,
'n_hid_layers': 1,
'lr': LRS[dataset],
'l2': 1e-4,
'bn': True,
'all_conv': all_conv,
},
train_nn_options = {
'n_epochs': 250, 'loss_function': 'sigmoid', 'batch_size': 64,
'n_batches': None, 'n_early_stop': 10, 'disp': False,
'beta': 0, 'gamma': 1,
},
)
res['nnre_brier'] = estimate_poster_cv(
data, target_pu, estimator='nnre', alpha=real_alpha,
estimate_preds_cv_options={
'n_networks': n_networks,
'cv': 5,
'random_state': i*n_networks,
'hid_dim': n_neurons,
'n_hid_layers': 1,
'lr': LRS[dataset],
'l2': 1e-4,
'bn': True,
'all_conv': all_conv,
},
train_nn_options = {
'n_epochs': 250, 'loss_function': 'brier', 'batch_size': 64,
'n_batches': None, 'n_early_stop': 10, 'disp': False,
'beta': 0, 'gamma': 1,
},
)
except:
print(f'dataset = {dataset}, i = {i} failed')
continue
for key in res.keys():
est_alpha, poster = res[key]
cur_result = [dataset, key, i, n_mix, n_pos, alpha, real_alpha, est_alpha]
if poster is not None:
cur_result.append(np.mean(poster))
cur_result.append(accuracy_score(target_mix, poster.round()))
cur_result.append(roc_auc_score(target_mix, poster))
cur_result.append(log_loss(target_mix, poster))
cur_result.append(precision_score(target_mix, poster.round()))
cur_result.append(recall_score(target_mix, poster.round()))
cur_result.append(balanced_accuracy_score(target_mix, poster.round()))
cur_result.append(brier_score_loss(target_mix, poster))
results.append(cur_result)
df_results = pd.DataFrame(results, columns=['dataset', 'estimator', 'random_state',
'n_mix', 'n_pos', 'alpha', 'real_alpha', 'est_alpha', 'mean_poster',
'accuracy', 'roc', 'log_loss', 'precision', 'recall',
'accuracy_balanced', 'brier_score',])
return df_results
res = experiment_uci()
res = res.round(5)
res.to_csv('exp_uci_new.csv', index=False, sep=';', decimal=',')
```
# KMPE
```
def experiment_uci_KM(datasets=None, alphas=None, n_rep=10, max_sample=4000):
if alphas is None:
alphas = [0.05, 0.25, 0.5, 0.75, 0.95]
if datasets is None:
datasets = ['bank', 'concrete', 'landsat', 'mushroom', 'pageblock', 'shuttle', 'spambase', 'wine',
'mnist_1',
'cifar10',]
results = []
for dataset in tqdm(datasets):
for alpha in tqdm(shapes[dataset].keys()):
if alpha not in alphas:
continue
n_pos, n_pos_to_mix, n_neg_to_mix = shapes[dataset][alpha]
n_mix = n_pos_to_mix + n_neg_to_mix
real_alpha = n_neg_to_mix / n_mix
for i in tqdm(range(n_rep)):
df = read_data(dataset, truncate=None, random_state=i)
df = make_pu(df, dataset, alpha, random_state=i)
if dataset in {'mnist_1', 'mnist_2', 'mnist_3', 'cifar10'}:
pca = PCA(50)
data = pca.fit_transform(df.drop(['target', 'target_pu'], axis=1).values)
else:
data = df.drop(['target', 'target_pu'], axis=1).values
target_pu = df['target_pu'].values
target_mix = df.loc[df['target_pu'] == 1, 'target'].values
data_pos = data[target_pu == 0]
data_mix = data[target_pu == 1][np.random.randint(0, int(target_pu.sum()),
min(max_sample - int((1-target_pu).sum()),
int(target_pu.sum())))]
try:
KM_2 = 1 - wrapper(data_mix, data_pos, disp=False,
KM_1=False, KM_2=True, lambda_lower_bound=0.5, lambda_upper_bound=10)
except ValueError as e:
                    KM_2 = np.nan
print(e)
cur_result = [dataset, i, n_mix, n_pos, alpha, real_alpha, KM_2]
results.append(cur_result)
df_results = pd.DataFrame(results, columns=['dataset', 'random_state', 'n_mix', 'n_pos', 'alpha', 'real_alpha',
'est_alpha'])
df_results['estimator'] = 'KM'
return df_results
res_KM = experiment_uci_KM(datasets=None, alphas=None, n_rep=10)
res_KM = res_KM.round(5)
res_KM.to_csv('exp_KMPE_uci.csv', index=False, sep=';', decimal=',')
```
# TIcE
```
from TIcE import tice, tice_c_to_alpha, min_max_scale
def experiment_uci_TIcE(datasets=None, alphas=None, n_rep=10, NTC=False):
if alphas is None:
alphas = [0.05, 0.25, 0.5, 0.75, 0.95]
if datasets is None:
datasets = [
'bank', 'concrete', 'mushroom',
'landsat', 'pageblock', 'shuttle', 'spambase', 'wine',
'mnist_1',
'cifar10'
]
results = []
for dataset in tqdm(datasets):
for alpha in tqdm(shapes[dataset].keys()):
if alpha not in alphas:
continue
n_pos, n_pos_to_mix, n_neg_to_mix = shapes[dataset][alpha]
n_mix = n_pos_to_mix + n_neg_to_mix
real_alpha = n_neg_to_mix / n_mix
for i in tqdm(range(n_rep)):
df = read_data(dataset, truncate=None, random_state=i)
df = make_pu(df, dataset, alpha, random_state=i)
data = df.drop(['target', 'target_pu'], axis=1).values
target_pu = df['target_pu'].values
target_mix = df.loc[df['target_pu'] == 1, 'target'].values
gamma = target_pu.sum() / target_pu.shape[0]
if NTC:
if dataset == 'cifar10':
data = np.swapaxes(data.reshape(data.shape[0], 32, 32, 3), 1, 3)
data = estimate_preds_cv(
data, target_pu, cv=5, all_conv=(dataset == 'cifar10'),
n_networks=1, hid_dim=512, n_hid_layers=1, lr=LRS[dataset], bn=True, l2=1e-4,
train_nn_options={'n_epochs': 200, 'metric': roc_auc_loss, 'batch_size': 64,
'n_batches': None, 'n_early_stop': 10, 'disp': False,
'loss_function': 'log'}).reshape(-1, 1)
else:
if dataset in {'mnist_1', 'mnist_2', 'mnist_3', 'cifar10'}:
pca = PCA(200)
data = pca.fit_transform(data)
data = min_max_scale(data)
                    c = tice(data, 1 - target_pu, 10, np.random.randint(5, size=len(data)),
                             delta=.2, maxSplits=500, minT=10, n_splits=3)[0]
alpha_tice = tice_c_to_alpha(c, gamma)
cur_result = [dataset, i, n_mix, n_pos, alpha, real_alpha, alpha_tice]
results.append(cur_result)
df_results = pd.DataFrame(results, columns=['dataset', 'random_state', 'n_mix', 'n_pos', 'alpha', 'real_alpha',
'est_alpha'])
df_results['estimator'] = 'tice'
return df_results
res_tice = experiment_uci_TIcE(NTC=False, n_rep=10)
res_tice = res_tice.round(5)
res_tice.loc[res_tice['est_alpha'] < 0, 'est_alpha'] = 0  # clip negative estimates; .loc avoids chained assignment
res_tice.to_csv('TIcE_uci.csv', index=False, sep=';', decimal=',')
```
# The data set with all results
```
# this is a merged data set with all experiments with all methods
res = pd.read_csv('exp_uci_new_merged.csv', sep=';', decimal=',')
res['alpha_mae'] = (res['real_alpha'] - res['est_alpha']).abs()
res_grouped = res.groupby(['dataset', 'n_mix', 'n_pos', 'alpha', 'estimator']).mean().drop(
['est_alpha', 'random_state'], axis=1).reset_index()
res_pivot_alpha = res_grouped.pivot_table(index=['dataset', 'alpha'], columns=['estimator'], values='alpha_mae')
metric = 'accuracy'
res_pivot_roc = res_grouped.pivot_table(index=['dataset', 'alpha'], columns=['estimator'], values=metric)
res_pivot_roc = 1 - res_pivot_roc
res_pivot_roc = clean_columns_poster(res_pivot_roc)
res_pivot_alpha.reset_index(inplace=True)
res_pivot_alpha.loc[res_pivot_alpha['dataset'] == 'mnist_1', 'dataset'] = 'mnist'
res_pivot_alpha.set_index(['dataset', 'alpha'], inplace=True)
res_pivot_roc.reset_index(inplace=True)
res_pivot_roc.loc[res_pivot_roc['dataset'] == 'mnist_1', 'dataset'] = 'mnist'
res_pivot_roc.set_index(['dataset', 'alpha'], inplace=True)
```
# Plots
```
# To switch between comparison of DEDPUL with SOTA and ablations of DEDPUL, comment lines under #1 and
# uncomment lines under #2
def plot_results_uci(res_plt, datasets=None, ylims=None, reverse_alpha=False, save_name=None, dpi=200,
alpha_mode=True):
if reverse_alpha:
# by default all estimates are computed for negative priors; here convert them to positive priors
res_plt['alpha'] = 1 - res_plt['alpha']
if ylims is None:
ylims = dict()
if datasets is None:
datasets = ['bank', 'concrete', 'mushroom', 'landsat', 'pageblock', 'shuttle',
'spambase', 'wine', 'mnist', 'cifar10']
fig = plt.figure(0)
fig.set_size_inches(w=35, h=14)
for i in range(10):
try:
dataset = datasets[i]
except:
break
res_plt_cur = res_plt[res_plt['dataset'] == dataset]
plt.subplot2grid((2, 5), (i//5,i%5), colspan=1, rowspan=1)
plt.title(dataset, fontdict={'fontsize': 23})
if not alpha_mode:
pass
# 1
plt.plot(res_plt_cur['alpha'], res_plt_cur['nnre_brier'], color='purple', marker='v', ls='--')
plt.plot(res_plt_cur['alpha'], res_plt_cur['nnre_sigmoid'], color='brown', marker='^', ls='-.')
else:
pass
# 1
plt.plot(res_plt_cur['alpha'], res_plt_cur['KM'], 'rs--')
plt.plot(res_plt_cur['alpha'], res_plt_cur['tice'], c='g', marker='v', ls='-.')
plt.plot(res_plt_cur['alpha'], res_plt_cur['tice+ntc'], c='b', marker='*', ls='-.')
# 2
# plt.plot(res_plt_cur['alpha'], res_plt_cur['baseline_dedpul'], c='r', marker='*', ls='--')
plt.plot(res_plt_cur['alpha'], res_plt_cur['dedpul'], 'ko-')
# 1
plt.plot(res_plt_cur['alpha'], res_plt_cur['e1_en'], c='orange', marker='x', ls=':')
# 2
# plt.plot(res_plt_cur['alpha'], res_plt_cur['cat'], c='g', marker='*', ls='--')
# plt.plot(res_plt_cur['alpha'], res_plt_cur['dedpul_GMM'], c='purple', marker='*', ls='-.')
# plt.plot(res_plt_cur['alpha'], res_plt_cur['dedpul_hist'], c='b', marker='*', ls='-.')
# plt.plot(res_plt_cur['alpha'], res_plt_cur['dedpul_no_heur'], c='darkcyan', marker='*', ls=':')
plt.xlim(0, 1)
plt.yticks([0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5], fontsize='xx-large')
plt.xticks(res_plt_cur['alpha'].unique(), fontsize='xx-large')
if dataset in ylims.keys():
plt.ylim(0, ylims[dataset])
if i >= 5:
plt.xlabel(r'$\alpha$', fontsize='xx-large')
if i% 5 == 0:
if not alpha_mode:
plt.ylabel('1 - accuracy', fontsize='xx-large')
else:
plt.ylabel(r'$\left|\alpha - \widetilde{\alpha}^*\right|$', fontsize='xx-large')
if dataset == 'mushroom':
if not alpha_mode:
# 1
plt.legend(handles=(Line2D([], [], linestyle=':', color='orange', marker='x'),
Line2D([], [], color='purple', marker='v', ls='--'),
Line2D([], [], color='brown', marker='^', ls='-.'),
Line2D([], [], linestyle='-', color='k', marker='o')),
labels=('EN', 'nnPU-brier', 'nnPU-sigmoid', 'DEDPUL'), loc='upper left', fontsize='xx-large')
# 2
# plt.legend(handles=(Line2D([], [], linestyle='-', color='k', marker='o'),
# Line2D([], [], linestyle='--', color='g', marker='*'),
# Line2D([], [], linestyle='-.', color='purple', marker='*'),
# Line2D([], [], linestyle='-.', color='b', marker='*'),
# Line2D([], [], linestyle=':', color='darkcyan', marker='*')),
# labels=('DEDPUL', 'catboost', 'GMM', 'hist', 'no_smooth'),
# loc='upper left', fontsize='xx-large')
else:
# 1
plt.legend(handles=(Line2D([], [], linestyle=':', color='orange', marker='x'),
Line2D([], [], c='g', marker='v', ls='-.'),
Line2D([], [], c='b', marker='*', ls='-.'),
Line2D([], [], linestyle='--', color='r', marker='s'),
Line2D([], [], linestyle='-', color='k', marker='o'),
),
labels=('EN', 'TIcE', 'NTC+TIcE', 'KM2', 'DEDPUL'),
loc='upper left', fontsize='xx-large')
# 2
# plt.legend(handles=(Line2D([], [], linestyle='-', color='k', marker='o'),
# Line2D([], [], linestyle='--', color='r', marker='*'),
# Line2D([], [], linestyle='--', color='g', marker='*'),
# Line2D([], [], linestyle='-.', color='purple', marker='*'),
# Line2D([], [], linestyle='-.', color='b', marker='*'),
# Line2D([], [], linestyle=':', color='darkcyan', marker='*')),
# labels=('DEDPUL', 'simple_alpha', 'catboost', 'GMM', 'hist', 'no_smooth'),
# loc='upper center', fontsize='xx-large')
if save_name:
plt.savefig(save_name + '.png', dpi=dpi)
plot_results_uci(res_pivot_alpha.copy().reset_index(), reverse_alpha=True, alpha_mode=True,
ylims={'bank': 0.4,
'concrete': 0.5,
'landsat': 0.5,
'mushroom': 0.3,
'pageblock': 0.4,
'shuttle': 0.1,
'spambase': 0.5,
'wine': 0.5,
'mnist': 0.5,
'cifar10': 0.5},
# # save_name='uci_alpha_new',
# ylims={'bank': 0.2,
# 'concrete': 0.45,
# 'landsat': 0.16,
# 'mushroom': 0.3,
# 'pageblock': 0.5,
# 'shuttle': 0.2,
# 'spambase': 0.25,
# 'wine': 0.35,
# 'mnist': 0.15,
# 'cifar10': 0.25},
# # save_name='uci_alpha_new_ablation',
)
plot_results_uci(res_pivot_roc.copy().reset_index(), reverse_alpha=True, alpha_mode=False,
ylims={'bank': 0.25,
'concrete': 0.4,
'landsat': 0.15,
'mushroom': 0.1,
'pageblock': 0.25,
'shuttle': 0.05,
'spambase': 0.25,
'wine': 0.05,
'mnist': 0.2,
'cifar10': 0.35},
# # save_name='uci_ac_new',
# ylims={'bank': 0.25,
# 'concrete': 0.3,
# 'landsat': 0.1,
# 'mushroom': 0.05,
# 'pageblock': 0.2,
# 'shuttle': 0.05,
# 'spambase': 0.2,
# 'wine': 0.05,
# 'mnist': 0.15,
# 'cifar10': 0.25},
# # save_name='uci_ac_new_ablation',
)
```
# DEDPUL on specific single data sets
```
# data_mode = 'bank' # 0.11, (5289, 39922, 45211)
# data_mode = 'concrete' # 0.47, (490, 540, 1030)
# data_mode = 'housing' # 0.41, (209, 297, 506)
# data_mode = 'landsat' # 0.44, (2841, 3594, 6435)
data_mode = 'mushroom' # 0.48, (3916, 4208, 8124)
# data_mode = 'pageblock' # 0.1, (560, 4913, 5473)
# data_mode = 'shuttle' # 0.21, (12414, 45586, 58000)
# data_mode = 'spambase' # 0.39, (1813, 2788, 4601)
# data_mode = 'wine' # 0.24, (1599, 4898, 6497)
# data_mode = 'mnist_1' # 0.51, (35582, 34418, 70000)
# data_mode = 'cifar10' # 0.4, (20000, 30000, 50000)
alpha = 0.25
df = read_data(data_mode, truncate=None)
df = make_pu(df, data_mode, alpha=alpha, random_state=None)
alpha = ((df['target'] == 1).sum() / (df['target_pu'] == 1).sum()).item()
gamma = (df['target_pu'] == 1).sum() / df.shape[0]
print(df.shape)
print('alpha =', alpha)
data = df.drop(['target', 'target_pu'], axis=1).values
all_conv = False
if data_mode == 'cifar10':
data = np.swapaxes(data.reshape(data.shape[0], 32, 32, 3), 1, 3)
all_conv = True
target = df['target_pu'].values
preds = estimate_preds_cv(data, target, cv=3, bayes=False, random_state=42, all_conv=all_conv,
n_networks=1, hid_dim=512, n_hid_layers=1, lr=LRS[data_mode], l2=1e-4, bn=True,
train_nn_options={'n_epochs': 500, 'bayes_weight': 1e-5,
'batch_size': 64, 'n_batches': None, 'n_early_stop': 7,
'disp': True, 'loss_function': 'log',
'metric': roc_auc_loss, 'stop_by_metric': False})
# preds = estimate_preds_cv_catboost(
# data, target, cv=5, n_networks=1, random_state=42, n_early_stop=20, verbose=True,
# catboost_params={
# 'iterations': 500, 'depth': 8, 'learning_rate': 0.02, 'l2_leaf_reg': 10,
# },
# )
print('ac', accuracy_score(df['target_pu'], preds.round()))
# print('bac', balanced_accuracy_score(df['target_pu'], preds.round()))
print('roc', roc_auc_score(df['target_pu'], preds))
print('brier', brier_score_loss(df['target_pu'], preds))
bw_mix = 0.05
bw_pos = 0.1
MT_coef = 0.25
threshold = (preds[df['target_pu'].values==1].mean()+preds[df['target_pu'].values==0].mean())/2
threshold_h = preds[df['target_pu'].values==1].mean()
threshold_l = preds[df['target_pu'].values==0].mean()
k_neighbours = df.shape[0] // 20
diff = estimate_diff(preds, df['target_pu'].values, bw_mix, bw_pos, 'logit', threshold, k_neighbours,
MT=False, MT_coef=MT_coef, tune=False)
test_alpha, poster = estimate_poster_em(diff, mode='dedpul', converge=True, nonconverge=True,
max_diff=0.05, step=0.0025, alpha_as_mean_poster=True,
# alpha=alpha,
)
print('alpha:', test_alpha, '\nmean_poster:', np.mean(poster), '\ncons_alpha:', alpha,
'\nbaseline alpha:', 1-min(1/diff))
# print('log_loss:',
# log_loss(df.loc[df['target_pu']==1, 'target'], poster))
print('accuracy:',
accuracy_score(df.loc[df['target_pu']==1, 'target'], poster.round()))
print('balanced_accuracy:',
balanced_accuracy_score(df.loc[df['target_pu']==1, 'target'], poster.round()))
print('precision:',
precision_score(df.loc[df['target_pu']==1, 'target'], poster.round()))
print('recall:',
recall_score(df.loc[df['target_pu']==1, 'target'], poster.round()))
print('roc-auc:',
roc_auc_score(df.loc[df['target_pu']==1, 'target'], poster))
# print('MAE:',
# mean_absolute_error(df.loc[df['target_pu']==1, 'target'], poster))
# print('RMSE:',
# np.sqrt(mean_squared_error(df.loc[df['target_pu']==1, 'target'], poster)))
```
```
import pandas as pd
import pickle
import json
import seaborn as sns
import pprint
import json
import glob
import os
import numpy as np
from ast import literal_eval
pp = pprint.PrettyPrinter(depth=6)
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (15.0, 5.0)
pd.set_option('display.max_columns', 120)
pd.set_option('display.max_rows', 120)
import git
git = git.Git("../../../sonarqube/")
szz_folder = "../../szz"
csv_folder = "../../csv"
```
### Import issues
```
issues = pd.read_csv(f"{csv_folder}/issues.csv", index_col=0)
for date_field in ["created", "duedate", "lastViewed", "resolutiondate", "updated"]:
issues[date_field] = pd.to_datetime(issues[date_field], errors="coerce")
issues = issues[issues.created > '2015-03-17 15:04:32+0000']
issues.head()
```
### Import fixversions
```
fixversions = pd.read_csv(f"{csv_folder}/issues_fixversions.csv", index_col=0)
fixversions.head(5)
```
### Import versions
```
versions = pd.read_csv(f"{csv_folder}/issues_versions.csv", index_col=0)
versions.head(5)
```
### Import tags
```
tags = pd.read_csv(f"{csv_folder}/tags.csv", index_col=0)
tags.Date = pd.to_datetime(tags.Date)
tags["month"] = tags.Date.dt.month
tags["year"] = tags.Date.dt.year
tags
```
## Szz Unleashed - Bug issues with Affected Version
* python3 fetch_jira_bugs/fetch.py --issue-code SONAR --jira-project jira.sonarsource.com
* python3 fetch_jira_bugs/git_log_to_array.py --repo-path ../sonarqube --from-commit b326bfd875b0b41
* python3 fetch_jira_bugs/find_bug_fixes.py --gitlog gitlog.json --issue-list issue/ --gitlog-pattern "SONAR-{nbr}"
* java -jar build/libs/szz_find_bug_introducers-0.1.jar -i ../issue_list.json -r ../../sonarqube/
```
name_config = "jaccard"
working_folder = f"compare_parameters_unleashed/{name_config}"
szz_files = glob.glob(f"{szz_folder}/{working_folder}/issues/*.json")
szz_files
```
### Bug fixing commits
```
data = {}
for szz_file in szz_files:
with open(szz_file, "r") as f:
data.update(json.load(f))
fields =[ 'creationdate',
'resolutiondate',
'commitdate',
'hash']
tuples = []
for key in data.keys():
inner_tuple = []
inner_tuple.append(key)
for field in fields:
inner_tuple.append(data[key][field])
tuples.append(tuple(inner_tuple))
issues_fixing_commit = pd.DataFrame(tuples, columns=["issue_name"]+fields)
issues_fixing_commit
```
### Bug inducing commits
```
def load_fix_inducers():
    result_paths = glob.glob(f"{szz_folder}/{working_folder}/results/*")
    szz_inducing_folders = [path for path in result_paths if os.path.isdir(path)]
    szz_inducing_files = [path for path in result_paths
                          if os.path.isfile(path) and "fix_and_introducers_pairs" in path]
fix_and_introducers_pairs = []
for file in szz_inducing_files:
with open(file, "r") as f:
fix_and_introducers_pairs.append(json.load(f))
for folder in szz_inducing_folders:
with open(f"{folder}/fix_and_introducers_pairs.json", "r") as f:
fix_and_introducers_pairs.append(json.load(f))
fix_and_introducers_pairs_tuples = []
for pair_list in fix_and_introducers_pairs:
for pair in pair_list:
fix_and_introducers_pairs_tuples.append((pair[0], pair[1]))
fix_and_introducers = pd.DataFrame(fix_and_introducers_pairs_tuples, columns=["fixing_commit", "inducing_commit"])
return fix_and_introducers
fix_and_introducers = load_fix_inducers()
fix_and_introducers = fix_and_introducers.drop_duplicates(subset=["fixing_commit", "inducing_commit"], keep="first")
def find_first_tag_contains_commit(commit):
    # drop the empty string that split() yields when no tag contains the commit
    tags = [t for t in git.tag("--contains", commit).split("\n") if t]
    if tags:
        return tags[0]
    return None
def all_tag_contains_commit(commit):
    return [t for t in git.tag("--contains", commit).split("\n") if t]
def find_all_tags_for_df(df, newColumn, func, field):
    cache = {}
    df[newColumn] = np.nan
    for i, row in df.iterrows():
        if row[field] not in cache:
            cache[row[field]] = func(row[field])
        # write through .at: assigning to the iterrows() row copy never updates the DataFrame
        df.at[i, newColumn] = cache[row[field]]
        if i % 100 == 0:
            print(i)
    print(cache)
fix_and_introducers
from pandarallel import pandarallel
pandarallel.initialize()
dataframes = [fix_and_introducers]
#for df in dataframes:
#df["all_affected_tags_fixing"] = df.fixing_commit.parallel_apply(lambda sha: all_tag_contains_commit(sha))
#df["all_affected_tags_inducing"] = df.inducing_commit.apply(lambda sha: all_tag_contains_commit(sha))
#fix_and_introducers.to_csv(f"{csv_folder}/fix_and_introducers_{name_config}.csv")
fix_and_introducers = pd.read_csv(f"{csv_folder}/fix_and_introducers_{name_config}.csv", index_col=0)
fix_and_introducers['all_affected_tags_fixing'] = fix_and_introducers['all_affected_tags_fixing'].apply(lambda x: literal_eval(x))
fix_and_introducers['all_affected_tags_inducing'] = fix_and_introducers['all_affected_tags_inducing'].apply(lambda x: literal_eval(x))
fix_and_introducers["tags_affected_only_inducing"] = fix_and_introducers.apply(\
lambda row: list(set(row.all_affected_tags_inducing)\
.difference(set(row.all_affected_tags_fixing))),axis=1)
fix_and_introducers
```
## Analysis
**Distribution of number of inducing commits per bug**
```
sns.boxplot(x=fix_and_introducers.groupby("fixing_commit").inducing_commit.count())
```
**Percentage of bugs for which we have inducing commits, out of the ones with a fixing commit**
```
fix_and_introducers.fixing_commit.nunique()
issues_fixing_commit.hash.nunique()
len(set(issues_fixing_commit.hash).intersection(set(fix_and_introducers.fixing_commit))) / len(issues_fixing_commit.hash)
```
### Merging all datasets
```
versions = versions.merge(issues[["issue_key", "issue_id"]])
versions
versions_list = versions.groupby(["issue_id", "issue_key"]).version_name.apply(list).reset_index()
versions_list = versions_list.rename(columns={"version_name":"versions"})
versions_list
def merge_with_versions(df):
merge = versions_list.merge(issues_fixing_commit, left_on="issue_key", right_on="issue_name")
inducing_and_versions = df[df.index.isin(df[["fixing_commit", "inducing_commit"]].drop_duplicates().index)]
merge = merge.merge(inducing_and_versions[["fixing_commit", "inducing_commit", "tags_affected_only_inducing", "all_affected_tags_inducing"]], left_on="hash", right_on="fixing_commit")
return merge
merge = merge_with_versions(fix_and_introducers)
```
**Uniform versions from Jira to Github**
```
map_versions = {"8.5.0.37579": "8.5",
"8.4.0.35506": "8.4",
"8.3.0.34182": "8.3",
"8.2.0.32929": "8.2",
"8.1.0.31237": "8.1",
"8.4.2.36762": "8.4.2",
"8.4.1.35646": "8.4.1"}
def replace_versions(tags):
for i, tag in enumerate(tags):
if(tag in map_versions):
tags[i] = map_versions[tag]
return tags
merge.all_affected_tags_inducing = merge.all_affected_tags_inducing.apply(replace_versions)
merge.tags_affected_only_inducing = merge.tags_affected_only_inducing.apply(replace_versions)
def set_intersection(series):
ret_val = set()
for i, l in enumerate(series):
if i == 0:
ret_val = set(l)
else:
ret_val = ret_val.intersection(set(l))
return list(ret_val)
def set_union(series):
ret_val = set()
for i, l in enumerate(series):
if i == 0:
ret_val = set(l)
else:
ret_val = ret_val.union(set(l))
return list(ret_val)
def merge_tags(df):
intersection = df.groupby('issue_id').tags_affected_only_inducing.apply(set_intersection).reset_index()
union = df.groupby('issue_id').tags_affected_only_inducing.apply(set_union).reset_index()
x = intersection.merge(union, on="issue_id", suffixes =["_intersection","_union"])
x = x.rename(columns={"tags_affected_only_inducing_intersection":"tags_intersection",
"tags_affected_only_inducing_union": "tags_union"})
return x
def merge_with_versions(df):
merge = versions_list.merge(df, on="issue_id")
return merge
merged_final = merge_with_versions(merge_tags(merge))
```
**Tags-Intersection: reported versions intersection ratio with SZZ tags**
```
merged_final["intersection_ratio_intersection"] =\
merged_final.apply(\
lambda row: \
len(set(row.versions).intersection(\
set(row.tags_intersection))) \
/ len(row.versions), axis=1)
merged_final.intersection_ratio_intersection.value_counts(normalize=True).sort_index()
```
**Tags-union: reported versions intersection ratio with SZZ tags**
```
merged_final["intersection_ratio_union"] =\
merged_final.apply(\
lambda row: \
len(set(row.versions).intersection(\
set(row.tags_union))) \
/ len(row.versions), axis=1)
merged_final.intersection_ratio_union.value_counts(normalize=True).sort_index()
merged_final
x = merged_final.apply(lambda row: len(row.tags_intersection) / len(row.versions), axis=1)
print(f"Median intersection: {x.median()}")
print(f"Mean intersection: {x.mean()}")
x = merged_final.apply(lambda row: len(row.tags_union) / len(row.versions), axis=1)
print(f"Median union: {x.median()}")
print(f"Mean union: {x.mean()}")
fig, ax =plt.subplots(2,1, sharex=True)
sns.boxplot(x=merged_final.tags_union.apply(lambda x: len(x)), ax=ax[0])
sns.boxplot(x=merged_final.tags_intersection.apply(lambda x: len(x)), ax=ax[1])
#fig.show()
```
**After how many merged reported tags do we find the first match with the versions on Jira?**
```
def find_first_matching_tag_position(row):
for i, tag in enumerate(row.tags_intersection):
for v in row.versions:
if(tag == v):
return i
return -1
from pandarallel import pandarallel
pandarallel.initialize()
res = merged_final.parallel_apply(find_first_matching_tag_position, axis=1)
sns.countplot(x=res[res > -1])
```
```
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import re
import requests
import time
import numpy as np
from collections import defaultdict
from matplotlib import lines, markers
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
plt.style.use('ggplot')
```
## Configuration
```
FLINK_ENDPOINT = 'http://localhost:8081'
SAMPLING_FREQ_SEC = 1
ANALYSIS_DURATION_SEC = 600
def splitComponent(component, pattern):
m = pattern.match(component)
if m:
mdict = matchDict(m)
return mdict['instance'], mdict['component'], mdict['metric']
raise Exception(f'Failed to match {component}!')
def getTaskManagers():
return requests.get(f'{FLINK_ENDPOINT}/taskmanagers').json()['taskmanagers']
def getAvailableTaskManagerMetrics():
return [metric['id'] for metric in requests.get(f'{FLINK_ENDPOINT}/taskmanagers/metrics').json()]
def getTaskManagerMetrics(tmID, metrics):
metricString = ','.join(metrics)
return requests.get(f'{FLINK_ENDPOINT}/taskmanagers/{tmID}/metrics', params={'get': metricString}).json()
def getAvailableVertexMetrics(jobID, vertexID):
return [metric['id'] for metric in requests.get(f'{FLINK_ENDPOINT}/jobs/{jobID}/vertices/{vertexID}/metrics').json()]
def getJobInfo(jobID):
return requests.get(f'{FLINK_ENDPOINT}/jobs/{jobID}').json()
def getAvailableJobMetrics(jobID):
jobMetrics = dict() # vertexID -> [metrics]
vertexes = getJobInfo(jobID)['vertices']
for vertex in vertexes:
jobMetrics[vertex['id']] = getAvailableVertexMetrics(jobID, vertex['id'])
return jobMetrics
def getJobMetricNames(jobMetrics, namePattern):
metricNames = set()
for metrics in jobMetrics.values():
for metric in metrics:
m = namePattern.match(metric)
if m:
metricNames.add(m.group('metric'))
return metricNames
def updateVertexRequests(selectedMetrics, availableJobMetrics, namePattern, vertexRequests):
"""Given a list of metrics names, update the given vertex requests dictonary
Keyword arguments:
selectedMetrics -- A list of metric names
availableJobMetrics -- A dictionary of vertexID: [metrics], as returned by getAvailableJobMetrics
namePattern -- A regular expression that converts metricName -> metric (and optionally filters unwanted metrics)
vertexRequests -- A dictionary of vertexID: [metrics] that will be used for recording statistics
"""
for vertexID, availableMetrics in availableJobMetrics.items():
filteredMetrics = []
for metric in availableMetrics:
m = namePattern.match(metric)
if m and m.group('metric') in selectedMetrics:
filteredMetrics.append(metric)
vertexRequests[vertexID] = filteredMetrics
print(f'{len(vertexRequests[vertexID])} metrics for {vertexID}')
def selectVertexMetrics(availableJobMetrics, vertexRequests, namePattern, description):
widget = widgets.SelectMultiple(description=description, options=getJobMetricNames(availableJobMetrics, namePattern))
interact(updateVertexRequests, selectedMetrics=widget, namePattern=fixed(namePattern),
availableJobMetrics=fixed(availableJobMetrics), vertexRequests=fixed(vertexRequests))
def getVertexMetrics(jobID, vertexID, metrics, maxRequestLength=40):
def rawGetJobMetrics(jobID, vertexID, metrics):
metricString = ','.join(metrics)
return requests.get(f'{FLINK_ENDPOINT}/jobs/{jobID}/vertices/{vertexID}/metrics', params={'get': metricString}).json()
completeJSON = []
# Split metric requests so that the request string does not become too long
for i in range(0, len(metrics), maxRequestLength):
partialMetrics = metrics[i:i+maxRequestLength]
completeJSON += rawGetJobMetrics(jobID, vertexID, partialMetrics)
return completeJSON
def matchDict(match):
d = defaultdict(lambda: 'DEFAULT')
matchDict = match.groupdict()
d.update(matchDict)
return d
def plotAggregated(df, ax, startTime, aggregateFor, groupBy):
markerstyles = list(markers.MarkerStyle.markers.keys())
aggregated = df.groupby(aggregateFor).aggregate({'value': [np.mean, np.std]})
for i, (name, group) in enumerate(aggregated.groupby(level=groupBy)):
data = group.reset_index()
data.t -= startTime
ax.plot(data.t, data.value['mean'], alpha=.7, label=name[0][:5] + '_' + name[1][:15],
marker=markerstyles[i % len(markerstyles)], markevery=10, markersize=5)
ax.fill_between(data.t, data.value['mean'] - data.value['std']/2, data.value['mean'] + data.value['std']/2, alpha=.3)
def recordVertexMetrics(df, vertexMetrics, timestamp, namePattern):
    # DataFrame.append was removed in pandas 2.0; collect rows and concat once instead
    rows = []
    for vertexID, metrics in vertexMetrics.items():
        metricValues = getVertexMetrics(jobID, vertexID, metrics)
        for metric in metricValues:
            componentInstance, componentName, baseMetric = splitComponent(metric['id'], namePattern)
            rows.append({'t': int(timestamp),
                         'vertex': vertexID,
                         'component': componentName,
                         'instance': componentInstance,
                         'metric': baseMetric,
                         'value': float(metric['value'])})
    return pd.concat([df, pd.DataFrame(rows)], ignore_index=True) if rows else df
def recordTaskManagerMetrics(df, taskManagers, taskManagerMetrics, timestamp):
    # Same row-collection approach as recordVertexMetrics, for the same reason
    rows = []
    for tm in taskManagers:
        metricValues = getTaskManagerMetrics(tm['id'], taskManagerMetrics)
        for metric in metricValues:
            rows.append({'t': int(timestamp),
                         'tm': tm['id'],
                         'metric': metric['id'],
                         'value': float(metric['value'])})
    return pd.concat([df, pd.DataFrame(rows)], ignore_index=True) if rows else df
# Get running job
# TODO: Report for multiple jobs
jobs = requests.get(f'{FLINK_ENDPOINT}/jobs').json()['jobs']
taskManagers = getTaskManagers()
runningJobs = [job for job in jobs if job['status'] == 'RUNNING']
assert len(runningJobs) == 1, 'Toolkit can only work with exactly one running job!'
jobID = runningJobs[0]['id']
print(f'Reporting for job "{jobID}"')
jobData = pd.DataFrame(columns=['t', 'vertex', 'component', 'instance', 'metric', 'value'])
jobData['t'] = jobData['t'].astype(int)
jobData['value'] = jobData['value'].astype(float)
tmData = pd.DataFrame(columns=['t', 'tm', 'metric', 'value'])
tmData['t'] = tmData['t'].astype(int)
tmData['value'] = tmData['value'].astype(float)
# Regex patterns for splitting metric names
OPERATOR_METRIC_PATTERN = re.compile(r'^(?P<instance>\d+)\.(?P<component>.+)\.(?P<metric>.+)?$')
TASK_METRIC_PATTERN = re.compile(r'^(?P<instance>\d+)\.(?P<metric>[^\.]+)$')
# Task and operator metrics need to be requested from specific vertexes
taskMetrics = dict()
operatorMetrics = dict()
availableJobMetrics = getAvailableJobMetrics(jobID)
selectVertexMetrics(availableJobMetrics, taskMetrics, TASK_METRIC_PATTERN, 'task metrics')
selectVertexMetrics(availableJobMetrics, operatorMetrics, OPERATOR_METRIC_PATTERN, 'op metrics')
# Task Manager Metrics are requested from all taskmanagers
# so we just maintain the names
taskManagerMetrics = None
@interact(selectedMetrics=widgets.SelectMultiple(description='tm metrics', options=getAvailableTaskManagerMetrics()))
def selectTaskManagerMetrics(selectedMetrics):
global taskManagerMetrics
taskManagerMetrics = selectedMetrics if selectedMetrics else []
%matplotlib inline
%load_ext autoreload
%autoreload 2
%matplotlib notebook
def plotMetric(df, ax, startTime, metric, aggregateFor, groupBy):
ax.clear()
plotAggregated(df[df.metric == metric], ax, startTime, aggregateFor, groupBy)
ax.legend()
ax.set(xlabel='sec', title=metric)
taskMetricNames = list(getJobMetricNames(taskMetrics, TASK_METRIC_PATTERN))
operatorMetricNames = list(getJobMetricNames(operatorMetrics, OPERATOR_METRIC_PATTERN))
nPlots = len(taskManagerMetrics) + len(operatorMetricNames) + len(taskMetricNames)
fig, axes = plt.subplots(figsize=(8, 4*nPlots), nrows=nPlots, sharex=True, squeeze=False)
plt.ion()
fig.show()
fig.canvas.draw()
startTime = time.time()
currentTime = startTime
minimumRecordedTime = min(jobData.t.min(), tmData.t.min())
referenceTime = int(startTime) if np.isnan(minimumRecordedTime) else minimumRecordedTime
try:
while currentTime - startTime < ANALYSIS_DURATION_SEC:
# Retrieve metrics through Flink's REST API
jobData = recordVertexMetrics(jobData, taskMetrics, currentTime, TASK_METRIC_PATTERN)
jobData = recordVertexMetrics(jobData, operatorMetrics, currentTime, OPERATOR_METRIC_PATTERN)
tmData = recordTaskManagerMetrics(tmData, taskManagers, taskManagerMetrics, currentTime)
# Plot task and operator metrics
axesIndex = 0
for metric in taskMetricNames + operatorMetricNames:
ax = axes[axesIndex][0]
plotMetric(jobData, ax, referenceTime, metric, ['t', 'vertex', 'component'], ['vertex', 'component'])
axesIndex += 1
# Plot task manager metrics
for metric in taskManagerMetrics:
ax = axes[axesIndex][0]
plotMetric(tmData, ax, referenceTime, metric, ['t', 'tm', 'metric'], ['tm', 'metric'])
axesIndex += 1
fig.canvas.draw()
currentTime = time.time()
time.sleep(SAMPLING_FREQ_SEC)
except KeyboardInterrupt:
pass
```
```
# Initialize Otter
import otter
grader = otter.Notebook("quiz01.ipynb")
```
# Quiz 1
## Data 94, Spring 2021
This quiz is meant to be completed in **50 minutes**, from 11:10AM-12:00PM on Friday, February 12th.
### Test Cases
Unlike in the homework assignments, the test cases that you see in this notebook **are not** the final test cases that your quiz will be graded on. The test cases that you see in this notebook only test your code on one example (typically one that is given in the question), but your final grade on each question will be determined using a series of "hidden tests" which you will only see once we release grades. **In other words, just because you see `q3 passed!` does not necessarily mean you answered Question 3 correctly.**
That means, after you complete a question, you should try out a few examples on your own to make sure your code is doing what it is supposed to. We will also be manually grading your code to make sure it truly does what it is supposed to.
### Grading
There are 5 questions in this quiz, each worth 3 points (not including Question 0, which is required but worth 0 points). **However, the maximum possible score on this quiz is 12.** This means that of the 5 questions on the quiz, you only need to complete 4 for full credit. If you complete all 5, you will not receive any extra credit (i.e. the max score is still 12/12), but it may help you in the event that you don't receive full credit on one of your other questions.
Each question has 6 test cases, but you only get to see 1 of them in this notebook. Each test case is worth an equal number of points. You will see your full score on the quiz within a few days of submitting.
### Submission
After finishing the quiz, submit it to Gradescope just like any other homework assignment.
```
# Don't worry about what this cell does, just run it.
import numpy as np
```
## Question 0 – Academic Honesty Statement [Points: 0, but you must do it!]
### Copy the following statement into the cell below, and replace `{YOUR NAME HERE}` with your full name.
"I, {YOUR NAME HERE}, certify that all work on this quiz is my own. I did not communicate with any other human beings while taking this exam, and I did not refer to any materials that were disallowed by course policy."
_Type your answer here, replacing this text._
## Question 1 – Triangles [Points: 3]
Heron's formula states that the area of a triangle whose sides have lengths $a$, $b$, and $c$ is
$$A = \sqrt{s(s-a)(s-b)(s-c)}$$
where $s$ is half of the triangle's perimeter, $s = \frac{a + b + c}{2}$.
Complete the implementation of the function `triangle_area`, which takes in side lengths `a`, `b`, and `c`, and returns the area of the corresponding triangle, as a float.
**If there is no triangle with side lengths $a$, $b$, and $c$ – that is, if $s(s - a)(s - b)(s - c)$ is less than or equal to 0, or if any of $a$, $b$, or $c$ is less than or equal to 0 – then your function should return `-1`.** Make sure to check for this, or your answer will not receive full credit!
For example, `triangle_area(3, 4, 5)` should return `6.0`, and `triangle_area(3, 3, 100)` should return `-1`.
<!--
BEGIN QUESTION
name: q1
points:
each: 0.5
-->
```
def triangle_area(a, b, c):
...
grader.check("q1")
```
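After the quiz, you can sanity-check your answer by translating Heron's formula directly. The sketch below is illustrative only, not the graded solution, and is named `triangle_area_sketch` so it does not clash with your answer:

```python
import math

def triangle_area_sketch(a, b, c):
    # Reject non-positive side lengths up front
    if a <= 0 or b <= 0 or c <= 0:
        return -1
    s = (a + b + c) / 2  # half the perimeter
    product = s * (s - a) * (s - b) * (s - c)
    # A non-positive product means no triangle has these side lengths
    if product <= 0:
        return -1
    return math.sqrt(product)
```

For instance, `triangle_area_sketch(3, 4, 5)` gives `6.0`, and `triangle_area_sketch(3, 3, 100)` gives `-1`.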
## Question 2 – Lines [Points: 3]
The equation of a line is $y = mx + b$. Given two points $(x_0, y_0)$ and $(x_1, y_1)$, we can find $m$ and $b$ using the following formulas:
$$m = \frac{y_1 - y_0}{x_1 - x_0}$$
$$b = y_0 - mx_0 = y_1 - mx_1$$
We say multiple points are **collinear** if they all fall on a single straight line. To check if three points are collinear, we can do the following:
- Use the first two points, $(x_0, y_0)$ and $(x_1, y_1)$, to determine $m$ and $b$.
- The points are collinear if $y_2$ is equal to $mx_2 + b$.
In this question, `x` and `y` are both lists with three elements, corresponding to the $x$ and $y$ values of three points. For instance, `x = [2, 8, -1]` and `y = [1, 4, 2]` correspond to the three points $(2, 1), (8, 4),$ and $(-1, 2)$ (so $x_i$ is the element at position `i` in `x`).
Complete the implementation of the function `collinear`, which takes in two lists `x` and `y` and returns `True` if the provided points are collinear, and `False` if they are not.
Assume that all elements of `x` and `y` are ints, and that `m` and `b` are also ints (this way, we don't need to worry about floating point issues). Also assume that all elements in `x` are unique.
For example, `collinear([0, 1, 2], [3, 4, 5])` should return the boolean `True`, since these three points lie on the line $y = x + 3$. `collinear([1, 2, 10], [5, 10, 15])` should return the boolean `False`, since the first two points lie on the line $y = 5x$ while the third point does not.
<!--
BEGIN QUESTION
name: q2
points:
each: 0.5
-->
```
def collinear(x, y):
m = (y[1] - y[0]) // (x[1] - x[0])
b = ...
on_line = ...
return on_line
grader.check("q2")
```
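The two formulas above translate into code almost line by line. This sketch (illustrative only, not the graded solution) uses the integer arithmetic the question guarantees:

```python
def collinear_sketch(x, y):
    # Slope and intercept from the first two points; the question
    # guarantees m and b are ints, so integer division is safe here
    m = (y[1] - y[0]) // (x[1] - x[0])
    b = y[0] - m * x[0]
    # The third point is collinear iff it satisfies y = m*x + b
    return y[2] == m * x[2] + b
```

For example, `collinear_sketch([0, 1, 2], [3, 4, 5])` returns `True` and `collinear_sketch([1, 2, 10], [5, 10, 15])` returns `False`.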
## Question 3 – Calendar [Points: 3]
Gerald is trying to remember his dad's birthday, which is on May 16.
Complete the implementation of the function `check_birthday`, which takes in two arguments, `month` and `day`, and returns whether the corresponding date is before, on, or after May 16. `check_birthday` should return the string `'Before'`, `'Birthday'`, or `'After'`.
Assume that `month` is an integer in the range from 1 (indicating January) to 12 (indicating December), and that `day` is an integer in the range from 1 to 31. You can assume that the day of the month will be valid for the given month. For example, `check_birthday(5, 16)` should return the string `'Birthday'`, and `check_birthday(11, 26)` should return the string `'After'`.
<!--
BEGIN QUESTION
name: q3
points:
each: 0.5
-->
```
def check_birthday(month, day):
...
grader.check("q3")
```
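One way to compare dates, shown here as an illustrative sketch rather than the graded answer, is to compare `(month, day)` tuples, which Python orders lexicographically (month first, then day):

```python
def check_birthday_sketch(month, day):
    # Tuples compare element by element, so (4, 30) < (5, 16) < (5, 17)
    if (month, day) < (5, 16):
        return 'Before'
    elif (month, day) == (5, 16):
        return 'Birthday'
    else:
        return 'After'
```

So `check_birthday_sketch(5, 16)` returns `'Birthday'` and `check_birthday_sketch(11, 26)` returns `'After'`.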
## Question 4 – Email Verification [Points: 3]
We define a "valid Berkeley email address" to be one that follows the structure `'username@berkeley.edu'`.
Complete the implementation of the function `extract_calnet`, which takes in a string `email`, and returns the username portion of `email` if it is a valid Berkeley email address, and `False` if it is not. Assume that `email` has exactly one `'@'` symbol.
For example, `extract_calnet('dirks@berkeley.edu')` should return the string `'dirks'`, and both `extract_calnet('me@data.berkeley.edu')` and `extract_calnet('berkeley@gmail.com')` should return the boolean `False`.
_Hint: Use string indexing!_
<!--
BEGIN QUESTION
name: q4
points:
each: 0.5
-->
```
def extract_calnet(email):
at_index = ...
is_valid = ...
if is_valid:
output = ...
else:
output = ...
return output
grader.check("q4")
```
## Question 5 – Multiplicative Persistency [Points: 3]
We define the "multiplicative persistence" of a two-digit number $n$ to be the number of times the digits of $n$ need to be multiplied until the result is a single digit number.
Consider 77:
$$77 \rightarrow 7 \cdot 7 = 49 \rightarrow 4 \cdot 9 = 36 \rightarrow 3 \cdot 6 = 18 \rightarrow 1 \cdot 8 = 8$$
For 77 we had to repeat this sequence 4 times; thus, we say the persistency of 77 is 4, so `persistency(77)` should return the int `4`. Similarly, `persistency(18)` should return the int `1`.
Complete the implementation of the function `persistency`, which takes in a two-digit integer `n` and returns its multiplicative persistence.
**Note:** You are not allowed to use `str` in your implementation.
<!--
BEGIN QUESTION
name: q5
points:
each: 0.5
-->
```
def persistency(n):
count = ...
while n >= 10:
n = ...
count = ...
return count
grader.check("q5")
```
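Since `str` is off limits, the digits of a number can be taken with `// 10` (tens digit) and `% 10` (ones digit). An illustrative sketch of the loop, not the graded solution:

```python
def persistency_sketch(n):
    count = 0
    while n >= 10:
        # Multiply the tens digit by the ones digit, no strings needed
        n = (n // 10) * (n % 10)
        count += 1
    return count
```

Tracing 77: 49, 36, 18, 8 — four multiplications, so `persistency_sketch(77)` returns `4`; `persistency_sketch(18)` returns `1`.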
---
To double-check your work, the cell below will rerun all of the autograder tests.
```
grader.check_all()
```
## Submission
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
```
# Save your notebook first, then run this cell to export your submission.
grader.export()
```
```
import warnings
warnings.simplefilter("ignore")
from altaipony.lcio import from_mast
import numpy as np
import matplotlib.pyplot as plt
```
Find and download the LightCurveFile for your light curve:
```
flc = from_mast("TIC 29780677", mode="LC", c=2, cadence="short", mission="TESS", author="SPOC")
flc
```
De-trend the light curve:
```
flcd = flc.detrend("savgol")
```
Now you can visually compare the results:
```
%matplotlib inline
plt.figure(figsize=(12,5))
plt.plot(flcd.time.value, flcd.flux / np.nanmedian(flcd.flux)+0.1, c="r",
label="PDCSAP_FLUX")
plt.plot(flcd.time.value, flcd.detrended_flux / np.nanmedian(flcd.detrended_flux),
"b", label="detrended flux")
plt.xlabel("Time - 2457000 [BTJD days]")
plt.ylabel(r"Flux [e$^-$s$^{-1}$]")
plt.xlim(flcd.time.value[0], flcd.time.value[-1])
plt.ylim(.95,1.30)
plt.legend(loc=2,fontsize=13);
#plt.xlim(1356.7,1357) # not a flare
#plt.xlim(1380,1380.5) # the largest flare
#plt.xlim(1375.8,1376.3) # the second largest flare
```
The periodicity in the raw light curve disappears in the de-trended one, but outliers and noise are still there.
Where are the flares?
```
flcd = flcd.find_flares()
flcd.flares.sort_values(by="ed_rec", ascending=False)
```
Not all visible outliers qualify as flare candidates, because we require a minimum of three consecutive outliers for any candidate. We also apply a set of other criteria, which you can read up on [here in the docs](https://altaipony.readthedocs.io/en/latest/tutorials/altai.html).
What did the de-trending do to the flare candidates and their properties?
Let's find out and try to find flares in the raw light curve.
```
flc.detrended_flux = flc.flux
flc.detrended_flux_err = flc.flux_err
flc = flc.find_flares()
flc.flares.sort_values(by="ed_rec", ascending=False)
```
The large flares appear larger, and there is a small extra candidate. The former is because these flares occur where stellar variability adds flux to the flare; without de-trending, that variability is not accounted for. The extra flare is not recognized as a candidate in the de-trended light curve because it does not stick out far enough (above the 3-sigma level) from the quiescent flux.
But there is likely also an effect introduced by the de-trending algorithm itself. Here is where injection-recovery comes in:
```
flcd, fakeflc = flcd.sample_flare_recovery(inject_before_detrending=True, mode="savgol",
iterations=50, fakefreq=1, ampl=[1e-4, 0.5],
dur=[.001/6., 0.1/6.]
)
fig, ax = plt.subplots(figsize=(12,5))
fakeflc.plot(ax=ax,c="r", label="TIC 29780677 with injected flares")
flcd.plot(ax=ax,c="k");
print("The total number of injected flares is {}.".format(flcd.fake_flares.shape[0]))
print("Choose the bins below such that the number in each ampl-dur bin is >10 on average.")
flcc = flcd.characterize_flares(ampl_bins=10, dur_bins=10)
flcc.flares[["dur", "ampl_rec","ed_rec","tstart",
"ed_ratio","recovery_probability",
"ed_ratio_count","recovery_probability_std",
"ed_corr",]].sort_values(by="ed_rec")
```
Depending on the statistics of the injected flares, the middle-sized flare may not have been captured by the injection-recovery grid (or only with a small number in `ed_ratio_count`).
We can solve this by increasing the number of iterations in `sample_flare_recovery`, by changing the range of amplitudes and durations injected, or both. The latter is difficult to choose a priori because we do not know how recovered amplitudes and durations map to injected ones. For now, you will have to tinker with the parameters, like `dur` and `ampl` in `sample_flare_recovery`, or `ampl_bins` and `dur_bins` in `characterize_flares`.
A cross-check for the quality of the ED ratio (`ed_ratio`) and recovery probability (`recovery_probability`) values are the respective counts, that is, the number of synthetic flares that exhibit the true and recovered characteristics. Usually, ~100 events in a bin is a good heuristic. You should also check the `_std` values for the intrinsic scatter in `ed_corr` and `recovery_probability`. The latter will show that the uncertainty on the recovery probability for the small flares is ~0.4, which you may choose as a criterion for excluding this candidate from the results.
Caution: a low number in `ed_ratio_count` may also mean that the recovery probability at these energies is just low, especially when `ed_rec` is small.
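To make the bookkeeping concrete, here is a toy sketch of how a per-bin recovery probability is estimated from injected flares. All names and numbers are hypothetical, and the random draws only stand in for AltaiPony's actual pipeline, which injects synthetic flares into the light curve and re-runs detection:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bookkeeping: each injected flare has a true amplitude and a
# flag recording whether the detrending + detection pipeline recovered it.
# Small flares are recovered less often, mimicking a real efficiency curve.
ampl_true = rng.uniform(1e-4, 0.5, size=2000)
recovered = rng.random(2000) < np.clip(ampl_true / 0.05, 0.05, 0.95)

# Bin by injected amplitude and compute the recovery fraction per bin,
# analogous to what characterize_flares reports as recovery_probability.
bins = np.linspace(1e-4, 0.5, 11)
idx = np.digitize(ampl_true, bins) - 1
for i in range(10):
    in_bin = idx == i
    count = int(in_bin.sum())
    prob = recovered[in_bin].mean()
    print(f"ampl bin {i}: n={count:4d}  recovery_probability={prob:.2f}")
```

The per-bin `n` here plays the role of the counts discussed above: a probability estimated from only a handful of synthetic flares is not trustworthy.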
Questions? Something does not work? Email me @ eilin@aip.de