# Getting Started
## Platforms to Practice
Let us understand different platforms we can leverage to practice Apache Spark using Python.
* Local Setup
* Databricks Platform
* Setting up your own cluster
* Cloud based labs
## Setup Spark Locally - Ubuntu
Let us setup Spark Locally on Ubuntu.
* Install latest version of Anaconda
* Make sure Jupyter Notebook is setup and validated.
* Setup Spark and Validate.
* Setup Environment Variables to integrate Pyspark with Jupyter Notebook.
* Launch Jupyter Notebook using `pyspark` command.
* Setup PyCharm (IDE) for application development.
## Setup Spark Locally - Mac
Let us setup Spark Locally on Mac.
* Install latest version of Anaconda
* Make sure Jupyter Notebook is setup and validated.
* Setup Spark and Validate.
* Setup Environment Variables to integrate Pyspark with Jupyter Notebook.
* Launch Jupyter Notebook using `pyspark` command.
* Setup PyCharm (IDE) for application development.
## Signing up for ITVersity Labs
Here are the steps for signing up for ITVersity Labs.
* Go to https://labs.itversity.com
* Sign up to our website
* Purchase lab access
* Go to lab page and create lab account
* Login and practice
## Using ITVersity Labs
Let us understand how to submit the Spark Jobs in ITVersity Labs.
* You can either use Jupyter based environment or `pyspark` in terminal to submit jobs in ITVersity labs.
* You can also submit Spark jobs using `spark-submit` command.
* As we are using Python we can also use the help command to get the documentation - for example `help(spark.read.csv)`
## Interacting with File Systems
Let us understand how to interact with file system using %fs command from Databricks Notebook.
* We can access datasets using %fs magic command in Databricks notebook
* By default, we will see files under dbfs
* We can list files using the `ls` command - e.g.: `%fs ls`
* Databricks provides a lot of datasets for free under `databricks-datasets`
* If the cluster is integrated with AWS or Azure Blob Storage, we can access files by specifying the appropriate protocol (e.g.: `s3://` for S3)
* List of commands available under `%fs`
  * Copying files or directories - `cp`
  * Moving files or directories - `mv`
  * Creating directories - `mkdirs`
  * Deleting files and directories - `rm`
  * We can copy or delete directories recursively using `-r` or `--recursive`
## Getting File Metadata
Let us review the source location to get number of files and the size of the data we are going to process.
* Location of airlines data: `/public/airlines_all/airlines`
* We can get the details of the files using `hdfs dfs -ls /public/airlines_all/airlines`
* Spark uses HDFS APIs to interact with the file system, and we can access those APIs via `sc._jsc` and `sc._jvm` to get file metadata.
* Here are the steps to get the file metadata.
  * Get the Hadoop configuration using `sc._jsc.hadoopConfiguration()` - let's say `conf`
  * We can pass `conf` to `sc._jvm.org.apache.hadoop.fs.FileSystem.get` to get a `FileSystem` object - let's say `fs`
  * We can build a `Path` object by passing the path as a string to `sc._jvm.org.apache.hadoop.fs.Path`
  * We can invoke `listStatus` on `fs`, passing the path, which returns an array of `FileStatus` objects - let's say `files`
  * Each `FileStatus` object holds all the metadata of one file.
  * We can use `len` on `files` to get the number of files.
  * We can use `getLen()` on each `FileStatus` object to get the size of each file.
  * The cumulative size of all files can be computed using `sum(map(lambda file: file.getLen(), files))`
Let us first get the list of files.
```
hdfs dfs -ls /public/airlines_all/airlines
```
Here is the consolidated script to get number of files and cumulative size of all files in a given folder.
```
conf = sc._jsc.hadoopConfiguration()
fs = sc._jvm.org.apache.hadoop.fs.FileSystem.get(conf)
path = sc._jvm.org.apache.hadoop.fs.Path("/public/airlines_all/airlines")
files = fs.listStatus(path)
sum(map(lambda file: file.getLen(), files))/1024/1024/1024
```
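For comparison, the same two metrics (file count and cumulative size) can be computed for a local directory with plain Python. This is a sketch using only the standard library; `dir_stats` is a hypothetical helper, not part of the labs environment.

```python
import os

def dir_stats(path):
    """Return (number of files, cumulative size in bytes) under path."""
    n_files, total_size = 0, 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            n_files += 1
            total_size += os.path.getsize(os.path.join(root, name))
    return n_files, total_size
```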
# Predicting Student Admissions with Neural Networks
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
## Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and NumPy. You can read the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/
```
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
```
## Plotting the data
First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
```
# Importing matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
```
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
```
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
```
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
## TODO: One-hot encoding the rank
Use the `get_dummies` function in pandas in order to one-hot encode the data.
Hint: To drop a column, it's suggested that you use `one_hot_data`[.drop( )](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html).
```
# TODO: Make dummy variables for rank and concat existing columns
one_hot_data = None
# Print the first 10 rows of our data
one_hot_data[:10]
```
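One way to complete the TODO above, shown as a reference sketch — the tiny inline DataFrame here is a stand-in for `student_data.csv`, with the same column names assumed.

```python
import pandas as pd

# Stand-in for the real data (same columns as student_data.csv)
data = pd.DataFrame({'admit': [0, 1], 'gre': [380, 660],
                     'gpa': [3.61, 3.67], 'rank': [3, 1]})

# Make dummy variables for rank and concat them with the existing columns
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)

# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)

print(list(one_hot_data.columns))  # ['admit', 'gre', 'gpa', 'rank_1', 'rank_3']
```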
## TODO: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
```
# Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns
# Printing the first 10 rows of our processed data
processed_data[:10]
```
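A reference sketch for the scaling TODO above — dividing `gpa` by 4.0 and `gre` by 800 as described in the text, using toy values instead of the real data.

```python
import pandas as pd

# Toy stand-in for the one-hot-encoded data
processed_data = pd.DataFrame({'gre': [400.0, 800.0], 'gpa': [2.0, 4.0]})

# Scale the columns into the 0-1 range
processed_data['gre'] = processed_data['gre'] / 800
processed_data['gpa'] = processed_data['gpa'] / 4.0

print(processed_data['gre'].tolist())  # [0.5, 1.0]
print(processed_data['gpa'].tolist())  # [0.5, 1.0]
```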
## Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
```
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
```
## Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
```
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
```
## Training the 2-layer Neural Network
The following function trains the 2-layer neural network. First, we'll write some helper functions.
```
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
```
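As a quick sanity check of `sigmoid_prime` above, we can compare it against a centered finite difference (a sketch; the test point is arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

# Centered finite-difference approximation of the derivative
x, eps = 0.7, 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.isclose(numeric, sigmoid_prime(x)))  # True
```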
## TODO: Backpropagate the error
Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$
```
# TODO: Write the error term formula
def error_term_formula(x, y, output):
pass
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Activation of the output unit
# Notice we multiply the inputs and the weights here
# rather than storing h as a separate variable
output = sigmoid(np.dot(x, weights))
# The error, the target minus the network output
error = error_formula(y, output)
# The error term
error_term = error_term_formula(x, y, output)
# The gradient descent step, the error times the gradient times the inputs
del_w += error_term * x
# Update the weights here. The learning rate times the
# change in weights, divided by the number of records to average
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
print("Epoch:", e)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
print("=========")
print("Finished training!")
return weights
weights = train_nn(features, targets, epochs, learnrate)
```
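One way to fill in the error-term TODO above (a sketch): since `output = sigmoid(h)`, the derivative $\sigma'(h)$ equals `output * (1 - output)`, so the term $(y-\hat{y})\,\sigma'$ can be computed from the output alone.

```python
import numpy as np

def error_term_formula(x, y, output):
    # (y - y_hat) * sigmoid'(h), with sigmoid'(h) = output * (1 - output)
    return (y - output) * output * (1 - output)

out = 1 / (1 + np.exp(-0.3))             # sigmoid(0.3)
print(error_term_formula(None, 1, out))  # a small positive number
```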
## Calculating the Accuracy on the Test Data
```
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
## MEG Group Analysis
Group analysis for MEG data, for the FOOOF paper.
The Data Source is from the
[Human Connectome Project](https://www.humanconnectome.org/)
This notebook is for group analysis of MEG data using the
[omapping](https://github.com/voytekresearch/omapping) module.
```
%matplotlib inline
from scipy.io import loadmat
from scipy.stats.stats import pearsonr
from om.meg.single import MegSubj
from om.meg.single import print_corrs_mat, print_corrs_vec
from om.meg.group import MegGroup
from om.meg.group import osc_space_group
from om.plts.meg import *
from om.core.db import OMDB
from om.core.osc import Osc
from om.core.io import load_obj_pickle, save_obj_pickle
```
## Settings
```
SAVE_FIG = False
```
### Setup
```
# Get database object
db = OMDB()
# Check what data is available
# Note: this function is out of date (checks the wrong file folder)
sub_nums, source = db.check_data_files(dat_type='fooof', dat_source='HCP', verbose=True)
# Drop outlier subject
sub_nums = list(set(sub_nums) - set([662551]))
```
### Oscillation Band Definitions
```
# Set up oscillation band definitions to use
osc = Osc()
osc.add_band('Theta', [3, 7])
osc.add_band('Alpha', [7, 14])
osc.add_band('Beta', [15, 30])
```
### Load Data
```
# Initialize MegGroup object
meg_group = MegGroup(db, osc)
# Add subjects to meg_group
for i, subj in enumerate(sub_nums):
meg_subj = MegSubj(OMDB(), source[i], osc) # Initialize MegSubj object
meg_subj.import_fooof(subj, get_demo=True) # Import subject data
meg_subj.all_oscs(verbose=False) # Create vectors of all oscillations
meg_subj.osc_bands_vertex() # Get oscillations per band per vertex
meg_subj.peak_freq(dat='all', avg='mean') # Calculate peak frequencies
meg_group.add_subject(meg_subj, # Add subject data to group object
add_all_oscs=True, # Whether to include all-osc data
add_vertex_bands=True, # Whether to include osc-band-vertex data
add_peak_freqs=True, # Whether to include peak frequency data
add_vertex_oscs=False, # Whether to include all-osc data for each vertex
add_vertex_exponents=True, # Whether to include the aperiodic exponent per vertex
add_demo=True) # Whether to include demographic information
# OR: Check available saved files to load one of them
meg_files = db.check_res_files('meg')
# Load a pickled file
#meg_group = load_obj_pickle('meg', meg_files[2])
```
### Data Explorations
```
# Check how many subjects group includes
print('Currently analyzing ' + str(meg_group.n_subjs) + ' subjects.')
# Check data descriptions - sex
print('# of Females:\t', sum(np.array(meg_group.sex) == 'F'))
print('# of Males:\t', sum(np.array(meg_group.sex) == 'M'))
# Check some simple descriptives
print('Number of oscillations found across the whole group: \t', meg_group.n_oscs_tot)
print('Average number of oscillations per vertex: \t\t {:1.2f}'.format(np.mean(meg_group.n_oscs / 7501)))
# Plot all oscillations across the group
plot_all_oscs(meg_group.centers_all, meg_group.powers_all, meg_group.bws_all,
meg_group.comment, save_out=SAVE_FIG)
```
### Save out probabilities per frequency range
....
```
# Check for oscillations above / below fitting range
# Note: this is a quirk of older FOOOF version - fixed in fitting now
print(len(meg_group.centers_all[meg_group.centers_all < 2]))
print(len(meg_group.centers_all[meg_group.centers_all > 40]))
# Calculate probability of observing an oscillation in each frequency
bins = np.arange(0, 43, 1)
counts, freqs = np.histogram(meg_group.centers_all, bins=bins)
probs = counts / meg_group.n_oscs_tot
# Fix for the oscillation out of range
add = sum(probs[0:3]) + sum(probs[35:])
freqs = freqs[3:35]
probs = probs[3:35]
probs = probs + (add/len(probs))
# np.save('freqs.npy', freqs)
# np.save('probs.npy', probs)
```
## BACK TO NORMAL PROGRAMMING
```
# ??
print(sum(meg_group.powers_all < 0.05) / len(meg_group.powers_all))
print(sum(meg_group.bws_all < 1.0001) / len(meg_group.bws_all))
# Plot a single oscillation parameter at a time
plot_all_oscs_single(meg_group.centers_all, 0, meg_group.comment,
n_bins=150, figsize=(15, 5))
if True:
plt.savefig('meg-osc-centers.pdf', bbox_inches='tight')
```
### Exponents
```
# Plot distribution of all aperiodic exponents
plot_exponents(meg_group.exponents, meg_group.comment, save_out=SAVE_FIG)
# Check the global mean exponent value
print('Global mean exponent value is: \t{:1.4f} with st. dev of {:1.4f}'\
.format(np.mean(meg_group.exponents), np.std(meg_group.exponents)))
# Calculate Average Aperiodic Exponent value per Vertex
meg_group.group_exponent(avg='mean')
# Save out group exponent results
#meg_group.save_gr_exponent(file_name='json')
# Set group exponent results for visualization with Brainstorm
#meg_group.set_exponent_viz()
```
### Oscillation Topographies
##### Oscillation Probability
```
# Calculate probability of oscillation (band specific) across the cortex
meg_group.osc_prob()
# Correlations between probabilities of oscillatory bands.
prob_rs, prob_ps, prob_labels = meg_group.osc_map_corrs(map_type='prob')
print_corrs_mat(prob_rs, prob_ps, prob_labels)
# Plot the oscillation probability correlation matrix
#plot_corr_matrix(prob_rs, osc.labels, save_out=SAVE_FIG)
# Save group oscillation probability data for visualization with Brainstorm
meg_group.set_map_viz(map_type='prob', file_name='json')
# Save group oscillation probability data out to npz file
#meg_group.save_map(map_type='prob', file_name='json')
```
##### Oscillation Power Ratio
```
# Calculate power ratio of oscillation (band specific) across the cortex
meg_group.osc_power()
# Correlations between probabilities of oscillatory bands.
power_rs, power_ps, power_labels = meg_group.osc_map_corrs(map_type='power')
print_corrs_mat(power_rs, power_ps, power_labels)
# Plot the oscillation probability correlation matrix
#plot_corr_matrix(power_rs, osc.labels, save_out=SAVE_FIG)
# Save group oscillation probability data for visualization with Brainstorm
meg_group.set_map_viz(map_type='power', file_name='json')
# Save group oscillation probability data out to npz file
#meg_group.save_map(map_type='power', file_name='json')
```
##### Oscillation Score
```
# Calculate oscillation score
meg_group.osc_score()
# Save group oscillation probability data for visualization with Brainstorm
#meg_group.set_map_viz(map_type='score', file_name='json')
# Save group oscillation score data out to npz file
#meg_group.save_map(map_type='score', file_name='80_new_group')
# Correlations between osc-scores of oscillatory bands.
score_rs, score_ps, score_labels = meg_group.osc_map_corrs(map_type='score')
print_corrs_mat(score_rs, score_ps, score_labels)
# Plot the oscillation score correlation matrix
#plot_corr_matrix(score_rs, osc.labels, save_out=SAVE_FIG)
# Save out pickle file of current MegGroup() object
#save_obj_pickle(meg_group, 'meg', 'test')
```
#### Check correlation of aperiodic exponent with oscillation bands
```
n_bands = len(meg_group.bands)
exp_rs = np.zeros(shape=[n_bands])
exp_ps = np.zeros(shape=[n_bands])
for ind, band in enumerate(meg_group.bands):
r_val, p_val = pearsonr(meg_group.exponent_gr_avg, meg_group.osc_scores[band])
exp_rs[ind] = r_val
exp_ps[ind] = p_val
for rv, pv, label in zip(exp_rs, exp_ps, ['Theta', 'Alpha', 'Beta']):
print('Corr of {}-Exp \t is {:1.2f} \t with p-val of {:1.2f}'.format(label, rv, pv))
```
#### Plot corr matrix including bands & exponents
```
all_rs = np.zeros(shape=[n_bands+1, n_bands+1])
all_rs[0:n_bands, 0:n_bands] = score_rs
all_rs[n_bands, 0:n_bands] = exp_rs
all_rs[0:n_bands, n_bands] = exp_rs;
from copy import deepcopy
all_labels = deepcopy(osc.labels)
all_labels.append('Exps')
#plot_corr_matrix_tri(all_rs, all_labels)
#if SAVE_FIG:
# plt.savefig('Corrs.pdf')
corr_data = all_rs
labels = all_labels
# TEMP / HACK - MAKE & SAVE CORR-PLOT
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_data, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Generate a custom diverging colormap
import seaborn as sns
cmap = sns.color_palette("coolwarm", 7)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr_data, mask=mask, cmap=cmap, annot=True, square=True, annot_kws={"size":15},
vmin=-1, vmax=1, xticklabels=labels, yticklabels=labels)
plt.savefig('corr.pdf')
#plot_corr_matrix(all_rs, all_labels, save_out=SAVE_FIG)
```
# Plagiarism Text Data
In this project, you will be tasked with building a plagiarism detector that examines a text file and performs binary classification; labeling that file as either plagiarized or not, depending on how similar the text file is when compared to a provided source text.
The first step in working with any dataset is loading the data in and noting what information is included in the dataset. This is an important step in eventually working with this data, and knowing what kinds of features you have to work with as you transform and group the data!
So, this notebook is all about exploring the data and noting patterns about the features you are given and the distribution of data.
> There are not any exercises or questions in this notebook; it is only meant for exploration. This notebook will not be required in your final project submission.
---
## Read in the Data
The cell below will download the necessary data and extract the files into the folder `data/`.
This data is a slightly modified version of a dataset created by Paul Clough (Information Studies) and Mark Stevenson (Computer Science), at the University of Sheffield. You can read all about the data collection and corpus, at [their university webpage](https://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html).
> **Citation for data**: Clough, P. and Stevenson, M. Developing A Corpus of Plagiarised Short Answers, Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis, In Press. [Download]
```
!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c4147f9_data/data.zip
!unzip data
# import libraries
import pandas as pd
import numpy as np
import os
```
This plagiarism dataset is made of multiple text files; each of these files has characteristics that are summarized in a `.csv` file named `file_information.csv`, which we can read in using `pandas`.
```
csv_file = 'data/file_information.csv'
plagiarism_df = pd.read_csv(csv_file)
# print out the first few rows of data info
plagiarism_df.head(10)
```
## Types of Plagiarism
Each text file is associated with one **Task** (task A-E) and one **Category** of plagiarism, which you can see in the above DataFrame.
### Five task types, A-E
Each text file contains an answer to one short question; these questions are labeled as tasks A-E.
* Each task, A-E, is about a topic that might be included in the Computer Science curriculum that was created by the authors of this dataset.
* For example, Task A asks the question: "What is inheritance in object oriented programming?"
### Four categories of plagiarism
Each text file has an associated plagiarism label/category:
1. `cut`: An answer is plagiarized; it is copy-pasted directly from the relevant Wikipedia source text.
2. `light`: An answer is plagiarized; it is based on the Wikipedia source text and includes some copying and paraphrasing.
3. `heavy`: An answer is plagiarized; it is based on the Wikipedia source text but expressed using different words and structure. Since this doesn't copy directly from a source text, this will likely be the most challenging kind of plagiarism to detect.
4. `non`: An answer is not plagiarized; the Wikipedia source text is not used to create this answer.
5. `orig`: This is a specific category for the original, Wikipedia source text. We will use these files only for comparison purposes.
> So, out of the submitted files, the only category that does not contain any plagiarism is `non`.
In the next cell, print out some statistics about the data.
```
# print out some stats about the data
print('Number of files: ', plagiarism_df.shape[0]) # .shape[0] gives the rows
# .unique() gives unique items in a specified column
print('Number of unique tasks/question types (A-E): ', (len(plagiarism_df['Task'].unique())))
print('Unique plagiarism categories: ', (plagiarism_df['Category'].unique()))
```
You should see the number of text files in the dataset as well as some characteristics about the `Task` and `Category` columns. **Note that the file count of 100 *includes* the 5 _original_ wikipedia files for tasks A-E.** If you take a look at the files in the `data` directory, you'll notice that the original, source texts start with the filename `orig_` as opposed to `g` for "group."
> So, in total there are 100 files, 95 of which are answers (submitted by people) and 5 of which are the original, Wikipedia source texts.
Your end goal will be to use this information to classify any given answer text into one of two categories, plagiarized or not-plagiarized.
### Distribution of Data
Next, let's look at the distribution of data. In this course, we've talked about traits like class imbalance that can inform how you develop an algorithm. So, here, we'll ask: **How evenly is our data distributed among different tasks and plagiarism levels?**
Below, you should notice two things:
* Our dataset is quite small, especially with respect to examples of varying plagiarism levels.
* The data is distributed fairly evenly across task and plagiarism types.
```
# Show counts by different tasks and amounts of plagiarism
# group and count by task
counts_per_task=plagiarism_df.groupby(['Task']).size().reset_index(name="Counts")
print("\nTask:")
display(counts_per_task)
# group by plagiarism level
counts_per_category=plagiarism_df.groupby(['Category']).size().reset_index(name="Counts")
print("\nPlagiarism Levels:")
display(counts_per_category)
# group by task AND plagiarism level
counts_task_and_plagiarism=plagiarism_df.groupby(['Task', 'Category']).size().reset_index(name="Counts")
print("\nTask & Plagiarism Level Combos :")
display(counts_task_and_plagiarism)
```
It may also be helpful to look at this last DataFrame, graphically.
Below, you can see that the counts follow a pattern broken down by task. Each task has one source text (original) and the highest number of `non`-plagiarized cases.
```
import matplotlib.pyplot as plt
%matplotlib inline
# counts
group = ['Task', 'Category']
counts = plagiarism_df.groupby(group).size().reset_index(name="Counts")
plt.figure(figsize=(8,5))
plt.bar(range(len(counts)), counts['Counts'], color = 'blue')
```
## Up Next
This notebook is just about data loading and exploration, and you do not need to include it in your final project submission.
In the next few notebooks, you'll use this data to train a complete plagiarism classifier. You'll be tasked with extracting meaningful features from the text data, reading in answers to different tasks and comparing them to the original Wikipedia source text. You'll engineer similarity features that will help identify cases of plagiarism. Then, you'll use these features to train and deploy a classification model in a SageMaker notebook instance.
# Application of SVD in image processing
## Introduction to SVD
Matrix factorization is a very important part of linear algebra. By decomposing the original matrix into matrices of different properties, matrix factorization can not only reveal the latent attributes of the original matrix, but also help implement various algorithms efficiently. Among all kinds of matrix factorizations, SVD (Singular Value Decomposition) is one of the most commonly used. SVD decomposes an arbitrary matrix into two orthogonal matrices and one diagonal matrix, each of which has a specific mathematical meaning. We will elaborate on this process in the following section.
Definition of Singular Value: <br>
Define $A \in C_r^{(m \times n)}$; $A^HA$ has eigenvalues $$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r > \lambda_{r+1} = \cdots = \lambda_n = 0$$
Then we say $\sigma_i = \sqrt{\lambda_i} (i=1,2,\cdots,r)$ are singular values of the matrix $A$
Assume that $A \in C_r^{(m \times n)}$; then there exist an $m \times m$ unitary matrix $U$ and an $n \times n$ unitary matrix $V$ such that $$ A = U\Sigma V^H$$
$\Sigma$ is the diagonal matrix of $A$'s singular values. $U$'s column are $A$'s left singular vectors and $V$'s columns are $A$'s right singular vectors.
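The statement above can be checked numerically with `numpy.linalg.svd` (a sketch; the small matrix is an arbitrary example): the factors reconstruct $A$, and the singular values come out sorted in descending order.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=False)

print(np.allclose(U @ np.diag(s) @ Vh, A))  # True: A = U Sigma V^H
print(np.all(s[:-1] >= s[1:]))              # True: sigma_1 >= sigma_2 >= ...
```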
### Proof of SVD
Define $A \in C_r^{(m \times n)}$, $$V_1 = (v_1 v_2 \cdots v_r)$$
is a set of normalized orthogonal vectors corresponding to $A^HA$'s $r$ eigenvalues $\sigma_i^2$, which satisfies $$A^HAv_i = \sigma_i^2 v_i (i=1,2,\cdots,r)$$
Left-multiplying by $v_i^H$, $$v_i^HA^HAv_i = ||Av_i||^2 = \sigma_i^2$$
Taking the square root of both sides, we get $||Av_i|| = \sigma_i$ <br>
And $$Y_1=(y_1,y_2,\cdots,y_r)=AV_1=(Av_1,Av_2,\cdots,Av_r)$$
$$Y_1^HY_1 = V_1^HA^HAV_1 = \left(\begin{matrix}
\sigma_1^2 && 0 \\
&\ddots& \\
0 && \sigma_r^2
\end{matrix}\right) = D^2, \quad D = diag(\sigma_1,\cdots,\sigma_r)$$
Hence, $y_i (i=1,2,\cdots,r)$ is a set of orthogonal vectors with lengths $\sqrt{\sigma_i^2} = \sigma_i$. We can then normalize them into unit vectors $$U_1 = Y_1D^{-1}$$
and $$AV_1 = U_1D$$
Now, we only need to combine $U_1$ with the eigenvectors of $AA^H$ corresponding to its zero eigenvalues to form an orthogonal matrix $$U = (U_1\space\space U_2)$$
And expand $V_1$ to orthogonal matrix $$V = (V_1 \space\space V_2)$$
And then set $$\Sigma = \left(\begin{matrix}
D & 0 \\
0 & O
\end{matrix}\right)$$
Finally,$$AV = U\Sigma$$ $$A = U\Sigma V^H$$
### Meaning of SVD
For any matrix $A \in C_r^{(m \times n)}$, there are 4 associated vector spaces: <br>
>1. (Row space): vector space formed by all row vectors <br>
2. (Column space): vector space formed by all column vectors (Range) <br>
3. (Null space): vector space formed by vectors that satisfy $Ax=0$ <br>
4. (Left null space): vector space formed by vectors that satisfy $A^Hx=0$
From the fundamental theorem of linear algebra, these four vector spaces have the following relationships:
> the dimensions of the row space and the column space are both $r$ <br>
the dimension of the null space is $n - r$ <br>
the left null space has dimension $m-r$ <br>
the row space and null space are orthogonal complements <br>
the column space and left null space are orthogonal complements <br>
The SVD relates these 4 vector spaces elegantly. First of all, $A$ and $A^HA$ have the same null space, because $A^HA$'s null space and the eigenspace of its zero eigenvalues coincide. Therefore, the $(r+1)$th through $n$th columns of $V$ constitute a unit orthogonal basis of the null space of $A$. <br>
By the orthogonality of $V$, the space spanned by its first $r$ columns is the orthogonal complement of the space spanned by its $(r+1)$th through $n$th columns. By the uniqueness of the orthogonal complement and the fundamental theorem of linear algebra, the first $r$ columns of $V$ constitute a unit orthogonal basis of $A$'s row space. <br>
In the same sense, the first $r$ columns of $U$ form a unit orthogonal basis of $A$'s column space, and the $(r+1)$th through $m$th columns of $U$ form a unit orthogonal basis of the left null space of $A$. What's more, according to the SVD theorem, the first $r$ columns of $U$ and those of $V$ (call them $U_1$ and $V_1$) have the following relationship: $$AV_1 = U_1D \quad (D = diag(\sigma_1, \sigma_2, \cdots, \sigma_r))$$
Hence, SVD not only generates 4 unit orthogonal bases of 4 vectors space of the original matrix, but also relates its row space and column space through a simple linear transformation.
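The subspace bases described above can be extracted directly from a numerical SVD. This sketch uses a rank-1 matrix: the trailing rows of $V^H$ span the null space of $A$.

```python
import numpy as np

A = np.outer([1.0, 2.0], [1.0, 0.0, -1.0])   # 2x3 matrix of rank 1
U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))                    # numerical rank

null_basis = Vh[r:].T                         # columns spanning null(A)
print(r)                                      # 1
print(np.allclose(A @ null_basis, 0))         # True: A maps the basis to 0
```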
## SVD in Image Processing
SVD has important uses in matrix computations, text mining and many other fields. I will go through its use in image processing, especially in the compression of images and the representation of image features.
### SVD and Compression of Images.
Digital images can be considered 2-dimensional matrices in which every entry represents the grayness of a pixel in the image. With the development of related technologies, modern digital cameras are able to photograph pictures of millions of pixels. As a result, the matrix of an image is relatively large, even huge. But due to the large portion of areas that have similar colors, which means the pixels are highly correlated in those areas, the image has a certain level of information redundancy in its corresponding matrix. <br>
<br>
Such redundancy can be measured by the rank of the matrix: a high-rank matrix has lower correlation between columns, and a low-rank matrix has higher correlation between columns. The picture below is generated from a column vector $u = [0, 0.1, 0.2, \cdots, 0.9]^T$ right-multiplied by a row vector $v^H$ with 10 entries all equal to 1. <br>
<!-- <div align=center><img width = '150' height ='150' src =blog_imgs/black.png></div> <br> -->
 <br>
The matrix of the image is
$$\left(\begin{matrix}
0 & 0 & \cdots & 0 \\
0.1 & 0.1 & \cdots & 0.1 \\
0.2 & 0.2 & \cdots & 0.2 \\
\vdots & \vdots && \vdots \\
0.9 & 0.9 & \cdots & 0.9
\end{matrix}\right)$$
Apparently, the rank of this matrix is 1, because every row can be obtained from another row by multiplying by a number. Hence, we only need the $u,v$ vectors - 20 numbers in total - to store all the information of this matrix; the other 80 entries are redundant. The compression ratio is $$CR = \frac{20}{100} = 0.2$$
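The rank-1 example can be reproduced numerically (a sketch): the $10 \times 10$ image matrix is the outer product of $u$ and a row of ones, so NumPy reports rank 1 and the 20 stored numbers suffice to rebuild it.

```python
import numpy as np

u = np.arange(0.0, 1.0, 0.1)       # [0, 0.1, ..., 0.9]
v = np.ones(10)                     # ten entries equal to 1
A = np.outer(u, v)                  # the 10x10 image matrix

print(A.shape)                      # (10, 10)
print(np.linalg.matrix_rank(A))     # 1
```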
Of course, this is an extreme case; the more general problem is this: for the matrix of an image, how do we find the best low-rank approximation? SVD gives an answer to this question. <br>
<br>
We can write SVD in the form of outer product expansion:
$$A = U\Sigma V^H = (u_1,u_2,\cdots,u_m) \left(\begin{matrix}
\sigma_1 &&& o \\
&\ddots&&\\
&&\sigma_r&\\
o &&& o
\end{matrix}\right) \left(\begin{matrix}
v_1^H\\
v_2^H\\
\vdots\\
v_n^H\\
\end{matrix}\right) =\sum_{i=1}^r\sigma_i u_i v_i^H$$
<br>
After we write the product as a sum, the SVD is left with only the $r$ nonzero singular values and the corresponding outer products with $u_i, v_i$. The rest of the outer products are redundant data, contributing nothing to the construction of $A$: each term's weight is exactly its singular value, and terms with $\sigma_i = 0$ vanish. <br>
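The outer-product expansion can be verified directly: summing the $r$ rank-1 terms reproduces the matrix. A small NumPy sketch with random data (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Compact SVD: U (6x4), s (4,), Vh (4x4)
U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Rebuild A as the sum of rank-1 outer products sigma_i * u_i * v_i^H
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(len(s)))

err = np.linalg.norm(A - A_rebuilt)
print(err)  # ~0 up to floating-point error
```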
<br>
Naturally, we can perform SVD on any image, remove the vectors corresponding to zero singular values, and get a similar matrix with a lower rank. However, because of noise and the complexity of image content, images in reality are almost always full rank. In other words, the number of nonzero singular values equals the number of columns or rows. Simply removing redundant singular vectors does not help in compression.
<br><br>
Though highly correlated areas in images are, rigorously speaking, linearly independent by definition, they can be seen as approximately linearly dependent. This causes the existence of extremely small singular values, because the linear independence caused by the small differences between pixels does not matter in the whole image. Therefore, we can remove $k$ of the smallest $\sigma_i$ in the outer product expansion formula to get a rank-$(r-k)$ matrix. This matrix is viewed as the low-rank approximation. Here comes the question: how do we know that it is a good approximation?
<br>
For $A \in C_r^{m \times n}$, the optimization problem $$O = \min||A-A_1||_F \space s.t. \space rank(A_1) = 1$$
attains its minimum when $A_1=\sigma_1 u_1 v_1^H$, where $\sigma_1$ is the largest singular value of $A$ and $u_1,v_1$ are its left and right singular vectors. $||\cdot||_F$ is the Frobenius norm.
### <div align=center>-----Simple Proof-----</div>
We can take the SVD of $A$ as $U\Sigma V^H$ and substitute it into the objective function: $$||A - A_1||_F = ||U\Sigma V^H - A_1||_F$$
Due to the Frobenius norm's unitary invariance, we get $$ ||U\Sigma V^H - A_1||_F = ||\Sigma - U^HA_1V||_F$$
As $A_1$ has rank 1, $U^HA_1V$ can be represented as $\alpha xy^H$, where $x,y$ are unit vectors in $C^m,C^n$. Hence, $$||U\Sigma V^H - A_1||_F = ||\Sigma-\alpha xy^H||_F$$
With $||X||_F^2 = tr(X^HX)$ and $tr(XY) = tr(YX)$, we turn the problem into computing the trace of a matrix instead of the Frobenius norm. <br>
$$||\Sigma - \alpha xy^H||_F^2$$ $$=tr[(\Sigma-\alpha xy^H)^H(\Sigma - \alpha xy^H)]$$ $$=tr(\Sigma^H\Sigma - \alpha\Sigma^H xy^H - \alpha^* yx^H\Sigma + |\alpha|^2 yx^Hxy^H)$$
$$=||\Sigma||_F^2 + \alpha^2 - 2\alpha \, tr[\Sigma^H Re(xy^H)]$$ $$=||\Sigma||_F^2 + \alpha^2 - 2\alpha\sum_{i=1}^r\sigma_iRe(x_iy_i^*)$$
And we have $$\sum_{i=1}^r\sigma_iRe(x_iy_i^*) \leq \sum_{i=1}^r\sigma_i|x_iy_i^*| \leq \sum_{i=1}^r\sigma_i|x_i||y_i^*|\leq \sigma_1\sum_{i=1}^r|x_i||y_i^*|=\sigma_1(\tilde{x},\tilde{y})$$
where $\tilde{x} = (|x_1|,|x_2|,\cdots,|x_r|)$, $\tilde{y}=(|y_1|,|y_2|,\cdots,|y_r|)$, and $(\cdot,\cdot)$ is the inner product of vectors. According to the Cauchy-Schwarz inequality, we have $$\sigma_1(\tilde{x},\tilde{y}) \leq \sigma_1|\tilde{x}||\tilde{y}| \leq \sigma_1|x||y| = \sigma_1$$
Putting these together, we obtain the lower bound $$||A-A_1||_F^2 = ||\Sigma||_F^2 + \alpha^2 - 2\alpha\sum_{i=1}^r\sigma_iRe(x_iy_i^*)$$
$$\geq ||\Sigma||_F^2 + \alpha^2 - 2\alpha\sigma_1$$ $$=||\Sigma||_F^2 + (\alpha-\sigma_1)^2 - \sigma_1^2$$
When $\alpha = \sigma_1$, this lower bound attains its minimum $||\Sigma||_F^2 - \sigma_1^2$; at the same time, $x$ and $y$ are both equal to $e_1 = (1,0,\cdots,0)^T$. Now we have $$A_1 = \alpha Uxy^HV^H=\sigma_1 u_1v_1^H$$
### <div align=center>-----Done-----</div>
In application, we can get a rank-$k$ approximation of $A$ by an iterative rank-1 greedy algorithm: <br>
> 1) get the best rank-1 approximation of matrix $A$ as $A_1$ <br>
2) get the difference matrix $E_1 = A - A_1$ <br>
3) get the best rank-1 approximation of $E_1$ as $A_2$ <br>
4) get the difference matrix $E_2 = E_1 - A_2$ <br>
5) iterate $k$ times to obtain the result $\hat{A} = \sum_{i=1}^k A_i$
Lawson and Hanson proved that this algorithm returns the best rank-$k$ approximation, which is the sum of the first $k$ outer products of the SVD. Because $E_1 = A - \sigma_1 u_1 v_1^H = \sum_{i=2}^r \sigma_i u_i v_i^H$, the second iteration returns the best rank-1 approximation of $E_1$ as $A_2 = \sigma_2 u_2 v_2^H$, the $i$-th iteration returns $A_i = \sigma_i u_i v_i^H$, and finally $$\hat{A} =\sum_{i=1}^k \sigma_i u_i v_i^H $$
In summary, the first $k$ terms of the SVD outer product expansion form its best rank-$k$ approximation.
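The equivalence between the greedy rank-1 procedure and simply truncating the SVD can be checked numerically. In this illustrative NumPy sketch (random matrix, $k$ chosen arbitrarily), the two constructions agree up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))
k = 3

U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Greedy: repeatedly subtract the best rank-1 approximation of the residual
E = A.copy()
A_hat = np.zeros_like(A)
for _ in range(k):
    Ui, si, Vhi = np.linalg.svd(E, full_matrices=False)
    A1 = si[0] * np.outer(Ui[:, 0], Vhi[0, :])  # best rank-1 term of E
    A_hat += A1
    E = E - A1

# Direct truncation: keep the k largest singular triplets
A_trunc = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

print(np.linalg.norm(A_hat - A_trunc))  # ~0: the two constructions agree
```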
## Example
<!-- <div align=center><img width = '150' height ='150' src=blog_imgs/example.png></div> <br> -->
<!-- <div align=center><img width = '250' height ='250' src=blog_imgs/example2.png></div> <br> -->
 <br>

```
# Python with the OpenCV library
import numpy as np
import matplotlib.pyplot as plt
import cv2

# Read the image in grayscale mode
img = cv2.imread('beach.jpg', 0)

# Obtain the SVD
U, S, V = np.linalg.svd(img)
print(U.shape, S.shape, V.shape)

# Numbers of singular values to keep
comps = [638, 500, 400, 300, 200, 100]
plt.figure(figsize = (16, 8))
for i in range(6):
    # Low-rank reconstruction from the first comps[i] singular triplets
    low_rank = U[:, :comps[i]] @ np.diag(S[:comps[i]]) @ V[:comps[i], :]
    if i == 0:
        plt.subplot(2, 3, i+1), plt.imshow(low_rank, cmap = 'gray'), plt.axis('off'), plt.title("Original Image with n_components = " + str(comps[i]))
    else:
        plt.subplot(2, 3, i+1), plt.imshow(low_rank, cmap = 'gray'), plt.axis('off'), plt.title("n_components = " + str(comps[i]))
plt.savefig('beach-svd.jpg')
```


#### As the rank decreases, the compressed images become blurrier but still keep most of the information we need to recognize the 'beach' content.
### SVD and Image Representation
SVD plays an even more important role in image representation, which means picking basis vectors for images so that images can reveal various useful features. For example, the Discrete Fourier Transform is one of the common image representation methods. SVD is also a powerful image representation method, in which form it is also called Principal Component Analysis (PCA).
Suppose that we have ten 2-dimensional data points as follows:

```
using Plots
a = [(-0.01,0.3), (0.3,-0.28), (-1.32,-1.43), (-0.42,-0.57), (0,0.6), (1.08,0.42), (2.42,1.99), (1.12,0.03), (0.05,0.63), (1.47,1.74)]
xs = [i[1] for i in a]
ys = [i[2] for i in a]
gr()
plot(xs, ys, seriestype = :scatter)
```
From the plot, we can see that these points cluster around a line that passes through the origin. If x and y are two physical measurements taken at the same time, then we might believe these two variables have a linear relationship. The problem is, how should we calculate this line? More generally, how do we find the directions of greatest variation of a set of zero-mean n-dimensional data?
First of all, assume the sample matrix $X \in R^{m\times n}$, whose $i$-th column $x_i$ ($i=1,2,\cdots,n$) represents the $i$-th m-dimensional sample. In total, there are n samples with mean zero:
$$X = (x_1,x_2,\cdots,x_n), x_i \in R^m$$ $$\sum_{i=1}^n x_i = 0$$
Projecting these n samples onto some unit vector $u$: if the direction of $u$ matches the direction of greatest variation, then the total squared length of the projections along $u$, $|u^TX|^2$, is maximized. Hence, we have the following optimization problem: $$O = \max u^TXX^Tu \space s.t. \space u^Tu = 1$$
Since $XX^T$ is a symmetric matrix, we can do diagonalization: $XX^T = Q^T\Lambda Q$. $$u^TXX^Tu=u^TQ^T\Lambda Qu = y^T \Lambda y$$ with $y^Ty=1$ as orthogonal linear transformations preserve the inner product. Expanding the above equation, we have $$y^T\Lambda y = \sum_{i=1}^n y_i^2\lambda_i \leq \lambda_1\sum_{i=1}^n y_i^2 = \lambda_1$$
$\lambda_1$ is the largest eigenvalue of $XX^T$ (assume $\Lambda=diag(\lambda_1,\lambda_2,\cdots,\lambda_n)$ has its eigenvalues on the diagonal in descending order), attained when $y=(1,0,0,\cdots,0)$. Then $u$ is the first column of the eigenvector matrix $Q$, i.e., the eigenvector of $\lambda_1$. This property is the real case of the Rayleigh-Ritz theorem. In other words, when $u$ is the left singular vector of the largest singular value of the matrix $X$, the sample data has its biggest sum of squared projections in the direction of $u$, and that sum is the square of the largest singular value. Similarly, if we want to find the $k$ ($k=1,2,\cdots,n$) directions of greatest variation of the sample data, according to the Rayleigh-Ritz theorem, we should take the first $k$ left singular vectors of the matrix $X$.
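The claim that the direction of greatest variation is the top left singular vector of $X$ (equivalently, the top eigenvector of $XX^T$) can be checked numerically. A small sketch with synthetic zero-mean data (all names and sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# 200 zero-mean 2-D samples stretched along a dominant direction
X = rng.standard_normal((2, 200)) * np.array([[3.0], [0.5]])
X = X - X.mean(axis=1, keepdims=True)   # enforce zero mean

# Top left singular vector of X ...
U, s, _ = np.linalg.svd(X, full_matrices=False)
u1 = U[:, 0]

# ... equals the eigenvector of X X^T with the largest eigenvalue
w, Q = np.linalg.eigh(X @ X.T)
v_top = Q[:, np.argmax(w)]

align = abs(u1 @ v_top)   # 1 up to sign ambiguity
print(align, s[0]**2, w.max())
```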
Since those k orthogonal vectors capture the major trend of the data well, they can serve as a set of image representation basis vectors, also called the principal components of the sample data. Suppose $k=rank(X)$; we can use SVD to express the data $x_j$ in terms of the principal components ($u_1,u_2,\cdots,u_k$): $$X = (u_1,u_2,\cdots,u_k)\Sigma V^T = (u_1,u_2,\cdots,u_k)C$$
$$\Leftrightarrow (x_1,x_2,\cdots,x_n) = (u_1,u_2,\cdots,u_k) \left(\begin{matrix}
c_{11} &\cdots& c_{1n} \\
\vdots &\ddots&\vdots\\
c_{k1} &\cdots& c_{kn}
\end{matrix}\right)$$
$$\Leftrightarrow x_j = \sum_{i=1}^k u_ic_{ij} \quad (j=1,2,\cdots,n)$$
In the real world, the number of principal components $k$ is usually much smaller than the original dimension of the data $X$, so we acquire a more compact image representation through SVD. For the scatter plot above, the first principal component is $u_1=(-0.74, -0.67)^T$ with singular value 4.63, and the projections of the sample data onto the orthogonal direction have length 1.38.
In image processing, especially face recognition, the sample matrix $X$ has each of its columns as a vectorized image. Suppose the image is of size 256 $\times$ 256 pixels; the dimension of this image is 65536. As the amount of sample data increases, such a high dimension poses a big challenge to data storage and computing resources. However, compared with the high dimension of the images, the total number of images is much smaller. Assume that a face recognition system has 50 users and each user has 10 sample images; we will have 500 images in the database. The sample matrix of these data is very narrow: $X \in R^{65536\times 500}$. By the properties of singular value decomposition, this matrix has at most 500 nonzero singular values, meaning that only 500 principal-component directions carry any variation, while in all other directions the samples remain undeviated.
Therefore, we can assume all 50 users are distributed in a relatively low-dimensional vector space, spanned by a basis of 500 principal components. After we read an image, we can project it onto this low-dimensional space and use a suitable algorithm to recognize it.
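As a toy sketch of this projection step (sizes shrunk from 65536 $\times$ 500 to keep it fast; all names are illustrative, and the "database" columns stand in for vectorized face images):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy "face database": 1024 pixels x 20 images (the real case would be 65536 x 500)
X = rng.standard_normal((1024, 20))

U, s, Vh = np.linalg.svd(X, full_matrices=False)
k = 10
basis = U[:, :k]                    # first k principal components

img = X[:, 0] + 0.01 * rng.standard_normal(1024)   # a noisy copy of a known image
coords = basis.T @ img              # k-dimensional code in the low-dim space
img_back = basis @ coords           # projection back to pixel space

print(coords.shape)                 # (10,)
```

A recognition algorithm would then compare `coords` against the stored codes of the database images instead of the full-resolution pixels.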
Surprisingly, if we reshape these principal components back into images, they resemble the basic information and attributes of a human face. Hence, these faces are called "eigenfaces."
# Conclusion
SVD plays an important role in preprocessing images and extracting features. Today, I have only shown its use in image compression and image representation, but we can already see its power. The mathematics demonstrates SVD's feasibility and correctness in different ways. I will share more about its use in machine learning.
# Reference
1. Lawson, C.L. and R.J. Hanson, Solving least squares problems. Vol. 15. 1995: SIAM.
# LSTM
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow
from numpy import *
from math import sqrt
from pandas import *
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import mean_squared_error
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Bidirectional
from tensorflow.keras.layers import BatchNormalization, Embedding, TimeDistributed, LeakyReLU
from tensorflow.keras.layers import LSTM, GRU
from tensorflow.keras.optimizers import Adam
from matplotlib import pyplot
from pickle import load
X_train = np.load("X_train.npy", allow_pickle=True)
y_train = np.load("y_train.npy", allow_pickle=True)
X_test = np.load("X_test.npy", allow_pickle=True)
y_test = np.load("y_test.npy", allow_pickle=True)
#Parameters
LR = 0.001
BATCH_SIZE = 64
N_EPOCH = 50
input_dim = X_train.shape[1]
feature_size = X_train.shape[2]
output_dim = y_train.shape[1]
def basic_lstm(input_dim, feature_size):
    model = Sequential()
    model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    model.add(Dense(2048))
    model.add(Dropout(0.5))
    model.add(Dense(2048))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(lr=LR), loss='mse')
    history = model.fit(X_train, y_train, epochs=N_EPOCH, batch_size=BATCH_SIZE, validation_data=(X_test, y_test),
                        verbose=2, shuffle=False)
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.show()
    return model
model = basic_lstm(input_dim, feature_size)
model.save('LSTM_30to1.h5')
print(model.summary())
yhat = model.predict(X_test, verbose=0)
#print(yhat)
rmse = sqrt(mean_squared_error(y_test, yhat))
print(rmse)
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from pickle import load
from sklearn.metrics import mean_squared_error, mean_absolute_error
########### Test dataset #########
# Load scaler/ index
X_scaler = load(open('X_scaler.pkl', 'rb'))
y_scaler = load(open('y_scaler.pkl', 'rb'))
train_predict_index = np.load("index_train.npy", allow_pickle=True)
test_predict_index = np.load("index_test.npy", allow_pickle=True)
# Load test dataset/ model
G_model = tf.keras.models.load_model('LSTM_30to1.h5')
X_test = np.load("X_test.npy", allow_pickle=True)
y_test = np.load("y_test.npy", allow_pickle=True)
def get_test_plot(X_test, y_test):
    # Set output steps
    output_dim = y_test.shape[1]
    # Get predicted data
    y_predicted = G_model(X_test)
    rescaled_real_y = y_scaler.inverse_transform(y_test)
    rescaled_predicted_y = y_scaler.inverse_transform(y_predicted)
    ## Predicted price
    predict_result = pd.DataFrame()
    for i in range(rescaled_predicted_y.shape[0]):
        y_predict = pd.DataFrame(rescaled_predicted_y[i], columns=["predicted_price"],
                                 index=test_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    ## Real price
    real_price = pd.DataFrame()
    for i in range(rescaled_real_y.shape[0]):
        y_train = pd.DataFrame(rescaled_real_y[i], columns=["real_price"], index=test_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_train], axis=1, sort=False)
    predict_result['predicted_mean'] = predict_result.mean(axis=1)
    real_price['real_mean'] = real_price.mean(axis=1)
    # drop 2020
    # Input_Before = '2020-01-01'
    # predict_result = predict_result.loc[predict_result.index < Input_Before]
    # real_price = real_price.loc[real_price.index < Input_Before]
    # Plot the predicted result
    plt.figure(figsize=(15, 4))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Confirmed")
    plt.legend(("Real confirmed", "Predicted confirmed"), loc="upper left", fontsize=10)
    plt.title("The result of test", fontsize=15)
    plt.show()
    # Calculate RMSE and MAE
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    For_MSE = pd.concat([predicted, real], axis=1)
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MAE = mean_absolute_error(predicted, real)
    # accuracy = accuracy_score(predicted, real)
    print('-- RMSE -- ', RMSE)
    print('-- MAE --', MAE)
    # print('-- accuracy --', accuracy)
    return predict_result, RMSE, MAE
test_predicted, test_RMSE, test_MAE = get_test_plot(X_test, y_test)
```
# Settings
```
%load_ext autoreload
%autoreload 2
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
```
# Dataset loading
```
dataset_name='Dstripes'
images_dir = 'C:\\Users\\Khalid\\Documents\\projects\\Dstripes\\DS06\\'
validation_percentage = 20
valid_format = 'png'
from training.generators.file_image_generator import create_image_lists, get_generators
imgs_list = create_image_lists(
image_dir=images_dir,
validation_pct=validation_percentage,
valid_imgae_formats=valid_format
)
inputs_shape= image_size=(200, 200, 3)
batch_size = 32
latents_dim = 32
intermediate_dim = 50
training_generator, testing_generator = get_generators(
images_list=imgs_list,
image_dir=images_dir,
image_size=image_size,
batch_size=batch_size,
class_mode=None
)
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale = 1.0
for data in train_ds:
    _instance_scale = float(data[0].numpy().max())
    break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
    _outputs_shape = np.prod(inputs_shape)
_outputs_shape
```
# Model's Layers definition
```
units=20
c=50
enc_lays = [
tf.keras.layers.Conv2D(filters=units, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(filters=units*9, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
tf.keras.layers.Dense(units=c*c*units, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(c , c, units)),
tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding="SAME")
]
```
# Model definition
```
model_name = dataset_name+'AE_Convolutional_reconst_1ell_1ssmi'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.autoencoder import autoencoder as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': enc_lays
}
,
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.structural_similarity import prepare_ssim_multiscale
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood as ell
ae.compile(loss={'x_logits': lambda x_true, x_logits: ell(x_true, x_logits)+similarity_to_distance(prepare_ssim_multiscale([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})
```
# Callbacks
```
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
```
# Model Training
```
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, sg, gts_mertics, gtu_mertics],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
```
# Model Evaluation
## inception_score
```
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
```
## Frechet_inception_distance
```
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
```
## perceptual_path_length_score
```
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
```
## precision score
```
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
```
## recall score
```
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
```
# Image Generation
## image reconstruction
### Training dataset
```
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
```
## with Randomness
```
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
```
### Complete Randomness
```
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
```
<a href="https://colab.research.google.com/github/luisgs7/Monitoria/blob/main/aula_03_19_06_2021.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Today's Exercises
## 01) Write a program to print the countdown of a rocket launch. The program should print 10, 9, 8, ..., 1, 0 and Fire! on the screen.
## 01 - Have the user provide an input value
## 02 - Decrement the given number inside the while loop
## 03 - Print the number inside the while loop
```
numero = int(input("Enter the starting value of the countdown: "))  # e.g. numero = 4
while numero >= 0:        # condition is True while numero = 4 | 3 | 2 | 1 | 0
    print(numero)         # prints 4 | 3 | 2 | 1 | 0
    numero = numero - 1   # 4 - 1 = 3 | 3 - 1 = 2 | 1 | 0 | -1
print("Fire!")
```
## 02) Write a program to display the numbers from 1 to 100.
## Increment the number in the while loop
## From 1 to 100
```
valor_inicial = 1
# Stay in the loop while valor_inicial is less than or equal to 100
while valor_inicial <= 100:
    print(valor_inicial)
    valor_inicial = valor_inicial + 1
```
## 03) Develop an algorithm that prints the even numbers from 100 to 200, inclusive.
## Check every value from 100 to 200 and print the even ones
```
valor_inicial = 100
valor_final = 200
while valor_inicial <= valor_final:   # from 100 up to 200, print the even numbers
    # Check whether valor_inicial is even or odd
    if valor_inicial % 2 == 0:        # 100/2 = 50.0   101/2 = 50.5
        print(valor_inicial)
    valor_inicial = valor_inicial + 1
```
## 04) How can we count the even numbers between two arbitrary numbers?
## The user will provide a starting number and an ending number, e.g. 10 - 20
## We have to check whether each value is even
## If it is even, print it
```
numero_inicial = int(input("Enter the starting number: "))  # e.g. 6
numero_final = int(input("Enter the ending number: "))      # e.g. 10
while numero_inicial <= numero_final:
    # Check whether the value is even
    if numero_inicial % 2 == 0:   # 6/2 = 3.0   7/2 = 3.5
        print(numero_inicial)     # prints 6
    # Increment
    numero_inicial = numero_inicial + 1   # 6 + 1 = 7
```
## 05) Compute the factorial of a number given by the user.
```
numero = int(input("Enter a number to compute its factorial: "))  # e.g. 5
fatorial = 1
# numero = 5 | 4 | 3 | 2 | 1, then the condition becomes False
while numero > 0:
    fatorial = fatorial * numero  # 1*5 = 5 | 5*4 = 20 | 20*3 = 60 | 60*2 = 120
    print(numero)                 # 5 4 3 2 1
    numero = numero - 1
print("The factorial is:", fatorial)
```
# URI Online Judge
## 1006 - Média 1
```
A = float(input())
B = float(input())
C = float(input())
MEDIA = (A * 2 + B * 3 + C * 5) / 10
print("MEDIA = %.1f" %(MEDIA))
```
```
import tensorflow as tf
print(tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
# Create features
X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
# Create labels
Y = np.array([3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0])
# Visualize it
plt.scatter(X, Y);
import numpy as np
import matplotlib.pyplot as plt
# create features (using tensors)
X = tf.constant ([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
# create labels (using tensors)
y = tf.constant ([3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0])
# Visualize it
plt.scatter(X, y)
# Take a single example of X
input_shape = X[0].shape
# Take a single example of y
output_shape = y[0].shape
input_shape, output_shape # these are both scalars (no shape)
# Let's take a look at the single examples invidually
X[0], y[0]
# set random seed
tf.random.set_seed(42)
# Create a model using the sequential API
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1))
model.compile(loss='mae', optimizer='sgd', metrics=["mae"])
# model.compile(loss=tf.keras.losses.mae, # mae is short for mean absolute error
# optimizer=tf.keras.optimizers.SGD(), # SGD is short for stochastic gradient descent
# metrics=["mae"])
# Compile the model
# model.compile(loss=tf.keras.losses.mae,
# optimizer=tf.keras.optimizers.SGD(),
# metrics=["mae"])
# Fit the model
model.fit(tf.expand_dims(X, axis=-1), y, epochs=5)
X, y
model.predict([17.0])
# set the random seed
tf.random.set_seed(42)
# create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# compile the model
model.compile(loss='mae',
optimizer='sgd',
metrics=['mae'])
# fit the model
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100)
X, y
model.predict([17.0,2.0])
# set random seed
tf.random.set_seed(42)
# Create a model using the sequential API
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(50, activation='relu'))
model.add(tf.keras.layers.Dense(1))
# model.compile(loss='mae', optimizer='sgd', metrics=["mae"])
model.compile(loss=tf.keras.losses.mae, # mae is short for mean absolute error
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), # Adam optimizer with learning rate 0.1
              metrics=["mae"])
# Compile the model
# model.compile(loss=tf.keras.losses.mae,
# optimizer=tf.keras.optimizers.SGD(),
# metrics=["mae"])
# Fit the model
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100)
# set random seed
tf.random.set_seed(42)
# Create a model using the sequential API
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, activation=None))
# model.add(tf.keras.layers.Dense(1))
# model.add(tf.keras.layers.Dense(1))
# model.add(tf.keras.layers.Dense(1))
model.compile(loss='mae', optimizer='sgd', metrics=["mae"])
# model.compile(loss=tf.keras.losses.mae, # mae is short for mean absolute error
# optimizer=tf.keras.optimizers.SGD(), # SGD is short for stochastic gradient descent
# metrics=["mae"])
# Compile the model
# model.compile(loss=tf.keras.losses.mae,
# optimizer=tf.keras.optimizers.SGD(),
# metrics=["mae"])
# Fit the model
model.fit(tf.expand_dims(X, axis=-1), y, epochs=100)
model.predict([17.0])
X = np.arange(-100, 100, 4)
X
y = np.arange(-90, 110, 4)
y
len(X)
len(y)
# visualize the data
import matplotlib.pyplot as plt
plt.scatter(X,y)
### the 3 sets
# train
# practice
# test
X_train = X[:40]
y_train = y[:40]
X_test = X[40:]
y_test = y[40:]
len(X_train), len(y_train), len(X_test), len(y_test)
plt.figure(figsize=(10,7))
plt.scatter(X_train, y_train, c='b', label='Training Data')
plt.scatter(X_test, y_test, c='g', label='Testing Data')
plt.legend();
tf.random.set_seed(42)
# Build the model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1))
# compile the model
model.compile(loss='mae', optimizer='sgd', metrics=['mae'])
# fit the model
# model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100)
# model.build([1])
# model.summary()
tf.random.set_seed(42)
# Build the model
model = tf.keras.Sequential(name='model1')
model.add(tf.keras.layers.Dense(10, input_shape=[1], name='inputLayer'))
model.add(tf.keras.layers.Dense(2, name='hidden'))
model.add(tf.keras.layers.Dense(1, name='outputLayer'))
# compile the model
model.compile(loss='mae',
optimizer=tf.keras.optimizers.Adam(learning_rate=.01),
metrics=['mae','mse'])
# fit the model
model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100, verbose=0)
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model,show_shapes=True)
y_preds = model.predict(X_test)
y_preds
y_test
def plot_predictions(train_data=X_train,
                     train_labels=y_train,
                     test_data=X_test,
                     test_labels=y_test,
                     predictions=y_preds):
    """
    Plots training data, test data and compares predictions.
    """
    plt.figure(figsize=(10, 7))
    # Plot training data in blue
    plt.scatter(train_data, train_labels, c="b", label="Training data")
    # Plot test data in green
    plt.scatter(test_data, test_labels, c="g", label="Testing data")
    # Plot the predictions in red (predictions were made on the test data)
    plt.scatter(test_data, predictions, c="r", label="Predictions")
    # Show the legend
    plt.legend();
plot_predictions()
model.evaluate(X_test, y_test)
def mae(y_test, y_pred):
    return tf.metrics.mean_absolute_error(y_test,
                                          y_pred.squeeze())
def mse(y_test, y_pred):
    return tf.metrics.mean_squared_error(y_test,
                                         y_pred.squeeze())
mae(y_test, y_preds)
mse(y_test, y_preds)
tf.random.set_seed(42)
# build the model
model_1 = tf.keras.Sequential(name='bestResults')
model_1.add(tf.keras.layers.Dense(10, input_shape=[1]))
model_1.add(tf.keras.layers.Dense(2))
model_1.add(tf.keras.layers.Dense(5))
# model_1.add(tf.keras.layers.Dense(4))
model_1.add(tf.keras.layers.Dense(1))
# compile the model
model_1.compile(loss='mae',
optimizer=tf.keras.optimizers.Adam(learning_rate=.01),
metrics=['mae'])
# fit the model
model_1.fit(X_train, y_train, epochs=500, verbose=0)
y_preds_1 = model_1.predict(X_test)
plot_predictions(predictions=y_preds_1)
mae_1=mae(y_test, y_preds_1)
mse_1=mse(y_test, y_preds_1)
mae_1, mse_1
model_1.save('model_1_best_results')
model_1.save('model_1_best_results.h5')
!ls model_1_best_results.h5
loaded_model = tf.keras.models.load_model('model_1_best_results')
loaded_model.summary()
loaded_model_h5 = tf.keras.models.load_model('model_1_best_results.h5')
loaded_model_h5.summary()
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
# Read in the insurance dataset
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance.head()
insurance.info()
insurance_one_hot = pd.get_dummies(insurance)
insurance_one_hot.head()
insurance_one_hot.info()
X = insurance_one_hot.drop('charges', axis=1)
y = insurance_one_hot['charges']
X.head()
y.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.2,
random_state=42
)
# set the random seed
tf.random.set_seed(42)
# create the model
model = tf.keras.Sequential(name='InsuranceModel')
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Dense(1))
# compile the model
model.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(learning_rate=.01),
metrics=['mae']
)
# fit the model
model.fit(X_train, y_train, epochs=100, verbose=0)
model.summary()
# check the results of model
model.evaluate(X_test, y_test)
# set the random seed
tf.random.set_seed(42)
# create the model
model_1 = tf.keras.Sequential(name='InsuranceModel')
model_1.add(tf.keras.layers.Dense(1))
model_1.add(tf.keras.layers.Dense(1))
model_1.add(tf.keras.layers.Dense(1))
# compile the model
model_1.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(learning_rate=.01),
metrics=['mae']
)
# fit the model
model_1.fit(X_train, y_train, epochs=100, verbose=0)
# check the results of model
model_1.evaluate(X_test, y_test)
# set the random seed
tf.random.set_seed(42)
# create the model
model_2 = tf.keras.Sequential(name='InsuranceModel')
model_2.add(tf.keras.layers.Dense(100, activation='relu'))
# model_2.add(tf.keras.layers.Dense(10))
model_2.add(tf.keras.layers.Dense(1))
# compile the model
model_2.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
metrics=['mae']
)
# fit the model
history = model_2.fit(X_train, y_train, epochs=100, verbose=0)
# check the results of model
model_2.evaluate(X_test, y_test)
# set the random seed
tf.random.set_seed(42)
# create the model
model_3 = tf.keras.Sequential(name='InsuranceModel')
model_3.add(tf.keras.layers.Dense(100, activation='relu'))
# model_3.add(tf.keras.layers.Dense(1))
model_3.add(tf.keras.layers.Dense(1))
# compile the model
model_3.compile(
loss='mae',
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['mae']
)
# fit the model
model_3.fit(X_train, y_train, epochs=100, verbose=0)
# check the results of model
model_3.evaluate(X_test, y_test)
pd.DataFrame(history.history).plot()
plt.ylabel('loss')
plt.xlabel('epochs');
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
# create column transformer
ct = make_column_transformer(
(MinMaxScaler(), ['age', 'bmi', 'children']),# gets all values between 0 and 1
(OneHotEncoder(handle_unknown='ignore'), ['sex', 'smoker', 'region'])
)
X = insurance.drop('charges', axis=1)
y = insurance['charges']
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.2,
random_state=42
)
ct.fit(X_train)
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)
X_train.loc[0]
X_train_normal[0]
# Notice the normalized/one-hot encoded shape is larger because of the extra columns
X_train_normal.shape, X_train.shape
# set the random seed
tf.random.set_seed(42)
# create the model
model_normal = tf.keras.Sequential(name='InsuranceModelNormal')
model_normal.add(tf.keras.layers.Dense(10, activation='relu'))
model_normal.add(tf.keras.layers.Dense(1))
# model_normal.add(tf.keras.layers.Dense(1))
model_normal.add(tf.keras.layers.Dense(1))
# compile the model
model_normal.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
metrics=['mae']
)
# fit the model
history = model_normal.fit(X_train_normal, y_train, epochs=100, verbose=0)
# check the results of model
model_normal.evaluate(X_test_normal, y_test)
```
<a href="https://colab.research.google.com/github/gndede/python/blob/main/WebScraping219.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Web Scraping
```
#In this lab exercise, we'll scrape Goodreads' Best Books list:
#Link to scrape: https://www.goodreads.com/list/show/1.Best_Books_Ever?page=1
#We'll walk through scraping the list pages for the book names/urls
```
**Table of Contents**
1: Learning Goals
2: Exploring the Web pages and downloading them
3: Parse the page, extract book urls
4: Parse a book page, extract book properties
5: Set up a pipeline for fetching and parsing
**Learning Goals**
Understand the structure of a web page. Use Beautiful soup to scrape content from these web pages.
This lab corresponds to lectures 2, 3 and 4 and maps on to homework 1 and further.
**1. Exploring the web pages and downloading them**
We're going to look at the structure of Goodreads' best books list. We'll use the Developer Tools in Chrome; Safari and Firefox have similar tools available.
```
## RUN THIS CELL TO GET THE RIGHT FORMATTING
from IPython.core.display import HTML
def css_styling():
styles = open("../../styles/cs109.css", "r").read()
return HTML(styles)
css_styling()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns  # seaborn.apionly was removed in newer seaborn versions
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
```
To fetch this page, we use the requests module. But, are we allowed to do this? Lets check:
https://www.goodreads.com/robots.txt
Yes we are.
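We can also make this check programmatic with the standard library's `urllib.robotparser`. The snippet below parses an illustrative robots.txt string (hypothetical rules, not Goodreads' actual file) rather than fetching anything over the network:

```python
import urllib.robotparser

# An illustrative robots.txt (hypothetical rules, not Goodreads' real file)
rules = """User-agent: *
Disallow: /admin/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/list/show/1"))   # True
print(rp.can_fetch("*", "https://example.com/admin/secret"))  # False
```

For the real site you would call `rp.set_url(".../robots.txt")` followed by `rp.read()` before checking `can_fetch`.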
```
import time, requests
URLSTART="https://www.goodreads.com"
BESTBOOKS="/list/show/1.Best_Books_Ever?page="
url = URLSTART+BESTBOOKS+'1'
print(url)
page = requests.get(url)
```
We can inspect properties of the page. Most relevant are `status_code` and `text`: the status code tells us whether the web page was found (200 means OK), and `text` holds the page contents.
```
page.status_code # 200 is good
page.text[:5000]
#Let us write a loop to fetch 2 pages of "best-books" from goodreads.
#Notice the use of a format string. This is an example of old-style python format strings
URLSTART="https://www.goodreads.com"
BESTBOOKS="/list/show/1.Best_Books_Ever?page="
for i in range(1,3):
bookpage=str(i)
stuff=requests.get(URLSTART+BESTBOOKS+bookpage)
filetowrite="files/page"+ '%02d' % i + ".html"
print("FTW", filetowrite)
fd=open(filetowrite,"w")
fd.write(stuff.text)
fd.close()
time.sleep(2)
```
**2. Parse the page, extract book urls**
Notice how we do file input-output, and use beautiful soup in the code below. The with construct ensures that the file being read is closed, something we do explicitly for the file being written. We look for the elements with class bookTitle, extract the urls, and write them into a file
```
from bs4 import BeautifulSoup
bookdict={}
for i in range(1,3):
books=[]
stri = '%02d' % i
filetoread="files/page"+ stri + '.html'
print("FTW", filetoread)
with open(filetoread) as fdr:
data = fdr.read()
soup = BeautifulSoup(data, 'html.parser')
for e in soup.select('.bookTitle'):
books.append(e['href'])
print(books[:10])
bookdict[stri]=books
fd=open("files/list"+stri+".txt","w")
fd.write("\n".join(books))
fd.close()
#Here is George Orwell's 1984
bookdict['02'][0]
#Let's go look at the first URLs on both pages
```
**3. Parse a book page, extract book properties**
```
#Ok, so now let's dive in, get one of these files, and parse it.
furl=URLSTART+bookdict['02'][0]
furl
fstuff=requests.get(furl)
print(fstuff.status_code)
d=BeautifulSoup(fstuff.text, 'html.parser')
d.select("meta[property='og:title']")[0]['content']
#Let's get everything we want...
d=BeautifulSoup(fstuff.text, 'html.parser')
print(
"title", d.select_one("meta[property='og:title']")['content'],"\n",
"isbn", d.select("meta[property='books:isbn']")[0]['content'],"\n",
"type", d.select("meta[property='og:type']")[0]['content'],"\n",
"author", d.select("meta[property='books:author']")[0]['content'],"\n",
"average rating", d.select_one("span.average").text,"\n",
"ratingCount", d.select("meta[itemprop='ratingCount']")[0]["content"],"\n",
"reviewCount", d.select_one("span.count")["title"]
)
```
Ok, now that we know what to do, let's wrap our fetching into a proper script. So that we don't overwhelm their servers, we will only fetch 5 from each page, but you get the idea...
We'll segue off a bit to explore new-style format strings. See https://pyformat.info for more info.
```
"list{:0>2}.txt".format(3)
a = "4"
b = 4
class Four:
def __str__(self):
return "Fourteen"
c=Four()
"The lazy cat jumped over the {} and {} and {}".format(a, b, c)
```
**4. Set up a pipeline for fetching and parsing**
Ok, let's get back to the fetching process...
```
fetched=[]
for i in range(1,3):
with open("files/list{:0>2}.txt".format(i)) as fd:
counter=0
for bookurl_line in fd:
if counter > 4:
break
bookurl=bookurl_line.strip()
stuff=requests.get(URLSTART+bookurl)
filetowrite=bookurl.split('/')[-1]
filetowrite="files/"+str(i)+"_"+filetowrite+".html"
print("FTW", filetowrite)
fd=open(filetowrite,"w", encoding='utf-8')
fd.write(stuff.text)
fd.close()
fetched.append(filetowrite)
time.sleep(2)
counter=counter+1
print(fetched)
# Ok we are off to parse each one of the html pages we fetched.
# We have provided the skeleton of the code and the code to parse the year,
#since it is a bit more complex...see the difference in the screenshots above.
import re
yearre = r'\d{4}'
def get_year(d):
if d.select_one("nobr.greyText"):
return d.select_one("nobr.greyText").text.strip().split()[-1][:-1]
else:
thetext=d.select("div#details div.row")[1].text.strip()
rowmatch=re.findall(yearre, thetext)
if len(rowmatch) > 0:
rowtext=rowmatch[0].strip()
else:
rowtext="NA"
return rowtext
#Your job is to fill in the code to get the genres.
def get_genres(d):
    # your code here
    return []  # placeholder so the notebook runs end-to-end; replace with your implementation
listofdicts=[]
for filetoread in fetched:
print(filetoread)
td={}
with open(filetoread) as fd:
datext = fd.read()
d=BeautifulSoup(datext, 'html.parser')
td['title']=d.select_one("meta[property='og:title']")['content']
td['isbn']=d.select_one("meta[property='books:isbn']")['content']
td['booktype']=d.select_one("meta[property='og:type']")['content']
td['author']=d.select_one("meta[property='books:author']")['content']
td['rating']=d.select_one("span.average").text
td['ratingCount']=d.select_one("meta[itemprop='ratingCount']")["content"]
td['reviewCount']=d.select_one("span.count")["title"]
td['year'] = get_year(d)
td['file']=filetoread
glist = get_genres(d)
td['genres']="|".join(glist)
listofdicts.append(td)
listofdicts[0]
#Finally lets write all this stuff into a csv file which we will use to do analysis.
df = pd.DataFrame.from_records(listofdicts)
df.head()
df.to_csv("files/meta.csv", index=False, header=True)
```
# EUC 1993-2017: ECCOv4r4, GISS-G, and GISS-H
```
import os
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from xgcm import Grid
from pych.calc import haversine
ecco = xr.open_dataset('/workspace/results/eccov4r4/equatorial-under-current/euc_eccov4r4.nc')
gissg = xr.open_dataset('/workspace/results/giss-euc/giss_g.nc')
gissh = xr.open_dataset('/workspace/results/giss-euc/giss_h.nc')
```
### EUC: zonal transport where U>0, integrated 0->400m, 1.5S to 1.5N
Here averaged over 1993-2017
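In symbols, the quantity computed below (this is a restatement of the description above, not taken from the source) is the eastward-only transport at each longitude $x$:

$$
T_{\mathrm{EUC}}(x) = \int_{1.5^\circ\mathrm{S}}^{1.5^\circ\mathrm{N}} \int_{-400\,\mathrm{m}}^{0} \max\big(u(x,y,z),\,0\big)\,\mathrm{d}z\,\mathrm{d}y
$$

discretized with half-weighted cells at the bounding latitudes and a fractional bottom cell so the integral ends exactly at 400 m, then converted to Sverdrups ($1\,\mathrm{Sv} = 10^{6}\,\mathrm{m^3\,s^{-1}}$).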
```
def convert_m3_to_sv(xda):
if 'units' in xda.attrs:
if xda.attrs['units'] == 'Sv':
return xda
xda *= 1e-6
xda.attrs['units']='Sv'
return xda.copy(deep=True)
gissg['ubar']=gissg['u'].sel(time=slice('1993','2017')).mean('time')
gissg['trsp_x'] = gissg['ubar']*gissg['drF']*gissg['dyG']
# all of the full cells
kbot = 15
gissg['euc'] = gissg['trsp_x'].where(gissg['ubar']>0,0.).sel(YC=slice(-1.25,1.25)).isel(Z=slice(0,kbot)).sum(['YC','Z'])
# get half the transport at latitudinal bounds, only full vertical depth points
gissg['euc'] += 0.5*gissg['trsp_x'].where(gissg['ubar']>0,0.).sel(YC=-1.5).isel(Z=slice(0,kbot)).sum('Z')
gissg['euc'] += 0.5*gissg['trsp_x'].where(gissg['ubar']>0,0.).sel(YC=1.5).isel(Z=slice(0,kbot)).sum('Z')
# get full latitudinal cells, fraction of bottom cell
dz = (400-gissg.Zp1[kbot])/gissg.drF[kbot]
gissg['euc'] += dz*gissg['trsp_x'].where(gissg['ubar']>0,0.).sel(YC=slice(-1.25,1.25)).isel(Z=kbot+1).sum(['YC'])
# get half latitudinal bounds, and fraction of bottom cell
gissg['euc'] += dz*0.5*gissg['trsp_x'].where(gissg['ubar']>0,0.).sel(YC=-1.5).isel(Z=kbot+1)
gissg['euc'] += dz*0.5*gissg['trsp_x'].where(gissg['ubar']>0,0.).sel(YC=1.5).isel(Z=kbot+1)
gissg['euc'] = convert_m3_to_sv(gissg['euc'])
gissh['ubar']=gissh['u'].sel(time=slice('1993','2017')).mean('time')
gissh['trsp_x'] = gissh['ubar']*gissh['drF']*gissh['dyF']
# all of the full cells
kbot = 12
gissh['euc'] = gissh['trsp_x'].where(gissh['ubar']>0,0.).sel(YC=slice(-1.25,1.25)).isel(Z=slice(0,kbot)).sum(['YC','Z'])
# get half the transport at latitudinal bounds, only full vertical depth points
gissh['euc'] += 0.5*gissh['trsp_x'].where(gissh['ubar']>0,0.).sel(YC=-1.5).isel(Z=slice(0,kbot)).sum('Z')
gissh['euc'] += 0.5*gissh['trsp_x'].where(gissh['ubar']>0,0.).sel(YC=1.5).isel(Z=slice(0,kbot)).sum('Z')
# get full latitudinal cells, fraction of bottom cell
# gissH goes exactly to 400m in first 12 cells, so this is actually not necessary
dz = (400-gissh.Zp1[kbot])/gissh.drF[kbot]
gissh['euc'] += dz*gissh['trsp_x'].where(gissh['ubar']>0,0.).sel(YC=slice(-1.25,1.25)).isel(Z=kbot+1).sum(['YC'])
# get half latitudinal bounds, and fraction of bottom cell
gissh['euc'] += dz*0.5*gissh['trsp_x'].where(gissh['ubar']>0,0.).sel(YC=-1.5).isel(Z=kbot+1)
gissh['euc'] += dz*0.5*gissh['trsp_x'].where(gissh['ubar']>0,0.).sel(YC=1.5).isel(Z=kbot+1)
gissh['euc'] = convert_m3_to_sv(gissh['euc'])
```
### GISS-H zonal velocity problems
There are a few points where the velocity is O(10^30).
Obviously this causes problems.
Right now I'm simply ignoring longitudes where these occur
```
gissh['euc'] = xr.where(gissh.euc > 1e20, np.nan, gissh.euc)
np.isnan(gissh.euc).sum()
from matplotlib.ticker import MultipleLocator
def euc_plot(xda,xcoord='XG',ax=None,xskip=20,xminor_skip=10,yminor_skip=1):
if ax is None:
fig,ax = plt.subplots(1,1)
x=xda[xcoord]
xbds = [140,-80]
# Grab Pacific
xda = xda.where((x<=xbds[0])|(x>=xbds[1]),drop=True)
x_split=xda[xcoord]
xda[xcoord]=xr.where(xda[xcoord]<=0,360+xda[xcoord],xda[xcoord])
xda = xda.sortby(xcoord)
xda.plot(ax=ax)
xlabel_int = [xx for xx in np.concatenate([np.arange(xbds[0],181),np.arange(-179,xbds[1]+2)])]
xlbl=[]
for x in xlabel_int:
if x>0:
xlbl.append(r'%d$^\circ$E' % x)
else:
xlbl.append(r'%s$^\circ$W' % -x)
x_slice = slice(None,None,xskip)
ax.xaxis.set_ticks(xda[xcoord].values[x_slice])
ax.xaxis.set_ticklabels(xlbl[x_slice])
ax.xaxis.set_minor_locator(MultipleLocator(xminor_skip))
ax.yaxis.set_minor_locator(MultipleLocator(yminor_skip))
ax.set_xlim([xbds[0],xbds[1]+360])
return ax
plt.rcParams.update({'figure.figsize':(18,6),'font.size':18,'text.usetex':True})
fig_dir='/workspace/results/eccov4r4/equatorial-under-current/figures/'
if not os.path.isdir(fig_dir):
os.makedirs(fig_dir)
fig,ax = plt.subplots(1,1,figsize=(10,6))
ax=euc_plot(gissg.euc,ax=ax)
ax=euc_plot(gissh.euc,xcoord='XC',ax=ax)
ax=euc_plot(ecco.trsp,xcoord='lon',ax=ax)
ax.set_xlabel('')
#ax.grid();
ax.tick_params(direction='in',which='major',length=8,
top=True,right=True,pad=6)
ax.tick_params(direction='in',which='minor',length=5,
top=True,right=True,pad=6)
ax.legend(('E2.1-G','E2.1-H','ECCOv4r4'),loc='lower center',frameon=False)
# bbox_to_anchor=(1.04,0.5), loc="center left", borderaxespad=0)
# bbox_to_anchor=(0,1.02,1,0.2), loc="lower left",
# mode="expand", borderaxespad=0, ncol=3)
ax.set_title('Equatorial Undercurrent Transport\nTime Mean 1993-2017',fontsize=20,pad=25);
ax.set_ylabel('EUC Transport, Sv',fontsize=16)
ax.set_ylim([0,35])
fig.savefig(f'{fig_dir}/euc_comparison_1993-2017.png',dpi=300,
bbox_inches='tight',pad_inches=1)
```
# Chapter 11 - Gradient Descent
```
import sys
sys.path.append("../")
from utils import *
np.random.seed(0)
```
The most plain implementation of gradient descent, for minimizing a differentiable function $f$
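Concretely, the loop below iterates the standard update

$$
\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t\,\nabla f(\mathbf{w}_t)
$$

and stops once consecutive iterates are closer than `tol` in Euclidean norm, $\|\mathbf{w}_{t+1}-\mathbf{w}_t\|_2 < \mathrm{tol}$.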
```
def VanillaGradientDescent(f, f_grad, init=np.random.uniform(-1, 1, 2), eta=lambda t: .1, tol=1e-5):
steps, delta = [init], tol
t = 1
while delta >= tol:
g, eta_t = f_grad(steps[-1]), eta(t)
step = steps[-1] - eta_t * g
steps.append(step)
delta = np.sum((steps[-1] - steps[-2])**2)**.5
t += 1
return np.array(steps)
```
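As a quick sanity check of the descent loop (a condensed restatement of `VanillaGradientDescent` above, repeated here so the cell runs on its own), minimizing the convex quadratic $f(\mathbf{w})=\|\mathbf{w}\|^2$ should converge to the origin:

```python
import numpy as np

# Condensed restatement of the vanilla descent loop above
def gd(f_grad, init, eta=0.1, tol=1e-6):
    steps, delta = [init], tol
    while delta >= tol:
        steps.append(steps[-1] - eta * f_grad(steps[-1]))
        delta = np.sum((steps[-1] - steps[-2]) ** 2) ** 0.5
    return np.array(steps)

# f(w) = ||w||^2 has gradient 2w and a unique minimum at the origin
steps = gd(lambda w: 2 * w, init=np.array([4.0, -3.0]))
print(steps.shape, steps[-1])  # final iterate is very close to [0, 0]
```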
The following functions are used for plotting (in 2D and 3D) the loss surface of a given function to optimize
```
def as_array(x):
return np.array([x]) if np.isscalar(x) else x
def function_contour(fun, vals):
xx, yy = np.meshgrid(vals, vals)
z = fun(np.c_[xx.ravel(), yy.ravel()]).reshape(len(vals), len(vals))
return go.Contour(x = vals, y=vals, z=z, opacity=.4, colorscale="Blues_r", showscale=False)
def function_surface(fun, vals):
xx, yy = np.meshgrid(vals, vals)
z = fun(np.c_[xx.ravel(), yy.ravel()]).reshape(len(vals), len(vals))
return go.Surface(x = vals, y=vals, z=z, opacity=.4, colorscale="Blues_r", showscale=False)
```
## Optimize MSE Using GD
```
def MSE(X: np.ndarray, y: np.ndarray):
def _evaluate(w: np.ndarray):
Y = np.broadcast_to(y[..., np.newaxis], (y.shape[0], w.shape[0]))
return np.mean( (X @ w.T - Y)**2, axis=0)
def _gradient(w: np.ndarray):
return X.T @ (X @ w.T - y) * 2 / X.shape[0]
return _evaluate, _gradient
n = 50
w = np.random.random(size = (2, ))
X = np.c_[np.random.uniform(low=-3, high=3, size=(n, 1)), np.ones((n, 1))]
y = X @ w + np.random.normal(0, 1, size=(n,))
```
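For reference, the module above implements the mean squared error and its gradient; written out from the code (matching `_evaluate` and `_gradient`):

$$
\mathrm{MSE}(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n}\big(\mathbf{x}_i^\top\mathbf{w} - y_i\big)^2,
\qquad
\nabla_{\mathbf{w}}\,\mathrm{MSE}(\mathbf{w}) = \frac{2}{n}\,X^\top\big(X\mathbf{w} - \mathbf{y}\big)
$$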
Using the MSE module above (the evaluation and gradient-computation functions), we explore the gradient descent algorithm. First, we can track the stepping of the algorithm in the parameter space (i.e. obtaining different feasible solutions $\mathbf{w}$ at each iteration) and observe the linear model it reflects
```
f, f_grad = MSE(X, y)
# Run the GD algorithm
steps = VanillaGradientDescent(f, f_grad,
init=np.array([4.5,-4]),
eta=lambda t: .1,
tol=1e-2)
# Obtain objective surface
vals = np.linspace(-5, 5, 50)
contour = function_contour(f, vals)
frames, markers = [], []
for i in range(1, len(steps)+1):
z = as_array(f(steps[:i]))
frames.append(go.Frame(data=[
# 2D visualization of progress
go.Scatter(x=steps[:i,0], y=steps[:i,1], marker=dict(size=3, color="black"), showlegend=False),
go.Scatter(x=[steps[i-1,0]], y=[steps[i-1,1]], marker=dict(size=5, color="red"), showlegend=False),
contour,
# Visualization of regression line and data
go.Scatter(x=X[:, 0], y=y, marker=dict(size=5, color="black"), mode = 'markers', showlegend=False, xaxis="x2", yaxis="y2"),
go.Scatter(x=[X[:, 0].min(), X[:, 0].max()],
y=[X[:, 0].min()*steps[i-1,0] + steps[i-1,1], X[:, 0].max()*steps[i-1,0] + steps[i-1,1]],
marker=dict(size=3, color="Blue"), mode='lines', showlegend=False, xaxis="x2", yaxis="y2")],
traces=[0, 1, 2, 3, 4, 5],
layout=go.Layout(title=rf"$\text{{Iteration }} {i}/{steps.shape[0]}$" )))
# Create animated figure
fig = make_subplots(rows=1, cols=2, column_widths = [400, 700], horizontal_spacing=.075,
subplot_titles=(r"$\text{MSE Descent Profile}$", r"$\text{Fitted Model}$"))\
.update_layout(width=1100, height = 400, title = frames[0].layout.title,
updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(1200,0),
AnimationButtons.pause()])])
fig = fig.add_traces(frames[0]["data"], rows=1, cols=[1, 1, 1, 2, 2])\
.update(frames = frames)
fig = fig.update_xaxes(range=[vals[0], vals[-1]], title=r"$\text{Regression Coefficient }w_1$", col=1)\
.update_yaxes(range=[vals[0], vals[-1]], title=r"$\text{Regression Intercept }w_2$", col=1)\
.update_xaxes(title=r"$\text{Variable } x$", col=2)\
.update_yaxes(range=[min(y)-.5, max(y)+.5], title=r"$\text{Response }y$", col=2)
animation_to_gif(fig, "../figures/mse_gd_opt.gif", 700, width=1100, height=400)
fig.show()
```
Next, we examine the MSE optimization process for different constant values of the step size
```
f, f_grad = MSE(X, y)
vals = np.linspace(-5, 5, 50)
contour = function_contour(f, vals)
eta = .01
steps = VanillaGradientDescent(f, f_grad, eta=lambda t: eta,tol = 1e-5, init=np.array([4.5,-4]))
fig = go.Figure(data =
[go.Scatter(x=steps[:,0], y=steps[:,1], marker=dict(size=3, color="black"), mode="markers+lines", showlegend=False),
contour],
layout = go.Layout(
width=400, height=400,
xaxis = dict(title = r"$\text{Regression Coefficient }w_1$", range=[-5,5]),
yaxis = dict(title = r"$\text{Regression Intercept }w_2$", range=[-5,5]),
title = rf"$\text{{Step Size: }}\eta={eta} \text{{ (}}n={len(steps)}\text{{ Iterations)}}$"
))
fig.write_image(f"../figures/mse_gd_eta_{eta}.png")
fig.show()
```
## Visualize 2/3D Traverse In Parameter Space For GD Iterations
```
def Animate_GradientDescent(f, f_grad, init, eta, delta, axis_range, frame_time=500):
steps = VanillaGradientDescent(f, f_grad, init, eta, delta)
surface, contour = function_surface(f, axis_range), function_contour(f, axis_range)
frames, markers = [], []
for i in range(1, len(steps) + 1):
z = as_array(f(steps[:i]))
frames.append(go.Frame(data=[
# 3D visualization of progress
go.Scatter3d(x=steps[:i,0], y=steps[:i,1], z=z[:i], marker=dict(size=3, color="black"), showlegend=False),
go.Scatter3d(x=[steps[i-1,0]], y=[steps[i-1,1]], z=[z[i-1]],marker=dict(size=5, color="red"), showlegend=False),
surface,
# 2D visualization of progress
go.Scatter(x=steps[:i,0], y=steps[:i,1], marker=dict(size=3, color="black"), mode="markers+lines", showlegend=False),
go.Scatter(x=[steps[i-1,0]], y=[steps[i-1,1]], marker=dict(size=5, color="red"), showlegend=False),
contour],
traces=[0, 1, 2, 3, 4, 5],
layout=go.Layout(title=rf"$\text{{Iteration }} {i}/{steps.shape[0]}$" )))
return make_subplots(rows=1, cols=2, specs=[[{'type':'scene'}, {}]],
subplot_titles=('3D Visualization Of Function', '2D Visualization Of Function'))\
.add_traces(data=frames[0]["data"], rows=[1, 1, 1, 1, 1, 1], cols=[1, 1, 1, 2, 2, 2])\
.update(frames = frames)\
.update_xaxes(range=[axis_range[0], axis_range[-1]])\
.update_yaxes(range=[axis_range[0], axis_range[-1]])\
.update_layout(width=900, height = 330, title = frames[0].layout.title,
updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(frame_time,0),
AnimationButtons.pause()])])
```
### Gradient Descent Over Gaussian Function
```
from numpy.linalg import solve, det
def negative_gaussian(mu=np.zeros(2), cov=np.eye(2)):
from scipy.stats import multivariate_normal
def _evaluate(x: np.ndarray):
return - multivariate_normal(mu, cov).pdf(x)
def _gradient(x: np.ndarray):
z = solve(cov,x-mu)
return np.exp(-z @ (x-mu) /2) * z / (2*np.sqrt((2*np.pi)**mu.shape[0] * det(cov)))
return _evaluate, _gradient
Animate_GradientDescent(*negative_gaussian(cov=[5,10]*np.eye(2)),
init=np.array([-4.8,-4.8]),
eta= lambda t: 300,
delta=1e-2,
axis_range=np.linspace(-5, 5, 50))
```
### Gradient Descent Over Highly Non-Convex Function
```
def non_convex_function():
def _evaluate(x: np.ndarray):
x = np.stack(x, axis=0)
z = np.sin(x[:, 0] * x[:, 1]) / np.sqrt(x[:, 0]**2 + x[:, 1]**2)
return np.array([[z]]) if np.isscalar(z) else z
def _gradient(x: np.ndarray):
X, Y = x[0], x[1]
a = np.array([(Y*np.cos(X*Y)*(X**2 + Y**2) - X*np.sin(X*Y)) / (X**2 + Y**2)**(1.5),
(X*np.cos(X*Y)*(X**2 + Y**2) - Y*np.sin(X*Y)) / (X**2 + Y**2)**(1.5)])
return a
return _evaluate, _gradient
Animate_GradientDescent(*non_convex_function(),
init=np.random.uniform(-5,5,2),
eta= lambda t: 2*.1,
delta=1e-3,
axis_range=np.linspace(-5, 5, 50))
```
## Stochastic Gradient Descent
Below is a naive implementation of stochastic gradient descent, receiving a "module" to minimize and a batch size
```
def VanillaStochasticGradientDescent(module, init=np.random.uniform(-1, 1, 2), eta=lambda t: .1, tol=1e-5, batch_size=5):
steps, delta = [init], tol
t = 1
while delta >= tol:
# Sample data for current iteration
ids = module.sample_batch(batch_size)
# Calculate iteration elements
g, eta_t = module.gradient(steps[-1], samples = ids), eta(t)
step = steps[-1] - eta_t * g
steps.append(step)
delta = np.sum((steps[-1] - steps[-2])**2)**.5
t += 1
return np.array(steps)
```
The MSE module consists of `evaluate`, `gradient` and `sample_batch` functions. To enable the SGD to behave like GD when a batch size is not passed, `sample_batch` returns the full index set $0,1,\ldots,n\_samples-1$.
```
class MSE:
def __init__(self, X: np.ndarray, y: np.ndarray):
self.X, self.y = X, y
def evaluate(self, w: np.ndarray, samples: np.ndarray = None):
if samples is None:
samples = np.arange(self.X.shape[0])
X, y = self.X[samples, :], self.y[samples]
Y = np.broadcast_to(y[..., np.newaxis], (y.shape[0], w.shape[0]))
return np.mean( (X @ w.T - Y)**2, axis=0)
def gradient(self, w: np.ndarray, samples: np.ndarray = None):
if samples is None:
samples = np.arange(self.X.shape[0])
return self.X[samples,:].T @ (self.X[samples,:] @ w.T - self.y[samples]) * 2 / len(samples)
def sample_batch(self, n:int=None):
if n is None:
return np.arange(self.X.shape[0])
return np.random.randint(self.X.shape[0], size=n)
# Generate data according to the linear regression with Gaussian noise assumptions
np.random.seed(0)
n = 100
w = np.array([5,-2])
X = np.c_[np.random.uniform(low=-3, high=3, size=(n, 1)), np.ones((n, 1))]
y = X @ w + np.random.normal(0, 5, size=(n,))
module = MSE(X, y)
vals = np.linspace(-30, 30, 100)
contour = function_contour(module.evaluate, vals)
eta, init = lambda t: .1, np.array([-20,-20])
gd_steps = VanillaStochasticGradientDescent(module, eta=eta, init=init, batch_size=None, tol=1e-1)
sgd_steps = VanillaStochasticGradientDescent(module, eta=eta, init=init, batch_size=5, tol=1e-1)
fig = make_subplots(rows=1, cols=2,
subplot_titles = (r"$\text{Gradient Descent}$",
r"$\text{Stochastic Gradient Descent}$"))\
.add_traces([go.Scatter(x=gd_steps[:,0], y=gd_steps[:,1], mode = "markers+lines", showlegend=False, marker_color="black"),
go.Scatter(x=sgd_steps[:,0], y=sgd_steps[:,1], mode = "markers+lines", showlegend=False, marker_color="black"),
contour,contour], rows=[1]*4, cols=[1,2,1,2])\
.update_xaxes(range=[vals[0],vals[-1]])\
.update_yaxes(range=[vals[0],vals[-1]])\
.update_layout(width=800, height=400)
fig.write_image(f"../figures/mse_gd_sgd.png")
fig.show()
```
```
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
Open Jupyter notebook:
<br> Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook
<br>(Start >> All Programs >> Programming >> Anaconda3 >> Jupyter Notebook)
In Jupyter notebook, select the tab with the contents list of the interactive textbook:
Open __Seminar 1__ by clicking on __1_Data_types_and_operators__.
<h1>Data Types and Simple Arithmetic Operators</h1>
<h1>Lesson Goal</h1>
Compose and solve simple mathematical problems using Python.
<h1>Objectives</h1>
- Use Python as a calculator.
- Express mathematical and logic operations correctly.
- Learn to use different "types" of variable.
We will finish by learning how to create a local copy of the interactive textbook on your personal computer that you will use to complete your homework.
Why we are studying this:
- To do basic algebra in Python.
- To use programming to solve engineering problems that you will encounter in your other classes.
Lesson structure:
- Learn new skills together:
- __Demonstration__ on slides.
- __Completing examples__ in textbooks.
- __Feedback answers__ (verbally / whiteboards)
- Practise alone: __Completing review exercises__.
- Skills Review: __Updating your online git repository__.
- New skills: Updating your online git repository __from home__.
- __Summary__.
Each time you complete a section of your textbook, please wait to feedback the answer before moving on.
Let’s start by practising how you will fill in your textbooks and feedback answers.
__Basic Arithmetic Operators...__
<a id='AlgebraicOperators'></a>
<h2>Simple Operators</h2>
We can use Python like a calculator.
__Simple arithmetical operators:__
$+$ Addition <br>
$-$ Subtraction <br>
$*$ Multiplication <br>
$/$ Division <br>
$//$ Floor division <br>
$\%$ Modulus <br>
$**$ Exponent <br>
<h3>Algebraic Operators</h3>
Express the following simple expressions using python code. <br>
Click on the cell to type in it. <br>
Press "Shift" + "Enter" to run the cell.
$3 + 8$
```
3+8
```
$2 - 4$
```
2-4
```
Use the list of mathematical operators in your textbook to write the expressions using Python.
__STOP__ when you have completed the expression $2^{3}$.
We will review your answers before moving on to Section 2.
$6 \times 4$
```
6*4
```
$ 12 \div 5$
```
12/5
```
$12 \div 5$ without any decimal points or remainders.
```
12//5
```
The remainder when $12$ is divided by $5$
```
12%5
```
$2^{3}$
```
2**3
```
### Operator precedence
__Operator precedence:__ The order in which operations are performed when there are multiple operations in an expression
e.g. multiplication before addition.
Python follows the usual mathematical rules for precedence.
> 1. Parentheses e.g. $(2+4)$
1. Exponents e.g. $2^2$
1. Multiplication, Division, Floor Division and Modulus (left to right)
1. Addition and Subtraction (left to right)
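These rules can be checked directly in Python:

```python
# Multiplication binds tighter than addition
print(2 + 3 * 4)     # 14
# Parentheses are evaluated first
print((2 + 3) * 4)   # 20
# Operators of equal precedence evaluate left to right
print(100 / 10 * 2)  # 20.0
```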
- The expression should __evaluate correctly__.
- The expression should be __easily readable__.
__Easily Readable__
Simple enough for someone else reading the code to understand.
It is possible to write __code__ that is correct, but that might be difficult for someone (including you!) to check.
#### Correct Evaluation
A common example:
$$
\frac{10}{2 \times 50} = 0.1
$$
```
10 / 2 * 50
```
is incorrect.
Multiplication and division have the same precedence.
The expression is evaluated 'left-to-right'.
The correct result is achieved by using brackets $()$, as you would when using a calculator.
$$
\frac{10}{2 \times 50} = 0.1
$$
__How would you enter this using a calculator to get the correct order of precedence?__
```
10 / (2*50)
```
#### Readability
An example that __evaluates__ the following expression correctly:
$$
2^{3} \cdot 4 = 32
$$
but is __not easily readable__:
```
2**3*4
```
$$
2^{3} \cdot 4 = 32
$$
A better (__more readable__) expression:
```
(2**3)*4
```
It is best practice to use spaces between characters to make your code more readable.
You will be marked on readability in your assessment.
Start developing good habits now!
```
(2**3)*4
#(2**3) * 4
```
## Variables and Assignment
We can easily solve the equations so far using a calculator.
Let's look at some special operations that Python allows us to do.
What if we want to evaluate the same expression multiple times, changing the numerical constants each time?
Example:
>$x^{y} \cdot z = $ <br>
>$2^{3} \cdot 4 = $ <br>
$4^{5} \cdot 3 = $ <br>
$6^{2} \cdot 2 =$ ...
What if we want to use the value of the expression in a subsequent computation?
Example:
>$a = b + c$
>$d = a + b$
In both these cases programming can improve the speed and ease of computation by using *assignment*.
### Assigning Variables
When we compute something, we usually want to __store__ the result.
This allows us to use it in subsequent computations.
*Variables* are what we use to store something.
```
c = 10
print(c)
```
Above, the variable `c` is used to 'store' the value `10`.
The function `print` is used to display the value of a variable.
(We will learn what functions are and how we use them later).
To compute $c = a + b$ , where $a = 2$ and $b = 11$:
```
a = 2
b = 11
c = a + b
```
On each line the expression on the right-hand side of the assignment operator '`=`' is evaluated and then stored as the variable on the left-hand side.
```
print(c)
```
If we want to change the value of $a$ to $4$ and recompute the sum, replace `a = 2` with `a = 4` and execute the code.
__Try this yourself__.
Change the value of a or b.
Re-run the cell to update the value.
(Click on the cell to type in it. <br>
Press "Shift" + "Enter" to run the cell.)
Then run the `print(c)` block to view the new value.
__In the cell below find $y$ when__:
<br>$y=ax^2+bx+c$,
<br>$a=1$
<br>$b=1$
<br>$c=-6$
<br>$x=-2$
When you have finished, hold up your answer on your whiteboard.
```
# create variables a, b, c and x
# e.g. a = 1
a = 1
b = 1
c = -6
x = 0
y = a*x**2 + b*x + c
print(y)
#type: print (y) to reveal the answer
```
What value did you get for y?
Answer: $ y = -4 $
Now change the value of $x$ so that $x = 0$ and re-run the cell to update the value.
What value did you get for y this time?
Answer: $ y = -6 $
### Augmented Assignment
The case where the assigned value depends on a previous value of the variable.
Example:
```
a = 2
b = 11
a = a + b
print(a)
```
This type of expression is not a valid algebraic statement since '`a`' appears on both sides of '`=`'.
However, it is very common in computer programming.
__How it works:__
> `a = a + b`
1. The expression on the right-hand side is evaluated (the values assigned to `a` and `b` are summed).
2. The result is assigned to the left-hand side (to the variable `a`).
<a id='Shortcuts'></a>
### Shortcuts
Augmented assignments can be written in short form.
For __addition__:
`a = a + b` can be written `a += b`
```
# Long-hand addition
a = 2
b = 11
a = a + b
print(a)
# Short-hand addition
a = 2
b = 11
a += b
print(a)
```
For __subtraction__:
`a = a - b` can be written `a -= b`
```
# Long-hand subtraction
a = 1
b = 4
a = a - b
print(a)
# Short-hand subtraction
a = 1
b = 4
a -= b
print(a)
```
The <a href='#AlgebraicOperators'>basic algebraic operators</a> can all be manipulated in the same way to produce a short form of augmented assignment.
Complete the cells below to include the __short form__ of the expression and `print(a)` to check your answers match.
__Multiplication__
```
# Long-hand multiplication
a = 10
c = 2
a = c*a
print(a)
# Short-hand multiplication
a = 10
c = 2
a *= c
print(a)
```
__Division__
```
# Long-hand division
a = 1
a = a/4
print(a)
# Short-hand division
a = 1
a /= 4
print(a)
```
__Floor Division__
```
# Long-hand floor division
a = 12
a = a//5
print(a)
# Short-hand floor division
a = 12
a //= 5
print(a)
```
__Modulus__
```
# Long-hand modulus
a = 12
c = 5
a = a % c
print(a)
# Short-hand modulus
a = 12
c = 5
a %= c
print(a)
```
__Exponent__
```
# Long-hand exponent
a = 3
c = 2
a = a ** c
print(a)
# Short-hand exponent
a = 3
c = 2
a **= c
print(a)
```
##### Note: The sentences beginning with "#" in the cell are called comments.
These are not computed as part of the program but are there for humans to read to help understand what the code does.
## Naming Variables
__It is good practice to use meaningful variable names.__
e.g. using '`x`' for time, and '`t`' for position is likely to cause confusion.
You will be marked on readability in your assessment.
Start developing good habits now!
Problems with poorly considered variable names:
1. You're much more likely to make errors.
1. It can be difficult to remember what the program does.
1. It can be difficult for others to understand and use your program.
__Different languages have different rules__ for what characters can be used in variable names.
In Python variable names can use letters and digits, but cannot start with a digit.
e.g.
`data5 = 3` $\checkmark$
`5data = 3` $\times$
__Python is a case-sensitive language__
e.g. the variables '`A`' and '`a`' are different.
__Languages have *reserved keywords*__ that cannot be used as variable names as they are used for other purposes.
The reserved keywords in Python are:
`['False', 'None', 'True', 'and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'nonlocal', 'not', 'or', 'pass', 'raise', 'return', 'try', 'while', 'with', 'yield']`
Reserved words are colored bold green when you type them in the Notebook so you can see if one is being used.
If you try to assign something to a reserved keyword, you will get an error e.g. it is not possible to create a variable with the name __`for`__:
```
for = 12
```
__Sometimes it is useful to have variable names that are made up of two words.__
A convention is to separate the words in the variable name using an underscore '`_`'.
e.g. a variable name for storing the number of days:
```python
num_days = 10
```
Suggest a variable name for each of the following quantities and hold it up on your whiteboard.
__temperature__
__height__
__depth of hole__
__class__
## Comparing Variables Using Booleans
__Boolean:__ A type of variable that can take on one of two values - true or false.
One way to visualise how a Boolean works is consider the answer when we make a comparison...
<a id='ComparisonOperators'></a>
### Comparison Operators
__Comparison Operator:__ An operator that is used to compare the values of two variables.
__Commonly used comparison operators:__
$==$ Equality <br>
$!=$ Inequality <br>
$>$ Greater than <br>
$<$ Less than <br>
$>=$ Greater than or equal to <br>
$<=$ Less than or equal to <br>
__Example:__ Comparing variables a and b using comparison operators returns a boolean variable:
```
a = 10.0
b = 9.9
# Check if a is less than b.
print("Is a less than b?")
print(a < b)
# Check if a is more than b.
print("Is a greater than b?")
print(a > b)
```
##### Note: We can print words by placing them between quotation marks "......".
The collection of words between the marks is called a *string*.
A string is a type of *variable*. We will learn about other types of variable shortly.
__Complete the cell in your textbook by writing the correct comparison operator in each set of empty brackets.__
```
a = 14
b = -9
c = 14
# Check if a is equal to b
print("Is a equal to b?")
print(a == b)
# Check if a is equal to c
print("Is a equal to c?")
print(a == c)
# Check if a is not equal to c
print("Is a not equal to c?")
print(a != c)
# Check if a is less than or equal to b
print("Is a less than or equal to b?")
print(a <= b)
# Check if a is less than or equal to c
print("Is a less than or equal to c?")
print(a <= c)
# Check if two colours are the same
colour0 = 'blue'
colour1 = 'green'
print("Is colour0 the same as colour1?")
print(colour0 == colour1)
```
### Logical Operators
The comparisons we have looked at so far consider two variables.
*Logical operators*:
```python
and
or
not
```
allow us to make multiple comparisons at the same time.
The code
```python
X and Y
```
will evaluate to `True` if statement `X` *and* statement `Y` are both true.
Otherwise it will evaluate to `False`.
The code
```python
X or Y
```
will evaluate to `True` if statement `X` *or* statement `Y` is true.
Otherwise it will evaluate to `False`.
__Examples:__
$10 < 9$ is false
$15 < 20$ is true
```
#print(10 < 9 and 15 < 20)
#print(10 < 9 or 15 < 20)
```
Guess the answer (`True` or `False`) by writing it on your whiteboard:
```
# print(1 < 2 and 3 < 4)
# print(1 < 2 or 4 < 3)
# print(1 < 2 and 3 < 4)
```
In Python, the 'not' operator negates a statement, e.g.:
```
a = 12
b = 7
#print(a < b)
#print(not a < b)
```
In your textbook you will find an example of a simple computer program that uses comparison operators.
Based on the current time of day, the program answers two questions:
>__Is it lunchtime?__
>`True`
if it is lunch time.
>__Is it time for work?__
>`True`
if it is within working hours.
```
time = 13.05 # current time
work_starts = 8.00 # time work starts
work_ends = 17.00 # time work ends
lunch_starts = 13.00 # time lunch starts
lunch_ends = 14.00 # time lunch ends
# variable lunchtime is True or False
lunchtime = time >= lunch_starts and time < lunch_ends
# variable work_time is True or False
work_time = (work_starts <= time < work_ends) and not lunchtime
print("Is it lunchtime?")
print(lunchtime)
print("Is it time for work?")
print(work_time)
```
You can see that if we change the time, the program output changes.
__Try changing the value of variable `time`__ to a value that is:
- before work
- during work
- during lunchtime
- after work
Each time you change the value of `time` re-run the cell to check if the answer is as you expect; lunchtime, work-time or neither.
Note that the comparison operators (`>=`, `<=`, `<` and `>`) are evaluated before the Boolean operators (`and`, `or`).
### Operator Precedence
> 1. Parentheses
1. Exponents
1. Multiplication, Division, Floor Division and Modulus (left to right)
1. Addition and Subtraction (left to right)
1. Comparison Operators (left to right)
1. Boolean not
1. Boolean and
1. Boolean or
```
a = 3 + 1 < 4 or 3 * 1 < 4
a = ((3 + 1) < 4) or ((3 * 1) < 4)
```
Both lines compute the same result, but the second is more __readable__.
## Types
All variables have a 'type', which indicates what the variable is, e.g. a number, a string of characters, etc.
Type is important because it determines:
- how a variable is stored
- how it behaves when we perform operations on it
- how it interacts with other variables.
e.g. multiplication of two real numbers is different from multiplication of two complex numbers.
### Introspection
We can check a variable's type using *introspection*.
To check the type of a variable we use the function `type`.
```
x = True
print(type(x))
a = "1.0"
print(type(a))
```
Complete the cell in your interactive textbook to find the `type` of `a` when it is written as shown below:
```
a = 1
print(type(a))
a = 1.0
print(type(a))
```
What is the first type? What is the second type?
Did anyone get a different answer?
Note that `a = 1` and `a = 1.0` are different types!
- __bool__ means __Boolean__ variable.
- __str__ means __string__ variable.
- __int__ means __integer__ variable.
- __float__ means __floating point__ variable.
This distinction is very important for numerical computations.
We will look at the meaning of these different types next...
Explain the importance of the position of print statements when augmenting variables.
### Booleans
A type of variable that can take on one of two values - true or false. This is the simplest type.
```
a = True
b = False
# test will = True if a or b = True
test = a or b
print(test)
print(type(test))
```
##### Note: We can use a single instance of the print function to display multiple pieces of information if we separate them by commas.
e.g. `print(item_1, item_2)`
```
print(test, type(test))
```
__Re-cap: what does `a` evaluate to? (`True` or `False`)__
```
a = (5 < 6 or 7 > 8)
#print(a)
print(a)
```
<a id='Strings'></a>
### Strings
A string is a collection of characters.
A string is created by placing the characters between quotation marks.
You may use single or double quotation marks; either is fine e.g.
`my_string = 'This is a string.'`
or
`my_string = "This is a string."`
__Example:__ Assign a string to a variable, display the string, and then check its type:
```
my_string = "This is a string."
print(my_string)
print(type(my_string))
```
We can perform many different operations on strings.
__Example__: Extract a *single* character as a new string:
> *__NOTE:__ Python counts from 0.*
```
my_string = "This is a string."
# Store the 3rd character of `my_string` as a new variable
s = my_string[2]
# Print
print(s)
# Check type
print(type(s))
```
The number that describes the position of a character is called the *index*.
What is the character at index 4?
What is the index of character r?
```
my_string = "This is a string."
```
This shows that we count spaces as characters.
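One way to check these answers yourself (a sketch; `str.find` returns the index of the first matching character):

```python
my_string = "This is a string."

# The character at index 4 -- repr() makes a space visible in the output
print(repr(my_string[4]))

# The index of the first 'r' in the string
print(my_string.find("r"))
```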
__Try it yourself__.
`my_string = "This is a string."`
In the cell provided in your textbook:
- store the 6th character as a new variable
- print the new variable
- check that it is a string
```
# Store the 6th character as a new variable
index5 = my_string[5]
# Print the new variable
print(index5)
# Check the type of the new variable
print(type(index5))
```
We can extract a *range of* characters as a new string by specifying the index to __start__ at and the index to __stop__ at:
```
my_string = "This is a string."
# Store the first 6 characters
s = my_string[0:6]
# print
print(s)
# check type
print(type(s))
```
$$
my\_string =
\underbrace{
\underbrace{t}_{\text{0}} \
\underbrace{h}_{\text{1}}\
\underbrace{i}_{\text{2}}\
\underbrace{s}_{\text{3}}\
\underbrace{}_{\text{4}}\
\underbrace{i}_{\text{5}}\
}_{\text{s}}
\underbrace{s}_{\text{6}}\
\underbrace{}_{\text{7}}\
\underbrace{a}_{\text{8}}\
\underbrace{}_{\text{9}}\
\underbrace{s}_{\text{10}}\
\underbrace{t}_{\text{11}}\
\underbrace{r}_{\text{12}}\
\underbrace{i}_{\text{13}}\
\underbrace{n}_{\text{14}} \
\underbrace{g}_{\text{15}} \
\underbrace{.}_{\text{16}} \
$$
__Note:__
- The space between the first and second word is counted as the 5th character.
- The "stop" value is not included in the range.
```
# Store the last 4 characters and print
s = my_string[-4:]
print(s)
```
$$
my\_string =
\underbrace{t}_{\texttt{-17}} \
\underbrace{h}_{\texttt{-16}}\
\underbrace{i}_{\texttt{-15}}\
\underbrace{s}_{\texttt{-14}}\
\underbrace{}_{\texttt{-13}}\
\underbrace{i}_{\texttt{-12}}\
\underbrace{s}_{\texttt{-11}}\
\underbrace{}_{\texttt{-10}}\
\underbrace{a}_{\texttt{-9}}\
\underbrace{}_{\texttt{-8}}\
\underbrace{s}_{\texttt{-7}}\
\underbrace{t}_{\texttt{-6}}\
\underbrace{r}_{\texttt{-5}}\
\underbrace{
\underbrace{i}_{\texttt{-4}}\
\underbrace{n}_{\texttt{-3}} \
\underbrace{g}_{\texttt{-2}} \
\underbrace{.}_{\texttt{-1}} \
}_{\text{s}}
$$
__Note:__
- The second value in this range is empty.
- This means the range ends at the end of the string.
__Try it yourself.__
In the cell provided in your textbook:
- store the last 6 characters
- print your new variable
```
# Store the last 6 characters as a new variable
w = my_string[-6:]
# Print the new variable
print(w)
```
In the next cell provided:
- store 6 characters, starting with the 2nd character; "his is"
- print your new variable
```
# Store 6 characters, starting with "h"
e = my_string[1:7]
# Print the new variable
print(e)
```
Is there an alternative way of extracting the same string?
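One possibility, sketched below: the same characters can be selected with negative indices, counting back from the end of the 17-character string:

```python
my_string = "This is a string."

# "his is" via positive indices (start at 1, stop before 7)
print(my_string[1:7])

# The same slice via negative indices: index 1 is -16, index 7 is -10
print(my_string[-16:-10])
```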
__Example:__ Add strings together.
```
start = "Py"
end = "thon"
word = start + end
print(word)
```
__Example:__ Add a section of a string to a section of another string:
```
start = "Pythagorus"
end = "marathon"
word = start[:2] + end[-4:]
print(word)
```
__Note__: When slicing from the start of a string, we can leave the first index blank __or__ use 0; either is OK (e.g. `start[:2]` is the same as `start[0:2]`).
__Try it yourself:__
In the cell in your textbook add the variables `start` and `end` to make a sentence.
```
start = "My name is"
end = "Hemma"
emma = start + end
print(emma)
# Add start and end to make a new variable and print it
```
Notice that we need to add a space to separate the words "is" and "Hemma".
We do this using a pair of quotation marks, separated by a space.
```
sentence = start + " " + end
#print(sentence)
print(sentence)
```
### Numeric types
Numeric types are particularly important when solving scientific and engineering problems.
Python 3 has three numerical types:
- integers (`int`)
- floating point numbers (`float`)
- complex numbers (`complex`)
__Integers:__ Whole numbers. <br>
__Floating point:__ Numbers with a decimal place.<br>
__Complex numbers:__ Numbers with a real and imaginary part.<br>
Python determines the type of a number from the way we input it.
e.g. It will decide that a number is an `int` if we assign a number with no decimal place:
__Try it for yourself__
In the cell provided in your textbook:
- Create a variable with the value 3.1
- Print the variable type
- Create a variable with the value 2
- Print the variable type
```
# Create a variable with the value 3.1
a = 3.1
# Print the variable type
print(type(a))
# Create a variable with the value 2
opopo = 2
# Print the variable type
print(type(opopo))
print(type(2.))
print(type(2.0))
print(type(float(2)))
```
What type is the first variable?
What type is the second variable?
__How could you re-write the number 2 so that Python makes it a float?__
Try changing the way 2 is written and run the cell again to check that the variable type has changed.
### Integers
- Integers (`int`) are whole numbers.
- They can be positive or negative.
- Integers should be used when a value can only take on a whole number <br> e.g. the year, or the number of students following this course.
### Floating point
Most engineering calculations involve numbers that cannot be represented as integers.
Numbers that have a decimal point are automatically stored using the `float` type.
A number is automatically classed as a float:
- if it has a decimal point
- if it is written using scientific notation (i.e. using e or E - either is fine)
<a id='ScientificNotation'></a>
#### Scientific Notation
In scientific notation, the letter e (or E) symbolises the power of ten in the exponent.
For example:
$$
10.45e2 = 10.45 \times 10^{2} = 1045
$$
Examples using scientific notation.
```
a = 2e0
print(a, type(a))
b = 2e3
print(b)
c = 2.1E3
print(c)
```
__Try it yourself__
In the cell provided in your textbook:
- create a floating point variable for each number shown using scientific notation.
- print each variable to check it matches the number given in the comment.
```
# Create a variable with value 62
a = 6.2e1
# Print the variable
print(a)
# Create a variable with value 35,000
b = 3.5e4
# Print the variable
print(b)
# Are there any other ways you could have expressed this?
print(True)
```
What alternative ways can be used to express 35,000?
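A few equivalent ways are sketched below; each produces the same floating point value:

```python
a = 3.5e4    # scientific notation with a lower-case e
b = 3.5E4    # upper-case E works too
c = 35e3     # a different power of ten
d = 35000.0  # a plain decimal point
print(a == b == c == d)  # True -- all are the float 35000.0
```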
### Complex numbers
Complex numbers have real and imaginary parts.
We can declare a complex number in Python by adding `j` or `J` after the complex part of the number:
__Standard mathematical notation.__ __Python notation__
$ a = \underbrace{3}_{\text{real part}} + \underbrace{4j}_{\text{imaginary part}} $
`a = 3 + 4j` __or__ `a = 3 + 4J`
```
b = 4 - 3j
print(b, type(b))
```
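The real and imaginary parts of a complex number can be accessed as shown below, and arithmetic on complex numbers follows the usual complex rules:

```python
b = 4 - 3j

# The real and imaginary parts are stored as floats
print(b.real, b.imag)

# Adding two complex numbers combines real and imaginary parts separately
print(b + (1 + 5j))  # (5+2j)
```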
<a id='Casting'></a>
## Type Conversions (Casting)
We often want to change between types.
Sometimes we need to make sure two variables have the same type in order to perform an operation on them.
Sometimes we receive data of a type that is not directly usable by the program.
This is called *type conversion* or *type casting*.
### Automatic Type Conversion
If we add two integers, the results will be an integer:
```
a = 4 # int
b = 15 # int
c = a + b
print(c, type(c))
```
However, if we add an int and a float, the result will be a float:
```
a = 4 # int
b = 15.0 # float
c = a + b
print(c, type(c))
```
If we divide two integers, the result will be a `float`:
```
a = 16 # int
b = 4 # int
c = a/b
print(c, type(c))
```
When dividing two integers with floor division (or 'integer division') using `//`, the result will be an `int` e.g.
```
a = 16 # int
b = 3 # int
c = a//b
print(c, type(c))
```
In general:
- operations that mix an `int` and `float` will generate a `float`.
- operations that mix an `int` or a `float` with `complex` will generate a `complex` type.
If in doubt, use `type` to check.
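A short check of these mixing rules:

```python
# int + float -> float
print(type(2 + 3.0))

# int + complex -> complex
print(type(2 + 3j))

# float + complex -> complex
print(type(2.0 + 3j))
```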
### Explicit Type Conversion
We can explicitly change (or *cast*) the type.
To cast variable a as a different type, write the name of the type, followed by the variable to convert in brackets.
__Example: Cast from an int to a float:__
```
a = 1
a = float(a)
print(a, type(a))
# If we use a new variable name the original value is unchanged.
a = 1
b = float(a)
print(a, type(a))
print(b, type(b))
# If we use the original name, the variable is updated.
a = float(a)
print(a, type(a))
```
__Try it yourself.__
In the cell provided:
- cast variable `a` from a float back to an int.
- print variable `a` and its type to check your answer
```
#cast a as an int
a = int(a)
print(a, type(a))
# print a and its type
```
##### Note: Take care when casting as the value of the variable may change as well as the type.
To demonstrate this we will complete a short exercise together...
In the cell provided in your textbook:
1. cast `i` as an `int` and print `i`.
1. cast `i` back to a `float` and print `i`.
```
i = 1.3 # float
print(i, type(i))
# cast i as an int and print it
i = int(i)
print(i)
# cast i back to a float and print it
i = float(i)
print(i)
```
What has happened to the original value of `i`?
Note that when converting from a `float` to an `int`, the values after the decimal point are discarded.
This type of rounding is called 'round towards zero' or 'truncation'.
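Truncation is different from rounding to the nearest whole number (which Python's built-in `round` function does), and the difference also shows for negative numbers:

```python
print(int(1.7))    # truncates towards zero: 1
print(int(-1.7))   # also towards zero: -1, not -2
print(round(1.7))  # rounds to the nearest whole number: 2
```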
A common task is converting numerical types to-and-from strings.
Examples:
- Reading a number from a file, where it appears as a string.
- User input might be given as a string.
__Example: Cast from a float to a string:__
```
a = 1.023
b = str(a)
print(b, type(b))
```
__Example: Cast from a string to a float:__
It is important to cast string numbers as either `int`s or `float`s for them to perform correctly in algebraic expressions.
Consider the example below:
```
a = "15.07"
b = "18.07"
print("As string numbers:")
print("15.07 + 18.07 = ", a + b)
print("When cast from string to float:")
print("15.07 + 18.07 = ", float(a) + float(b))
```
Note from the cell above that numbers expressed as strings can be cast as floats *within* algebraic expressions.
Only numerical values can be cast as numerical types.
e.g. Trying to cast the string `four` as an integer causes an error:
```
f = float("four")
```
__Complete the review exercises in your textbook.__
We will stop 10 minutes before the end of the seminar to:
- update your online git repository
- summarise what we have learnt today
## Review Exercises
Here are a series of short engineering problems for you to practise each of the new Python skills that you have learnt today.
### Review Exercise: Gravitational Potential
The gravitational potential, $V$, of a particle of mass $m$ at a distance $r$ from a body of mass $M$, is:
$$
V = \frac{G M m}{r}
$$
In the cell below, solve for $V$ when:
$G = \text{gravitational constant} = 6.674 \times 10^{-11}$ Nm$^{2}$kg$^{-2}$.
$M = 1.65 \times 10^{12}$kg
$m = 6.1 \times 10^2$kg
$r = 7.0 \times 10^3$ m
Assign variables for $G, M, m$ and $r$ before solving.
<br>Express the numbers using __scientific notation__.
<a href='#ScientificNotation'>Jump to Scientific Notation</a>
```
# Gravitational Potential
G = 6.674e-11
M = 1.65e12
m = 6.1e2
r = 7.0e3
V = (G*M*m)/r
print(V, "J")
```
### Review Exercise: Fahrenheit to Celsius
Degrees Fahrenheit ($T_f$) are converted to degrees Celsius ($T_c$) using the formula:
$$
T_c = 5(T_f - 32)/9
$$
In the cell below, write a program to convert 78 degrees Fahrenheit to degrees Celsius and print the result.
Write your program such that you can easily change the input temperature in Fahrenheit and re-calculate the answer.
```
# Convert degrees Fahrenheit to degrees Celsius
Tf = 78
Tc = 5*(Tf - 32) / 9
print(Tc)
```
### Review Exercise: Volume of a Cone
The volume of a cone is:
$$
V = \frac{1}{3}(base \ area)\times(perpendicular \ height)
$$

In the cell below, find the internal volume of a cone of internal dimensions:
base radius, $r = 5cm$
perpendicular height, $h = 15cm$
Assign variables for $r$ and $h$ before solving.
```
pi = 3.142
# Internal volume
r_cone = 2.5
h_cone = 15
A_cone = pi * r_cone**2
V = (1/3) * A_cone * h_cone
print(V)
```
The cone is held upside down and filled with liquid.
The liquid is then transferred to a hollow cylinder.
Base radius of cylinder, $r_c = 4cm$.
<img src="img/cone-cyl.gif" alt="Drawing" style="width: 200px;"/>
The volume of liquid in the cylinder is:
$V = (base \ area)\times(height \ of \ liquid)$
In the cell below, find the height of the liquid in the cylinder.
Assign a variable for $r_c$ before solving.
```
# H = height of liquid in the cylinder
r_cyl = 4
A_cyl = pi * r_cyl**2
h_cyl = V / A_cyl
print(h_cyl)
```
The total height of the cylinder, $H_{tot}$, is 10cm.
In the cell below, use a __comparison operator__ to show if the height of the liquid, $H$, is more than half the total height of the cylinder.
<a href='#ComparisonOperators'>Jump to Comparison Operators</a>
```
# Is the height of the liquid more than half the total height of the cylinder?
h_tot = 10
h_cyl > h_tot/2
```
Lastly, go back and change the radius of the __cone__ to 2.5cm.
Re-run the cells to observe how you can quickly re-run calculations using different initial values.
### Review Exercise: Manipulating Strings
<a href='#Strings'>Jump to Strings</a>
__(A)__
In the cell below, print a new string whose:
- first 3 letters are the last 3 letters of `a`
- last 3 letters are the first 3 letters of `b`
```
a = "orangutans"
b = "werewolves"
print(a[-3:] + b[:3])
```
__(B)__
In the cell below, use `c` to make a new string that says: `programming`.
```
c = "programme"
o = "ing"
print(c[:-1] + o)
```
__(C)__
In the cell below, __cast__ `d` and `e` as a different type so that:
`f` = (numerical value of `d`) + (numerical value of `e`)
using standard arithmetic.
<a href='#Casting'>Jump to Type Conversion (Casting)</a>
Print `f`.
```
d = "3.12"
e = "7.41"
f = float(d) + float(e)
print(f)
```
Use __shortcut notation__ to update the value of `f`.
The new value of f should equal the __remainder (or modulus)__ when f is divided by 3.
<a href='#Shortcuts'>Jump to Shortcuts</a>
```
# What is the remainder (modulus) when f is divided by 3
f %= 3
print(f)
```
In the cell below, change the type of the variable `f` to an integer.
<a href='#Casting'>Jump to Type Conversion (Casting)</a>
```
# f expressed as an integer
f = int(f)
print(f)
```
# Summary
- We can perform simple *arithmetic operations* in Python (+, -, $\times$, $\div$.....)
- We can *assign* values to variables.
- Expressions containing multiple operators obey precedence when executing operations.
# Summary
- *Comparison operators* (==, !=, <, >....) compare two variables.
- The outcome of a comparison is a *Boolean* (True or False) value.
- *Logical operators* (`and`, `or`) compares the outcomes of two comparison operations.
- The outcome of a logical operation is a *Boolean* (True or False) value.
- The logical `not` operator returns the inverse Boolean value of a comparison.
- Every variable has a type (`int`, `float`, `string`....).
- A type is automatically assigned when a variable is created.
- Python's `type()` function can be used to determine the type of a variable.
- The data type of a variable can be converted by casting (`int()`, `float()`....)
# Homework
1. __CLONE__ your online GitHub repository to your personal computer (if you have not done so in-class today).
<br>(Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__, Creating a Local Repository on your Personal Computer)
1. __COMPLETE__ any unfinished Review Exercises.
1. __PUSH__ the changes you make at home to your online repository
<br>(Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__, Synchronising Repositories, Pushing Changes to an Online Repository
).
# Next Seminar
If possible, please bring your personal computer to class.
We are going to complete an exercise: __pulling changes made in-class to the local repository on your personal computer.__
If you cannot bring your personal computer with you, you can practise using a laptop provided in class, but you will need to repeat the steps at home in your own time.
# Updating your Git repository
You have made several changes to your interactive textbook.
The final thing we are going to do is add these changes to your online repository so that:
- I can check your progress
- You can access the changes from outside of the university server.
> Save your work.
> <br>`git add -A`
> <br>`git commit -m "A short message describing changes"`
> <br>`git push origin master`
<br>Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__
# Updating your Git repository from home.
We will finish by learning how to:
- Create a local repository on your personal computer, containing the changes you have made today.
- Add the changes you make at home (homework: review exercises) to the online Git Repository.
In Jupyter notebook, open: __S1_Introduction_to_Version_Control__.
Navigate to section: __Creating a Local Repository on your Personal Computer.__
<font color="green">*To start working on this notebook, or any other notebook that we will use in the Moringa Data Science Course, we will need to save our own copy of it. We can do this by clicking File > Save a Copy in Drive. We will then be able to make edits to our own copy of this notebook.*</font>
# SQL Programming - Getting Started with Databases
## 1.1 Overview
Structured Query Language (SQL) is the language used to store, manipulate and retrieve data from many databases. It is the standard language for many relational database management systems, a type of database used by organisations across the world. These relational database management systems store data in tables; examples include SQLite, MySQL, PostgreSQL and Oracle.
During the Moringa Data Science Prep, we will learn about SQL since, as Data Scientists, we may be required to interact with this data through the use of SQL.
In this notebook, we will use SQL to learn how tables in databases are created. More specifically, we will learn how the structure of tables is defined, which is critical in determining the quality of data. The better the structure, the easier it becomes to clean data.
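For comparison, the same in-memory SQLite workflow can be sketched in plain Python using the standard `sqlite3` module (an alternative to the `%sql` magic used below):

```python
import sqlite3

# Connect to an in-memory database (it disappears when the connection closes)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table and fetch all of its (currently zero) records
cur.execute("CREATE TABLE IF NOT EXISTS Classmates (PersonID, LastName, FirstName)")
cur.execute("SELECT * FROM Classmates")
print(cur.fetchall())  # [] -- the new table is empty

conn.close()
```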
## 1.2 Connecting to our Database
```
# We will first load an sql extension into our environment
# This extension will allow us to work with sql on Colaboratory
#
%load_ext sql
# We will then connect to our in memory sqlite database
# NB: This database will cease to exist as soon as the database connection is closed.
# We will learn more about how databases are created later in prep.
#
%sql sqlite://
```
## 1.3 Creating a Table
```
# Example 1
# We will now define and create a table Classmates in our database (if it doesn't exist).
# This table will have fields: PersonID, LastName, FirstName, Phone and Residence as shown below.
# We will then fetch all records from the table.
#
%%sql
CREATE TABLE if not exists Classmates (
PersonID,
LastName,
FirstName,
Phone,
Residence
);
SELECT * From Classmates;
# Example 2
# In this example, we will create a table named Customers
# with the columns Id, Name, Age, Address, Salary.
# This kind of a table structure can be used by Sacco Management system.
# Then later fetch all records in the table.
#
%%sql
CREATE TABLE if not exists Customers(
Id,
Name,
Age,
Address,
Salary
);
SELECT * From Customers;
# Example 3
# In this example, we will create a Students table for a student management system.
# This will contain the following fields,
# AdmissionsNo, FirstName, MiddleName, LastName, DateOfBirth and DateOfAdmission.
# Then fetch all records from Students table.
#
%%sql
CREATE TABLE if not exists Students(
AdmissionsNo,
FirstName,
MiddleName,
LastName,
DateOfBirth,
DateOfAdmission
);
SELECT * from Students;
```
### <font color="green"> 1.3 Challenges</font>
```
# Challenge 1
# Let us create a table name PC with the following fields;
# Code, model, speed, RAM, HD, CD and Price.
# We also specify the appropriate data types to our table, then display it.
#
%%sql
CREATE TABLE if not exists PC(
Code,
model,
speed,
RAM,
HD,
CD,
Price
);
SELECT * from PC;
# Challenge 2
# Let us create a table named Printer with
# the following fields: code, model, speed, type and Price.
#
%%sql
CREATE TABLE if not exists Printer(
Code,
model,
speed,
type,
Price
);
SELECT * from Printer;
# Challenge 3
# We can now write another table called Movies with the columns
# id, title, director, year and length_minutes
#
%%sql
CREATE TABLE if not exists Movies(
id,
title,
director,
year,
length_minutes
);
SELECT * from Movies;
```
## 1.4 Specifying Column Data Types
```
# Example 1
# While defining our table, we can specify a data type for each column.
# These data types declare what kind of values a column is meant to store,
# e.g. the NationalID column in the Citizens table defined below is meant to
# take only integer values, between -2,147,483,648 and 2,147,483,647.
# If one needed to store much smaller or bigger values than that range,
# they could use a different data type, e.g. tinyint or bigint.
# The data type varchar will hold letters and numbers up to the limit
# specified in the brackets.
# (NB: SQLite, which we use here, applies type affinity rather than strict
# typing, so these declarations are not rigidly enforced.)
#
%%sql
CREATE TABLE IF NOT EXISTS Citizens (
NationalID int,
FirstName varchar(255),
MiddleName varchar(255),
PostalAddress varchar(255),
Residence varchar(255));
SELECT * from Citizens;
# Example 2
# Specifying column data types helps ensure that
# the data stored within the table is of the correct type.
# The data type date ensures that the data stored is in the format YYYY-MM-DD.
# The data type boolean supports the storage of two values: TRUE or FALSE.
# Data of a different nature than a column was specified to hold
# should not be accepted into the table.
#
%%sql
CREATE TABLE IF NOT EXISTS artists(
Artist_Id int,
Artist_Name varchar(60),
Artist_DOB date,
Posters_In_Stock boolean);
SELECT * from artists;
# Example 3
# The data type text accepts upto 2,147,483,647 characters
# The data type float accepts floating point numbers i.e 183.3 as shown
#
%%sql
CREATE TABLE IF NOT EXISTS Players (
id int,
name text,
age integer,
height float);
SELECT * from Players;
```
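As an aside, SQLite's enforcement of these declarations is looser than strict typing: it uses "type affinity", so a value of the wrong type can still be stored. A small sketch using plain `sqlite3` (rather than the `%%sql` magic) shows a non-numeric string being accepted into an `int` column:

```python
import sqlite3

# In-memory database, mirroring the %sql sqlite:// connection used above
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Citizens (NationalID int, FirstName varchar(255))")

# SQLite uses type affinity, not strict typing: a non-numeric string is
# accepted into an int column and stored as text
conn.execute("INSERT INTO Citizens VALUES ('not-a-number', 'Alice')")
row = conn.execute("SELECT NationalID, FirstName FROM Citizens").fetchone()
print(row)  # ('not-a-number', 'Alice')
```

Declared types are therefore best treated as documentation of intent in SQLite, with validation done in application code if it matters.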
### <font color="green"> 1.4 Challenges</font>
```
# Challenge 1
# Let's create a table customer with CustID of data type int, LastName
# of data type char(25), and FirstName of data type char(20)
#
%%sql
CREATE TABLE IF NOT EXISTS customer (
CustID int,
LastName char(25),
FirstName char(20));
SELECT * from customer;
# Challenge 2
# Create a table called sales that stores sales ID, customer ID, name, and address information.
# using also the appropriate data types
#
%%sql
CREATE TABLE IF NOT EXISTS sales (
SalesID int,
CustID int,
name text,
address varchar(255));
SELECT * from sales;
# Challenge 3
# Create a table called employees that stores employee number, employee name,
# department, and salary information using appropriate data types
#
%%sql
CREATE TABLE IF NOT EXISTS employees (
emp_No int,
emp_Name char(25),
Dept text,
Salary float);
SELECT * from employees;
```
## 1.5 Specifying Column Default Values
```
# Example 1
# To specify column default values, let's define a table named artists
# which contains three columns artist_id, artist_name and place_of_birth.
# - artist_id will be of the type int
# - artist_name of the type varchar(60)
# - place_of_birth varchar(60) with default 'Unknown'.
# NB: Changing the case of our data type definition will not have any effect.
#
%%sql
CREATE TABLE Artists (
artist_id INT,
artist_name VARCHAR(60),
place_of_birth VARCHAR(60) DEFAULT 'Unknown'
);
```
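A quick way to see a column default in action is to insert a row that omits the column; a sketch using plain `sqlite3` and a made-up artist, outside the `%%sql` cells above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Artists (
        artist_id INT,
        artist_name VARCHAR(60),
        place_of_birth VARCHAR(60) DEFAULT 'Unknown'
    )
""")

# Omitting place_of_birth lets the DEFAULT kick in
conn.execute("INSERT INTO Artists (artist_id, artist_name) VALUES (1, 'Miriam Makeba')")
row = conn.execute("SELECT artist_id, artist_name, place_of_birth FROM Artists").fetchone()
print(row)  # (1, 'Miriam Makeba', 'Unknown')
```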
### <font color="green"> 1.5 Challenges</font>
```
# Challenge 1
# Let's create a new table called latest_players with similar fields to
# the already created Players table but specify the default value to unknown
#
%%sql
CREATE TABLE latest_players (
id INT,
name TEXT DEFAULT 'unknown',
age INT,
height FLOAT
);
SELECT * from latest_players;
# Challenge 2
# Let's create a new table called restaurants with the fields
# - name: string
# - description: text
# - address: string, default value is unknown
# - user_id: integer
# - last_orders_at: date
# We can perform data type external research if need be
#
%%sql
CREATE TABLE restaurants (
name text,
description text,
address text DEFAULT 'unknown',
user_id int,
last_orders_at date
);
SELECT * from restaurants;
```
## 1.6 Altering SQL Tables
```
# Example 1: Adding a Column
# To add a column Gender to the Classmates table, we do the following,
# then preview the table to see the changes
#
%%sql
ALTER TABLE Classmates ADD Gender;
SELECT * FROM Classmates;
# Example 2: Deleting a Column
# To delete the Residence column from the table above we do the following,
# then fetch records from the table to confirm the changes
#
%%sql
ALTER TABLE Classmates DROP COLUMN Residence;
SELECT * FROM Classmates;
# Example 3
# We can change the name of the Classmates table to Schoolmates by doing the following,
# Then fetching the records from the table to confirm the changes
#
%%sql
ALTER TABLE Classmates RENAME TO Schoolmates;
SELECT * FROM Schoolmates;
```
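The same ALTER TABLE statements can be run through plain `sqlite3` as a quick sanity check; a sketch mirroring the cells above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Classmates (PersonID, LastName, FirstName)")

# Add a column, then rename the table, mirroring the %%sql cells above
conn.execute("ALTER TABLE Classmates ADD Gender")
conn.execute("ALTER TABLE Classmates RENAME TO Schoolmates")

cols = [c[1] for c in conn.execute("PRAGMA table_info(Schoolmates)")]
print(cols)  # ['PersonID', 'LastName', 'FirstName', 'Gender']
```

PRAGMA table_info, used in the confirmation step below, is also how the new column's presence (and type, if any) can be verified programmatically.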
### <font color="green"> 1.6 Challenges</font>
```
# Challenge 1
# We can add a column DOB with the data type DATE to the TeamMembers table by;
# Hint: The data type comes after the column name
#
%%sql
ALTER TABLE TeamMembers ADD DOB date;
SELECT * from TeamMembers;
# Confirmation
# Let's check that our column and its data type were added
%%sql
PRAGMA table_info(TeamMembers);
# Challenge 2
# Let's now add a column STUDIO with the data type TEXT to the Artists table
#
%%sql
ALTER TABLE Artists ADD STUDIO text;
SELECT * from Artists;
# Challenge 3
# We then rename the table Artists to MusicArtists
#
%%sql
ALTER TABLE Artists RENAME TO MusicArtists;
SELECT * FROM MusicArtists;
```
## 1.7 Dropping SQL Tables
### 1.71 Truncating a Table
```
# Example 1
# We have two options when thinking about deleting (dropping) database tables:
# truncating or dropping.
# Truncating deletes the data inside the table while retaining the table itself.
# NB: SQLite does not support the TRUNCATE TABLE statement found in other
# databases; an unqualified DELETE, as shown below, has the same effect.
# We will get to confirm the effect of this command later when we get
# to insert data into the table.
#
%%sql
DELETE FROM Classmates;
```
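Note that SQLite has no TRUNCATE TABLE statement; an unqualified DELETE plays that role, emptying the table while keeping it. A small `sqlite3` sketch of the truncate/drop distinction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Classmates (PersonID, LastName)")
conn.execute("INSERT INTO Classmates VALUES (1, 'Kamau')")

# "Truncating": an unqualified DELETE empties the table but keeps it around
conn.execute("DELETE FROM Classmates")
remaining = conn.execute("SELECT count(*) FROM Classmates").fetchone()[0]
print(remaining)  # 0

# Dropping removes the table definition itself
conn.execute("DROP TABLE Classmates")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # []
```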
### 1.72 Dropping a Table
```
# Example 1
# We can drop our table by using the DROP TABLE statement as shown below
#
%sql DROP TABLE Classmates;
```
### <font color="green"> 1.7 Challenges</font>
```
# Challenge 1
# Let's drop the Players table from our database
#
%sql DROP TABLE Players;
# Challenge 2
# Let's drop the Customers table from our database
#
%sql DROP TABLE Customers;
# Challenge 3
# And finally truncate and drop our Artists table from our database
#
%%sql
DELETE FROM Artists;
DROP TABLE Artists;
```
```
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.autograd import Variable
import matplotlib.pyplot as plt
import torch.nn.functional as F
raw = pd.read_csv('../dat/schools_w_clusters.csv')
raw = raw[['Cluster ID', 'Id', 'Site name', 'Address', 'Zip', 'Phone']]
raw['Zip'] = raw['Zip'].astype(str)
raw['Phone'] = raw['Phone'].astype(str)
raw.head(15)
inpt1 = record_formatter(raw.iloc[0])
inpt2 = record_formatter(raw.iloc[7])
inpt3 = record_formatter(raw.iloc[11])
otpt1, otpt2 = model.forward(inpt1, inpt2)
print(loss.forward(otpt1,otpt2,1))
otpt1, otpt3 = model.forward(inpt1, inpt3)
print(loss.forward(otpt1,otpt3,0))
otpt2, otpt3 = model.forward(inpt2, inpt3)
print(loss.forward(otpt2,otpt3,0))
print('name max len =', raw['Site name'].str.len().max())
print('address max len =', raw['Address'].str.len().max())
print('Zip max len =', raw['Zip'].str.len().max())
print('phone max len =', raw['Phone'].str.len().max())
```
for a total of max length 154
## defs
The following insanity is how we need to convert into a usable Torch tensor of the correct size and Variable...ness.
```
Variable(torch.from_numpy(np.random.rand(10)).float()).view(1,10)
def extend_to_length(string_to_expand, length):
extension = '~' * (length-len(string_to_expand))
return string_to_expand + extension
def record_formatter(record):
name = extend_to_length(record['Site name'], 95)
addr = extend_to_length(record['Address'], 43)
zipp = extend_to_length(record['Zip'], 7)
phon = extend_to_length(record['Phone'], 9)
strings = list(''.join((name, addr, zipp, phon)))
characters = np.array(list(map(ord, strings)))
return Variable(torch.from_numpy(characters).float()).view(1,len(characters))
class SiameseNetwork(nn.Module):
def __init__(self):
super(SiameseNetwork, self).__init__()
self.fc1 = nn.Sequential(
nn.Linear(154,100),
nn.ReLU(inplace=True),
nn.Linear(100, 80),
nn.Sigmoid())
def forward_once(self, x):
return self.fc1(x)
def forward(self, input1, input2):
output1 = self.forward_once(input1)
output2 = self.forward_once(input2)
return output1, output2
class ContrastiveLoss(torch.nn.Module):
def __init__(self, margin=1.0):
super(ContrastiveLoss, self).__init__()
self.margin = margin
'''
def forward(self, x0, x1, y):
# euclidian distance
diff = x0 - x1
dist_sq = torch.sum(torch.pow(diff, 2), 1)
dist = torch.sqrt(dist_sq)
mdist = self.margin - dist
dist = torch.clamp(mdist, min=0.0)
loss = y * dist_sq + (1 - y) * torch.pow(dist, 2)
loss = torch.sum(loss) / 2.0 / x0.size()[1]
return loss
'''
def forward(self, output1, output2, label):
euclidean_distance = F.pairwise_distance(output1, output2)
loss_contrastive = torch.mean((1-label) * torch.pow(euclidean_distance, 2) +
(label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))
return loss_contrastive
inpt1 = record_formatter(raw.iloc[0])
inpt2 = record_formatter(raw.iloc[7])
otpt1, otpt2 = model.forward(inpt1,inpt2)
loss.forward(otpt1,otpt2,1)
```
## data characteristics
```
raw.shape
raw['Cluster ID'].unique().shape
```
## training
```
model = SiameseNetwork()
loss = ContrastiveLoss(margin=1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.1)
%%time
diff = 10
loss_holder = []
model.train()
for epoch in range(10):
for i in range(raw.shape[0]-diff):
# build data pairs
inpt1 = record_formatter(raw.iloc[i])
inpt2 = record_formatter(raw.iloc[i+diff])
label = 1 if (raw.iloc[i]['Cluster ID'] == raw.iloc[i+diff]['Cluster ID']) else 0
# forward
otpt1, otpt2 = model.forward(inpt1, inpt2)
optimizer.zero_grad()
loss_calc = loss.forward(otpt1, otpt2, label)
        # NB: re-wrapping the loss as Variable(loss_calc.data, requires_grad=True)
        # would detach it from the graph and stop gradients reaching the model
# backprop
loss_calc.backward()
optimizer.step()
# console.log
loss_holder.append(loss_calc.data[0])
#print(label)
if i == raw.shape[0]-diff-1:
print('loss for epoch', epoch, 'is',
sum(loss_holder[-raw.shape[0]:]))
model.eval()
model.state_dict().keys()
inpt1.size()
loss_calc
model.forward(inpt1,inpt2)
plt.plot(loss_holder)
plt.show()
plt.plot(loss_holder[:raw.shape[0]])
plt.show()
model.state_dict()
```
```
!cat "../README.md"
```
# Dependencies and Setup
* Load File
* Read Purchasing File and store into Pandas DataFrame
```
import pandas as pd
file_to_load = "./Resources/purchase_data.csv"
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
```
## Player Count
* Display the total number of players
```
# Because players can purchase multiple items, it is important to count each player only once. This can be done
# by counting the number of unique elements, either with unique() or nunique().
purchase_data['SN'].isnull().any() # Check whether there are null elements in the 'SN' column
unique_players = purchase_data['SN'].nunique()
pd.DataFrame({"Total Players": [unique_players]})
```
## Purchasing Analysis (Total)
* Run basic calculations to obtain number of unique items, average price, etc.
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame
```
unique_items = purchase_data['Item ID'].unique().size
average_price = purchase_data['Price'].mean()
purchase_no = purchase_data['Purchase ID'].count()
total_revenue = purchase_data['Price'].sum()
summary_df = pd.DataFrame({"Number of Unique Items": [unique_items], "Average Price": [average_price],\
"Number of Purchases": [purchase_no], "Total Revenue": [total_revenue]})
summary_df['Average Price'] = summary_df['Average Price'].map("${:,.2f}".format)
summary_df['Total Revenue'] = summary_df['Total Revenue'].map("${:,.2f}".format)
summary_df
```
## Gender Demographics
* Percentage and Count of Male Players
* Percentage and Count of Female Players
* Percentage and Count of Other / Non-Disclosed
```
unique_grouped_df = purchase_data.groupby('Gender')['SN'].unique()
male, female, others = unique_grouped_df['Male'].size, unique_grouped_df['Female'].size, unique_grouped_df['Other / Non-Disclosed'].size
demographics_dict = {"Total Count": pd.Series([male, female, others], index = ['Male', 'Female', 'Others / Non-Disclosed']),\
"Percentage of Players": pd.Series([round(male/unique_players*100, 2), round(female/unique_players*100, 2),\
round(others/unique_players*100, 2)], index = ['Male', 'Female', 'Others / Non-Disclosed'])}
demographics_df = pd.DataFrame(demographics_dict)
demographics_df['Percentage of Players'] = demographics_df['Percentage of Players'].map("{:,.2f}%".format)
demographics_df
```
## Purchasing Analysis (Gender)
* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame
```
import numpy as np
gender_counts = np.array([female, male, others])
gender_purchase_count = purchase_data.groupby('Gender')['Purchase ID'].count()
gender_av_purchase = purchase_data.groupby('Gender')['Price'].mean()
gender_purchase_total = purchase_data.groupby('Gender')['Price'].sum()
gender_av_per_person = gender_purchase_total/gender_counts
purchase_summary_dict = {"Purchase Count": pd.Series(gender_purchase_count),\
"Average Purchase Price": pd.Series(gender_av_purchase),\
"Total Purchase Value": pd.Series(gender_purchase_total),\
"Average Total Purchase per Person": pd.Series(gender_av_per_person)}
gender_purchase_df = pd.DataFrame(purchase_summary_dict, index=['Female', 'Male', 'Other / Non-Disclosed'])
gender_purchase_df['Average Purchase Price'] = gender_purchase_df['Average Purchase Price'].map("${:,.2f}".format)
gender_purchase_df['Total Purchase Value'] = gender_purchase_df['Total Purchase Value'].map("${:,.2f}".format)
gender_purchase_df['Average Total Purchase per Person'] = gender_purchase_df['Average Total Purchase per Person'].map("${:,.2f}".format)
gender_purchase_df
```
## Age Demographics
* Establish bins for ages
* Categorize the existing players using the age bins. Hint: use pd.cut()
* Calculate the numbers and percentages by age group
* Create a summary data frame to hold the results
* Optional: round the percentage column to two decimal points
* Display Age Demographics Table
```
age_list =['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
bins = [0, 9, 14, 19, 24, 29, 34, 39, np.inf]
# First, clean the data from duplicate players (.copy() avoids a SettingWithCopyWarning
# when we add the age_group column below)
no_dup = purchase_data.drop_duplicates('SN').copy()
# Sort the data according to the age ranges
no_dup['age_group'] = pd.cut(no_dup['Age'], bins=bins, labels = age_list)
# Find total number of players of same age group
total_count = no_dup.groupby('age_group')['Age'].count() #no_dup.age_group.value_counts()
# Calculate the percentage by dividing the total number of players of a certain age group by total number of unique players
percentage_group = total_count*100/unique_players
# Create new dataframe and display
age_demog = pd.DataFrame({"Total Count":total_count, "Percentage of Players": percentage_group})
age_demog['Percentage of Players'] = age_demog['Percentage of Players'].map("{:,.2f}%".format)
age_demog
```
## Purchasing Analysis (Age)
* Bin the purchase_data data frame by age
* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame
```
# We repeat the same binning process to create new age_group in the original DataFrame. The only difference now is that
# we need duplicate data as well
purchase_data['age_group'] = pd.cut(purchase_data['Age'], bins=bins, labels=age_list)
# grouping data by age_group we just created. We will be using this function for this exercise.
age_group_data = purchase_data.groupby('age_group')
# Counting all ages that fall into each age category
purchase_count = age_group_data['Age'].count()
# Average purchase price for each age group
average_purchase_price = age_group_data['Price'].mean()
# Total money spent by each age group
total_purchase_value = age_group_data['Price'].sum()
# Average amount of money spent given the unique set of players
average_total_ppp = age_group_data['Price'].sum()/age_group_data['SN'].nunique()
purchase_df = pd.DataFrame({"Purchase Count": purchase_count, "Average Purchase Price": average_purchase_price,\
"Total Purchase Value": total_purchase_value, "Average Total Purchase per Person": average_total_ppp})
purchase_df['Average Purchase Price'] = purchase_df['Average Purchase Price'].map("${:,.2f}".format)
purchase_df['Total Purchase Value'] = purchase_df['Total Purchase Value'].map("${:,.2f}".format)
purchase_df['Average Total Purchase per Person'] = purchase_df['Average Total Purchase per Person'].map("${:,.2f}".format)
purchase_df
```
## Top Spenders
* Run basic calculations to obtain the results in the table below
* Create a summary data frame to hold the results
* Sort the total purchase value column in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the summary data frame
```
# For this exercise we need to group the original dataframe by name of the players ("SN")
sn_group_data = purchase_data.groupby('SN')
purchase_count = sn_group_data['Purchase ID'].count()
average_pur_pr = sn_group_data['Price'].mean()
total_pur_val = sn_group_data['Price'].sum()
top_spenders_df = pd.DataFrame({"Purchase Count": purchase_count, "Average Purchase Price": average_pur_pr,\
"Total Purchase Value": total_pur_val})
# It is important to sort the dataframe before formatting the values in its columns. After formatting, the values are
# no longer numeric but objects, and mathematical operations do not behave as expected on objects
top_spenders_df = top_spenders_df.sort_values(by=["Total Purchase Value"], ascending=False)
top_spenders_df['Average Purchase Price'] = top_spenders_df['Average Purchase Price'].map("${:,.2f}".format)
top_spenders_df['Total Purchase Value'] = top_spenders_df['Total Purchase Value'].map("${:,.2f}".format)
top_spenders_df.head()
```
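The point about sorting before formatting can be seen with toy numbers (hypothetical values, not the real purchase data): once mapped to `$`-strings, the column holds objects and sorts lexicographically instead of numerically:

```python
import pandas as pd

# Toy totals: after map("${:,.2f}".format) the column holds strings,
# so sorting becomes lexicographic rather than numeric
df = pd.DataFrame({"Total Purchase Value": [9.5, 12.0, 100.0]})
formatted = df["Total Purchase Value"].map("${:,.2f}".format)

print(formatted.sort_values(ascending=False).tolist())
# ['$9.50', '$12.00', '$100.00']  <- '$9...' sorts above '$1...'
print(df["Total Purchase Value"].sort_values(ascending=False).tolist())
# [100.0, 12.0, 9.5]              <- the numeric order we actually want
```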
## Most Popular Items
* Retrieve the Item ID, Item Name, and Item Price columns
* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
* Create a summary data frame to hold the results
* Sort the purchase count column in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the summary data frame
```
id_group_data = purchase_data.groupby(['Item ID', 'Item Name'])
purchase_count = id_group_data['Purchase ID'].count()
item_price = id_group_data["Price"].min()
item_price.head()
total_pur_value = id_group_data['Price'].sum()
most_popular_df = pd.DataFrame({"Purchase Count": purchase_count, "Item Price": item_price,\
"Total Purchase Value": total_pur_value})
most_popular_df = most_popular_df.sort_values(by='Purchase Count', ascending = False)
most_popular_df['Item Price'] = most_popular_df['Item Price'].map("${:,.2f}".format)
most_popular_df['Total Purchase Value'] = most_popular_df['Total Purchase Value'].map("${:,.2f}".format)
most_popular_df.head()
```
## Most Profitable Items
* Sort the above table by total purchase value in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the data frame
```
# This exercise is almost a repetition of the one above, with the exception that we will be sorting the data
# based on Total Purchase Value as opposed to Purchase Count.
most_profitable_df = pd.DataFrame({"Purchase Count": purchase_count, "Item Price": item_price,\
"Total Purchase Value": total_pur_value})
most_profitable_df = most_profitable_df.sort_values(by='Total Purchase Value', ascending = False)
most_profitable_df['Item Price'] = most_profitable_df['Item Price'].map("${:,.2f}".format)
most_profitable_df['Total Purchase Value'] = most_profitable_df['Total Purchase Value'].map("${:,.2f}".format)
most_profitable_df.head()
```
# Decision Trees
A decision tree first processes the data, using an induction algorithm to generate readable rules and a decision tree, and then uses that tree to analyse new data. In essence, it classifies data through a series of rules.
Decision trees are a classic classification method, in which:
+ each internal node represents a test on an attribute
+ each branch represents one outcome of that test
+ each leaf node represents a classification result.
The CLS algorithm is an early decision tree learning algorithm, and it forms the basic framework for many later ones.
Depending on the strategy used to select the splitting attribute, different decision tree algorithms are obtained. The most commonly used are ID3, C4.5 and CART, of which CART generally outperforms the other two and can also be used for regression tasks.
Below we will write code implementing these three decision tree algorithms.
### Task 1: Import Packages and Create the Dataset
This lab does not need many packages:
+ log is used for the calculations
+ treePlotter is pre-written code for visualising decision trees; calling createPlot(tree) is enough
+ csv is the package needed for operating on csv files
The first dataset used in this lab describes weather conditions, with attribute set A = {weather, temperature, humidity, wspeed} and two class labels, i.e. class set L = {proceed (yes), cancel (no)}.

In this lab we use nested dictionaries to represent a decision tree; for example, a tree of the form pictured
can be represented as {'weather': {0: {'wspeed': {0: 'yes', 2: 'no', 3: 'no'}}, 1: 'yes'}}
```
from math import log
import treePlotter,csv
import numpy as np
def createDataSet1():
data=[
[0, 0, 0, 0, 'yes'],
[0, 1, 0, 1, 'yes'],
[0, 2, 1, 0, 'no'],
[0, 2, 1, 1, 'no'],
[0, 1, 1, 0, 'no'],
[1, 2, 1, 0, 'yes'],
[1, 0, 0, 1, 'yes'],
[1, 1, 1, 1, 'yes'],
[1, 2, 0, 0, 'yes'],
[2, 1, 1, 0, 'yes'],
[2, 0, 0, 0, 'yes'],
[2, 1, 0, 0, 'yes'],
[2, 0, 0, 1, 'no'],
[2, 1, 1, 1, 'no']
]
features=['weather','temperature','humidity','wspeed']
return data,features
data1,features1 = createDataSet1()
features1
```
### Task 2: The ID3 Tree
ID3 uses information gain as its splitting criterion. Suppose the proportion of class $k$ samples in dataset D is $p_k$; the corresponding information entropy is: $$Ent(D)=-\sum_k p_k \log p_k$$ The smaller $Ent(D)$ is, the more ordered the dataset and the higher its purity. We first write a function that computes the Shannon entropy of a dataset.
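As a quick sanity check of the formula (separate from the lab template below), the entropy of a small label list can be computed directly; the 9-yes/5-no mix matches the weather dataset above:

```python
from math import log

def entropy(labels):
    """Shannon entropy (log base 2) of a list of class labels."""
    n = len(labels)
    ent = 0.0
    for label in set(labels):
        p = labels.count(label) / n
        ent -= p * log(p, 2)
    return ent

# 9 'yes' and 5 'no', as in the weather dataset
print(entropy(['yes'] * 9 + ['no'] * 5))  # ≈ 0.9403
```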
##### 2.1 Complete the Shannon entropy function
```
def calcShannonEnt(dataSet):
    """
    Function: compute the Shannon entropy of a dataset
    Parameters: dataSet: the dataset
    Returns: shannonEnt, the Shannon entropy of the dataset
    """
    numEntries = len(dataSet) # number of samples
    labelCounts = {} # dict counting how often each label occurs (key: label, value: count)
    shannonEnt = 0.0
    # fill in labelCounts
    for featVec in dataSet:
        # get the label of the current record
        currentLabel = featVec[-1]
        # for a new label, create the corresponding key in the dict, with its count initialised to 0
        if currentLabel not in labelCounts.keys():
            labelCounts[currentLabel] = 0
        # the current label's count goes up by 1
        labelCounts[currentLabel] += 1
    ### START CODE HERE ###
    ### END CODE HERE ###
    return shannonEnt
print(calcShannonEnt(data1))
data1[0][-1] = 'maybe' # try adding a third class option and observe how the entropy changes
print(calcShannonEnt(data1))
#out:0.9402859586706309 ; 1.2638091738835462
data1[0][-1] = 'yes' # restore
```
##### 2.2 Complete the basic helper functions
+ splitDataSet: used at each branch of the decision tree to gather all records where the feature takes a given value into one dataset
```
def splitDataSet(dataSet, axis, value):
    """
    Function: gather the records whose column axis has value value into one dataset, deleting the feature information in column axis
    Parameters: axis: index of the feature column
                value: the feature value to split off
    Returns: retDataSet: the split-off dataset
    """
    retDataSet = []
    for data in dataSet:
        # if the value in column axis equals value, keep this record and delete the feature information in column axis
        if data[axis] == value:
            # take all the features before the removed one
            reducedFeatVec = data[:axis]
            # append all the features after the removed one
            reducedFeatVec.extend(data[axis + 1:])
            # add the new reduced record to the returned dataset
            retDataSet.append(reducedFeatVec)
    return retDataSet
splitDataSet(data1,0,1)
#out:[[2, 1, 0, 'yes'], [0, 0, 1, 'yes'], [1, 1, 1, 'yes'], [2, 0, 0, 'yes']]
```
##### 2.3 Selecting the feature to split on by information gain
Suppose a discrete attribute a has V possible values; splitting on it produces V branches, and the data contained in branch v is denoted $D^v$.
From this we obtain the formula for the information gain of splitting the sample set D on attribute a:
$$Gain(D,a)=Ent(D)-\sum_v\frac{|D^v|}{|D|}Ent(D^v)$$
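A toy computation of this formula (standalone, not part of the template) for a binary feature that splits four samples into two halves:

```python
from math import log

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * log(labels.count(c) / n, 2)
                for c in set(labels))

# Toy split of D = 3 'yes' / 1 'no' by a binary feature:
# value 0 -> ['yes', 'yes'], value 1 -> ['yes', 'no']
parent = ['yes', 'yes', 'yes', 'no']
d0, d1 = ['yes', 'yes'], ['yes', 'no']
gain = entropy(parent) - (len(d0) / len(parent)) * entropy(d0) \
                       - (len(d1) / len(parent)) * entropy(d1)
print(round(gain, 4))  # 0.3113
```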
```
def chooseBestFeature_ID3(dataSet):
    """
    Function: using Shannon entropy, compute the information gain of every possible split and output the best classification feature for the current dataset
    Parameters: dataSet
    Returns: bestFeature: index of the best feature
    """
    numFeatures = len(dataSet[0]) - 1 # number of features
    baseEntropy = calcShannonEnt(dataSet) # Ent(D)
    bestInfoGain = 0.0 # information gain
    bestFeature = -1 # feature with the best information gain
    # loop over every feature
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList) # possible values of feature i
        newEntropy = 0.0
        ### START CODE HERE ###
        # compute the infoGain produced by splitting on feature i
        # if it is greater than the current bestInfoGain, keep this split as the best split
        ### END CODE HERE ###
    return bestFeature
chooseBestFeature_ID3(data1)
#out:0
```
##### 2.4 Building the ID3 decision tree
Next we can build the decision tree **recursively**; the basic flow is as follows:
+ Splitting: starting from the root node, select the best attribute to split the tree structure on, and recurse on the splits;
+ Stopping: all samples at the current node are of the same class; or the node is empty, or all samples take the same values on all attributes, so no split is possible;
This is a generic tree-creation function: depending on the chooseBestFeature argument, decision trees for different algorithms are obtained. In the current task the argument is the chooseBestFeature_ID3 we just wrote.
#### Note:
In the ID3 tree implemented by this code, a node cannot select a classification feature already used by any of its ancestor nodes.
In reality, different subtrees of a node may well select the same classification feature.
The reason is that the del (features[bestFeat]) in the code means that once a feature is selected, it can never be selected again. This can be avoided by passing a copy of features into the recursive call.
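The difference between deleting from the shared list and from a per-call copy can be sketched in isolation (illustrative helper names, not part of the lab code):

```python
def demo_del_vs_copy():
    features = ['weather', 'temperature', 'humidity', 'wspeed']

    def recurse_mutating(feats):
        del feats[0]            # mutates the caller's list, as in createTree

    def recurse_with_copy(feats):
        feats = feats[:]        # shallow copy: the caller's list is untouched
        del feats[0]

    recurse_with_copy(features)
    after_copy = list(features)      # unchanged
    recurse_mutating(features)
    after_del = list(features)       # 'weather' is gone for every later call
    return after_copy, after_del

print(demo_del_vs_copy())
```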
```
def createTree(dataSet, features, chooseBestFeature):
    """
    Function: recursively create a decision tree from a dataset and its feature names
    Parameters: chooseBestFeature: a function, called as chooseBestFeature(dataSet);
                depending on which function is passed, the index of the best feature is chosen by the ID3 or the C4.5 algorithm
    Returns: myTree: the decision tree, represented with nested dicts
    """
    classList = [data[-1] for data in dataSet] # all labels of the current dataset
    bestFeat = chooseBestFeature(dataSet) # best feature of the current dataset
    bestFeatName = features[bestFeat] # name of the best feature
    myTree = {bestFeatName: {}} # construct the current node: best feature -> set of child nodes
    bestFeatValues = set([data[bestFeat] for data in dataSet]) # possible values of the best feature, de-duplicated with set
    del (features[bestFeat]) # delete the feature name that has just been used
    ### START CODE HERE ###
    # if all labels of the current dataSet are identical, this node is fully classified: stop and return the class label
    # otherwise, recursively create a subtree for each value of the best feature
    ### END CODE HERE ###
    return myTree
data1, labels1 = createDataSet1()
ID3Tree = createTree(data1, labels1,chooseBestFeature_ID3)
treePlotter.createPlot(ID3Tree)
```
### <center> Sample Output:</center>

### Task 3: The C4.5 Tree
ID3's way of selecting attributes by information gain biases it towards attributes with many distinct values. The following example makes this concrete.
Suppose the dataset changes as shown below, so that some attribute (here, wspeed) takes a different value for every sample, and build an ID3 tree on it.
```
def createDataSet2():
data=[
[0, 0, 1, 0, 'yes'],
[1, 1, 0, 1, 'yes'],
[0, 0, 0, 2, 'no'],
[0, 1, 1, 3, 'no'],
[1, 1, 1, 4, 'yes']
]
features2=['weather','temperature','humidity','wspeed']
return data,features2
data2, features2 = createDataSet2()
ID3Tree = createTree(data2, features2, chooseBestFeature_ID3)
treePlotter.createPlot(ID3Tree)
```
### <center> Sample Output:</center>

We can observe that the ID3 tree used this attribute to create one branch per sample; a tree obtained this way will clearly generalise very poorly.
To improve on this, we can imagine adding something like a regularisation penalty to the information gain, lowering it when a feature has many values.
**gain ratio = penalty factor * information gain**
The C4.5 algorithm defines an Intrinsic Value (IV) for an attribute to build this penalty factor: $$IV(a)=-\sum_{v=1}^{V}\frac{|D^v|}{|D|}\log\frac{|D^v|}{|D|}$$
Mathematically, the penalty factor is the reciprocal of the entropy of attribute a treated as a random variable.
Suppose an attribute splits the samples into x equal parts; then its $IV=-\log(1/x)=\log x$.

Looking at the graph of this function, we see that the more parts the samples are split into (the larger x), the larger the IV.
The information gain can therefore be refined into the gain ratio $$GainRatio(D,a)=\frac{Gain(D,a)}{IV(a)}$$
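A standalone sketch of IV (not part of the template) confirms that features with more distinct values get a larger IV and hence a larger penalty:

```python
from math import log

def intrinsic_value(values):
    """IV(a): entropy of the split sizes induced by the feature's values."""
    n = len(values)
    return -sum((values.count(v) / n) * log(values.count(v) / n, 2)
                for v in set(values))

# A feature that splits 4 samples into 2 equal halves,
# vs. a feature with one distinct value per sample
print(intrinsic_value([0, 0, 1, 1]))   # 1.0
print(intrinsic_value([0, 1, 2, 3]))   # 2.0
```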
#### Task 3.1 Choose the splitting feature by gain ratio
```
def chooseBestFeature_C45(dataSet):
    """
    Function: compute the gain ratio of every possible split and output the best classification feature for the current dataset
    Parameters: dataSet
    Returns: bestFeature: index of the best feature
    """
    numFeatures = len(dataSet[0]) - 1
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        IV = 0.0
        ### START CODE HERE ###
        # compute the infoGain of splitting on feature i, as well as its IV
        # compute the GainRatio
        # if it is greater than the current best, keep this split as the best split
        ### END CODE HERE ###
    return bestFeature
```
#### Task 3.2 Build the C4.5 tree
```
data2, labels2 = createDataSet2()
C45Tree = createTree(data2, labels2, chooseBestFeature_C45)
treePlotter.createPlot(C45Tree)
```
### <center> Sample Output:</center>

We can observe that C4.5 does indeed show more preference for attributes with fewer values, which effectively avoids the ID3 problem above. C4.5's classification result can still overfit to some extent, though.
### Task 4: CART
The previous tasks showed that ID3 and C4.5 are effective for classification problems; can decision trees also be applied to regression problems?
CART (Classification And Regression Tree), as its name suggests, is a decision tree algorithm that can solve both classification and regression problems.
For classification problems:
ID3/C4.5 use the entropy model from information theory to select one discrete feature and split into as many child nodes as the feature has values in one go; the children's datasets then no longer contain this feature, which takes no further part in the classification. This means these tree models cannot directly handle features with continuous values, unless those values are discretised into intervals.
CART instead uses the **Gini index** to choose a split point for a continuous or discrete feature, producing a left and a right branch, i.e. a binary tree. After the split, the same feature can still be reused further down to produce later branches. A leaf node predicts the **most common label** among its samples.
For regression problems:
CART chooses the best splitting feature and split point by **squared loss**, and a leaf node predicts the **mean label value** of its samples.
Next we implement CART concretely and try it on a classification problem.
##### Task 4.1 Loading and preprocessing the iris dataset
The Iris dataset measures 4 features for each of its 150 samples:
+ sepal length
+ sepal width
+ petal length
+ petal width
The label is the species: Iris Setosa, Iris Versicolour or Iris Virginica. The dataset is widely used as a classification example; note that all 4 feature values are continuous. The data is stored in the file iris.csv, and we manually split off a portion of it.
```
def createDataSetIris():
    '''
    Function: load the iris dataset and preprocess it
    Returns:
        Data: dataset used to build the decision tree (shuffled, so somewhat random)
        Data_test: manually split-off test set
        features: list of feature names
        labels: list of label names
    '''
    labels = ["setosa","versicolor","virginica"]
    with open('iris.csv','r') as f:
        rawData = np.array(list(csv.reader(f)))
    features = np.array(rawData[0,1:-1])
    dataSet = np.array(rawData[1:,1:]) # drop the header row and the index column
    np.random.shuffle(dataSet) # shuffle (without the array() call this would be a reference, and rawData would be shuffled too)
    return rawData[1:,1:], dataSet, features, labels
rawData, data, features, labels = createDataSetIris()
print(rawData[0]) #['5.1' '3.5' '1.4' '0.2' 'setosa']
print(data[0])
print(features) #['Sepal.Length' 'Sepal.Width' 'Petal.Length' 'Petal.Width']
print(labels) #['setosa', 'versicolor', 'virginica']
```
##### 4.2 Complete the Gini index function
The Gini value of a dataset D is computed as follows:
$$Gini(D)=\sum_{k=1}^{K}\sum_{k'\neq k}p_k p_{k'}=1-\sum_{k=1}^{K}p_k^2$$
Its mathematical meaning: the probability that two samples drawn at random from the dataset belong to different classes. The smaller the value, the purer the dataset.
The Gini index of some split a of dataset D is computed as:
$$GiniIndex(D,a)=\sum_{v=1}^{V}\frac{|D^v|}{|D|}Gini(D^v)$$
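A quick standalone check of the first formula: for the raw iris data with three equally frequent classes, the Gini value should be $1-3\cdot(1/3)^2=2/3$:

```python
def gini(labels):
    """Gini value: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

# Three equally frequent classes, as in the raw iris data (50 of each)
g = gini(['setosa'] * 50 + ['versicolor'] * 50 + ['virginica'] * 50)
print(g)  # ≈ 0.6667, matching calcGiniIndex(rawData) below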
```
def calcGiniIndex(dataSet):
    '''
    Function: compute the Gini value of a dataset
    Parameters: dataSet: the dataset
    Returns: the Gini value
    '''
    counts = [] # number of occurrences of each label in the dataset
    count = len(dataSet) # length of the dataset
    for label in labels:
        counts.append([d[-1] == label for d in dataSet].count(True))
    ### START CODE HERE ###
    gini = None
    ### END CODE HERE ###
    return gini
calcGiniIndex(rawData)
#out:0.6666666666666667
```
##### 4.3 Complete the basic helper functions
+ binarySplitDataSet: unlike ID3 and C4.5, every CART split is binary, and the feature's information is not deleted. Here, since we already know that all feature values in the dataset are continuous, parts of the algorithm's functionality are simplified in a not entirely rigorous way. A CART used in practice should also check whether a feature's values are discrete and, if so, split the data into the records where the feature equals value and those where it does not.
+ classifyLeaf: used for classification problems; what is implemented here is a majority vote, so the leaf node outputs the most common label in its dataset as the classification. If used for a regression problem, the leaf node should instead output the mean of the dataset's target column as the regression prediction.
```
def binarySplitDataSet(dataSet, feature, value):
    '''
    Function: split the dataset into left and right sub-datasets on one value of a feature column
    Parameters: dataSet: the dataset
                feature: a feature column of the dataset
                value: some value taken by that feature column
    Returns: the left and right sub-datasets
    '''
    matLeft = [d for d in dataSet if d[feature] <= value]
    matRight = [d for d in dataSet if d[feature] > value]
    return matLeft,matRight
binarySplitDataSet(rawData,0,"4.3")[0]
#out[array(['4.3', '3', '1.1', '0.1', 'setosa'], dtype='<U12')]
def classifyLeaf(dataSet, labels):
    '''
    Function: find the most common label in the dataset, used to classify a leaf node
    Parameters: dataSet: the dataset
                labels: list of label names
    Returns: the index of that label
    '''
    counts = []
    for label in labels:
        counts.append([d[-1] == label for d in dataSet].count(True))
    return np.argmax(counts) # argmax: index at which counts takes its maximum
classifyLeaf(rawData[40:120],labels)
#out:1
```
##### 4.4 Selecting the feature and split point by Gini index
At this step CART selects not just a feature, but a feature together with one of that feature's split points. CART has to try every sample value of every feature as a split point, compute the Gini index, and find the best feature and best split among them.
Here we additionally give the decision tree a stopping condition in the form of thresholds: when a node's sample count is small enough, or the Gini gain is small enough, we stop splitting and take the node's most common class as its decision.
```
def chooseBestSplit(dataSet, labels, leafType=classifyLeaf, errType=calcGiniIndex, threshold=(0.01,4)):
    '''
    Function: choose the best splitting feature and the corresponding split point using the Gini index
    Parameters: dataSet: the dataset
                leafType: leaf output function (classification in this lab)
                errType: loss function used to judge splits (for classification this is the Gini index)
                threshold: (Gini threshold, sample threshold) - stop when a node's Gini or sample count drops below these
    Returns: bestFeatureIndex: the splitting feature
             bestFeatureValue: the best split point for that feature
    '''
    thresholdErr = threshold[0] # Gini threshold
    thresholdSamples = threshold[1] # sample-count threshold
    err = errType(dataSet)
    bestErr = np.inf
    bestFeatureIndex = 0 # index of the best feature
    bestFeatureValue = 0 # best split point of that feature
    ### START CODE HERE ###
    # when all output values in the data are identical, return a leaf node (i.e. feature=None, value=the node's class)
    # try every value of every feature, binary-split the dataset and compute err (Gini in this lab), keeping bestErr
    # check the Gini threshold; if reached, stop splitting and return a leaf node
    # check whether either sub-dataset has fewer samples than the threshold; if so, stop splitting and return a leaf node
    ### END CODE HERE ###
    return bestFeatureIndex,bestFeatureValue
chooseBestSplit(rawData, labels)
#out:(2, '1.9')
```
##### 4.5 Building the CART tree
Depending on the leafType and errType arguments, a CART classification tree or a CART regression tree is generated.
```
def createTree_CART(dataSet, labels, leafType=classifyLeaf, errType=calcGiniIndex, threshold=(0.01,4)):
    '''
    Function: build the CART tree
    Parameters: as above
    Returns: the CART tree
    '''
    feature,value = chooseBestSplit(dataSet, labels, leafType, errType, threshold)
    ### START CODE HERE ###
    # if this is a leaf node, return its decision class (chooseBestSplit returning None marks a leaf)
    # otherwise create the branches and recursively build the subtrees
    leftSet,rightSet = binarySplitDataSet(dataSet, feature, value)
    myTree = {}
    myTree[features[feature]] = {}
    myTree[features[feature]]['<=' + str(value) + ' contains' + str(len(leftSet))] = None
    myTree[features[feature]]['>' + str(value) + ' contains' + str(len(rightSet))] = None
    ### END CODE HERE ###
    return myTree
CARTTree = createTree_CART(data, labels, classifyLeaf, calcGiniIndex, (0.01,4))
treePlotter.createPlot(CARTTree)
```
### <center> Sample Output:</center>
#### Notes:
+ Because implementation details and ordering differ, the tree you generate may differ as well; it is enough that the test examples of the earlier functions pass.
+ A branch whose two child nodes share the same class arises when the sample threshold is reached before the Gini threshold; it can be avoided by changing the order of the stopping checks in the feature-selection code.

The example shows some characteristic CART traits, such as binary splits on continuous attributes and features being reusable at multiple nodes.
The folk at [Code for Berlin](https://www.codefor.de/berlin/) have created a REST API offering access to the database of Berlin street trees and have [an issue open](https://github.com/codeforberlin/tickets/issues/3) asking people to try to do "something" with it. It seemed a cool way to look more deeply into the architecture of REST APIs on both the client and server side as well as playing with an interesting dataset, given I live in Berlin and like trees.
The API itself is built using the [Django REST Framework](https://www.django-rest-framework.org/) and is hosted [here](https://github.com/codeforberlin/trees-api-v2). An [interactive map](https://trees.codefor.de/) exists which uses the API to plot all the trees and allows some simple filtering on top of tiles from Open Street Map. I took a look, and it proved a great intro to the data I wanted to analyse more deeply.
Some of the things I wanted to look into were:
* Which areas have the most trees, the oldest trees etc
* Are there any connections between the number of trees and other datapoints (air quality, socioeconomic demographics etc)
* Why are there no trees showing on my street even though I can see some out the window as I type this?
## What sort of data is there and how can it be consumed?
One of the cool things about the Django REST Framework is the way its API can be explored out of the box. Simply point your browser to the API using the following link:
https://trees.codefor.de/api/v2
You should see something like this:
```
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept
{
"trees": "https://trees.codefor.de/api/v2/trees/",
"species": "https://trees.codefor.de/api/v2/species/",
"genera": "https://trees.codefor.de/api/v2/genera/",
"boroughs": "https://trees.codefor.de/api/v2/boroughs/"
}
```
Essentially this is telling us that we have four endpoints - trees, species, genera and boroughs. You can follow the links to each one to get more details. To explore the data available, I hacked together a simple python wrapper which you can find here:
https://github.com/scrambldchannel/berlin-trees-api-pywrapper
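The wrapper aside, the API root can also be queried directly with Python's standard library (a sketch; it assumes the API is reachable and still serves JSON for the four endpoints shown above):

```python
import json
from urllib.request import Request, urlopen

API_ROOT = "https://trees.codefor.de/api/v2/"

def get_endpoints(root=API_ROOT):
    """Fetch the endpoint-name -> URL mapping exposed at the DRF API root.
    DRF serves HTML to browsers, so we explicitly ask for JSON."""
    req = Request(root, headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# endpoints = get_endpoints()  # expect keys: trees, species, genera, boroughs
```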
### Usage
The wrapper can be installed via pip:
```
pip install git+https://github.com/scrambldchannel/berlin-trees-api-pywrapper.git
```
#### Setup the wrapper
Note I am specifying version 2 in the call below.
```
# Import the module and other useful libs
import json
from berlintreesapiwrapper import TreesWrapper
# Instantiate the api wrapper object
# you can change the base url if you are running a local instance of the api
base_url = "https://trees.codefor.de/api/"
api_version = 2
api = TreesWrapper(api_root = base_url, version = api_version)
```
#### Calling functions
There is a function defined for each endpoint. At this stage, each function accepts only a couple of parameters. Each endpoint returns paginated results (the current config seems to return ten results per page) so the page number is a valid parameter for each function, defaulting to 1 if not supplied. See examples below.
#### Trees endpoint
The most versatile endpoint is the trees endpoint which returns sets of individual trees. The endpoint allows filtering in a number of different ways (see https://github.com/codeforberlin/trees-api-v2#making-queries).
My basic wrapper function doesn't support anything other than a simple dump of all trees, by page, at this stage. This was sufficient for pulling all the data, but I will look into enhancing this wrapper later; the ability to filter trees based on location is particularly interesting.
```python
# Eg. request first page of all trees
ret_trees = api.get_trees()
# Eg. request the 5000th page of all trees
ret_trees = api.get_trees(page=5000)
```
#### Other endpoints
The other endpoints just return a count of the trees by borough, species and genus. Results can be filtered by page and the name of the borough etc. See examples below.
```python
# Eg. request first page of the borough count
ret_borough = api.get_boroughs()
# Eg. request the count for a specific borough
ret_borough = api.get_boroughs(borough = "Friedrichshain-Kreuzberg")
# Eg. request the count for a specific species
ret_species = api.get_species(species = "Fagus sylvatica")
# Eg. request a specific page of the count of genera
ret_genera = api.get_genera(page = 13)
```
## Data exploration
First, I need to get the data into a format I can analyse easily.
### Look at structure for a single tree
I want to pull all the individual trees into a single dataframe. To do so, I returned to the trees endpoint. The relevant part of the json result is contained within "features" and an individual tree looks like this:
```json
{
"geometry": {
"coordinates": [
13.357809221770479,
52.56657685261005
],
"type": "Point"
},
"id": 38140,
"properties": {
"age": 80,
"borough": "Reinickendorf",
"circumference": 251,
"created": "2018-11-11T12:22:35.506000Z",
"feature_name": "s_wfs_baumbestand_an",
"genus": "ACER",
"height": 20,
"identifier": "s_wfs_baumbestand_an.7329",
"species": "Acer pseudoplatanus",
"updated": "2018-11-11T12:22:35.506000Z",
"year": 1938
},
"type": "Feature"
}
```
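Each feature can be flattened into a single flat record before building a dataframe; a sketch of that step, with field names taken from the sample above (note that GeoJSON stores coordinates as [longitude, latitude]):

```python
def flatten_tree(feature):
    """Flatten one GeoJSON-style tree feature into a flat dict.
    Coordinates come in [longitude, latitude] order per the GeoJSON spec."""
    props = feature["properties"]
    lon, lat = feature["geometry"]["coordinates"]
    return {
        "id": feature["id"],
        "age": props.get("age"),
        "borough": props.get("borough"),
        "circumference": props.get("circumference"),
        "genus": props.get("genus"),
        "height": props.get("height"),
        "species": props.get("species"),
        "year": props.get("year"),
        "Latitude": lat,
        "Longitude": lon,
    }

sample = {
    "geometry": {"coordinates": [13.357809221770479, 52.56657685261005], "type": "Point"},
    "id": 38140,
    "properties": {"age": 80, "borough": "Reinickendorf", "circumference": 251,
                   "genus": "ACER", "height": 20, "species": "Acer pseudoplatanus",
                   "year": 1938},
    "type": "Feature",
}
record = flatten_tree(sample)
print(record["borough"], record["Latitude"])  # Reinickendorf 52.56657685261005
```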
### Write script to pull all trees
Essentially I want to pull all of these trees into a single dataframe by iterating over every page of the trees endpoint. I hacked together this code to accomplish that. It also converted the result to a geodataframe based on the long/lat information returned. Note, this was really slow; it probably wasn't the best way to do it, and there are other ways of sourcing the raw data. That said, I wanted to do it as a PoC.
```python
# initialise the page counter and the lists we'll append to
import pandas as pd

page = 1
ids, age, borough, circumference = [], [], [], []
genus, height, species, year, lat, long = [], [], [], [], [], []

# loop over the pages until we reach the end and append the values we're interested in to lists
while True:
    this_page = api.get_trees(page=page).json()
    next_page = this_page["next"]
    for row in range(len(this_page['features'])):
        feature = this_page['features'][row]
        ids.append(feature['id'])
        age.append(feature['properties']['age'])
        borough.append(feature['properties']['borough'])
        circumference.append(feature['properties']['circumference'])
        genus.append(feature['properties']['genus'])
        height.append(feature['properties']['height'])
        species.append(feature['properties']['species'])
        year.append(feature['properties']['year'])
        # GeoJSON coordinates are [longitude, latitude]
        long.append(feature['geometry']['coordinates'][0])
        lat.append(feature['geometry']['coordinates'][1])
    page = page + 1
    if next_page is None:
        break

# create dataframe from resulting lists
df = pd.DataFrame(
    {'id': ids,
     'age': age,
     'borough': borough,
     'circumference': circumference,
     'genus': genus,
     'height': height,
     'species': species,
     'year': year,
     'Latitude': lat,
     'Longitude': long})
```
After running once, I saved it to a csv for future analysis. As an aside, I've recently started using the amazing [VisiData](https://visidata.org/) for this sort of analysis of data in text form but have done it here using Pandas.
### Load into Pandas dataframe
```
# Import libraries
import numpy as np
import pandas as pd
import geopandas as gpd
# load csv
dataset_path = '../datasets/'
df = pd.read_csv(filepath_or_buffer = dataset_path + 'all_trees.csv', index_col = 0, encoding='utf-8')
```
### Convert to Geopandas dataframe
Given we have lat/long for each tree, let's convert it to a Geopandas dataframe which might come in handy later.
```
gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude))
```
### Get an overview of the data
This gives an overview of the data, which is a useful starting point and helps give insight into any data quality issues there might be.
#### This is what the data looks like
```
gdf.head()
```
#### Get a row count
```
len(gdf.index)
```
#### Use the describe method on the numeric fields
```
gdf.describe(percentiles = [.20, .40, .60, .80], include = [ 'float', 'int'])
```
##### A number of things stand out at a glance:
* All columns seem to be populated for all rows except age (ie their counts match the total row count)
* That said, all of the value columns have zeros so there are some gaps in the data
* The max values for all measures are clearly spurious based on the percentiles
* There must be some duplicates in the id column, which I'd believed to be unique
* The age and the year (presuming it means the year of planting) should correspond however the percentiles don't reflect this
* The long/lat values don't seem to have any extreme outliers
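The suspicions above are quick to confirm with pandas; a sketch on a tiny stand-in frame (the real checks run against `gdf`):

```python
import pandas as pd

# tiny stand-in for gdf with a deliberate duplicate id and a zero measurement
check = pd.DataFrame({"id": [1, 2, 2], "height": [20, 0, 15]})

# duplicates in a column we'd expect to be unique
dup_count = check["id"].duplicated().sum()

# zeros standing in for missing measurements
zero_count = (check["height"] == 0).sum()
print(dup_count, zero_count)  # 1 1
```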
#### Use the describe method on the string fields
```
gdf[['borough', 'genus', 'species']].describe()
```
##### Things to note:
* Population of the borough field is complete but genus and species have some gaps
* Perhaps there is a mix of upper/lower case that might need to be normalised
### Try to address data quality
Let's try to either correct the outliers (if possible) or remove them from calculations by setting the values to NaN. For the circumference and height data, this is relatively straightforward; for the age / year numbers, it might be possible to derive one from the other.
#### Setting 0s to NaN
Doing this should remove the 0s from the calculations while retaining any information that is available for that tree.
```
gdf['age'].replace(0, np.nan, inplace=True)
gdf['circumference'].replace(0, np.nan, inplace=True)
gdf['height'].replace(0, np.nan, inplace=True)
gdf['year'].replace(0, np.nan, inplace=True)
```
#### Deriving age from year and vice versa
Let's check the assumption the age and year are connected, that is:
```
age = 2018 - year
```
Let's try to check that assumption; it's perhaps a bit of a hack, but it does the trick. **There must be a better way to do this**
```
total = 0
for i in range(0,2020):
count = gdf[abs(gdf.age) == (i - gdf.year)]['id'].count()
if count != 0:
print(i, count)
total = total + count
print(total)
```
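A more direct, vectorised way to test the same assumption (sketched on a toy frame rather than on `gdf`):

```python
import pandas as pd

# toy stand-in: one consistent row, one zero-year row, one off-by-one row
toy = pd.DataFrame({"age": [80, 0, 30], "year": [1938, 0, 1989]})

# age implied by the planting year, masking rows where year is 0
implied = 2018 - toy["year"]
diff = (toy["age"] - implied).where(toy["year"] != 0)
print(diff.tolist())  # [0.0, nan, 1.0]
```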
So there's a bit of variation, but essentially either the year is set to 0 or the age is roughly equal to 2018 minus the year. Let's just set the age to 2018 - year.
```
# fill in missing ages from the planting year where one is available
gdf.loc[gdf['age'].isnull() & gdf['year'].notnull(), 'age'] = 2018 - gdf['year']
# Get oldest tree(s)
gdf[gdf['age'] == gdf['age'].max()]
# This seems to show that anything with a year has a sensible age
gdf.loc[(gdf['age'] == 0) & (gdf['year'] >= 1) & (gdf['year'] < 2018)]
# but there are a lot of missing ages that have years
gdf.loc[(gdf['age'].isnull()) & (gdf['year'] >= 1) & (gdf['year'] < 2018)]
# What about circumference outliers?
gdf.loc[(gdf['circumference'] >= 500) & (gdf['circumference'] <= 13000)]
# this should give the oldest tree in each borough
gdf.sort_values('age').drop_duplicates(['borough'], keep='last')
# this will give you the tree with the highest circumference for each borough
# more columns can be added to the list passed to drop_duplicates to effectively group by more columns
gdf.sort_values('circumference').drop_duplicates(['borough'], keep='last').head()
```
| github_jupyter |
```
# Importing the Necessary Libraries
from tensorflow.keras.layers import Input, Dense, Flatten, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg19 import VGG19
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from google.colab import drive
from tensorflow.keras.layers import concatenate
from tensorflow.keras import optimizers
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
import seaborn as sns
from tensorflow.keras.callbacks import ModelCheckpoint
from datetime import datetime
# Connecting to Google Drive
drive.mount('/content/drive')
# Defining the Training and Test Path
train_path = '/content/drive/MyDrive/Dataset/train'
test_path = '/content/drive/MyDrive/Dataset/test'
# Checking the Number of Folders/Classes (Normal and Pneumonia)
folders = glob('/content/drive/MyDrive/Dataset/train/*')
print(len(folders))
# The Settings for Generating the Training Set
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# The Settings for Generating the Test Set
test_datagen = ImageDataGenerator(
rescale=1./255)
# Generating the Training Set
train_set = train_datagen.flow_from_directory(train_path,
target_size = (224, 224),
batch_size=32,
class_mode = 'categorical')
# Generating the Test Set
test_set = test_datagen.flow_from_directory(test_path,
target_size = (224, 224),
shuffle=False,
batch_size=32,
class_mode = 'categorical')
# Defining the Input Shape
input_tensor=Input(shape=(224,224,3))
# Importing Model 1 (VGG16)
base_model1 = VGG16(input_tensor=input_tensor, weights='imagenet', include_top=False)
# Extracting the Features
features1 = base_model1.output
for layer in base_model1.layers:
layer.trainable = True  # All layers are trainable
for layer in base_model1.layers:
layer._name = layer._name + str('_C') # Because the names of some layers are the same in
# both networks, a letter is assigned to prevent error
# Importing Model 2 (VGG19)
base_model2 = VGG19(input_tensor=input_tensor, weights='imagenet', include_top=False)
# Extracting the Features
features2 = base_model2.output
for layer in base_model2.layers:
layer.trainable = True  # All layers are trainable
for layer in base_model2.layers:
layer._name = layer._name + str('_D') # Because the names of some layers are the same in
# both networks, a letter is assigned to prevent error
# Concatenating the Features
concatenated=concatenate([features1,features2])
# Creating FC Layers
x = Flatten(name='flatten')(concatenated)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dropout(0.5)(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dropout(0.5)(x)
x = Dense(len(folders), activation='softmax', name='predictions')(x)
# Creating the Final Hybrid CNN Model
Concatenated_model = Model(inputs=input_tensor, outputs=x)
# Showing the Architecture of the Hybrid Model
from tensorflow.keras.utils import plot_model
plot_model(Concatenated_model, show_shapes=True)
# Checking the Model Components
Concatenated_model.summary()
# Setting the Weight Optimizer and Loss Function
sgd = optimizers.SGD()
# categorical (one-hot) labels with a softmax output call for categorical cross-entropy
Concatenated_model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
# Saving the Weight Parameters, When Achieving a Higher Test Accuracy
checkpoint = ModelCheckpoint(filepath='/content/drive/MyDrive/ChestVGG_SGD.h5',
monitor='val_accuracy', verbose=1, save_best_only=True)
callbacks = [checkpoint]
start = datetime.now()
# Training the Hybrid Model
Concatenated_model_history=Concatenated_model.fit(
train_set,
validation_data=test_set,
epochs=20,
callbacks=callbacks ,verbose=1)
duration = datetime.now() - start
print("Training time: ", duration)
# Loading the Saved Model
network = load_model('/content/drive/MyDrive/ChestVGG_SGD.h5')
# Creating Evaluation Set and Evaluating the Model
test_set_evaluation = test_datagen.flow_from_directory(test_path,
target_size = (224, 224),
batch_size=1,
shuffle=False,
class_mode = 'categorical')
network.evaluate(test_set_evaluation, steps=1170)
# Making Predictions
predictions=network.predict(test_set_evaluation, steps=1170, verbose=1)
preds=np.argmax(predictions, axis=1)
# Creating the Confusion Matrix
cf_matrix=confusion_matrix(test_set_evaluation.classes, preds)
ax=plt.subplot()
sns.heatmap(cf_matrix, cmap='Blues', annot=True, linewidths=1, fmt = 'd', ax=ax)
ax.set_xlabel('Predicted Class');ax.set_ylabel('True Class')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Normal', 'Pneumonia']); ax.yaxis.set_ticklabels(['Normal', 'Pneumonia'])
accuracy_score(test_set_evaluation.classes, preds)
```
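`classification_report` was imported above but never called; per-class precision and recall can be computed from the same true labels and predictions. A sketch on toy labels so it stands alone (with the real data you would pass `test_set_evaluation.classes` and `preds`):

```python
from sklearn.metrics import classification_report

# toy stand-ins for test_set_evaluation.classes and preds
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 1]
print(classification_report(y_true, y_pred, target_names=["Normal", "Pneumonia"]))
```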
| github_jupyter |
# Feature Engineering with Open-Source
In this notebook, we will reproduce the feature engineering pipeline from notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, wherever possible, the manually created functions with open-source classes, and hopefully understand the value they bring.
# Reproducibility: Setting the seed
With the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.
```
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
import preprocessors as pp
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise all the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
df = pd.DataFrame([('bird', 2, 2),
('mammal', 4, np.nan),
('arthropod', 8, 0),
('bird', 2, np.nan)],
index=('falcon', 'horse', 'spider', 'ostrich'),
columns=('species', 'legs', 'wings'))
df[['species', 'legs', 'wings']].mode()
# load dataset
data = pd.read_csv('../section-04-research-and-development/train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
```
# Separate dataset into train and test
It is important to separate our data into training and testing sets.
When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.
Our feature engineering techniques will learn:
- mean
- mode
- exponents for the yeo-johnson
- category frequency
- and category to number mappings
from the train set.
**Separating the data into train and test involves randomness, therefore, we need to set the seed.**
```
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
```
# Feature Engineering
In the following cells, we will engineer the variables of the House Price Dataset so that we tackle:
1. Missing values
2. Temporal variables
3. Non-Gaussian distributed variables
4. Categorical variables: remove rare labels
5. Categorical variables: convert strings to numbers
6. Standardize the values of the variables to the same range
## Target
We apply the logarithm
```
y_train = np.log(y_train)
y_test = np.log(y_test)
```
## Missing values
### Categorical variables
We will replace missing values with the string "missing" in those variables with a lot of missing data.
Alternatively, we will replace missing data with the most frequent category in those variables that contain fewer observations without values.
This is common practice.
```
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add this values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
```
### Numerical variables
To engineer missing values in numerical variables, we will:
- add a binary missing indicator variable
- and then replace the missing values in the original variable with the mean
```
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
```
## Temporal variables
### Capture elapsed time
There are two classes in Feature-engine that allow us to perform the two transformations below:
- [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time
- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted features
We will do the first one manually, so we take the opportunity to create a class ourselves for the course. For the second operation, we will use the DropFeatures class.
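Since that class is one we write ourselves, here is a hedged sketch of what `pp.SubtractTransformer` might look like (the actual in-house version built later in the course may differ):

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class SubtractTransformer(BaseEstimator, TransformerMixin):
    """Replace each variable with reference_variable - variable (elapsed time)."""

    def __init__(self, target_variable, variables):
        self.target_variable = target_variable
        self.variables = variables

    def fit(self, X, y=None):
        # nothing to learn; present for pipeline compatibility
        return self

    def transform(self, X):
        X = X.copy()  # avoid mutating the caller's frame
        for var in self.variables:
            X[var] = X[self.target_variable] - X[var]
        return X

# toy usage
toy = pd.DataFrame({"YrSold": [2008, 2010], "YearBuilt": [2000, 1995]})
out = SubtractTransformer("YrSold", ["YearBuilt"]).fit_transform(toy)
print(out["YearBuilt"].tolist())  # [8, 15]
```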
```
# the manual function from the previous notebook, shown here for reference;
# below we use the in-house SubtractTransformer class instead
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
vars_with_temp = ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']
subtract_transformer = pp.SubtractTransformer(target_variable='YrSold', variables=vars_with_temp)
subtract_transformer.fit(X_train)
X_train = subtract_transformer.transform(X_train)
X_test = subtract_transformer.transform(X_test)
X_train[vars_with_temp].head()
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
```
## Numerical variable transformation
### Logarithmic transformation
In the previous notebook, we observed that the numerical variables are not normally distributed.
We will apply the logarithm to the positive numerical variables in order to get a more Gaussian-like distribution.
```
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
```
### Yeo-Johnson transformation
We will apply the Yeo-Johnson transformation to LotArea.
```
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
```
### Binarize skewed variables
There were a few variables that were very skewed; we will transform those into binary variables.
We can perform the below transformation with open source. We can use the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine to be able to apply the transformation only to a subset of features.
Here, we use exactly that combination: the Binarizer wrapped in the SklearnTransformerWrapper, so the transformation is applied only to the skewed subset of features.
```
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
```
## Categorical variables
### Apply mappings
These are variables whose values have an assigned order, related to quality. For more information, check the Kaggle website.
```
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
qual_mapper = pp.Mapper(qual_vars, qual_mappings)
qual_mapper.fit(X_train)
X_train=qual_mapper.transform(X_train)
X_test=qual_mapper.transform(X_test)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = ['BsmtExposure']
exposure_mapper = pp.Mapper(var,exposure_mappings)
exposure_mapper.fit(X_train)
X_train=exposure_mapper.transform(X_train)
X_test=exposure_mapper.transform(X_test)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
finish_mapper = pp.Mapper(finish_vars, finish_mappings)
finish_mapper.fit(X_train)
X_train=finish_mapper.transform(X_train)
X_test=finish_mapper.transform(X_test)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = ['GarageFinish']
garage_mapper = pp.Mapper(var, garage_mappings)
garage_mapper.fit(X_train)
X_train=garage_mapper.transform(X_train)
X_test=garage_mapper.transform(X_test)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = ['Fence']
fence_mapper = pp.Mapper(var, fence_mappings)
fence_mapper.fit(X_train)
X_train=fence_mapper.transform(X_train)
X_test=fence_mapper.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
```
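`pp.Mapper` is another of the in-house classes; a hedged sketch of a pipeline-compatible version (the course package may implement it differently):

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class Mapper(BaseEstimator, TransformerMixin):
    """Apply a fixed category -> integer mapping to a list of variables."""

    def __init__(self, variables, mappings):
        self.variables = variables
        self.mappings = mappings

    def fit(self, X, y=None):
        return self  # the mapping is fixed, nothing to learn

    def transform(self, X):
        X = X.copy()
        for var in self.variables:
            X[var] = X[var].map(self.mappings)
        return X

# toy usage with the quality mapping used above
toy = pd.DataFrame({"ExterQual": ["Gd", "TA", "Ex"]})
mapped = Mapper(["ExterQual"], {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}).fit_transform(toy)
print(mapped["ExterQual"].tolist())  # [4, 3, 5]
```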
### Removing Rare Labels
For the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses will be replaced by the string "Rare".
To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
```
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = pp.RareLabelCategoricalEncoder(tol=0.01, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
```
### Encoding of categorical variables
Next, we need to transform the strings of the categorical variables into numbers.
We will do it so that we capture the monotonic relationship between the label and the target.
To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
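The 'ordered' encoding can be illustrated on a toy example: categories are ranked by the mean of the target within each category, and the rank becomes the integer. This is a sketch of the idea, not Feature-engine's exact internals:

```python
import pandas as pd

toy = pd.DataFrame({"Neighborhood": ["A", "B", "A", "C", "B", "C"]})
target = pd.Series([100, 300, 120, 200, 280, 220])

# rank categories by mean target; the lowest mean gets 0
ordered = target.groupby(toy["Neighborhood"]).mean().sort_values().index
mapping = {cat: i for i, cat in enumerate(ordered)}
print(mapping)  # {'A': 0, 'C': 1, 'B': 2}
```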
```
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
# function plots median house sale price per encoded
# category
tmp = pd.concat([X_train, np.log(y_train)], axis=1)
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
```
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the median house sale price.
(Remember that the target is log-transformed; that is why the differences seem so small.)
## Feature Scaling
For use in linear models, features need to be scaled. We will scale the features between 0 and 1, using their minimum and maximum values:
```
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
```
# Conclusion
We now have several classes with parameters learned from the training dataset, that we can store and retrieve at a later stage, so that when a colleague comes with new data, we are in a better position to score it faster.
Still:
- we would need to save each class
- then we could load each class
- and apply each transformation individually.
Which sounds like a lot of work.
The good news is, we can reduce the amount of work, if we set up all the transformations within a pipeline.
**IMPORTANT**
In order to set up the entire feature transformation within a pipeline, we still need to create a class that can be used within a pipeline to map the categorical variables with the arbitrary mappings, and also, to capture elapsed time between the temporal variables.
We will take that opportunity to create an in-house package.
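As a rough illustration of that idea (the `CategoricalMapper` class name and the mappings below are hypothetical, not the actual in-house package), the transformations can be chained in a single scikit-learn `Pipeline`:

```python
# Rough sketch: a custom transformer for the arbitrary category mappings,
# chained with scaling inside one sklearn Pipeline. The class name, mappings
# and sample data are hypothetical.
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

class CategoricalMapper(BaseEstimator, TransformerMixin):
    def __init__(self, mappings):
        self.mappings = mappings          # {column: {category: integer}}
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        X = X.copy()
        for col, mapping in self.mappings.items():
            X[col] = X[col].map(mapping)
        return X

pipe = Pipeline([
    ('mapper', CategoricalMapper({'MSZoning': {'RL': 0, 'RM': 1, 'FV': 2}})),
    ('scaler', MinMaxScaler()),
])

X = pd.DataFrame({'MSZoning': ['RL', 'FV', 'RM'], 'LotArea': [8450, 9600, 11250]})
print(pipe.fit_transform(X))
```

A pipeline like this can then be persisted as a single object, which is what makes scoring new data straightforward.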
```
"""
This tutorial shows how to generate adversarial examples
using FGSM in a black-box setting.
The original paper can be found at:
https://arxiv.org/abs/1602.02697
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import copy
import logging

import numpy
import pandas
import tensorflow as tf
from six.moves import xrange
from tensorflow.python.platform import flags

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.metrics import euclidean_distances
from scipy.spatial.distance import euclidean

from cleverhans.utils_mnist import data_mnist
from cleverhans.utils import to_categorical
from cleverhans.utils import set_log_level
from cleverhans.utils import TemporaryLogLevel
from cleverhans.utils_tf import model_train, model_eval, batch_eval
from cleverhans.attacks import FastGradientMethod
from cleverhans.attacks_tf import jacobian_graph, jacobian_augmentation
from cleverhans_tutorials.tutorial_models import make_basic_cnn, MLP
from cleverhans_tutorials.tutorial_models import Flatten, Linear, ReLU, Softmax

from lad import lad_Thibault as lad

FLAGS = flags.FLAGS
```
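Before diving into the helper functions, here is the core FGSM step in isolation, written by hand for a toy logistic model (an illustrative sketch only; it is independent of the cleverhans code used below, and the weights and `eps` are made-up values):

```python
# Sketch of the FGSM perturbation: move each input coordinate by eps in the
# direction of the sign of the loss gradient. Toy values for illustration.
import numpy as np

def fgsm_step(x, w, b, y, eps):
    """One FGSM perturbation for logistic loss L = -log p(y|x)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # dL/dx for logistic loss
    return x + eps * np.sign(grad_x)  # step that increases the loss

x = np.array([0.5, -1.0])
w = np.array([2.0, -3.0])
x_adv = fgsm_step(x, w, b=0.1, y=1, eps=0.25)
# each coordinate moves by exactly eps
print(np.abs(x_adv - x))   # [0.25 0.25]
```

In the black-box setting below, the gradient of the true model is unavailable, so the attack computes this step on a trained substitute instead.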
# Functions
## Data
```
'''
MOONS
'''
def get_moon():
X, y = make_moons(noise=0.3, random_state=1, n_samples=10000)
y2 = numpy.zeros((X.shape[0],2))
for k in range(len(y)):
y2[k][y[k]] = 1
return X, y2
def get_german():
path_dataset='data/germancredit.csv'
X = pandas.read_csv(path_dataset, delimiter=",", index_col=0)
y = X.label
y = y - 1
X = X.iloc[:,X.columns != 'label']
X = (X-X.mean())/X.std()
    y2 = numpy.zeros((X.shape[0], 2))  # 2 = number of classes
for k in range(len(y)):
y2[k][y[k]] = 1
return numpy.array(X), numpy.array(y2)
DATASETS_ = {'moons':get_moon,
'german': get_german}
```
## Training a black-box
```
'''
PAPERNOT BB
'''
def Papernot_bbox(sess, x, y, X_train, Y_train, X_test, Y_test,
nb_epochs, batch_size, learning_rate,
rng):
"""
Define and train a model that simulates the "remote"
black-box oracle described in the original paper.
:param sess: the TF session
:param x: the input placeholder for MNIST
    :param y: the output placeholder for MNIST
:param X_train: the training data for the oracle
:param Y_train: the training labels for the oracle
:param X_test: the testing data for the oracle
:param Y_test: the testing labels for the oracle
:param nb_epochs: number of epochs to train model
:param batch_size: size of training batches
:param learning_rate: learning rate for training
:param rng: numpy.random.RandomState
:return:
"""
# Define TF model graph (for the black-box model)
model = make_basic_cnn()
predictions = model(x)
print("Defined TensorFlow model graph.")
# Train an MNIST model
train_params = {
'nb_epochs': nb_epochs,
'batch_size': batch_size,
'learning_rate': learning_rate
}
model_train(sess, x, y, predictions, X_train, Y_train,
args=train_params, rng=rng)
# Print out the accuracy on legitimate data
eval_params = {'batch_size': batch_size}
accuracy = model_eval(sess, x, y, predictions, X_test, Y_test,
args=eval_params)
print('Test accuracy of black-box on legitimate test '
'examples: ' + str(accuracy))
return model, predictions, accuracy
def RF_bbox(X_train, Y_train, X_test, Y_test):
    # Define RF model (for the black-box model)
    model = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_train, Y_train)
    # Print out the accuracy on legitimate data
    #predictions = model.predict_proba(X_test)[1]  # TEST: turn predictions into a function
    predictions = lambda x: model.predict_proba(x)[1]  # predict_proba is required here, otherwise Papernot's code (argmax and friends) would need changes
    accuracy = accuracy_score(Y_test, model.predict(X_test))
    #roc_auc = roc_auc_score(Y_test, predictions[1][:,1])
    print('Test accuracy of black-box on legitimate test '
          'examples: ' + str(accuracy))
    #print('Test ROC AUC of black-box on legitimate test examples: ' + str(roc_auc))
    return model, predictions, accuracy

BB_MODELS_ = {'dnn': Papernot_bbox,
              'rf': RF_bbox}
# do not use 'dnn': it does not work at the moment
```
## Papernot Surrogate
```
def setup_tutorial():
"""
Helper function to check correct configuration of tf for tutorial
:return: True if setup checks completed
"""
# Set TF random seed to improve reproducibility
tf.set_random_seed(1234)
return True
def substitute_model(img_rows=1, img_cols=2, nb_classes=2):
"""
Defines the model architecture to be used by the substitute. Use
the example model interface.
:param img_rows: number of rows in input
:param img_cols: number of columns in input
:param nb_classes: number of classes in output
:return: tensorflow model
"""
    input_shape = (None, img_rows, img_cols, 1)  # keep the original format (not fully understood) and only change the values
# Define a fully connected model (it's different than the black-box)
'''layers2 = [Flatten(),
Linear(200),
ReLU(),
Linear(200),
ReLU(),
Linear(nb_classes),
Softmax()]'''
    layers1 = [Flatten(), Linear(nb_classes), Softmax()]  # simplified surrogate
return MLP(layers1, input_shape)
def train_sub(sess, x, y, bb_model, X_sub, Y_sub, nb_classes,
nb_epochs_s, batch_size, learning_rate, data_aug, lmbda,
rng):
"""
This function creates the substitute by alternatively
augmenting the training data and training the substitute.
:param sess: TF session
:param x: input TF placeholder
:param y: output TF placeholder
    :param bb_model: the black-box model used to label the synthetic points
:param X_sub: initial substitute training data
:param Y_sub: initial substitute training labels
:param nb_classes: number of output classes
:param nb_epochs_s: number of epochs to train substitute model
:param batch_size: size of training batches
:param learning_rate: learning rate for training
:param data_aug: number of times substitute training data is augmented
:param lmbda: lambda from arxiv.org/abs/1602.02697
:param rng: numpy.random.RandomState instance
:return:
"""
# Define TF model graph (for the black-box model)
model_sub = substitute_model(img_cols=X_sub.shape[1])
preds_sub = model_sub(x)
print("Defined TensorFlow model graph for the substitute.")
# Define the Jacobian symbolically using TensorFlow
grads = jacobian_graph(preds_sub, x, nb_classes)
# Train the substitute and augment dataset alternatively
for rho in xrange(data_aug):
print("Substitute training epoch #" + str(rho))
train_params = {
'nb_epochs': nb_epochs_s,
'batch_size': batch_size,
'learning_rate': learning_rate
}
with TemporaryLogLevel(logging.WARNING, "cleverhans.utils.tf"):
model_train(sess, x, y, preds_sub, X_sub,
to_categorical(Y_sub, nb_classes),
init_all=False, args=train_params, rng=rng)
# If we are not at last substitute training iteration, augment dataset
if rho < data_aug - 1:
print("Augmenting substitute training data.")
# Perform the Jacobian augmentation
lmbda_coef = 2 * int(int(rho / 3) != 0) - 1
X_sub = jacobian_augmentation(sess, x, X_sub, Y_sub, grads,
lmbda_coef * lmbda)
print("Labeling substitute training data.")
# Label the newly generated synthetic points using the black-box
Y_sub = numpy.hstack([Y_sub, Y_sub])
            X_sub_prev = X_sub[int(len(X_sub)/2):]  # the dataset was doubled, so "prev" = the new half
eval_params = {'batch_size': batch_size}
            #bbox_preds = tf.convert_to_tensor(bbox_preds, dtype=tf.float32)  # TEST: turn predictions into a function
            #bbox_val = batch_eval2(sess, [x], [bbox_preds], [X_sub_prev], args=eval_params)[0]  # TEST: turn predictions into a function
            #bbox_val = bbox_preds(X_sub_prev)  # normally batch_eval just outputs the preds...?
bbox_val = bb_model.predict(X_sub_prev)
# Note here that we take the argmax because the adversary
# only has access to the label (not the probabilities) output
# by the black-box model
Y_sub[int(len(X_sub)/2):] = numpy.argmax(bbox_val, axis=1)
return model_sub, preds_sub
```
Usage:

    print("Training the substitute model.")
    train_sub_out = train_sub(sess, x, y, bbox_preds, X_sub, Y_sub,
                              nb_classes, nb_epochs_s, batch_size,
                              learning_rate, data_aug, lmbda, rng=rng)
    model_sub, preds_sub = train_sub_out

# Our surrogate
# Local Fidelity
```
def get_random_points_hypersphere(x_center, radius_, n_points_):
res = []
while len(res) < n_points_:
n_points_left_ = n_points_ - len(res)
# About half the points are lost in the test hypercube => hypersphere
lbound = numpy.repeat([x_center.values-(radius_/2.)], n_points_left_*2, axis=0)
hbound = numpy.repeat([x_center.values+(radius_/2.)], n_points_left_*2, axis=0)
points = numpy.random.uniform(low=lbound, high=hbound)
# Check if x_generated is within hypersphere (if kind=='hypersphere')
for x_generated in points:
if euclidean(x_generated, x_center.values) < radius_:
res.append(x_generated)
if len(res) == n_points_:
break
return pandas.DataFrame(numpy.array(res))
def generate_inside_ball(center, segment=(0,1), n=1):  # TODO: verify the algorithm; understand the 1/d root and the relation between segment and radius
d = center.shape[0]
z = numpy.random.normal(0, 1, (n, d))
z = numpy.array([a * b / c for a, b, c in zip(z, numpy.random.uniform(*segment, n), norm(z))])
z = z + center
return z
def norm(v):
return numpy.linalg.norm(v, ord=2, axis=1) #array of l2 norms of vectors in v
```
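As the TODO above hints, drawing the radius uniformly concentrates points near the center; for a uniform density inside a d-dimensional ball the radius must be scaled by u^(1/d). A minimal sketch of that variant (an illustration, not the code used below):

```python
# Sketch: uniform sampling inside a d-dimensional ball of given radius.
# Directions come from normalized Gaussians; radii use u**(1/d) so the
# density is uniform in volume (unlike a uniformly drawn radius).
import numpy as np

def sample_uniform_ball(center, radius, n):
    d = center.shape[0]
    directions = np.random.normal(size=(n, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * np.random.uniform(size=(n, 1)) ** (1.0 / d)
    return center + directions * radii

pts = sample_uniform_ball(np.zeros(3), radius=2.0, n=1000)
# every sampled point lies inside the ball
print(np.linalg.norm(pts, axis=1).max() <= 2.0)   # True
```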
# Framework
```
def main_fidelity(radius):
accuracies = {}
fidelities = {}
# Seed random number generator so tutorial is reproducible
rng = numpy.random.RandomState([2017, 8, 30])
# Thibault: Tensorflow stuff
set_log_level(logging.DEBUG)
assert setup_tutorial()
sess = tf.Session()
# Data
X, Y = DATASETS_['german']()
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20)
X_sub = X_test[:holdout]
Y_sub = numpy.argmax(Y_test[:holdout], axis=1)
## Redefine test set as remaining samples unavailable to adversaries
    ### N.B. Thibault: this is for the Papernot substitute
X_test = X_test[holdout:]
Y_test = Y_test[holdout:]
print("Training black box on",X_train.shape[0], "examples")
print('Testing black box and substitute on', X_test.shape[0],' examples')
print("Using ", holdout, " examples to start PP substitute")
## Define input and output TF placeholders
    ### N.B. Thibault: TensorFlow leftovers, used for the Papernot substitute...
x = tf.placeholder(tf.float32, shape=(None, 20))
y = tf.placeholder(tf.float32, shape=(None, 2))
# Instance to explain
x_toexplain = pandas.Series(X_test[0]).copy()
support_x_ = numpy.array(get_random_points_hypersphere(x_toexplain, radius_=radius, n_points_=1000))
# Simulate the black-box model
print("Preparing the black-box model.")
prep_bbox_out = BB_MODELS_['rf'](X_train, Y_train, X_test, Y_test)
    bb_model, bbox_preds, accuracies['bbox'] = prep_bbox_out  # bbox_preds is the predict function
# Train PAPERNOT substitute
    print("Training the Papernot substitute model.")
train_sub_pap = train_sub(sess, x, y, bb_model, X_sub, Y_sub,
nb_classes, nb_epochs_s, batch_size,
learning_rate, data_aug, lmbda, rng=rng)
model_sub, preds_sub = train_sub_pap
#feed_dict = {x:support_x_, y:bbox_preds(support_x_)}
eval_params = {'batch_size': batch_size}
pap_acc = model_eval(sess, x, y, preds_sub, X_test, Y_test, args=eval_params)
pap_fid = model_eval(sess, x, y, preds_sub, support_x_, bb_model.predict(support_x_) , args=eval_params)
accuracies['papernot'] = pap_acc
fidelities['papernot'] = pap_fid
# Train OUR subtitute
print("Training Local Surrogate substitute model.")
pred = bb_model.predict
bb_model.predict = lambda x: pred(x)[:,1]
_, train_sub_ls = lad.LocalSurrogate(pandas.DataFrame(X), blackbox=bb_model, n_support_points=100, max_depth=3).get_local_surrogate(x_toexplain)
#ls_acc = accuracy_score(train_sub_ls.predict(X_test), Y_test)
ls_fid = accuracy_score(train_sub_ls.predict(support_x_), bb_model.predict(support_x_))
#accuracies['localsurrogate'] = ls_acc
fidelities['localsurrogate'] = ls_fid
    # Initialize the Fast Gradient Sign Method (FGSM) attack object
    fgsm_par = {'eps': 0.5, 'ord': numpy.inf, 'clip_min': 0., 'clip_max': 1.}  # ord: L1, L2 or L-infinity norm
fgsm = FastGradientMethod(model_sub, sess=sess)
# Craft adversarial examples using the substitute
eval_params = {'batch_size': batch_size}
x_adv_sub = fgsm.generate(x, **fgsm_par)
# Evaluate the accuracy of the "black-box" model on adversarial examples
accuracy = accuracy_score(Y_test, bb_model.predict(sess.run(x_adv_sub, feed_dict={x: X_test})))
#model_eval(sess, x, y, bb_model.predict(x_adv_sub), X_test, Y_test,
# args=eval_params)
print('Test accuracy of oracle on adversarial examples generated '
'using the substitute: ' + str(accuracy))
accuracies['bbox_on_sub_adv_ex'] = accuracy
return fidelities, accuracies
nb_classes=2 #
batch_size=20 #
learning_rate=0.001 #
nb_epochs=0 # number of black-box training iterations (does not matter here)
holdout=50 # number of examples used at the start to generate data (Papernot substitute)
data_aug=6 # number of dataset-augmentation iterations {IMPORTANT for the Papernot substitute}
nb_epochs_s=10 # number of iterations to train the substitute
lmbda=0.1 # exploration parameter for the data augmentation
radius_ = 0.5 # NEW
main_fidelity(radius_)
```
We need to find a way to write the loop:

    for each radius:
        generate the black box
        generate the Papernot surrogate
        for each observation in the test set:
            generate the local surrogate
            evaluate Papernot locally
            evaluate the local surrogate locally
    outputs:
        papernot: {radius: [local accuracy of each point]}
        same for the local surrogate

TODO: check how the radius loop fits together,
see whether it runs,
make the graph...
```
# Seed random number generator so tutorial is reproducible
rng = numpy.random.RandomState([2017, 8, 30])
# Thibault: Tensorflow stuff
set_log_level(logging.DEBUG)
assert setup_tutorial()
sess = tf.Session()
# Data
X, Y = DATASETS_['german']()
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.30)
X_sub = X_test[:holdout]
Y_sub = numpy.argmax(Y_test[:holdout], axis=1)
## Redefine test set as remaining samples unavailable to adversaries
### N.B. Thibault: this is for the Papernot substitute
X_test = X_test[holdout:]
Y_test = Y_test[holdout:]
print("Training black box on",X_train.shape[0], "examples")
print('Testing black box and substitute on', X_test.shape[0],' examples')
print("Using ", holdout, " examples to start PP substitute")
## Define input and output TF placeholders
### N.B. Thibault: TensorFlow leftovers, used for the Papernot substitute...
x = tf.placeholder(tf.float32, shape=(None, X.shape[1]))
y = tf.placeholder(tf.float32, shape=(None, Y.shape[1]))
# Simulate the black-box model
print("Preparing the black-box model.")
prep_bbox_out = BB_MODELS_['rf'](X_train, Y_train, X_test, Y_test)
bb_model, bbox_preds, _ = prep_bbox_out  # bbox_preds is the predict function
# Train PAPERNOT substitute
print("Training the Papernot substitute model.")
train_sub_pap = train_sub(sess, x, y, bb_model, X_sub, Y_sub,
nb_classes, nb_epochs_s, batch_size,
learning_rate, data_aug, lmbda, rng=rng)
model_sub, preds_sub = train_sub_pap
eval_params = {'batch_size': batch_size}
pap_acc = model_eval(sess, x, y, preds_sub, X_test, Y_test, args=eval_params)
print(pap_acc)
import copy
from multiprocessing import Pool
def pred(x):
return bb_model.predict(x)[:,1]
xs_toexplain = [pandas.Series(xi) for xi in X_test[:1000,:]]
radius_perc=[0.05,0.1,0.2,0.3,0.4,0.5]#,0.6,0.7,0.8,0.9,1]
papernot = dict([(r, []) for r in radius_perc])
localsurrogate = dict([(r, []) for r in radius_perc])
c = 0
for x_toexplain in xs_toexplain:
c += 1
if c % 100 == 0:
print('iter', c)
print("Training Local Surrogate substitute model.")
_, train_sub_ls = lad.LocalSurrogate(pandas.DataFrame(X), blackbox=bb_model, n_support_points=100, max_depth=3).get_local_surrogate(x_toexplain)
print("Calculating distances.")
dists = euclidean_distances(x_toexplain.to_frame().T, X)
#dists = pandas.Series(dists[0], index=X.index)
radius_all_ = dists.max()*numpy.array(radius_perc)
for i in range(len(radius_all_)):
radius = radius_all_[i]
#support_x_ = numpy.array(get_random_points_hypersphere(x_toexplain, radius_=radius, n_points_=1000))
support_x_ = generate_inside_ball(numpy.array(x_toexplain), segment=(0, radius), n=1000)
pap_fid = model_eval(sess, x, y, preds_sub, support_x_, bb_model.predict(support_x_) , args=eval_params)
papernot[radius_perc[i]].append(pap_fid)
ls_fid = accuracy_score(train_sub_ls.predict(support_x_), pred(support_x_))
localsurrogate[radius_perc[i]].append(ls_fid)
X_sub.shape
import imp
imp.reload(lad)
out_localsurr = pandas.DataFrame(localsurrogate)
out_papernot = pandas.DataFrame(papernot)
import seaborn as sns
import matplotlib.pyplot as plt
sns.pointplot(data=out_papernot)
sns.pointplot(data=out_localsurr, color='orange')
plt.xlabel('Radius percent')
plt.ylabel('Local Accuracy')
plt.savefig('figures/local_fidelity_german.pdf')
plt.show()
out_papernot.to_csv('aze.csv')
from multiprocessing import Pool
def sq(x):
    return x[0] + x[1] / x[0] + x[1]
with Pool(5) as p:
    print(p.map(sq, xs_toexplain))
sum(xs_toexplain)
```
```
%load_ext autoreload
%autoreload 2
from nannon import *
start_pos
end_tuple
roll()
first_roll()
pos = ((0, 0, 1), (0, 2, 3))
print_board(pos)
swapped = swap_players(pos)
print(swapped)
print_board(swapped)
pos = ((0, 0, 1), (0, 2, 3))
print_board(pos)
print(who_won(pos))
pos = ((7, 7, 7), (0, 0, 1))
print_board(pos)
print(who_won(pos))
pos = ((0, 0, 1), (7, 7, 7))
print_board(pos)
print(who_won(pos))
pos = ((0, 0, 1), (0, 2, 3))
print_board(pos)
print(legal_moves(pos, 2))
legal_moves(pos, 3)
pos = ((1, 2, 3), (1, 2, 3))
print_board(pos)
legal_moves(pos, 2)
pos = ((0, 1, 2), (0, 1, 2))
die = 3
print('start')
print_board(pos)
lm = legal_moves(pos, die)
print('lm with die', die, lm, '\n')
m0 = make_move(pos, 0, die)
print('m0')
print_board(m0)
m1 = make_move(pos, 1, die)
print('m1')
print_board(m1)
pos_dict = explore()
print(len(pos_dict))
print(dict(list(pos_dict.items())[:5]))
rand_play
die = 2
print_board(start_pos)
npos = rand_play(start_pos, die)
print(npos)
print_board(npos)
first_play
die = 2
print_board(start_pos)
npos = first_play(start_pos, die)
print(npos)
print_board(npos)
play_game(rand_play, first_play)
play_tourn(rand_play, first_play)
players = [rand_play, first_play, last_play, score_play]
round_robin(players)
import pickle
mediocre_table = pickle.load(open('nannon/mediocre_table.p', 'rb'))
print(dict(list(mediocre_table.items())[:5]))
input_nodes = 2
hidden_nodes = 6
output_nodes = 1
learning_rate = 0.3
# Creates an instance of the scratch neural network.
# Here we teach it how to produce correct "XOR" output.
n = ScratchNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)
X = [[0,0],
[0,1],
[1,0],
[1,1]]
y = [[0],
[1],
[1],
[0]]
print('Before:', n.query(X))
for _ in range(5000):
n.train(X, y)
print('After', n.query(X))
# this shows the value player working
print_board(start_pos)
npos = value_play(start_pos, 2)
print(npos)
print_board(npos)
# This runs value player which is written in players.py
play_tourn(value_play, rand_play)
x, y = organize_input()
net = ScratchNetwork(6, 12, 1)
# for pos in x:
# print(net.query(pos))
pos_to_train = x[0]
print(pos_to_train)
print(net.query(pos_to_train))
for _ in range(1000):
net.train(pos_to_train, 1)
#print(net.query(pos_to_train))
print(net.query(pos_to_train))
# hill climbing
#use two networks and run against each other
#duplicate the winning position and replace loser, then add noise
# have mutate method that takes the weights of neural network, and adds noise (small value from a random generated weight with smaller range)
import copy
import numpy as np
def training():
    n1 = ScratchNetwork(6, 12, 1)
    n2 = ScratchNetwork(6, 12, 1)
    for i in range(0, 1000):
        random_init_range = pow(n1.n_input, -0.5)
        if play_tourn(neurotest(n1), neurotest(n2)) > 0.5:
            # n1 won: duplicate it over the loser, then perturb the copy
            n2 = copy.copy(n1)
            n2.weights_ih = n1.weights_ih + np.random.normal(0.0, random_init_range,
                                                             (n2.n_hidden, n2.n_input))
            n2.weights_ho = n1.weights_ho + np.random.normal(0.0, random_init_range,
                                                             (n2.n_output, n2.n_hidden))
        else:
            # n2 won: duplicate it over the loser, then perturb the copy
            n1 = copy.copy(n2)
            n1.weights_ih = n2.weights_ih + np.random.normal(0.0, random_init_range,
                                                             (n1.n_hidden, n1.n_input))
            n1.weights_ho = n2.weights_ho + np.random.normal(0.0, random_init_range,
                                                             (n1.n_output, n1.n_hidden))
    # return the surviving network after all rounds (instead of after one game)
    return n1
training()
# this is the result of a random player vs a trained neuroplayer through hill climbing
play_tourn(rand_play,neurotest(training()))
# this is a test of the neuroplayer
n = neurotest(ScratchNetwork(6, 12, 1))
n(start_pos, 6)
# this is the neuro player algorithm that uses neural networks to value a move
def neurotest(n):
def neuro_player(pos, roll):
best_move = []
best_val = 0
lm = legal_moves(pos,roll)
for moves in lm:
move = (make_move(pos, moves, roll))
move1 = (list(move)[0]+list(move)[1])
value = n.query(move1)[0][0]
if value > best_val:
best_val = value
best_move = move
return best_move
return neuro_player
n = ScratchNetwork(6, 12, 1)
play_tourn(rand_play, neurotest(training()))
# This is the expectimax algorithm that finds the best possible move by minimizing the opponents strength of moves
import pickle
def expectimax(pos, roll):
mediocre_table = pickle.load(open('nannon/mediocre_table.p', 'rb'))
lm = legal_moves(pos, roll)
candidates = []
for move in lm:
pos2 = swap_players(make_move(pos, move, roll))
current = []
for n in range(1,7):
lm2 = legal_moves(pos2, n)
for move2 in lm2:
pos3 = make_move(pos2, move2, n)
current.append((move, mediocre_table.get(pos3)))
best_move1= max(current, key=lambda x: x[1])
candidates.append(best_move1)
right_move = min(candidates, key=lambda x: x[1])
x, _ = right_move
return make_move(pos, x, roll)
expectimax(start_pos, 2)
#This runs expectimax for starting pos
expectimax(start_pos,2)
#this plays a tournament with expectimax and value_play
play_tourn(expectimax, value_play)
#This plays a tournament with expectimax and rand_play
play_tourn(expectimax, rand_play)
# this is not needed
import random
# pseudocode sketch of the idea:
#   lm = legal_moves(pos, roll)
#   pick a random move
#   me = make_move(pos, move, roll)
#   save the move to a list: candidates.append(move)
#   return make_move(pos, move, roll)
candidates = []
def match_box_play(pos, roll):
    lm = legal_moves(pos, roll)
    move = random.choice(lm)
    me = make_move(pos, move, roll)
    print(move)
    print(me)
    candidates.append(move)  # save the move to the list
    return me
match_box_play(start_pos, 2)
# ** I was unable to get this code to work, so I wrote it out in text/pseudocode for what I was trying to do **
# get all positions in the game, give each position 25 beads
# play a game with another match_box player
# for each move get all beads/legal moves and randomly select a bead(which determines the move to make)
# save each move in a list for each player
# Use a step function to weigh the values of the beads more towards later positions.
# at the end of the game, for the winner add 3 beads to all positions moved
# take away 1 bead to all positions from the loser moved
# How many times to run?
#
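# --- Illustrative sketch (not part of the assignment code): a minimal
# MENACE-style "matchbox" learner on a toy two-move game, showing the bead
# mechanics described above: each box holds beads per move, moves are drawn
# in proportion to bead counts, and the winning move is reinforced.
import random as _random

def bead_choose(beads):
    # draw a move with probability proportional to its bead count
    moves, counts = zip(*beads.items())
    return _random.choices(moves, weights=counts)[0]

def bead_update(beads, move, won, reward=3, penalty=1):
    # a winning move gains beads; a losing move loses beads (at least 1 left)
    if won:
        beads[move] += reward
    else:
        beads[move] = max(1, beads[move] - penalty)

_random.seed(0)
toy_beads = {'a': 25, 'b': 25}   # one "matchbox" with 25 beads per move
for _ in range(500):
    m = bead_choose(toy_beads)
    bead_update(toy_beads, m, won=(m == 'a'))   # in this toy game move 'a' always wins
# after training, the learner strongly prefers the winning move
assert toy_beads['a'] > toy_beads['b']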
import random
pos_table = {}
pos_table2 = {}
play1_moves = []
play2_moves = []
def match_box(pos, roll):
table = explore()
for pos in table:
pos_table[pos] = 25
pos_table2 = pos_table.copy()
play1_moves = []
play2_moves = []
# this will then run games between two match box players 1000 times
# after each game in the loop, the pos_tables will be updated based on the results of the game
# The length will be divided up into 5 sections, in step function fashion the positions made will have the number
#of beads adjusted accordingly for the winning player
#for example if the move list was 15 moves long, the first 3 moves would gain 1 bead, the next 3 would gain 2,
#the next 3 would gain 3 beads and so on and so forth
# the length of the losing player will be divided into 3 sections, and similarly lose beads in step function fashion,
#Ex: the first 5 would lose 1 bead, next 5 would lose 2 bead, last 5 would lose 3 beads
# after the 1000 games have finished there should remain a table with the most optimal moves given the number of beads,
#in each position
#this player will then match off against other players after it has finished training/learning
#for n in range(1000):
# play_game(matchbox_play(pos, roll), matchbox_play(pos,roll))
# update tables
#lm = legal_moves(pos, roll)
def matchbox_play(pos, roll):
# play a game between two matchbox players
# matchbox player 1 will select a legal move by picking randomly from the beads associated with those legal moves
# after that move player 2 will do the same thing
    # following each move, the moves will be recorded into play1_moves and play2_moves respectively
    # this will repeat until a player has won the game, after which the winner as well as the move lists will be returned
lm = legal_moves(pos, roll)
for moves in lm:
move_list = [-1]
for item in pos_table:
print(make_move(pos,moves,roll))
            if make_move(pos, moves, roll) == item:
                move_list.append(item)
move = random.choice(move_list)
#save move to list
return make_move(pos, move, roll)
matchbox_play(start_pos, 2)
# this is the algorithm for back propagation where moves are valued using the value table and trained against one another
import pickle
def back_prop(pos, roll):
    net = ScratchNetwork(6, 12, 1)
    mediocre_table = pickle.load(open('nannon/mediocre_table.p', 'rb'))
    lm = legal_moves(pos, roll)
    for move in lm:
        if len(lm) == 1:
            return move
        elif len(lm) == 2:
            move1_val = mediocre_table.get(1)
            move2_val = mediocre_table.get(2)
            if move1_val > move2_val:
                train = 1
            else:
                train = 0
            for n in range(100):
                # pseudocode kept from the original: net.train([the two positions], [train])
                pass
            # return the move with the greater value
            return move
        elif len(lm) == 3:
            move1_val = mediocre_table.get(1)
            move2_val = mediocre_table.get(2)
            if move1_val > move2_val:
                train = 1
            else:
                train = 0
            for n in range(100):
                # pseudocode kept from the original: net.train([the two positions], [train])
                pass
            # compare the greater value with move 3 and train again
            return move
# this is the round robin result of all the players and algorithms I got functioning
players = [rand_play, first_play, last_play, score_play, value_play, expectimax, neurotest(training())]
round_robin(players)
```
```
import numpy as np
import scipy
import scipy.misc
import scipy.ndimage
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import OneHotEncoder
from datetime import datetime
import resource
np.set_printoptions(suppress=True, precision=5)
%matplotlib inline
class Laptimer:
def __init__(self):
self.start = datetime.now()
self.lap = 0
def click(self, message):
td = datetime.now() - self.start
td = (td.days*86400000 + td.seconds*1000 + td.microseconds / 1000) / 1000
memory = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / (1024 ** 2)
print("[%d] %s, %.2fs, memory: %dmb" % (self.lap, message, td, memory))
self.start = datetime.now()
self.lap = self.lap + 1
return td
def reset(self):
self.__init__()
def __call__(self, message = None):
return self.click(message)
timer = Laptimer()
timer()
def normalize_fetures(X):
return X * 0.98 / 255 + 0.01
def normalize_labels(y):
y = OneHotEncoder(sparse=False).fit_transform(y)
y[y == 0] = 0.01
y[y == 1] = 0.99
return y
url = "https://raw.githubusercontent.com/makeyourownneuralnetwork/makeyourownneuralnetwork/master/mnist_dataset/mnist_train_100.csv"
train = pd.read_csv(url, header=None, dtype="float64")
train.sample(10)
X_train = normalize_fetures(train.iloc[:, 1:].values)
y_train = train.iloc[:, [0]].values.astype("int32")
y_train_ohe = normalize_labels(y_train)
fig, _ = plt.subplots(5, 6, figsize = (15, 10))
for i, ax in enumerate(fig.axes):
ax.imshow(X_train[i].reshape(28, 28), cmap="Greys", interpolation="none")
ax.set_title("T: %d" % y_train[i])
plt.tight_layout()
url = "https://raw.githubusercontent.com/makeyourownneuralnetwork/makeyourownneuralnetwork/master/mnist_dataset/mnist_test_10.csv"
test = pd.read_csv(url, header=None, dtype="float64")
test.sample(10)
X_test = normalize_fetures(test.iloc[:, 1:].values)
y_test = test.iloc[:, 0].values.astype("int32")
```
# Neural Networks Classifier
Author: Abul Basar
```
class NeuralNetwork:
def __init__(self, layers, learning_rate, random_state = None):
self.layers_ = layers
self.num_features = layers[0]
self.num_classes = layers[-1]
self.hidden = layers[1:-1]
self.learning_rate = learning_rate
        if random_state is not None:
            np.random.seed(random_state)
self.W_sets = []
for i in range(len(self.layers_) - 1):
n_prev = layers[i]
n_next = layers[i + 1]
m = np.random.normal(0.0, pow(n_next, -0.5), (n_next, n_prev))
self.W_sets.append(m)
def activation_function(self, z):
return 1 / (1 + np.exp(-z))
def fit(self, training, targets):
inputs0 = inputs = np.array(training, ndmin=2).T
assert inputs.shape[0] == self.num_features, \
"no of features {0}, it must be {1}".format(inputs.shape[0], self.num_features)
targets = np.array(targets, ndmin=2).T
assert targets.shape[0] == self.num_classes, \
"no of classes {0}, it must be {1}".format(targets.shape[0], self.num_classes)
outputs = []
for i in range(len(self.layers_) - 1):
W = self.W_sets[i]
inputs = self.activation_function(W.dot(inputs))
outputs.append(inputs)
errors = [None] * (len(self.layers_) - 1)
errors[-1] = targets - outputs[-1]
#print("Last layer", targets.shape, outputs[-1].shape, errors[-1].shape)
#print("Last layer", targets, outputs[-1])
#Back propagation
for i in range(len(self.layers_) - 1)[::-1]:
W = self.W_sets[i]
E = errors[i]
O = outputs[i]
I = outputs[i - 1] if i > 0 else inputs0
#print("i: ", i, ", E: ", E.shape, ", O:", O.shape, ", I: ", I.shape, ",W: ", W.shape)
W += self.learning_rate * (E * O * (1 - O)).dot(I.T)
if i > 0:
errors[i-1] = W.T.dot(E)
def predict(self, inputs, cls = False):
inputs = np.array(inputs, ndmin=2).T
assert inputs.shape[0] == self.num_features, \
"no of features {0}, it must be {1}".format(inputs.shape[0], self.num_features)
for i in range(len(self.layers_) - 1):
W = self.W_sets[i]
input_next = W.dot(inputs)
inputs = activated = self.activation_function(input_next)
return np.argmax(activated.T, axis=1) if cls else activated.T
def score(self, X_test, y_test):
y_test = np.array(y_test).flatten()
        y_test_pred = self.predict(X_test, cls=True)
return np.sum(y_test_pred == y_test) / y_test.shape[0]
```
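For reference, the weight update implemented in `fit` above is the standard gradient step for a sigmoid layer trained on squared error (stated here for clarity; it matches the line `W += self.learning_rate * (E * O * (1 - O)).dot(I.T)`):

$$\Delta W = \eta \,\big(E \odot O \odot (1 - O)\big)\, I^{\mathsf{T}}, \qquad E_{\text{prev}} = W^{\mathsf{T}} E$$

where $E$ is the layer's output error, $O$ its sigmoid activations, $I$ its inputs, and $\eta$ the learning rate; $O \odot (1 - O)$ is the sigmoid derivative, and the second formula is the error propagated back to the previous layer, as in the reversed loop over layers above.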
# Run neural net classifier on small dataset
### Training set size: 100, testing set size 10
```
nn = NeuralNetwork([784,100,10], 0.3, random_state=0)
for i in np.arange(X_train.shape[0]):
nn.fit(X_train[i], y_train_ohe[i])
nn.predict(X_train[2]), nn.predict(X_train[2], cls=True)
print("Testing accuracy: ", nn.score(X_test, y_test), ", training accuracy: ", nn.score(X_train, y_train))
#list(zip(y_test_pred, y_test))
```
# Load full MNIST dataset.
### Training set size 60,000 and test set size 10,000
Original: http://yann.lecun.com/exdb/mnist/
CSV version:
* training: https://pjreddie.com/media/files/mnist_train.csv
* testing: https://pjreddie.com/media/files/mnist_test.csv
```
train = pd.read_csv("../data/MNIST/mnist_train.csv", header=None, dtype="float64")
X_train = normalize_fetures(train.iloc[:, 1:].values)
y_train = train.iloc[:, [0]].values.astype("int32")
y_train_ohe = normalize_labels(y_train)
print(y_train.shape, y_train_ohe.shape)
test = pd.read_csv("../data/MNIST/mnist_test.csv", header=None, dtype="float64")
X_test = normalize_fetures(test.iloc[:, 1:].values)
y_test = test.iloc[:, 0].values.astype("int32")
```
## Run the Neural Network classifier and measure performance
```
timer.reset()
nn = NeuralNetwork([784,100,10], 0.3, random_state=0)
for i in range(X_train.shape[0]):
nn.fit(X_train[i], y_train_ohe[i])
timer("training time")
accuracy = nn.score(X_test, y_test)
print("Testing accuracy: ", nn.score(X_test, y_test), ", Training accuracy: ", nn.score(X_train, y_train))
```
# Effect of learning rate
```
params = 10 ** - np.linspace(0.01, 2, 10)
scores_train = []
scores_test = []
timer.reset()
for p in params:
nn = NeuralNetwork([784,100,10], p, random_state = 0)
for i in range(X_train.shape[0]):
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer()
plt.plot(params, scores_test, label = "Test score")
plt.plot(params, scores_train, label = "Training score")
plt.xlabel("Learning Rate")
plt.ylabel("Accuracy")
plt.legend()
plt.title("Effect of learning rate")
print("Accuracy scores")
pd.DataFrame({"learning_rate": params, "train": scores_train, "test": scores_test})
```
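The grid `10 ** -np.linspace(0.01, 2, 10)` sweeps the learning rate log-uniformly from about 0.977 down to 0.01; `np.logspace` expresses the same grid directly:

```python
import numpy as np

params = 10 ** -np.linspace(0.01, 2, 10)
same = np.logspace(-0.01, -2, 10)  # endpoints given as exponents of 10
print(np.allclose(params, same))  # True
```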
# Effect of Epochs
```
epochs = np.arange(20)
learning_rate = 0.077
scores_train, scores_test = [], []
nn = NeuralNetwork([784,100,10], learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
timer.reset()
for _ in epochs:
np.random.shuffle(indices)
for i in indices:
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("test score: %f, training score: %f" % (scores_test[-1], scores_train[-1]))
plt.plot(epochs, scores_test, label = "Test score")
plt.plot(epochs, scores_train, label = "Training score")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(loc = "lower right")
plt.title("Effect of Epochs")
print("Accuracy scores")
pd.DataFrame({"epochs": epochs, "train": scores_train, "test": scores_test})
```
# Effect of size (num of nodes) of the single hidden layer
```
num_layers = 50 * (np.arange(10) + 1)
learning_rate = 0.077
scores_train, scores_test = [], []
timer.reset()
for p in num_layers:
nn = NeuralNetwork([784, p,10], learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
for i in indices:
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("size: %d, test score: %f, training score: %f" % (p, scores_test[-1], scores_train[-1]))
plt.plot(num_layers, scores_test, label = "Test score")
plt.plot(num_layers, scores_train, label = "Training score")
plt.xlabel("Hidden Layer Size")
plt.ylabel("Accuracy")
plt.legend(loc = "lower right")
plt.title("Effect of size (num of nodes) of the hidden layer")
print("Accuracy scores")
pd.DataFrame({"layer": num_layers, "train": scores_train, "test": scores_test})
```
# Effect of using multiple hidden layers
```
num_layers = np.arange(5) + 1
learning_rate = 0.077
scores_train, scores_test = [], []
timer.reset()
for p in num_layers:
layers = [100] * p
layers.insert(0, 784)
layers.append(10)
nn = NeuralNetwork(layers, learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
for i in indices:
nn.fit(X_train[i], y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("size: %d, test score: %f, training score: %f" % (p, scores_test[-1], scores_train[-1]))
plt.plot(num_layers, scores_test, label = "Test score")
plt.plot(num_layers, scores_train, label = "Training score")
plt.xlabel("No of hidden layers")
plt.ylabel("Accuracy")
plt.legend(loc = "upper right")
plt.title("Effect of using multiple hidden layers, \nNodes per layer=100")
print("Accuracy scores")
pd.DataFrame({"layer": num_layers, "train": scores_train, "test": scores_test})
```
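The three-step layer-list construction in the loop above (`[100] * p`, `insert`, `append`) can be written as a single list concatenation:

```python
p = 3  # number of hidden layers
layers = [100] * p
layers.insert(0, 784)
layers.append(10)
assert layers == [784] + [100] * p + [10]  # equivalent one-liner
print(layers)  # [784, 100, 100, 100, 10]
```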
# Rotation
```
img = scipy.ndimage.interpolation.rotate(X_train[110].reshape(28, 28), -10, reshape=False)
print(img.shape)
plt.imshow(img, interpolation=None, cmap="Greys")
epochs = np.arange(10)
learning_rate = 0.077
scores_train, scores_test = [], []
nn = NeuralNetwork([784,250,10], learning_rate, random_state = 0)
indices = np.arange(X_train.shape[0])
timer.reset()
for _ in epochs:
np.random.shuffle(indices)
for i in indices:
for rotation in [-10, 0, 10]:
img = scipy.ndimage.interpolation.rotate(X_train[i].reshape(28, 28), rotation, cval=0.01, order=1, reshape=False)
nn.fit(img.flatten(), y_train_ohe[i])
scores_train.append(nn.score(X_train, y_train))
scores_test.append(nn.score(X_test, y_test))
timer("test score: %f, training score: %f" % (scores_test[-1], scores_train[-1]))
plt.plot(epochs, scores_test, label = "Test score")
plt.plot(epochs, scores_train, label = "Training score")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(loc = "lower right")
plt.title("Trained with rotation (+/- 10)\n Hidden Nodes: 250, LR: 0.077")
print("Accuracy scores")
pd.DataFrame({"epochs": epochs, "train": scores_train, "test": scores_test})
```
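The cells above call `scipy.ndimage.interpolation.rotate`, which was written against an older SciPy; in recent releases the same function is available as `scipy.ndimage.rotate`, and the `interpolation` submodule is deprecated. With `reshape=False` the rotated array keeps its 28x28 shape, which is what allows feeding it back through `flatten()`:

```python
import numpy as np
from scipy.ndimage import rotate  # current location of the function

img = np.zeros((28, 28))
img[8:20, 13:15] = 1.0  # a synthetic vertical stroke
rotated = rotate(img, -10, reshape=False, cval=0.01, order=1)
print(rotated.shape)  # (28, 28)
```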
# Which characters was the NN most wrong about?
```
# y_test_pred is not defined at this point; recompute test-set predictions
# (assumes nn.predict accepts a single flattened sample, as used above)
y_test_pred = np.array([nn.predict(x, cls=True) for x in X_test])
missed = y_test_pred != y_test
pd.Series(y_test[missed]).value_counts().plot(kind = "bar")
plt.title("No. of misclassifications by digit")
plt.ylabel("No. of misclassifications")
plt.xlabel("Digit")
fig, _ = plt.subplots(6, 4, figsize = (15, 10))
for i, ax in enumerate(fig.axes):
ax.imshow(X_test[missed][i].reshape(28, 28), interpolation="nearest", cmap="Greys")
ax.set_title("T: %d, P: %d" % (y_test[missed][i], y_test_pred[missed][i]))
plt.tight_layout()
img = scipy.ndimage.imread("/Users/abulbasar/Downloads/9-03.png", mode="L")
print("Original size:", img.shape)
img = normalize_fetures(scipy.misc.imresize(img, (28, 28)))
img = np.abs(img - 0.99)
plt.imshow(img, cmap="Greys", interpolation="none")
print("Predicted value: ", nn.predict(img.flatten(), cls=True))
```
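The last cell relies on `scipy.ndimage.imread` and `scipy.misc.imresize`, both of which have been removed from SciPy. The same load-resize-normalize preprocessing can be done with Pillow (an assumed dependency; the synthetic image below stands in for the original file path):

```python
import numpy as np
from PIL import Image  # Pillow, assumed installed

# build a synthetic grayscale image in place of the original PNG file
img = Image.fromarray((np.random.rand(100, 100) * 255).astype("uint8"), mode="L")
arr = np.asarray(img.resize((28, 28))) / 255.0  # resize and scale to [0, 1]
print(arr.shape)  # (28, 28)
```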
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This week we are going to learn a bit about __Data Visualization__, which is an important aspect in Computational Social Science. Why is it so important to make nice plots if we can use stats and modelling? I hope I will convince you that it is _very_ important to make meaningful visualizations. Then, we will try to produce some beautiful figures using the data we downloaded last week. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is the plan:\n",
"\n",
"* __Part 1__: Some talking from me on __why do we even care about visualizing data__. \n",
"* __Part 2__: Here is where you convince yourself that data visualization is useful by doing a __little visualization exercise__.\n",
"* __Part 3__: We will look at the relation between the attention to GME on Reddit and the evolution of the GME market indicators.\n",
"* __Part 4__: We will visualize the activity of Redditors posting about GME.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Intro to visualization"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Start by watching this short introduction video to Data Visualization.\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> * _Video Lecture_: Intro to Data Visualization"
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {},
"outputs": [
{
"data": {
"image/jpeg": "<base64 JPEG data omitted>",
"text/html": [
"\n",
" <iframe\n",
" width=\"800\"\n",
" height=\"450\"\n",
" src=\"https://www.youtube.com/embed/oLSdlg3PUO0\"\n",
" frameborder=\"0\"\n",
" allowfullscreen\n",
" ></iframe>\n",
" "
],
"text/plain": [
"<IPython.lib.display.YouTubeVideo at 0x7ff95398cb50>"
]
},
"execution_count": 80,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from IPython.display import YouTubeVideo\n",
"YouTubeVideo(\"oLSdlg3PUO0\",width=800, height=450)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are many types of data visualizations, serving different purposes. Today we will look at some of those types for visualizing single-variable data: _line graphs_ and _histograms_. We will also use _scatter plots_ to visualize two variables against each other. \n",
"Before starting, read the following sections of the data visualization book."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> * _Reading_ [Sections 2,3.2 and 5 of the data visualization book](https://clauswilke.com/dataviz/aesthetic-mapping.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2: A little visualization exercise"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ok, but is data visualization really so necessary? Let's see if I can convince you of that with this little visualization exercise."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"> *Exercise 1: Visualization vs stats*\n",
"> \n",
"> Start by downloading these four datasets: [Data 1](https://raw.githubusercontent.com/suneman/socialdataanalysis2020/master/files/data1.tsv), [Data 2](https://raw.githubusercontent.com/suneman/socialdataanalysis2020/master/files/data2.tsv), [Data 3](https://raw.githubusercontent.com/suneman/socialdataanalysis2020/master/files/data3.tsv), and [Data 4](https://raw.githubusercontent.com/suneman/socialdataanalysis2020/master/files/data4.tsv). The format is `.tsv`, which stands for _tab separated values_. \n",
"> Each file has two columns (separated using the tab character). The first column is $x$-values, and the second column is $y$-values. \n",
"> \n",
"> * Using the `numpy` function `mean`, calculate the mean of both $x$-values and $y$-values for each dataset. \n",
"> * Use python string formatting to print precisely two decimal places of these results to the output cell. Check out [this _stackoverflow_ page](http://stackoverflow.com/questions/8885663/how-to-format-a-floating-number-to-fixed-width-in-python) for help with the string formatting. \n",
"> * Now calculate the variance for all of the various sets of $x$- and $y$-values (to three decimal places).\n",
"> * Use [`scipy.stats.pearsonr`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html) to calculate the [Pearson correlation](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient) between $x$- and $y$-values for all four data sets (also to three decimal places).\n",
"> * The next step is to use _linear regression_ to fit a straight line $f(x) = a x + b$ through each dataset and report $a$ and $b$ (to two decimal places). An easy way to fit a straight line in Python is using `scipy`'s `linregress`. It works like this\n",
"> ```\n",
"> from scipy import stats\n",
"> slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)\n",
">```\n",
"> * Finally, it's time to plot the four datasets using `matplotlib.pyplot`. Use a two-by-two [`subplot`](http://matplotlib.org/examples/pylab_examples/subplot_demo.html) to put all of the plots nicely in a grid and use the same $x$ and $y$ range for all four plots. And include the linear fit in all four plots. (To get a sense of what I think the plot should look like, you can take a look at my version [here](https://raw.githubusercontent.com/suneman/socialdataanalysis2017/master/files/anscombe.png).)\n",
"> * Explain - in your own words - what you think my point with this exercise is.\n",
"\n",
"\n",
"Get more insight into the ideas behind this exercise by reading [here](https://en.wikipedia.org/wiki/Anscombe%27s_quartet).\n",
"\n",
"And the video below generalizes in the coolest way imaginable. It's a treat, but don't watch it until **after** you've done the exercises.\n"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"image/jpeg": "<base64 JPEG data omitted>",
jIB6xFj7qO2wO87OxfDmn1W33V4eIlubYt2VYuPXzutQsyWWHj7Thndt/izHzMg6x8SekvxHXlZlGD4cvyKsTJSunIZMkrdSxIRwqIAeqNmDqSEB9qbn6UtJ1nNwaq9Gy007LF9Vlru5UNSFcNV1a62I2dkbsO/TI8jNviUU9fhvDOVTqF1FFupUUCn17pBbT7BVyvuXfk4+IDkeUhqXhDTr9Tq1a3HVs/DU10387BxX2wN6w3TdlFj7Mw3HL7JdyVv0j9pgaX4e8K6hj63qOpXanbkYWfXxx9Pbn06G5VFSAz9NOmqOo4Ab9Uk9/O50nxTp+Xl5WDjZNduXpx2y6F5cqyG4N3K8X4v7J4k7HsdjLmad4g8PDT6tU1LQ8DHbWs6vl7Rba9zYrWey1gQE+1ZxXbkyrvCNxiU3gnJz7tPxbNTpTHz7K98qms+wjcmC7e0QpKBGK7nYsRLmBP9X9//ABISf6v7/wDiQgIiICSq+kv7Q/GRkqvpL+0PxhUZbaX+jH2n8ZUy20v9GPtP4zM8OMqIic1IiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICVur/SX7D+MqdS1fUUyb668cNRXZjrQ4x73Lq9LvZuynhsbglfNey8mLDbbe31Ye0vcDsfPf4/UJqOjCr8/wBzf7TIzkrUb+Y8m+b5T/hkeI+Yf6v6Z0RGJLiPmH+r+mOI+Yf6v6YEYkuI+Yf6v6Y4j5h/q/pgLPd+yv4SM5LFHbuPoj5vh+zI8R8w/wBX9MCMSXEfMP8AV/TOO+2uvj1Lak5sqJzbjydvoovIe05+A+ECQnV+Pn6d4y/Kuk5eHmUV6DmV7WmwVs1ytdSGBUfmrOIt3rbftap8/K/w/Gz2eIcjQjg3rXjYy3jUSW6TkpXZ+j6WwqPU4B+X0kYbTdVA2b2h5f4viv8AhkGJp+JXj01UVLwqxq66al3J4V1KERdz3OyqB3lP478XYWi4oy85rFqe5KEFVZsse11dwqrvt2Suxtz8pmw8R8w/1f0zr3xX4qxrNao8O5+mrkYebStxzMpVfE6oWyysdG6ng6hquHPffk6jaUc/gbx5fqefqGN+TsinEwq1tws5uXDNrcjpFVsrCqba2FigE9t99pq93jzxJqOjXZmmaUun5mJn9K2vUHAU4a0mx7Q2YK1XhaURifIB9u/lseoemDw7i5j6dZmcMjHvOKyjGyDUjowqs2sWrgKq7CqE+7f4SF2Lq+qZ+s6TquLVVoF2O1eJlUuVvsJanp8X5EsxHVYhlABrUdx5wZ3of1DWcrTjfra4q32XOcZsR6bK7MYqnF+WPY1bDqdUAg+QXfv57rb9I/aZTeC/DONpGDRp+IzGjF6nE2sXsZrbHusd2CAci7sdgAO8u7VG59oeZ+b+Uo44kuI+Yf6v6Y4j5h/q/pgdeYGo6pp2oanla9n6fRoltgr0nqPTTwZnJqQNxD8ugrcg5PdSR2BnYCkEbjuDsQR3BB7ggjzEovHfhDTtYxlx9RTqU0WDIXhZdU6OqupIar2tjW7jb6/jtMP0YeNNO1vFe3TurXTg2DG4XV9MqqopqKBGI6ZrK7bnft3Agbb+r+//AIkJycRx8x5/Bvh+zI8R8w/1f0wIxJcR8w/1f0xxHzD/AFf0wIyVX0l/aH4xxHzD/V/TJVKOS9x9IfN8f2YHHLbS/wBGPtP4yr4j5h/q/plrpvase/ufLf4/XMzOMmJwajQ1tNtS2PS11b1rdXt1KmdSosr5ArzUncb/AAE6t9D/AIj1fVc2yvNZqU8J476TqwVaxXqevGwdXJTbuuMmHVj3qBtudWIP0JzV2zETpL0neNdY03xNbVidTJwT4fw6asBagyrreqZWsVabl2OiGxKWv06jFLdwPW1JHYmB3bE6t9HnjW3F8MaVl6pZkanqGZdbg1jGqrOXqWaMnLrSuiolalY1Y7vu5VVWlyxABM3Hwj4rq1BcoGjKwcnTLRTn4WYlYyMd2qS+s
k41r03VvVYrB6mYHcjzBADYYmmeGfSDTmZlWFZg6np1mdjXZemvqNFNK5+PjtUtz0pXe12O69eljVkqjbWr7PntW0+lvEa2/fA1ZMLC1S7R8vVXx8cYFObVlnA2JGT6xZjtfwHWrQqvUHIqQwAdixMDW9ROMlTDHycrrZFFBTFRHesX2LWci0O4C49YbmzDcgKdgfKZ8BE4c20pXY4BY1o7BQCSSqkgADuSdvITp7/p71i68YX5T1TxDbqubpFeVdpusYVGFiMx9X9ayNP46ellwpuZa9uZ2F6kghlaB3PE681HxYX8SV4K5Pq+Do9VVef2G2brGrKfyZpvIruDXi1XZJVdv/5OJv2M7DgIlH42wc/JxRRp+UMCy26gZGWEV7qcMOGyjiC1GqGW1QKK1gIHPfY7bTQ/BHi7JGl+JsynJs1XB0S7OOh6hlCsnNTEwa7sivrUIq5eNVnjIoF4HcVkbkjch2xE6e9DHiG7IysNX1XU8v8AKGkeuXY+saacI5VofH31DRnGKi+qA3OjU99hZisNt927E1lrK8igi3ISqyxTa2ytQg7KlXZN15uQORPbcwL2JW6yGH5x7zj49SMzsnEObNxx3LqRxA39keZImPZlXnFxeZNV2U9FdrAAOnPu5CkbK5A27+XKBdRKzSrLN8qoubDjWBans23IequxVcqO+zORv9kx8B7UyVpNzXnos+XuF4VWEp0+HFRw5b2ewfcoMC7ia8clvWrerblVIuRWlQWsDHI4VbK7ms/SsLDz94mwwEREBERASt1f6S/YfxllK3V/pL9h/Gajow6/P9zf7TIyVfn+5v8AaZGdEIiIQiIgSs937K/hIyVnu/ZX8JrHpM0DM1PTrMTCzX0697KnF6GxSURt2qZ6mFiK3buvyj3bwMLxx4+r0vUNJ098XIvbXLukltWwWr85XXuFI3tYFwxUbbKCZyekz0fYXiCvFrzHyK1wbmtT1exULhwFsR+aHsQo9pdiNjsZsWhYttGLjUXXNk3Y+PTVdkuOL5Fldao9zDfszkFv/aZkD6T/AH/+Z9Tyb7P/AKWRkk8m+z/6WFRmp+k3wFh+IMenGzLMitMa/rqcd0UsSjVsjCxCpBVj323H8d9siEa/d4I0Z7/WX03BfINiXG98at7jbX9Gw2MOTPuAdz5kAnvNgiICSt+kftMjNS1vx/jY2vYugtRktfqFRtS9VXoJuL2UHdubDbHs3YDYbj69ittiJpvjjxu+mahpOCuBkZa6zd0nyKiQuP7aV7hRWRay8+ZBK7KpO5hG5TRtSy9Uw9ZwMLTdLxvyNlhrdRy6qxWa7nazqtvUwSt1Vam9oEv1Nh5TeZrXpM0DL1PTrMTBzX0697KnF9ZdSURt2qL0sLEVu3dflHu3gbR+r+//AIkJjaHiW0YeNTdc2Tdj001XZLji+RZXWEe9hv2Z2Vm/fMmAiIgJKr6S/tD8ZGSq+kv7Q/GFRltpf6MfafxlTLbS/wBGPtP4zM8OPmsvkLjXtiJXblLTacWu5zVTZeEPRS2xVLV1F+ILAHYE9jOu/Rt6O8rQszFupsqvTUtPdPFFjMa7MvWVubLTVqqwuzPbblZ9bAkeycUDcJtOz4nNSath6BeniHM1Qmv1bK0XTNPrAY9UX4mZq2RaWXjsK+GdTsd/c02mIHT2b6M8x/D+kYL14WTl6DqlmpHDuvsrw81Xs1GtsZsqukvSxx9QLh+J9qpAe25mw+B/CORjYWrKuPgaLkasbBijTi+S+KoxhRj35OTaAMrKWzlZ7IAAKL325HsCIHSno49G+bh6vpWdZpek6cum4Gdi6jk4ufdnZup5WQuKq5Ntl+Itj1747tytYtve2/lubrI8B5x0HVtNBx/WNR13UNRoPUYVer5WutqlYduG62ertsQB59t/fO0YgYOsW5SLWcSqm52vpW5brmoVMZnAyLUZa252pXuwQ7bkAbjzmdEQMfUqrHpuSm3oXWVWLTfwWzo2MpCW9N/Zs4sQ3
Fux2nX3h3QdbytW0vUdZrwcdvD+nahh8sLLsyvyjlag2CLckI+KnqmOEwAwQ7nfII8l3bsmIHUHiL0O2+spkYGraqoyfEVet5+Pbdg9FHLE2247fk83NYla00qjsQFrUe4TtHWLcpFrOJVTc7X0rct1zUKmMzgZFqMtbc7Ur3YIdtyANx5zOiBo3ps0DUNU01MHBSq1MnMx/wAq49uZZgDL0xOdmThjKpod6xcy1VtsO6WWjcbzI0/S8zM0nN0zNwcLR67sS3T8WvT8w51NeNbjtQCqnDqFPDlsKwCNlHebjEDrHwh4Y1izUdHzNVpwcVfDGl5unVep5luX6/dm+oI+QFtxkOLQtenqwQ7ne8jyXdt2zsXKt5Ut02qe5LBby4ulSulnT6QTZmBQjff3y4iBUari3PkVWCuu6qhN0R7TWBeW/SEcCGIUADfy5Gc2o49ttVTBUW2m2u7plyUJQndOoF3G6k99vhLGIFZh416rlWHgt+UxetA3JUK1JVWGYr3O6Ant75DQce2lVqamtF7tZaLzY9lh+k7g1gszH37y2iBT5+LlW86W6bUvdXYLS3F0rR0s6XSCbMwKbct/fLiIgIiICIiAlbq/0l+w/jKrUV1b1jI6THoG2j1cAY68a/V2D7lzydfWtmbcA8R7O5Pa21YDku5PkfIb+/7ZqOjCr8/3N/tMjOSsLv5t5N+qPlP+KR2X4n7o/qnREYktl+J+6P6o2X4n7o/qgRiS2X4n7o/qjZfifuj+qBXeLMJsnCysdMhsN8rGelMqs7PjtanBbVPIHcMw8iD8CDKz0c+Hr9K06jCyMuzPtoNhbIsDAkPYzrWodywRAwUbn3e4dhX+m7QtL1DS+jqucdOxUvotGQWqrTqqrolbi08LAQ7nifeoPum1aRj1U49FVVj2VU0VV1WMRY1laVqqO1nL84SoB5e/eQfc/Mpx0Nt9tVFSkBrLrFqrBY7KC7niCSQJg6H4jwM6zIqw8qjJswHFeUlTh2qclgA23uJRxyHbdG+E0v8A6kfDuXqeijHwMdsvITNouCqypZWipcjW1q7hLG/OBOLe6xj5ibB6OPBGnaRTyxcVcPIzacb15UtsvVbK6+9Vb3WnapbHs+jtvvvKNpkk8m+z/wCljZfifuj+qSQLs3c+Q/VHzL/igccSWy/E/dH9UbL8T90f1QIxJbL8T90f1RsvxP3R/VAjNGbw0zeKX1NdWLCjE6b6OGBapbEFYLL1fYx2cC3uvdgO82Dxt4qwNGxhl59r1UtatKcaWtd7XDsqqiHc+yjt/wCplbofgvShqt3iLGe979Wx12bmDjGu5aW6tSbcwXSqr6RI+AEg2ufR+M+7L8T90f1T6OPxP3R/VKNK9FnirUNUry31DTLtKbFyOlStvPe1diW26iAsyEAF19k8htNymneivRtaw68xdb1BNQe7I54rVgnp1bHkTyUdMOdj0huF49j3m57L8T90f1QNN0jwBXj67m68MrIezUKRQ2K23SQbUjfnvu6DojZD5c285uEqvG/r/wCTcr8lFPyhw/8AE6ypw58l5fTPDn0+e3Ptvx37SPgj1/8AJ+L+VSn5R6X/AJfRCcOfJuP0G4c+nw34dt+W3aBbxJbL8T90f1RsvxP3R/VAjJVfSX9ofjGy/E/dH9UlUF5L3P0h+qPj+1A45baX+jH2n8ZV7L8W+6P6pa6b+jG3xPmNvf8AbMzOIa1p1eXjZGLabBXmU20WNVY1VqpahRmqtQ8qrAG3Dr3BAInUnoZfV9Q1Gz8rWP8A/sWuzQuSWkV6rqdgrsu1e6pG22bTfyeyo/0Wz8udzzXPB3ho6fkazebRb+XNU/KIUJw6A9RwMHpE8j1D/wCEX5dv0m23ac1bHOi/SVZra+Lch9Ia+4jwzgYLYosJx6LNWzddqq1dqCwRjjZWHhFiNj02u28tp3pNexvDhTW8rV+qCuZpeBp3Q4d0OFlalkm7qctiGGeF47dul59+wdb+BfGP5E8I6
K9znNyMvLu0zEsz8wYyW3+s57Lbn6hcpGPUuPi2uX2J9gBQSQJuvo08dDV1z0avGXK0e9KckafnJqeHb1aUyKbMTMFaGxWR9irqpBRh5bE1L+jKwaNpun15VPrmg576lhZN2IbsRr2szd68nD6wa2hsfOvqOzAgkMDuNpeaF4UyRhajjZ+VXZZrHWRjp+MuBRhU20DHWrDXk1nJRys6ljElrGI2GwAa74D9LI1DVU0q+nTqrsrGycmlMDW8fVsnF9VelbMXV6MeoLgZZW5WARrFPC0cu254q/Sln7ZeY+kVJo+ma1k6Pk5g1TlmN0NRbSzm04HqfB8cWhCytYGG9nENsC3N4L9HWo4mbpGRlZumPj+HcDK0/FxtO0l8DqpkLjIMi5mzHVXC4tY6aAAbv8QBnXejp20fUdLGUobVNXzdUF/QO1QzNXbVuia+p7ZUN0+W4+O3ugblrWRk1rUcXHTJd8ihLVfI9XFWO9gW/IDGs9R66yWFfblttuPOZ8wdYqynWsYltNLrfQ1xupa9XxlcHIqRVsXha9e6hzvsSDsfKZ0CF1qorO5CpWpZ2YhVVVG7MxPYAAE7zrD0M+IxnZWXl3tkeseJKhq+m0Wcujj+H6bPUdM4qx2ruvUNlkAb/wDm7H6InY2t6ZRm4uTh5SC3Gz6LsbJqJZRZRfW1VtZZCGUMjMNwffNK8NeivB03WU1TDNtdVOmnBrxnys3I4ObefUD5GSy9IVewKtth5jvApfSjoGN+VdCoxDkVajrOs15eRamfmAJp+lD1/PcY/X6PTsavFxSNtv8AzhO25rl3hovrlOrtaCuLpeRp1GPwO6PlZOPkX3izlt7S4lCbbfqnv3lrrFWU61jEtppdb6GuN1LXq+Mrg5FSKti8LXr3UOd9iQdj5QNI/wCoPObH0ml+WT0H1XSas6jBssrz8zEtzKq7sPD6BF1t1nIb11EMyi0DzmN6BsoXLr1dXrdeBj6y9Gn4OebxnYNPqWC99VtWUxvxqXyXyLkrfyW9NgAQBtPpD8NWanRj+r3ri5mmZuPqOBfZScihcnH5rwyKBYrW0PVbdWQrA/nNwdxMPwt4Pvpr1ezNy+pneJLTZl34CPgpjKuHTgUJhA2tZW9dNCt1SxJYk9uwAaX4L0z8meL8jHvqy8enUdOuXw6o1G7Mw7qMB8RtTuya78g216g1mXj7bjiEqIB3Lb9nLSE1BeJf87i3u4NjsvIW44Gys2y7Bj2Hxmr+F/BWorn4Ofq+oY+fZoeDlYGnnHwrMR7RmNiesZec1mU/VyWTCoHFOI3Np94C7XZhZJyBeLKAER6lU0uT03dHO56vd/za9/rPaBwa9jWWXUnpWXVJVdyWu7o+2zVcdzzHI8Vbt9czcZKcjHq4hjSyIUBZ1YADZQzBuW48u5n3Nxry4sptCHgUZLFayvz3DqquNrB3+3efcbB6WMMdGI4VGtXI78uO3MgfX32gV+gqfV8iytuK3WXPjl2LKlajp1sS5+ieHP8A9ph+GrgbqOAvTqYjPkdcvtfZyq42V8zs5BLnkvudZe0YKrjLjHuopFJI7EjhwJHwMxsDTbVsqe6xH9VqaqkJWU3DcAz2bud22rUbD64EFpCagvEv+dxb3cGx2XkLccDZWbZdgx7D4y3lXZhZJyBeLKAER6lU0uT03dHO56vd/wA2vf6z2lpAREQESFwJVuPZip49wNjt27kEDv8AUZh6KMwJ/wCYcdrN+3q4cKB39ljZ9NgAPbAG+57CBnyt1f6S/YfxllK3V/pL9h/Gajow6/P9zf7TIyVfn+5v9pkZ0QiIhCIiBr/pJ8P6bqOBZXqqFsTEHrjsrvW1Rx63LWK1R5dqmsGw9zGcfo18SafqmAlul8xi4h9TWp6zW1PQrr41cST2FTVEHc9mHv3my3KD2IBDKAQRuCCNiCD5iaL4/wDRzTqGm1abg3DRqqMpckDCoVamIDh1ail1BJL8wd/NFPeBvQE+ATSvSb4Es1nCxMRNQ
ycI4V1VpvQGx7hWhTezjYp63fmH37HvtPvj7wM+qZOk5C6hk4Y0a/rOlYLes+1U25YWAV3fmuPMg9rX7QNzDDcjcbrsWG/cb+W492+xkuYG+5A5bKu523YspAHxPY9vqmlaR4CowdW1PXaLMm/K1OmwHFexFp5HpWcQ2253aitRy+iGaatgeHMrxdi4mVruNlaNkaLn2tj008qvWKicVubVZANlTh6+AsHyuR59it38feOtO0NMezULHrXMsaurp1NaQECtZY4X6NaBl3Pn7Q2BmJieKdQfxBfpTaZamn044tr1P8507GKVv9Ip0iC7vXwB33rJ8vLZdW0rFywgysfHyRRYt1IyKa7hXcv0bKxYp4ONz7QmZvCNM8Labr9eqarbn5lFumZG/wCSqa1Bso3feolemOHCr2SCTyOxnz0Z+GtWwcPLo1TU3z78m61qMgNY70I6BQUa/uG57vw8l8hN0mm+kHG8QPlaWdHux6sVLydVW4JyerlUR9Oslq+n1xtWQd2T7QVp/o19GOrYl2dh67m4+uaJb+dxaMpGub1nlSUtFeQC1BVVv3HI7m3f4zuMoF9lQFVPZVVACqo7KqqOwAGw2HwnwyVv0j9p/v8AGPBGa76R/EraRpuRqCY1ma2P0wKK2Kb9R1Tm9gQmuteW5bY+UssPW8O7Juw6snHsy8QK2TjJarXUq22xsrB5KPaX7y/GUvo58fYGvJk2YHXC4Vq1P1qxUWDgtXbXsx9hgrdjse3cCEXXhrUjmYeLlmmzGOZj1Xmi3tZSbEDdN+3mN/gP3SwiIE/1f3/8TSdMwddr13NysrMx20J6AMTG9kWV2bUhSR0gUIYX7sWO/UXt8u7fq/v/AOJrHpI8H067gPp99ttKWW1Wh6eJPKpuShlccbEPwP1HzEDZImo+jDxJpeZjtg6XkWZK6DXj4VjWq4sKojVU2l3UC0OMez21+Q/VNugJKr6S/tD8ZGSq+kv7Q/GFRltpf6MfafxlTLbS/wBGPtP4zM8OMlmABJIAA3JJ2AA8yT7hPhddwNxu/wBEbjdtu52+PacediVX1W0XIttORW9V1bjklldilLEdT2ZWUkbfXOjv+nRRlalqPXylzP8Asqt/Duh7rZyOm+tX89SZ7BtdbccKjDNqdt9Fu2PtGc1d7zjvuRAC7KgYhQWYKCx7BRv5k/Cck6i9NfhwvqGLqmZo6eJNHxNPycXL03hXkZWDZZbXcdTwMG8dPMtNdfSKoRYAicN9yCHbVtqoN2ZVHluxCjf4bn3ySsCAQQQe4I7gj3ETpnxlg6Xm6P4MoqYatpOTrOkrQ2eoyvW8X1HO6RyVvT86/ELuLBvuO/eXnoawqsLUPFenYlaY+Bp2sYhwsSocMfFGVo+l5eQmPUPZpqbIuts4LsN7HIHeB2XERARMDWvXOFXqXq3P1ijr+tdTj6r1B6z0ul39Y6XLjv23237TPgJGuxW34kNsdjsQdiPcdvIyGXj13V2VWqtld6NXajDdXR1KujD3qVJH751t6CdKxsG/xXi4dFOLjY/iUrTj49S001qdF0NiErrHFd2Zj2+JgdlG5OYr5KLCpYJyHMqDsWC+fHf3zkn5roqPrOq6xm6Zi204njTofluvNNevU9LUcfT8P1dPVCBp1IerHag2DklmT27+1+lIELbVQbsyqPLdiAPs3MVWq/dGVgDsSrBhv8O0wvEOi4WfQcfUMXFzcYkO1GZRVk0ck7q5qvUpyHxnUHo1xa8Twl4h1fT6qtPTXPy3rGnVYlKYteLiJjNjaW9NNShay2JhY2R2H0r2gd11XI5YKysa24uFYMVb5W2+i3fyMkXG+243+G/edG+hvBq07UdEqu0nD0jI1bw/kNiNpeYbly6sVtOsvTWaziILc9Dk12reC3fIyxv33btHJxujfdk3Y+Paj5FPG1tmvrBFNKMgKeSuN9t/eYGxMwHmQN+w3O3f4QTKjMxq7s0Jai2IuISquAyhns4swB8mKgDeQ0ypb
9Oo6yi3/wAdG/OAN7Sp2Y8vM/XAugYVgd9iDt2O3uPwMqcByum1Mp2ZcFCpHuIoBBExdMx0quwemqp1sO3q8Rt1CnqzKX2+kwLudz8xgbByG+243I32377fHafZqOo8E65sQjL9drem01sW6RtqFZW7bYVivddt/iPfNugIiICIiAlbq/0l+w/jK7WdDyr7MplyV6eQuOKK+Do1BrS9GbqI/t7Pat47A8qlG47EWWqkcl7b9j+M1HRhV+f7m/2mRnJWRv5fqt7z8pkeQ+X/ADM6IjElyHy/5mOQ+X/MwIxJch8v+ZjkPl/zMBZ7v2V/CRnJYw7ez+qPefhI8h8v+ZgRmLquo4+JU1+VdTjUV7c7r7EpqXkQqhrLCFBLED94nB4o8Q4Ol4tmbqF1eLi0cRZdYW4guwRFCqOTOzMAAo98r/Emi6b4h05Krd78LLFOVRdj2shOw5VW1v8AssRsR+sYFZ6TvCWRrVOEMPVLtOGNeMg243JhfWyjgytXaN3X6Sk7j2z28pu6/rfZ/wDS/CYmkYNGJj0YtFfCjDpqooTkzcaqUFda8mO7bKo7mZiEbN7Pu+J+ZYHHE+2WooLNxVUBZmZtlVVG5ZiTsAB33PwmkeNfSEmLpQ1TScca4rZKY4GHczom4YvZY9KMwClVXYDztT3QNW8Q+NL9fvTTdCVczSc1bMLWdQqW2q/C6/Ks20vYQqItO9gchg+zKPKb36PvDNehaXVgjIe+rCF1jX2gIFVma5+KA7VUqCfZ+ozL8F6Ng4mOHw8CvTvXxXk5GOq8HS2xAxrs27Bk3K8R2Gx2AnW/ii3E8aZGTpOLlahplnhvLf1qwVA15Q5WYtnS43g12pZU/FrB+sTxPfaC98Q+KsvV9JbJ8IX05GRXlpTabaxU61qpawImcgQOedDbuPos+3faZnjTwC2sW6NlZeVbjZOiut9yYn6G289B7ekXPKr85UQH7ni7Daaxpejv4D0G5cLHu1yzI1Hqsldb461C6ta+bJXzcIBjou439q1fITtzCyTbVVa9LUtdXXY9Nh/OUs6hmqfbtzUnif2TAodJ8F6di6llatRRwztRUrkW9R2XZ2R7OFZbjWXeqskj5ftlzhYVNAcUVVUi12tsFVaVh7G+lY4Qe05+YzK5D5f8zHIfL/mZRGJLkPl/zMch8v8AmYH39X9//EhOTccfo+/4n4SPIfL/AJmB1j439G2kpk4+uBzpdeh2W6lmjDpRRk9EjJex+mOSvsjgkb7h2G3ebD4Q9Iuj6oKBi5dYvyzaKsS9lpyyat+f5gtufZUtuN9wp+B22u1UZSrIrK4Kure0rKw2Ksp7EEHbY/GadmejfSRfXnYWJThZ+FjW06fZQDXjY9jJcKrmw6yKrGR7nbuPf9Q2g3GSq+kv7Q/GdeeDNVt0PFxcPxNqmPfqOpZdq4bcrbOVZ6SLUbjUCwFjb87AAOuq79p2NURyHs/rD3n4yjiltpf6MfafxlXyHy/5mWum/oxt27n8frmZ4cR1nFsvxsimq58W3IptqryagrW472IVW+pbBwNiEhhyBG6jcGaz4f8AR7h6ffpV2C1mOui6W2j9JeJTLwt6XoGSSOTW1XVNYHHvycjf6Rm0annU4tF2TkWJTj4tb3X3WMFrqqrUvZZYx7KiqCST8DOGzWMRbcWk5FIt1JbHwq+ovPJSpFsteld97EVGRiw+dfjOas+al4o8HW5GaNRwdQyNMzDh+oX2VU4+TXdjCx7qeVOVWVW+qyy5lcf/ANzhgw2A22UXi7xjpWkLU2qZ+Hp65TlKDl5FdHVZduXDqNuwXku5HlyG8CjzvRvR+TNI0zDycjCXw5kY2ThXgVZFxsxqrqQbvWEKWF+u7E7eZ90uPBPhZNNGW5vvzcvVcn1vPzMgVLZfctNONUBXj1rVVVXj49NYVR5J33JJOT4i8U6bp2MmZn5uJiYlzIteTkX110O1il6wlrHixZVYjb4TI8Pa5h6jjpl4GTj5uNaWCZGLc
l9LFSVZRZWSvIMCCPqgWEREDB1jEuuWtacmzEau+i13qrpsNtVbh7cZhehC12qChZdiOR2IMzoiBw5tbvVYldhpset1rtVVdqnZSFsCOOLFTsdm7dpqHgbwRk6ZlZmS2q5OYuq5LZmZRbi4daWZRxsbEFivRSHQCnEpHEHbsfjN1lF4c8Y6VqVttOBn4eZbigm6vGyK7XVQ7VF9kbvX1EdOQ7bqw84Gt53oxrtvvHr+WmmZup06xl6UqY/RszabKsjZck1devFsyqKr2qB7sG2IUlZuesYl1y1rTk2YjV30Wu9VdNhtqrcPbjML0IWu1QULLsRyOxBlfd4y0lM8aW+oYS6ixQDCORWMjlYjWV1mvluLGrVnCHuQpIG0voGJrOF6zjZGPzar1qi2nqptzr6qMnUTftyXlv3+AmDpnhvGp0qnRyvUw6NPr00ofZ54yUDFKnh5b1jbt8Zma5q2LgY9uXm304mLjLyuyMixaqawSFHKxzsN2IH2sBOLQNews/H9awsmjKxwzobabFsRXqO1lblT7FikbFT3EDWPCPo9OFlYmVlajl6m+j4NunaUMmvFr9Wxrzjm97GxqVOTlOuJjIbG7bVdgCSTs1ukBmb864pstW56NlINilX7ORyVCyg8ftlb4Y8f6Jql3q+nangZt/Sa7pY2TXdZ0VKK1nFG34BrEG/+IS5t1THSzotdWtpKqKy4DbvtxHH4ncfxgRzsAu4srtamwI1ZZQrckYg7EONgwI3BHxM+nA40JRS5qWtQgPFXJQLx2PMbbnz3+qcmZn008erYlfPfjyO2+22+38R/Gfbc6laxc1la1OAVsZgEbl3XZj2O4gcOn4HSp6DubUCCtQyqu1YXhx9kd+3vMhgaX03R2te3oVmqgMFHBG477lR7bkIg5H5ZnVWK6qykMrqGVh3BVhuCPqInxbkLMgYFqwpdQe68t+O492+xgYF+k82be2zo2WLc9PslS6FWGzkclQsiniPgZZyD3KrKpYBrNwik924jk2w9+wG8nAREQEREBK3V/pL9h/GWUrdX+kv2H8ZqOjDr8/3N/tMjJV+f7m/2mRnRCIiEIiIErPd+yv4SMlZ7v2V/CRgda/8AUhj6LbooTXVzGxGzKOkMDiMn1kJcVKGw9MDpdffl+7vtM2nxtoulNoWj0Lclep4uImliukmqrGsCU4YuZ2DqXOy9gT2Jbbzlp6VdNz8vSsijTRjNluajWMqum2sqtis+y5KNV1eIOxcefw8xVaB4owjn4Gjal0b/ABFiYaWW2piqaUyGx1uvTHvKfm3aoF/YAG3w8oVv+0rNC8RYObZl04mTRkW6ewry0qcM1Ll+IDge7klg3HbdGHums+EvBmNomdrGpvnWMNWZsq9cl0rqx60d7Xsdy35xU6nEWNtsoAln4J0HRcT1vO0tccDWAMm/IqyDdVaosJ51s1hSukWPZ2TYbkwK7H8bYWoaxqHhp8a8vj4r+svYF9XtqdKRbXsrc1UpkqAx8/a+re78E+FMLR8Y4mn1tVS1rXOGse13tcKpZnc7k8URf/QSyzMzHo4vdZRSb2SpGtsSvqud+nUrOfbc99lHxM1fxDk+IF1vTq8OjHfRXr//AFG5ynVR+VvPfk/MbIKeIQHcl9+3kRWeM2u8RYir4b1qrGswc1fW7qLLO4Ctshen2mAOzhfotxPeb/j46JyIVA9vE3WKio1rKOId+I7nb4ym8H+D9N0hb007HXGXLsFtwD2PyYAhFHUYlK1BOyL2HIy+gfRNO8BazreVl6tXquBXhY+Lk8NOsTlvfWXtB3LWEXbItTdRQo/OkbfDcJQekunVbNPyF0WyurUCydF7OG3DqL1QhtUothr5AFht393nCr+JXeGPWvUsUZz1W5qUVrmvRt0myVUC7jsAPp7+QHv7CWMIREQJ/q/v/wCJCT/V/f8A8SEBERApvEHhbT9QtxbszGryLdNs6uI78t6nJVv1W2deSIeLbj2B2lTp2s623iK3C
swK10WuhHpzxy5vbxrb6fU4tvY1idLjuOAO+3nt8lV9Jf2h+MKjLbS/0Y+0/jKmW2l/ox9p/GZnhxz3VK6sjqHSxSrowBVlYbMrA9iCCRsZ03/0/wDhh8bUNZF+QcpPCt//AGzoSspBxNJWvG1VayxY9S415uHjmz3jTKZ3PKzRNCxsOzNtoVlfVsv13LJdmD5Hq+Pi8lDHZB0cWkcR8s5qs51n6Q9OzU1ivVNHOn6hnYelPi6joOXatV+VpuRebarcO/ucO5r8e1N7VKPwIJBXcdmTWfFXgnD1G+vKsfMxsqqh8X1nT83Iwb3xbGDtj2vjOOpXzHIb91JYqQSYGhahm4OTpXga/TKXxsC7W9KbDxrAVfGp9S1ADHdSx4vX3TYEj2Ox2l/6L1C6340RQFX8s6fZxUALzt0HR3sfiO3Jm7k++Xmq+AtNyMHB04V242Lo1lFunrhZF2JZjPjVvTSa7qHFnZLHHc99+8z/AAl4ZxNLqsqxFs/8m5sjJuvvuysnJyHVEN2Rk5Lm25+FdaDkewrQDYACBcxEQMDW8XIuStcbJOIyZFFtrrTXf1aK7Fe7GK2jZBagKcx3HLcd5nzB1jTEylrR3vQU30ZCnHyLcdmehxYqWNSwNlJI2NbdiNwQRM6Bj6nXU9FyXkCh6rFvJc1gVMpFhNikGscSfaBG06s9GeKdV1DA1nExvyd4e0HTMnS/DdTKyZGpY+W2FyzzUTvj6YKsCla0f2m5FzxHEHs/WtOqzMbIxLwWpzaLce8KxRjVejVWBXU7qeLHuJR+F/BGNp1lb0ZOq2LTV0a6MrVczKxlTiFUDHvtNe6hQAdu0Dp3EbOox9az7jp+VpdPjlxZp9uHacuxjrWJipmLqK5I6eTRacdkUIRxxApPtez+ipp+Z6ONLtzGzHXJ/PZdOoX4YzMhdOvz8fh0cy3AFnRe5TVU3lsWqrYgsAZsWsaYmUtaO96Cm+jIU4+Rbjsz0OLFSxqWBspJGxrbsRuCCIGp+mTCvso0vIoqOUdK1vAz7MNbKa7suukXI1WN6zYtVmShtW9UYjc4w277Sn9Emq9TUvGGVkY9mm1jUcK16sxqa3rrTR8AHJv6Vhrp511izudwOPLY7gb54s8O4mqYxxMxHevqVXVtXbZRfRkY9i3Y+Rj5FLCyi+u1FYOhB7TH8OeEcLBoyaK0suGpWWXahbmWvl35llta0s2TbeSbF6KJWE8gqKAAIGkYVd2B4yR3vqz6vGOm5dmM7U8btMp0U4JqxcbIW0pbhWnUbbSoUHl3JPkN3vNuO1+QllVldmTX1Kgm7Dfo45HUD+y67A7be7+GD4R9H2naXcl+P63Y+NjHCw/XM7KzRhYZZHbFxBk2Ho1k1U7nzIpqBOygC9fSaTYbCH9p1tasWMKmsXbaxq9+Jb2VP7hA+6xj+sVvjraKmcAtsOR6e/cFeQPBtiv8Zx6dkdbDrsKqpenfiv0VIUj2B7h2nPnafXawcl0dVKc6rGrYo3coSh7ruAZ9twENS0rzrrrACip2rIUDiF3U7ldvdAxtHuWvAosfstWHU7nz2VaVY/5CVvhfU0ezhzqezLRsm0q/IrYzACgfsUhB/wCpl3p2EtC8ENhUABQ7s/EKNgq8j7I29wnJ6svV63fn0+nvv248uW23x398DXLdZqGYzl696rVw60ZtmRCd8m/j7t3CLv8A/wCc2mcOTjLZw5b/AJqxbF2O3tLvsTt5jv5TmgIiICIiBjW59CsUa6pXVq1ZWsQMrXHapSpO4ZyDsPft2mNq/wBJfsP4zG1Hw6l9tlr2272MhUDhtWgpsx3qTdeyOttpJ892HfYADK1VvaXy8veN/fNR0YVfn+5v9pkZyVv38h5N7h8pkef1L90ToiMSXP6l+6I5/Uv3RCIxJc/qX7ojn9S/dEBZ7v2V/CRnJY/l2Hkv6o+Ejz+pfuiBGYf5KxfWfXPV6PXOl0fWujX6x0t9+l1uPPp7/q7zO5/Uv3RHP6l+6IHX3
i30UafqmqHU8q3KJfEOJZjpYEqZSllYYOB1EAFhbgDtuoPxB1vUf+nbR7Sxqyc/HQYy1LWtldg5ixWNjmxN3Rm3Y19hudxtO5ef1L90SSv2bsvl8o+ZZPFaB4g9FemZ+n6Zp2U2U9OiJXXQ63BLbK1rSp0ubhsVdUX6ABG3Yib3Jc/qX7olF4h8Z6bp+Th4mZkV05GpvwxUNbsGPJawXdEK0qXZV3cjuT9cqLuJ1piePNY1CjXasLSvVMzTLFq0t8ojpZrNZYp4dZFrN3RqNgVSVPVr3O3c/dc1LxeumaScTGwG1ax99Xpc08K6+RFTbG3iKyNuZrJIP0Y9V2VPuQoJYHfZtwdjxIB3HYjyP1zTvSXmeIK7NO/IWPiXVvkMNS6/TBSrerp/pHG1JU37sm7eymwlB428LeLL87VrcDWKMfEyqq10+hi6NUyvQXXdaD6q2yXfnkJJ6oGw909GN/0++As7w0mo42fmY91WoZqPpyq45sVV0dtii+29Yo9gbn8031TYPAPpHp1vE1HIwsTK56WzoKLgiPkWCt7Kq0YEhLWK8Sjd15Lv5zi1f0cJqaaHbq2Rdbn6AtbPdjFK68q4HHstLhq9wDbjoeScT5+XbbflIG+yoORLHZQN2Pmx2HmfjA6303x3q1nh/J1V9GvTPx7TXVp5W5WurD0qcgVNX1+Ciywldtz6u23Y9tz8I6jfl4GJlZOO+Hfk0pZdivvypdh3U8gGHx2YA9xv3lvz+pf4CfOf1L90So+/q/v/AOJCcvL2fIefwHwkOf1L90QIxJc/qX7ojn9S/dECMlV9Jf2h+Mc/qX7okqn9pey/SHuHxhXHLbS/0Y+0/jKvn9S/wEtdNP5sfafxmZ4ccuTelSPZYy111Kz2O5CqiKCzMzHsqgAnc/Can4N9I+matctGMcyuy/HOXh+uYGXgrnYasitlYTZdSjJpBtq7r32urO2xBmT6WkZvD+uqoLM2j6mFUDcljh3gAAdySZxeFNWwOjoWOz1Nl5ulDIwAENjNjU0YYyba7VUrXX+fxhuSN+a7bzmrbInQuo5uUcLVfEB1DUE1HTfE9+nYuMubcmnriY+t16XRgPpYf1a034pRi7qXJyQysPZ23f0nYtmXq3h7B9az8bFzG1Q5iYGZfhNkLTio9ddl2M4sVQ5DboQe3n3MDsOJ1v6TsbK07wrl01ahnPk46011aibAueq2ZtKoxuQe3alThObd247tuSd9o8I+Gfya2QEzdQyqMk1PXTqGVZnNj2qHFz1ZOSxv4W7oxrJ4goeIG5EDYIkVcHfYg8Tsdjvsfgfge/8AnJQESu13TkyUqV78jHFWTj3hsbIbGaxqLFsWixkO9lDlQrVnswJBljAREpPDmRqz2XDUcXTsapdvVmwtRyM2yzu2/Wrv0+sU+zxPslvM/vCsy/SNpdWa2Cz5HKvLo0+7KXEyG0+nUMla2owrs4J0UyHFtI2J7G+pSQzAHbp0J4nsB0XXKwQbB4902soCC/N9f0O5E4+fI1Oj7fBgfKd8u4G25A3IA3O25PYDv7zArPFXiDF0zGOVls619SqmtaqrL77r8ixaaMfHopU2XX2WOqhVHv8AhMfw34sws+jIyKnepdPtspz68up8W7DtpRbbEyargDXtU6WcvIrYhBIIMo/S64X8hMxCqPEmlqWYgAGw3VVgk9t2ssRAPeXUTUrVLp6TeIL8rGRePfk6+FdMUoNvNgSBt9cDevCHpB07VLlox/W67L8b17EGXhZOF65hckQ5eIcmsdakNZVvt3HWqJGzDfZrclFsrqO/O7mUAG/ZACzE+4DdRv8A4hOn/R8+Vhal4YxjqF+q0654dzcq18pMRhjWYi6Qa7sF8ahWox7VzChrJIPSp94JO+5Ge6ZTWGu5WbIrxa96HK+rKTzKPx2LPZue3uVYF7n6jXSyIwsZrQ7KtVb2nZOIYkINwPbX+MnfmqlQtZbeLAEKKnawchvs1aryUj65iahjJbkJtfZVb
VQ5Ar4jet3TdiXUggMi9vsktMyWuwktf6dlHJiBsCeJ7ge4Hz/fAzaMlHrW0H826CwMe3sMvIE7+XaY2DqlVzBV5gsvUr51vWLEGwL1lh7Q9pfvCY+nVh9NqVjxV8GtWbbfiDSATt79hMLBe1sjA5iriuNfwauw2CwcaB1Nig4IfZ/iYFq2qV9VqQtzMjKjlKXatWcKw5WKOK+y6nv8ZnTXssCrr5NF9hY5da2VEL0y5amh6ypXkTx2O4PwmwwEREBERASt1f6S/YfxllK3V/pL9h/Gajow6/P9zf7TIyVfn+5v9pkZ0QiIhCIiBKz3fsr+EjJWe79lfwkYCIiAkk8m+z/6WRkk8m+z/wClhVB4/wAXUbtOyatJurxs91T1e2z6K7WIbRyKkIzVB1DbHYsPtmoeM/Den36HV/3Tbjvm4WBZ1c+oD1hHAU22YqBN7iGFfbjsT7hvOzJ1LZpvh3xPr9lxfJvy/DinFycWxFGFkpXbkJ3W1CbES57VOxG/s+Y83RrHijwdi5XhXSMrQ31DUxoN1l2CgDCy7rZinL6mMiixWqsqYAVbEAHYnzmwnwfVq+ZfqiahZpniK/SK6svApyUsfTLsjHWr85WhGRVXxP0SRsW5ec2XSvRjh4ut16xjW2Y6UYvqtWn0oqYyKUNZCle4pJJs6e30yW3lto3gbTsTU8vV6a3XN1FGS9jYzVgWNXZaa6z2Uu9VZP7PbaTw9Ynga+rTcA4Gdq1GdmaLRZdqVz3q11FJZ7la9Xc2pUlRVQ9nmEH2S/8AD/iDC1Kj1zBvS/GdnAtAZNmQ+2rrYoasjz2YDzEpMX0eaVXnahqIoL36zTbj5i2uXoerI4HIRaj2AsNaE7/Dtt3l34f8P4Wm0ep4NCUYyM5FQLPuzn22drGLWE+W7E+QlFTiekDRrcLI1GvOpbCwX6eTeBZslhKhV4FObli6bcQd+Q23lzhavi3Yq51V1b4llJyFyN9q+iFLNYS30QADvv5bHeU+H4A0arCyNOrwaVws6zq5NANhDuCpVuZfmhUom3EjbiNtpqXoP0lcKrUtEydVxtXbFfpnCRncYmMVaqyt0t+iHZtjUu4U7jfcmPqNwxvHWkWafbqqZlRwMdjXdkcbBwsDIorNbJ1eoTZXsu256i7ecz/+4cL1H8p+sV+odH1j1nv0+j8223Lf3cdt9+228wMbwLpFen26UmHUuBkOXux+Vh52Eo3UNjP1eoDXXs2+46a7eUz/APt7B9R/Jnq9fqHR9X9W79Po+XHfflv7+W++/ffeFZHh/WMXPxKsvDtW/HyNzVanIA8SyMCrAMrBlZSrdxsZmTD8P6Pi4GJViYdS0Y+PuKqk5EDkWdiWYlmYszMWbudzMyEIiICSq+kv7Q/GRkqvpL+0PxhUZbaX+jH2n8ZUy20v9GPtP4zM8OMqUHhnwZpOmWW26dp+FhWZA42vjY9dLMnNremCi+zV1GZuA7bsTtvL+JzVr9/gnR3zhqb6dhNqAdLfWzj1m821J06ri/Hdrkr9gWHuB2B2mP4k9Hmhalket6hpen5uSEWsX5OLVdb0134oHddwo5N2/wARm0RA13WPA2jZmJjYOXp2Dk4WnBFw8W7HrsoxxXX0kFNTLxQLX7I290zPC/hnT9LqejTcTGwabrDdZXi1JSj2lUrNjKg2L8K0G/8AgEtogYenaXjYzZD0U1UtnXHJymrQKb8golZutI+nYa6q15H3IszIiBh6rpeNlrWmTTVetF1OTUtqBwmRjuLKLlDeViOoYN7iBMyIgIiIFFf4P0qzPXVHwMNtRThxzGx6zkA1q1db9Qjc2LW7oGPcBiB2ljqul42WtaZNNV60XU5NS2oHCZGO4souUN5WI6hg3uIEzIgYWt6VjZ2PbiZlFOVjZK8Lse+tbarF3DAPW42OzAH7QJDQNEw9PoGNhY9OLQrO/SorWtC9hLWWMFHtWMxJLHud+8sIgUPhnwZpOmWW26dp+FhWZI42vjY9dJZOb
WdPdF9mvqO7cB23ZjtLu2pX48lDcGDruN9mHkw+BEnEDHzcGm7YW1pZw3481B2389t/cfhPuTh1WIK7K0ZF2KqQOIIBA2Hu2BInPEDHwsOqkEVIlYY7kKNtyOwnzEwKKSzVVV1s/wBIqoUkb77dvIb99pkxAxnwKDYLjVWbRttZxHLcDYHf4gdt5kxEBERAREQMXVNQqxqzbcxVF8yFZz7yTxQFiAATv8FMx9X+kv2H8Zk6lg1ZNZquXmjEEgMyHt/iQg7Ebgj3gkHsZj6s5DLsSOx8jt75qOjCr8/3N/tMjOSuxt/pHyb3n5TI9RvmP8TOiIxJdRvmP8THUb5j/EwIxJdRvmP8THUb5j/EwFnu/ZH4SM5LLG7e0fJfefhI9RvmP8TAjEl1G+Y/xMdRvmP8TAjJJ5N9n/0sdRvmP8TJJYdm7t9H4n5lgccw8HSsWiy+6jHx6bs1g+VbVTXXZkOu+zXOi8rWG57t8TM7qN8x/iY6jfMf4mBGJLqN8x/iY6jfMf4mBGSt+kftMdRvmP8AEyVtjcj7R8z7zA451h6JfCOfh6tr+oahjYlR1LI3xLaHBZ6mttss4orbV1v+YY8gGLBt52j1G+Y/xMdRvmP8TAjEl1G+Y/xMdRvmP8TAfq/+3/EjOTqHj9I+fxPwkeo3zH+JgRiS6jfMf4mOo3zH+JgRkqvpL+0PxjqN8x/iZKqw8l9o+Y95+MDjltpf6MfafxlX1G+Y/wATLXTTvWN9z3P4zMsOMmIic1IiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICCIiBVeJNUGHUlnTV+dnDYv0x9Cyzs3E7u3T4Bfe1iDtvLTiPgJ9iB84j4COI+An2IHziPgJwZ9601W3MUVaK3sZrG4VqEUsS78TwQAdzsffMiIGv8Ah3xB63aajSlZFKW7rb1COSUv7S9Mcaz19gx8zTd2HGX/ABHwE+xA+cR8BHEfAT7ED5xHwE1zxF4oXDOUvR5HDoouBsZqa7epZ03VbDUV3XlT3G+5t27bGbJED5xHwEcR8BPsQPnEfARxHwEr9P1eu7Jy8ZVYPp5pFjEpxbrobFKANy2AG27Adwdt9jLGBU+LdW9QxLcrpC3omscDYtI9uxKyTYw2UANv9e2w7z7oGqjK9a9hU9Ty7sX2X6nLpEDk3sjpufPh323Xv3lrED5xHwEcR8BPsQPnEfARxHwE+xA1ZfFe+S2P0FHHL9WDG8bt+cSrdUFf6Xdi/TO3s1u2+22+0cR8BPsQPnEfARxHwE+xA+cR8BKLxD4gXEtWorQWsoa5epkCkqFyMXHLWDpkpQPWQxs77dNu0vogUvh7WxlvenSFXQTHfYvysHXVzwtr4jpWDhvtudw6H37S6AiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPED2YieM8QPZiJ4zxA9mInjPEBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBE
RAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQERED/9k=\n",
"text/html": [
"\n",
" <iframe\n",
" width=\"800\"\n",
" height=\"450\"\n",
" src=\"https://www.youtube.com/embed/DbJyPELmhJc\"\n",
" frameborder=\"0\"\n",
" allowfullscreen\n",
" ></iframe>\n",
" "
],
"text/plain": [
"<IPython.lib.display.YouTubeVideo at 0x7ff95398cdc0>"
]
},
"execution_count": 81,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from IPython.display import YouTubeVideo\n",
    "YouTubeVideo(\"DbJyPELmhJc\", width=800, height=450)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prelude to Part 3: Some tips to make nicer figures."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "Before we even start visualizing some cool data, I want to share a few tips for making nice plots in matplotlib. Unless you are already a pro visualizer, these should help make your plots look much nicer. \n",
    "Paying attention to details can make an incredible difference when we present our work to others."
]
},
{
"cell_type": "code",
"execution_count": 85,
"metadata": {},
"outputs": [
{
"data": {
"image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAUDBAgICAgICAgICAgGBwgIBwcHBwgICAgICAgICAgICAgIChALCAgOCggIDhUNDhESExMTCAsWGBYSGBASExIBBQUFBwYHDwgIDx4VEhUfGB8YHRwbGxobGhsaGhkVHh0eHR4YHx4eFhoeHx0YGh0dGBUYHRgaGRcdFR4ZGhUYG//AABEIAWgB4AMBIgACEQEDEQH/xAAcAAEAAgMBAQEAAAAAAAAAAAAABggEBQcDAgH/xABWEAABBAECAgYGBwMGCgQPAAABAAIDBAUGERIhBxMYMZTVFCJBUVRVFSMyYXGBkQhCoRYzNFKCsSQlNVNicnOSo7NDRLLFFyY2RVZ0dYOipbS1wcLR/8QAGQEBAQEBAQEAAAAAAAAAAAAAAAECAwQF/8QAMREBAAECBAQDBgYDAAAAAAAAAAECEQMhMVEEEkFhgcHwE3GRobHhIzJSYtHxIiRC/9oADAMBAAIRAxEAPwCmSIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/AIzI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8AGZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/wCMyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/ABmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P8AjMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/wAZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/AIzI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8AGZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/wCMyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/ABmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P8AjMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/wAZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/AIzI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8AGZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/wCMyPlyC/6IiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIijg15hDP6KMvjfSOLg6n0+vx8e+3Bw8f29+
XD3rVNFVX5YusUzOiRoiLKCIiAiIgIiICL4ErS4s4m8bQHFm44g0kgEt7wDsef3FfaAiIgIsTI5OvXMLbE8MJtztr1hNKyMzTvDnMhiDj9ZKQ1xDRz9UrKcQOZ5Ad5KtpH6i0uE1bi70r4KeRpWpoty+GtbhlkaAdieBjieEH29y3StVM0zaqLLMTGoiL4bK0uLA5pc0AuaHDiaHb8JI7wDsf0Kyj7RFiVMnXllnginiknpOjbahjka6Su6VgkiEzAd4y5hDhvtuDurYZaLX57N06EXX3rVepDxBgltTMhYXnchoc8gF2wPIc+RXricnXtxNnqzw2YJPsTV5WSxu25HZ7CQdk5ZtzWyW02uy0XxLK1u3E5reJwa3icBu49zRv3uPuX2ogiIgIiICIhKAi0+N1TjLLxHWyNCxI77Mde7XlefwZG8kr0y2o8dTeI7d+lVkcwPbHatwQPLCS0PDJXglu7XDfu3afctclV7WXlnRtEWrp6ix84jMN6nMLEroYTFbgkEszGCR8UfA88cgYQ4tHMA7rMs3oInxRyzRRyWnFleOSVjHzPa3ic2JrjvI4NBJDd+QUmmYymC0shF+OIA3PIDmSe4LT4LVeMvySQ0shStywDeWKrahmewA8JcWxuJ4d+W/crFMzEzEaERMtyi0ud1ZjKEkcN3IUqks+xiitWoYXvBPCHBsjgeHflv3LctcCAQQQRuCOYIPcQfaEmmYi8wTExm/UXm2wwyOiD2mRjGSOjDhxtZIXtY9ze8NcY5AD7eB3uKx58rWjsQ1HzxMs2mSyV67pGiWVkPD1ro2E7uDeNu+3v+4qREyWZiLSZDWGJryvgsZTHQTRECSGe/WilYS0OAfG+QOaS1zTzHcQsnC5+jd4xSu1LfU8PW+iWobHV8fFwcfVOPBvwu237+E+5WaKoi9sjlm12yREWUEREBERAREQEREBERAREQEREHM/2irk7cdSpxTOrx5zM0sbcssPC6KrY610uzv3eLq2tJ9znD2rc1Oi3T8dZtUYqo6NgAD3wgzEt7nmb7fHvz33W71lpurlqU1C4wvgsgblp4Xse0h0csbv3ZGuAIP3c9wSFAYdDaqjDazNVE0mFrQ+THQOvdU0j1DYILnO2G3HvxL24dcThRRFfLMTO+fwjXp6l3pqiaIi9nzd6RMzZs5AYTEQXKODnkrWZrNswTWZ6/KeOo0NIBaQQC7ffbf27Lym6Vrt2xj6+DoV7Jy+G+ko3XbD4eocyw+GWOYRghwb1ZbyI3c4HfZZed6Mb4nvOxGblxlTNyvmyNMVopgZpRwzy1ZnevWdINyeH2nkeQA2mmejSHHX6FutMRDjMM/Ftruj3c/jnNg2DLxfaLnO3bt7e9dJq4WIvERO2u3XvfbJq+FEf380V0x0wZC19D25cXBBi85djxrZRbc+yLjmPLpGR8Ab6NxxvAB57NJ335L3i6R9Q2m5GXH4WpNXwl+/WnfLeex9llOZ7OGszg5S8DNySSN3bAHbnscV0TmDG4TH+ncX8nsw3JNm9H29IDTP8AVFnH9Wdp/tAn7PdzUN0PpLL34s/HTzE2Mgtajy8Vus+pHJxxPsO3lryvAkrvex2xLTsQARseZ6/6tV6qYi0T15tLz43s3+FN5jz3lP8ATfSW3IX8RXrwD0bOYafIiV7yJoZIZWxmAtA4XAHjBO/e3lyUE1t0h5q1XpTUYYq4g1g7GPLLksZsSV7Bjq15QG86s46zrO/h4G8jvymOS6K3RR4p2GyD8ddwNWWpDYfXjssmgn2dK2aGTlxce7gfZxHkeRH5U6JxHjqFL01z5aWfgzlm1JFxG1YjkdJKzhDh1YeXd/PbbuKxRXwtExVHwm/7vszTVhRMTHrX7Pi/r3MzZKTF4rF1bE+KrVZcy+zcdDDHYswtmFWtI1h4jwu5SEEd/Ibc4prXpDzNzBm5Wrtx8lfUf0fZDbb2TxdRbgZDESwbP6x7nRyAHYDfbdTnUmgLpyk+Vw+Vdi5sl
FDFk43VIrUc/UN6uGZjZeUc7WAN32I5fe7fBh6JnNw8+KOQdI6fNtyvpckO8ji2zDYLJRx+u9xiO79+ZcTsrh4nDU8tVo/53v3v010WmrCi0+7fxalucr47OZXJ5Ci2C9S0pTtZCSrakma9zpTG6rEx4DCOKGNrX8t9xvtzK2GL6RszDLj5czh4aeOzdiKtVnrWzNPVms/0ZtuNzRu1/dxN2293sUhzPR9DcyGStWZDJXzOGixc1UN4XMbHJJJ1rZd/tfWcuXItBWhxfRbfdPRGUzk2Sx+Enjnx9J1aKFxlhG1d9qZnrTuYO4nn3+8rPtOHqi9W0b7dPHfJObDmM/Pbo1DelzLCB2RdiqoxdTLnG25hck9JeTc9FbLXiLNtm8TNw483EgbAbre3de5azlLtLC4uC5Xwj2R5Cxatms6WZzQ90FXZpaHgct37gkHuGxP3L0Wl2EsYf03+k5Z2RFnqPs732XeqMfHz+zw8W/t32X3lujy8zJW72IzD8YzMFjsnX9Fish8jG8HX1nS7iCYt357HmSfuCa+FmZtERrbW3S1++pM4Wdu+/b7odqDVUuZpaUvTVxWkdrWKF0DXF3B6O67CNy4bh+zBuPYd12PWWIOQx16iJTCchTsVhM0EmMzROjD9gRuBxd243G6guL6JzBRxFL04v+g88cv1zofWsAyTv6l+8m7XfX837nct7uanuq8JFkqVmjOZGxXIXRPfC8skZvza9jh3Oa4A+7lzBG4XPHxcOaqfZzlEz4Re8M4ldN45en8uNdH9Srir+Kx2bwNelkIS+LEZ2ns+rdlbEWuD5G7Pinewnk/fck8m7gHZnpXy80FnL08LFPgKL5eOd1vq701au4ia3DERwBrQ1zuA89m9457Z+L6M8q+5j5MtnDkaeDmE9GAVGQSulY3gjfZkbzleG8tySTz953x8h0QW+CfH083PVwN2aSSfFCtE97GzPL5q8Fo/WMruJPqe47Hfc7+irE4aqu9cxM5X/NbWb26307Xu6TVhTN6vOzKvdJGSvXZKmnMfXutp1q1i3avWHV4t7cLbEMEQaN+PqnsJcTyJI25bnX1M5Ux+a1DlLNB1e3V09jbl8ssuldI50RBqiM/VB7TBEwPB2O2525k7fL9GNmGy61gMq/Dvs1a9W7H6NDaimZVjEMEzWyj6qw2MBvEO/Yd3PfOj6NmST5KS7afaZmsLUxdppZwSE1mSMdZEgcfrHl/F3ciPauUV8PEZaTEb31pvfpvZiKsOI/u/RGn9JeoK9aldvYSpFUy12jDA+K898leO5K1o9JjLN+Msdu1w2G4AIHEFuOjj/wAp9Y/7fC//AGxi1juijKzQ06tvUMlipiLdSejAaMTC5lSRrmNtSNdxzPDG8IJOw33IJ22nGm9K+h5TM5HrusGdfSf1PV8PUeiVhX24+I8fFtv3DZMTEwIoqii15jpf9VMxr2iSqrDimYjz3jdGOmvSNy7Pi8lTq1sk7Cus9bh7ruGK0yy2MF0bj6rZ2GIbcXLmO/bZ2kwevcbjsZK7E4Z9bJXM0KEuCd9Q5uWmj4vrXgFog6qPcOaACGgbN57TPpC0bbu2K2QxmSkxuRpRSQNkMYnrT15XBzop67/VPrAEO2/uaRHq3Q+59Oz6XkpZMxbyUOV+l4Y2xmC7XYY4HRQ/Z6prHObw8uTuW2w2uFi4M4VNOLOnTPeZz6THXdaKqOSIqn6+rfNpukO3fnqY8ajxFVr49S4tlJ1LITcAM7Zw6YFvrCWPhI4XbtPWAjuX3rnptkq5C5TpRY4x4lxjnfk7zq0tuZg3khpsYw7cJ9Xif3kHltzO+sdHWUuQRsymcN2avlqGQieKMUMTG0hKDAyKItDTJ1u5fzO7R3pqPoytuu3LWKyjcezLvEl6CahXuAT8IY6xVfMOKCRwHPblvz922qMThsortNr/AKrdO1/ksVYelXnZiO6VLt2bGQYXHw2HZvEyXmG7ZdCKr4rBhmbOY2u42MLHN9XmX
Fu2wXjW6W7tmtRgqYyOXN5C5kKjqjrJFSA4x4ZasOm4eJ0XrM2by73czwjilmK0MYMlj8h6W6U43DyY1zZImNfO6SVkpsvfHs0PJadwG8y7fdR09EckcTJKmTfVydPK5K/RyDK7HNZHkn8U1WaB7i2aLYAbn+r3cyFmmrhJyt9f3a9tGYnB2+vf7MTV/SrkcTXpwX6NGvlr89hoEl530bFWg6v/AAt8waX7PMoaI+/dr9zyG8g6HekducFuCRkDLmMcwTGnObFSeKUO6uevKWh3Du1wLSNwQOfPlgZHozv2Yak0+ZfJmsdPYkgyb6UDoDDaEYkpyUiOrfX+raR3EHcjvKkvR3pi1jmWHXbwv2LcoeXtqw1ooWhob1UEcTRwx8gdt9t9z3kk4xauG9jamI5vHfplpbvfslc4XJlr4+rM/UusMXjHMZkMhVpumY58TbU7Ii9jSA5zQ48wCR+q3DnBzOIEFrmbgjmCCNwR7wsLL4KlcLTbqVrJjBDDYgjlLQ7YkNLwdgdgs7qwG8DQGgN4QAOQG2wAHuC8c8totr1ccrRZwfod6N8NldK0ZLdOFtmaKyTkIwI7MTmWZ2xyiYbHdga3v5eqN1ptQcGU0EMtehisZOo2KnFk3xgzyQwZdldjxK71jxxl2/vL3n2qXYfobyUVNmKk1JZ+io2uY6lUpwVnPje9z5I3WBvKWPLnbgkjZxHdyU01boGC3gX4Kq4U4OrrsheGdZwCvYin5t4hxFxj5nfvcSvq1cXRGLzc9/8AK/XKM7/Hr0yeqcaIrve+d/dCH69wdOhkNIRUq0NWJ+ekkdHBG2NhldT4S8gfvENaN/uWz6Yv8s6Q/wDbM/8A9KVJekTRzcvVgjbYkqW8fZiuY+9E0OdXtQghryx3KSMhzgWHv3+5R/A6AyUmTq5POZVmQfi2yDH161VtWGJ8oDZJnNH2pCAP0C81GLRNMVVVZxFUWzvN72+u7nTXFomZ0ifnf+X3+0lclh05cETzH6VLUqyyNOxbBZtRRTc/YHMc5p+55UV6TtL0cEzTd7Gwx1rdTNUafWQtDX2a9lkjbEcxHOXcNJ3Pdu73rresNP18rRs4+00mC7EY38JAc07hzJGEjYPY9rXDf2tC59hui/Ivt0ZczmTk6uDcH46sKrID1jQGxy2XN/nZGtAG5JPI8+Z3vD41FGHETNrXvG94iPVzDriKc50v45PvpC0/p7FDKZjKxenS5hzY2wWQyaWSTqhFFToN4QWEhg22+zwkkgAleOkc9LpvSePOTa6S71Qgo0eP6+eWRzjVqAn7IawtBd3Naw+4BfWuei/J5LKtyYzbITUJGPruxsc8dRh234WTSOY+YkbmQt3Ow7gABJZOj+ter1WZ5sOYt02SMFyWEQcTZH8R2hhIYwkNYDsOfAFZxMPkpiurm0mdb5RaIi+Vt8/os1U8sRM33QPojt2KeV1NZy1pk9lmOxt66+Ih0cTRDbnkgrtBP1MLSGAD3c+Z3Oiweey1eabW2QxsNiheZHHGG2T6ZjcU6ThjdXh4Sx7fXa53MFxc4+qCdukaZ6JMXQvZCzDBC2vkaTKcdRkZaK8T2OZca1/GeJs3qEjYbFgUdd0P5N8DMRLnpJNPRSNLaBqxttOgjeJGVX2gOJ0YIHt25DkNgB3jHwKqpmZ15b5TGURnEW0n5d7N+0w5mZ93w8G16YY8LVqG+cVRv5PLPjgxsclWOSW5bmYGwF+44nRsYGucT+6wDcEhb3od0PHgscyA8Lrdk9ffma1reOd45taGgBsbPsgAbcvvX63RHHm2ZWzM2aHH02VsPS6vZtJzhtZmJJIklfwsAdy2AA/dBUzXgxMb8OMOmb9Z8o8Pr4OFVf8AjyxPv9dhEReZyEREBERAREQEREBERAREQEREGp1LqGtjhVNlzmjI36+Pr8DHP3s2iWwtdw/ZaS07uPILbLmX7REj4qeHsiC1YZj9UYi3ZZRqTW5m14JJHSyCGBrnuAHuHtHvXPukb
NPv5avdnOroMRPg+LT8eEq5GrJ9NMt2I7Lb9eJgkjtcDavVi0BCWF5PIu3CxyKpkmX1DTw74Z49Qy29Q6Ew9PFOhivyvjzkb7kVrr5R/k+2BPA90kpYS1m+5LdlM7eCzz5NaX60+W+k6ULINPVpLM7afHNgaHpE1Su89TNOZxIGu5tbIw7bEu3CwC0urtTUcTDFYvTNgjtXalKJx/fs3JmwQt/Ddxc4/utY8nkFxDoPeRqKm2hJq2TF/wAm7vpp1IMiKwyptY4lsfp4B9K4BJxBv1fM9WecqzenLS2T1Vl3YyCnXfi8Bj5DM7KuuVq1nJ5avLDHNVfFC70h9OsS4PaS1slpwPrR8g7Hk9S1q+Qx+MkL/SsxFemqBrN4yzHtruscb9/UO1mLb38/ctyq3WbGorjtOyipYbnsLgtbYuexJWkFc5mvVx0FKyJpYxE6G0+Bs0bzsx4c7bk07RTourZ11bNtfltQMjl0zMLpdiNQOtVso9zQ2xX+krL3yZJg60OZSLWOadweJrCAt4iqXjbmT4NKTcGonej5CeuMc1+omx3YzlomfSTbs31kLBCHOFTJBzeoL2hw3DjkZd+SbftGaXV30+dZVeKOsL/0H/J8Ziua7mdUPRPQfQ+DfY9bx8fH6vW7haxaXNamq1LuOoTGQT5t9mOnwsLmF1WD0iUSOB9T1N9veVwPTuKz8E+Nv158y6/ksvrGnPDfsXJaEVaJmZkwrZa0+8VeD0mKm9khA3EgAJaQFpejirM/L6UfwarlvVfpN2opM3FkH1K2SlxsjD1UlpvVxyOkDw0wERuaGb+twgBbBY1q/BFJBFLNFHLce6OrFJKxkliRkb5nxwMcd5XiOOR5DdyGsce4FVVyeAz9fSen5xdznFlJYJdVS2n5i5bgayrIytC6tRlZegptkDWvbAQ7cRufxetvsMDpLI5GPRUmSsZqx6Nn8qxloDJ46eDGijcfUfZD5nTMJliYxk0zhI6KZrHc3OCCwNnW+KZYx9X02J82bsXK2P6jjsRzWMeHG5CZ4GuiikjLHtIe5p4mOb3ghSJVq6JNNXcZNpuONuWbDLqzVrsgyzJdljbA2LJxUpJxLuI45A2B4c7k97w8budubKoCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIuVSZW/quSWPD3pMZp6tLLWsZ2mGnIZaxE50U8OGlkBZVoxPDmOu8LnPexwiAa3rHavBaen0/q3F0qGQydvHagxOVmyNLJ5GfICtNjX0upvQvsuc+J8jrgjcAdjv3chsHaUREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBRvpThfJgs1HFI6KSXDZJkcrDs6N7qczWSNI7nNJBB+5SRcs/abztmHDHFY4B+W1bKMNjmcXCGC2Ort2nuHOOKKBz/rB9l0kSDSdGXSvho8Hh8fgoLWau18NQYMViIesfWd6Oxm2SuSFtXHnrGvDnTyAkh2wceRn2hNO22T2MtlnwvyuQjZD1NYudVxlGNznxY+q94DpfXcZJZyGmV5HJrGRsbzn9h7TTsXpmaCaIR3PpzJx3wCHEWKc/oLmFw5EN9G2G3LvPtK7sgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAi
IgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIC4n6V9Ma4rbF7q+nRedDtwmLjo12UrL9wN/rrebkjLSR6+mwQORXWNX5uLG4+9kZ/wCZxlKxblA7yyvC+VwHvcQzYD3kLj/7LtB7Ppm/aIM1b0PFWH+z0utDJmM3IHEbnfLZzItPs/wcDkWlBJf2cbgmpZpzeYGstTbfg/KzzD+EgUX6QN9WUcldOes4TS2G9LYy1i3sbPk7NElti9PPzP0ZDLHJHHAzYzOY95cB1K5v+yF0pVINO5uOw9tnK3c9ftUcHFK03Lpu1KzwyGLfjbWEkc5fOdmRtD3vLQ0lRroA6D9T5zTrKWQyr8VpfJXIchHj2xia5bY08XHDxbCtUl9R7eMvaXRxyiM8nPC1vQI7JHTOEdl5DLkH42B9iR7nPlc14LoDO9/rPs9QYeMncl/HuT3qbrzrQtjYyNg4WRMaxjR3BrQGtH5ABeiAiIgIiICIiAiIgIiICrPrrXT9Q5OzXxl/LPr0L0mIxeJ03km4u1mMnWibPlL9zJFpNXDVWS12B7dw97txxBzQ7q37QGSmr4mKKKV1WLK5fF4u/kGPMbqNDIXYq1uw2UD6h5jeYmycuB07Xbt23EU6KejfB0tXZrJYaGGvDj8VQxT4Kjz6OzITOks3hwAkNkFaPFb7H7U0pPrElBlfswa5vZOLMYzKC0MhpfJmpJ6ea7rpqyhzqwty1GtgsTNMc7OujaGyNjjf3uJPY1xjoToudq3pAyDdjXsZLD0WOHcbFDG72m/i02o/zcV2dAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREHPv2ha5nwT6m+zMnlcDj5/vr3s5jqthh3/ddFK9h+55UR0vZlqdH+SvM2NzJVtQ5GMgfbu5S5flqN7xueOeuzv8AYFtP2vq9mbSOQgpNkddtXMNDSbA4tmdZfmsf1AicCC2Tj22O42Kqx0GahzdjD4agMjJYxz9eYfF5LF2I2SPq1nWquSqS15i3rYoXy1LzXsc4t+pYGgcTgQsjgP2eKdC5ZFOWvWxGVjrtylOGmfTrccMMccuN+kHykQYieSPrZYo4w+QySML+AtDe4MaGgNaAGtADWgbAAcgAB3BfqICIiAiIgIiICIiAiIgIiIPG7VinjkhmjZNDOx0csMrGyRyRvBa9kjHgtewgkEEbEFVX6EdUvx1jWGm9J4k3LcOqrs1OaeaCDEY+CcQ0w+050gsPr15ak2zIY3l7WsaHAuDlMOn3pluR0blfTELp5I7cGMt6hkLY8dQtW52VRDTe875G+x0g4hCHti+07ctLVFOirotp4iHSuYoiavmxqCbD6hmFy1LFe6mTJ0MpE6KV/Bwek0+NpDR/NNO2+xAWH6OdKx4fHxU2yOsSl8tm9dkaGyXb9qR1i7ckA5NdJNI93D3NHC0cmhSJEQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERBEOk+UMbieLbgfqLFsfv3bmV3Vfn1vVAfeQqmZ/EXNL67txRRN+hs7qvS1p0j3cPUPu5Ka5XkY0d7WmtmoAO4Abk929mf2m4ZTpbK2K7+qs4llfLVJQATHPircGQjcAeR51tvwcVx39qTNV7+msTqeIPZXzWNbVmMW5krWJY2ZbETOLN9n1sjRdAXDfZt+wBvxcwtWijfRfqhmaw2MyrBw/SdGGd7P8ANyuaBPH94bKJG7+3hUkQEREBERAREQEREBERAUQ6acoaWnszaDp2ej4yy50lRzWWY4+rLZJK73AtZO1hc5riCA5rSVL1qdZYRmSx1/HSco8pQtU3n3NswPhJG3MEB++/3IKk9N2IfJonB6lP+CV8fcxNzEaeoyf4ux+NmcepZIS0Pu5J7ZIXSWH7bEuaxrd5HSd7wtSSbIy12M+oxesHZBpbzDqmQ03NbE/LuByVydvP2s+9c
P6O87WtYLD6CzLuvsZCStFWc1h9ak708XIJDv8AU28fcp2ax5jfq6r278R4evdCGQk6/Hmy5xtZHTLMfdB34Bk9IZCbG5F3MDaWSXI+71m1gRyag7Gi53prpWrXtS5DTsdSw36Nge6LKO51LdmqawyNSEgbdZX9NqB3rE7veCG7MMnREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQQjp8idJpjPQt247WJt12cXdx2IjAzf8AtSBcE6XsDBjNJ6u0wA9tHBZDDZTFetu6DE5nKwSPja5+5d1NmPLNBO54er33JO/dOn+51ODk5gdflMHXPF3cFjOY6GX/AIb3/ouU9LNiHPZS/jK7DINSuxml4JG7jrI8LetZbUWRae59OoywysHDvsGZn7pQWLxWPgqQQ1a0TIK9SJkNeCJobHFFG0MZGxo5NaGgDb7lkoiAiIgIiICIiAiIgIiICIiCtWoOi61i9W4zLCxTOLvayfbqVWwvN2K3lcZL6dvM4cMcDpqhdwN34iWE7FqWOKxntWYOplxgb+AycGqMVlDDDYZFWyOKhZm2yVpiI5K3FYe93EdhJZa/vaFN/wBqbNTY6lgrsFd9uWrq/EOZUhG8tnjFqJ1eEf557Xua3/Sc1c16IdP4vpDkzGaydSzVfHqaKxXgjkEUsuNOHx8EVK47gJlp2IYYXvY0ji5cLtidwmH7JPR3JQpfS9u3ctvyL70uHZfI62tjMhZjsmeVvMm7c6itPIXOO20beR4+LvC/GNAAAAAaAAANgAOQAA7gv1AREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQERaDpB1PHh8fNdfG+d7XRQVKkX89cu2ZWV6dOEbH6yWeSNm/c0OLjyaSg5P+1QW5gVNLsnFaFxGa1LkiWiPE4PH8chlkcT6tiaZoEY2P8AMSEgNBcI5+y7Sfc1DbyxrtqY6HS+Ph0rjw4k0sHZv3oIDMCNhbndh5J3O4nk+lH1uZA0XSxpi/Nh9Q4/0trb1fGv1Hr3LQM4228g2m+fFabql7g5lOKKGN237kTKx23ncD2bodw3oN+1WafUxuldI4xrdubTTZmXkk+0n0kH9feg6iiIgIiICIiAiIgIiICIiAiIggfS3RbYn0zG4Ahuqqs+x99PHZS40/k6uD+Sg37LUPo7K0YbwtyWidJ5AEADjsMjyFOwSfaRDFjx+YUi15rii/UeLwETutyFGO/mLXARw04W4m/VhZKf8/J6bxBneGNDjsHs4tF0AF08mnnQ79Rh+jjDV7r9vUdbybKVmtC12/OWOClLI5vsbegP7yDuCIiAiIgIiICIiDT6kwDLwj4rN+sYS4tdj79ioTxbb9Y2FwbKPVG3GDtz27ytQdGWmt4YdR56Eew8WHsOH9q7jJSfzW81NqGhjIDZyNyrRrhwZ19yxHBGXuBLWB0jgHPIB2aOZ2KhbOmzATbihJfyxHyXCZXIMJ3I2FivWMG+4P7/AC9qDeQaWvtHPUuZkPvlrad5/iIsO1ZTMHfH/nq2775KeNJ/4dVqjbteZqf+gaQyhB34ZcxfxWLi5dxLWWJ7LWnv/md/eB3L56zXFg8o9L4th2245cpmJh37gtYymwEcuYce9BKRi8iO7Kk/7ShXP/Y4V+/R+U9mSr/2sZv/AHWQo5HpfU0gPpOqWRbnl9Faep1y0e7fIzWw4/eR+S9f5CZF329XaiJ9vVw6biB/JuF5D80G9NLL+zIUf7eImP8A2cg1ebqec9mRxQ/HBWz/AN8LVDQNn26m1Gfv67FD+DcaAvx2g7f7uqdRs+8Owb/+biXINuKmb9uQxR/DCWx/3uU9HzY/63i3/d9F24/4/SDv7lqBonJtHqauz/8A7yppmT9f8Sg/xWU3T+aYPU1A6Q7d9zEUpOfvIq9Qgy5DnG/ZbiZP9Z9yH+IbIsF+Q1O1/LE4KSP+s
NR345PvIjODc38uNfkOP1Qw7uyuCnb/AFDp2/XeR7AZW5p7d/v4PyXpPa1NHtwUcHa95dl79Dl7w0Y2zufuLvzQekuoMrE3eTAWZnD93HZHGS7/AIG9PWH67L0Zq1zW8VjE5ir72mpFccPyxc9jf8t14SahzETd5dPzTn2txmUx8/8AunIPqb/nsvN2veqZxW8Nn6vdu0Ys5Fw3+7CyWt/y3QZsOuMc4bvdbrgfvX8Tk6Dfx4rtaMbfesrH6uxVh3BBk8fM8ciyG9XkeD7i1ryQfuWlPSrp9gabORjx/GdgMzDZxB358uHKRQkHkeS31HIY3Jx7wT0chC4b7wywW4yPfuwuaQg2oIPMcwfaF+rQDRWJaSY8fVrvd3y04W05T9/XVeB+/wCa8XaRaxpFXI5eo4/9IMlJeIP3NzAssH4cOyCSoonJic7EW+jZipOxv225XD9ZNIPYBPj7VaOI/f1LvwXwczn4OI2MLWtsb9g4fLsfYk9+9fJwVYoj93Xu/FBL0USZr+owht2tk8a4t4nG9jLHo8YHf1mRqtlosI/238FvsFm6d+IT0bda5C7ump2IrER390kLi0/qgz0REBERARaLUmssRjf8o5TH0fcLt6vXcTtvs1srwXH7go8/pcxD9vQ2ZXJ8W3C7E4HLXYXB32S23FW9GLT7+s2HedkE+Rc+OvMvL/RNI5lwJIEmQt4WhHy32cWm/JOGnYf9Hvz7l+jLawl+xhMDVG//AFvUlyZ22/fwVsPw77ezj/NB0BFCOp1a8f0jTtc+0eg5O6B+fpkG/wDBfrcbqs9+Z0+3/V0tkXf36hCCbLlXTV18OZ0bfkcDiKWdmhvxdXxcF3JUZ8fiLb3fuxssWHM35bOstP4b92M1X7M1gCfv0rkAP1/lCVC+lHSmusljbmPju6TsRXoHRPMuNy1CaMnnHPXkbdsNZZjeGPY4ggPY0+xBq8vGf5Ia9dKeCa5l9URzyO23LevdSqF23sFOOo0f6LGrqWlIv8cZ9+3fLjYx+EdBjwB+cp/VQGn0c567pDN4vKT0GZrUclmWWWuZTSY+RlaFpceHiHG2txu4WkB0zthy2Wi11pjWt6OOxUqMxWoIzA2TK4vVdhmIsdRsDNZw81Ux2eIcTeB7CQ3g3e4MDEFh0Wu0z6b6HV+kvRvTxXiF70EyGqbIYOuNfrgH9SXbkBw3AK2KAiIgIiICIiAiIgIiIChnSlq2ahHXpY6OOxnM299fEVZSeqa5oBsX7fD6zaFZjhI8jm71GD1pApfanbFG+R52ZExz3kAuIawFzjwtBJ5A8hzVa6eLyGobdGxJKac3SDRuXZ7cbuG3i9FUJKJq4XHuHEGXLjslVlnlBABlk5O4GBBg6bt0TqJtbGiS5Dh9P6qfkNSSNH+O85N9FfSkglaNpup/wVu7Twt60Mbs1jSe8dD2JgqYTGCGJsbrGMx0thzR60szcfUgEkhPNzhFBCwe5sTGjYNAEH1niamPyOPoUYI61XG6E1aK9eFvCyNhmwDRt7S4kOJcdy4uJJJJK6hoyPgxuPaO5mPqNH5QRj/8INsiIgIi8ZLLGkji3c3va3m7fYEDYe3Yj9UHsiwXBxfC5xIJkIEYd6rR1Up57cnv5Dn3DbYe0n6vTDcRvbvHMCwvDu4n2Ebd23t/H2AoMxFhY97ml0LzuY+bHHvc3l+u27f97bntuc1Bg5PDU7T4ZLNWtYfUc51Z9ivFK6BzwGvdC6RpMbiAAS3bcBZzQANhyA5ADuAREBERAREQEREBERAREQEREAjfkeYPeCoxm+jzA3XiW3hsZPM07tsSUK5sMIIILJwzrGO3AO4cO4KToghP/g2qx8RpZDOUHO326jO37ETCfbHVyUk9aPb3CPbkOS/G6VzcIPUaotTH936WxOKstb9x+j4ajnD8Xb9/NTdEEJhp6rjPrZHT9poA5fQmRpPPv3cMrO39Gr0kt6pZ9nH4Cf7zm8jU3/sjETbfqpkiCJ1snqHb67D4o
H3V9RWZh+suHiWl1Bh5rb+vsaXoS2mtc1lyLKRQ3Yw7biEN6OuyxDvsObHDuHuXRkQcDzUWv6h/xHQPVsDQyvnNSVcrBsD625mqMyD3kcuJ90j7vf5v130nxxcL9E46acDnPXzdRsLnbcy2s62ZQPuL/wA1YBEHKaUucsMByeoDid2tMjcbph1BsJ73MdfzbrlZ/wDrNA9u33bLGaHw17Z82Sv5wgbP9I1BZnrP5EEyUKU0dEk8/wDodv0XRFr8tgqNvb0unVtbd3pNaGbbbmNusaUGFpzRuIxv+T8XjqPtJpUa9ckgbbudEwFzuXeea3qjs2iqB26ttusGncMx+UyNCMe76qnYZGR9xBC8Z9K2d29RncxWa079W04y013fsHvyFCaXh5+x4PIc0EoRRSzis80AVszRO3tyOCfYcR+NPIVmg9/Ph/Jfkr9SRt9SLCXHD+tYv41p/MQWy3+KCWIolDl8+0fX4SkT7qOeM+/4G3j66+JdWZJh2Ol8xJ/pVrmnXNH4+kZaJ38EEwRRGLV90j1tNZ2M+50un3n9Ycu4fxX3/K61/wCj2b/+T+ZoJWiiMurrwG7dM52T7mzadYf+NmWr9r6ryD9//FjMxEd3X2tOgH84MxIf4IJaijgy+UdtwYfg3+JyVdm34+jtl/gvht3Pk/5NxDG/1jnbj3/nGMQB/wDEgkyKPPizbx6s+KrH3Gnbuj9Rar7/AKJ9E5SQbTZYRn2ux2NggP5C8+0B+e6CQoo4zSznDhs5TLWveTaipHb/AFsVDXI/Ec19R6NoAbPbZsD3XcnkLo/S3YegkJK8+vZ3cbPw4h//AFR2To+wLju/CYl7v60mMqPd/vPiJX4/o70+RscFhyPccTSI/QxIJOijlXQeFh/o+KoVfvp1IqjvxDqzWkH816DSsLCXQ2slC8/vDK3bDR/qw3ZZYR/uIN+i1uLx9iE+vfsWwQf6XDTDgfYWmnBCPyIP5LPgDw1okc1zwBxuYwsaXe0tYXOLR9xcfxQfa4xgKcuM1PjsQ+tIalavmJtP32NaIIsVbFWWzhpNturlq2YK4iAHCa/UD7UZLupXMrLDw8VG1ICwF8lY15mMJ728JlbM8j/RjKwP5a0dyJG5CHhPN1nC5WCMH/by1REfbzDiEEC6UHl2o3s/q9HepnD8X3cQ3/8ARdQ0x/Qaf/qdf/ksXB+krpIwNTVtGzdvMGOuaUyeLsWoI5bDIJrV6nKxk4gY50PEyB/Nw5cidhzWN+z501y2cnS01NdxebYa80NTL4mDK1ZuGhXL2PyEN+s2B0kkcLt3V5HAO2GxB3AWSREQFgvl4HS8QcAXh3FwPLdurjBJcBsBuD392yzkQYT3bvh/2h/5MqwpYSyuYiA188n1UTTuGDdvqgj93ltv/pjfbcrO9DLZIywgRscXFh/dJY9vqfdu77P6e5e1iLve1rTK1pDHP7gee2+3Pbmf1PvQY7edpxHMMg4Hfc4vDwD+IP8Aes5Y1CsY2niPE953e73nmdgdu4bn2AczyG+wyUBFQDtq6q+X6f8AB5HzFO2rqr5fp/weR8xQX/RUA7auqvl+n/B5HzFO2rqr5fp/weR8xQX/AEVAO2rqr5fp/wAHkfMU7auqvl+n/B5HzFBf9FQDtq6q+X6f8HkfMU7auqvl+n/B5HzFBf8ARUA7auqvl+n/AAeR8xTtq6q+X6f8HkfMUF/0VAO2rqr5fp/weR8xTtq6q+X6f8HkfMUF/wBFQDtq6q+X6f8AB5HzFO2rqr5fp/weR8xQX/RUA7auqvl+n/B5HzFO2rqr5fp/weR8xQX/AEVAO2rqr5fp/wAHkfMU7auqvl+n/B5HzFBf9FQDtq6q+X6f8HkfMU7auqvl+n/B5HzFBf8ARUA7auqvl+n/AAeR8xTtq6q+X6f8HkfMUF/0VAO2rqr5fp/weR8xTtq6q+X6f8HkfMUF/wBFQDtq6q+X6f8AB5HzFO2rqr5fp/weR8xQX/RUA7auqvl+n
/B5HzFO2rqr5fp/weR8xQX/AEVAO2rqr5fp/wAHkfMU7auqvl+n/B5HzFBf9FQDtq6q+X6f8HkfMU7auqvl+n/B5HzFBf8ARUA7auqvl+n/AAeR8xTtq6q+X6f8HkfMUF/0VAO2rqr5fp/weR8xTtq6q+X6f8HkfMUF/wBFQDtq6q+X6f8AB5HzFO2rqr5fp/weR8xQX/RUA7auqvl+n/B5HzFO2rqr5fp/weR8xQX/AEVAO2rqr5fp/wAHkfMU7auqvl+n/B5HzFBf9FQDtq6q+X6f8HkfMU7auqvl+n/B5HzFBf8ARUA7auqvl+n/AAeR8xTtq6q+X6f8HkfMUF/0VAO2rqr5fp/weR8xTtq6q+X6f8HkfMUF/wBFQDtq6q+X6f8AB5HzFO2rqr5fp/weR8xQX/RUA7auqvl+n/B5HzFO2rqr5fp/weR8xQX/AEVAO2rqr5fp/wAHkfMU7auqvl+n/B5HzFBf9FQDtq6q+X6f8HkfMU7auqvl+n/B5HzFBf8ARUA7auqvl+n/AAeR8xTtq6q+X6f8HkfMUF/0VAO2rqr5fp/weR8xTtq6q+X6f8HkfMUFZkREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQf/Z\n",
"text/html": [
"\n",
" <iframe\n",
" width=\"800\"\n",
" height=\"450\"\n",
" src=\"https://www.youtube.com/embed/sdszHGaP_ag\"\n",
" frameborder=\"0\"\n",
" allowfullscreen\n",
" ></iframe>\n",
" "
],
"text/plain": [
"<IPython.lib.display.YouTubeVideo at 0x7ff95398c1c0>"
]
},
"execution_count": 85,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from IPython.display import YouTubeVideo\n",
"YouTubeVideo(\"sdszHGaP_ag\",width=800, height=450)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3: Time series of Reddit activity and market indicators."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's really time to put into practice what we learnt by plotting some data! We will start by looking at the time series describing the number of comments about GME in wallstreetbets over time. We will try to see how that relates to the volume and price of GME over time, through some exploratory data visualization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" We will use two datasets today: \n",
" * the _GME market data_, that you can download from [here](https://finance.yahoo.com/quote/GME/history/). \n",
" * the dataset you downloaded in Week1, Exercise 3. We will refer to this as the _comments dataset_."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> _Exercise 2 : Plotting prices and comments using line-graphs._\n",
"> 1. Plot the daily volume of the GME stock over time using the _GME market data_. On top of the daily data, plot the rolling average, using a 7 days window (you can use the function [``pd.rolling``](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html)). Use a [log-scale on the y-axis](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.yscale.html) to appreciate changes across orders of magnitude.\n",
"> 2. Now make a second plot where you plot the total number of comments on Reddit per day. Follow the same steps you followed in step 1.\n",
"> 3. Now take a minute to __look at these two figures__. Then write in a couple of lines: What are the three most important observations you can draw by looking at the figures?"
]
},
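The rolling-average step above can be sketched as follows. This is a minimal example: the random data below is a hypothetical stand-in for the Yahoo Finance CSV, which in practice you would load with something like `pd.read_csv("GME.csv", parse_dates=["Date"], index_col="Date")` (the column name `Volume` matches the Yahoo Finance export):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical stand-in for the GME market data downloaded from Yahoo Finance
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", periods=400, freq="D")
gme = pd.DataFrame({"Volume": rng.integers(1_000_000, 100_000_000, size=len(dates))},
                   index=dates)

# 7-day rolling average of the daily volume
rolling = gme["Volume"].rolling(window=7).mean()

# Daily series plus rolling average, on a logarithmic y-axis
fig, ax = plt.subplots()
ax.plot(gme.index, gme["Volume"], alpha=0.4, label="daily volume")
ax.plot(rolling.index, rolling, label="7-day rolling average")
ax.set_yscale("log")
ax.set_ylabel("Volume")
ax.legend()
```

Note that the first 6 entries of the rolling average are NaN, since a full 7-day window is not yet available there.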
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> _Exercise 3 : Returns vs comments using scatter-plots_.\n",
"> In this exercise, we will look at the association between GME market indicators and the attention on Reddit. First, we will create the time-series of daily [returns](https://en.wikipedia.org/wiki/Price_return). Returns measure the change in price given two given points in time (in our case two consecutive days). They really constitute the quantity of interest when it comes to stock time-series, because they tell us how much _money_ one would make if he/she bought the stock on a given day and sold it at a later time. For consistency, we will also compute returns (corresponding to daily changes) for the number of Reddit comments over time.\n",
"> 1. Compute the daily log-returns as ``np.log(Close_price(t)/Close_price(t-1))``, where ``Close_price(t)`` is the Close Price of GME on day t. You can use the function [pd.Series.shift](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.shift.html). Working with log-returns instead of regular returns is a standard thing to do in economics, if you are interested in why, check out [this blog post](https://quantivity.wordpress.com/2011/02/21/why-log-returns/).\n",
"> 2. Compute the daily log-change in number of new submissions as ``np.log(submissions(t)/submissions(t-1))`` where ``submissions(t)`` is the number of submissions on day t. \n",
"> 3. Compute the [Pearson correlation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html) between the series computed in step 1 and step 2 (note that you need to first remove days without any comments from the time-series). Is the correlation statistically significant? \n",
"> 4. Make a [scatter plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html) of the daily log-return on investment for the GME stock against the daily log-change in number of submission. Color the markers for 2020 and 2021 in different colors, and make the marker size proportional to the price. \n",
"> 5. Now take a minute to __look at the figure you just prepared__. Then write in a couple of lines: What are the three most salient observations you can draw by looking at it?"
]
},
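Steps 1–3 can be sketched along these lines. The series `close` and `n_comments` below are hypothetical stand-ins for the two real datasets, assumed to be aligned on the same daily index:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical aligned daily series standing in for the real data
idx = pd.date_range("2021-01-01", periods=200, freq="D")
rng = np.random.default_rng(1)
close = pd.Series(np.exp(np.cumsum(rng.normal(0, 0.05, len(idx)))), index=idx)
n_comments = pd.Series(rng.integers(0, 500, len(idx)), index=idx)

# Step 1: daily log-returns, log(Close(t) / Close(t-1))
log_returns = np.log(close / close.shift(1))

# Step 2: daily log-change in comment counts (same construction)
log_change = np.log(n_comments / n_comments.shift(1))

# Step 3: Pearson correlation -- days without comments produce inf/NaN
# in the log-change series and must be dropped first
joined = pd.DataFrame({"ret": log_returns, "chg": log_change})
joined = joined.replace([np.inf, -np.inf], np.nan).dropna()

r, p = pearsonr(joined["ret"], joined["chg"])
print(f"Pearson r = {r:.3f}, p-value = {p:.3g}")
```

A small p-value (conventionally below 0.05) would indicate that the correlation is statistically significant.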
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 4 : The activity of Redditors"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is time to start looking at redditors activity. The [r/wallstreetbets]() subreddit has definitely become really popular in recent weeks. But probably many users only jumped on board recently, while only a few were discussing about investing on GME [for a long time](https://www.reddit.com/user/DeepFuckingValue/). Now, we wil look at the activity of redditors over time? How different are authors?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> _Video Lecture_: Start by watching the short video lecture below about plotting histograms in matplotlib.\n",
"\n",
"> _Reading_: [Section 7 of the Data Visualization book](https://clauswilke.com/dataviz/histograms-density-plots.html)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAUDBAgICAgICAgICAgGBwgIBwcHBwgICAgICAgICAgICAgIChALCAgOCggIDhUNDhESExMTCAsWGBYSGBASExIBBQUFBwYHDwgIDx4VEhUfGB8YHRwbGxobGhsaGhkVHh0eHR4YHx4eFhoeHx0YGh0dGBUYHRgaGRcdFR4ZGhUYG//AABEIAWgB4AMBIgACEQEDEQH/xAAcAAEAAgMBAQEAAAAAAAAAAAAABggEBQcDAgH/xABWEAABBAECAgYGBwMGCgQPAAABAAIDBAUGERIhBxMYMZTVFCJBUVRVFSMyYXGBkQhCoRYzNFKCsSQlNVNicnOSo7NDRLLFFyY2RVZ0dYOipbS1wcLR/8QAGQEBAQEBAQEAAAAAAAAAAAAAAAECAwQF/8QAMREBAAECBAQDBgYDAAAAAAAAAAECEQMhMVEEEkFhgcHwE3GRobHhIzJSYtHxIiRC/9oADAMBAAIRAxEAPwCmSIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/AIzI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8AGZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/wCMyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/ABmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P8AjMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/wAZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/AIzI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8AGZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/wCMyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/ABmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P8AjMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/wAZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/4zI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/AIzI+XIKzIrM9irVXx+n/GZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8AGZHy5OxVqr4/T/jMj5cgrMisz2KtVfH6f8ZkfLk7FWqvj9P+MyPlyCsyKzPYq1V8fp/xmR8uTsVaq+P0/wCMyPlyC/6IiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIijg15hDP6KMvjfSOLg6n0+vx8e+3Bw8f29+
XD3rVNFVX5YusUzOiRoiLKCIiAiIgIiICL4ErS4s4m8bQHFm44g0kgEt7wDsef3FfaAiIgIsTI5OvXMLbE8MJtztr1hNKyMzTvDnMhiDj9ZKQ1xDRz9UrKcQOZ5Ad5KtpH6i0uE1bi70r4KeRpWpoty+GtbhlkaAdieBjieEH29y3StVM0zaqLLMTGoiL4bK0uLA5pc0AuaHDiaHb8JI7wDsf0Kyj7RFiVMnXllnginiknpOjbahjka6Su6VgkiEzAd4y5hDhvtuDurYZaLX57N06EXX3rVepDxBgltTMhYXnchoc8gF2wPIc+RXricnXtxNnqzw2YJPsTV5WSxu25HZ7CQdk5ZtzWyW02uy0XxLK1u3E5reJwa3icBu49zRv3uPuX2ogiIgIiICIhKAi0+N1TjLLxHWyNCxI77Mde7XlefwZG8kr0y2o8dTeI7d+lVkcwPbHatwQPLCS0PDJXglu7XDfu3afctclV7WXlnRtEWrp6ix84jMN6nMLEroYTFbgkEszGCR8UfA88cgYQ4tHMA7rMs3oInxRyzRRyWnFleOSVjHzPa3ic2JrjvI4NBJDd+QUmmYymC0shF+OIA3PIDmSe4LT4LVeMvySQ0shStywDeWKrahmewA8JcWxuJ4d+W/crFMzEzEaERMtyi0ud1ZjKEkcN3IUqks+xiitWoYXvBPCHBsjgeHflv3LctcCAQQQRuCOYIPcQfaEmmYi8wTExm/UXm2wwyOiD2mRjGSOjDhxtZIXtY9ze8NcY5AD7eB3uKx58rWjsQ1HzxMs2mSyV67pGiWVkPD1ro2E7uDeNu+3v+4qREyWZiLSZDWGJryvgsZTHQTRECSGe/WilYS0OAfG+QOaS1zTzHcQsnC5+jd4xSu1LfU8PW+iWobHV8fFwcfVOPBvwu237+E+5WaKoi9sjlm12yREWUEREBERAREQEREBERAREQEREHM/2irk7cdSpxTOrx5zM0sbcssPC6KrY610uzv3eLq2tJ9znD2rc1Oi3T8dZtUYqo6NgAD3wgzEt7nmb7fHvz33W71lpurlqU1C4wvgsgblp4Xse0h0csbv3ZGuAIP3c9wSFAYdDaqjDazNVE0mFrQ+THQOvdU0j1DYILnO2G3HvxL24dcThRRFfLMTO+fwjXp6l3pqiaIi9nzd6RMzZs5AYTEQXKODnkrWZrNswTWZ6/KeOo0NIBaQQC7ffbf27Lym6Vrt2xj6+DoV7Jy+G+ko3XbD4eocyw+GWOYRghwb1ZbyI3c4HfZZed6Mb4nvOxGblxlTNyvmyNMVopgZpRwzy1ZnevWdINyeH2nkeQA2mmejSHHX6FutMRDjMM/Ftruj3c/jnNg2DLxfaLnO3bt7e9dJq4WIvERO2u3XvfbJq+FEf380V0x0wZC19D25cXBBi85djxrZRbc+yLjmPLpGR8Ab6NxxvAB57NJ335L3i6R9Q2m5GXH4WpNXwl+/WnfLeex9llOZ7OGszg5S8DNySSN3bAHbnscV0TmDG4TH+ncX8nsw3JNm9H29IDTP8AVFnH9Wdp/tAn7PdzUN0PpLL34s/HTzE2Mgtajy8Vus+pHJxxPsO3lryvAkrvex2xLTsQARseZ6/6tV6qYi0T15tLz43s3+FN5jz3lP8ATfSW3IX8RXrwD0bOYafIiV7yJoZIZWxmAtA4XAHjBO/e3lyUE1t0h5q1XpTUYYq4g1g7GPLLksZsSV7Bjq15QG86s46zrO/h4G8jvymOS6K3RR4p2GyD8ddwNWWpDYfXjssmgn2dK2aGTlxce7gfZxHkeRH5U6JxHjqFL01z5aWfgzlm1JFxG1YjkdJKzhDh1YeXd/PbbuKxRXwtExVHwm/7vszTVhRMTHrX7Pi/r3MzZKTF4rF1bE+KrVZcy+zcdDDHYswtmFWtI1h4jwu5SEEd/Ibc4prXpDzNzBm5Wrtx8lfUf0fZDbb2TxdRbgZDESwbP6x7nRyAHYDfbdTnUmgLpyk+Vw+Vdi5sl
FDFk43VIrUc/UN6uGZjZeUc7WAN32I5fe7fBh6JnNw8+KOQdI6fNtyvpckO8ji2zDYLJRx+u9xiO79+ZcTsrh4nDU8tVo/53v3v010WmrCi0+7fxalucr47OZXJ5Ci2C9S0pTtZCSrakma9zpTG6rEx4DCOKGNrX8t9xvtzK2GL6RszDLj5czh4aeOzdiKtVnrWzNPVms/0ZtuNzRu1/dxN2293sUhzPR9DcyGStWZDJXzOGixc1UN4XMbHJJJ1rZd/tfWcuXItBWhxfRbfdPRGUzk2Sx+Enjnx9J1aKFxlhG1d9qZnrTuYO4nn3+8rPtOHqi9W0b7dPHfJObDmM/Pbo1DelzLCB2RdiqoxdTLnG25hck9JeTc9FbLXiLNtm8TNw483EgbAbre3de5azlLtLC4uC5Xwj2R5Cxatms6WZzQ90FXZpaHgct37gkHuGxP3L0Wl2EsYf03+k5Z2RFnqPs732XeqMfHz+zw8W/t32X3lujy8zJW72IzD8YzMFjsnX9Fish8jG8HX1nS7iCYt357HmSfuCa+FmZtERrbW3S1++pM4Wdu+/b7odqDVUuZpaUvTVxWkdrWKF0DXF3B6O67CNy4bh+zBuPYd12PWWIOQx16iJTCchTsVhM0EmMzROjD9gRuBxd243G6guL6JzBRxFL04v+g88cv1zofWsAyTv6l+8m7XfX837nct7uanuq8JFkqVmjOZGxXIXRPfC8skZvza9jh3Oa4A+7lzBG4XPHxcOaqfZzlEz4Re8M4ldN45en8uNdH9Srir+Kx2bwNelkIS+LEZ2ns+rdlbEWuD5G7Pinewnk/fck8m7gHZnpXy80FnL08LFPgKL5eOd1vq701au4ia3DERwBrQ1zuA89m9457Z+L6M8q+5j5MtnDkaeDmE9GAVGQSulY3gjfZkbzleG8tySTz953x8h0QW+CfH083PVwN2aSSfFCtE97GzPL5q8Fo/WMruJPqe47Hfc7+irE4aqu9cxM5X/NbWb26307Xu6TVhTN6vOzKvdJGSvXZKmnMfXutp1q1i3avWHV4t7cLbEMEQaN+PqnsJcTyJI25bnX1M5Ux+a1DlLNB1e3V09jbl8ssuldI50RBqiM/VB7TBEwPB2O2525k7fL9GNmGy61gMq/Dvs1a9W7H6NDaimZVjEMEzWyj6qw2MBvEO/Yd3PfOj6NmST5KS7afaZmsLUxdppZwSE1mSMdZEgcfrHl/F3ciPauUV8PEZaTEb31pvfpvZiKsOI/u/RGn9JeoK9aldvYSpFUy12jDA+K898leO5K1o9JjLN+Msdu1w2G4AIHEFuOjj/wAp9Y/7fC//AGxi1juijKzQ06tvUMlipiLdSejAaMTC5lSRrmNtSNdxzPDG8IJOw33IJ22nGm9K+h5TM5HrusGdfSf1PV8PUeiVhX24+I8fFtv3DZMTEwIoqii15jpf9VMxr2iSqrDimYjz3jdGOmvSNy7Pi8lTq1sk7Cus9bh7ruGK0yy2MF0bj6rZ2GIbcXLmO/bZ2kwevcbjsZK7E4Z9bJXM0KEuCd9Q5uWmj4vrXgFog6qPcOaACGgbN57TPpC0bbu2K2QxmSkxuRpRSQNkMYnrT15XBzop67/VPrAEO2/uaRHq3Q+59Oz6XkpZMxbyUOV+l4Y2xmC7XYY4HRQ/Z6prHObw8uTuW2w2uFi4M4VNOLOnTPeZz6THXdaKqOSIqn6+rfNpukO3fnqY8ajxFVr49S4tlJ1LITcAM7Zw6YFvrCWPhI4XbtPWAjuX3rnptkq5C5TpRY4x4lxjnfk7zq0tuZg3khpsYw7cJ9Xif3kHltzO+sdHWUuQRsymcN2avlqGQieKMUMTG0hKDAyKItDTJ1u5fzO7R3pqPoytuu3LWKyjcezLvEl6CahXuAT8IY6xVfMOKCRwHPblvz922qMThsortNr/AKrdO1/ksVYelXnZiO6VLt2bGQYXHw2HZvEyXmG7ZdCKr4rBhmbOY2u42MLHN9XmX
Fu2wXjW6W7tmtRgqYyOXN5C5kKjqjrJFSA4x4ZasOm4eJ0XrM2by73czwjilmK0MYMlj8h6W6U43DyY1zZImNfO6SVkpsvfHs0PJadwG8y7fdR09EckcTJKmTfVydPK5K/RyDK7HNZHkn8U1WaB7i2aLYAbn+r3cyFmmrhJyt9f3a9tGYnB2+vf7MTV/SrkcTXpwX6NGvlr89hoEl530bFWg6v/AAt8waX7PMoaI+/dr9zyG8g6HekducFuCRkDLmMcwTGnObFSeKUO6uevKWh3Du1wLSNwQOfPlgZHozv2Yak0+ZfJmsdPYkgyb6UDoDDaEYkpyUiOrfX+raR3EHcjvKkvR3pi1jmWHXbwv2LcoeXtqw1ooWhob1UEcTRwx8gdt9t9z3kk4xauG9jamI5vHfplpbvfslc4XJlr4+rM/UusMXjHMZkMhVpumY58TbU7Ii9jSA5zQ48wCR+q3DnBzOIEFrmbgjmCCNwR7wsLL4KlcLTbqVrJjBDDYgjlLQ7YkNLwdgdgs7qwG8DQGgN4QAOQG2wAHuC8c8totr1ccrRZwfod6N8NldK0ZLdOFtmaKyTkIwI7MTmWZ2xyiYbHdga3v5eqN1ptQcGU0EMtehisZOo2KnFk3xgzyQwZdldjxK71jxxl2/vL3n2qXYfobyUVNmKk1JZ+io2uY6lUpwVnPje9z5I3WBvKWPLnbgkjZxHdyU01boGC3gX4Kq4U4OrrsheGdZwCvYin5t4hxFxj5nfvcSvq1cXRGLzc9/8AK/XKM7/Hr0yeqcaIrve+d/dCH69wdOhkNIRUq0NWJ+ekkdHBG2NhldT4S8gfvENaN/uWz6Yv8s6Q/wDbM/8A9KVJekTRzcvVgjbYkqW8fZiuY+9E0OdXtQghryx3KSMhzgWHv3+5R/A6AyUmTq5POZVmQfi2yDH161VtWGJ8oDZJnNH2pCAP0C81GLRNMVVVZxFUWzvN72+u7nTXFomZ0ifnf+X3+0lclh05cETzH6VLUqyyNOxbBZtRRTc/YHMc5p+55UV6TtL0cEzTd7Gwx1rdTNUafWQtDX2a9lkjbEcxHOXcNJ3Pdu73rresNP18rRs4+00mC7EY38JAc07hzJGEjYPY9rXDf2tC59hui/Ivt0ZczmTk6uDcH46sKrID1jQGxy2XN/nZGtAG5JPI8+Z3vD41FGHETNrXvG94iPVzDriKc50v45PvpC0/p7FDKZjKxenS5hzY2wWQyaWSTqhFFToN4QWEhg22+zwkkgAleOkc9LpvSePOTa6S71Qgo0eP6+eWRzjVqAn7IawtBd3Naw+4BfWuei/J5LKtyYzbITUJGPruxsc8dRh234WTSOY+YkbmQt3Ow7gABJZOj+ter1WZ5sOYt02SMFyWEQcTZH8R2hhIYwkNYDsOfAFZxMPkpiurm0mdb5RaIi+Vt8/os1U8sRM33QPojt2KeV1NZy1pk9lmOxt66+Ih0cTRDbnkgrtBP1MLSGAD3c+Z3Oiweey1eabW2QxsNiheZHHGG2T6ZjcU6ThjdXh4Sx7fXa53MFxc4+qCdukaZ6JMXQvZCzDBC2vkaTKcdRkZaK8T2OZca1/GeJs3qEjYbFgUdd0P5N8DMRLnpJNPRSNLaBqxttOgjeJGVX2gOJ0YIHt25DkNgB3jHwKqpmZ15b5TGURnEW0n5d7N+0w5mZ93w8G16YY8LVqG+cVRv5PLPjgxsclWOSW5bmYGwF+44nRsYGucT+6wDcEhb3od0PHgscyA8Lrdk9ffma1reOd45taGgBsbPsgAbcvvX63RHHm2ZWzM2aHH02VsPS6vZtJzhtZmJJIklfwsAdy2AA/dBUzXgxMb8OMOmb9Z8o8Pr4OFVf8AjyxPv9dhEReZyEREBERAREQEREBERAREQEREGp1LqGtjhVNlzmjI36+Pr8DHP3s2iWwtdw/ZaS07uPILbLmX7REj4qeHsiC1YZj9UYi3ZZRqTW5m14JJHSyCGBrnuAHuHtHvXPukb
NPv5avdnOroMRPg+LT8eEq5GrJ9NMt2I7Lb9eJgkjtcDavVi0BCWF5PIu3CxyKpkmX1DTw74Z49Qy29Q6Ew9PFOhivyvjzkb7kVrr5R/k+2BPA90kpYS1m+5LdlM7eCzz5NaX60+W+k6ULINPVpLM7afHNgaHpE1Su89TNOZxIGu5tbIw7bEu3CwC0urtTUcTDFYvTNgjtXalKJx/fs3JmwQt/Ddxc4/utY8nkFxDoPeRqKm2hJq2TF/wAm7vpp1IMiKwyptY4lsfp4B9K4BJxBv1fM9WecqzenLS2T1Vl3YyCnXfi8Bj5DM7KuuVq1nJ5avLDHNVfFC70h9OsS4PaS1slpwPrR8g7Hk9S1q+Qx+MkL/SsxFemqBrN4yzHtruscb9/UO1mLb38/ctyq3WbGorjtOyipYbnsLgtbYuexJWkFc5mvVx0FKyJpYxE6G0+Bs0bzsx4c7bk07RTourZ11bNtfltQMjl0zMLpdiNQOtVso9zQ2xX+krL3yZJg60OZSLWOadweJrCAt4iqXjbmT4NKTcGonej5CeuMc1+omx3YzlomfSTbs31kLBCHOFTJBzeoL2hw3DjkZd+SbftGaXV30+dZVeKOsL/0H/J8Ziua7mdUPRPQfQ+DfY9bx8fH6vW7haxaXNamq1LuOoTGQT5t9mOnwsLmF1WD0iUSOB9T1N9veVwPTuKz8E+Nv158y6/ksvrGnPDfsXJaEVaJmZkwrZa0+8VeD0mKm9khA3EgAJaQFpejirM/L6UfwarlvVfpN2opM3FkH1K2SlxsjD1UlpvVxyOkDw0wERuaGb+twgBbBY1q/BFJBFLNFHLce6OrFJKxkliRkb5nxwMcd5XiOOR5DdyGsce4FVVyeAz9fSen5xdznFlJYJdVS2n5i5bgayrIytC6tRlZegptkDWvbAQ7cRufxetvsMDpLI5GPRUmSsZqx6Nn8qxloDJ46eDGijcfUfZD5nTMJliYxk0zhI6KZrHc3OCCwNnW+KZYx9X02J82bsXK2P6jjsRzWMeHG5CZ4GuiikjLHtIe5p4mOb3ghSJVq6JNNXcZNpuONuWbDLqzVrsgyzJdljbA2LJxUpJxLuI45A2B4c7k97w8budubKoCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIuVSZW/quSWPD3pMZp6tLLWsZ2mGnIZaxE50U8OGlkBZVoxPDmOu8LnPexwiAa3rHavBaen0/q3F0qGQydvHagxOVmyNLJ5GfICtNjX0upvQvsuc+J8jrgjcAdjv3chsHaUREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBRvpThfJgs1HFI6KSXDZJkcrDs6N7qczWSNI7nNJBB+5SRcs/abztmHDHFY4B+W1bKMNjmcXCGC2Ort2nuHOOKKBz/rB9l0kSDSdGXSvho8Hh8fgoLWau18NQYMViIesfWd6Oxm2SuSFtXHnrGvDnTyAkh2wceRn2hNO22T2MtlnwvyuQjZD1NYudVxlGNznxY+q94DpfXcZJZyGmV5HJrGRsbzn9h7TTsXpmaCaIR3PpzJx3wCHEWKc/oLmFw5EN9G2G3LvPtK7sgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAi
IgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIC4n6V9Ma4rbF7q+nRedDtwmLjo12UrL9wN/rrebkjLSR6+mwQORXWNX5uLG4+9kZ/wCZxlKxblA7yyvC+VwHvcQzYD3kLj/7LtB7Ppm/aIM1b0PFWH+z0utDJmM3IHEbnfLZzItPs/wcDkWlBJf2cbgmpZpzeYGstTbfg/KzzD+EgUX6QN9WUcldOes4TS2G9LYy1i3sbPk7NElti9PPzP0ZDLHJHHAzYzOY95cB1K5v+yF0pVINO5uOw9tnK3c9ftUcHFK03Lpu1KzwyGLfjbWEkc5fOdmRtD3vLQ0lRroA6D9T5zTrKWQyr8VpfJXIchHj2xia5bY08XHDxbCtUl9R7eMvaXRxyiM8nPC1vQI7JHTOEdl5DLkH42B9iR7nPlc14LoDO9/rPs9QYeMncl/HuT3qbrzrQtjYyNg4WRMaxjR3BrQGtH5ABeiAiIgIiICIiAiIgIiICrPrrXT9Q5OzXxl/LPr0L0mIxeJ03km4u1mMnWibPlL9zJFpNXDVWS12B7dw97txxBzQ7q37QGSmr4mKKKV1WLK5fF4u/kGPMbqNDIXYq1uw2UD6h5jeYmycuB07Xbt23EU6KejfB0tXZrJYaGGvDj8VQxT4Kjz6OzITOks3hwAkNkFaPFb7H7U0pPrElBlfswa5vZOLMYzKC0MhpfJmpJ6ea7rpqyhzqwty1GtgsTNMc7OujaGyNjjf3uJPY1xjoToudq3pAyDdjXsZLD0WOHcbFDG72m/i02o/zcV2dAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREHPv2ha5nwT6m+zMnlcDj5/vr3s5jqthh3/ddFK9h+55UR0vZlqdH+SvM2NzJVtQ5GMgfbu5S5flqN7xueOeuzv8AYFtP2vq9mbSOQgpNkddtXMNDSbA4tmdZfmsf1AicCC2Tj22O42Kqx0GahzdjD4agMjJYxz9eYfF5LF2I2SPq1nWquSqS15i3rYoXy1LzXsc4t+pYGgcTgQsjgP2eKdC5ZFOWvWxGVjrtylOGmfTrccMMccuN+kHykQYieSPrZYo4w+QySML+AtDe4MaGgNaAGtADWgbAAcgAB3BfqICIiAiIgIiICIiAiIgIiIPG7VinjkhmjZNDOx0csMrGyRyRvBa9kjHgtewgkEEbEFVX6EdUvx1jWGm9J4k3LcOqrs1OaeaCDEY+CcQ0w+050gsPr15ak2zIY3l7WsaHAuDlMOn3pluR0blfTELp5I7cGMt6hkLY8dQtW52VRDTe875G+x0g4hCHti+07ctLVFOirotp4iHSuYoiavmxqCbD6hmFy1LFe6mTJ0MpE6KV/Bwek0+NpDR/NNO2+xAWH6OdKx4fHxU2yOsSl8tm9dkaGyXb9qR1i7ckA5NdJNI93D3NHC0cmhSJEQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERBEOk+UMbieLbgfqLFsfv3bmV3Vfn1vVAfeQqmZ/EXNL67txRRN+hs7qvS1p0j3cPUPu5Ka5XkY0d7WmtmoAO4Abk929mf2m4ZTpbK2K7+qs4llfLVJQATHPircGQjcAeR51tvwcVx39qTNV7+msTqeIPZXzWNbVmMW5krWJY2ZbETOLN9n1sjRdAXDfZt+wBvxcwtWijfRfqhmaw2MyrBw/SdGGd7P8ANyuaBPH94bKJG7+3hUkQEREBERAREQEREBERAUQ6acoaWnszaDp2ej4yy50lRzWWY4+rLZJK73AtZO1hc5riCA5rSVL1qdZYRmSx1/HSco8pQtU3n3NswPhJG3MEB++/3IKk9N2IfJonB6lP+CV8fcxNzEaeoyf4ux+NmcepZIS0Pu5J7ZIXSWH7bEuaxrd5HSd7wtSSbIy12M+oxesHZBpbzDqmQ03NbE/LuByVydvP2s+9c
P6O87WtYLD6CzLuvsZCStFWc1h9ak708XIJDv8AU28fcp2ax5jfq6r278R4evdCGQk6/Hmy5xtZHTLMfdB34Bk9IZCbG5F3MDaWSXI+71m1gRyag7Gi53prpWrXtS5DTsdSw36Nge6LKO51LdmqawyNSEgbdZX9NqB3rE7veCG7MMnREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQQjp8idJpjPQt247WJt12cXdx2IjAzf8AtSBcE6XsDBjNJ6u0wA9tHBZDDZTFetu6DE5nKwSPja5+5d1NmPLNBO54er33JO/dOn+51ODk5gdflMHXPF3cFjOY6GX/AIb3/ouU9LNiHPZS/jK7DINSuxml4JG7jrI8LetZbUWRae59OoywysHDvsGZn7pQWLxWPgqQQ1a0TIK9SJkNeCJobHFFG0MZGxo5NaGgDb7lkoiAiIgIiICIiAiIgIiICIiCtWoOi61i9W4zLCxTOLvayfbqVWwvN2K3lcZL6dvM4cMcDpqhdwN34iWE7FqWOKxntWYOplxgb+AycGqMVlDDDYZFWyOKhZm2yVpiI5K3FYe93EdhJZa/vaFN/wBqbNTY6lgrsFd9uWrq/EOZUhG8tnjFqJ1eEf557Xua3/Sc1c16IdP4vpDkzGaydSzVfHqaKxXgjkEUsuNOHx8EVK47gJlp2IYYXvY0ji5cLtidwmH7JPR3JQpfS9u3ctvyL70uHZfI62tjMhZjsmeVvMm7c6itPIXOO20beR4+LvC/GNAAAAAaAAANgAOQAA7gv1AREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQERaDpB1PHh8fNdfG+d7XRQVKkX89cu2ZWV6dOEbH6yWeSNm/c0OLjyaSg5P+1QW5gVNLsnFaFxGa1LkiWiPE4PH8chlkcT6tiaZoEY2P8AMSEgNBcI5+y7Sfc1DbyxrtqY6HS+Ph0rjw4k0sHZv3oIDMCNhbndh5J3O4nk+lH1uZA0XSxpi/Nh9Q4/0trb1fGv1Hr3LQM4228g2m+fFabql7g5lOKKGN237kTKx23ncD2bodw3oN+1WafUxuldI4xrdubTTZmXkk+0n0kH9feg6iiIgIiICIiAiIgIiICIiAiIggfS3RbYn0zG4Ahuqqs+x99PHZS40/k6uD+Sg37LUPo7K0YbwtyWidJ5AEADjsMjyFOwSfaRDFjx+YUi15rii/UeLwETutyFGO/mLXARw04W4m/VhZKf8/J6bxBneGNDjsHs4tF0AF08mnnQ79Rh+jjDV7r9vUdbybKVmtC12/OWOClLI5vsbegP7yDuCIiAiIgIiICIiDT6kwDLwj4rN+sYS4tdj79ioTxbb9Y2FwbKPVG3GDtz27ytQdGWmt4YdR56Eew8WHsOH9q7jJSfzW81NqGhjIDZyNyrRrhwZ19yxHBGXuBLWB0jgHPIB2aOZ2KhbOmzATbihJfyxHyXCZXIMJ3I2FivWMG+4P7/AC9qDeQaWvtHPUuZkPvlrad5/iIsO1ZTMHfH/nq2775KeNJ/4dVqjbteZqf+gaQyhB34ZcxfxWLi5dxLWWJ7LWnv/md/eB3L56zXFg8o9L4th2245cpmJh37gtYymwEcuYce9BKRi8iO7Kk/7ShXP/Y4V+/R+U9mSr/2sZv/AHWQo5HpfU0gPpOqWRbnl9Faep1y0e7fIzWw4/eR+S9f5CZF329XaiJ9vVw6biB/JuF5D80G9NLL+zIUf7eImP8A2cg1ebqec9mRxQ/HBWz/AN8LVDQNn26m1Gfv67FD+DcaAvx2g7f7uqdRs+8Owb/+biXINuKmb9uQxR/DCWx/3uU9HzY/63i3/d9F24/4/SDv7lqBonJtHqauz/8A7yppmT9f8Sg/xWU3T+aYPU1A6Q7d9zEUpOfvIq9Qgy5DnG/ZbiZP9Z9yH+IbIsF+Q1O1/LE4KSP+s
NR345PvIjODc38uNfkOP1Qw7uyuCnb/AFDp2/XeR7AZW5p7d/v4PyXpPa1NHtwUcHa95dl79Dl7w0Y2zufuLvzQekuoMrE3eTAWZnD93HZHGS7/AIG9PWH67L0Zq1zW8VjE5ir72mpFccPyxc9jf8t14SahzETd5dPzTn2txmUx8/8AunIPqb/nsvN2veqZxW8Nn6vdu0Ys5Fw3+7CyWt/y3QZsOuMc4bvdbrgfvX8Tk6Dfx4rtaMbfesrH6uxVh3BBk8fM8ciyG9XkeD7i1ryQfuWlPSrp9gabORjx/GdgMzDZxB358uHKRQkHkeS31HIY3Jx7wT0chC4b7wywW4yPfuwuaQg2oIPMcwfaF+rQDRWJaSY8fVrvd3y04W05T9/XVeB+/wCa8XaRaxpFXI5eo4/9IMlJeIP3NzAssH4cOyCSoonJic7EW+jZipOxv225XD9ZNIPYBPj7VaOI/f1LvwXwczn4OI2MLWtsb9g4fLsfYk9+9fJwVYoj93Xu/FBL0USZr+owht2tk8a4t4nG9jLHo8YHf1mRqtlosI/238FvsFm6d+IT0bda5C7ump2IrER390kLi0/qgz0REBERARaLUmssRjf8o5TH0fcLt6vXcTtvs1srwXH7go8/pcxD9vQ2ZXJ8W3C7E4HLXYXB32S23FW9GLT7+s2HedkE+Rc+OvMvL/RNI5lwJIEmQt4WhHy32cWm/JOGnYf9Hvz7l+jLawl+xhMDVG//AFvUlyZ22/fwVsPw77ezj/NB0BFCOp1a8f0jTtc+0eg5O6B+fpkG/wDBfrcbqs9+Z0+3/V0tkXf36hCCbLlXTV18OZ0bfkcDiKWdmhvxdXxcF3JUZ8fiLb3fuxssWHM35bOstP4b92M1X7M1gCfv0rkAP1/lCVC+lHSmusljbmPju6TsRXoHRPMuNy1CaMnnHPXkbdsNZZjeGPY4ggPY0+xBq8vGf5Ia9dKeCa5l9URzyO23LevdSqF23sFOOo0f6LGrqWlIv8cZ9+3fLjYx+EdBjwB+cp/VQGn0c567pDN4vKT0GZrUclmWWWuZTSY+RlaFpceHiHG2txu4WkB0zthy2Wi11pjWt6OOxUqMxWoIzA2TK4vVdhmIsdRsDNZw81Ux2eIcTeB7CQ3g3e4MDEFh0Wu0z6b6HV+kvRvTxXiF70EyGqbIYOuNfrgH9SXbkBw3AK2KAiIgIiICIiAiIgIiIChnSlq2ahHXpY6OOxnM299fEVZSeqa5oBsX7fD6zaFZjhI8jm71GD1pApfanbFG+R52ZExz3kAuIawFzjwtBJ5A8hzVa6eLyGobdGxJKac3SDRuXZ7cbuG3i9FUJKJq4XHuHEGXLjslVlnlBABlk5O4GBBg6bt0TqJtbGiS5Dh9P6qfkNSSNH+O85N9FfSkglaNpup/wVu7Twt60Mbs1jSe8dD2JgqYTGCGJsbrGMx0thzR60szcfUgEkhPNzhFBCwe5sTGjYNAEH1niamPyOPoUYI61XG6E1aK9eFvCyNhmwDRt7S4kOJcdy4uJJJJK6hoyPgxuPaO5mPqNH5QRj/8INsiIgIi8ZLLGkji3c3va3m7fYEDYe3Yj9UHsiwXBxfC5xIJkIEYd6rR1Up57cnv5Dn3DbYe0n6vTDcRvbvHMCwvDu4n2Ebd23t/H2AoMxFhY97ml0LzuY+bHHvc3l+u27f97bntuc1Bg5PDU7T4ZLNWtYfUc51Z9ivFK6BzwGvdC6RpMbiAAS3bcBZzQANhyA5ADuAREBERAREQEREBERAREQEREAjfkeYPeCoxm+jzA3XiW3hsZPM07tsSUK5sMIIILJwzrGO3AO4cO4KToghP/g2qx8RpZDOUHO326jO37ETCfbHVyUk9aPb3CPbkOS/G6VzcIPUaotTH936WxOKstb9x+j4ajnD8Xb9/NTdEEJhp6rjPrZHT9poA5fQmRpPPv3cMrO39Gr0kt6pZ9nH4Cf7zm8jU3/sjETbfqpkiCJ1snqHb67D4o
```

```
YouTubeVideo("UpwEsguMtY4", width=800, height=450)
```

> _Exercise 4: Authors overall activity_
> 1. Compute the total number of comments per author using the _comments dataset_. Then, make a histogram of the number of comments per author using the function [``numpy.histogram``](https://numpy.org/doc/stable/reference/generated/numpy.histogram.html) with logarithmic binning. Here are some important points on histograms (they should already be quite clear if you have watched the video above):
>    * __Binning__: By default numpy makes 10 equally spaced bins, but you should always customize the binning. The number and size of bins you choose can completely change the visualization. If you use too few bins, the histogram doesn't portray the data well. If you use too many, you get a broken-comb look. Unfortunately, there is no "best" number of bins, because different bin sizes can reveal different features of the data. Play a bit with the binning to find a suitable number of bins. Define a vector $\nu$ containing the desired bin edges and pass it to ``numpy.histogram`` via the _bins_ argument. You always have at least two options:
>      * _Linear binning_: use linear binning when the data is not heavy tailed, defining bins with ``np.linspace``.
>      * _Logarithmic binning_: use logarithmic binning when the data is [heavy tailed](https://en.wikipedia.org/wiki/Fat-tailed_distribution), defining bins with ``np.logspace``.
>    * __Normalization__: to plot [probability densities](https://en.wikipedia.org/wiki/Probability_density_function), set the argument _density=True_ of ``numpy.histogram``.
>
> 2. Compute the mean and the median of the number of comments per author and plot them as vertical lines on top of your histogram. What do you observe? Which value do you think is more meaningful?

> _Exercise 5: Authors lifespan_
>
> 1. For each author, find the time of publication of their first comment, _minTime_, and of their last comment, _maxTime_, as [unix timestamps](https://www.unixtimestamp.com/).
> 2. Compute the "lifespan" of each author as the difference between _maxTime_ and _minTime_. Note that timestamps are measured in seconds, but it is more appropriate here to compute the lifespan in days. Make a histogram showing the distribution of lifespans, choosing appropriate binning. What do you observe?
> 3. Now we will look at how many authors joined and abandoned the discussion on GME over time. First, use the numpy function [``numpy.histogram2d``](https://numpy.org/doc/stable/reference/generated/numpy.histogram2d.html) to create a 2-dimensional histogram of the two variables _minTime_ and _maxTime_. A 2D histogram is nothing but a histogram whose bins have two dimensions, as we look at two variables simultaneously. You need to specify two arrays of bins, one for the values along the x-axis (_minTime_) and one for the values along the y-axis (_maxTime_). Choose bins one week long.
> 4. Now use the matplotlib function [``plt.imshow``](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.imshow.html) to visualize the 2D histogram. You can follow [this example](https://stackoverflow.com/questions/2369492/generate-a-heatmap-in-matplotlib-using-a-scatter-data-set) on StackOverflow. To show dates instead of unix timestamps on the x and y axes, use [``mdates.date2num``](https://matplotlib.org/api/dates_api.html#matplotlib.dates.date2num); see the accepted answer of this [StackOverflow example](https://stackoverflow.com/questions/23139595/dates-in-the-xaxis-for-a-matplotlib-plot-with-imshow) for details.
> 5. Make sure the colormap makes the data easy to interpret by passing ``norm=mpl.colors.LogNorm()`` as an argument to imshow; this ensures your colormap is log-scaled. Then add a [colorbar](https://matplotlib.org/3.1.0/gallery/color/colorbar_basics.html) on the side of the figure, with an appropriate [colorbar label](https://matplotlib.org/3.1.1/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_label).
> 6. As usual :) Look at the figure, and write down three key observations.
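For reference, the two binning options described in Exercise 4 can be sketched as follows; the ``counts`` array here is a synthetic heavy-tailed stand-in (drawn from a Zipf distribution), not the actual comments dataset:

```python
import numpy as np

# Synthetic heavy-tailed data standing in for comments-per-author counts
rng = np.random.default_rng(42)
counts = rng.zipf(2.0, size=10_000)

# Linear binning: equally spaced edges (fine for light-tailed data)
lin_bins = np.linspace(counts.min(), counts.max(), num=31)
lin_hist, _ = np.histogram(counts, bins=lin_bins, density=True)

# Logarithmic binning: edges equally spaced in log space (better for heavy tails)
log_bins = np.logspace(np.log10(counts.min()), np.log10(counts.max()), num=31)
log_hist, _ = np.histogram(counts, bins=log_bins, density=True)

print(len(lin_hist), len(log_hist))  # 31 edges give 30 bins each
```

Plotting ``log_hist`` against the bin centers on log-log axes is usually the clearest way to inspect a heavy-tailed distribution.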
| github_jupyter |
# Use Spark to recommend mitigation for car rental company with `ibm-watson-machine-learning`
This notebook contains steps and code to create a predictive model and deploy it on WML. It introduces commands for pipeline creation, model training, model persistence to the Watson Machine Learning repository, model deployment, and scoring.
Some familiarity with Python is helpful. This notebook uses Python 3.6 and Apache® Spark 2.4.
You will use **car_rental_training** dataset.
## Learning goals
The learning goals of this notebook are:
- Load a CSV file into an Apache® Spark DataFrame.
- Explore data.
- Prepare data for training and evaluation.
- Create an Apache® Spark machine learning pipeline.
- Train and evaluate a model.
- Persist a pipeline and model in the Watson Machine Learning repository.
- Deploy a model for online scoring using the Watson Machine Learning API.
- Score sample scoring data using the Watson Machine Learning API.
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Load and explore data](#load)
3. [Create an Apache Spark machine learning model](#model)
4. [Store the model in the Watson Machine Learning repository](#persistence)
5. [Deploy the model in the IBM Cloud](#deploy)
6. [Score](#score)
7. [Clean up](#cleanup)
8. [Summary and next steps](#summary)
**Note:** This notebook works correctly with the kernel `Python 3.6 with Spark 2.4`; please **do not change the kernel**.
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`.
You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location.
API Key can be generated in the following way:
```
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
```
From the output, copy the value of `api_key`.
Location of your WML instance can be retrieved in the following way:
```
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
```
From the output, copy the value of `location`.
**Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get a service specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details.
You can also get a service-specific API key by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
**Action**: Enter your `api_key` and `location` in the following cell.
```
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have space already created, you can use [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one.
- Click New Deployment Space
- Create an empty space
- Select Cloud Object Storage
- Select Watson Machine Learning instance and press Create
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** you will be using.
```
client.set.default_space(space_id)
```
**Note**: Please restart the kernel (Kernel -> Restart)
### Test Spark
```
try:
from pyspark.sql import SparkSession
except:
print('Error: Spark runtime is missing. If you are using Watson Studio change the notebook runtime to Spark.')
raise
```
<a id="load"></a>
## 2. Load and explore data
In this section you will load the data as an Apache Spark DataFrame and perform a basic exploration.
Read the data into a Spark DataFrame from a CSV file and show a sample of the records.
### Load data
```
import os
from wget import download
sample_dir = 'spark_sample_model'
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
filename = os.path.join(sample_dir, 'car_rental_training_data.csv')
if not os.path.isfile(filename):
filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/data/cars-4-you/car_rental_training_data.csv', out=sample_dir)
spark = SparkSession.builder.getOrCreate()
df_data = spark.read\
.format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
.option('header', 'true')\
.option('inferSchema', 'true')\
.option("delimiter", ";")\
.load(filename)
df_data.take(3)
```
### Explore data
```
df_data.printSchema()
```
As you can see, the data contains eleven fields. The `Action` field is the one you would like to predict, using the feedback data in the `Customer_Service` field.
```
print("Number of records: " + str(df_data.count()))
```
As you can see, the data set contains 243 records.
```
df_data.select('Business_area').groupBy('Business_area').count().show()
df_data.select('Action').groupBy('Action').count().show(truncate=False)
```
<a id="model"></a>
## 3. Create an Apache Spark machine learning model
In this section you will learn how to:
- [3.1 Prepare data for training a model](#prep)
- [3.2 Create an Apache Spark machine learning pipeline](#pipe)
- [3.3 Train a model](#train)
<a id="prep"></a>
### 3.1 Prepare data for training a model
In this subsection you will split your data into: train and test data set.
```
train_data, test_data = df_data.randomSplit([0.8, 0.2], 24)
print("Number of training records: " + str(train_data.count()))
print("Number of testing records : " + str(test_data.count()))
```
### 3.2 Create the pipeline<a id="pipe"></a>
In this section you will create an Apache Spark machine learning pipeline and then train the model.
```
from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler, HashingTF, IDF, Tokenizer
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml import Pipeline, Model
```
In the following step, use the StringIndexer transformer to convert all the string fields to numeric ones.
```
string_indexer_gender = StringIndexer(inputCol="Gender", outputCol="gender_ix")
string_indexer_customer_status = StringIndexer(inputCol="Customer_Status", outputCol="customer_status_ix")
string_indexer_status = StringIndexer(inputCol="Status", outputCol="status_ix")
string_indexer_owner = StringIndexer(inputCol="Car_Owner", outputCol="owner_ix")
string_business_area = StringIndexer(inputCol="Business_Area", outputCol="area_ix")
assembler = VectorAssembler(inputCols=["gender_ix", "customer_status_ix", "status_ix", "owner_ix", "area_ix", "Children", "Age", "Satisfaction"], outputCol="features")
string_indexer_action = StringIndexer(inputCol="Action", outputCol="label").fit(df_data)
label_action_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=string_indexer_action.labels)
dt_action = DecisionTreeClassifier()
pipeline_action = Pipeline(stages=[string_indexer_gender, string_indexer_customer_status, string_indexer_status, string_indexer_action, string_indexer_owner, string_business_area, assembler, dt_action, label_action_converter])
model_action = pipeline_action.fit(train_data)
predictions_action = model_action.transform(test_data)
predictions_action.select('Business_Area','Action','probability','predictedLabel').show(2)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions_action)
print("Accuracy = %g" % accuracy)
```
<a id="persistence"></a>
## 4. Persist model
In this section you will learn how to store your pipeline and model in Watson Machine Learning repository by using python client libraries.
**Note**: Apache® Spark 2.4 is required.
### 4.1 Save training data in your Cloud Object Storage
The `ibm-cos-sdk` library allows Python developers to manage Cloud Object Storage (COS).
```
import ibm_boto3
from ibm_botocore.client import Config
```
**Action**: Put the credentials from your Object Storage Service instance in Bluemix here.
```
cos_credentials = {
"apikey": "***",
"cos_hmac_keys": {
"access_key_id": "***",
"secret_access_key": "***"
},
"endpoints": "***",
"iam_apikey_description": "***",
"iam_apikey_name": "***",
"iam_role_crn": "***",
"iam_serviceid_crn": "***",
"resource_instance_id": "***"
}
connection_apikey = cos_credentials['apikey']
connection_resource_instance_id = cos_credentials["resource_instance_id"]
connection_access_key_id = cos_credentials['cos_hmac_keys']['access_key_id']
connection_secret_access_key = cos_credentials['cos_hmac_keys']['secret_access_key']
```
**Action**: Define the service endpoint we will use. <br>
**Tip**: You can find this information in the Endpoints section of your Cloud Object Storage instance's dashboard.
```
service_endpoint = 'https://s3.us.cloud-object-storage.appdomain.cloud'
```
You also need the IBM Cloud authorization endpoint to be able to create the COS resource object.
```
auth_endpoint = 'https://iam.cloud.ibm.com/identity/token'
```
We create a COS resource object to be able to write data to Cloud Object Storage.
```
cos = ibm_boto3.resource('s3',
ibm_api_key_id=cos_credentials['apikey'],
ibm_service_instance_id=cos_credentials['resource_instance_id'],
ibm_auth_endpoint=auth_endpoint,
config=Config(signature_version='oauth'),
endpoint_url=service_endpoint)
```
Now you will create a bucket in COS and upload the training dataset **car_rental_training_data.csv** to it.
```
from uuid import uuid4
bucket_uid = str(uuid4())
score_filename = "car_rental_training_data.csv"
buckets = ["car-rental-" + bucket_uid]
for bucket in buckets:
if not cos.Bucket(bucket) in cos.buckets.all():
print('Creating bucket "{}"...'.format(bucket))
try:
cos.create_bucket(Bucket=bucket)
except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
print('Error: {}.'.format(e.response['Error']['Message']))
bucket_obj = cos.Bucket(buckets[0])
print('Uploading data {}...'.format(score_filename))
with open(filename, 'rb') as f:
bucket_obj.upload_fileobj(f, score_filename)
print('{} is uploaded.'.format(score_filename))
```
### 4.2 Save the pipeline and model<a id="save"></a>
```
training_data_references = [
{
"id":"car-rental-training",
"type": "s3",
"connection": {
"access_key_id": connection_access_key_id,
"endpoint_url": service_endpoint,
"secret_access_key": connection_secret_access_key
},
"location": {
"bucket": buckets[0],
"path": score_filename,
}
}
]
saved_model = client.repository.store_model(
model=model_action,
meta_props={
client.repository.ModelMetaNames.NAME:"CARS4U - Action Recommendation Model",
client.repository.ModelMetaNames.SPACE_UID: space_id,
client.repository.ModelMetaNames.TYPE: "mllib_2.4",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_id_by_name('spark-mllib_2.4'),
client.repository.ModelMetaNames.TRAINING_DATA_REFERENCES: training_data_references,
client.repository.ModelMetaNames.LABEL_FIELD: "Action",
},
training_data=train_data,
pipeline=pipeline_action)
```
Get saved model metadata from Watson Machine Learning.
```
published_model_id = client.repository.get_model_uid(saved_model)
print("Model Id: " + str(published_model_id))
```
**Model Id** can be used to retrieve the latest model version from the Watson Machine Learning instance.
Below you can see stored model details.
```
client.repository.get_model_details(published_model_id)
```
<a id="deploy"></a>
## 5. Deploy model in the IBM Cloud
You can use the following command to create an online deployment in the cloud.
```
deployment_details = client.deployments.create(
published_model_id,
meta_props={
client.deployments.ConfigurationMetaNames.NAME: "CARS4U - Action Recommendation model deployment",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
)
deployment_details
```
<a id="score"></a>
## 6. Score
```
fields = ['ID', 'Gender', 'Status', 'Children', 'Age', 'Customer_Status','Car_Owner', 'Customer_Service', 'Business_Area', 'Satisfaction']
values = [3785, 'Male', 'S', 1, 17, 'Inactive', 'Yes', 'The car should have been brought to us instead of us trying to find it in the lot.', 'Product: Information', 0]
import json
payload_scoring = {"input_data": [{"fields": fields,"values": [values]}]}
scoring_response = client.deployments.score(client.deployments.get_id(deployment_details), payload_scoring)
print(json.dumps(scoring_response, indent=3))
```
<a id="cleanup"></a>
## 7. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 8. Summary and next steps
You successfully completed this notebook! You learned how to use Apache Spark machine learning as well as Watson Machine Learning for model creation and deployment. Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics) for more samples, tutorials, documentation, how-tos, and blog posts.
### Authors
**Amadeusz Masny**, Python Software Developer in Watson Machine Learning at IBM
Copyright © 2020 IBM. This notebook and its source code are released under the terms of the MIT License.
| github_jupyter |
# Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
# Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models.
```
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2)
```
## Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers.
```
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
```
The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`.
```
torch.save(model.state_dict(), 'checkpoint.pth')
```
Then we can load the state dict with `torch.load`.
```
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
```
And to load the state dict in to the network, you do `model.load_state_dict(state_dict)`.
```
model.load_state_dict(state_dict)
```
Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
```
# Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict)
```
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model.
```
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')
```
Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
```
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('checkpoint.pth')
print(model)
```
| github_jupyter |
# Example 01: General Use of BinaryClassificationMetrics
[](https://colab.research.google.com/github/slickml/slick-ml/blob/master/examples/metrics/example_01_BinaryClassificationMetrics.ipynb)
### Google Colab Configuration
```
# !git clone https://github.com/slickml/slick-ml.git
# %cd slick-ml
# !pip install -r requirements.txt
```
### Local Environment Configuration
```
# Change path to project root
%cd ../..
```
### Import Python Libraries
```
%load_ext autoreload
# widen the screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# change the path and loading class
import os, sys
import pandas as pd
import numpy as np
import seaborn as sns
%autoreload
from slickml.metrics import BinaryClassificationMetrics
```
_____
# BinaryClassificationMetrics Docstring
```
help(BinaryClassificationMetrics)
```
## Example 1
```
# y_true values
y_true = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1,
1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1,
1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1,
1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0,
1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0,
1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1,
1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0,
1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
# Y_pred_proba values
y_pred_proba = [0. , 0.12, 0.78, 0.07, 1. , 0.05, 1. , 0. , 1. , 0. , 1. ,
0.99, 0.93, 0.88, 0.86, 1. , 0.99, 1. , 1. , 0.74, 0. , 1. ,
1. , 0.79, 1. , 0.58, 1. , 0.95, 1. , 1. , 1. , 0.38, 1. ,
0.94, 1. , 1. , 1. , 0.01, 0.81, 1. , 0.99, 1. , 0.4 , 1. ,
1. , 1. , 0.9 , 0.06, 0. , 0.02, 0.99, 0.45, 1. , 1. , 0.52,
0.99, 0.02, 0. , 1. , 0.04, 0.19, 0.99, 0. , 0. , 0.11, 1. ,
1. , 0.31, 1. , 0.25, 0. , 0. , 0.99, 1. , 0.01, 0.09, 0. ,
1. , 0.98, 0. , 0.6 , 0.1 , 1. , 1. , 0. , 1. , 0.96, 0.02,
1. , 0.84, 1. , 0.97, 0.01, 0.99, 0.4 , 0. , 0.18, 1. , 1. ,
1. , 0.96, 0.04, 1. , 0.17, 1. , 0.96, 1. , 0. , 1. , 0.06,
1. , 0.75, 0.64, 0.74, 0.5 , 0.97, 0.11, 0.9 , 0. , 0.15, 1. ,
0.11, 1. , 0.02, 1. , 0.27, 0.95, 0.91, 0.99, 0. , 1. , 0.79,
1. , 1. , 0.87, 1. , 1. , 0. , 0.73, 0.97, 1. , 0.82, 0.3 ,
0. , 0.09, 1. , 1. , 1. , 1. , 1. , 0.76, 0.75, 0.99, 0.99,
0.96, 0.01, 0.08, 0.98, 1. , 0. , 1. , 1. , 0.82, 0.04, 0.98,
0. , 1. , 1. , 0.02, 0. , 1. , 0.99, 1. , 0.96, 0. , 0. ,
1. , 0. , 1. , 1. , 0. , 0.83, 0. , 0.15, 1. , 0.98, 0.98,
1. ]
example1 = BinaryClassificationMetrics(y_true, y_pred_proba, precision_digits=3)
example1.plot(figsize=(12, 12),
save_path=None)
```
## Example 2
```
example = BinaryClassificationMetrics(y_true, y_pred_proba, display_df=False)
print(F"Accuracy = {example.accuracy_}")
print(F"Balanced Accuracy = {example.balanced_accuracy_}")
print(F"AUC ROC = {example.auc_roc_}")
print(F"AUC PR = {example.auc_pr_}")
print(F"Precision = {example.precision_}")
print(F"Recall = {example.recall_}")
print(F"F1-Score = {example.f1_}")
print(F"F2-Score = {example.f2_}")
print(F"F0.5-Score = {example.f05_}")
print(F"Average Precision = {example.average_precision_}")
print(F"Threat Score = {example.threat_score_}")
print(F"Metrics Dict = {example.metrics_dict_}")
print(F"Thresholds Dict = {example.thresholds_dict_}")
example.plot()
thresholds = example.thresholds_dict_
methods = example.average_methods_
frames = []
for method in methods:
for threshold in thresholds:
ex = BinaryClassificationMetrics(y_true, y_pred_proba, threshold=thresholds[threshold], average_method=method, display_df=False)
frames.append(ex.metrics_df_)
df_to_show = pd.concat(frames)
# Set CSS properties
th_props = [("font-size", "12px"),
("text-align", "left"),
("font-weight", "bold")]
td_props = [("font-size", "12px"),
("text-align", "center")]
# Set table styles
styles = [dict(selector = "th", props = th_props),
dict(selector = "td", props = td_props)]
cm = sns.light_palette("blue", as_cmap = True)
display(df_to_show.style.background_gradient(cmap = cm) \
.set_table_styles(styles))
```
## Example 3
```
# loading data from slick-ml/data
data = pd.read_csv("./data/clf_data.csv")
data.head()
# setting up the X, y
y = data["CLASS"].values
X = data.drop(["CLASS"], axis=1)
# train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True, stratify=y)
# train a classifier
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
y_pred_proba = clf.predict_proba(X_test)
example3 = BinaryClassificationMetrics(y_test, y_pred_proba[:,1])
example3.plot()
thresholds = example3.thresholds_dict_
methods = example3.average_methods_
frames = []
for method in methods:
for threshold in thresholds:
ex = BinaryClassificationMetrics(y_test, y_pred_proba[:,1], threshold=thresholds[threshold], average_method=method, display_df=False)
frames.append(ex.metrics_df_)
df_to_show = pd.concat(frames)
# Set CSS properties
th_props = [("font-size", "12px"),
("text-align", "left"),
("font-weight", "bold")]
td_props = [("font-size", "12px"),
("text-align", "center")]
# Set table styles
styles = [dict(selector = "th", props = th_props),
dict(selector = "td", props = td_props)]
cm = sns.light_palette("blue", as_cmap = True)
display(df_to_show.round(decimals=3).style.background_gradient(cmap = cm).set_table_styles(styles))
```
## Example 4
```
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
X = data.data
y = data.target
# train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True, stratify=y)
# train a classifier
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
y_pred_proba = clf.predict_proba(X_test)[:, 1]
example4 = BinaryClassificationMetrics(y_test, y_pred_proba)
example4.plot()
```
| github_jupyter |
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
## _*Shor's Algorithm for Integer Factorization*_
The latest version of this tutorial notebook is available on https://github.com/qiskit/qiskit-tutorial.
In this tutorial, we first introduce the problem of [integer factorization](#factorization) and describe how [Shor's algorithm](#shorsalgorithm) solves it in detail. We then [implement](#implementation) a version of it in Qiskit.
### Contributors
Anna Phan
***
## Integer Factorization <a id='factorization'></a>
Integer factorization is the decomposition of a composite integer into a product of smaller integers; for example, the integer $100$ can be factored into $10 \times 10$. If these factors are restricted to prime numbers, the process is called prime factorization; for example, the prime factorization of $100$ is $2 \times 2 \times 5 \times 5$.
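As a minimal illustration of the problem, prime factorization by trial division can be written in a few lines of Python (``prime_factors`` is just an illustrative helper; this naive approach is exactly what becomes infeasible for large integers):

```python
def prime_factors(n):
    """Return the prime factorization of n as a list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        # Divide out each prime factor d as many times as it appears
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(100))  # → [2, 2, 5, 5]
print(prime_factors(15))   # → [3, 5]
```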
When the integers are very large, no efficient classical integer factorization algorithm is known. The hardest instances are semiprime numbers, the products of two prime numbers. In [2009](https://link.springer.com/chapter/10.1007/978-3-642-14623-7_18), a team of researchers factored a 232-decimal-digit semiprime number (768 bits), spending the computational equivalent of more than two thousand years on a single-core 2.2 GHz AMD Opteron processor with 2 GB RAM:
```
RSA-768 = 12301866845301177551304949583849627207728535695953347921973224521517264005
07263657518745202199786469389956474942774063845925192557326303453731548268
50791702612214291346167042921431160222124047927473779408066535141959745985
6902143413
= 33478071698956898786044169848212690817704794983713768568912431388982883793
878002287614711652531743087737814467999489
× 36746043666799590428244633799627952632279158164343087642676032283815739666
511279233373417143396810270092798736308917
```
The presumed difficulty of this semiprime factorization problem underlies many encryption algorithms, such as [RSA](https://www.google.com/patents/US4405829), which is used in online credit card transactions, amongst other applications.
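As a quick sanity check on the published record, the two factors can be multiplied back together; the digits below are transcribed from the block above (``N``, ``p``, and ``q`` are simply local names for the semiprime and its two prime factors):

```python
# Digits transcribed from the RSA-768 record above
N = int(
    "12301866845301177551304949583849627207728535695953347921973224521517264005"
    "07263657518745202199786469389956474942774063845925192557326303453731548268"
    "50791702612214291346167042921431160222124047927473779408066535141959745985"
    "6902143413"
)
p = int(
    "33478071698956898786044169848212690817704794983713768568912431388982883793"
    "878002287614711652531743087737814467999489"
)
q = int(
    "36746043666799590428244633799627952632279158164343087642676032283815739666"
    "511279233373417143396810270092798736308917"
)

assert p * q == N          # the factorization checks out
print(N.bit_length())      # → 768
print(len(str(N)))         # → 232 decimal digits
```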
***
## Shor's Algorithm <a id='shorsalgorithm'></a>
Shor's algorithm, named after mathematician Peter Shor, is a polynomial time quantum algorithm for integer factorization formulated in [1994](http://epubs.siam.org/doi/10.1137/S0097539795293172). It is arguably the most dramatic example of how the paradigm of quantum computing changed our perception of which computational problems should be considered tractable, motivating the study of new quantum algorithms and efforts to design and construct quantum computers. It also has expedited research into new cryptosystems not based on integer factorization.
Shor's algorithm has been experimentally realised by multiple teams for specific composite integers. The composite $15$ was first factored into $3 \times 5$ in [2001](https://www.nature.com/nature/journal/v414/n6866/full/414883a.html) using seven NMR qubits, and has since been implemented using four photon qubits in 2007 by [two](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.99.250504) [teams](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.99.250505), three solid state qubits in [2012](https://www.nature.com/nphys/journal/v8/n10/full/nphys2385.html) and five trapped ion qubits in [2016](http://science.sciencemag.org/content/351/6277/1068). The composite $21$ has also been factored into $3 \times 7$ in [2012](http://www.nature.com/nphoton/journal/v6/n11/full/nphoton.2012.259.html) using a photon qubit and qutrit (a three level system). Note that these experimental demonstrations rely on significant optimisations of Shor's algorithm based on a priori knowledge of the expected results. In general, [$2 + \frac{3}{2}\log_2N$](https://link-springer-com.virtual.anu.edu.au/chapter/10.1007/3-540-49208-9_15) qubits are needed to factor the composite integer $N$, meaning at least $1,154$ qubits would be needed to factor RSA-768 above.
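The qubit estimate above is easy to verify; a one-line sketch (not part of the original notebook) applied to the 768-bit RSA-768 modulus:

```python
import math

# Qubit estimate 2 + (3/2)*log2(N); for an n-bit integer, log2(N) is
# essentially its bit length, so RSA-768 contributes 768 here.
def shor_qubit_estimate(bits):
    return math.ceil(2 + 1.5 * bits)

print(shor_qubit_estimate(768))  # 1154
```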
```
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/hOlOY7NyMfs?start=75&end=126" frameborder="0" allowfullscreen></iframe>')
```
As Peter Shor describes in the video above from [PhysicsWorld](http://physicsworld.com/cws/article/multimedia/2015/sep/30/what-is-shors-factoring-algorithm), Shor’s algorithm is composed of three parts. The first part turns the factoring problem into a period finding problem using number theory, which can be computed on a classical computer. The second part finds the period using the quantum Fourier transform and is responsible for the quantum speedup of the algorithm. The third part uses the period found to calculate the factors.
The following sections go through the algorithm in detail; for those who just want the steps, without the lengthy explanation, refer to the [blue](#stepsone) [boxes](#stepstwo) before jumping down to the [implementation](#implementation).
### From Factorization to Period Finding
The number theory that underlies Shor's algorithm relates to periodic modulo sequences. Let's have a look at an example of such a sequence. Consider the sequence of the powers of two:
$$1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, ...$$
Now let's look at the same sequence 'modulo 15', that is, the remainder after fifteen divides each of these powers of two:
$$1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, ...$$
This is a modulo sequence that repeats every four numbers, that is, a periodic modulo sequence with a period of four.
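The sequence above can be generated directly; a quick sketch (not in the original notebook):

```python
# Powers of two modulo 15: the sequence repeats with period four
seq = [2**k % 15 for k in range(12)]
print(seq)  # [1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8]
```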
The reduction of the factorization of $N$ to the problem of finding the period of an integer $x$, with $1 < x < N$, depends on the following result from number theory:
> The function $\mathcal{F}(a) = x^a \bmod N$ is a periodic function, where $x$ is an integer coprime to $N$ and $a \ge 0$.
Note that two numbers are coprime if the only positive integer that divides both of them is 1. This is equivalent to their greatest common divisor being 1. For example, 8 and 15 are coprime, as they don't share any common factors (other than 1). However, 9 and 15 are not coprime, since they are both divisible by 3 (and 1).
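Python's standard library can check coprimality directly; a short sketch of the two examples just given:

```python
import math

# gcd == 1 means the two numbers are coprime
print(math.gcd(8, 15))  # 1 -> 8 and 15 are coprime
print(math.gcd(9, 15))  # 3 -> 9 and 15 are not coprime
```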
> Since $\mathcal{F}(a)$ is a periodic function, it has some period $r$. Knowing that $x^0 \bmod N = 1$, this means that $x^r \bmod N = 1$ since the function is periodic, and thus $r$ is just the first nonzero power where $x^r = 1 (\bmod N)$.
Given this information and through the following algebraic manipulation:
$$ x^r \equiv 1 \bmod N $$
$$ x^r = (x^{r/2})^2 \equiv 1 \bmod N $$
$$ (x^{r/2})^2 - 1 \equiv 0 \bmod N $$
and if $r$ is an even number:
$$ (x^{r/2} + 1)(x^{r/2} - 1) \equiv 0 \bmod N $$
From this, the product $(x^{r/2} + 1)(x^{r/2} - 1)$ is an integer multiple of $N$, the number to be factored. Thus, so long as neither $(x^{r/2} + 1)$ nor $(x^{r/2} - 1)$ is a multiple of $N$, at least one of them must have a nontrivial factor in common with $N$.
So computing $\text{gcd}(x^{r/2} - 1, N)$ and $\text{gcd}(x^{r/2} + 1, N)$ will obtain a factor of $N$, where $\text{gcd}$ is the greatest common divisor function, which can be calculated by the polynomial time [Euclidean algorithm](https://en.wikipedia.org/wiki/Euclidean_algorithm).
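A worked instance of this recovery (a standalone sketch, not in the original notebook) for $N = 15$ and $x = 7$: the period of $7^a \bmod 15$ is $r = 4$, so the gcds of $7^2 \mp 1 = 48, 50$ with $15$ give the factors.

```python
import math

N, x = 15, 7
# Find the period r of 7^a mod 15 by brute force: 7, 4, 13, 1 -> r = 4
r, t = 1, x % N
while t != 1:
    t = (t * x) % N
    r += 1
assert r % 2 == 0
p = math.gcd(x**(r//2) - 1, N)  # gcd(48, 15) = 3
q = math.gcd(x**(r//2) + 1, N)  # gcd(50, 15) = 5
print(r, p, q)  # 4 3 5
```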
#### Classical Steps to Shor's Algorithm
Let's assume for a moment that a period finding machine exists that takes as input coprime integers $x, N$ and outputs the period of $x \bmod N$, implemented as a brute-force search below. Let's show how to use the machine to find all prime factors of $N$ using the number theory described above.
```
# Brute force period finding algorithm
def find_period_classical(x, N):
    n = 1
    t = x
    while t != 1:
        t *= x
        t %= N
        n += 1
    return n
```
For simplicity, assume that $N$ has only two distinct prime factors: $N = pq$.
<div class="alert alert-block alert-info"> <a id='stepsone'></a>
<ol>
<li>Pick a random integer $x$ between $1$ and $N$ and compute the greatest common divisor $\text{gcd}(x,N)$ using Euclid's algorithm.</li>
<li>If $x$ and $N$ have some common prime factors, $\text{gcd}(x,N)$ will equal $p$ or $q$. Otherwise $\text{gcd}(x,N) = 1$, meaning $x$ and $N$ are coprime. </li>
<li>Let $r$ be the period of $x \bmod N$ computed by the period finding machine. Repeat the above steps with different random choices of $x$ until $r$ is even.</li>
<li>Now $p$ and $q$ can be found by computing $\text{gcd}(x^{r/2} \pm 1, N)$ as long as $x^{r/2} \not\equiv \pm 1 \pmod N$.</li>
</ol>
</div>
As an example, consider $N = 15$. Let's look at all values of $1 < x < 15$ where $x$ is coprime with $15$:
| $x$ | $x^a \bmod 15$ | Period $r$ |$\text{gcd}(x^{r/2}-1,15)$|$\text{gcd}(x^{r/2}+1,15)$ |
|:-----:|:----------------------------:|:----------:|:------------------------:|:-------------------------:|
| 2 | 1,2,4,8,1,2,4,8,1,2,4... | 4 | 3 | 5 |
| 4 | 1,4,1,4,1,4,1,4,1,4,1... | 2 | 3 | 5 |
| 7 | 1,7,4,13,1,7,4,13,1,7,4... | 4 | 3 | 5 |
| 8 | 1,8,4,2,1,8,4,2,1,8,4... | 4 | 3 | 5 |
| 11 | 1,11,1,11,1,11,1,11,1,11,1...| 2 | 5 | 3 |
| 13 | 1,13,4,7,1,13,4,7,1,13,4,... | 4 | 3 | 5 |
|  14   | 1,14,1,14,1,14,1,14,1,14,1...| 2          | 1                        | 15                        |
As can be seen, any value of $x$ except $14$ will return the factors of $15$, that is, $3$ and $5$. $14$ is an example of the special case where $(x^{r/2} + 1)$ or $(x^{r/2} - 1)$ is a multiple of $N$ and thus another $x$ needs to be tried.
In general, it can be shown that this special case occurs infrequently, so on average only two calls to the period finding machine are sufficient to factor $N$.
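The table above can be reproduced programmatically; a short standalone sketch (not in the original notebook):

```python
import math

# For each x coprime to 15 with an even period r, compute gcd(x^(r/2) -/+ 1, 15)
N = 15
rows = {}
for x in range(2, N):
    if math.gcd(x, N) != 1:
        continue
    r, t = 1, x % N           # brute-force period of x^a mod N
    while t != 1:
        t = (t * x) % N
        r += 1
    if r % 2 == 0:
        rows[x] = (r, math.gcd(x**(r//2) - 1, N), math.gcd(x**(r//2) + 1, N))
for x, (r, p, q) in rows.items():
    print(x, r, p, q)
```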
For a more interesting example, let's first find a larger number $N$ that is a semiprime but still relatively small. Using the [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) [Python implementation](http://archive.oreilly.com/pub/a/python/excerpt/pythonckbk_chap1/index1.html?page=last), let's generate a list of all the prime numbers less than a thousand, randomly select two, and multiply them.
```
import random, itertools

# Sieve of Eratosthenes algorithm
def sieve():
    D = {}
    yield 2
    for q in itertools.islice(itertools.count(3), 0, None, 2):
        p = D.pop(q, None)
        if p is None:
            D[q*q] = q
            yield q
        else:
            x = p + q
            while x in D or not (x & 1):
                x += p
            D[x] = p

# Creates a list of prime numbers up to the given argument
def get_primes_sieve(n):
    return list(itertools.takewhile(lambda p: p < n, sieve()))

def get_semiprime(n):
    primes = get_primes_sieve(n)
    l = len(primes)
    p = primes[random.randrange(l)]
    q = primes[random.randrange(l)]
    return p*q

N = get_semiprime(1000)
print("semiprime N =", N)
```
Now implement the [above steps](#stepsone) of Shor's Algorithm:
```
import math

def shors_algorithm_classical(N):
    x = random.randint(2, N - 1)                        # step one
    if math.gcd(x, N) != 1:                             # step two
        return x, 0, math.gcd(x, N), N // math.gcd(x, N)
    r = find_period_classical(x, N)                     # step three
    while r % 2 != 0:
        # repeat with a different random (coprime) x until r is even
        x = random.randint(2, N - 1)
        while math.gcd(x, N) != 1:
            x = random.randint(2, N - 1)
        r = find_period_classical(x, N)
    p = math.gcd(x**(r//2) + 1, N)                      # step four, ignoring the case where (x^(r/2) +/- 1) is a multiple of N
    q = math.gcd(x**(r//2) - 1, N)
    return x, r, p, q

x, r, p, q = shors_algorithm_classical(N)
print("semiprime N = ", N, ", coprime x = ", x, ", period r = ", r, ", prime factors = ", p, " and ", q, sep="")
```
### Quantum Period Finding <a id='quantumperiodfinding'></a>
Let's first describe the quantum period finding algorithm, and then go through a few of the steps in detail, before going through an example. This algorithm takes two coprime integers, $x$ and $N$, and outputs $r$, the period of $\mathcal{F}(a) = x^a\bmod N$.
<div class="alert alert-block alert-info"><a id='stepstwo'></a>
<ol>
<li> Choose $T = 2^t$ such that $N^2 \leq T \le 2N^2$. Initialise two registers of qubits, first an argument register with $t$ qubits and second a function register with $n = \lceil \log_2 N \rceil$ qubits. These registers start in the initial state:
$$\vert\psi_0\rangle = \vert 0 \rangle \vert 0 \rangle$$ </li>
<li> Apply a Hadamard gate on each of the qubits in the argument register to yield an equally weighted superposition of all integers from $0$ to $T-1$:
$$\vert\psi_1\rangle = \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert 0 \rangle$$ </li>
<li> Implement the modular exponentiation function $x^a \bmod N$ on the function register, giving the state:
$$\vert\psi_2\rangle = \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert x^a \bmod N \rangle$$
This $\vert\psi_2\rangle$ is highly entangled and exhibits quantum parallelism: the function register holds $x^a \bmod N$ for every input $a$ from $0$ to $T-1$, entangled with the corresponding $\vert a \rangle$, even though the function was only executed once. </li>
<li> Perform a quantum Fourier transform on the argument register, resulting in the state:
$$\vert\psi_3\rangle = \frac{1}{T}\sum_{a=0}^{T-1}\sum_{z=0}^{T-1}e^{(2\pi i)(az/T)}\vert z \rangle \vert x^a \bmod N \rangle$$
where due to the interference, only the terms $\vert z \rangle$ with
$$z = qT/r $$
have significant amplitude where $q$ is a random integer ranging from $0$ to $r-1$ and $r$ is the period of $\mathcal{F}(a) = x^a\bmod N$. </li>
<li> Measure the argument register to obtain classical result $z$. With reasonable probability, the continued fraction approximation of $T / z$ will be an integer multiple of the period $r$. Euclid's algorithm can then be used to find $r$.</li>
</ol>
</div>
Note how quantum parallelism and constructive interference have been used to detect and measure periodicity of the modular exponentiation function. The fact that interference makes it easier to measure periodicity should not come as a big surprise. After all, physicists routinely use scattering of electromagnetic waves and interference measurements to determine periodicity of physical objects such as crystal lattices. Likewise, Shor's algorithm exploits interference to measure periodicity of arithmetic objects, a computational interferometer of sorts.
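The continued-fraction recovery in step 5 can be sketched with Python's `fractions` module (a standalone illustration, not in the original notebook); the values $z = 85$, $T = 512$, $N = 21$ are hypothetical inputs chosen so the recovered period is $6$. Note this recovers $r$ directly only when the random $q$ in $z/T \approx q/r$ is coprime to $r$.

```python
from fractions import Fraction

N, T = 21, 512
z = 85  # hypothetical measurement outcome of the argument register
# limit_denominator performs the continued-fraction approximation of z/T
# with denominator at most N; the denominator is a candidate period
approx = Fraction(z, T).limit_denominator(N)
r = approx.denominator
print(r)  # 6
```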
#### Modular Exponentiation
The modular exponentiation, step 3 above, that is, the evaluation of $x^a \bmod N$ for $2^t$ values of $a$ in parallel, is the most demanding part of the algorithm. It can be performed using the binary representation of the exponent: $a = a_{t-1}2^{t-1} + \dots + a_1 2^1 + a_0 2^0$, where $a_i$ are the binary digits of $a$. From this, it follows that:
\begin{aligned}
x^a \bmod N & = x^{2^{(t-1)}a_{t-1}} \cdots x^{2a_1}x^{a_0} \bmod N \\
& = x^{2^{(t-1)}a_{t-1}} \cdots [x^{2a_1}[x^{a_0} \bmod N] \bmod N] \cdots \bmod N \\
\end{aligned}
This means that $1$ is first multiplied by $x^1 \bmod N$ if and only if $a_0 = 1$, then the result is multiplied by $x^2 \bmod N$ if and only if $a_1 = 1$, and so forth, until finally the result is multiplied by $x^{2^{(t-1)}}\bmod N$ if and only if $a_{t-1} = 1$.
Therefore, the modular exponentiation consists of $t$ serial multiplications modulo $N$, each controlled by the corresponding qubit $a_k$. The values $x, x^2, ..., x^{2^{(t-1)}} \bmod N$ can be found efficiently on a classical computer by repeated squaring.
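The classical precomputation by repeated squaring can be sketched as follows (illustrative only, not part of the original notebook); each entry equals `pow(x, 2**k, N)` but is obtained by squaring the previous one:

```python
# Precompute x, x^2, x^4, ..., x^(2^(t-1)) mod N by repeated squaring,
# as would be done classically before building the controlled multipliers
def repeated_squares(x, N, t):
    powers = []
    v = x % N
    for _ in range(t):
        powers.append(v)
        v = (v * v) % N
    return powers

print(repeated_squares(2, 21, 5))  # [2, 4, 16, 4, 16]
```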
#### Quantum Fourier Transform
The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT), step 4 above, is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction.
The classical discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.
Similarly, the quantum Fourier transform acts on a quantum state $\sum_{i=0}^{N-1} x_i \vert i \rangle$ and maps it to the quantum state $\sum_{i=0}^{N-1} y_i \vert i \rangle$ according to the formula
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$
with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.
This can also be expressed as the map:
$$\vert x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle$$
Or the unitary matrix:
$$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \omega_N^{xy} \vert y \rangle \langle x \vert$$
As an example, we've actually already seen the quantum Fourier transform for $N = 2$: it is the Hadamard operator ($H$):
$$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
Suppose we have the single qubit state $\alpha \vert 0 \rangle + \beta \vert 1 \rangle$, if we apply the $H$ operator to this state, we obtain the new state:
$$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle
\equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$
Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state.
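This observation is easy to verify numerically; a standalone NumPy sketch (not in the original notebook) builds the $U_{QFT}$ matrix from the definition above and checks that the $N = 2$ case is the Hadamard and that the matrix is unitary:

```python
import numpy as np

def qft_matrix(N):
    # U[y, x] = omega^(x*y) / sqrt(N), with omega = exp(2*pi*i/N)
    omega = np.exp(2j * np.pi / N)
    x, y = np.meshgrid(np.arange(N), np.arange(N))
    return omega**(x * y) / np.sqrt(N)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.allclose(qft_matrix(2), H))                                    # True
print(np.allclose(qft_matrix(8) @ qft_matrix(8).conj().T, np.eye(8)))   # True
```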
So what does the quantum Fourier transform look like for larger N? Let's derive a circuit for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1...x_n \rangle$ where $x_1$ is the most significant bit.
\begin{aligned}
QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle \:\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n\\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 ... y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1...y_n, y/2^n = \sum_{k=1}^n y_k/2^k \\
& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 ... y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \\
& = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \\
& = \frac{1}{\sqrt{N}} \left(\vert0\rangle + e^{2 \pi i[0.x_n]} \vert1\rangle\right) \otimes...\otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1x_2...x_n]} \vert1\rangle\right) \:\text{as}\: e^{2 \pi i x/2^k} = e^{2 \pi i[0.x_{n-k+1}...x_n]}
\end{aligned}
This is a very useful form of the QFT for $N=2^n$, as only the last qubit depends on the values of all the other input qubits, and each further bit depends less and less on the input qubits. Furthermore, note that $e^{2 \pi i[0.x_n]}$ is either $+1$ or $-1$, which resembles the Hadamard transform.
Before we create the circuit code for general $N=2^n$, let's look at $N=8,n=3$:
$$QFT_8\vert x_1x_2x_3\rangle = \frac{1}{\sqrt{8}} \left(\vert0\rangle + e^{2 \pi i[0.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_2.x_3]} \vert1\rangle\right) \otimes \left(\vert0\rangle + e^{2 \pi i[0.x_1.x_2.x_3]} \vert1\rangle\right) $$
The steps to creating the circuit for $\vert y_1y_2y_3\rangle = QFT_8\vert x_1x_2x_3\rangle$, remembering the [controlled phase rotation gate](../tools/quantum_gates_and_linear_algebra.ipynb) $CU_1$, would be:
1. Apply a Hadamard to $\vert x_3 \rangle$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i.0.x_3} \vert1\rangle\right) = \frac{1}{\sqrt{2}}\left(\vert0\rangle + (-1)^{x_3} \vert1\rangle\right)$
2. Apply a Hadamard to $\vert x_2 \rangle$, then depending on $x_3$ (before its Hadamard gate) a $CU_1(\frac{\pi}{2})$, giving the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + e^{2 \pi i[0.x_2x_3]} \vert1\rangle\right)$.
3. Apply a Hadamard to $\vert x_1 \rangle$, then $CU_1(\frac{\pi}{2})$ depending on $x_2$, and $CU_1(\frac{\pi}{4})$ depending on $x_3$.
4. Measure the bits in reverse order, that is $y_3 = x_1, y_2 = x_2, y_1 = x_3$.
In Qiskit, this is:
```
import math
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit

q3 = QuantumRegister(3, 'q3')
c3 = ClassicalRegister(3, 'c3')
qft3 = QuantumCircuit(q3, c3)
qft3.h(q3[0])
qft3.cu1(math.pi/2.0, q3[1], q3[0])
qft3.h(q3[1])
qft3.cu1(math.pi/4.0, q3[2], q3[0])
qft3.cu1(math.pi/2.0, q3[2], q3[1])
qft3.h(q3[2])
```
For $N=2^n$, this can be generalised, as in the `qft` function in [tools.qi](https://github.com/Q/qiskit-terra/blob/master/qiskit/tools/qi/qi.py):
```
def qft(circ, q, n):
    """n-qubit QFT on q in circ."""
    for j in range(n):
        for k in range(j):
            circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
        circ.h(q[j])
#### Example
Let's factorize $N = 21$ with coprime $x=2$, following the [above steps](#stepstwo) of the quantum period finding algorithm, which should return $r = 6$. This example follows one from [this](https://arxiv.org/abs/quant-ph/0303175) tutorial.
1. Choose $T = 2^t$ such that $N^2 \leq T \le 2N^2$. For $N = 21$, the smallest value of $t$ is 9, meaning $T = 2^t = 512$. Initialise two registers of qubits, first an argument register with $t = 9$ qubits, and second a function register with $n = \lceil \log_2 N \rceil = 5$ qubits:
$$\vert\psi_0\rangle = \vert 0 \rangle \vert 0 \rangle$$
2. Apply a Hadamard gate on each of the qubits in the argument register:
$$\vert\psi_1\rangle = \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert 0 \rangle = \frac{1}{\sqrt{512}}\sum_{a=0}^{511}\vert a \rangle \vert 0 \rangle$$
3. Implement the modular exponentiation function $x^a \bmod N$ on the function register:
\begin{eqnarray}
\vert\psi_2\rangle
& = & \frac{1}{\sqrt{T}}\sum_{a=0}^{T-1}\vert a \rangle \vert x^a \bmod N \rangle
= \frac{1}{\sqrt{512}}\sum_{a=0}^{511}\vert a \rangle \vert 2^a \bmod 21 \rangle \\
& = & \frac{1}{\sqrt{512}} \bigg( \;\; \vert 0 \rangle \vert 1 \rangle + \vert 1 \rangle \vert 2 \rangle +
\vert 2 \rangle \vert 4 \rangle + \vert 3 \rangle \vert 8 \rangle + \;\; \vert 4 \rangle \vert 16 \rangle + \;\,
\vert 5 \rangle \vert 11 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\, \vert 6 \rangle \vert 1 \rangle + \vert 7 \rangle \vert 2 \rangle + \vert 8 \rangle \vert 4 \rangle + \vert 9 \rangle \vert 8 \rangle + \vert 10 \rangle \vert 16 \rangle + \vert 11 \rangle \vert 11 \rangle \, +\\
& & \;\;\;\;\;\;\;\;\;\;\;\;\, \vert 12 \rangle \vert 1 \rangle + \ldots \bigg)\\
\end{eqnarray}
Notice that the above expression has the following pattern: the states of the second register of each “column” are the same. Therefore we can rearrange the terms in order to collect the second register:
\begin{eqnarray}
\vert\psi_2\rangle
& = & \frac{1}{\sqrt{512}} \bigg[ \big(\,\vert 0 \rangle + \;\vert 6 \rangle + \vert 12 \rangle \ldots + \vert 504 \rangle + \vert 510 \rangle \big) \, \vert 1 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 1 \rangle + \;\vert 7 \rangle + \vert 13 \rangle \ldots + \vert 505 \rangle + \vert 511 \rangle \big) \, \vert 2 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 2 \rangle + \;\vert 8 \rangle + \vert 14 \rangle \ldots + \vert 506 \rangle \big) \, \vert 4 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 3 \rangle + \;\vert 9 \rangle + \vert 15 \rangle \ldots + \vert 507 \rangle \big) \, \vert 8 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 4 \rangle + \vert 10 \rangle + \vert 16 \rangle \ldots + \vert 508 \rangle \big) \vert 16 \rangle \, + \\
& & \;\;\;\;\;\;\;\;\;\;\; \big(\,\vert 5 \rangle + \vert 11 \rangle + \vert 17 \rangle \ldots + \vert 509 \rangle \big) \vert 11 \rangle \, \bigg]\\
\end{eqnarray}
4. To simplify the following equations, we'll measure the function register before performing a quantum Fourier transform on the argument register. This will yield one of the numbers $\{1,2,4,8,16,11\}$ with nearly equal probability. Suppose that the result of the measurement was $2$, then:
$$\vert\psi_3\rangle = \frac{1}{\sqrt{86}}(\vert 1 \rangle + \;\vert 7 \rangle + \vert 13 \rangle \ldots + \vert 505 \rangle + \vert 511 \rangle)\, \vert 2 \rangle $$
It does not matter what is the result of the measurement; what matters is the periodic pattern. The period of the states of the first register is the solution to the problem and the quantum Fourier transform can reveal the value of the period.
5. Perform a quantum Fourier transform on the argument register:
$$
\vert\psi_4\rangle
= QFT(\vert\psi_3\rangle)
= QFT(\frac{1}{\sqrt{86}}\sum_{a=0}^{85}\vert 6a+1 \rangle)\vert 2 \rangle
= \frac{1}{\sqrt{512}}\sum_{j=0}^{511}\bigg(\big[ \frac{1}{\sqrt{86}}\sum_{a=0}^{85} e^{-2 \pi i \frac{6ja}{512}} \big] e^{-2\pi i\frac{j}{512}}\vert j \rangle \bigg)\vert 2 \rangle
$$
6. Measure the argument register. The probability of measuring a result $j$ is:
$$ \rm{Probability}(j) = \frac{1}{512 \times 86} \bigg\vert \sum_{a=0}^{85}e^{-2 \pi i \frac{6ja}{512}} \bigg\vert^2$$
This peaks at $j=0,85,171,256,341,427$. Suppose that the result of the measurement yielded $j = 85$, then using the continued fraction approximation of $\frac{512}{85}$, we obtain $r=6$, as expected.
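The probability distribution above can be checked numerically; a standalone sketch (not in the original notebook), using $T = 512$ and the $M = 86$ terms $\vert 6a+1 \rangle$ from the example:

```python
import numpy as np

T, M = 512, 86  # T = 2^t; M = number of terms |6a+1> in the state
a = np.arange(M)
probs = []
for j in range(T):
    amp = np.exp(-2j * np.pi * 6 * j * a / T).sum()
    probs.append(abs(amp)**2 / (T * M))
peaks = sorted(np.argsort(probs)[-6:])
print(peaks)  # the six most probable outcomes
```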
## Implementation <a id='implementation'></a>
```
from qiskit import Aer
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
from qiskit.tools.visualization import plot_histogram, circuit_drawer
```
As mentioned [earlier](#shorsalgorithm), many of the experimental demonstrations of Shor's algorithm rely on significant optimisations based on a priori knowledge of the expected results. We will follow the formulation in [this](http://science.sciencemag.org/content/351/6277/1068) paper, which demonstrates a reasonably scalable realisation of Shor's algorithm using $N = 15$. Below is the first figure from the paper, showing various quantum circuits, with the following caption: _Diagrams of Shor’s algorithm for factoring $N = 15$, using a generic textbook approach (**A**) compared with Kitaev’s approach (**B**) for a generic base $a$. (**C**) The actual implementation for factoring $15$ to base $11$, optimized for the corresponding single-input state. Here $q_i$ corresponds to the respective qubit in the computational register. (**D**) Kitaev’s approach to Shor’s algorithm for the bases ${2, 7, 8, 13}$. Here, the optimized map of the first multiplier is identical in all four cases, and the last multiplier is implemented with full modular multipliers, as depicted in (**E**). In all cases, the single QFT qubit is used three times, which, together with the four qubits in the computation register, totals seven effective qubits. (**E**) Circuit diagrams of the modular multipliers of the form $a \bmod N$ for bases $a = {2, 7, 8, 11, 13}$._
<img src="images/shoralgorithm.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="center">
Note that we cannot run this version of Shor's algorithm on an IBM Quantum Experience device at the moment, as we currently lack the ability to do measurement feedforward and qubit resetting. Thus we'll just be building the circuits to run on the simulators for now. This implementation is based on Pinakin Padalia and Amitabh Yadav's implementation, found [here](https://github.com/amitabhyadav/Shor-Algorithm-on-IBM-Quantum-Experience).
First we'll construct the $a^1 \bmod 15$ circuits for $a = 2,7,8,11,13$ as in **E**:
```
# qc = quantum circuit, qr = quantum register, cr = classical register, a = 2, 7, 8, 11 or 13
def circuit_amod15(qc, qr, cr, a):
    if a == 2:
        qc.cswap(qr[4], qr[3], qr[2])
        qc.cswap(qr[4], qr[2], qr[1])
        qc.cswap(qr[4], qr[1], qr[0])
    elif a == 7:
        qc.cswap(qr[4], qr[1], qr[0])
        qc.cswap(qr[4], qr[2], qr[1])
        qc.cswap(qr[4], qr[3], qr[2])
        qc.cx(qr[4], qr[3])
        qc.cx(qr[4], qr[2])
        qc.cx(qr[4], qr[1])
        qc.cx(qr[4], qr[0])
    elif a == 8:
        qc.cswap(qr[4], qr[1], qr[0])
        qc.cswap(qr[4], qr[2], qr[1])
        qc.cswap(qr[4], qr[3], qr[2])
    elif a == 11:  # this is included for completeness
        qc.cswap(qr[4], qr[2], qr[0])
        qc.cswap(qr[4], qr[3], qr[1])
        qc.cx(qr[4], qr[3])
        qc.cx(qr[4], qr[2])
        qc.cx(qr[4], qr[1])
        qc.cx(qr[4], qr[0])
    elif a == 13:
        qc.cswap(qr[4], qr[3], qr[2])
        qc.cswap(qr[4], qr[2], qr[1])
        qc.cswap(qr[4], qr[1], qr[0])
        qc.cx(qr[4], qr[3])
        qc.cx(qr[4], qr[2])
        qc.cx(qr[4], qr[1])
        qc.cx(qr[4], qr[0])
```
Next we'll build the rest of the period finding circuit as in **D**:
```
# qc = quantum circuit, qr = quantum register, cr = classical register, a = 2, 7, 8, 11 or 13
def circuit_aperiod15(qc, qr, cr, a):
    if a == 11:
        circuit_11period15(qc, qr, cr)
        return
    # Initialize q[0] to |1>
    qc.x(qr[0])
    # Apply a**4 mod 15
    qc.h(qr[4])
    # controlled identity on the remaining 4 qubits, which is equivalent to doing nothing
    qc.h(qr[4])
    # measure
    qc.measure(qr[4], cr[0])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply a**2 mod 15
    qc.h(qr[4])
    # controlled unitary
    qc.cx(qr[4], qr[2])
    qc.cx(qr[4], qr[0])
    # feed forward
    # note: this Python-level check on cr is evaluated at circuit-construction
    # time, not on measurement results; runtime feedforward would require
    # Qiskit's conditional (c_if) mechanism
    if cr[0] == 1:
        qc.u1(math.pi/2., qr[4])
    qc.h(qr[4])
    # measure
    qc.measure(qr[4], cr[1])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply a mod 15
    qc.h(qr[4])
    # controlled unitary
    circuit_amod15(qc, qr, cr, a)
    # feed forward
    if cr[1] == 1:
        qc.u1(math.pi/2., qr[4])
    if cr[0] == 1:
        qc.u1(math.pi/4., qr[4])
    qc.h(qr[4])
    # measure
    qc.measure(qr[4], cr[2])
```
Next we build the optimised circuit for $11 \bmod 15$ as in **C**.
```
def circuit_11period15(qc, qr, cr):
    # Initialize q[0] to |1>
    qc.x(qr[0])
    # Apply a**4 mod 15
    qc.h(qr[4])
    # controlled identity on the remaining 4 qubits, which is equivalent to doing nothing
    qc.h(qr[4])
    # measure
    qc.measure(qr[4], cr[0])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply a**2 mod 15
    qc.h(qr[4])
    # controlled identity on the remaining 4 qubits, which is equivalent to doing nothing
    # feed forward
    if cr[0] == 1:
        qc.u1(math.pi/2., qr[4])
    qc.h(qr[4])
    # measure
    qc.measure(qr[4], cr[1])
    # reinitialise q[4] to |0>
    qc.reset(qr[4])
    # Apply 11 mod 15
    qc.h(qr[4])
    # controlled unitary
    qc.cx(qr[4], qr[3])
    qc.cx(qr[4], qr[1])
    # feed forward
    if cr[1] == 1:
        qc.u1(math.pi/2., qr[4])
    if cr[0] == 1:
        qc.u1(math.pi/4., qr[4])
    qc.h(qr[4])
    # measure
    qc.measure(qr[4], cr[2])
```
Let's build and run a circuit for $a = 7$, and plot the results:
```
q = QuantumRegister(5, 'q')
c = ClassicalRegister(5, 'c')
shor = QuantumCircuit(q, c)
circuit_aperiod15(shor,q,c,7)
backend = Aer.get_backend('qasm_simulator')
sim_job = execute([shor], backend)
sim_result = sim_job.result()
sim_data = sim_result.get_counts(shor)
plot_histogram(sim_data)
```
We see here that the period is $r = 4$, and we can thus calculate the factors $p = \text{gcd}(a^{r/2}-1,15) = 3$ and $q = \text{gcd}(a^{r/2}+1,15) = 5$. Why don't you try seeing what you get for $a = 2, 8, 11, 13$?
## ewf-ext-03-03-03 - Flood hazard
### <a name="service"></a>Service definition
```
service = dict([('title', 'ewf-ext-03-03-03 - Flood exposure'),
('abstract', 'ewf-ext-03-03-03 - Flood exposure'),
('id', 'ewf-ext-03-03-03')])
start_year = dict([('id', 'start_year'),
('value', '2015'),
('title', 'start year'),
('abstract', 'start year')])
end_year = dict([('id', 'end_year'),
('value', '2019'),
('title', 'end_year'),
('abstract', 'end_year')])
area_of_interest = dict([('id', 'areaOfInterest'),
('value', 'IberianPeninsula'),
('title', 'Area of the region'),
('abstract', 'Area of the region of interest')])
regionOfInterest = dict([('id', 'regionOfInterest'),
('value', 'POLYGON((-9.586 39.597,-8.100 39.597,-8.100 40.695,-9.586 40.695,-9.586 39.597))'),
('title', 'WKT Polygon for the Region of Interest (-1 if no crop)'),
('abstract', 'Set the value of WKT Polygon')])
```
### Parameter Definition
### <a name="runtime"></a>Runtime parameter definition
**Input identifiers**
These are the input products' identifiers
```
input_identifiers = ('FEI_IberianPeninsula_GHS_2015_CLC_2019.tif', 'binary_flood_map_S1A_IW_GRDH_1SDV_20191223T064251_20191223T064316_030472_037D16_1012.tif')
```
**Input references**
These are the input products' catalogue references
```
input_references = ('https://catalog.terradue.com/chirps/search?format=atom&uid=chirps-v2.0.2017.01.01','https://catalog.terradue.com/chirps/search?format=atom&uid=chirps-v2.0.2017.01.02')
```
**Data path**
This path defines where the data is staged-in.
```
data_path = ""
etc_path = "/application/notebook/etc"
#etc_path = "/workspace/Better_3rd_phase/Applications/EXT-03-03-03/ewf-ext-03-03-03/src/main/app-resources/notebook/etc"
output_folder = ""
#output_folder = "/workspace/Better_3rd_phase/Applications/EXT-03-03-03/ewf-ext-03-03-03/src/main/app-resources/notebook/libexec"
temp_folder = 'Temp'
cropped_output_folder = 'Output/Crop'
```
#### Import Modules
```
import os
import shutil
import sys
import string
import numpy as np
from osgeo import gdal, ogr, osr
from shapely.wkt import loads
import datetime
import pdb
from calendar import monthrange
```
#### Auxiliary methods
```
# remove contents of a given folder
# used to clean a temporary folder
def rm_cfolder(folder):
    for the_file in os.listdir(folder):
        file_path = os.path.join(folder, the_file)
        try:
            if os.path.isfile(file_path):
                os.unlink(file_path)
            elif os.path.isdir(file_path):
                shutil.rmtree(file_path)
        except Exception as e:
            print(e)
def crop_image(input_image, polygon_wkt, output_path):
    dataset = gdal.Open(input_image)
    polygon_ogr = ogr.CreateGeometryFromWkt(polygon_wkt)
    envelope = polygon_ogr.GetEnvelope()
    bounds = [envelope[0], envelope[3], envelope[1], envelope[2]]
    print(bounds)
    no_data = dataset.GetRasterBand(1).GetNoDataValue()
    gdal.Translate(output_path, dataset, outputType=gdal.GDT_Float32, projWin=bounds, projWinSRS='EPSG:4326', noData=no_data)
    dataset = None
def write_output_image(filepath, output_matrix, image_format, data_format, mask=None, output_projection=None, output_geotransform=None, no_data_value=None):
    driver = gdal.GetDriverByName(image_format)
    out_rows = np.size(output_matrix, 0)
    out_columns = np.size(output_matrix, 1)
    # a mask of None or 0 means "no mask band"
    if mask is not None and not (np.isscalar(mask) and mask == 0):
        # TODO: check if output folder exists
        output = driver.Create(filepath, out_columns, out_rows, 2, data_format)
        mask_band = output.GetRasterBand(2)
        mask_band.WriteArray(mask)
        if no_data_value is not None:
            output_matrix[mask > 0] = no_data_value
    else:
        output = driver.Create(filepath, out_columns, out_rows, 1, data_format)
    if output_projection is not None:
        output.SetProjection(output_projection)
    if output_geotransform is not None:
        output.SetGeoTransform(output_geotransform)
    raster_band = output.GetRasterBand(1)
    if no_data_value is not None:
        raster_band.SetNoDataValue(no_data_value)
    raster_band.WriteArray(output_matrix)
    if filepath is None:
        print("filepath")
    if output is None:
        print("output")
    gdal.Warp(filepath, output, format="GTiff", outputBoundsSRS='EPSG:4326', xRes=output_geotransform[1], yRes=-output_geotransform[5], targetAlignedPixels=True)
    return filepath
def matrix_multiply(mat1, mat2, no_data_value=None):
#if no_data_value is not None:
#if not isinstance(mat1, int):
#mat1[(mat1 == no_data_value)] = 0
#if not isinstance(mat2, int):
#mat2[(mat2 == no_data_value)] = 0
mats_nodata = np.logical_or(mat1 == no_data_value, mat2 == no_data_value)
mat1 = mat1.astype('float32')
mat2 = mat2.astype('float32')
multiply = mat1 * mat2
multiply = np.where(mats_nodata, no_data_value, multiply)
return multiply
def get_matrix_list(image_list):
projection = None
geo_transform = None
no_data = None
mat_list = []
for img in image_list:
        dataset = gdal.Open(img)
        print(dataset)
        projection = dataset.GetProjection()
        print(projection)
geo_transform = dataset.GetGeoTransform()
no_data = dataset.GetRasterBand(1).GetNoDataValue()
product_array = dataset.GetRasterBand(1).ReadAsArray()
mat_list.append(product_array)
dataset = None
return mat_list, projection, geo_transform, no_data
def write_outputs(product_name, first_date, last_date, averages, standard_deviation, image_format, projection, geo_transform, no_data_value):
filenames = []
areaofinterest = area_of_interest['value']
filenames.append(product_name + '_averages_' + areaofinterest + '_' + first_date + '_' + last_date + '.tif')
filenames.append(product_name + '_standarddeviation_' + areaofinterest + '_'+ first_date + '_' + last_date + '.tif')
write_output_image(filenames[0], averages, image_format, gdal.GDT_Int16, None, projection, geo_transform, no_data_value)
write_output_image(filenames[1], standard_deviation, image_format, gdal.GDT_Int16, None, projection, geo_transform, no_data_value)
return filenames
def write_properties_file(output_name, first_date, last_date):
title = 'Output %s' % output_name
first_date = get_formatted_date(first_date)
last_date = get_formatted_date(last_date)
    with open(output_name + '.properties', 'w') as file:
file.write('title=%s\n' % title)
file.write('date=%s/%s\n' % (first_date, last_date))
file.write('geometry=%s' % (regionOfInterest['value']))
def get_formatted_date(date_obj):
date = datetime.datetime.strftime(date_obj, '%Y-%m-%dT00:00:00Z')
return date
def reproject_image_to_master ( master, slave, dst_filename, res=None ):
slave_ds = gdal.Open( slave )
    if slave_ds is None:
        raise IOError("GDAL could not open slave file %s" % slave)
slave_proj = slave_ds.GetProjection()
slave_geotrans = slave_ds.GetGeoTransform()
data_type = slave_ds.GetRasterBand(1).DataType
n_bands = slave_ds.RasterCount
#no_data_value that does not exist on the image
slave_ds.GetRasterBand(1).SetNoDataValue(-300.0)
master_ds = gdal.Open( master )
    if master_ds is None:
        raise IOError("GDAL could not open master file %s" % master)
master_proj = master_ds.GetProjection()
master_geotrans = master_ds.GetGeoTransform()
w = master_ds.RasterXSize
h = master_ds.RasterYSize
    if res is not None:
        # GetGeoTransform() returns a tuple, which does not support item assignment
        master_geotrans = list(master_geotrans)
        master_geotrans[1] = float(res)
        master_geotrans[-1] = -float(res)
dst_ds = gdal.GetDriverByName('GTiff').Create(dst_filename, w, h, n_bands, data_type)
dst_ds.SetGeoTransform( master_geotrans )
dst_ds.SetProjection( master_proj)
gdal.ReprojectImage( slave_ds, dst_ds, slave_proj,
master_proj, gdal.GRA_NearestNeighbour)
dst_ds = None # Flush to disk
return dst_filename
def project_coordinates(file, dst_filename):
input_raster = gdal.Open(file)
output_raster = dst_filename
gdal.Warp(output_raster,input_raster,dstSRS='EPSG:4326')
def get_pixel_weights(mat):
urban_fabric=[111.,112.]
industrial_commercial_transport_units=[121.,122.,123.,124.]
mine_dump_construction_sites=[131.,132.,133.]
artificial_areas=[141.,142.]
arable_land=[211.,212.,213.]
permanent_crops=[221.,222.,223.]
pastures=[231.]
agricultural_areas=[241.,242.,243.,244.]
forest=[311.,312.,313.]
vegetation_associations=[321.,322.,323.,324.]
little_no_vegetation=[331.,332.,333.,334.,335.]
inland_wetlands=[411.,412.]
coastal_wetlands=[421.,422.,423.]
inland_waters=[511.,512.]
marine_waters=[521.,522.,523.]
exposure_dictionary = dict()
exposure_dictionary[1.0] = urban_fabric
exposure_dictionary[0.5] = industrial_commercial_transport_units + arable_land + permanent_crops
exposure_dictionary[0.3] = mine_dump_construction_sites + agricultural_areas
exposure_dictionary[0.0] = artificial_areas + marine_waters
exposure_dictionary[0.4] = pastures
exposure_dictionary[0.1] = forest + vegetation_associations + little_no_vegetation + inland_wetlands + coastal_wetlands + inland_waters
rows = mat.shape[0]
cols = mat.shape[1]
for i in range(0, rows):
for j in range(0, cols):
            for exposure, value_list in exposure_dictionary.items():
for value in value_list:
if mat[i,j] == value:
mat[i,j] = exposure
return mat
if len(output_folder) > 0:
if not os.path.isdir(output_folder):
os.mkdir(output_folder)
if not os.path.isdir(temp_folder):
os.mkdir(temp_folder)
area_of_interest['value'], start_year['value'], end_year['value']
```
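The nested loops in `get_pixel_weights` above scan the whole weight table for every single pixel. For reference, the same code-to-weight mapping can be done with one boolean-mask pass per class code; this is a sketch only, and the weight table here is an illustrative subset of the real one (with the dictionary inverted to map code to weight for convenience):

```python
import numpy as np

# Illustrative subset of the land-cover codes and exposure weights used above
weight_by_code = {111.0: 1.0, 121.0: 0.5, 131.0: 0.3,
                  231.0: 0.4, 311.0: 0.1, 511.0: 0.1}

def get_pixel_weights_vectorized(mat, code_weights):
    """Replace each class code by its exposure weight using boolean masks."""
    out = mat.astype('float32').copy()
    for code, weight in code_weights.items():
        out[mat == code] = weight  # one vectorized assignment per class code
    return out

mat = np.array([[111.0, 231.0],
                [511.0, 311.0]])
print(get_pixel_weights_vectorized(mat, weight_by_code))
```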
#### Workflow
#### Update AOI if crop not needed
```
first_year = start_year['value']
last_year = end_year['value']
product_path_name = output_folder
projection = None
geo_transform = None
no_data = None
areaofinterest = area_of_interest['value']
if input_identifiers[0] >=0:
file_list = [os.path.join(etc_path, filename) for filename in input_identifiers]
flood_frequency = os.path.join(temp_folder, 'flood_frequency_cropped.tif')
crop_image(file_list[1],regionOfInterest['value'],flood_frequency)
flood_exposure=file_list[0]
image_list=[flood_exposure,flood_frequency]
dst_filename = os.path.basename(flood_exposure)
dst_filename = dst_filename.replace(".tif", "_reprojected.tif" )
dst_filename = os.path.join(temp_folder, dst_filename)
#co-registration (slave on master)
flood_exposure_reprojected = reproject_image_to_master(flood_frequency, flood_exposure, dst_filename)
image_list=[flood_exposure_reprojected,flood_frequency]
mat_list, projection, geo_transform, no_data=get_matrix_list(image_list)
flood_frequency_mat = mat_list[1]
flood_exposure_mat = mat_list[0]
no_data=-200.0
flood_hazard = matrix_multiply(flood_frequency_mat,flood_exposure_mat, no_data)
    flood_hazard = np.where(flood_exposure_mat == no_data, no_data, flood_hazard)
flood_hazard = np.where(flood_hazard==0.0, no_data, flood_hazard)
file = write_output_image(os.path.join(product_path_name , 'flood_hazard_' + areaofinterest + first_year + last_year + '.tif'), flood_hazard, 'GTiff', gdal.GDT_Float32, None, projection, geo_transform, no_data)
firstdate_obj = datetime.datetime.strptime(first_year, "%Y").date()
lastdate_obj = datetime.datetime.strptime(last_year, "%Y").date()
else:
    print("error: invalid input identifiers:", input_identifiers)
if input_identifiers[0] >=0:
if regionOfInterest['value'] == '-1':
#dataset = gdal.Open('/vsigzip//vsicurl/%s' % gpd_final.iloc[0]['enclosure'])
dataset = gdal.Open(file)
geoTransform = dataset.GetGeoTransform()
minx = geoTransform[0]
maxy = geoTransform[3]
maxx = minx + geoTransform[1] * dataset.RasterXSize
miny = maxy + geoTransform[5] * dataset.RasterYSize
regionOfInterest['value'] = 'POLYGON(({0} {1}, {2} {1}, {2} {3}, {0} {3}, {0} {1}))'.format(minx, maxy, maxx, miny)
dataset = None
else:
crop_image(file,regionOfInterest['value'],file.split('.tif')[0] + '_cropped.tif')
regionofinterest = regionOfInterest['value']
write_properties_file(file, firstdate_obj, lastdate_obj)
```
#### Remove temporary files and folders
```
try:
shutil.rmtree(temp_folder)
shutil.rmtree(cropped_output_folder)
except OSError as e:
print("Error: %s : %s" % (temp_folder, e.strerror))
print("Error: %s : %s" % (cropped_output_folder, e.strerror))
```
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
#load dataset
mnist = input_data.read_data_sets("MNIST_data",one_hot=True)
#define batch_size
batch_size = 100
#compute the number of batch
n_batch = mnist.train.num_examples // batch_size
#define 2 placeholders
x = tf.placeholder(tf.float32,[None,784])
y = tf.placeholder(tf.float32,[None,10])
keep_prob = tf.placeholder(tf.float32)
W1 = tf.Variable(tf.truncated_normal([784,2000],stddev=0.1))
b1 = tf.Variable(tf.zeros([2000])+0.1)
L1 = tf.nn.tanh(tf.matmul(x,W1) + b1)
L1_drop = tf.nn.dropout(L1,keep_prob)
W2 = tf.Variable(tf.truncated_normal([2000,2000],stddev=0.1))
b2 = tf.Variable(tf.zeros([2000])+0.1)
L2 = tf.nn.tanh(tf.matmul(L1_drop,W2)+b2)
L2_drop = tf.nn.dropout(L2,keep_prob)
W3 = tf.Variable(tf.truncated_normal([2000,1000],stddev=0.1))
b3 = tf.Variable(tf.zeros([1000])+0.1)
L3 = tf.nn.tanh(tf.matmul(L2_drop,W3)+b3)
L3_drop = tf.nn.dropout(L3,keep_prob)
#create a neural network
W4 = tf.Variable(tf.truncated_normal([1000,10],stddev=0.1))
b4 = tf.Variable(tf.zeros([10])+0.1)
prediction = tf.matmul(L3_drop,W4)+b4
#quadratic cost function
#loss = tf.reduce_mean(tf.square(y-prediction))
#Here we use cross_entropy to define loss function instead of quadratic
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=prediction))
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
#initialize
init = tf.global_variables_initializer()
#compute accuracy
#argmax will return the largest number
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))
#tf.cast will change the format of correct_prediction to tf.float32
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
with tf.Session() as sess:
sess.run(init)
for epoch in range(21):
for batch in range(n_batch):
batch_xs,batch_ys = mnist.train.next_batch(batch_size)
sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys,keep_prob:0.6})#keep_prob:0.6 means we only let 60% of neurons to work
test_acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels,keep_prob:1.0})
train_acc = sess.run(accuracy,feed_dict={x:mnist.train.images,y:mnist.train.labels,keep_prob:1.0})
print("Iter" + str(epoch) + ",Testing Accuracy:" + str(test_acc)+",Training Accuracy"+ str(train_acc))
```
Sample output:
```
Iter0,Testing Accuracy:0.8541,Training Accuracy0.867927
Iter1,Testing Accuracy:0.9557,Training Accuracy0.969618
Iter2,Testing Accuracy:0.9627,Training Accuracy0.980218
Iter3,Testing Accuracy:0.9661,Training Accuracy0.985745
Iter4,Testing Accuracy:0.9674,Training Accuracy0.987964
```
# A-weighting filter implementation
The A-weighting transfer function is defined in the ANSI Standards S1.4-1983 and S1.42-2001:
$$
H(s) = \frac{\omega_4^2 s^4}{(s+\omega_1)^2(s+\omega_2)(s+\omega_3)(s+\omega_4)^2}
$$
Where $\omega_i = 2\pi f_i$ are the angular frequencies defined by:
```
import numpy as np
f1 = 20.598997 # Hz
f4 = 12194.217 # Hz
f2 = 107.65265 # Hz
f3 = 737.86223 # Hz
w1 = 2*np.pi*f1 # rad/s
w2 = 2*np.pi*f2 # rad/s
w3 = 2*np.pi*f3 # rad/s
w4 = 2*np.pi*f4 # rad/s
```
In [1] there is a method to convert this transfer function to the discrete-time domain using the bilinear transform. We use a similar method, but we separate it into four filters of order one or two in order to keep the filter stable:
$$
H(s) = \omega_4^2 H_1(s) H_2(s) H_3(s) H_4(s),
$$
where:
$$
H_i(s) = \left\{ \begin{array}{lcc}
\frac{s}{(s+\omega_i)^2} & \text{for} & i=1,4 \\
\\ \frac{s}{(s+\omega_i)} & \text{for} & i = 2,3. \\
\end{array}
\right.
$$
Now, we convert the $H_i(s)$ filters to their discrete-time implementation by using the bilinear transform:
$$
s \rightarrow 2f_s\frac{1-z^{-1}}{1+z^{-1}}.
$$
Therefore:
$$
H_i(z) = \frac{2f_s(1-z^{-2})}{(\omega_i-2f_s)^2z^{-2}+2(\omega_i^2-4f_s^2)z^{-1}+(\omega_i+2f_s)^2} \text{ for } i = 1,4
$$
$$
H_i(z) = \frac{2f_s(1-z^{-1})}{(\omega_i-2f_s)z^{-1}+(\omega_i+2f_s)} \text{ for } i = 2,3
$$
We define two Python functions to calculate the coefficients of both types of transfer functions:
```
def filter_first_order(w,fs): #s/(s+w)
a0 = w + 2.0*fs
b = 2*fs*np.array([1, -1])/a0
a = np.array([a0, w - 2*fs])/a0
return b,a
def filter_second_order(w,fs): #s/(s+w)^2
a0 = (w + 2.0*fs)**2
b = 2*fs*np.array([1,0,-1])/a0
a = np.array([a0,2*(w**2-4*fs**2),(w-2*fs)**2])/a0
return b,a
```
Now, we calculate b and a coefficients of the four filters for some sampling rate:
```
fs = 48000 #Hz
b1,a1 = filter_second_order(w1,fs)
b2,a2 = filter_first_order(w2,fs)
b3,a3 = filter_first_order(w3,fs)
b4,a4 = filter_second_order(w4,fs)
```
Then, we calculate the impulse response of the overall filter, $h[n]$, by concatenating the four filters and using the impulse signal, $\delta[n]$, as input.
```
from scipy import signal
# generate delta[n]
N = 8192*2 #number of points
delta = np.zeros(N)
delta[0] = 1
# apply filters
x1 = signal.lfilter(b1,a1,delta)
x2 = signal.lfilter(b2,a2,x1)
x3 = signal.lfilter(b3,a3,x2)
h = signal.lfilter(b4,a4,x3)
GA = 10**(2/20.) # 0dB at 1Khz
h = h*GA*w4**2
```
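Before examining the frequency response, we can sanity-check the constants: with the $+2.0$ dB normalization, the analog A-weighting curve should pass through roughly 0 dB at 1 kHz (and, for example, about $-19.1$ dB at 100 Hz, matching the standard tables). A standalone check using only the frequency constants defined earlier:

```python
import math

# ANSI S1.4 pole frequencies (Hz), same constants as above
f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217

def a_weight_db(f):
    """Analog A-weighting magnitude in dB, normalized to 0 dB at 1 kHz."""
    ra = (f4 ** 2 * f ** 4) / ((f ** 2 + f1 ** 2)
                               * math.sqrt((f ** 2 + f2 ** 2) * (f ** 2 + f3 ** 2))
                               * (f ** 2 + f4 ** 2))
    return 20 * math.log10(ra) + 2.0

print(round(a_weight_db(1000.0), 2))  # ~0.0 dB by construction
print(round(a_weight_db(100.0), 1))   # ~-19.1 dB, as in the standard tables
```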
Let's find the filter's frequency response, $H(e^{j\omega})$, by calculating the FFT of $h[n]$.
```
H = np.abs(np.fft.fft(h))[:N//2]
H = 20*np.log10(H)
```
Compare the frequency response to the expression defined in the norms:
```
eps = 10**-6
f = np.linspace(0, fs/2 - fs/float(N), N//2)
curveA = f4**2*f**4/((f**2+f1**2)*np.sqrt((f**2+f2**2)*(f**2+f3**2))*(f**2+f4**2))
HA = 20*np.log10(curveA+eps)+2.0
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
plt.title('Digital filter frequency response')
plt.plot(f,H, 'b',label= 'Devised filter')
plt.plot(f,HA, 'r',label= 'Norm filter')
plt.ylabel('Amplitude [dB]')
plt.xlabel('Frequency [Hz]')
plt.legend()
plt.xscale('log')
plt.xlim([10,fs/2.0])
plt.ylim([-80,3])
plt.grid()
plt.show()
```
We can also check whether the designed filter fulfills the tolerances given in the ANSI norm [2].
```
import csv
freqs = []
tol_type0_low = []
tol_type0_high = []
tol_type1_low = []
tol_type1_high = []
with open('ANSI_tolerances.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
line_count = 0
for row in csv_reader:
if line_count == 0:
#print('Column names are {", ".join(row)}')
line_count += 1
else:
freqs.append(float(row[0]))
Aw = float(row[1])
tol_type0_low.append(Aw + float(row[2]))
tol_type0_high.append(Aw + float(row[3]))
tol_type1_low.append(Aw + float(row[4]))
if row[5] != '':
tol_type1_high.append(Aw + float(row[5]))
else:
                tol_type1_high.append(np.inf)
line_count += 1
print('Processed %d lines.'%line_count)
fig = plt.figure(figsize=(10,10))
plt.title('Digital filter frequency response')
plt.plot(f,H, 'b',label= 'Devised filter')
plt.plot(f,HA, 'r',label= 'Norm filter')
plt.plot(freqs,tol_type0_low,'k.',label='type0 tolerances')
plt.plot(freqs,tol_type0_high,'k.')
plt.plot(freqs,tol_type1_low,'r.',label='type1 tolerances')
plt.plot(freqs,tol_type1_high,'r.')
plt.ylabel('Amplitude [dB]')
plt.xlabel('Frequency [Hz]')
plt.legend()
plt.xscale('log')
plt.xlim([10,fs/2.0])
plt.ylim([-80,3])
plt.grid()
plt.show()
```
## References
[1] Rimell, Andrew; Mansfield, Neil; Paddan, Gurmail (2015). "Design of digital filters for frequency weightings (A and C) required for risk assessments of workers exposed to noise". Industrial Health (53): 21–27.
[2] ANSI S1.4-1983. Specifications for Sound Level Meters.
<h1 align="center"> Image Captioning </h1>
In this notebook you will teach a network to do image captioning.

_image [source](https://towardsdatascience.com/image-captioning-in-deep-learning-9cd23fb4d8d2)_
#### Alright, here's our plan:
1. Take a pre-trained inception v3 to vectorize images
2. Stack an LSTM on top of it
3. Train the thing on [MSCOCO](http://cocodataset.org/#download)
```
# Please either download data from https://yadi.sk/d/b4nAwIE73TVcp5 or generate it manually with preprocess_data.
!wget https://www.dropbox.com/s/zl9wy31p6r05j34/handout.tar.gz -O handout.tar.gz
!tar xzf handout.tar.gz
```
### Data preprocessing
```
%%time
# Read Dataset
import numpy as np
import json
img_codes = np.load("data/image_codes.npy")
captions = json.load(open('data/captions_tokenized.json'))
```
### Data structure
To save your time, we've already vectorized all MSCOCO17 images with a pre-trained inception_v3 network from [torchvision](https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py).
The whole process takes anywhere between a day on CPU and 10min on 3x tesla m40. If you want to play with that yourself, [you're welcome](https://gist.github.com/justheuristic/11fd01f9c12c0bf960499580d104130b).
```
print("Each image code is a 2048-unit vector [ shape: %s ]" % str(img_codes.shape))
print(img_codes[0,:10], end='\n\n')
print("For each image there are 5 reference captions, e.g.:\n")
print('\n'.join(captions[0]))
```
As you can see, all captions are already tokenized and lowercased. We now want to split them and add some special tokens for start/end of caption.
```
#split descriptions into tokens
for img_i in range(len(captions)):
for caption_i in range(len(captions[img_i])):
sentence = captions[img_i][caption_i]
captions[img_i][caption_i] = ["#START#"]+sentence.split(' ')+["#END#"]
```
You don't want your network to predict a million-entry vector of probabilities at each step, so we've got to make some cuts.
We want you to __count the occurrences of each word__ so that we can decide which words to keep in our vocabulary.
```
# Build a Vocabulary
from collections import Counter
word_counts = Counter()
#Compute word frequencies for each word in captions. See code above for data structure
<YOUR CODE HERE>
vocab = ['#UNK#', '#START#', '#END#', '#PAD#']
vocab += [k for k, v in word_counts.items() if v >= 5 if k not in vocab]
n_tokens = len(vocab)
assert 10000 <= n_tokens <= 10500
word_to_index = {w: i for i, w in enumerate(vocab)}
eos_ix = word_to_index['#END#']
unk_ix = word_to_index['#UNK#']
pad_ix = word_to_index['#PAD#']
def as_matrix(sequences, max_len=None):
""" Convert a list of tokens into a matrix with padding """
max_len = max_len or max(map(len,sequences))
matrix = np.zeros((len(sequences), max_len), dtype='int32') + pad_ix
for i,seq in enumerate(sequences):
row_ix = [word_to_index.get(word, unk_ix) for word in seq[:max_len]]
matrix[i, :len(row_ix)] = row_ix
return matrix
#try it out on several descriptions of a random image
as_matrix(captions[1337])
```
### Building our neural network
As we mentioned earlier, we shall build an rnn "language-model" conditioned on vectors from the convolutional part.

_image: http://bit.ly/2FKnqHm_
We'll unbox the inception net later to save memory, for now just pretend that it's available.
```
import torch, torch.nn as nn
import torch.nn.functional as F
class CaptionNet(nn.Module):
def __init__(self, n_tokens=n_tokens, emb_size=128, lstm_units=256, cnn_feature_size=2048):
""" A recurrent 'head' network for image captioning. See scheme above. """
super(self.__class__, self).__init__()
# a layer that converts conv features to
self.cnn_to_h0 = nn.Linear(cnn_feature_size, lstm_units)
self.cnn_to_c0 = nn.Linear(cnn_feature_size, lstm_units)
# recurrent part, please create the layers as per scheme above.
# create embedding for input words. Use the parameters (e.g. emb_size).
self.emb = <YOUR CODE>
# lstm: create a recurrent core of your network. Use either LSTMCell or just LSTM.
# In the latter case (nn.LSTM), make sure batch_first=True
self.lstm = <YOUR CODE>
# create logits: linear layer that takes lstm hidden state as input and computes one number per token
self.logits = <YOUR CODE>
def forward(self, image_vectors, captions_ix):
"""
Apply the network in training mode.
:param image_vectors: torch tensor containing inception vectors. shape: [batch, cnn_feature_size]
:param captions_ix: torch tensor containing captions as matrix. shape: [batch, word_i].
padded with pad_ix
:returns: logits for next token at each tick, shape: [batch, word_i, n_tokens]
"""
initial_cell = self.cnn_to_c0(image_vectors)
initial_hid = self.cnn_to_h0(image_vectors)
# compute embeddings for captions_ix
captions_emb = <YOUR CODE>
# apply recurrent layer to captions_emb.
# 1. initialize lstm state with initial_* from above
# 2. feed it with captions. Mind the dimension order in docstring
# 3. compute logits for next token probabilities
# Note: if you used nn.LSTM, you can just give it (initial_cell[None], initial_hid[None]) as second arg
# lstm_out should be lstm hidden state sequence of shape [batch, caption_length, lstm_units]
lstm_out = <YOUR_CODE>
# compute logits from lstm_out
logits = <YOUR_CODE>
return logits
network = CaptionNet(n_tokens)
dummy_img_vec = torch.randn(len(captions[0]), 2048)
dummy_capt_ix = torch.tensor(as_matrix(captions[0]), dtype=torch.int64)
dummy_logits = network.forward(dummy_img_vec, dummy_capt_ix)
print('shape:', dummy_logits.shape)
assert dummy_logits.shape == (dummy_capt_ix.shape[0], dummy_capt_ix.shape[1], n_tokens)
def compute_loss(network, image_vectors, captions_ix):
"""
:param image_vectors: torch tensor containing inception vectors. shape: [batch, cnn_feature_size]
:param captions_ix: torch tensor containing captions as matrix. shape: [batch, word_i].
padded with pad_ix
:returns: crossentropy (neg llh) loss for next captions_ix given previous ones. Scalar float tensor
"""
# captions for input - all except last cuz we don't know next token for last one.
captions_ix_inp = captions_ix[:, :-1].contiguous()
captions_ix_next = captions_ix[:, 1:].contiguous()
# apply the network, get predictions for captions_ix_next
logits_for_next = network.forward(image_vectors, captions_ix_inp)
# compute the loss function between logits_for_next and captions_ix_next
# Use the mask, Luke: make sure that predicting next tokens after EOS do not contribute to loss
# you can do that either by multiplying elementwise loss by (captions_ix_next != pad_ix)
# or by using ignore_index in some losses.
loss = <YOUR CODE>
return loss
dummy_loss = compute_loss(network, dummy_img_vec, dummy_capt_ix)
assert len(dummy_loss.shape) <= 1, 'loss must be scalar'
assert dummy_loss.data.numpy() > 0, "did you forget the 'negative' part of negative log-likelihood"
dummy_loss.backward()
assert all(param.grad is not None for param in network.parameters()), \
'loss should depend differentiably on all neural network weights'
```
Create ~~adam~~ your favorite optimizer for the network.
```
<YOUR CODE>
```
# Training
* First implement the batch generator
* Then train the network as usual
```
from sklearn.model_selection import train_test_split
captions = np.array(captions)
train_img_codes, val_img_codes, train_captions, val_captions = train_test_split(img_codes, captions,
test_size=0.1,
random_state=42)
from random import choice
def generate_batch(img_codes, captions, batch_size, max_caption_len=None):
#sample random numbers for image/caption indicies
random_image_ix = np.random.randint(0, len(img_codes), size=batch_size)
#get images
batch_images = img_codes[random_image_ix]
#5-7 captions for each image
captions_for_batch_images = captions[random_image_ix]
#pick one from a set of captions for each image
batch_captions = list(map(choice,captions_for_batch_images))
#convert to matrix
batch_captions_ix = as_matrix(batch_captions,max_len=max_caption_len)
return torch.tensor(batch_images, dtype=torch.float32), torch.tensor(batch_captions_ix, dtype=torch.int64)
generate_batch(img_codes,captions,3)
```
### Main loop
Train on minibatches just as usual. Evaluate on val from time to time.
##### Tips
* If training loss has become close to 0 or model produces garbage,
double-check that you're predicting __next__ words, not current or t+2'th words.
* If the model generates fluent captions that have nothing to do with the images
* this may be due to recurrent net not receiving image vectors.
* alternatively it may be caused by gradient explosion, try clipping 'em or just restarting the training
* finally, you may just need to train the model a bit more
* Crossentropy is a poor measure of overfitting
* Model can overfit validation crossentropy but keep improving validation quality.
* Use human _(manual)_ evaluation or try automated metrics: [cider](https://github.com/vrama91/cider) or [bleu](https://www.nltk.org/_modules/nltk/translate/bleu_score.html)
* We recommend you to periodically evaluate the network using the next "apply trained model" block
  * it's safe to interrupt training, run a few examples and start training again
* The typical loss values should be around 3~5 if you average over time, scale by length if you sum over time. The reasonable captions began appearing at loss=2.8 ~ 3.0
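On the gradient-explosion tip above: clipping rescales all gradients whenever their global norm exceeds a threshold. In PyTorch this is `torch.nn.utils.clip_grad_norm_(network.parameters(), max_norm)`, called between `backward()` and the optimizer step; the underlying idea in plain numpy:

```python
import numpy as np

def clip_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm is <= max_norm."""
    total_norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm = sqrt(9 + 16 + 144) = 13
clipped = clip_global_norm(grads, max_norm=1.0)
print(np.sqrt(sum((g ** 2).sum() for g in clipped)))  # ~1.0 after clipping
```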
```
batch_size = 50 # adjust me
n_epochs = 100 # adjust me
n_batches_per_epoch = 50 # adjust me
n_validation_batches = 5 # how many batches are used for validation after each epoch
from tqdm import tqdm
for epoch in range(n_epochs):
train_loss=0
network.train(True)
for _ in tqdm(range(n_batches_per_epoch)):
loss_t = compute_loss(network, *generate_batch(train_img_codes, train_captions, batch_size))
# clear old gradients; do a backward pass to get new gradients; then train with opt
<YOUR CODE>
train_loss += loss_t.detach().numpy()
train_loss /= n_batches_per_epoch
val_loss=0
network.train(False)
for _ in range(n_validation_batches):
loss_t = compute_loss(network, *generate_batch(val_img_codes, val_captions, batch_size))
val_loss += loss_t.detach().numpy()
val_loss /= n_validation_batches
print('\nEpoch: {}, train loss: {}, val loss: {}'.format(epoch, train_loss, val_loss))
print("Finished!")
```
### Apply trained model
Let's unpack our pre-trained inception network and see what our model is capable of.
```
from beheaded_inception3 import beheaded_inception_v3
inception = beheaded_inception_v3().train(False)
```
### Generate caption
The function below creates captions by sampling from probabilities defined by the net.
The implementation used here is simple but inefficient (quadratic in lstm steps). We keep it that way since it isn't a performance bottleneck.
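A note on the `t` argument: the function sharpens the next-word distribution by raising the probabilities to the power `t` and renormalizing, so `t > 1` makes sampling more greedy and `t < 1` more uniform (this is the inverse of the usual `logits / T` convention). A small numpy illustration:

```python
import numpy as np

def apply_temperature(probs, t):
    """Renormalize probs ** t, the same transform used inside generate_caption."""
    p = probs ** t
    return p / p.sum()

probs = np.array([0.5, 0.3, 0.2])
print(apply_temperature(probs, 5.0))  # mass concentrates on the argmax
print(apply_temperature(probs, 0.1))  # close to uniform
```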
```
def generate_caption(image, caption_prefix = ("#START#",),
t=1, sample=True, max_len=100):
assert isinstance(image, np.ndarray) and np.max(image) <= 1\
and np.min(image) >=0 and image.shape[-1] == 3
image = torch.tensor(image.transpose([2, 0, 1]), dtype=torch.float32)
vectors_8x8, vectors_neck, logits = inception(image[None])
caption_prefix = list(caption_prefix)
for _ in range(max_len):
prefix_ix = as_matrix([caption_prefix])
prefix_ix = torch.tensor(prefix_ix, dtype=torch.int64)
next_word_logits = network.forward(vectors_neck, prefix_ix)[0, -1]
next_word_probs = F.softmax(next_word_logits, -1).detach().numpy()
assert len(next_word_probs.shape) ==1, 'probs must be one-dimensional'
next_word_probs = next_word_probs ** t / np.sum(next_word_probs ** t) # apply temperature
if sample:
next_word = np.random.choice(vocab, p=next_word_probs)
else:
next_word = vocab[np.argmax(next_word_probs)]
caption_prefix.append(next_word)
if next_word=="#END#":
break
return caption_prefix
from matplotlib import pyplot as plt
from scipy.misc import imresize  # note: imresize was removed in scipy >= 1.3; use PIL or skimage.transform.resize instead
%matplotlib inline
#sample image
!wget https://pixel.nymag.com/imgs/daily/selectall/2018/02/12/12-tony-hawk.w710.h473.jpg -O data/img.jpg
img = plt.imread('data/img.jpg')
img = imresize(img, (299, 299)).astype('float32') / 255.
plt.imshow(img)
for i in range(10):
print(' '.join(generate_caption(img, t=5.)[1:-1]))
!wget http://ccanimalclinic.com/wp-content/uploads/2017/07/Cat-and-dog-1.jpg -O data/img.jpg
img = plt.imread('data/img.jpg')
img = imresize(img, (299, 299)).astype('float32') / 255.
plt.imshow(img)
plt.show()
for i in range(10):
print(' '.join(generate_caption(img, t=5.)[1:-1]))
```
# Demo
### Find at least 10 images to test it on.
* Seriously, that's part of an assignment. Go get at least 10 pictures to get captioned
* Make sure it works okay on __simple__ images before going to something more complex
* Photos, not animation/3d/drawings, unless you want to train CNN network on anime
* Mind the aspect ratio
```
#apply your network on image sample you found
#
#
```
### Now what?
Your model produces some captions but you still strive to improve it? You're damn right to do so. Here are some ideas that go beyond simply "stacking more layers". The options are listed easiest to hardest.
##### Attention
You can build better and more interpretable captioning model with attention.
* How it works: https://distill.pub/2016/augmented-rnns/
* One way of doing this in captioning: https://arxiv.org/abs/1502.03044
* You will have to create a dataset for attention with [this notebook](https://gist.github.com/justheuristic/11fd01f9c12c0bf960499580d104130b).
##### Subword level captioning
In the base version, we replace all rare words with UNKs which throws away a lot of information and reduces quality. A better way to deal with vocabulary size problem would be to use Byte-Pair Encoding
* BPE implementation you can use: [github_repo](https://github.com/rsennrich/subword-nmt).
* Theory: https://arxiv.org/abs/1508.07909
* It was originally built for machine translation, but it should work with captioning just as well.
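To make the BPE idea concrete, here is a toy single-merge step in the spirit of the reference implementation (the tiny vocabulary is made up for illustration):

```python
from collections import Counter

def most_frequent_pair(vocab):
    """vocab maps space-separated symbol strings to word frequencies."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(pair, vocab):
    """Fuse every occurrence of the chosen pair into one new symbol."""
    old, new = ' '.join(pair), ''.join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

vocab = {'l o w': 5, 'l o w e r': 2, 'n e w e s t': 6}
best = most_frequent_pair(vocab)   # ('w', 'e'), seen 8 times in total
print(merge_pair(best, vocab))     # 'w e' fused into 'we' everywhere
```

Repeating this merge step a few thousand times yields a subword vocabulary where frequent words stay whole and rare words split into pieces, so no token has to become `#UNK#`.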
#### Reinforcement learning
* After your model has been pre-trained with teacher forcing, you can fine-tune it directly for captioning-specific metrics like CIDEr.
* Tutorial on RL for sequence models: [practical_rl week8](https://github.com/yandexdataschool/Practical_RL/tree/master/week8_scst)
* Theory: https://arxiv.org/abs/1612.00563
# Supervised baselines
Notebook with strong supervised learning baseline on cifar-10
```
%reload_ext autoreload
%autoreload 2
```
You probably need to install dependencies
```
# All things needed
!git clone https://github.com/puhsu/sssupervised
!pip install -q fastai2
!pip install -qe sssupervised
```
After running the cell above, you should restart your kernel
```
from sssupervised.cifar_utils import CifarFactory
from sssupervised.randaugment import RandAugment
from fastai2.data.transforms import parent_label, Categorize
from fastai2.optimizer import ranger, Adam
from fastai2.layers import LabelSmoothingCrossEntropy
from fastai2.metrics import error_rate
from fastai2.callback.all import *
from fastai2.vision.all import *
```
The baseline uses a wideresnet-28-2 model with the RandAugment augmentation policy. It is optimized with RAdam plus lookahead (the `ranger` optimizer) using one-cycle learning-rate and momentum schedules for the equivalent of 200 epochs on standard cifar. Since we count epochs in number of optimizer steps on standard cifar and only have $2400$ training examples ($50000/2400 \approx 20$ times fewer steps per epoch), we set 4000 epochs in our case.
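For intuition, the one-cycle policy ramps the learning rate up and then anneals it with cosine interpolation. A rough standalone sketch of the shape (the `lr_max`, `pct_start`, and `div` values here are illustrative defaults, not fastai's exact internals):

```python
import math

def one_cycle_lr(step, total_steps, lr_max=0.01, pct_start=0.25, div=25.0):
    """Cosine warmup from lr_max/div up to lr_max, then cosine decay towards ~0."""
    warm = int(total_steps * pct_start)
    if step < warm:
        frac, lo, hi = step / max(warm, 1), lr_max / div, lr_max
    else:
        frac = (step - warm) / max(total_steps - warm, 1)
        lo, hi = lr_max, lr_max / (div * 1e4)
    # cosine interpolation from lo to hi as frac goes 0 -> 1
    return hi + (lo - hi) * (1 + math.cos(math.pi * frac)) / 2

print(one_cycle_lr(0, 100))    # starts near lr_max / div
print(one_cycle_lr(25, 100))   # peak of the cycle: lr_max
print(one_cycle_lr(100, 100))  # annealed to near zero
```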
```
cifar = untar_data(URLs.CIFAR)
files, (train, test, unsup) = CifarFactory(n_same_cls=3, seed=42, n_labeled=400).splits_from_path(cifar)
sup_ds = Datasets(files, [[PILImage.create, RandAugment, ToTensor], [parent_label, Categorize]], splits=(train, test))
sup_dl = sup_ds.dataloaders(after_batch=[IntToFloatTensor, Normalize.from_stats(*cifar_stats)])
sup_dl.train.show_batch(max_n=9)
# https://github.com/uoguelph-mlrg/Cutout
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicBlock(nn.Module):
    def __init__(self, in_planes, out_planes, stride, dropRate=0.0):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_planes)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_planes, out_planes, kernel_size=3, stride=1,
                               padding=1, bias=False)
        self.droprate = dropRate
        self.equalInOut = (in_planes == out_planes)
        self.convShortcut = (not self.equalInOut) and nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,
                                                                padding=0, bias=False) or None

    def forward(self, x):
        if not self.equalInOut: x = self.relu1(self.bn1(x))
        else: out = self.relu1(self.bn1(x))
        out = self.relu2(self.bn2(self.conv1(out if self.equalInOut else x)))
        if self.droprate > 0:
            out = F.dropout(out, p=self.droprate, training=self.training)
        out = self.conv2(out)
        return torch.add(x if self.equalInOut else self.convShortcut(x), out)

class NetworkBlock(nn.Module):
    def __init__(self, nb_layers, in_planes, out_planes, block, stride, dropRate=0.0):
        super().__init__()
        self.layer = self._make_layer(block, in_planes, out_planes, nb_layers, stride, dropRate)

    def _make_layer(self, block, in_planes, out_planes, nb_layers, stride, dropRate):
        layers = []
        for i in range(nb_layers):
            layers.append(block(i == 0 and in_planes or out_planes, out_planes, i == 0 and stride or 1, dropRate))
        return nn.Sequential(*layers)

    def forward(self, x): return self.layer(x)

class WideResNet(nn.Module):
    def __init__(self, depth, num_classes, widen_factor=1, dropRate=0.0):
        super().__init__()
        nChannels = [16, 16*widen_factor, 32*widen_factor, 64*widen_factor]
        assert (depth - 4) % 6 == 0
        n = (depth - 4) // 6
        block = BasicBlock
        # 1st conv before any network block
        self.conv1 = nn.Conv2d(3, nChannels[0], kernel_size=3, stride=1,
                               padding=1, bias=False)
        self.block1 = NetworkBlock(n, nChannels[0], nChannels[1], block, 1, dropRate)
        self.block2 = NetworkBlock(n, nChannels[1], nChannels[2], block, 2, dropRate)
        self.block3 = NetworkBlock(n, nChannels[2], nChannels[3], block, 2, dropRate)
        self.bn1 = nn.BatchNorm2d(nChannels[3])
        self.relu = nn.ReLU(inplace=True)
        self.fc = nn.Linear(nChannels[3], num_classes)
        self.nChannels = nChannels[3]
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear): m.bias.data.zero_()

    def forward(self, x):
        out = self.conv1(x)
        out = self.block1(out)
        out = self.block2(out)
        out = self.block3(out)
        out = self.relu(self.bn1(out))
        out = F.adaptive_avg_pool2d(out, 1)
        out = out.view(-1, self.nChannels)
        return self.fc(out)

def wrn_22(): return WideResNet(depth=22, num_classes=10, widen_factor=6, dropRate=0.)
def wrn_22_k8(): return WideResNet(depth=22, num_classes=10, widen_factor=8, dropRate=0.)
def wrn_22_k10(): return WideResNet(depth=22, num_classes=10, widen_factor=10, dropRate=0.)
def wrn_22_k8_p2(): return WideResNet(depth=22, num_classes=10, widen_factor=8, dropRate=0.2)
def wrn_28(): return WideResNet(depth=28, num_classes=10, widen_factor=6, dropRate=0.)
def wrn_28_k8(): return WideResNet(depth=28, num_classes=10, widen_factor=8, dropRate=0.)
def wrn_28_k8_p2(): return WideResNet(depth=28, num_classes=10, widen_factor=8, dropRate=0.2)
def wrn_28_p2(): return WideResNet(depth=28, num_classes=10, widen_factor=6, dropRate=0.2)
```
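As a quick sanity check of the factory functions above: a valid ``depth`` must satisfy depth = 6n + 4, where n is the number of blocks per group. A dependency-free sketch of the constructor's arithmetic:

```python
def wrn_config(depth, widen_factor):
    """Blocks per group and channel widths, mirroring WideResNet.__init__ above."""
    assert (depth - 4) % 6 == 0, "depth must be of the form 6n + 4"
    n = (depth - 4) // 6
    channels = [16, 16 * widen_factor, 32 * widen_factor, 64 * widen_factor]
    return n, channels

print(wrn_config(22, 6))  # wrn_22: (3, [16, 96, 192, 384])
print(wrn_config(28, 8))  # wrn_28_k8: (4, [16, 128, 256, 512])
```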
We override the default callbacks (the best way I found to pass extra arguments to callbacks).
```
defaults.callbacks = [
    TrainEvalCallback(),
    Recorder(train_metrics=True),
    ProgressCallback(),
]

class SkipSomeValidations(Callback):
    """Perform validation regularly, but not every epoch
    (useful for small datasets, where training is quick)."""
    def __init__(self, n_epochs=20): self.n_epochs = n_epochs
    def begin_validate(self):
        if self.train_iter % self.n_epochs != 0:
            raise CancelValidException()

learner = Learner(
    sup_dl,
    wrn_28(),
    CrossEntropyLossFlat(),
    opt_func=ranger,
    wd=1e-2,
    metrics=error_rate,
    cbs=[ShowGraphCallback(), SkipSomeValidations(n_epochs=20)],
)
```
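The skip condition in ``SkipSomeValidations`` reduces to a modulo test on a counter; here's a dependency-free sketch of that check (the real callback reads fastai's ``train_iter``):

```python
def should_validate(counter, n_epochs=20):
    # Mirrors SkipSomeValidations: validate only when the counter
    # is an exact multiple of n_epochs; otherwise validation is cancelled.
    return counter % n_epochs == 0

print([c for c in range(100) if should_validate(c)])  # [0, 20, 40, 60, 80]
```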
```
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
train=pd.read_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\train.csv')
train.Loan_Status=train.Loan_Status.map({'Y':1,'N':0})
train.isnull().sum()
Loan_status=train.Loan_Status
train.drop('Loan_Status',axis=1,inplace=True)
test=pd.read_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\test.csv')
Loan_ID=test.Loan_ID
data=pd.concat([train,test])  # DataFrame.append was removed in pandas 2.x
data.head()
data.describe()
data.isnull().sum()
data.Dependents.dtypes
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
corrmat=data.corr()
f,ax=plt.subplots(figsize=(9,9))
sns.heatmap(corrmat,vmax=.8,square=True)
data.Gender=data.Gender.map({'Male':1,'Female':0})
data.Gender.value_counts()
corrmat=data.corr()
f,ax=plt.subplots(figsize=(9,9))
sns.heatmap(corrmat,vmax=.8,square=True)
data.Married=data.Married.map({'Yes':1,'No':0})
data.Married.value_counts()
data.Dependents=data.Dependents.map({'0':0,'1':1,'2':2,'3+':3})
data.Dependents.value_counts()
corrmat=data.corr()
f,ax=plt.subplots(figsize=(9,9))
sns.heatmap(corrmat,vmax=.8,square=True)
data.Education=data.Education.map({'Graduate':1,'Not Graduate':0})
data.Education.value_counts()
data.Self_Employed=data.Self_Employed.map({'Yes':1,'No':0})
data.Self_Employed.value_counts()
data.Property_Area.value_counts()
data.Property_Area=data.Property_Area.map({'Urban':2,'Rural':0,'Semiurban':1})
data.Property_Area.value_counts()
corrmat=data.corr()
f,ax=plt.subplots(figsize=(9,9))
sns.heatmap(corrmat,vmax=.8,square=True)
data.head()
data.Credit_History.size
data.Credit_History.fillna(np.random.randint(0,2),inplace=True)
data.isnull().sum()
data.Married.fillna(np.random.randint(0,2),inplace=True)
data.isnull().sum()
data.LoanAmount.fillna(data.LoanAmount.median(),inplace=True)
data.Loan_Amount_Term.fillna(data.Loan_Amount_Term.mean(),inplace=True)
data.isnull().sum()
data.Gender.value_counts()
from random import randint
data.Gender.fillna(np.random.randint(0,2),inplace=True)
data.Gender.value_counts()
data.Dependents.fillna(data.Dependents.median(),inplace=True)
data.isnull().sum()
corrmat=data.corr()
f,ax=plt.subplots(figsize=(9,9))
sns.heatmap(corrmat,vmax=.8,square=True)
data.Self_Employed.fillna(np.random.randint(0,2),inplace=True)
data.isnull().sum()
data.head()
data.drop('Loan_ID',inplace=True,axis=1)
data.isnull().sum()
train_X=data.iloc[:614,]
train_y=Loan_status
X_test=data.iloc[614:,]
seed=7
from sklearn.model_selection import train_test_split
train_X,test_X,train_y,test_y=train_test_split(train_X,train_y,random_state=seed)
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
models=[]
models.append(("logreg",LogisticRegression()))
models.append(("tree",DecisionTreeClassifier()))
models.append(("lda",LinearDiscriminantAnalysis()))
models.append(("svc",SVC()))
models.append(("knn",KNeighborsClassifier()))
models.append(("nb",GaussianNB()))
seed=7
scoring='accuracy'
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
result=[]
names=[]
for name,model in models:
    kfold=KFold(n_splits=10,shuffle=True,random_state=seed)  # random_state requires shuffle=True
    cv_result=cross_val_score(model,train_X,train_y,cv=kfold,scoring=scoring)
    result.append(cv_result)
    names.append(name)
    print("%s %f %f" % (name,cv_result.mean(),cv_result.std()))
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
svc=LogisticRegression()
svc.fit(train_X,train_y)
pred=svc.predict(test_X)
print(accuracy_score(test_y,pred))
print(confusion_matrix(test_y,pred))
print(classification_report(test_y,pred))
df_output=pd.DataFrame()
outp=svc.predict(X_test).astype(int)
outp
df_output['Loan_ID']=Loan_ID
df_output['Loan_Status']=outp
df_output.head()
df_output[['Loan_ID','Loan_Status']].to_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\outputlr.csv',index=False)
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
svc=DecisionTreeClassifier()
svc.fit(train_X,train_y)
pred=svc.predict(test_X)
print(accuracy_score(test_y,pred))
print(confusion_matrix(test_y,pred))
print(classification_report(test_y,pred))
df_output=pd.DataFrame()
outp=svc.predict(X_test).astype(int)
outp
df_output['Loan_ID']=Loan_ID
df_output['Loan_Status']=outp
df_output.head()
df_output[['Loan_ID','Loan_Status']].to_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\outputdt.csv',index=False)
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
svc=LinearDiscriminantAnalysis()
svc.fit(train_X,train_y)
pred=svc.predict(test_X)
print(accuracy_score(test_y,pred))
print(confusion_matrix(test_y,pred))
print(classification_report(test_y,pred))
df_output=pd.DataFrame()
outp=svc.predict(X_test).astype(int)
outp
df_output['Loan_ID']=Loan_ID
df_output['Loan_Status']=outp
df_output.head()
df_output[['Loan_ID','Loan_Status']].to_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\outputld.csv',index=False)
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
svc=SVC()
svc.fit(train_X,train_y)
pred=svc.predict(test_X)
print(accuracy_score(test_y,pred))
print(confusion_matrix(test_y,pred))
print(classification_report(test_y,pred))
df_output=pd.DataFrame()
outp=svc.predict(X_test).astype(int)
outp
df_output['Loan_ID']=Loan_ID
df_output['Loan_Status']=outp
df_output.head()
df_output[['Loan_ID','Loan_Status']].to_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\outputSVC.csv',index=False)
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
svc=KNeighborsClassifier()
svc.fit(train_X,train_y)
pred=svc.predict(test_X)
print(accuracy_score(test_y,pred))
print(confusion_matrix(test_y,pred))
print(classification_report(test_y,pred))
df_output=pd.DataFrame()
outp=svc.predict(X_test).astype(int)
outp
df_output['Loan_ID']=Loan_ID
df_output['Loan_Status']=outp
df_output.head()
df_output[['Loan_ID','Loan_Status']].to_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\outputknn.csv',index=False)
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
svc=GaussianNB()
svc.fit(train_X,train_y)
pred=svc.predict(test_X)
print(accuracy_score(test_y,pred))
print(confusion_matrix(test_y,pred))
print(classification_report(test_y,pred))
df_output=pd.DataFrame()
outp=svc.predict(X_test).astype(int)
outp
df_output['Loan_ID']=Loan_ID
df_output['Loan_Status']=outp
df_output.head()
df_output[['Loan_ID','Loan_Status']].to_csv(r'C:\Users\prath\LoanEligibilityPrediction\Dataset\outputgnb.csv',index=False)
```
# Jupyter Notebooks and CONSTELLATION
This notebook is an introduction to using Jupyter notebooks with CONSTELLATION. In part 1, we'll learn how to send data to CONSTELLATION to create and modify graphs. In part 2, we'll learn how to retrieve graph data from CONSTELLATION. Part 3 will be about getting and setting information about the graph itself. Part 4 is a quick look at types. Part 5 will show how to call plugins. Part 6 will be fun (and occasionally useful). Part 7 introduces some advanced graph usage.
This notebook uses Python libraries that are included in the [Python Anaconda3 distribution](https://www.anaconda.com/distribution/) version 2020.02, Python v3.7.6.
To run through the notebook, click on the triangular 'run cell' button in the toolbar to execute the current cell and move to the next cell.
Let's start by seeing if we can talk to CONSTELLATION. Make sure that CONSTELLATION is running, and you've started the external scripting server (which has been done for you if you started the Jupyter notebook server from CONSTELLATION). The external scripting server makes a REST HTTP API available for use by any HTTP client.
The Python ``import`` statement looks for a library with the given name. Click the 'run cell' button to execute it.
(All of the libraries used here are included in the Anaconda Python distribution.)
```
import io
import os
import pandas as pd
import PIL.Image, PIL.ImageDraw, PIL.ImageFilter, PIL.ImageFont
# Also import some of the notebook display methods so we can display nice things.
#
from IPython.display import display, HTML, Image
# This is a convenient Python interface to the REST API.
#
import constellation_client
cc = constellation_client.Constellation()
```
When the external scripting server started, it automatically downloaded ``constellation_client.py`` into your ``.ipython`` directory. It's also important that you create a client instance **after** you start the REST server, because the server creates a secret that the client needs to know to communicate with the server.
After the import succeeds, we then create a Python object that communicates with CONSTELLATION on our behalf. CONSTELLATION provides communication with the outside world using HTTP (as if it were a web server) and JSON (a common data format). The ``constellation_client`` library hides these details so you can just use Python.
## Part 1: Sending Data to CONSTELLATION
Typically you'll have some data in a CSV file. We'll use some Python tricks (in this case, ``io.StringIO``) to make it look like we have a separate CSV file that we're reading into a dataframe. (If your data is in an Excel spreadsheet, you could use ``read_excel()`` to read it directly, rather than saving it to a CSV file first.)
```
csv_data = '''
from_address,from_country,to_address,to_country,dtg
abc@example1.com,Brazil,def@example2.com,India,2017-01-01 12:34:56
abc@example1.com,Brazil,ghi@example3.com,Zambia,2017-01-01 14:30:00
jkl@example4.com,India
'''.strip()
df = pd.read_csv(io.StringIO(csv_data))
df
```
Putting our data in a dataframe is a good idea; not only can we easily manipulate it, but it's easy to send a dataframe to CONSTELLATION, as long as we tell CONSTELLATION what data belongs where.
A dataframe is a table of data, but CONSTELLATION deals with graphs, so we need to reconcile a data table and a graph. It shouldn't be too hard to notice (especially given the column names) that a row of data in the dataframe represents a transaction: the source node has the "from" attributes, the destination node has the "to" attributes, and the transaction has the dtg attribute. The first row therefore represents a connection from `abc@example1.com` with country value `Brazil` to `def@example2.com` with country value `India`. The last row represents a node that is not connected to any other node.
Let's massage the data to something that CONSTELLATION likes. All of the addresses are email addresses, which CONSTELLATION should be clever enough to recognise, but we'd prefer to be explicit, so let's add the types.
```
df.from_address = df.from_address + '<Email>'
df.to_address = df.to_address + '<Email>'
df
```
Dataframes are clever enough to work on a column at a time; we don't have to do our own loops.
Let's check the data types.
```
df.dtypes
```
All of the columns are of type ``object``, which in this case means "string". However, CONSTELLATION expects datetimes to actually be of ``datetime`` type; if we try and upload datetimes as strings, CONSTELLATION won't recognise them as datetimes.
Not to worry: pandas can fix that for us.
```
df.dtg = pd.to_datetime(df.dtg)
df
```
The datetimes look exactly the same, but notice that the ``NaN`` (Not a Number) value in the last row has become a ``NaT`` (Not a Time) value. If we look at the data types again, we can see that the ``dtg`` values are now datetimes, not objects.
```
df.dtypes
```
The ``datetime64[ns]`` type means that datetimes are stored as a 64-bit number representing a number of nanoseconds from a zero timestamp. Not that we care that much about the storage: the important thing is that ``dtg`` is now
a datetime column.
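A minimal, standalone illustration of the same conversion (with made-up values shaped like the data above):

```python
import pandas as pd

# A string column with one missing value, as read_csv would produce it.
s = pd.Series(['2017-01-01 12:34:56', None], dtype='object')
dt = pd.to_datetime(s)
print(dt.dtype)            # datetime64[ns]
print(dt.isna().tolist())  # [False, True] — the missing value became NaT
```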
CONSTELLATION recognises source, destination and transaction attributes by the prefixes of their names. It won't be too surprising to find out that the prefixes are ``source``, ``destination``, and ``transaction``, with a ``.`` separating the prefixes from the attribute names.
Let's rename the columns to match what CONSTELLATION expects. (We didn't do this first because the original column headers were valid Python identifiers: it was easier to type ``df.dtg`` than ``df['transaction.DateTime']``.)
Note that we use the name ``Identifier`` for the values that uniquely identify a particular node.
```
df = df.rename(columns={
    'from_address': 'source.Label',
    'from_country': 'source.Geo.Country',
    'to_address': 'destination.Label',
    'to_country': 'destination.Geo.Country',
    'dtg': 'transaction.DateTime'})
df
```
Now the dataframe is ready to be sent to CONSTELLATION. We'll create a new graph (using the ``new_graph()`` method), and send the dataframe to CONSTELLATION using the ``put_dataframe()`` method.
If you get a Python `ConnectionRefusedError` when you run this cell, you've probably forgotten to start the CONSTELLATION external scripting server in the Tools menu. If you start it now, you'll have to go back and re-execute the "`cc = constellation_client.Constellation()`" cell, then come back here.)
```
cc.new_graph()
cc.put_dataframe(df)
```
CONSTELLATION creates a new graph, accepts the contents of the dataframe, applies the schema, and automatically arranges the graph. Finally, it resets the view so you can see the complete graph.
In this simple case, it's easy to see that the first two rows of the dataframe are correctly represented as nodes with transactions between them. The third row of the dataframe does not have a destination, so there is no transaction.
If you open the `Attribute Editor` view and select a transaction, you'll see that they have the correct ``DateTime`` values.
Of course, we didn't have to create a new graph. In the same graph, let's add a new node with a transaction from an existing node (`ghi@example3.com`). We'll use another (pretend) CSV file and modify the dataframe as we did before.
```
csv_data = '''
from_address,from_country,to_address,to_country,dtg
ghi@example3.com,Zambia,mno@example3.com,Brazil,2017-01-02 01:22:33
'''.strip()
dfn = pd.read_csv(io.StringIO(csv_data))
dfn.from_address = dfn.from_address + '<Email>'
dfn.to_address = dfn.to_address + '<Email>'
dfn.dtg = pd.to_datetime(dfn.dtg)
dfn = dfn.rename(columns={
    'from_address': 'source.Label',
    'from_country': 'source.Geo.Country',
    'to_address': 'destination.Label',
    'to_country': 'destination.Geo.Country',
    'dtg': 'transaction.DateTime'})
cc.put_dataframe(dfn)
```
## Part 2: Getting Data from CONSTELLATION
We'll use the graph that we created in Part 1 to see what happens when we get data from CONSTELLATION. Make sure that the graph is still displayed in CONSTELLATION.
```
df = cc.get_dataframe()
df.head()
```
There seems to be more data there. Let's look at the columns.
```
print(f'Number of columns: {len(df.columns)}')
df.columns
```
We added five columns in part 1, but we get 50+ columns back! (The number may vary depending on the version of CONSTELLATION and your default schema.)
What's going on?
Remember that CONSTELLATION will apply the graph's schema to your data, and do an arrangement. Those other columns are the result of applying the schema, or (in the case of the x, y, z columns) applying an arrangement. The columns are in the dataframe in no particular order.
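If you want the columns in a predictable order, you can reorder them yourself after the fact. A pandas-only sketch (the column names and values here are illustrative, not taken from a real graph):

```python
import pandas as pd

# A stand-in for a dataframe returned by get_dataframe(),
# with its columns in arbitrary order.
df = pd.DataFrame({'source.x': [0.0],
                   'source.Label': ['abc@example1.com'],
                   'transaction.DateTime': [pd.Timestamp('2017-01-01 12:34:56')]})
df = df[sorted(df.columns)]  # alphabetical column order
print(list(df.columns))  # ['source.Label', 'source.x', 'transaction.DateTime']
```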
Let's have a look at the data types in the dataframe.
```
df.dtypes
```
The various ``selected`` columns are bool (that is, ``true`` or ``false`` values): an element is either selected or not selected. The ``transaction.DateTime`` is a ``datetime64[ns]`` as expected. Everything else should be unsurprising. One thing to notice is that ``source.nradius`` may be an ``int64``, even though in CONSTELLATION it's a ``float``. This is because ``nradius`` usually has integer values (typically 1.0), so the dataframe will convert it to an ``int64``. This shouldn't be a problem for us; it's still a number. This can happen for any column that only has integral values.
We can see what the CONSTELLATION types are using ``cc``'s type attribute: the ``Constellation`` instance will remember the types after each call to ``get_dataframe()``. (Usually you won't have to worry about these.)
```
cc.types
```
CONSTELLATION types such ``boolean``, ``datetime``, ``float``, ``int``, ``string`` convert to their obvious types in a dataframe. Other types convert to reasonable string equivalents; for example, ``icon`` converts to a string containing the name of the icon.
The ``color`` type converts to a ``[red, green, blue, alpha]`` list, where each value ranges from 0 to 1. Some people are more used to web colors (in the format #RRGGBB). The following function converts a color list to a web color.
```
def to_web_color(color):
    """Convert an RGB tuple of 0..1 to a web color."""
    return f'#{int(color[0]*255):02x}{int(color[1]*255):02x}{int(color[2]*255):02x}'
```
For example:
```
print(df['source.color'])
print(df['source.color'].apply(to_web_color))
```
Which allows us to display labels using their node's schema color.
```
import html
for label,color in df[['source.Label', 'source.color']].values:
    h = '<span style="color:{}">{}</span>'.format(to_web_color(color), html.escape(label))
    display(HTML(h))
```
### Graph elements
Calling ``get_dataframe()`` with no parameters gave us four rows representing the whole graph: one row for each transaction, and a row for the singleton node.
Sometimes we don't want all of the graph. We can ask for just the nodes.
```
df = cc.get_dataframe(vx=True)
df
```
Five rows, one for each node. Note that all of the columns use the ``source`` prefix.
We can ask for just the transactions.
```
df = cc.get_dataframe(tx=True)
df
```
Three rows, one for each transaction. Note that transactions always include the source and destination nodes.
Finally, you can get just the elements that are selected. Before you run the next cell, use your mouse to select two nodes in the current graph.
```
df = cc.get_dataframe(vx=True, selected=True)
df
```
Two rows, one for each selected node. Select some different nodes and try again. (If you don't see any rows here, it's because you didn't select any nodes. Select a couple of nodes and run the cell again.)
Generally, you'll probably want one of ``vx=True`` when you're looking at nodes, or ``tx=True`` when you're looking at transactions.
Select a couple of transactions, then run the next cell.
```
df = cc.get_dataframe(tx=True, selected=True)
df
```
When you ask for transactions, you not only get the transaction data, but the data for the nodes at each end of the transaction as well.
### Choosing attributes
You generally don't want all of the attributes that CONSTELLATION knows about. For example, the x,y,z coordinates are rarely useful when you're analysing data. The ``get_dataframe()`` method allows you to specify only the attributes you want. Not only does this use less space in the dataframe, but particularly for larger graphs, it can greatly reduce the time taken to get the data from the graph into a dataframe.
First we'll find out what graph, node, and transaction attributes exist. The `get_attributes()` method returns a dictionary mapping attribute names to their CONSTELLATION types. For consistency with the other method return values, the attribute names are prefixed with `graph.`, `source.`, and `transaction.`. (Attributes that start with `graph.` are attributes of the graph itself, such as the graph's background color. You can see these in the "Graph" section of the Attribute Editor.)
```
attrs = cc.get_attributes()
attrs
```
To specify just the attributes you want, pass a list of attribute names using the ``attrs`` parameter.
```
df = cc.get_dataframe(vx=True, attrs=['source.Identifier', 'source.Type'])
df
```
### Updating the graph: nodes
There is a special attribute for each element that isn't visible in CONSTELLATION: ``source.[id]``, ``destination.[id]``, and ``transaction.[id]``. These are unique identifiers for each element. These identifiers can change whenever a graph is modified, so they can't be relied on to track an element. However, they can be used to identify a unique element when you get a dataframe, modify a value, and send the dataframe back to CONSTELLATION.
For example, suppose we want to make all nodes in the ``@example3.com`` domain larger, and color them blue. We need the ``Identifier`` attribute (for the domain name), the ``nradius`` attribute so we can modify it, and the ``source.[id]`` attribute to tell CONSTELLATION which nodes to modify. We don't need to get the color, because we don't care what it is before we change it.
```
df = cc.get_dataframe(vx=True, attrs=['source.Identifier', 'source.nradius', 'source.[id]'])
df
```
Let's filter out the ``example3.com`` nodes and double their radii.
```
e3 = df[df['source.Identifier'].str.endswith('@example3.com')].copy()
e3['source.nradius'] *= 2
e3
```
We don't need to send the ``source.Identifier`` column back to CONSTELLATION, so let's drop it. We'll also add the color column. (Fortunately, CONSTELLATION is quite forgiving about color values.)
```
e3 = e3.drop('source.Identifier', axis=1)
e3['source.color'] = 'blue'
e3
```
Finally, we can send this dataframe to CONSTELLATION.
```
cc.put_dataframe(e3)
```
The two ``example3.com`` nodes should be noticeably larger. However, the colors didn't change. This is because one of the things that CONSTELLATION does for us is to apply the graph's schema whenever you call ``put_dataframe()``, so the color changes to blue, then is immediately overridden by the schema.
Let's put the node sizes back to 1, and call ``put_dataframe()`` again, but this time tell CONSTELLATION not to apply the schema.
```
e3['source.nradius'] = 1
cc.put_dataframe(e3, complete_with_schema=False)
```
Better.
Another thing that CONSTELLATION does for a ``put_dataframe()`` is a simple arrangement. If you want to create your own arrangement, you have to tell CONSTELLATION not to do this using the ``arrange`` parameter.
Let's arrange the nodes in a circle, just like the built-in circle arrangement. (Actually, with only five nodes, it's more of a pentagon.) We don't need to know anything about the nodes for this one, we just need to know they exist. In particular, we don't need to know their current x, y, and z positions; we'll just create new ones.
```
df = cc.get_dataframe(vx=True, attrs=['source.[id]'])
df
n = len(df)
import numpy as np
df['source.x'] = n * np.sin(2*np.pi*(df.index/n))
df['source.y'] = n * np.cos(2*np.pi*(df.index/n))
df['source.z'] = 0
df
cc.put_dataframe(df, arrange='')
```
The empty string tells CONSTELLATION not to perform any arrangement. (You could put the name of any arrangement plugin there, but there are better ways of doing that.)
Also note that the blue nodes aren't blue any more, because the schema was applied.
### Updating the graph: transactions
The graph we created earlier has a problem: the transactions have the wrong type. More precisely, they don't have any type. Let's fix that. We'll get all of the transactions from the graph, give them a type, and update the graph.
When you run this, the transactions will turn green, indicating that schema completion has happened. You can look at the Attribute Editor to see that the transactions types are now `Communication`.
```
# Get the transactions from the graph.
#
tx_df = cc.get_dataframe(tx=True, attrs=['transaction.[id]'])
display(tx_df)
# Add the transaction type.
#
tx_df['transaction.Type'] = 'Communication'
display(tx_df)
# Update the graph.
#
cc.put_dataframe(tx_df)
```
### Updating the graph: custom attributes
Sometimes we want to add attributes that aren't defined in the graph's schema. For example, let's add an attribute called ``Country.Chars`` that shows the number of characters in each node's country name.
```
c_df = cc.get_dataframe(vx=True, attrs=['source.[id]', 'source.Geo.Country'])
c_df['source.Country.Chars'] = c_df['source.Geo.Country'].str.len()
display(c_df)
display(c_df.dtypes)
cc.put_dataframe(c_df)
```
If you look at the Attribute Editor, you'll see the new node attribute ``Country.Chars``. However, if you right-click on the attribute and select ``Modify Attribute``, you'll see that the new attribute is a string, not an integer, even though the value is an integer in the dataframe. This is because CONSTELLATION assumes that everything it doesn't recognise is a string.
We can fix this by suffixing a type indicator to the column name. Let's create a new attribute called ``Country.Length`` which we turn into an integer by adding ``<integer>`` to the name.
```
c_df = cc.get_dataframe(vx=True, attrs=['source.[id]', 'source.Geo.Country'])
c_df['source.Country.Length<integer>'] = c_df['source.Geo.Country'].str.len()
display(c_df)
cc.put_dataframe(c_df)
```
Looking at ``Country.Length`` in the Attribute Editor, we can see that it is an integer. (Click on the Edit button to see the different dialog box.)
Other useful types are ``float`` and ``datetime``. You can see the complete list of types by adding a custom attribute in the Attribute Editor and looking at the ``Attribute Type`` dropdown list.
(Note that there is currently no way to delete attributes externally, so if you want to delete the ``Country.Chars`` attribute, you'll have to do it manually.)
### Deleting nodes and transactions
The special identifier ``[delete]`` lets you delete nodes and transactions from the graph. It doesn't matter what value is in the ``source.[delete]`` column - just the fact that the column is there is sufficient to delete the graph elements. This means that all of the elements in the dataframe will be deleted, so be careful.
Let's delete all singleton nodes. These nodes have no transactions connected to them, so when we get a dataframe, the ``destination.[id]`` value will be ``NaN``.
(If we get all nodes with ``vx=True``, we won't get any data about transactions. If we get all transactions with ``tx=True``, we won't get the singleton nodes.)
```
# Get the graph. (Names are included so we can check that the dataframe matches the graph.)
#
df = cc.get_dataframe(attrs=['source.[id]', 'source.Identifier', 'destination.[id]', 'destination.Identifier'])
display(df)
# Keep the singleton rows (where the destination.[id] is null).
#
df = df[df['destination.[id]'].isnull()]
display(df)
# Create a new dataframe with a source.[id] column containing all of the values from the df source.[id] column,
# and a source.[delete] column containing any non-null value
#
del_df = pd.DataFrame({'source.[id]': df['source.[id]'], 'source.[delete]': 0})
display(del_df)
# Delete the singletons.
#
cc.put_dataframe(del_df)
```
Likewise, we can delete transactions. Let's delete all transactions originating from ``ghi``.
```
# Get all transactions.
# We don't need all of the attributes for the delete, but we'll get them to use below.
#
df = cc.get_dataframe(tx=True)
display(df)
# Keep the transactions originating from 'ghi'.
#
df = df[df['source.Identifier'].str.startswith('ghi@')]
display(df)
# Create a new dataframe containing the transaction ids in the original dataframe.
# It doesn't matter what the value of 'transaction.[delete]' is,
# but we have to give it something.
#
del_df = pd.DataFrame({'transaction.[id]': df['transaction.[id]'], 'transaction.[delete]': 0})
display(del_df)
# Delete the transactions.
#
cc.put_dataframe(del_df)
```
And let's add a transaction that is exactly the same as the original. Remember that we originally fetched all of the attributes, so this new transaction will have the same attribute values.
```
cc.put_dataframe(df)
```
## Part 3: Graph Attributes
As well as node and transaction attributes, we can also get graph attributes. (Graph attributes can be seen in CONSTELLATION's Attribute Editor, above the node and transaction attributes.)
```
df = cc.get_graph_attributes()
df
```
There is only one set of graph attributes, so there is one row in the dataframe.
Let's display the `Geo.Country` attribute in a small size above the nodes, and the country flag as a decorator on the top-right of the node icon.
A node label is defined as *``attribute-name``*``;``*``color``*``;``*``size``*, with multiple labels separated by pipes "|".
A decorator is defined as ``"nw";"ne";"se";"sw";`` where any of the direction ordinals may be blank.
We don't care what the top labels and decorators are right now, so we'll just create a new dataframe.
```
labels = 'Geo.Country;Orange;0.5'
df = pd.DataFrame({'node_labels_top': [labels], 'decorators': [';"Geo.Country";;;']})
cc.set_graph_attributes(df)
```
(You may have to zoom in to see the smaller labels.)
To add a label on the bottom in addition to the default ``Label`` attribute, you have to specify both labels.
```
labels = 'Type;Teal;0.5|Label;LightBlue;1'
df = pd.DataFrame({'node_labels_bottom': [labels]})
cc.set_graph_attributes(df)
```
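The label strings used above are just formatted text, so they can be assembled with a small helper. This is a hypothetical convenience (plain string formatting, no CONSTELLATION dependency):

```python
def make_labels(*labels):
    """Build a node-label string: each label is an (attribute, color, size)
    tuple; multiple labels are joined with '|'."""
    return '|'.join(f'{attr};{color};{size}' for attr, color, size in labels)

print(make_labels(('Geo.Country', 'Orange', 0.5)))
# Geo.Country;Orange;0.5
print(make_labels(('Type', 'Teal', 0.5), ('Label', 'LightBlue', 1)))
# Type;Teal;0.5|Label;LightBlue;1
```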
## Part 4: Types
CONSTELLATION defines many types. Use the ``describe_type()`` method to get a description of a particular type.
```
t = cc.describe_type('Communication')
t
```
## Part 5: Plugins
You can call CONSTELLATION plugins from Python (if you know what they're called). Let's arrange the graph in trees.
```
cc.run_plugin('ArrangeInTrees')
```
If we can't see all of the graph, reset the view.
```
cc.run_plugin('ResetView')
```
You can also call plugins with parameters (if you know what they are). For example, the ``AddBlaze`` plugin accepts a node id to add a blaze to.
Let's add a blaze to each ``example3.com`` node.
```
# Get all nodes and their identifiers.
#
df = cc.get_dataframe(vx=True, attrs=['source.Identifier', 'source.[id]'])
# Which nodes belong to the example3.com domain?
#
e3 = df[df['source.Identifier'].str.endswith('@example3.com')]
# Add a blaze to those nodes.
#
cc.run_plugin('AddBlaze', args={'BlazeUtilities.vertex_ids': list(e3['source.[id]'])})
```
Let's be neat and tidy and remove them again. We can reuse the dataframe.
```
cc.run_plugin('RemoveBlaze', args={'BlazeUtilities.vertex_ids': list(e3['source.[id]'])})
```
### Multichoice parameters
While most parameter values are quite simple (strings, integers, etc), some are a little more complex to deal with, such as the multichoice parameter. In order to pass multichoice parameter values to a plugin, you need to know the possible choices, and you need to know how to select them.
Let's use the <i>select top n</i> plugin as an example. The schema view tells us that this plugin has a multichoice parameter called <i>SelectTopNPlugin.type</i>.
Looking in the Data Access View, the type options will vary depending on the value given to the <i>SelectTopNPlugin.type_category</i> parameter. For this example we will set the type category to "Online Location", which will result in the possible type options including:
- Online Identifier
- Email
In order to use this parameter, we need to create a string containing all options by joining each option with '\n'. We also need to select all the options we want by prefixing them with '`✓ `' (i.e. Unicode character U+2713 (CHECK MARK) followed by character U+0020 (SPACE)).
This is obviously not an ideal system, but this is how multichoice parameters were implemented at a time when it wasn't expected that CONSTELLATION's internal workings would be exposed via scripting or a REST API.
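The option-string construction described above is easy to wrap in a small helper. This is just a sketch — `multichoice_value` is not part of `constellation_client`:

```python
CHECK = '\u2713'  # U+2713 CHECK MARK

def multichoice_value(options, checked):
    # One option per line; selected options are prefixed with
    # a check mark and a space, as the REST API expects.
    return '\n'.join(f'{CHECK} {opt}' if opt in checked else opt
                     for opt in options)

print(multichoice_value(['Online Identifier', 'Email'], ['Email']))
```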
(This plugin won't do anything on this simple graph.)
```
# Select a node.
#
cc.run_plugin('SelectSources')
# Run the "select top n" plugin with a custom multichoice parameter value.
#
CHECK = '\u2713'
options = ['Online Identifier', 'Communication', 'User Name']
checked = ['Communication']
parameters = {
'SelectTopNPlugin.mode': "Node",
'SelectTopNPlugin.type_category': 'Online Location',
'SelectTopNPlugin.type': '\n'.join([f'{CHECK} {v}' if v in checked else v for v in options]),
'SelectTopNPlugin.limit': 2
}
cc.run_plugin('SelectTopN', args=parameters)
```
So how do we know what plugins exist?
```
plugins = cc.list_plugins()
sorted(plugins)
```
Unfortunately, at the moment there is no way of using the REST API to find out what each plugin does or what parameters it takes. However, you can go to the Schema View in CONSTELLATION and look at the ``Plugins`` tab.
If you'd like to find out what a particular plugin does:
```
cc.describe_plugin('ARRANGEINGRIDGENERAL')
```
## Part 6: Data Access Plugins
Data Access plugins in CONSTELLATION are like any other plugins; they just have a different user interface. This means that they can be called from an external scripting client just like any other plugin.
One caveat is that many of these plugins use the global parameters (seen at the top of the Data Access View).
- Query Name
- Range
Let's try running a data access plugin, although to avoid connectivity problems we'll use the <i>Test Parameters</i> plugin in the <strong>Developer</strong> category of the Data Access View. This plugin doesn't actually access any external data, but rather simply exists to test the mechanisms CONSTELLATION uses to build and use plugin parameters. The plugin has many parameters, but for this example we will focus on the following:
- ``CoreGlobalParameters.query_name``: A string representing the name of the query.
- ``CoreGlobalParameters.datetime_range``: The datetime range; see below.
You might want to try running this plugin manually on an empty graph before running the code below. The plugin will create two connected nodes containing `Comment` attribute values reflecting the values specified by the plugin parameters. (You can see these in the Attribute Editor after you've run the cell.)
Note that the global parameters and plugin-specific parameters are named so they can be differentiated.
Run the plugin a few times, changing the parameters each time, to satisfy yourself that this is the case. After you've done that, let's try running it programmatically.
```
def get_data():
    """Display the results of the plugin."""
    df = cc.get_dataframe()
    print('query_name :', df.loc[0, 'source.Comment'])
    print('datetime_range :', df.loc[0, 'destination.Comment'])
    print('all_parameters :', df.loc[0, 'transaction.Comment'])
# Set up a counter.
#
counter = 0
cc.new_graph()
counter += 1
parameters = {
'CoreGlobalParameters.query_name': f'Query {counter} from a REST client',
'CoreGlobalParameters.datetime_range': 'P1D',
'TestParametersPlugin.robot': 'Bender',
'TestParametersPlugin.planets': f'{CHECK} Venus\n{CHECK} Mars'
}
cc.run_plugin('TestParameters', args=parameters)
get_data()
```
The datetime range can be an explicit range, or a duration from the current time.
### Datetime range
A range is represented by two ISO 8601 datetime values separated by a semi-colon. This represents an explicit start and end point. Examples are:
- ``2016-01-01T00:00:00Z;2016-12-31T23:59:59Z``
- ``2017-06-01T12:00:00Z;2017-06-01T13:00:00Z``
### Datetime duration
A duration is represented by a single ISO 8601 duration. This is converted to an explicit datetime range when the query is run. Examples are:
- ``P1D``: one day
- ``P7D``: 7 days
- ``P1M``: one month
- ``P1Y``: one year
- ``P1M7D``: one month and seven days
Note that only years, months, and days are supported (so ``P1H`` for one hour is not a valid period, for example). For durations other than those, use Python to determine an explicit range.
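For durations the server can't express, here is a standard-library sketch that builds an explicit range; `explicit_range` is a hypothetical helper, not part of `constellation_client`, and it assumes the trailing-``Z`` format shown in the examples above:

```python
from datetime import datetime, timedelta, timezone

def explicit_range(hours):
    # Build a 'start;end' range covering the last `hours` hours,
    # using the ISO 8601 'Z'-suffixed form shown in the examples above.
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    fmt = '%Y-%m-%dT%H:%M:%SZ'
    return f'{start.strftime(fmt)};{end.strftime(fmt)}'

print(explicit_range(1))
```

The resulting string can be assigned directly to the ``CoreGlobalParameters.datetime_range`` parameter shown in the code above.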
Let's try calling the plugin again.
```
cc.new_graph()
counter += 1
parameters['CoreGlobalParameters.query_name'] = f'Query {counter} from a REST client'
parameters['CoreGlobalParameters.datetime_range'] = '2017-07-01T00:21:15Z;2017-07-14T00:21:15Z'
cc.run_plugin('TestParameters', args=parameters)
get_data()
```
### Something's wrong?
Sometimes things don't work. Like this.
```
cc.run_plugin('seletall')
```
That's not particularly helpful. Fortunately, when something goes wrong the Python client remembers the most recent response, so we can look at what the REST server is telling us.
```
HTML(cc.r.content.decode('latin1'))
```
What do you mean, "No such plugin as"... Oh, we missed a letter. Let's try that again.
```
cc.run_plugin('selectall')
```
## Part 7: Taking a Screenshot
It can be useful to include a screenshot of the graph in a notebook. It's easy to get an image encoded as data representing a PNG file.
```
buf = cc.get_graph_image()
Image(buf)
```
Here we used the built-in notebook facilities to display the image (which is returned from CONSTELLATION as a sequence of bytes, the encoding of the image in PNG format).
If another window overlaps CONSTELLATION's graph display, you might see that window in the image. One way of avoiding this is to resize the CONSTELLATION window slightly first. Another way is to add a sleep before the `get_graph_image()` call and click in the CONSTELLATION window to bring it to the top.
We can also use PIL (the Python Imaging Library) to turn the bytes into an image and manipulate it.
```
img = PIL.Image.open(io.BytesIO(buf))
```
You might want to resize the image to fit it into a report.
```
def resize(img, max_size):
    w0 = img.width
    h0 = img.height
    s = max(w0, h0)/max_size
    w1 = int(w0//s)
    h1 = int(h0//s)
    print(f'Resizing from {w0}x{h0} to {w1}x{h1}')
    return img.resize((w1, h1))
small = resize(img, 512)
# PIL images know how to display themselves.
#
small
```
The image can be saved to a file. You can either write the bytes directly (remember the bytes are already in PNG format), or save the PIL image.
```
with open('my_constellation_graph.png', 'wb') as f:
    f.write(buf)
img.save('my_small_constellation_graph.png')
```
PIL is fun.
```
small.filter(PIL.ImageFilter.EMBOSS)
w = small.width
h = small.height
small.crop((int(w*0.25), int(h*0.25), int(w*0.75), int(h*0.75)))
# Fonts depend on the operating system.
#
if os.name == 'nt':
    font = PIL.ImageFont.truetype('calibri.ttf', 20)
else:
    font = PIL.ImageFont.truetype('Oxygen-Sans.ttf', 20)
draw = PIL.ImageDraw.Draw(small)
draw.text((0, 0), 'This is my graph, it is mine.', (255, 200, 40), font=font)
small
```
## Part 8: NetworkX
NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
This notebook isn't going to teach you how to use NetworkX, but you can extract your CONSTELLATION graph into a NetworkX graph for further analysis.
We'll start by getting a dataframe containing the graph data.
```
cc.run_plugin('ArrangeInGridGeneral')
df = cc.get_dataframe()
df.head()
```
The ``constellation_client`` library contains a function that converts a dataframe to a NetworkX graph. You can see the documentation for it using the notebook's built-in help mechanism.
```
constellation_client.nx_from_dataframe?
```
When you've looked at the help, close the help window and create a NetworkX graph from the dataframe.
```
g = constellation_client.nx_from_dataframe(df)
g
```
We can look at a node and see that it has the expected attributes.
```
g.nodes(data=True)['0']
```
We can look at an edge and see that it has the expected attributes.
```
list(g.edges(data=True))[0]
```
NetworkX can draw its graphs using a plotting library called ``matplotlib``. We just need to tell ``matplotlib`` to draw in the notebook, and get the correct positions and colors from the node and edge attributes. (We can use a convenience function provided by ``constellation_client`` to get the positions.)
```
%matplotlib inline
import networkx as nx
pos = constellation_client.get_nx_pos(g)
node_colors = [to_web_color(g.nodes[n]['color']) for n in g.nodes()]
edge_colors = [to_web_color(g.edges[e]['color']) for e in g.edges()]
nx.draw(g, pos=pos, node_color=node_colors, edge_color=edge_colors)
```
```
# Step 1 - Prepare Data
Data cleaning.
```
%load_ext autoreload
%autoreload 2
import pandas as pd
# Custom Functions
import sys
sys.path.append('../src')
import data as dt
import prepare as pr
import helper as he
```
### Load Data
```
dt_task = dt.Data()
data = dt_task.load('fn_clean')
fn_data = he.get_config()['path']['sample_dir'] + 'data.txt'
data = pd.read_csv(fn_data, sep='\t', encoding='utf-8')
data.columns
data = data[data.answer_markedAsAnswer == True].reset_index(drop=True).copy()
data.head().to_json()
task_params = {
1 : {
'label' : 'subcat',
'type' : 'classification',
'language' : 'en',
'prepare' : ''
},
2 : {
'label' : 'cat',
'type' : 'classification',
'language' : 'en',
'prepare' : ''
},
4 : {
'type' : 'qa',
'language' : 'en',
'prepare' : None
}
}
for t in task_params.keys():
    print(t)
data.head()
cl = pr.Clean(language='en')
%%time
title_clean = cl.transform(data.question_title,
do_remove=True,
do_placeholder=True)
%%time
body_clean = cl.transform(data.question_text,
do_remove=True,
do_placeholder=True)
title_clean[0:20]
body_clean[0:20]
data['text'] = title_clean
data.head()
len(data[data.answer_markedAsAnswer == True])
tt = ['Asdas', 'asdasd sad asd', 'Asd ss asda asd']
[t.split(' ') for t in tt]
task_type_lookup = {
1 : 'classification',
2 : 'classification',
3 : 'ner',
4 : 'qa'
}
task_type_lookup[0]
task_type_lookup[1]
data[data.answer_upvotes > 1].head()
len(data)
data_red = data.drop_duplicates(subset=['text'])
data_red['text'] = data_red.text.str.replace(r'[\t\n]', ' ', regex=True).str.replace('"', '', regex=False).str.replace("'", ' ', regex=False)
data_red['subcat'] = data_red.subcat.str.replace(r'[\t\n]', ' ', regex=True).str.replace('"', '', regex=False).str.replace("'", ' ', regex=False)
len(data_red)
# data_red['subcat'] = data_red.subcat.str.replace(r'\D', '')
# data_red['text'] = data_red.text.str.replace(r'\D', '')
data_red.subcat.value_counts()
tt = data_red[data_red.groupby('subcat').subcat.transform('size') > 14]
tt.subcat.value_counts()
pd.DataFrame(data_red.subcat.drop_duplicates())
list(set(data.subcat.drop_duplicates()) - set(data_red.subcat.drop_duplicates()))
list(data_red.subcat.drop_duplicates())
data_red = data_red[data_red.subcat.isin(['msoffice',
'edge',
'ie',
'windows',
'insider',
'mobiledevices',
'outlook_com',
'protect',
'skype',
'surface',
'windowslive'])].copy()
len(data_red)
data_red[['text','subcat']].head(6000).reset_index(drop=True).to_csv(he.get_config()['path']['sample_dir'] + 'train.txt', sep='\t', encoding='utf-8', index=False)
data_red[['text','subcat']].tail(7733-6000).reset_index(drop=True).to_csv(he.get_config()['path']['sample_dir'] + 'test.txt', sep='\t', encoding='utf-8', index=False)
```
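The head/tail split above preserves row order, so any ordering in the data leaks into the partitions. A shuffled alternative, sketched with plain pandas (the 80/20 ratio and seed here are arbitrary choices, not from the original pipeline):

```python
import pandas as pd

def shuffled_split(df, frac=0.8, seed=42):
    # Sample a random train partition; the remainder becomes the test set.
    train = df.sample(frac=frac, random_state=seed)
    test = df.drop(train.index)
    return train.reset_index(drop=True), test.reset_index(drop=True)

demo = pd.DataFrame({'text': list('abcdefghij'), 'subcat': ['x'] * 10})
train, test = shuffled_split(demo)
print(len(train), len(test))  # 8 2
```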
## Create Azure Resources
This notebook creates the relevant Azure resources: a resource group containing an IoT hub with an IoT Edge device identity, and an Azure container registry (ACR).
```
from dotenv import set_key, get_key, find_dotenv
from pathlib import Path
import json
import time
```
To create these Azure resources, you will need the following information:
* An Azure subscription id
* A resource group name
* A region for your resources
We also require you to provide variable names that will be used to create these resources in later notebooks.
```
# Azure resources
subscription_id = "<subscription_id>"
resource_group = "<resource_group>"
resource_region = "<resource_region>"  # e.g. resource_region = "eastus"
# IoT hub name - a globally UNIQUE name is required, e.g. iot_hub_name = "myiothubplusrandomnumber".
iot_hub_name = "<iot_hub_name>"
device_id = "<device_id>" # the name you give to the edge device. e.g. device_id = "mydevice"
# azure container registry name - a globally UNIQUE name is required, e.g. acr_name = "myacrplusrandomnumber"
acr_name = '<acr_name>'
```
Create and initialize a dotenv file for storing parameters used in multiple notebooks.
```
env_path = find_dotenv()
if env_path == "":
    Path(".env").touch()
    env_path = find_dotenv()
set_key(env_path, "subscription_id", subscription_id)
set_key(env_path, "resource_group", resource_group)
set_key(env_path, "resource_region", resource_region)
set_key(env_path, "iot_hub_name", iot_hub_name)
set_key(env_path, "device_id", device_id)
set_key(env_path,"acr_name", acr_name)
acr_login_server = '{}.azurecr.io'.format(acr_name)
set_key(env_path,"acr_login_server", acr_login_server)
```
## Create Azure Resources
```
# login in your account
accounts = !az account list --all -o tsv
if "Please run \"az login\" to access your accounts." in accounts[0]:
    !az login -o table
else:
    print("Already logged in")
```
Below we set the active subscription and create the resource group.
```
!az account set --subscription $subscription_id
# create a new resource group
!az group create -l $resource_region -n $resource_group
```
### Create IoT Hub
```
# install az-cli iot extension - I had to use "sudo -i" to make it work
!sudo -i az extension add --name azure-cli-iot-ext
!az iot hub list --resource-group $resource_group -o table
# Command to create a Standard tier S1 hub with name `iot_hub_name` in the resource group `resource_group`.
!az iot hub create --resource-group $resource_group --name $iot_hub_name --sku S1
# Command to create a free tier F1 hub. You may encounter error "Max number of Iot Hubs exceeded for sku = Free" if quota is reached.
# !az iot hub create --resource-group $resource_group --name $iot_hub_name --sku F1
```
### Register an IoT Edge device
We create a device with name `device_id` under previously created iot hub.
```
time.sleep(30) # Wait 30 seconds to let IoT hub stable before creating a device
print("az iot hub device-identity create --hub-name {} --device-id {} --edge-enabled -g {}".format(iot_hub_name,device_id,resource_group))
!az iot hub device-identity create --hub-name $iot_hub_name --device-id $device_id --edge-enabled -g $resource_group
```
Obtain device_connection_string. It will be used in the next step.
```
print("az iot hub device-identity show-connection-string --device-id {} --hub-name {} -g {}".format(device_id, iot_hub_name,resource_group))
json_data = !az iot hub device-identity show-connection-string --device-id $device_id --hub-name $iot_hub_name -g $resource_group
print(json_data)
device_connection_string = json.loads(''.join([i for i in json_data if 'WARNING' not in i]))['connectionString']
print(device_connection_string)
set_key(env_path, "device_connection_string", device_connection_string)
```
### Create Azure Container Registry
```
!az acr create -n $acr_name -g $resource_group --sku Standard --admin-enabled
!az acr login --name $acr_name
acr_password = !az acr credential show -n $acr_name --query passwords[0].value
acr_password = "".join(acr_password)
acr_password = acr_password.strip('\"')
set_key(env_path,"acr_password", acr_password)
```
In this notebook, we created relevant Azure resources. We also created a ".env" file to save and reuse the variables needed cross all the notebooks. We can now move on to the next notebook [02_IoTEdgeConfig.ipynb](02_IoTEdgeConfig.ipynb).
# Deutsch-Jozsa Algorithm
In this section, we first introduce the Deutsch-Jozsa problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run it on a simulator and device.
## Contents
1. [Introduction](#introduction)
1.1 [Deutsch-Jozsa Problem](#djproblem)
1.2 [The Classical Solution](#classical-solution)
1.3 [The Quantum Solution](#quantum-solution)
1.4 [Why Does This Work?](#why-does-this-work)
2. [Worked Example](#example)
3. [Creating Quantum Oracles](#creating-quantum-oracles)
4. [Qiskit Implementation](#implementation)
4.1 [Constant Oracle](#const_oracle)
4.2 [Balanced Oracle](#balanced_oracle)
4.3 [The Full Algorithm](#full_alg)
4.4 [Generalised Circuit](#general_circs)
5. [Running on Real Devices](#device)
6. [Problems](#problems)
7. [References](#references)
## 1. Introduction <a id='introduction'></a>
The Deutsch-Jozsa algorithm, first introduced in Reference [1], was the first example of a quantum algorithm that performs better than the best classical algorithm. It showed that there can be advantages to using a quantum computer as a computational tool for a specific problem.
### 1.1 Deutsch-Jozsa Problem <a id='djproblem'> </a>
We are given a hidden Boolean function $f$, which takes as input a string of bits, and returns either $0$ or $1$, that is:
$$
f(\{x_0,x_1,x_2,...\}) \rightarrow 0 \textrm{ or } 1 \textrm{ , where } x_n \textrm{ is } 0 \textrm{ or } 1$$
The property of the given Boolean function is that it is guaranteed to either be balanced or constant. A constant function returns all $0$'s or all $1$'s for any input, while a balanced function returns $0$'s for exactly half of all inputs and $1$'s for the other half. Our task is to determine whether the given function is balanced or constant.
Note that the Deutsch-Jozsa problem is an $n$-bit extension of the single bit Deutsch problem.
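As a concrete statement of the promise, here is a brute-force classical classifier that queries every input — exponentially many calls, which is precisely the cost the quantum algorithm avoids:

```python
from itertools import product

def classify(f, n):
    # Query f on all 2**n inputs; under the promise, one distinct
    # output means constant, two distinct outputs mean balanced.
    outputs = {f(bits) for bits in product([0, 1], repeat=n)}
    return 'constant' if len(outputs) == 1 else 'balanced'

print(classify(lambda bits: 0, 3))        # constant
print(classify(lambda bits: bits[0], 3))  # balanced
```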
### 1.2 The Classical Solution <a id='classical-solution'> </a>
Classically, in the best case, two queries to the oracle can determine if the hidden Boolean function, $f(x)$, is balanced:
e.g. if we get both $f(0,0,0,...)\rightarrow 0$ and $f(1,0,0,...) \rightarrow 1$, then we know the function is balanced as we have obtained the two different outputs.
In the worst case, if we continue to see the same output for each input we try, we will have to check exactly half of all possible inputs plus one in order to be certain that $f(x)$ is constant. Since the total number of possible inputs is $2^n$, this implies that we need $2^{n-1}+1$ trial inputs to be certain that $f(x)$ is constant in the worst case. For example, for a $4$-bit string, if we checked $8$ out of the $16$ possible combinations, getting all $0$'s, it is still possible that the $9^\textrm{th}$ input returns a $1$ and $f(x)$ is balanced. Probabilistically, this is a very unlikely event. In fact, if we get the same result continually in succession, we can express the probability that the function is constant as a function of $k$ inputs as:
$$ P_\textrm{constant}(k) = 1 - \frac{1}{2^{k-1}} \qquad \textrm{for } 1 < k \leq 2^{n-1}$$
Realistically, we could opt to truncate our classical algorithm early, say if we were over x% confident. But if we want to be 100% confident, we would need to check $2^{n-1}+1$ inputs.
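The confidence formula above is simple to tabulate:

```python
def p_constant(k):
    # Probability the function is constant after k identical outputs,
    # valid for 1 < k <= 2**(n-1).
    return 1 - 1 / 2 ** (k - 1)

for k in range(2, 6):
    print(k, p_constant(k))  # 0.5, 0.75, 0.875, 0.9375
```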
### 1.3 Quantum Solution <a id='quantum-solution'> </a>
Using a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$, provided we have the function $f$ implemented as a quantum oracle, which maps the state $\vert x\rangle \vert y\rangle $ to $ \vert x\rangle \vert y \oplus f(x)\rangle$, where $\oplus$ is addition modulo $2$. Below is the generic circuit for the Deutsch-Jozsa algorithm.

Now, let's go through the steps of the algorithm:
<ol>
<li>
Prepare two quantum registers. The first is an $n$-qubit register initialized to $|0\rangle$, and the second is a one-qubit register initialized to $|1\rangle$:
$$\vert \psi_0 \rangle = \vert0\rangle^{\otimes n} \vert 1\rangle$$
</li>
<li>
Apply a Hadamard gate to each qubit:
$$\vert \psi_1 \rangle = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle \left(|0\rangle - |1 \rangle \right)$$
</li>
<li>
Apply the quantum oracle, which maps $\vert x\rangle \vert y\rangle$ to $\vert x\rangle \vert y \oplus f(x)\rangle$:
$$
\begin{aligned}
\lvert \psi_2 \rangle
& = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle (\vert f(x)\rangle - \vert 1 \oplus f(x)\rangle) \\
& = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\rangle ( |0\rangle - |1\rangle )
\end{aligned}
$$
since for each $x,f(x)$ is either $0$ or $1$.
</li>
<li>
At this point the second single qubit register may be ignored. Apply a Hadamard gate to each qubit in the first register:
$$
\begin{aligned}
\lvert \psi_3 \rangle
& = \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)}
\left[ \sum_{y=0}^{2^n-1}(-1)^{x \cdot y}
\vert y \rangle \right] \\
& = \frac{1}{2^n}\sum_{y=0}^{2^n-1}
\left[ \sum_{x=0}^{2^n-1}(-1)^{f(x)}(-1)^{x \cdot y} \right]
\vert y \rangle
\end{aligned}
$$
where $x \cdot y = x_0y_0 \oplus x_1y_1 \oplus \ldots \oplus x_{n-1}y_{n-1}$ is the sum of the bitwise product.
</li>
<li>
Measure the first register. Notice that the probability of measuring $\vert 0 \rangle ^{\otimes n} = \lvert \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{f(x)} \rvert^2$, which evaluates to $1$ if $f(x)$ is constant and $0$ if $f(x)$ is balanced.
</li>
</ol>
### 1.4 Why Does This Work? <a id='why-does-this-work'> </a>
- **Constant Oracle**
When the oracle is *constant*, it has no effect (up to a global phase) on the input qubits, and the quantum states before and after querying the oracle are the same. Since the H-gate is its own inverse, in Step 4 we reverse Step 2 to obtain the initial quantum state of $|00\dots 0\rangle$ in the first register.
$$
H^{\otimes n}\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
=
\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
\quad \xrightarrow{\text{after } U_f} \quad
H^{\otimes n}\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
- **Balanced Oracle**
After step 2, our input register is an equal superposition of all the states in the computational basis. When the oracle is *balanced*, phase kickback adds a negative phase to exactly half these states:
$$
U_f \tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
=
\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} -1 \\ 1 \\ -1 \\ \vdots \\ 1 \end{bmatrix}
$$
The quantum state after querying the oracle is orthogonal to the quantum state before querying the oracle. Thus, in Step 4, when applying the H-gates, we must end up with a quantum state that is orthogonal to $|00\dots 0\rangle$. This means we should never measure the all-zero state.
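This orthogonality can be checked numerically. The sketch below simulates only the first register with NumPy, applying the oracle's phase kickback directly as sign flips (the balanced function chosen here is the parity of the input bits):

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)          # n-qubit Hadamard

state = Hn @ np.eye(2 ** n)[0]   # equal superposition from |00...0>

constant_phases = np.ones(2 ** n)
balanced_phases = np.array([(-1) ** bin(x).count('1') for x in range(2 ** n)])

for name, phases in [('constant', constant_phases), ('balanced', balanced_phases)]:
    final = Hn @ (phases * state)
    print(name, 'P(|000>) =', round(float(final[0] ** 2), 6))
```

For the constant oracle the all-zero state is measured with probability 1; for the balanced one, with probability 0, exactly as argued above.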
## 2. Worked Example <a id='example'></a>
Let's go through a specific example for a two bit balanced function:
<ol>
<li> The first register of two qubits is initialized to $|00\rangle$ and the second register qubit to $|1\rangle$
(Note that we are using subscripts 1, 2, and 3 to index the qubits. A subscript of "12" indicates the state of the register containing qubits 1 and 2)
$$\lvert \psi_0 \rangle = \lvert 0 0 \rangle_{12} \otimes \lvert 1 \rangle_{3} $$
</li>
<li> Apply Hadamard on all qubits
$$\lvert \psi_1 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle + \lvert 0 1 \rangle + \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)_{12} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} $$
</li>
<li> The oracle function can be implemented as $\text{Q}_f = CX_{13}CX_{23}$,
$$
\begin{align*}
\lvert \psi_2 \rangle = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_{12} \otimes \left( \lvert 0 \oplus 0 \oplus 0 \rangle - \lvert 1 \oplus 0 \oplus 0 \rangle \right)_{3} \\
+ \lvert 0 1 \rangle_{12} \otimes \left( \lvert 0 \oplus 0 \oplus 1 \rangle - \lvert 1 \oplus 0 \oplus 1 \rangle \right)_{3} \\
+ \lvert 1 0 \rangle_{12} \otimes \left( \lvert 0 \oplus 1 \oplus 0 \rangle - \lvert 1 \oplus 1 \oplus 0 \rangle \right)_{3} \\
+ \lvert 1 1 \rangle_{12} \otimes \left( \lvert 0 \oplus 1 \oplus 1 \rangle - \lvert 1 \oplus 1 \oplus 1 \rangle \right)_{3} \right]
\end{align*}
$$
</li>
<li>Simplifying this, we get the following:
$$
\begin{aligned}
\lvert \psi_2 \rangle & = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} - \lvert 0 1 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} - \lvert 1 0 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} + \lvert 1 1 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} \right] \\
& = \frac{1}{2} \left( \lvert 0 0 \rangle - \lvert 0 1 \rangle - \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)_{12} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} \\
& = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{1} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{2} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3}
\end{aligned}
$$
</li>
<li> Apply Hadamard on the first register
$$ \lvert \psi_3\rangle = \lvert 1 \rangle_{1} \otimes \lvert 1 \rangle_{2} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} $$
</li>
<li> Measuring the first two qubits will give the non-zero $11$, indicating a balanced function.
</li>
</ol>
You can try out similar examples using the widget below. Press the buttons to add H-gates and oracles, re-run the cell and/or set `case="constant"` to try out different oracles.
```
from qiskit_textbook.widgets import dj_widget
dj_widget(size="small", case="balanced")
```
## 3. Creating Quantum Oracles <a id='creating-quantum-oracles'> </a>
Let's see some different ways we can create a quantum oracle.
For a constant function, it is simple:
$\qquad$ 1. if f(x) = 0, then apply the $I$ gate to the qubit in register 2.
$\qquad$ 2. if f(x) = 1, then apply the $X$ gate to the qubit in register 2.
For a balanced function, there are many different circuits we can create. One of the ways we can guarantee our circuit is balanced is by performing a CNOT for each qubit in register 1, with the qubit in register 2 as the target. For example:

In the image above, the top three qubits form the input register, and the bottom qubit is the output register. We can see which input states give which output in the table below:
| Input states that output 0 | Input States that output 1 |
|:--------------------------:|:--------------------------:|
| 000 | 001 |
| 011 | 100 |
| 101 | 010 |
| 110 | 111 |
We can change the results while keeping them balanced by wrapping selected controls in X-gates. For example, see the circuit and its results table below:

| Input states that output 0 | Input states that output 1 |
|:--------------------------:|:--------------------------:|
| 001 | 000 |
| 010 | 011 |
| 100 | 101 |
| 111 | 110 |
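The CNOT construction can be verified exhaustively in plain Python — the output bit is just the XOR of the inputs, and exactly half of the $2^3$ inputs map to 0, matching the first table above:

```python
from itertools import product

def cnot_oracle(bits):
    # A CNOT from each input qubit onto the output qubit
    # computes the XOR (parity) of the inputs.
    out = 0
    for b in bits:
        out ^= b
    return out

table = {bits: cnot_oracle(bits) for bits in product([0, 1], repeat=3)}
zeros = sorted(b for b, v in table.items() if v == 0)
print(zeros)  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```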
## 4. Qiskit Implementation <a id='implementation'></a>
We now implement the Deutsch-Jozsa algorithm for the example of a three-bit function, with both constant and balanced oracles. First let's do our imports:
```
# initialization
import numpy as np
# importing Qiskit
from qiskit import IBMQ, Aer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, assemble, transpile
# import basic plot tools
from qiskit.visualization import plot_histogram
```
Next, we set the size of the input register for our oracle:
```
# set the length of the n-bit input string.
n = 3
```
### 4.1 Constant Oracle <a id='const_oracle'></a>
Let's start by creating a constant oracle. In this case the input has no effect on the output, so we just randomly set the output qubit to be 0 or 1:
```
# set the length of the n-bit input string.
n = 3
const_oracle = QuantumCircuit(n+1)
output = np.random.randint(2)
if output == 1:
    const_oracle.x(n)
const_oracle.draw()
```
### 4.2 Balanced Oracle <a id='balanced_oracle'></a>
```
balanced_oracle = QuantumCircuit(n+1)
```
Next, we create a balanced oracle. As we saw in section 3, we can create a balanced oracle by performing CNOTs with each input qubit as a control and the output bit as the target. We can vary the input states that give 0 or 1 by wrapping some of the controls in X-gates. Let's first choose a binary string of length `n` that dictates which controls to wrap:
```
b_str = "101"
```
Now we have this string, we can use it as a key to place our X-gates. For each qubit in our circuit, we place an X-gate if the corresponding digit in `b_str` is `1`, or do nothing if the digit is `0`.
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
balanced_oracle.draw()
```
Next, we do our controlled-NOT gates, using each input qubit as a control, and the output qubit as a target:
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
# Use barrier as divider
balanced_oracle.barrier()
# Controlled-NOT gates
for qubit in range(n):
    balanced_oracle.cx(qubit, n)
balanced_oracle.barrier()
balanced_oracle.draw()
```
Finally, we repeat the code from two cells up to finish wrapping the controls in X-gates:
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
# Use barrier as divider
balanced_oracle.barrier()
# Controlled-NOT gates
for qubit in range(n):
    balanced_oracle.cx(qubit, n)
balanced_oracle.barrier()
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
# Show oracle
balanced_oracle.draw()
```
We have just created a balanced oracle! All that's left to do is see if the Deutsch-Jozsa algorithm can solve it.
### 4.3 The Full Algorithm <a id='full_alg'></a>
Let's now put everything together. The first step in the algorithm is to initialize the input qubits in the state $|{+}\rangle$ and the output qubit in the state $|{-}\rangle$:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
    dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
dj_circuit.draw()
```
Next, let's apply the oracle. Here we apply the `balanced_oracle` we created above:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
# Add oracle
dj_circuit += balanced_oracle
dj_circuit.draw()
```
Finally, we perform H-gates on the $n$-input qubits, and measure our input register:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
# Add oracle
dj_circuit += balanced_oracle
# Repeat H-gates
for qubit in range(n):
dj_circuit.h(qubit)
dj_circuit.barrier()
# Measure
for i in range(n):
dj_circuit.measure(i, i)
# Display circuit
dj_circuit.draw()
```
Let's see the output:
```
# use local simulator
qasm_sim = Aer.get_backend('qasm_simulator')
shots = 1024
qobj = assemble(dj_circuit, qasm_sim)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
```
We can see from the results above that we have a 0% chance of measuring `000`. This correctly predicts the function is balanced.
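For contrast, it is worth recalling why this single-query answer is interesting: a deterministic classical algorithm may need up to $2^{n-1}+1$ oracle queries in the worst case, since after $2^{n-1}$ identical outputs it still cannot rule out a balanced function. A tiny sketch:

```python
def worst_case_classical_queries(n):
    # after 2**(n-1) identical outputs, one more query is needed to decide
    return 2 ** (n - 1) + 1

queries = {n: worst_case_classical_queries(n) for n in (3, 4, 10)}
# n=3 -> 5, n=4 -> 9, n=10 -> 513; the quantum algorithm needs just one query
```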
### 4.4 Generalised Circuits <a id='general_circs'></a>
Below, we provide a generalised function that creates Deutsch-Jozsa oracles and turns them into quantum gates. It takes `case` (either `'balanced'` or `'constant'`) and `n`, the size of the input register:
```
def dj_oracle(case, n):
# We need to make a QuantumCircuit object to return
# This circuit has n+1 qubits: the size of the input,
# plus one output qubit
oracle_qc = QuantumCircuit(n+1)
# First, let's deal with the case in which oracle is balanced
if case == "balanced":
# First generate a random number that tells us which CNOTs to
# wrap in X-gates:
b = np.random.randint(1,2**n)
# Next, format 'b' as a binary string of length 'n', padded with zeros:
b_str = format(b, '0'+str(n)+'b')
# Next, we place the first X-gates. Each digit in our binary string
# corresponds to a qubit, if the digit is 0, we do nothing, if it's 1
# we apply an X-gate to that qubit:
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
oracle_qc.x(qubit)
# Do the controlled-NOT gates for each qubit, using the output qubit
# as the target:
for qubit in range(n):
oracle_qc.cx(qubit, n)
# Next, place the final X-gates
for qubit in range(len(b_str)):
if b_str[qubit] == '1':
oracle_qc.x(qubit)
# Case in which oracle is constant
if case == "constant":
# First decide what the fixed output of the oracle will be
# (either always 0 or always 1)
output = np.random.randint(2)
if output == 1:
oracle_qc.x(n)
oracle_gate = oracle_qc.to_gate()
oracle_gate.name = "Oracle" # To show when we display the circuit
return oracle_gate
```
Let's also create a function that takes this oracle gate and performs the Deutsch-Jozsa algorithm on it:
```
def dj_algorithm(oracle, n):
dj_circuit = QuantumCircuit(n+1, n)
# Set up the output qubit:
dj_circuit.x(n)
dj_circuit.h(n)
# And set up the input register:
for qubit in range(n):
dj_circuit.h(qubit)
# Let's append the oracle gate to our circuit:
dj_circuit.append(oracle, range(n+1))
# Finally, perform the H-gates again and measure:
for qubit in range(n):
dj_circuit.h(qubit)
for i in range(n):
dj_circuit.measure(i, i)
return dj_circuit
```
Finally, let's use these functions to play around with the algorithm:
```
n = 4
oracle_gate = dj_oracle('balanced', n)
dj_circuit = dj_algorithm(oracle_gate, n)
dj_circuit.draw()
```
And see the results of running this circuit:
```
transpiled_dj_circuit = transpile(dj_circuit, qasm_sim)
qobj = assemble(transpiled_dj_circuit)
results = qasm_sim.run(qobj).result()
answer = results.get_counts()
plot_histogram(answer)
```
## 5. Experiment with Real Devices <a id='device'></a>
We can run the circuit on a real device as shown below. We first look for the least-busy device that can handle our circuit.
```
# Load our saved IBMQ accounts and get the least busy backend device with greater than or equal to (n+1) qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= (n+1) and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
shots = 1024
transpiled_dj_circuit = transpile(dj_circuit, backend, optimization_level=3)
qobj = assemble(transpiled_dj_circuit, backend)
job = backend.run(qobj)
job_monitor(job, interval=2)
# Get the results of the computation
results = job.result()
answer = results.get_counts()
plot_histogram(answer)
```
As we can see, the most likely result is `1111`. The other results are due to errors in the quantum computation.
## 6. Problems <a id='problems'></a>
1. Are you able to create a balanced or constant oracle of a different form?
2. The function `dj_problem_oracle` (below) returns a Deutsch-Jozsa oracle for `n = 4` in the form of a gate. The gate takes 5 qubits as input where the final qubit (`q_4`) is the output qubit (as with the example oracles above). You can get different oracles by giving `dj_problem_oracle` different integers between 1 and 5. Use the Deutsch-Jozsa algorithm to decide whether each oracle is balanced or constant (**Note:** It is highly recommended you try this example using the `qasm_simulator` instead of a real device).
```
from qiskit_textbook.problems import dj_problem_oracle
oracle = dj_problem_oracle(1)
```
## 7. References <a id='references'></a>
1. David Deutsch and Richard Jozsa (1992). "Rapid solutions of problems by quantum computation". Proceedings of the Royal Society of London A. 439: 553–558. [doi:10.1098/rspa.1992.0167](https://doi.org/10.1098%2Frspa.1992.0167).
2. R. Cleve; A. Ekert; C. Macchiavello; M. Mosca (1998). "Quantum algorithms revisited". Proceedings of the Royal Society of London A. 454: 339–354. [doi:10.1098/rspa.1998.0164](https://doi.org/10.1098%2Frspa.1998.0164).
```
import qiskit
qiskit.__qiskit_version__
```
# 💻IDS507 | Lab03
<font size=5><b>Regression Analysis</b></font>
<div align='right'>TA: Hoe Sung Ryu</div>
## Concepts | What We'll Learn Today
---
- Splitting my data into train/test datasets
- Building a `Logistic regression` model from my data
- Using the trained model to predict new data
- How well does my model perform?
- Computing the Confusion Matrix, ROC Curve & AUC
## 📌1. Mounting Google Drive
```
from google.colab import drive # mount Google Drive
drive.mount('/content/gdrive')
import os
os.chdir('/content/gdrive/My Drive/IDS507-00/2022_IDS507_Lab') # set the data path
!pwd
```
## 📌2. Regression Analysis
### 1) Regression
- Data values tend to "regress" toward an existing tendency such as the mean
- A technique that identifies the correlations among several variables and uses the values of other variables to explain/predict the value of a particular variable
- Independent variable, dependent variable
### 2) Types of Regression Analysis
- Distinguished by the number of variables and the form of the coefficients
- By the number of independent variables:
    - Simple: one independent variable
    - Multiple: several independent variables
- By the form of the regression coefficients:
    - Linear: the coefficients can be expressed as a linear combination
    - Nonlinear: the coefficients cannot be expressed as a linear combination
```
# sample data
# 1. train
X_train = [[1],[2],[3],[4],[5]] # even with only one feature, each value must be a list or array
y_train = [2.3, 3.99, 5.15, 7.89, 8.6]
# 2. test
X_test = [[6],[7]]
y_test = [10.1, 11.9]
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
reg = lr.fit(X_train,y_train)
```
### 3) Simple Linear Regression
- When there is one independent variable and one dependent variable, a regression method that captures their relationship **linearly**
- Expresses the **`relationship`** between the `independent variable X` and the `dependent variable Y` as a **`first-order equation of the form Y = aX + b`**
#### Regression coefficient → y = **`a`**x+b
- The degree of influence the independent variable has on the dependent variable, i.e. the slope of the line
#### Intercept → y = ax+**`b`**
- The constant value when the independent variable is 0
#### Residual → y = ax+b+**`Error`**
- The error between the actual values and the fitted regression equation
- The smaller the residuals, the better the regression equation explains the data
```
y_pred = reg.predict(X_test)
y_pred
# check the regression coefficients and intercept of the fitted model
# the coefficients are stored in the coef_ attribute, the intercept in intercept_
print("Coefficients : ", reg.coef_)
print("Intercept : ", reg.intercept_)
print(f'Linear equation: y = {reg.coef_[0]}X + {reg.intercept_:.4f}')
```
### 4) Evaluation Metrics with scikit-learn
- Evaluation metrics for regression analysis
|Metric|Meaning|Function|
|---|---|---|
|MAE|Mean Absolute Error: the mean of the absolute differences between actual and predicted values|mean_absolute_error in the metrics module|
|MSE|Mean Squared Error: the mean of the squared differences between actual and predicted values|mean_squared_error in the metrics module|
|RMSE|Root of MSE: the square root of the MSE|sqrt in the math or numpy module|
|$R^2$|Coefficient of determination: the ratio of the variance of the predictions to the variance of the actual values|r2_score in the metrics module, or LinearRegression's score|
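To make the table concrete, all four metrics can be computed by hand with NumPy on a tiny made-up example (the numbers below are illustrative, not from the lab data):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 4.0])
y_pred = np.array([2.5, 5.5, 4.0])

mae = np.mean(np.abs(y_true - y_pred))            # (0.5 + 0.5 + 0) / 3
mse = np.mean((y_true - y_pred) ** 2)             # (0.25 + 0.25 + 0) / 3
rmse = np.sqrt(mse)
ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
r2 = 1 - ss_res / ss_tot                          # 1 - 0.5/2 = 0.75
```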
```
# evaluate the results
from sklearn.metrics import (mean_squared_error,
r2_score,
mean_absolute_error,
)
print(mean_squared_error(y_test, y_pred))
print(r2_score(y_test, y_pred))
print(mean_absolute_error(y_test, y_pred))
print(mean_squared_error(y_test, y_pred)**(1/2))  # RMSE = sqrt(MSE)
# visualize the results
import matplotlib.pyplot as plt
x = range(1,8)
plt.title("Linear Regression")
plt.plot(X_train+X_test,y_train+y_test,'o',color = 'blue')
plt.plot(x,reg.coef_*x+reg.intercept_,'--',color='red')
plt.plot(X_test,y_pred,'x',color = 'black')
plt.show()
```
## 📌3. Logistic Regression on Real Data
### 1) What is Logistic Regression?
- A technique that applies the linear regression model to **`classification`**
- Estimates the probability that a data point belongs to a particular label (class)
    - What is the probability that this email is spam?
    - What is the probability of passing this exam?
- Unlike other linear regression models, the dependent variable is categorical rather than numerical
    - spam mail, normal mail
    - pass, fail
- If the estimated probability for a particular class is 50% or higher, the data point is classified as belonging to that class
- Basic logistic regression is binomial: the dependent variable takes only the two values 0 and 1
    - In this case, the dependent variable is the class itself
    - A value of 0 is called negative, 1 positive
- To perform a linear regression that produces correct results on such binary data, the function must satisfy:
    - It is a continuous, monotonically increasing function
    - Its outputs lie in the interval [0, 1]
- A function satisfying these properties is the sigmoid function
$$ y = \frac{1}{1+e^{-x}} $$
### 2) Performance Metrics for Classification
|Function|Description|
|---|---|
|**accuracy_score**|Computes the accuracy.|
|**confusion_matrix**|Produces the confusion matrix.|
|**precision_score**|Computes the precision.|
|**recall_score**|Computes the recall.|
|**f1_score**|Computes the F1 score.|
|**classification_report**|Shows precision, recall, and F1 score together.|
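All of these metrics are simple ratios of the four confusion-matrix counts. A hand computation on made-up counts (the numbers are illustrative only):

```python
# hypothetical confusion-matrix counts
tp, fp, fn, tn = 40, 10, 20, 130

accuracy = (tp + tn) / (tp + fp + fn + tn)            # 170/200
precision = tp / (tp + fp)                            # 40/50
recall = tp / (tp + fn)                               # 40/60
f1 = 2 * precision * recall / (precision + recall)    # harmonic mean of the two
```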
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
X = np.arange(-10,10,0.1)
y = 1 / (1+np.exp(-X))
plt.plot(X,y,label = 'Sigmoid')
plt.plot(X,[0.5 for _ in X],color='red',label = 'Threshold')
plt.legend()
plt.grid()
plt.show()
```
### 3) Loading the Diabetes Dataset
reference: https://www.kaggle.com/saurabh00007/diabetescsv
* Pregnancies: number of pregnancies
* Glucose: plasma glucose concentration from a glucose tolerance test
* BloodPressure: blood pressure (mm Hg)
* SkinThickness: triceps skinfold thickness (mm)
* Insulin: serum insulin (mu U/ml)
* BMI: body mass index (weight in kg / (height in m)^2)
* DiabetesPedigreeFunction: diabetes pedigree (family-history weight)
* Age: age
* Outcome: class label (0 or 1)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.metrics import f1_score, confusion_matrix, precision_recall_curve, roc_curve
from sklearn.preprocessing import StandardScaler,MinMaxScaler
from sklearn.linear_model import LogisticRegression
diabetes_data = pd.read_csv('./data/diabetes.csv') # load the data
diabetes_data.head(3)
print(diabetes_data['Outcome'].value_counts())
# distribution of the 'Glucose' feature
plt.hist(diabetes_data['Glucose'], bins=10)
```
### 4) Splitting Train / Test Sets with scikit-learn
Parameter description
- Parameters of `train_test_split(arrays, test_size, train_size, random_state, shuffle, stratify)`:
```
arrays : the data to split (Python list, NumPy array, Pandas dataframe, etc.)
test_size : fraction (float) or count (int) for the test set (default = 0.25)
train_size : fraction (float) or count (int) for the training set (default = the remainder after test_size)
random_state : seed for the shuffle performed during splitting (int or RandomState)
shuffle : whether to shuffle (default = True)
stratify : preserves the class ratio of the given data. For example, if the label set Y is a binary set of 25% 0s and 75% 1s,
setting stratify=Y keeps the 25% / 75% ratio in both of the resulting splits.
```
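What `stratify` guarantees can be illustrated with a toy split in plain Python (no scikit-learn): each class contributes the same fraction to train and test, so the class ratio is preserved on both sides.

```python
y = [0] * 25 + [1] * 75          # 25% class 0, 75% class 1
test_idx, train_idx = [], []
for cls in (0, 1):
    idx = [i for i, label in enumerate(y) if label == cls]
    cut = int(len(idx) * 0.2)    # 20% of each class goes to the test set
    test_idx += idx[:cut]
    train_idx += idx[cut:]

test_ratio = sum(y[i] for i in test_idx) / len(test_idx)     # fraction of 1s in test
train_ratio = sum(y[i] for i in train_idx) / len(train_idx)  # fraction of 1s in train
# both ratios equal the original 75%
```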
```
# extract the feature set X and the label set y.
X = diabetes_data.iloc[:, :-1]
y = diabetes_data.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 156, stratify=y)
# train, predict, and evaluate with logistic regression.
lr_clf = LogisticRegression(max_iter=1000,)
lr_clf.fit(X_train , y_train)
y_pred = lr_clf.predict(X_test)
y_pred
accuracy = accuracy_score(y_test , y_pred)
print("Accuracy : ",round(accuracy,3))
print(500/diabetes_data['Outcome'].value_counts().sum())
```
### 5) Confusion Matrix
```
# calculate the predicted probabilities and the AUC of the model
pred_proba = lr_clf.predict_proba(X_test)
roc_auc = roc_auc_score(y_test, pred_proba[:,1]) # calculate AUC of model
confusion = confusion_matrix( y_test, y_pred)
print('AUC score:', roc_auc)
print('Confusion matrix')
print(confusion)
import pandas as pd
import seaborn as sns
matrix = pd.DataFrame(confusion,
columns = ['Predicted 0','Predicted 1'],
index= ['Actual 0','Actual 1']
)
sns.heatmap(matrix, annot=True, cmap='Blues', fmt='d')
from sklearn.metrics import roc_curve
# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(y_test, pred_proba[:,1], pos_label=1)
# fpr2, tpr2, thresh2 = roc_curve(y_test, pred_prob2[:,1], pos_label=1)
#
# roc curve for tpr = fpr
# random_probs = [0 for i in range(len(y_test))]
# p_fpr, p_tpr, _ = roc_curve(y_test, random_probs, pos_label=1)
import matplotlib.pyplot as plt
# plt.style.use('seaborn')
# plot roc curves
plt.plot(fpr1, tpr1, linestyle='--',color='orange', label='Logistic Regression')
# plt.plot(fpr2, tpr2, linestyle='--',color='green', label='KNN')
# plt.plot(p_fpr, p_tpr, linestyle='--', color='blue')
plt.plot([0,1],[0,1],linestyle='--', color='blue')
# title
plt.title('ROC curve for Classification')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')
plt.legend(loc='best')
# plt.savefig('ROC',dpi=300)
plt.show();
```
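Each point on the ROC curve above is just a (FPR, TPR) pair at one probability threshold. A hand computation on made-up scores (illustrative values only):

```python
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # hypothetical predicted probabilities
labels = [1,   1,   0,   1,   0,   0]     # hypothetical true labels
threshold = 0.5

preds = [1 if s >= threshold else 0 for s in scores]
tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
tpr = tp / (tp + fn)   # true positive rate (recall)
fpr = fp / (fp + tn)   # false positive rate
```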
### 6) Measuring Performance while Varying the Threshold
```
thresholds = [0.3 , 0.33 ,0.36,0.39, 0.42 , 0.45 ,0.48, 0.50]
pred_proba = lr_clf.predict_proba(X_test)
pred_proba_c1 = pred_proba[:,1].reshape(-1,1)
from sklearn.preprocessing import Binarizer
for custom_threshold in thresholds:
binarizer = Binarizer(threshold=custom_threshold).fit(pred_proba_c1)
custom_predict = binarizer.transform(pred_proba_c1)
print('Threshold:',custom_threshold)
accuracy = accuracy_score(y_test , custom_predict)
print("Accuracy: ",round(accuracy,3))
print(" ")
```
### 7) Cross-Validation
In general, plain k-fold cross-validation is used for regression, while StratifiedKFold is used for classification.
This is because when the data are imbalanced, plain k-fold cross-validation can yield a poor performance estimate.
<img src='https://jinnyjinny.github.io/assets/post_img/deep%20learning/2020-04-02-Kfold/main3.jpg'>
<br>
<!-- <center>leave-one-out</center>
<center><img src='https://smlee729.github.io/img/2015-03-19-1-loocv/loocv1.png' width=70%></center> -->
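The difference between the two splitters comes down to how sample indices are assigned to folds. A minimal sketch of plain (unshuffled) k-fold index generation, for illustration:

```python
def kfold_indices(n_samples, n_splits):
    # distribute samples as evenly as possible across folds
    fold_sizes = [n_samples // n_splits] * n_splits
    for i in range(n_samples % n_splits):
        fold_sizes[i] += 1
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10, 5)
# a stratified splitter would instead build each fold class by class,
# so every fold keeps the overall class ratio
```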
```
# cross_validation
from sklearn.model_selection import KFold, StratifiedKFold, LeaveOneOut
kfold = KFold(n_splits=5)
sfold = StratifiedKFold()
# loo = LeaveOneOut()
from sklearn.model_selection import cross_val_score
lr_clf = LogisticRegression(max_iter=1000,)
kfold_score = cross_val_score(lr_clf, X, y, cv=kfold)
sfold_score = cross_val_score(lr_clf, X, y, cv=sfold)
# loo_score = cross_val_score(lr_clf, X, y, cv=loo)
print('KFold accuracy: {:.2f} %'.format(kfold_score.mean()*100))
print('StratifiedKFold accuracy: {:.2f} %'.format(sfold_score.mean()*100))
# print('LeaveOneOut accuracy: {:.2f} %'.format(loo_score.mean()*100))
```
# Example of optimizing Xgboost XGBClassifier function
# Goal is to test the objective values found by Mango
# Benchmarking Serial Evaluation: Iterations 60
```
from mango.tuner import Tuner
from scipy.stats import uniform
def get_param_dict():
param_dict = {"learning_rate": uniform(0, 1),
"gamma": uniform(0, 5),
"max_depth": range(1,10),
"n_estimators": range(1,300),
"booster":['gbtree','gblinear','dart']
}
return param_dict
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from sklearn.datasets import load_wine
X, Y = load_wine(return_X_y=True)
count_called = 1
def objfunc(args_list):
global X, Y, count_called
#print('count_called:',count_called)
count_called = count_called + 1
results = []
for hyper_par in args_list:
clf = XGBClassifier(**hyper_par)
result = cross_val_score(clf, X, Y, scoring='accuracy').mean()
results.append(result)
return results
def get_conf():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 5
conf['num_iteration'] = 60
conf['domain_size'] = 5000
return conf
def get_optimal_x():
param_dict = get_param_dict()
conf = get_conf()
tuner = Tuner(param_dict, objfunc,conf)
results = tuner.maximize()
return results
optimal_X = []
Results = []
num_of_tries = 100
for i in range(num_of_tries):
results = get_optimal_x()
Results.append(results)
# the search space has no 'x' parameter, so track the best objective value instead
optimal_X.append(results['best_objective'])
print(i,":",results['best_params'], results['best_objective'])
# import numpy as np
# optimal_X = np.array(optimal_X)
# plot_optimal_X=[]
# for i in range(optimal_X.shape[0]):
# plot_optimal_X.append(optimal_X[i]['x'])
```
# Plotting the serial run results
```
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(optimal_X, 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('Objective Value',fontsize=25)
plt.ylabel('Number of Occurrences',fontsize=25)
plt.title('Optimal Objective: Iterations 60',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
```
# Benchmarking test with different iterations for serial executions
```
from mango.tuner import Tuner
def get_param_dict():
param_dict = {
'x': range(-5000, 5000)
}
return param_dict
def objfunc(args_list):
results = []
for hyper_par in args_list:
x = hyper_par['x']
result = -(x**2)
results.append(result)
return results
def get_conf_20():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 5
conf['num_iteration'] = 20
conf['domain_size'] = 5000
return conf
def get_conf_30():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 5
conf['num_iteration'] = 30
conf['domain_size'] = 5000
return conf
def get_conf_40():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 5
conf['num_iteration'] = 40
conf['domain_size'] = 5000
return conf
def get_conf_60():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 5
conf['num_iteration'] = 60
conf['domain_size'] = 5000
return conf
def get_optimal_x():
param_dict = get_param_dict()
conf_20 = get_conf_20()
tuner_20 = Tuner(param_dict, objfunc,conf_20)
conf_30 = get_conf_30()
tuner_30 = Tuner(param_dict, objfunc,conf_30)
conf_40 = get_conf_40()
tuner_40 = Tuner(param_dict, objfunc,conf_40)
conf_60 = get_conf_60()
tuner_60 = Tuner(param_dict, objfunc,conf_60)
results_20 = tuner_20.maximize()
results_30 = tuner_30.maximize()
results_40 = tuner_40.maximize()
results_60 = tuner_60.maximize()
return results_20, results_30, results_40 , results_60
Store_Optimal_X = []
Store_Results = []
num_of_tries = 100
for i in range(num_of_tries):
results_20, results_30, results_40 , results_60 = get_optimal_x()
Store_Results.append([results_20, results_30, results_40 , results_60])
Store_Optimal_X.append([results_20['best_params']['x'],results_30['best_params']['x'],results_40['best_params']['x'],results_60['best_params']['x']])
print(i,":",[results_20['best_params']['x'],results_30['best_params']['x'],results_40['best_params']['x'],results_60['best_params']['x']])
import numpy as np
Store_Optimal_X=np.array(Store_Optimal_X)
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,0], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurrences',fontsize=25)
plt.title('Optimal Objective: Iterations 20',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,1], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurrences',fontsize=25)
plt.title('Optimal Objective: Iterations 30',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,2], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurrences',fontsize=25)
plt.title('Optimal Objective: Iterations 40',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,3], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurrences',fontsize=25)
plt.title('Optimal Objective: Iterations 60',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
```
<img src='./img/intel-logo.jpg' width=50%, Fig1>
# OpenCV Basics
<font size=5><b>01. Image and Video I/O</b></font>
<div align='right'>Minsuk Sung</div>
<div align='right'>Hoesung Ryu</div>
<img src='./img/OpenCV_Logo_with_text.png' width=20%, Fig2>
---
## Reading Images
`cv2.imread(file, flag)`
Various flag options let you load the image in different forms.
1. file : path to the image file
2. flag
- cv2.IMREAD_ANYCOLOR: reads the file in any possible color format.
- cv2.IMREAD_COLOR: reads the image in color; transparent parts are ignored. This is the default.
- cv2.IMREAD_GRAYSCALE: reads the image in grayscale; often used as an intermediate step in image processing.
- cv2.IMREAD_UNCHANGED: reads the image including its alpha channel.
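Behind grayscale loading, a color pixel is reduced to one intensity via a weighted sum of its channels (OpenCV uses the ITU-R BT.601 weights). A NumPy sketch of the conversion for a single BGR pixel:

```python
import numpy as np

bgr = np.array([100.0, 150.0, 200.0])      # one pixel in OpenCV's B, G, R order
b, g, r = bgr
gray = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights
# gray is 159.25 for this pixel
```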
```
import cv2
# load the original image (color)
image = cv2.imread("./img/toy.jpg")
# load as grayscale
img_gray = cv2.imread("./img/toy.jpg", cv2.IMREAD_GRAYSCALE)
```
## Visualizing Images with Matplotlib
When working in `jupyter notebook`, visualizing images with Matplotlib is recommended.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.title("image")
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) # OpenCV loads BGR; convert to RGB for Matplotlib
plt.xticks([]) # remove x-axis ticks
plt.yticks([]) # remove y-axis ticks
plt.show()
plt.title("image_gray")
plt.imshow(img_gray,cmap='gray')
plt.xticks([]) # remove x-axis ticks
plt.yticks([]) # remove y-axis ticks
plt.show()
```
## Saving Images
```
cv2.imwrite('./data/toy_image.jpg', image)
cv2.imwrite('./data/toy_gray_image.jpg', img_gray)
```
---
## Reading Video from a Webcam
**On macOS Catalina there is a known issue where the window does not close, so running this cell there is not recommended.**
- `cv2.VideoCapture()`: creates a capture object. One index exists per webcam you own, starting from 0. For example, with a single webcam use `cv2.VideoCapture(0)`.
- `ret, frame = cap.read()`: reads the video one frame at a time. `ret` is True if the frame was read successfully and False otherwise; the frame itself is returned in `frame`.
- `cv2.cvtColor()`: converts the frame to grayscale.
- `cap.release()`: releases the opened capture object.
```
import cv2
OPTION = 'color' # 'gray' for grayscale
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
if ret:
if OPTION == 'gray':
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # convert the captured frame to grayscale
cv2.imshow('frame_gray', gray) # show the grayscale frame
if cv2.waitKey(1) == ord('q'): # press q to stop
break
elif OPTION == 'color':
cv2.imshow('frame_color', frame) # show the color frame
if cv2.waitKey(1) == ord('q'):
break
else:
print('error')
cap.release()
cv2.destroyAllWindows()
```
## Saving Video
To save video, you must create a `cv2.VideoWriter` object.
```
cv2.VideoWriter(outputFile, fourcc, frame, size)
Object for saving video
Parameters:
outputFile (str) – name of the output file
fourcc – codec information; use cv2.VideoWriter_fourcc()
frame (float) – frames per second to record
size (list) – frame size to record (e.g. 640, 480)
```
- `cv2.VideoWriter(outputFile, fourcc, frame, size)` : fourcc is the codec information, frame is the frames per second, and size is the frame size to record.
- It is typically used like `cv2.VideoWriter_fourcc('D','I','V','X')`; available codecs include DIVX, XVID, MJPG, X264, WMV1, and WMV2.
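A FourCC code is nothing more than four ASCII characters packed into a 32-bit integer; here is a sketch of the equivalent computation (an illustration of the encoding, not OpenCV's actual source):

```python
def fourcc(c1, c2, c3, c4):
    # pack four ASCII characters into one little-endian 32-bit integer
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

divx = fourcc('D', 'I', 'V', 'X')   # 0x58564944
```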
```
import cv2
cap = cv2.VideoCapture(0)
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
out = cv2.VideoWriter('./data/output.avi',
fourcc,
25.0,
(640, 480))
while (cap.isOpened()):
ret, frame = cap.read()
if ret:
out.write(frame)
cv2.imshow('frame', frame)
if cv2.waitKey(1) & 0xFF == ord('q'): # waitKey(1) keeps the recording loop running; waitKey(0) would block on every frame
break
else:
break
cap.release()
out.release()
cv2.destroyAllWindows()
```
```
import logging
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import neurolib.optimize.exploration.explorationUtils as eu
import neurolib.utils.pypetUtils as pu
from neurolib.optimize.exploration import BoxSearch
logger = logging.getLogger()
warnings.filterwarnings("ignore")
logger.setLevel(logging.INFO)
results_path = "/Users/valery/Google_Drive/NI-Project/data/hdf/"
from neurolib.models.aln import ALNModel
from neurolib.utils.parameterSpace import ParameterSpace
model = ALNModel()
# define the parameter space to explore
parameters = ParameterSpace({"mue_ext_mean": np.linspace(0, 3, 21), # input to E
"mui_ext_mean": np.linspace(0, 3, 21)}) # input to I
# define exploration
search = BoxSearch(model, parameters)
pu.getTrajectorynamesInFile(results_path + "scz_sleep_reduce_abs_resolution-8.hdf")
search.loadResults(
filename= results_path + "scz_sleep_reduce_abs_resolution-8.hdf",
trajectoryName="results-2021-06-25-18H-59M-03S")
df = search.dfResults.copy()
search2 = BoxSearch(model, parameters)
pu.getTrajectorynamesInFile(results_path + "scz_sleep_Jei_resolution-50.hdf")
search2.loadResults(
filename=results_path + "scz_sleep_Jei_resolution-50.hdf",
trajectoryName="results-2021-06-26-00H-40M-29S")
df2 = search2.dfResults.copy()
search3 = BoxSearch(model, parameters)
pu.getTrajectorynamesInFile(results_path + "scz_sleep_resolution-50.hdf")
search3.loadResults(
filename=results_path + "scz_sleep_resolution-50.hdf",
trajectoryName="results-2021-06-25-08H-34M-46S")
df3 = search3.dfResults.copy()
search4 = BoxSearch(model, parameters)
pu.getTrajectorynamesInFile(results_path + "scz_sleep_Jii_resolution-50.hdf")
search4.loadResults(
filename=results_path + "scz_sleep_Jii_resolution-50.hdf",
trajectoryName="results-2021-06-26-04H-08M-21S")
df4 = search4.dfResults.copy()
images = "/Users/valery/Downloads/results/"
df3.loc[:, 'Global_SWS_per_min'] = df3.loc[:, 'n_global_waves']*3
eu.plotExplorationResults(
df, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['Jie_max', 'Synaptic current from E to I [nA]'],
by=["Ke_gl"], plot_key='SWS_per_min', plot_clim=[0, 25],
nan_to_zero=False, plot_key_label="SWS/min", one_figure=True, savename=images + "scz_sleep1.png")
eu.plotExplorationResults(
df, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['Jie_max', 'Synaptic current from E to I [nA]'],
by=["Ke_gl"], plot_key='perc_local_waves', plot_clim=[0, 100],
nan_to_zero=False, plot_key_label="Fraction of the local waves %", one_figure=True, savename=images + "scz_sleep1_1.png")
eu.plotExplorationResults(
df, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['Jei_max', 'Synaptic current from I to E [nA]'],
by=["Ke_gl"], plot_key='SWS_per_min', plot_clim=[0, 25],
nan_to_zero=False, plot_key_label="SWS/min", one_figure=True, savename=images + "scz_sleep2.png")
eu.plotExplorationResults(
df, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['Jii_max', 'Synaptic current from I to I [nA]'],
by=["Ke_gl"], plot_key='SWS_per_min', plot_clim=[0, 25],
nan_to_zero=False, plot_key_label="SWS/min", one_figure=True, savename=images + "scz_slee3.png")
df.columns
df.describe()
df_2 = df.loc[df['Ke_gl'] == 200.0,
['mue_ext_mean', 'Ke_gl','Jie_max', 'Jei_max', 'Jii_max', 'SWS_per_min',
'perc_local_waves', 'max_output', 'normalized_up_lengths_mean', 'n_global_waves'
]].round(decimals=2)
df_2['interactions'] = False
dfdf = pd.DataFrame()
for n, (jie, jei, jii) in enumerate(zip(df_2['Jie_max'].unique(), df_2['Jei_max'].unique(), df_2['Jii_max'].unique())):
mask = (df_2['Jie_max'] == jie) & (df_2['Jei_max'] == jei) & (df_2['Jii_max'] == jii)
df_2.loc[mask, 'interactions'] = True
df_2.loc[mask, 'J'] = 8 - n
dfdf.loc[8-n, ['Jie_max', 'Jei_max', 'Jii_max']] = jie, jei, jii
df_2_interaction = df_2.loc[df_2['interactions'], :]
df_2_interaction.loc[:, 'global_SWS_per_min'] = df_2_interaction.loc[:, 'n_global_waves'] *3
dfdf
eu.plotExplorationResults(
df_2_interaction, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['J', 'Decrease all J simultaneously'],
by=["Ke_gl"], plot_key='SWS_per_min', plot_clim=[0, 40],
nan_to_zero=False, plot_key_label="SWS/min", one_figure=True, savename=images + "scz_sleep4.png")
eu.plotExplorationResults(
df_2_interaction, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['J', 'Decrease all J simultaneously'],
by=["Ke_gl"], plot_key='perc_local_waves', plot_clim=[0, 100],
nan_to_zero=False, plot_key_label="Fraction of the local waves %", one_figure=True, savename=images + "scz_sleep5.png")
eu.plotExplorationResults(
df_2_interaction, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['J', 'Decrease all J simultaneously'],
by=["Ke_gl"], plot_key='normalized_up_lengths_mean', plot_clim=[0, 100],
nan_to_zero=False, plot_key_label="Time spent in Up state %", one_figure=True, savename=images + "scz_sleep6.png")
palette = sns.color_palette("hls", 8)
sns.relplot( # .relplot(
data=df_2_interaction[(df_2_interaction["Ke_gl"] == 200.)],
x="mue_ext_mean", y="SWS_per_min",
hue='J', # col='Jie_max', # size="choice", size_order=["T1", "T2"],
kind="line", # palette=palette,
# order=3,
height=5, aspect=1., legend=False, palette=palette
# facet_kws=dict(sharex=False),
)
plt.xlim([3.32,4.5])
plt.ylim([0, 45])
# plt.tight_layout()
# plt.title('All SW / min')
plt.gcf().subplots_adjust(bottom=0.15)
plt.savefig(images + "scz_sleep13.png", dpi=100)
palette = sns.color_palette("hls", 8)
sns.relplot(
data=df_2_interaction[(df_2_interaction["Ke_gl"] == 200.)],
x="mue_ext_mean", y="global_SWS_per_min",
hue='J', # col='Jie_max', # size="choice", size_order=["T1", "T2"],
kind="line", # palette=palette,
height=5, aspect=1., legend="full",
palette=palette
# facet_kws=dict(sharex=False),
)
plt.xlim([3.32,4.5])
plt.ylim([0, 45])
# plt.tight_layout()
plt.gcf().subplots_adjust(bottom=0.15)
# plt.title('Global SW / min')
plt.savefig(images + "scz_sleep14.png", dpi=100)
df3.columns
eu.plotExplorationResults(
df3, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['Jie_max', 'Synaptic current from E to I [nA]'],
by=["Ke_gl"], plot_key='SWS_per_min', plot_clim=[0, 40], # plot_clim=[0.0, 100.0],
contour=['perc_local_waves', 'normalized_up_lengths_mean'],
contour_color=[['white'], ['red']], contour_levels=[[70], [65]], contour_alpha=[1.0, 1.0],
contour_kwargs={0: {"linewidths": (2,)}, 1: {"linewidths": (2,)}},
nan_to_zero=False, plot_key_label="SWS/min", one_figure=True, savename=images + "scz_sleep9.png")
eu.plotExplorationResults(
df3, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['Jie_max', 'Synaptic current from E to I [nA]'],
by=["Ke_gl"], plot_key='frontal_SWS_per_min', plot_clim=[0, 40], # plot_clim=[0.0, 100.0],
contour=['frontal_perc_local_waves', 'frontal_normalized_up_lengths_mean'],
contour_color=[['white'], ['red']], contour_levels=[[70], [65]], contour_alpha=[1.0, 1.0],
contour_kwargs={0: {"linewidths": (2,)}, 1: {"linewidths": (2,)}},
nan_to_zero=False, plot_key_label="Frontal SWS/min", one_figure=True, savename=images + "scz_sleep9_1.png")
sns.lmplot( # .relplot(
data=df3[(df3["Ke_gl"] == 200.)&((df3['Jie_max'] < 1.4) | (df3['Jie_max'] == 2.6))].round(3),
x="mue_ext_mean", y="SWS_per_min",
hue='Jie_max', # col='Jie_max', # size="choice", size_order=["T1", "T2"],
# kind="line", # palette=palette,
order=5,
height=5, aspect=1., legend=False,
# facet_kws=dict(sharex=False),
)
plt.xlim([3.32,4.5])
plt.ylim([0, 45])
# plt.tight_layout()
# plt.title('All SW / min')
plt.gcf().subplots_adjust(bottom=0.15)
plt.savefig(images + "scz_sleep11.png", dpi=100)
sns.lmplot( # .relplot(
data=df3[(df3["Ke_gl"] == 200.)&((df3['Jie_max'] < 1.4) | (df3['Jie_max'] == 2.6))].round(3),
x="mue_ext_mean", y="Global_SWS_per_min",
hue='Jie_max', # col='Jie_max', # size="choice", size_order=["T1", "T2"],
# kind="line", # palette=palette,
order=5,
height=5, aspect=1., # legend="full"
# facet_kws=dict(sharex=False),
)
plt.xlim([3.32,4.5])
plt.ylim([0, 45])
# plt.tight_layout()
plt.gcf().subplots_adjust(bottom=0.15)
# plt.title('Global SW / min')
plt.savefig(images + "scz_sleep12.png", dpi=100)
```
# Operator Upgrade Tests
## Setup Seldon Core
Follow the instructions to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core).
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
import json
import time
```
## Install Stable Version
```
!kubectl create namespace seldon-system
!helm upgrade seldon seldon-core-operator --repo https://storage.googleapis.com/seldon-charts --namespace seldon-system --set istio.enabled=true --wait
```
## Launch a Range of Models
```
%%writefile resources/model.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model
spec:
name: test-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:1.9.1
name: classifier
graph:
name: classifier
type: MODEL
endpoint:
type: REST
name: example
replicas: 1
!kubectl create -f resources/model.yaml
%%writefile ../servers/tfserving/samples/halfplustwo_rest.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: hpt
spec:
name: hpt
protocol: tensorflow
transport: rest
predictors:
- graph:
name: halfplustwo
implementation: TENSORFLOW_SERVER
modelUri: gs://seldon-models/tfserving/half_plus_two
parameters:
- name: model_name
type: STRING
value: halfplustwo
name: default
replicas: 1
!kubectl create -f ../servers/tfserving/samples/halfplustwo_rest.yaml
%%writefile ../examples/models/payload_logging/model_logger.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: model-logs
spec:
name: model-logs
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier_rest:1.3
name: classifier
imagePullPolicy: Always
graph:
name: classifier
type: MODEL
endpoint:
type: REST
logger:
url: http://logger.seldon/
mode: all
name: logging
replicas: 1
!kubectl create -f ../examples/models/payload_logging/model_logger.yaml
```
Wait for all models to be available
```
def waitStatus(desired):
for i in range(360):
allAvailable = True
failedGet = False
state = !kubectl get sdep -o json
state = json.loads("".join(state))
for model in state["items"]:
if "status" in model:
print("model", model["metadata"]["name"], model["status"]["state"])
if model["status"]["state"] != "Available":
allAvailable = False
break
else:
failedGet = True
if allAvailable == desired and not failedGet:
break
time.sleep(1)
return allAvailable
actual = waitStatus(True)
assert actual == True
```
## Count the number of resources
```
def getOwned(raw):
count = 0
for res in raw["items"]:
if (
"ownerReferences" in res["metadata"]
and res["metadata"]["ownerReferences"][0]["kind"] == "SeldonDeployment"
):
count += 1
return count
def getResourceStats():
# Get number of deployments
dps = !kubectl get deployment -o json
dps = json.loads("".join(dps))
numDps = getOwned(dps)
print("Number of deployments owned", numDps)
# Get number of services
svcs = !kubectl get svc -o json
svcs = json.loads("".join(svcs))
numSvcs = getOwned(svcs)
print("Number of services owned", numSvcs)
# Get number of virtual services
vss = !kubectl get vs -o json
vss = json.loads("".join(vss))
numVs = getOwned(vss)
print("Number of virtual services owned", numVs)
# Get number of hpas
hpas = !kubectl get hpa -o json
hpas = json.loads("".join(hpas))
numHpas = getOwned(hpas)
print("Number of hpas owned", numHpas)
return (numDps, numSvcs, numVs, numHpas)
(dp1, svc1, vs1, hpa1) = getResourceStats()
```
## Upgrade to latest
```
!helm upgrade seldon ../helm-charts/seldon-core-operator --namespace seldon-system --set istio.enabled=true --wait
actual = waitStatus(False)
assert actual == False
actual = waitStatus(True)
assert actual == True
# Give time for resources to terminate
for i in range(120):
(dp2, svc2, vs2, hpa2) = getResourceStats()
if dp1 == dp2 and svc1 == svc2 and vs1 == vs2 and hpa1 == hpa2:
break
time.sleep(1)
assert dp1 == dp2
assert svc1 == svc2
assert vs1 == vs2
assert hpa1 == hpa2
!kubectl delete sdep --all
```
```
# default_exp scrape8K
```
# scrape8K
> Scrape item summaries from 8-K SEC filings.
```
#hide
%load_ext autoreload
%autoreload 2
from nbdev import show_doc
#export
import collections
import itertools
import os
import re
from secscan import utils, dailyList, basicInfo, infoScraper
default8KDir = os.path.join(utils.stockDataRoot,'scraped8K')
```
8-K scraper class - scrape the item summaries from the SEC filing:
```
#export
itemPat = re.compile(r'item\s*(\d+(?:\.\d*)?)',re.IGNORECASE)
explanPat = re.compile(r'explanatory\s*note',re.IGNORECASE)
def parse8K(accNo, formType=None, textLimit=basicInfo.defaultTextLimit) :
info = basicInfo.getSecFormInfo(accNo, formType=formType, get99=True, textLimit=textLimit)
links = info['links']
if len(links) == 0 :
utils.printErrInfoOrAccessNo('NO LINKS LIST in',accNo)
return info
if formType is None :
formType = links[0][2]
items = info.get('items',[])
if len(items) == 0 :
return info
mainText = utils.downloadSecUrl(links[0][3], toFormat='souptext')
if formType.lower() == '8-k/a' :
m = explanPat.search(mainText)
if m is not None :
info['explanatoryNote'] = mainText[m.start():m.start()+textLimit]
itemPosL = [0]
info['itemTexts'] = itemTexts = [None for item in items]
for i,item in enumerate(items) :
m = itemPat.match(item)
if m is None :
utils.printErrInfoOrAccessNo(f"unexpected format for item header {item}",accNo)
continue
m = re.search(r'item\s*' + r'\s*'.join(m.group(1)).replace('.',r'\.'),
mainText[itemPosL[-1]:], re.IGNORECASE)
if m is None :
utils.printErrInfoOrAccessNo(f"couldn't find {item}",accNo)
continue
itemPosL.append(itemPosL[-1]+m.start())
itemTexts[i] = ''
# print('pos for',item,itemPosL[-1])
itemPosL.append(len(mainText))
j = 1
for i in range(len(itemTexts)) :
if itemTexts[i] is None :
itemTexts[i] = items[i] + ' ???'
else :
itemTexts[i] = mainText[itemPosL[j] : min(itemPosL[j]+textLimit, itemPosL[j+1])]
j += 1
return info
class scraper8K(infoScraper.scraperBase) :
@utils.delegates(infoScraper.scraperBase.__init__)
def __init__(self, infoDir=default8KDir, **kwargs) :
super().__init__(infoDir, '8-K', **kwargs)
def scrapeInfo(self, accNo, formType=None) :
return parse8K(accNo, formType), None
```
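As a quick standalone illustration (using an invented header string, not real filing text) of how the `itemPat` regex above captures item numbers:

```python
import re

# Same pattern as in parse8K above: matches "Item"/"ITEM" followed by a
# number such as 2.02, capturing the number in group 1.
itemPat = re.compile(r'item\s*(\d+(?:\.\d*)?)', re.IGNORECASE)

m = itemPat.match("ITEM 2.02: RESULTS OF OPERATIONS AND FINANCIAL CONDITION")
print(m.group(1))  # → 2.02
```

The optional `(?:\.\d*)?` group is what lets the same pattern accept both whole-number items ("Item 8") and decimal ones ("Item 2.02").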
Test 8-K scraper class:
```
dl = dailyList.dailyList(startD='empty')
dl.updateForDays('20210701','20210704')
assert len(dl.getFilingsList(None,'8-K')[0])==600,"testing 8-K scraper class (daily list count)"
info = parse8K('0001165002-21-000068', formType='8-K', textLimit=1000)
assert (info['itemTexts'][0].startswith('ITEM 2.02: RESULTS OF OPERATIONS AND FINANCIAL CONDITION '
+'On July 27, 2021, Westwood')
and info['itemTexts'][0].endswith('otherwise expressly stated in such filing. ')
and info['itemTexts'][1].startswith('ITEM 7.01: REGULATION FD DISCLOSURE Westwood')
and info['itemTexts'][1].endswith('of record on August 6, 2021. ')
and info['itemTexts'][2].startswith('ITEM 9.01: FINANCIAL STATEMENTS AND EXHIBITS (d) ')
and info['itemTexts'][2].endswith('Financial Officer and Treasurer')
and info['text99'][1].startswith('EX-99.1 2 a2q21earningsrelease.htm EX-99.1 '
+'Document Westwood Holdings Group, Inc. Reports')
and info['text99'][1].endswith('High Income achieved a top decile ranking, Income Opportunity and Total Retur')
),"testing 8-K scraper class (parsing)"
info = parse8K('0001606757-21-000040', formType='8-K/A', textLimit=1000)
assert (info['explanatoryNote'].startswith('Explanatory Note This Amendment No. 1')
and info['explanatoryNote'].endswith('Ms. Croom accepted a written offer ')
),"testing 8-K scraper class (parsing explanatory note)"
#hide
# uncomment and run to regenerate all library Python files
# from nbdev.export import notebook2script; notebook2script()
```
Comparison for decision boundary generated on iris dataset between Label Propagation and SVM.
This demonstrates Label Propagation learning a good boundary even with a small amount of labeled data.
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
### Version
```
import sklearn
sklearn.__version__
```
### Imports
```
print(__doc__)
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn import svm
from sklearn.semi_supervised import label_propagation
```
### Calculations
```
rng = np.random.RandomState(0)
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
# step size in the mesh
h = .02
y_30 = np.copy(y)
y_30[rng.rand(len(y)) < 0.3] = -1
y_50 = np.copy(y)
y_50[rng.rand(len(y)) < 0.5] = -1
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
ls30 = (label_propagation.LabelSpreading().fit(X, y_30),
y_30)
ls50 = (label_propagation.LabelSpreading().fit(X, y_50),
y_50)
ls100 = (label_propagation.LabelSpreading().fit(X, y), y)
rbf_svc = (svm.SVC(kernel='rbf').fit(X, y), y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
x_ = np.arange(x_min, x_max, h)
y_ = np.arange(y_min, y_max, h)
xx, yy = np.meshgrid(x_, y_)
# title for the plots
titles = ['Label Spreading 30% data',
'Label Spreading 50% data',
'Label Spreading 100% data',
'SVC with rbf kernel']
```
### Plot Results
```
fig = tools.make_subplots(rows=2, cols=2,
subplot_titles=tuple(titles),
print_grid=False)
def matplotlib_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))  # list() so C is indexable in Python 3
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
cmap = matplotlib_to_plotly(plt.cm.Paired, 6)
for i, (clf, y_train) in enumerate((ls30, ls50, ls100, rbf_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
trace1 = go.Heatmap(x=x_, y=y_, z=Z,
colorscale=cmap,
showscale=False)
fig.append_trace(trace1, i//2+1, i%2+1)  # integer division for the subplot row index
# Plot also the training points
trace2 = go.Scatter(x=X[:, 0], y=X[:, 1],
mode='markers',
showlegend=False,
marker=dict(color=X[:, 0],
colorscale=cmap,
line=dict(width=1, color='black'))
)
fig.append_trace(trace2, i//2+1, i%2+1)
for i in map(str,range(1, 5)):
y = 'yaxis' + i
x = 'xaxis' + i
fig['layout'][y].update(showticklabels=False, ticks='')
fig['layout'][x].update(showticklabels=False, ticks='')
fig['layout'].update(height=700)
py.iplot(fig)
```
### License
Authors:
Clay Woolam <clay@woolam.org>
License:
BSD
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Decision Boundary of Label Propagation versus SVM on the Iris dataset.ipynb', 'scikit-learn/plot-label-propagation-versus-svm-iris/', 'Decision Boundary of Label Propagation versus SVM on the Iris dataset | plotly',
' ',
title = 'Decision Boundary of Label Propagation versus SVM on the Iris dataset | plotly',
name = 'Decision Boundary of Label Propagation versus SVM on the Iris dataset',
has_thumbnail='true', thumbnail='thumbnail/svm.jpg',
language='scikit-learn', page_type='example_index',
display_as='semi_supervised', order=3,
ipynb= '~Diksha_Gabha/3520')
```
# Pouch cell model
In this notebook we compare the solutions of two reduced-order models of a lithium-ion pouch cell with the full solution obtained using COMSOL. This example is based on the results in [[6]](#References). The code used to produce the results in [[6]](#References) can be found [here](https://github.com/rtimms/asymptotic-pouch-cell).
The full model is based on the Doyle-Fuller-Newman model [[2]](#References) and, in the interest of simplicity, considers a one-dimensional current collector (i.e. variation in one of the current collector dimensions is ignored), resulting in a 2D macroscopic model.
The first of the reduced order models, which is applicable in the limit of large conductivity in the current collectors, solves a one-dimensional problem in the current collectors coupled to a one-dimensional DFN model describing the through-cell electrochemistry at each point. We refer to this as a 1+1D model, though since the DFN is already a pseudo-two-dimensional model, perhaps it is more properly a 1+1+1D model.
The second reduced order model, which is applicable in the limit of very large conductivity in the current collectors, solves a single (averaged) one-dimensional DFN model for the through-cell behaviour and an uncoupled problem for the distribution of potential in the current collectors (from which the resistance and heat source can be calculated). We refer to this model as the DFNCC, where the "CC" indicates the additional (uncoupled) current collector problem.
All of the model equations, and derivations of the reduced-order models, can be found in [[6]](#References).
## Solving the reduced-order pouch cell models in PyBaMM
We begin by importing PyBaMM along with the other packages required in this notebook
```
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import sys
import pickle
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate as interp
```
We then need to load up the appropriate models. For the DFNCC we require a 1D model of the current collectors and an average 1D DFN model for the through-cell electrochemistry. The 1+1D pouch cell model is built directly into PyBaMM and is accessed by passing the model option "dimensionality", which can be 1 or 2, corresponding to 1D or 2D current collectors. This option can be passed to any existing electrochemical model (e.g. [SPM](./SPM.ipynb), [SPMe](./SPMe.ipynb), [DFN](./DFN.ipynb)). Here we choose the DFN model.
For both electrochemical models we choose an "x-lumped" thermal model, meaning we assume that the temperature is uniform in the through-cell direction $x$, but account for the variation in temperature in the transverse direction $z$.
```
cc_model = pybamm.current_collector.EffectiveResistance({"dimensionality": 1})
dfn_av = pybamm.lithium_ion.DFN({"thermal": "x-lumped"}, name="Average DFN")
dfn = pybamm.lithium_ion.DFN(
{"current collector": "potential pair", "dimensionality": 1, "thermal": "x-lumped"},
name="1+1D DFN",
)
```
We then add the models to a dictionary for easy access later
```
models = {"Current collector": cc_model, "Average DFN": dfn_av, "1+1D DFN": dfn}
```
Next we update the parameters to match those used in the COMSOL simulation. In particular, we set the current to correspond to a 3C discharge and assume uniform Newton cooling on all boundaries.
```
param = dfn.default_parameter_values
I_1C = param["Nominal cell capacity [A.h]"]  # the 1C current in A is the cell capacity in A.h divided by 1 hour
param.update(
{
"Current function [A]": I_1C * 3,
"Negative electrode diffusivity [m2.s-1]": 3.9 * 10 ** (-14),
"Positive electrode diffusivity [m2.s-1]": 10 ** (-13),
"Negative current collector surface heat transfer coefficient [W.m-2.K-1]": 10,
"Positive current collector surface heat transfer coefficient [W.m-2.K-1]": 10,
"Negative tab heat transfer coefficient [W.m-2.K-1]": 10,
"Positive tab heat transfer coefficient [W.m-2.K-1]": 10,
"Edge heat transfer coefficient [W.m-2.K-1]": 10,
}
)
```
In this example we choose to discretise in space using 16 nodes per domain.
```
npts = 16
var_pts = {
"x_n": npts,
"x_s": npts,
"x_p": npts,
"r_n": npts,
"r_p": npts,
"z": npts,
}
```
Before solving the models we load the COMSOL data so that we can request the output at the times in the COMSOL solution
```
comsol_results_path = pybamm.get_parameters_filepath(
"input/comsol_results/comsol_1plus1D_3C.pickle"
)
comsol_variables = pickle.load(open(comsol_results_path, "rb"))
```
Next we loop over the models, creating and solving a simulation for each.
```
simulations = {}
solutions = {} # store solutions in a separate dict for easy access later
for name, model in models.items():
sim = pybamm.Simulation(model, parameter_values=param, var_pts=var_pts)
simulations[name] = sim # store simulation for later
if name == "Current collector":
# model is independent of time, so just solve arbitrarily at t=0 using
# the default algebraic solver
t_eval = np.array([0])
solutions[name] = sim.solve(t_eval=t_eval)
else:
# solve at COMSOL times using Casadi solver in "fast" mode
t_eval = comsol_variables["time"]
solutions[name] = sim.solve(solver=pybamm.CasadiSolver(mode="fast"), t_eval=t_eval)
```
## Creating the COMSOL model
In this section we show how to create a PyBaMM "model" from the COMSOL solution. If you are just interested in seeing the comparison then skip ahead to the section "Comparing the full and reduced-order models".
To create a PyBaMM model from the COMSOL data we must create a `pybamm.Function` object for each variable. We do this by interpolating in space to match the PyBaMM mesh and then creating a function to interpolate in time. The following cell defines the function that handles the creation of the `pybamm.Function` object.
```
# set up times
tau = param.evaluate(dfn.param.tau_discharge)
comsol_t = comsol_variables["time"]
pybamm_t = comsol_t / tau
# set up space
mesh = simulations["1+1D DFN"].mesh
L_z = param.evaluate(dfn.param.L_z)
pybamm_z = mesh["current collector"].nodes
z_interp = pybamm_z * L_z
def get_interp_fun_curr_coll(variable_name):
"""
Create a :class:`pybamm.Function` object using the variable (interpolate in space
to match nodes, and then create function to interpolate in time)
"""
comsol_z = comsol_variables[variable_name + "_z"]
variable = comsol_variables[variable_name]
variable = interp.interp1d(comsol_z, variable, axis=0, kind="linear")(z_interp)
# Make sure to use dimensional time
fun = pybamm.Interpolant(
comsol_t,
variable.T,
pybamm.t * tau,
name=variable_name + "_comsol"
)
fun.domain = "current collector"
fun.mesh = mesh.combine_submeshes("current collector")
fun.secondary_mesh = None
return fun
```
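As an aside, here is a toy version (with made-up arrays rather than COMSOL data) of the spatial resampling step that `get_interp_fun_curr_coll` performs: each time column of the variable is interpolated onto the finer mesh along `axis=0`.

```python
import numpy as np
import scipy.interpolate as interp

# Toy stand-ins: 5 "COMSOL" mesh points, 9 "PyBaMM" mesh points, 2 time points
comsol_z = np.linspace(0, 1, 5)
z_interp = np.linspace(0, 1, 9)
variable = np.outer(comsol_z, [1.0, 2.0])  # shape (5, 2): space x time

# axis=0 interpolates along the spatial dimension, one column per time point
resampled = interp.interp1d(comsol_z, variable, axis=0, kind="linear")(z_interp)
print(resampled.shape)  # → (9, 2)
```

Because the toy data is linear in `z`, the linear interpolant reproduces it exactly on the finer mesh; for real COMSOL output the resampling is of course only approximate.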
We then pass the variables of interest to the interpolating function
```
comsol_voltage = pybamm.Interpolant(
comsol_t,
comsol_variables["voltage"],
pybamm.t * tau,
name="voltage_comsol",
)
comsol_voltage.mesh = None
comsol_voltage.secondary_mesh = None
comsol_phi_s_cn = get_interp_fun_curr_coll("phi_s_cn")
comsol_phi_s_cp = get_interp_fun_curr_coll("phi_s_cp")
comsol_current = get_interp_fun_curr_coll("current")
comsol_temperature = get_interp_fun_curr_coll("temperature")
```
and add them to a `pybamm.BaseModel` object
```
comsol_model = pybamm.BaseModel()
comsol_model.variables = {
"Terminal voltage [V]": comsol_voltage,
"Negative current collector potential [V]": comsol_phi_s_cn,
"Positive current collector potential [V]": comsol_phi_s_cp,
"Current collector current density [A.m-2]": comsol_current,
"X-averaged cell temperature [K]": comsol_temperature,
# Add spatial variables to match pybamm model
"z": simulations["1+1D DFN"].built_model.variables["z"],
"z [m]": simulations["1+1D DFN"].built_model.variables["z [m]"],
}
```
We then add the solution object from the 1+1D model. This is just so that PyBaMM uses the same (dimensionless) times behind the scenes when dealing with the COMSOL model and the reduced-order models: the variables in `comsol_model.variables` are functions of time only that return the (interpolated in space) COMSOL solution. We also need to update the time and length scales for the COMSOL model so that any dimensionless variables are scaled correctly.
```
comsol_model.timescale = simulations["1+1D DFN"].model.timescale
comsol_model.length_scales = simulations["1+1D DFN"].model.length_scales
comsol_solution = pybamm.Solution(solutions["1+1D DFN"].t, solutions["1+1D DFN"].y, comsol_model, {})
```
## Comparing the full and reduced-order models
The DFNCC requires some post-processing to extract the solution variables. In particular, we need to pass the current and voltage from the average DFN model to the current collector model in order to compute the distribution of the potential in the current collectors and to account for the effect of the current collector resistance in the terminal voltage.
This process is automated by the method `post_process` which accepts the current collector solution object, the parameters and the voltage and current from the average DFN model. The results are stored in the dictionary `dfncc_vars`
```
V_av = solutions["Average DFN"]["Terminal voltage"]
I_av = solutions["Average DFN"]["Total current density"]
dfncc_vars = cc_model.post_process(
solutions["Current collector"], param, V_av, I_av
)
```
Next we create a function to produce some custom plots. For a given variable the plots will show: (a) the COMSOL results as a function of position in the current collector $z$ and time $t$; (b) a comparison of the full and reduced-order models at a sequence of times; (c) the time-averaged error between the full and reduced-order models as a function of space; and (d) the space-averaged error between the full and reduced-order models as a function of time.
```
def plot(
t_plot,
z_plot,
t_slices,
var_name,
units,
comsol_var_fun,
dfn_var_fun,
dfncc_var_fun,
param,
cmap="viridis",
):
fig, ax = plt.subplots(2, 2, figsize=(13, 7))
fig.subplots_adjust(
left=0.15, bottom=0.1, right=0.95, top=0.95, wspace=0.4, hspace=0.8
)
# plot comsol var
comsol_var = comsol_var_fun(t=t_plot, z=z_plot)
comsol_var_plot = ax[0, 0].pcolormesh(
z_plot * 1e3, t_plot, np.transpose(comsol_var), shading="gouraud", cmap=cmap
)
if "cn" in var_name:
format = "%.0e"
elif "cp" in var_name:
format = "%.0e"
else:
format = None
fig.colorbar(
comsol_var_plot,
ax=ax,
format=format,
location="top",
shrink=0.42,
aspect=20,
anchor=(0.0, 0.0),
)
# plot slices
ccmap = plt.get_cmap("inferno")
for ind, t in enumerate(t_slices):
color = ccmap(float(ind) / len(t_slices))
comsol_var_slice = comsol_var_fun(t=t, z=z_plot)
dfn_var_slice = dfn_var_fun(t=t, z=z_plot)
dfncc_var_slice = dfncc_var_fun(t=np.array([t]), z=z_plot)
ax[0, 1].plot(
z_plot * 1e3, comsol_var_slice, "o", fillstyle="none", color=color
)
ax[0, 1].plot(
z_plot * 1e3,
dfn_var_slice,
"-",
color=color,
label="{:.0f} s".format(t_slices[ind]),
)
ax[0, 1].plot(z_plot * 1e3, dfncc_var_slice, ":", color=color)
# add dummy points for legend of styles
comsol_p, = ax[0, 1].plot(np.nan, np.nan, "ko", fillstyle="none")
pybamm_p, = ax[0, 1].plot(np.nan, np.nan, "k-", fillstyle="none")
dfncc_p, = ax[0, 1].plot(np.nan, np.nan, "k:", fillstyle="none")
# compute errors
dfn_var = dfn_var_fun(t=t_plot, z=z_plot)
dfncc_var = dfncc_var_fun(t=t_plot, z=z_plot)
error = np.abs(comsol_var - dfn_var)
error_bar = np.abs(comsol_var - dfncc_var)
# plot time averaged error
ax[1, 0].plot(z_plot * 1e3, np.mean(error, axis=1), "k-", label=r"$1+1$D")
ax[1, 0].plot(z_plot * 1e3, np.mean(error_bar, axis=1), "k:", label="DFNCC")
# plot z averaged error
ax[1, 1].plot(t_plot, np.mean(error, axis=0), "k-", label=r"$1+1$D")
ax[1, 1].plot(t_plot, np.mean(error_bar, axis=0), "k:", label="DFNCC")
# set ticks
ax[0, 0].tick_params(which="both")
ax[0, 1].tick_params(which="both")
ax[1, 0].tick_params(which="both")
if var_name in ["$\mathcal{I}^*$"]:
ax[1, 0].set_yscale("log")
ax[1, 0].set_yticks([1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1])  # call set_yticks rather than assigning to it
else:
ax[1, 0].ticklabel_format(style="sci", scilimits=(-2, 2), axis="y")
ax[1, 1].tick_params(which="both")
if var_name in ["$\phi^*_{\mathrm{s,cn}}$", "$\phi^*_{\mathrm{s,cp}} - V^*$"]:
ax[1, 0].ticklabel_format(style="sci", scilimits=(-2, 2), axis="y")
else:
ax[1, 1].set_yscale("log")
ax[1, 1].set_yticks([1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1])  # call set_yticks rather than assigning to it
# set labels
ax[0, 0].set_xlabel(r"$z^*$ [mm]")
ax[0, 0].set_ylabel(r"$t^*$ [s]")
ax[0, 0].set_title(r"{} {}".format(var_name, units), y=1.5)
ax[0, 1].set_xlabel(r"$z^*$ [mm]")
ax[0, 1].set_ylabel(r"{}".format(var_name))
ax[1, 0].set_xlabel(r"$z^*$ [mm]")
ax[1, 0].set_ylabel("Time-averaged" + "\n" + r"absolute error {}".format(units))
ax[1, 1].set_xlabel(r"$t^*$ [s]")
ax[1, 1].set_ylabel("Space-averaged" + "\n" + r"absolute error {}".format(units))
ax[0, 0].text(-0.1, 1.6, "(a)", transform=ax[0, 0].transAxes)
ax[0, 1].text(-0.1, 1.6, "(b)", transform=ax[0, 1].transAxes)
ax[1, 0].text(-0.1, 1.2, "(c)", transform=ax[1, 0].transAxes)
ax[1, 1].text(-0.1, 1.2, "(d)", transform=ax[1, 1].transAxes)
leg1 = ax[0, 1].legend(
bbox_to_anchor=(0, 1.1, 1.0, 0.102),
loc="lower left",
borderaxespad=0.0,
ncol=3,
mode="expand",
)
leg2 = ax[0, 1].legend(
[comsol_p, pybamm_p, dfncc_p],
["COMSOL", r"$1+1$D", "DFNCC"],
bbox_to_anchor=(0, 1.5, 1.0, 0.102),
loc="lower left",
borderaxespad=0.0,
ncol=3,
mode="expand",
)
ax[0, 1].add_artist(leg1)
ax[1, 0].legend(
bbox_to_anchor=(0.0, 1.1, 1.0, 0.102),
loc="lower right",
borderaxespad=0.0,
ncol=3,
)
ax[1, 1].legend(
bbox_to_anchor=(0.0, 1.1, 1.0, 0.102),
loc="lower right",
borderaxespad=0.0,
ncol=3,
)
```
We then set up the times and points in space to use in the plots
```
t_plot = comsol_t
z_plot = z_interp
t_slices = np.array([600, 1200, 1800, 2400, 3000]) / 3
```
and plot the negative current collector potential
```
var = "Negative current collector potential [V]"
comsol_var_fun = comsol_solution[var]
dfn_var_fun = solutions["1+1D DFN"][var]
dfncc_var_fun = dfncc_vars[var]
plot(
t_plot,
z_plot,
t_slices,
"$\phi^*_{\mathrm{s,cn}}$",
"[V]",
comsol_var_fun,
dfn_var_fun,
dfncc_var_fun,
param,
cmap="cividis",
)
```
the positive current collector potential with respect to the terminal voltage
```
var = "Positive current collector potential [V]"
comsol_var = comsol_solution[var]
V_comsol = comsol_solution["Terminal voltage [V]"]
def comsol_var_fun(t, z):
return comsol_var(t=t, z=z) - V_comsol(t=t)
dfn_var = solutions["1+1D DFN"][var]
V = solutions["1+1D DFN"]["Terminal voltage [V]"]
def dfn_var_fun(t, z):
return dfn_var(t=t, z=z) - V(t=t)
dfncc_var = dfncc_vars[var]
V_dfncc = dfncc_vars["Terminal voltage [V]"]
def dfncc_var_fun(t, z):
return dfncc_var(t=t, z=z) - V_dfncc(t)
plot(
t_plot,
z_plot,
t_slices,
"$\phi^*_{\mathrm{s,cp}} - V^*$",
"[V]",
comsol_var_fun,
dfn_var_fun,
dfncc_var_fun,
param,
cmap="viridis",
)
```
the through-cell current
```
var = "Current collector current density [A.m-2]"
comsol_var_fun = comsol_solution[var]
dfn_var_fun = solutions["1+1D DFN"][var]
I_av = solutions["Average DFN"][var]
def dfncc_var_fun(t, z):
"In the DFNCC the current is just the average current"
return np.transpose(np.repeat(I_av(t)[:, np.newaxis], len(z), axis=1))
plot(
t_plot,
z_plot,
t_slices,
"$\mathcal{I}^*$",
"[A/m${}^2$]",
comsol_var_fun,
dfn_var_fun,
dfncc_var_fun,
param,
cmap="plasma",
)
```
and the temperature with respect to the reference temperature
```
T_ref = param.evaluate(dfn.param.T_ref)
var = "X-averaged cell temperature [K]"
comsol_var = comsol_solution[var]
def comsol_var_fun(t, z):
return comsol_var(t=t, z=z) - T_ref
dfn_var = solutions["1+1D DFN"][var]
def dfn_var_fun(t, z):
return dfn_var(t=t, z=z) - T_ref
T_av = solutions["Average DFN"][var]
def dfncc_var_fun(t, z):
"In the DFNCC the temperature is just the average temperature"
return np.transpose(np.repeat(T_av(t)[:, np.newaxis], len(z), axis=1)) - T_ref
plot(
t_plot,
z_plot,
t_slices,
"$\\bar{T}^* - \\bar{T}_0^*$",
"[K]",
comsol_var_fun,
dfn_var_fun,
dfncc_var_fun,
param,
cmap="inferno",
)
```
We see that the electrical conductivity of the current collectors is sufficiently
high that the potentials remain fairly uniform in space, and both the 1+1D DFN and DFNCC models are able to accurately capture the potential distribution in the current collectors.
In the plot of the current we see that positioning both tabs at the top of the cell means that for most of the simulation the current preferentially travels through the upper part of the cell. Eventually, as the cell continues to discharge, this part becomes more (de)lithiated until the resultant local increase in through-cell resistance is sufficient for it to become preferential for the current to travel further along the current collectors and through the lower part of the cell. This behaviour is well captured by the 1+1D model. In the DFNCC formulation the through-cell current density is assumed uniform,
so the greatest error is found at the ends of the current collectors where the current density deviates most from its average.
For the parameters used in this example we find that the temperature exhibits a relatively weak variation along the length of the current collectors.
## References
The relevant papers for this notebook are:
```
pybamm.print_citations()
```
<a href="https://colab.research.google.com/github/satyajitghana/TSAI-DeepVision-EVA4.0/blob/master/05_CodingDrill/EVA4S5F1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Import Libraries
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
```
## Data Transformations
We start by defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images it might not otherwise see.
Here is the list of all the transformations which come pre-built with PyTorch
1. Compose
2. ToTensor
3. ToPILImage
4. Normalize
5. Resize
6. Scale
7. CenterCrop
8. Pad
9. Lambda
10. RandomApply
11. RandomChoice
12. RandomOrder
13. RandomCrop
14. RandomHorizontalFlip
15. RandomVerticalFlip
16. RandomResizedCrop
17. RandomSizedCrop
18. FiveCrop
19. TenCrop
20. LinearTransformation
21. ColorJitter
22. RandomRotation
23. RandomAffine
24. Grayscale
25. RandomGrayscale
26. RandomPerspective
27. RandomErasing
You can read more about them [here](https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html)
```
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
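`Normalize` maps each value x to (x - mean)/std, so with the statistics above a black pixel (0.0) lands near -0.42 and a white pixel (1.0) near 2.82. A small sketch of the arithmetic:

```python
# Sketch of what transforms.Normalize((0.1307,), (0.3081,)) does per pixel.
def normalize(x, mean=0.1307, std=0.3081):
    """Channel-wise normalization as applied by transforms.Normalize."""
    return (x - mean) / std

print(round(normalize(0.0), 2))  # -0.42
print(round(normalize(1.0), 2))  # 2.82
```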
# Dataset and Creating Train/Test Split
```
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
```
# Dataloader Arguments & Test/Train Dataloaders
```
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
    torch.cuda.manual_seed(SEED)
# dataloader arguments - in practice you would fetch these from the command line
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
```
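With `batch_size=128` and no `drop_last`, the train loader yields ceil(60000/128) = 469 batches per epoch. A sketch of the batch-count arithmetic (`num_batches` is our own helper, not a PyTorch API):

```python
import math

def num_batches(n_samples, batch_size, drop_last=False):
    """Batches per epoch for a DataLoader-style iterator."""
    return n_samples // batch_size if drop_last else math.ceil(n_samples / batch_size)

print(num_batches(60000, 128))  # 469 train batches
print(num_batches(10000, 128))  # 79 test batches
```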
# Data Statistics
It is important to know your data very well. Let's check some of the statistics around our data and see what it actually looks like.
```
# We'd need to convert it into Numpy! Remember above we have converted it into tensors already
train_data = train.train_data
train_data = train.transform(train_data.numpy())
print('[Train]')
print(' - Numpy Shape:', train.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', train.train_data.size())
print(' - min:', torch.min(train_data))
print(' - max:', torch.max(train_data))
print(' - mean:', torch.mean(train_data))
print(' - std:', torch.std(train_data))
print(' - var:', torch.var(train_data))
dataiter = iter(train_loader)
images, labels = next(dataiter)
print(images.shape)
print(labels.shape)
# Let's visualize some of the images
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(images[0].numpy().squeeze(), cmap='gray_r')
```
## MORE
It is important that we view as many images as possible, as this helps build intuition for the image augmentation choices later on.
```
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
    plt.subplot(6, 10, index)
    plt.axis('off')
    plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
```
# How did we get those mean and std values which we used above?
Let's run a small experiment
```
# simple transform
simple_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
# transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
exp = datasets.MNIST('./data', train=True, download=True, transform=simple_transforms)
exp_data = exp.train_data
exp_data = exp.transform(exp_data.numpy())
print('[Train]')
print(' - Numpy Shape:', exp.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', exp.train_data.size())
print(' - min:', torch.min(exp_data))
print(' - max:', torch.max(exp_data))
print(' - mean:', torch.mean(exp_data))
print(' - std:', torch.std(exp_data))
print(' - var:', torch.var(exp_data))
```
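The 0.1307 and 0.3081 used earlier are just the mean and standard deviation of the MNIST pixels after `ToTensor`'s division by 255. The same arithmetic can be sketched on stand-in data (synthetic values below; run it on the real tensor for the actual numbers):

```python
# Synthetic stand-in for the flattened uint8 pixel values of a dataset.
pixels = [0, 51, 102, 153, 204, 255]
scaled = [p / 255 for p in pixels]                          # what ToTensor does
mean = sum(scaled) / len(scaled)
var = sum((x - mean) ** 2 for x in scaled) / len(scaled)    # population variance
std = var ** 0.5                                            # note: torch.std defaults to the unbiased (n-1) estimate
print(round(mean, 3), round(std, 3))
```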
# The model
Let's start with the model we first saw
```
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)    # input: 1x28x28 -> output: 32x28x28, RF: 3x3
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv4 = nn.Conv2d(128, 256, 3, padding=1)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.conv5 = nn.Conv2d(256, 512, 3)
        self.conv6 = nn.Conv2d(512, 1024, 3)
        self.conv7 = nn.Conv2d(1024, 10, 3)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv2(F.relu(self.conv1(x)))))
        x = self.pool2(F.relu(self.conv4(F.relu(self.conv3(x)))))
        x = F.relu(self.conv6(F.relu(self.conv5(x))))
        # x = F.relu(self.conv7(x))
        x = self.conv7(x)
        x = x.view(-1, 10)
        return F.log_softmax(x, dim=-1)
```
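The model ends with `log_softmax`, and the training functions later pair it with `F.nll_loss`; chained together they compute the cross-entropy loss. A pure-Python sketch of that pairing:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax (subtract the max before exponentiating)."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def nll_loss(log_probs, target):
    """Negative log-likelihood of the target class."""
    return -log_probs[target]

lp = log_softmax([2.0, 1.0, 0.1])   # toy logits for a 3-class problem
loss = nll_loss(lp, 0)              # cross-entropy for target class 0
```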
# Model Params
We can't emphasize enough how important viewing the model summary is.
Unfortunately, there is no in-built model visualizer, so we have to take external help
```
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
```
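The total reported by `torchsummary` can be cross-checked by hand: a `Conv2d` layer with bias has c_out * (c_in * k^2 + 1) parameters. A pure-Python sketch over the seven conv layers above:

```python
def conv2d_params(c_in, c_out, k=3):
    """Learnable parameters in a Conv2d layer with bias."""
    return c_out * (c_in * k * k + 1)

layers = [(1, 32), (32, 64), (64, 128), (128, 256),
          (256, 512), (512, 1024), (1024, 10)]
total = sum(conv2d_params(ci, co) for ci, co in layers)
print(total)  # 6379786, i.e. the ~6.38M parameters mentioned below
```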
# Training and Testing
All right, so we have 6.3M params, and that's too many, we know that. But the purpose of this notebook is to set things right for our future experiments.
Looking at logs can be boring, so we'll introduce **tqdm** progressbar to get cooler logs.
Let's write train and test functions
```
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    pbar = tqdm(train_loader)
    correct = 0
    processed = 0
    for batch_idx, (data, target) in enumerate(pbar):
        # get samples
        data, target = data.to(device), target.to(device)
        # Init
        optimizer.zero_grad()
        # In PyTorch, we need to set the gradients to zero before starting backpropagation because PyTorch accumulates the gradients on subsequent backward passes.
        # Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
        # Predict
        y_pred = model(data)
        # Calculate loss
        loss = F.nll_loss(y_pred, target)
        train_losses.append(loss.item())  # store a float rather than the graph-attached tensor
        # Backpropagation
        loss.backward()
        optimizer.step()
        # Update pbar-tqdm
        pred = y_pred.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
        correct += pred.eq(target.view_as(pred)).sum().item()
        processed += len(data)
        pbar.set_description(desc=f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
        train_acc.append(100*correct/processed)

def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
    test_acc.append(100. * correct / len(test_loader.dataset))
```
# Let's Train and test our model
```
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 20
for epoch in range(EPOCHS):
    print("EPOCH:", epoch)
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import os
plt.rcParams['figure.figsize'] = [10.0, 5.0]
plt.rcParams['figure.dpi'] = 220
def rand_signal_generator(length):  # renamed from `len` to avoid shadowing the built-in
    times = np.arange(0, length)
    signal = np.sin(times) + np.random.normal(scale=0.1, size=times.size)
    return signal
def generate_block(input_data, seed_vector, m, n):
    meas_mat = np.zeros((m, n), dtype=np.float32)
    for idx, seed in enumerate(seed_vector):
        seed_int = np.asarray(seed, dtype=np.float32).view(np.uint32)
        meas_mat[idx] = np.random.RandomState(seed_int).binomial(1, .5, n) * 2 - 1
    meas_mat /= np.sqrt(m)
    out_data = meas_mat.dot(input_data)
    return out_data, meas_mat
dataset_path = "./datasets/extrasensory/"
sample_names = os.listdir(dataset_path)[:]
m = 8
y = np.arange(0, m, dtype=np.float32)
cs_blockchain = np.zeros((len(sample_names), m))
sample_list = []
for idx, sample_name in enumerate(sample_names):
    sample = np.loadtxt(dataset_path + sample_name)[:, 3]
    sample_list.append(sample)
    n = sample.size
    y, _ = generate_block(sample, y, m, n)
    cs_blockchain[idx] = y
sample = sample_list[0]
plt.plot(sample, "k", linewidth=.7)
plt.xlim([0, len(sample)])
plt.xlabel("ticks")
plt.ylabel("magnitude")
plt.show()
cs_blockchain_frauded = np.zeros_like(cs_blockchain)
fraud_idx = 100
y = np.arange(0, m, dtype=np.float32)
for idx, sample in enumerate(sample_list):
    n = sample.size
    y, _ = generate_block(sample, y, m, n)
    if idx == fraud_idx:
        y += 1e-1
    cs_blockchain_frauded[idx] = y
y_idx = 0
plt.plot(cs_blockchain_frauded.mean(axis=1), "r", linewidth=.7, label="malicious sub-chain")
plt.plot(cs_blockchain.mean(axis=1), "g", linewidth=.7, label="true chain")
plt.xlim([0, cs_blockchain.shape[0]-1])
plt.xlabel("block #")
plt.ylabel(r"mean value of $y$")
plt.legend()
plt.show()
injects = list(np.power(10., np.arange(-38, 39)))
l2s = []
l2_vals = []
for idx, inject in enumerate(injects):
    y = np.arange(0, m, dtype=np.float32) + inject
    cs_blockchain_frauded = np.zeros_like(cs_blockchain)
    for jdx, sample in enumerate(sample_list):
        n = sample.size
        y, _ = generate_block(sample, y, m, n)
        cs_blockchain_frauded[jdx] = y
    l2_val = np.linalg.norm(cs_blockchain - cs_blockchain_frauded, ord=2, axis=1)
    l2_vals.append(l2_val)
    l2s.append(l2_val.mean())
plt.plot(injects, l2s, "k", linewidth=.7)
plt.xscale("log")
plt.xticks(injects[0::4])
plt.xlim([injects[0], injects[-1]])
plt.ylim([0, 25000])
plt.xlabel("injection probe magnitude")
plt.ylabel(r"$\ell_2$ distance averaged over all blocks")
plt.grid()
plt.show()
plt.plot(l2_vals[30], "k", linewidth=.7)
plt.xlabel("block #")
plt.ylabel(r"$\ell_2$ distance")
plt.xlim([0, cs_blockchain.shape[0]-1])
plt.grid()
plt.show()
print(np.argwhere(l2_vals[30]))
```
| github_jupyter |
# Gene Regulatory Networks
Resources:
1. [week5_feedback_systems.ipynb](https://pages.hmc.edu/pandey/reading/week5_feedback_systems.ipynb): This notebook introduces the analysis of feedback systems using Python and describes the role of feedback in system design using simulations of mathematical models.
1. [week6_system_analysis.ipynb](https://pages.hmc.edu/pandey/reading/week6_system_analysis.ipynb): This notebook uses analytical and computational tools to discuss functions and utilities of different gene regulatory networks.
1. Python tutorials online: You are free to use any tutorials from the Internet on Numpy, Scipy, or any other Python package that you may use to solve the homework problems.
1. Submit your homework to GradeScope by downloading the jupyter notebook as a PDF. Go to Files -> Download as -> PDF. If that does not work, you can go to File -> Print Preview -> Ctrl + P (to print) -> Save as PDF.
Due date: 1st March on GradeScope.
# Problem 1: Cascade Gene Regulation
(Adapted from [Alon] Problem 1.4) Cascades: Consider a cascade of three activators, $X$ → $Y$ → $Z$. Protein $X$ is initially present in the cell in its inactive form. The input signal of $X$, $u_X$, appears at time $t = 0$. As a result, $X$ rapidly becomes active and binds the promoter of gene $Y$, so that protein $Y$ starts to be produced at rate $\beta$. When $Y$ levels exceed a threshold $K_y$, gene $Z$ begins to be transcribed. All proteins have the same degradation/dilution rate $\alpha$.
(a) Write a mathematical model to model the cascade phenomena described above. You may model the system by describing the rate of change of protein concentrations.
(b) Simulate your model in (a) by choosing biologically relevant parameters. What is the concentration of protein $Z$ as a function of time?
(c) Compute the response time of $Z$ with respect to the time of addition of $u_X$. Discuss how you can improve the speed of this system's response by changing parameters. To compare response times across parameter settings, normalize the steady states so that the comparison is fair.
(d) Assume that you have a repressor cascade instead of an activation cascade. How do your conclusions change for parts (a)-(c)?
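One possible starting point for parts (a) and (b) is the logic (step-function) approximation used throughout [Alon]: $Y$ accumulates at rate $\beta$ once $u_X$ appears, and $Z$ is produced only while $Y$ exceeds $K_y$. A minimal forward-Euler sketch with illustrative (not prescribed) parameter values:

```python
# Logic approximation of the cascade X -> Y -> Z.
# beta, alpha, K_y and the time grid are illustrative choices only.
beta, alpha, K_y = 1.0, 0.1, 5.0
dt, T = 0.01, 100.0
Y, Z = 0.0, 0.0
ys, zs = [], []
for step in range(int(T / dt)):
    dY = beta - alpha * Y                          # X is active for t >= 0
    dZ = (beta if Y > K_y else 0.0) - alpha * Z    # Z produced once Y crosses K_y
    Y += dt * dY
    Z += dt * dZ
    ys.append(Y)
    zs.append(Z)
# Y relaxes to beta/alpha = 10 > K_y, so Z switches on after a delay.
```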
# Problem 2: Eukaryotic Transcriptional Control
Read this short paper on eukaryotic transcriptional control:
Kornberg, Roger D. "Eukaryotic transcriptional control." Trends in biochemical sciences 24.12 (1999): M46-M49. [URL](https://www.sciencedirect.com/science/article/pii/S0968000499014899)
Write a one paragraph summary on key differences in transcriptional control in prokaryotes and the eukaryotes.
# Problem 3: Autorepression Speeds Up Response
A paper published in 2002 showed that autorepression speeds up the response times of transcription networks in cells. We discussed the autorepression mechanisms in Week 4 and Week 5. In your simulations for HW 4, you could observe that the autorepression shows faster response when compared to unregulated gene expression. The goal of this problem is to use a similar mathematical model to analytically reason about the response time. Read the paper:
Rosenfeld, Nitzan, Michael B. Elowitz, and Uri Alon. "Negative autoregulation speeds the response times of transcription networks." Journal of molecular biology 323.5 (2002): 785-793. [URL](https://www.sciencedirect.com/science/article/pii/S0022283602009944?via%3Dihub)
Re-derive the expression of rise-time as shown in this paper to analytically comment about how autorepression can lead to faster response times.
# Feedback:
Please submit the feedback form on Gradescope (if you haven't yet) and write here the number of hours you needed to finish this problem set.
| github_jupyter |
# GPU Computing for Data Scientists
#### Using CUDA, Jupyter, PyCUDA, ArrayFire and Thrust
https://github.com/QuantScientist/Data-Science-ArrayFire-GPU
```
%reset -f
import pycuda
from pycuda import compiler
import pycuda.driver as drv
import pycuda.driver as cuda
```
# Make sure we have CUDA
```
drv.init()
print("%d device(s) found." % drv.Device.count())
for ordinal in range(drv.Device.count()):
    dev = drv.Device(ordinal)
    print("Device #%d: %s" % (ordinal, dev.name()))
drv
```
## Simple multiplication on the GPU: compilation
```
import pycuda.autoinit
import numpy
from pycuda.compiler import SourceModule
srcGPU = """
#include <stdio.h>
__global__ void multGPU(float *dest, float *a, float *b)
{
const int i = threadIdx.x;
dest[i] = a[i] * b[i];
//dest[i] = threadIdx.x + threadIdx.y + blockDim.x;
//dest[i] = blockDim.x;
//printf("I am %d.%d\\n", threadIdx.x, threadIdx.y);
}
"""
srcGPUModule = SourceModule(srcGPU)
print (srcGPUModule)
```
# Simple multiplication on the GPU: Host memory allocation
```
ARR_SIZE=16
a = numpy.random.randn(ARR_SIZE).astype(numpy.float32)
a=numpy.ones_like(a)*3
b = numpy.random.randn(ARR_SIZE).astype(numpy.float32)
b=numpy.ones_like(b)*2
dest = numpy.zeros_like(a)
# print dest
```
## Simple multiplication on the GPU: execution
```
multGPUFunc = srcGPUModule.get_function("multGPU")
print (multGPUFunc)
multGPUFunc(drv.Out(dest), drv.In(a), drv.In(b),
block=(ARR_SIZE,32,1))
print (dest)
# print "Calculating %d iterations" % (n_iter)
import timeit
rounds =3
print ('pycuda', timeit.timeit(lambda:
multGPUFunc(drv.Out(dest), drv.In(a), drv.In(b),
grid=(ARR_SIZE,1,1),
block=(1,1,1)),
number=rounds))
# print dest
# print 'pycuda', timeit.timeit(lambda:
# multGPUFunc(drv.Out(dest), drv.In(a), drv.In(b),
# block=(ARR_SIZE,1,1)),
# number=rounds)
# print dest
print ('npy', timeit.timeit(lambda:a*b , number=rounds))
```
# Threads and Blocks
```
a = numpy.random.randn(4,4)
a=numpy.ones_like(a)
a = a.astype(numpy.float32)
a_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)
mod = SourceModule("""
#include <stdio.h>
__global__ void doublify(float *a)
{
int idx = threadIdx.x + threadIdx.y*4;
a[idx] *= 2;
//printf("I am %d.%d\\n", threadIdx.x, threadIdx.y);
printf("I am %dth thread in threadIdx.x:%d.threadIdx.y:%d blockIdx.:%d blockIdx.y:%d blockDim.x:%d blockDim.y:%d\\n",(threadIdx.x+threadIdx.y*blockDim.x+(blockIdx.x*blockDim.x*blockDim.y)+(blockIdx.y*blockDim.x*blockDim.y)),threadIdx.x, threadIdx.y,blockIdx.x,blockIdx.y,blockDim.x,blockDim.y);
}
""")
func = mod.get_function("doublify")
func(a_gpu, block=(16,1,1))
a_doubled = numpy.empty_like(a)
cuda.memcpy_dtoh(a_doubled, a_gpu)
print (a_doubled)
```
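The flat thread id printed by the kernel can be reproduced on the host to sanity-check the indexing arithmetic. Note this mirrors the kernel as written; a fully general global id would also scale the blockIdx.y term by gridDim.x:

```python
# Host-side reproduction of the flat index in the doublify kernel's printf.
def flat_index(tx, ty, bx, by, bdx, bdy):
    """tx + ty*blockDim.x + blockIdx.x*blockDim.x*blockDim.y + blockIdx.y*blockDim.x*blockDim.y"""
    return tx + ty * bdx + bx * bdx * bdy + by * bdx * bdy

# the launch above uses a single block of 16x1 threads: block=(16, 1, 1)
ids = [flat_index(tx, 0, 0, 0, 16, 1) for tx in range(16)]
print(ids)  # 0 through 15, one id per thread
```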

| github_jupyter |
```
import os
os.chdir('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/Code/Library')
from math import log
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from ExperimentFunc import exp_func, beta_gen_lasso
from Step1 import solve_beta_lasso
from Step2 import find_v_lasso
from Step3 import solve_omega, gw_l1, proj_l1_tan_cone, proj_l1_neg_tan_cone
from collections import namedtuple
from copy import deepcopy
Params = namedtuple('Params', ['step1', 'step2', 'step3'])
```
### <span style="color:purple">1) Cov(X) = I</span>
```
N = 100
n = 1000
p = 1000
Sigma_sqrt = np.eye(p)
noise_sd = 9
debias_idx = p - 1
cardi = 0.005
l1_bound = p*cardi
param_set = Params([l1_bound],
[l1_bound],
[gw_l1, proj_l1_tan_cone, proj_l1_neg_tan_cone])
z, z_biased = exp_func(N,
n,
p,
Sigma_sqrt,
noise_sd,
debias_idx,
param_set,
beta_gen_lasso,
solve_beta_lasso,
find_v_lasso,
solve_omega)
```
#### Compare the mean of the (debiased_beta - beta) and (non-debiased_beta - beta)
```
mean_non_debiased = np.mean(z_biased)
print("The mean of (non_debiased_beta - beta) is: ", mean_non_debiased)
mean_debiased = np.mean(np.array(z))
print("The mean of (debiased_beta - beta) is: ", mean_debiased)
```
#### Check if the (debiased_beta - beta) and (non-debiased_beta - beta) are standard normal
```
# non-debiased
fig = plt.figure()
ax = fig.add_subplot()
res = stats.probplot(z_biased, plot=ax)
plt.show()
# debiased
fig = plt.figure()
ax = fig.add_subplot()
res = stats.probplot(z, plot=ax)
plt.show()
```
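Beyond the Q-Q plots, a quick numeric sanity check is also useful: a standard normal sample should have mean near 0 and roughly 95% of its entries within +/-1.96. A sketch on simulated data (apply the same check to the notebook's `z` array):

```python
import random

# Simulated standard-normal data as a stand-in for the z array.
random.seed(0)
z_sim = [random.gauss(0, 1) for _ in range(10000)]
mean = sum(z_sim) / len(z_sim)
inside = sum(abs(v) < 1.96 for v in z_sim) / len(z_sim)   # should be close to 0.95
print(round(mean, 2), round(inside, 2))
```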
#### Save simulation results
```
np.save('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/ExpResults/Lasso/identity_z_biased.npy', z_biased)
np.save('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/ExpResults/Lasso/identity_z.npy', z)
```
### <span style="color:purple">2) Cov(X) with bounded eigenvalues</span>
```
# other parameters are the same as cov=I case
# Generate a cov matrix with bounded eigenvalues
# generate eigenvalues
cov_eigv = np.random.uniform(low = 0.5, high = 3.0, size = (p,))
D_sqrt = np.diag(cov_eigv**0.5)
# generate an orthonormal matrix
a = np.random.normal(size = (p,p))
u, s, vh = np.linalg.svd(a.T@a, full_matrices=True)
# generate the square root of cov matrix
Sigma_sqrt = D_sqrt @ u.T
z, z_biased = exp_func(N,
n,
p,
Sigma_sqrt,
noise_sd,
debias_idx,
param_set,
beta_gen_lasso,
solve_beta_lasso,
find_v_lasso,
solve_omega)
```
#### Compare the mean of the (debiased_beta - beta) and (non-debiased_beta - beta)
```
mean_non_debiased = np.mean(z_biased)
print("The mean of (non_debiased_beta - beta) is: ", mean_non_debiased)
mean_debiased = np.mean(np.array(z))
print("The mean of (debiased_beta - beta) is: ", mean_debiased)
```
#### Check if the (debiased_beta - beta) and (non-debiased_beta - beta) are standard normal
```
# non-debiased
fig = plt.figure()
ax = fig.add_subplot()
res = stats.probplot(z_biased, plot=ax)
plt.show()
# debiased
fig = plt.figure()
ax = fig.add_subplot()
res = stats.probplot(z, plot=ax)
plt.show()
```
#### Save the simulation results
```
np.save('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/ExpResults/Lasso/bddeig_z_biased.npy', z_biased)
np.save('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/ExpResults/Lasso/bddeig_z.npy', z)
```
### <span style = 'color:purple'>3) Cov(X) is the Cov of AR(1) Process</span>
```
# other parameters are the same as cov=I case
# Generate the square root of the cov matrix
rho = 0.4
rho_vec = []
for i in range(p):
    rho_vec.append(rho**i)
rho_vec = np.array(rho_vec)
# The Cholesky decomposition of the cov == the square root of the cov
Sigma_sqrt = [rho_vec]
for i in range(1, p):
    rho_vec_shifted = np.concatenate((np.zeros(i), rho_vec[:-i]))
    # print(rho_vec_shifted)
    Sigma_sqrt.append(rho_vec_shifted * (1-rho**2)**0.5)
Sigma_sqrt = np.array(Sigma_sqrt)
z, z_biased = exp_func(N,
n,
p,
Sigma_sqrt,
noise_sd,
debias_idx,
param_set,
beta_gen_lasso,
solve_beta_lasso,
find_v_lasso,
solve_omega)
```
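As a sanity check, the rows constructed above form a matrix A whose Gram matrix A^T A is exactly the AR(1) correlation matrix with entries rho^|i-j|. A pure-Python verification at a small dimension (it mirrors the numpy construction in the cell above):

```python
# Verify A^T A == AR(1) correlation matrix for a small p.
rho, p_small = 0.4, 6
rho_vec_s = [rho**i for i in range(p_small)]
scale = (1 - rho**2) ** 0.5
rows = [rho_vec_s[:]]
for i in range(1, p_small):
    rows.append([0.0] * i + [v * scale for v in rho_vec_s[:p_small - i]])
# Gram matrix of the columns: Sigma[i][j] = sum_k A[k][i] * A[k][j]
Sigma = [[sum(rows[k][i] * rows[k][j] for k in range(p_small))
          for j in range(p_small)] for i in range(p_small)]
ok = all(abs(Sigma[i][j] - rho**abs(i - j)) < 1e-12
         for i in range(p_small) for j in range(p_small))
print(ok)  # True
```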
#### Compare the mean of the (debiased_beta - beta) and (non-debiased_beta - beta)
```
mean_non_debiased = np.mean(z_biased)
print("The mean of (non_debiased_beta - beta) is: ", mean_non_debiased)
mean_debiased = np.mean(np.array(z))
print("The mean of (debiased_beta - beta) is: ", mean_debiased)
```
#### Check if the (debiased_beta - beta) and (non-debiased_beta - beta) are standard normal
```
# non-debiased
fig = plt.figure()
ax = fig.add_subplot()
res = stats.probplot(z_biased, plot=ax)
plt.show()
# debiased
fig = plt.figure()
ax = fig.add_subplot()
res = stats.probplot(z, plot=ax)
plt.show()
```
#### Save the simulation results
```
np.save('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/ExpResults/Lasso/ar1_z_biased.npy', z_biased)
np.save('/Users/yufei/Documents/2-CMU/DebiasingCvxConstrained/ExpResults/Lasso/ar1_z.npy', z)
```
| github_jupyter |
## <span style="color:#0B3B2E;float:right;font-family:Calibri">Jordan Graesser</span>
# MpGlue
### Handling image files with MpGlue
---
## Opening images
#### Everything begins with the `ropen` function.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import skimage
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (10, 10)
import mpglue as gl
# Setup the name of the image you want to open.
image2open = '../testing/data/225r85_etm_2000_0424.tif'
# Load a pointer to an image and give it a variable name.
with gl.ropen(image2open) as i_info:
    print(dir(i_info))
```
* We haven't actually loaded any image data.
* The variable, `i_info`, acts as a pointer to the GeoTiff.
* In Python terms, it is a **class instance**. The only way you can know this is by checking the documentation of `ropen` and knowing that it creates a class object, or by checking the variable's type with the built-in **type** function.
```
# Check the variable type
print(type(i_info))
```
* A class instance of `mpglue.raster_tools`.
## Getting image information
* Now, back to `i_info`. A class instance can contain various methods, a cornerstone of object-oriented programming.
* Check the instance methods with **dir**.
* Now we know what type of information we can get without opening the entire image.
* Class instance methods are called as objects of the instance, which in Python is by **instance._method_**.
```
# Get the name of the directory and file.
with gl.ropen(image2open) as i_info:
    print(i_info.file_name)
    print(i_info.filename)
    print(i_info.rows, i_info.cols)
    print(i_info.shape)
    print(i_info.name)
    print(i_info.left, i_info.right, i_info.top, i_info.bottom)
    print(i_info.extent)
```
## Getting image data
* Not all methods of `ropen` are information. Others are class functions.
* For example, we can open the image as a n-dimensional array by calling `read` from the class instance itself.
```
# Load the image as an n-dimensional array (NumPy array).
with gl.ropen(image2open) as i_info:
    image_array = i_info.read()

# Now check the type of the variable, `image_array`.
print(type(image_array))
```
* The variable `image_array` is a **NumPy** object.
```
# Check the shape of the image_array. *It should be the same size as the loaded image, except only one band.
print(image_array.shape)
```
### What happened to the other 5 bands?
* We need to check the documentation of `read`.
```
help(i_info.read)
```
### Now we see that the default is to only open 1 band (the first band of the image).
* If we want to open all of the bands then we have to specify this information in the `bands2open` parameter.
```
# We can load the new image into the same variable and it will be overwritten.
# Rather than open all 6 bands, we can start by loading two bands, the red and NIR.
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(bands2open=[3, 4])

# Check the array shape again.
print(image_array.shape)
```
### What if we only want a portion of an image?
* First, go back to the help documentation.
* The parameters, `i` and `j` are the starting row and column positions, respectively.
* The parameters, `rows` and `cols` are the number of samples to load.
```
# Open a 50 x 50 pixel array
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(bands2open=[3, 4],
                              i=20,
                              j=30,
                              rows=50,
                              cols=50)

print(image_array.shape)
```
### We can also extract a subset of an image via x,y coordinates
* To do this, use the parameters `x` and `y` in place of `i` and `j`.
* In the example below, we are reading from the top left corner of the image.
```
# Open a 50 x 50 pixel array
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(bands2open=[3, 4],
                              x=-45314.7978005,
                              y=-353296.312702,
                              rows=50,
                              cols=50)

print(image_array.shape)
```
### MpGlue also supports parallel reading
* This must be done with the `raster_tools` module, using `n_jobs`.
```
from mpglue import raster_tools
image_array = raster_tools.read(image2open,
bands2open=-1,
i=20,
j=30,
rows=50,
cols=50,
n_jobs=-1)
print(image_array.shape)
```
## Vegetation indexes
* MpGlue's `read` class has built-in vegetation indices.
### We see that NDVI is one option.
* By default, `compute_index` is set to 'none'.
* Use the `compute_index` option to return NDVI instead of the spectral bands.
```
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(compute_index='ndvi')
```
### Viewing images
* For quick visualization, you can use built-in methods.
```
# Let's view the NDVI array
with gl.ropen(image2open) as i_info:
    i_info.read(compute_index='ndvi')
    i_info.show(show_which='ndvi', color_map='Greens')

# We can also view a band in greyscale, but note that
# the array will be set as the red and NIR bands if
# an index was computed. Here, we are viewing band 1,
# which is the red band (1 of 2 bands in the array).
with gl.ropen(image2open) as i_info:
    i_info.read()
    i_info.show(color_map='Greys', band=1)

# In order to view any of the image's original bands,
# reload the array. Here, we load the entire image.
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(bands2open=-1)
    print(i_info.array_shape)

# Now view the MidIR band.
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(bands2open=-1)
    i_info.show(band=5, color_map='afmhot')

# Load the three visible bands and
# view the true color plot.
# !Warning! 16-bit arrays are scaled to byte
# when displaying RGB images.
with gl.ropen(image2open) as i_info:
    image_array = i_info.read(bands2open=[3, 2, 1],
                              sort_bands2open=False)
    i_info.show(band='rgb', clip_percentiles=(2, 98))
```
| github_jupyter |
# Tutorial 05: Creating Custom Networks
This tutorial walks you through the process of generating custom networks. Networks define the network geometry of a task, as well as the constituents of the network, e.g., vehicles, traffic lights, etc. Various networks are available in Flow, depicting a diverse set of open and closed traffic networks such as ring roads, intersections, traffic light grids, straight highway merges, and more.
In this tutorial, we will recreate the ring road network, seen in the figure below.
<img src="img/ring_scenario.png">
In order to recreate this network, we will design a *network* class. This class creates the configuration files needed to produce a transportation network within the simulator. It also specifies the location of edge nodes in the network, as well as the positioning of vehicles at the start of a run.
We begin by creating a class that inherits the methods of Flow's base network class. The separate methods are filled in in later sections.
```
# import Flow's base network class
from flow.networks import Network
# define the network class, and inherit properties from the base network class
class myNetwork(Network):
    pass
```
The rest of the tutorial is organized as follows: Sections 1 and 2 discuss the steps needed to specify custom traffic network geometry features and auxiliary features, respectively, while Section 3 implements the new network in a simulation for visualization and testing purposes.
## 1. Specifying Traffic Network Features
One of the core responsibilities of the network class is to generate the necessary xml files needed to initialize a SUMO instance. These xml files describe specific network features such as the position and directions of nodes and edges (see the figure above). Once the base network has been inherited, specifying these features becomes very systematic. All child classes are required to define at least the following three methods:
* **specify_nodes**: specifies the attributes of nodes in the network.
* **specify_edges**: specifies the attributes of edges containing pairs of nodes in the network.
* **specify_routes**: specifies the routes which vehicles can take starting from any edge.
Additionally, the following optional functions may also be defined:
* **specify_types**: specifies the attributes of various edge types (if any exist).
* **specify_connections**: specifies the attributes of connections. These attributes are used to describe how any specific node's incoming and outgoing edge/lane pairs are connected. If no connections are specified, SUMO will generate default connections.
All of the functions mentioned above take in as input `net_params`, and output a list of dictionary elements, with each element providing the attributes of the component to be specified.
This tutorial will cover the first three methods. For examples of `specify_types` and `specify_connections`, we refer interested users to the source code located in `flow/networks/ring.py` and `flow/networks/bridge_toll.py`, respectively.
### 1.1 ADDITIONAL_NET_PARAMS
The features used to parametrize the network are specified within the `NetParams` input, as discussed in tutorial 1. Specifically, for the sake of our network, the `additional_params` attribute within `NetParams` will be responsible for storing information on the radius, number of lanes, and speed limit within each lane, as seen in the figure above. Accordingly, we define `ADDITIONAL_NET_PARAMS` as follows:
```
ADDITIONAL_NET_PARAMS = {
"radius": 40,
"num_lanes": 1,
"speed_limit": 30,
}
```
All networks presented in Flow provide a unique `ADDITIONAL_NET_PARAMS` component containing the information needed to properly define the network parameters. We assume that these values are always provided by the user, and accordingly can be called from `net_params`. For example, if we would like to call the "radius" parameter, we simply type:
radius = net_params.additional_params["radius"]
### 1.2 specify_nodes
The nodes of a network are the positions of selected points in the network. These points are connected together using edges (see section 1.4). In order to specify the location of the nodes, the function `specify_nodes` is used. This function returns a list of dictionary elements, where each dictionary depicts the attributes of a single node. These node attributes include:
* **id**: the name of the node
* **x**: the x coordinate of the node
* **y**: the y coordinate of the node
* For other SUMO-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptions#Node_Descriptions
Referring to the figure at the top of this tutorial, we specify four nodes at the bottom (0,-r), top (0,r), left (-r,0), and right (r,0) of the ring. This is done as follows:
```
class myNetwork(myNetwork): # update my network class
def specify_nodes(self, net_params):
# one of the elements net_params will need is a "radius" value
r = net_params.additional_params["radius"]
# specify the name and position (x,y) of each node
nodes = [{"id": "bottom", "x": 0, "y": -r},
{"id": "right", "x": r, "y": 0},
{"id": "top", "x": 0, "y": r},
{"id": "left", "x": -r, "y": 0}]
return nodes
```
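As a quick standalone sanity check (plain Python, no Flow or SUMO required), we can confirm that the four nodes all lie on the circle of radius `r`; the value `r = 40` mirrors `ADDITIONAL_NET_PARAMS`:

```python
from math import hypot

r = 40  # mirrors ADDITIONAL_NET_PARAMS["radius"]
nodes = [{"id": "bottom", "x": 0, "y": -r},
         {"id": "right", "x": r, "y": 0},
         {"id": "top", "x": 0, "y": r},
         {"id": "left", "x": -r, "y": 0}]

# every node should sit exactly on the circle of radius r
assert all(hypot(node["x"], node["y"]) == r for node in nodes)
```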
### 1.3 specify_edges
Once specified, the nodes are linked using directed edges. This is done through the `specify_edges` method which, similar to `specify_nodes`, returns a list of dictionary elements, with each dictionary specifying the attributes of a single edge. The attributes include:
* **id**: the name of the edge
* **from**: the name of the node the edge starts from
* **to**: the name of the node the edge ends at
* **length**: the length of the edge
* **numLanes**: the number of lanes on the edge
* **speed**: the speed limit for vehicles on the edge
* For other SUMO-related attributes, see: http://sumo.dlr.de/wiki/Networks/Building_Networks_from_own_XML-descriptions#Edge_Descriptions.
One useful additional attribute is **shape**, which specifies the shape of the edge connecting two nodes. The shape consists of a series of subnodes (internal to SUMO) that are connected by straight lines to create a curved edge. If no shape is specified, the nodes are connected by a straight line. This attribute is needed for creating circular arcs in the system.
We now create four arcs connecting the nodes specified in Section 1.2, proceeding counter-clockwise:
```
# some mathematical operations that may be used
from numpy import pi, sin, cos, linspace
class myNetwork(myNetwork): # update my network class
def specify_edges(self, net_params):
r = net_params.additional_params["radius"]
edgelen = r * pi / 2
# this will let us control the number of lanes in the network
lanes = net_params.additional_params["num_lanes"]
# speed limit of vehicles in the network
speed_limit = net_params.additional_params["speed_limit"]
edges = [
{
"id": "edge0",
"numLanes": lanes,
"speed": speed_limit,
"from": "bottom",
"to": "right",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(-pi/2, 0, 40)]
},
{
"id": "edge1",
"numLanes": lanes,
"speed": speed_limit,
"from": "right",
"to": "top",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(0, pi/2, 40)]
},
{
"id": "edge2",
"numLanes": lanes,
"speed": speed_limit,
"from": "top",
"to": "left",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi/2, pi, 40)]},
{
"id": "edge3",
"numLanes": lanes,
"speed": speed_limit,
"from": "left",
"to": "bottom",
"length": edgelen,
"shape": [(r*cos(t), r*sin(t)) for t in linspace(pi, 3*pi/2, 40)]
}
]
return edges
```
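Independently of SUMO, we can check that the 40-point `shape` polyline closely matches the quarter-circle arc length `r * pi / 2` assigned to `length` — a standalone sketch:

```python
from numpy import pi, sin, cos, linspace, hypot

r = 40
edgelen = r * pi / 2
# same subnodes as "edge0" above
shape = [(r * cos(t), r * sin(t)) for t in linspace(-pi / 2, 0, 40)]

# total length of the straight segments between consecutive subnodes
polyline_len = sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(shape, shape[1:]))

# the polyline underestimates the true arc by well under 0.1%
assert abs(polyline_len - edgelen) / edgelen < 1e-3
```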
### 1.4 specify_routes
The route is a sequence of edges, which vehicles can traverse given their current positions. For example, a vehicle beginning in the edge titled "edge0" (see section 1.3) must traverse, in sequence, "edge0", "edge1", "edge2", and "edge3", before restarting its path.
In order to specify the routes a vehicle may take, the function `specify_routes` is used. The routes in this method can be specified in one of three ways:
**1. Single route per edge:**
For deterministic routes (as is the case in the ring road scenario), the routes can be specified as a dictionary where the keys represent the starting edges and the elements represent sequences of edges that the vehicle must traverse, with the first edge corresponding to the edge that the vehicle begins on. Note that the edges must be connected for the route to be valid.
For this network, the available routes can be defined as follows:
```
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"]}
return rts
```
**2. Multiple routes per edge:**
Alternatively, if the routes are meant to be stochastic, each element in the dictionary can be enriched to contain a list of (route, probability) tuples, where the first element in the tuple is one of the routes a vehicle can take from a specific starting edge, and the second element is the probability that a vehicle will choose that route. Note that, in this case, the probability values for each dictionary key must sum to one.
For example, modifying the code snippet we presented above, another valid way of representing the route in a more probabilistic setting is:
```
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)],
"edge1": [(["edge1", "edge2", "edge3", "edge0"], 1)],
"edge2": [(["edge2", "edge3", "edge0", "edge1"], 1)],
"edge3": [(["edge3", "edge0", "edge1", "edge2"], 1)]}
return rts
```
**3. Per-vehicle routes:**
Finally, if you would like to assign a specific starting route to a vehicle, you can do so by adding an element to the dictionary whose key is the name of the vehicle and whose value is the list of edges the vehicle is meant to traverse as soon as it is introduced to the network.
As an example, assume we have a vehicle named "human_0" in the network, initialized on the edge named "edge0". Then the route for this vehicle can be added through the `specify_routes` method as follows:
```
class myNetwork(myNetwork): # update my network class
def specify_routes(self, net_params):
rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"],
"edge1": ["edge1", "edge2", "edge3", "edge0"],
"edge2": ["edge2", "edge3", "edge0", "edge1"],
"edge3": ["edge3", "edge0", "edge1", "edge2"],
"human_0": ["edge0", "edge1", "edge2", "edge3"]}
return rts
```
In all three cases, the routes are ultimately represented in the class in the form described under the multiple routes setting, i.e.
```
>>> print(network.rts)
{
    "edge0": [
        (["edge0", "edge1", "edge2", "edge3"], 1)
    ],
    "edge1": [
        (["edge1", "edge2", "edge3", "edge0"], 1)
    ],
    "edge2": [
        (["edge2", "edge3", "edge0", "edge1"], 1)
    ],
    "edge3": [
        (["edge3", "edge0", "edge1", "edge2"], 1)
    ],
    "human_0": [
        (["edge0", "edge1", "edge2", "edge3"], 1)
    ]
}
```
where the vehicle-specific route is only included in the third case.
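The canonicalization described above can be sketched as a small helper — not part of Flow's API, just an illustration of the mapping:

```python
def normalize_routes(rts):
    """Map each route spec to the canonical list-of-(route, probability) form."""
    canonical = {}
    for key, val in rts.items():
        if val and isinstance(val[0], tuple):
            canonical[key] = val          # already [(route, prob), ...]
        else:
            canonical[key] = [(val, 1)]   # deterministic: one route, prob 1
    return canonical

rts = {"edge0": ["edge0", "edge1", "edge2", "edge3"]}
assert normalize_routes(rts) == {"edge0": [(["edge0", "edge1", "edge2", "edge3"], 1)]}
```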
## 2. Specifying Auxiliary Network Features
Other auxiliary methods exist within the base network class to help support vehicle state initialization and acquisition. Of these methods, the only required abstract method is:
* **specify_edge_starts**: defines edge starts for road sections with respect to some global reference.
Other optional abstract methods within the base network class include:
* **specify_internal_edge_starts**: defines the edge starts for internal edge nodes caused by finite length connections between road sections.
* **specify_intersection_edge_starts**: defines edge starts for intersections with respect to some global reference frames. Only needed by environments containing intersections.
* **gen_custom_start_pos**: used to generate a user defined set of starting positions for vehicles in the network.
### 2.2 Specifying the Starting Position of Edges
All of the above functions with prefix "specify" receive no inputs, and return a list of tuples in which the first element of the tuple is the name of the edge/intersection/internal_link, and the second element is the distance of the link from some global reference, i.e. [(link_0, pos_0), (link_1, pos_1), ...].
The data specified in `specify_edge_starts` is used to provide a "global" sense of the location of vehicles, in one dimension. This is done either through the `get_x_by_id` method within an environment, or the `get_absolute_position` method in the `Vehicles` object within an environment. The `specify_internal_edge_starts` method allows us to do the same for junctions/internal links when they are present in the network (this is not the case for the ring road).
In section 1, we created a network with four edges named "edge0", "edge1", "edge2", and "edge3". We assume "edge0" is the origin, so the edge start of "edge0" is at position 0. The next edge, "edge1", begins a quarter of the way around the ring from the starting point of "edge0", so its edge start is at position $r \cdot \frac{\pi}{2}$. This process continues for each of the remaining edges. We can then define the starting positions of the edges as follows:
```
# import some math functions we may use
from numpy import pi
class myNetwork(myNetwork): # update my network class
def specify_edge_starts(self):
r = self.net_params.additional_params["radius"]
edgestarts = [("edge0", 0),
("edge1", r * 1/2 * pi),
("edge2", r * pi),
("edge3", r * 3/2 * pi)]
return edgestarts
```
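Equivalently, since all four edges have the same length, the edge starts are just cumulative multiples of the edge length — a quick standalone check:

```python
from numpy import pi

r = 40
edgelen = r * pi / 2  # each edge spans a quarter of the ring

# edge k starts after k full edge lengths: 0, L, 2L, 3L
edgestarts = [("edge%d" % k, k * edgelen) for k in range(4)]
print(edgestarts)
```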
## 3. Testing the New Network
In this section, we run a new sumo simulation using our newly generated network class. For information on running sumo experiments, see `exercise01_sumo.ipynb`.
We begin by defining some of the components needed to run a sumo experiment.
```
from flow.core.params import VehicleParams
from flow.controllers import IDMController, ContinuousRouter
from flow.core.params import SumoParams, EnvParams, InitialConfig, NetParams
vehicles = VehicleParams()
vehicles.add(veh_id="human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=22)
sumo_params = SumoParams(sim_step=0.1, render=True)
initial_config = InitialConfig(bunching=40)
```
For visualization purposes, we use the environment `AccelEnv`, as it works with any given network.
```
from flow.envs.ring.accel import AccelEnv, ADDITIONAL_ENV_PARAMS
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS)
```
Next, using the `ADDITIONAL_NET_PARAMS` component we created in Section 1.1, we prepare the `NetParams` component.
```
additional_net_params = ADDITIONAL_NET_PARAMS.copy()
net_params = NetParams(additional_params=additional_net_params)
```
We are now ready to create and run our network. Using the newly defined network class, we create a network object and feed it into an `Experiment`. Finally, we can visually confirm that our network has been properly generated.
```
from flow.core.experiment import Experiment
network = myNetwork( # we use the newly defined network class
name="test_network",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config
)
# AccelEnv allows us to test any newly generated network quickly
env = AccelEnv(env_params, sumo_params, network)
exp = Experiment(env)
# run the sumo simulation for a set number of time steps
_ = exp.run(1, 1500)
```
# Solving Linear Systems: Iterative Methods
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://licensebuttons.net/l/by/4.0/80x15.png" /></a><br />This notebook by Xiaozhou Li is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
All code examples are also licensed under the [MIT license](http://opensource.org/licenses/MIT).
## General Form
For solving the linear system
$$
Ax = b,
$$
with the exact solution $x^{*}$, the general form based on fixed-point iteration is:
\begin{equation}
\begin{split}
x^{(0)} & = \text{initial guess} \\
x^{(k+1)} & = g(x^{(k)}) \quad k = 0,1,2,\ldots,
\end{split}
\end{equation}
where
$$
g(x) = x - C(Ax - b).
$$
The difficulty is to find a matrix $C$ such that
$$
\lim\limits_{k\rightarrow\infty}x^{(k)} = x^{*}
$$
and such that the algorithm converges quickly and cheaply.
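A standard way to make "converge" precise: writing the iteration as $x^{(k+1)} = Bx^{(k)} + Cb$ with $B = I - CA$, it converges for every initial guess exactly when the spectral radius $\rho(B) < 1$. A small standalone check, using the matrix from Example 1 below:

```python
import numpy as np

A = np.array([[9., -1., -1.],
              [-1., 10., -1.],
              [-1., -1., 15.]])

def spectral_radius_of_iteration(A, C):
    """rho(I - C A); the iteration x <- x - C(Ax - b) converges iff this is < 1."""
    B = np.eye(A.shape[0]) - C @ A
    return np.max(np.abs(np.linalg.eigvals(B)))

print(spectral_radius_of_iteration(A, np.eye(3)))                 # C = I: > 1, diverges
print(spectral_radius_of_iteration(A, np.diag(1. / np.diag(A))))  # Jacobi choice: < 1
print(spectral_radius_of_iteration(A, np.linalg.inv(A)))          # C = A^{-1}: 0
```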
**Example 1**
\begin{equation*}
A = \left[\begin{array}{ccc} 9& -1 & -1 \\ -1 & 10 & -1 \\ -1 & -1& 15\end{array}\right],\quad b = \left[\begin{array}{c} 7 \\ 8 \\ 13\end{array}\right],
\end{equation*}
has the exact solution $x^{*} = {[1, 1, 1]}^T$
```
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import clear_output, display
def IterC(A, b, C, x0, x_star, iters):
x = np.copy(x0)
print ('Iteration No. Numerical Solution Max norm error ')
print (0, x, np.linalg.norm(x_star-x, np.inf))
for i in range(iters):
x = x + np.dot(C, b - np.dot(A,x))
print (i+1, x, np.linalg.norm(x_star-x,np.inf))
A = np.array([[9., -1., -1.],[-1.,10.,-1.],[-1.,-1.,15.]])
b = np.array([7.,8.,13.])
```
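Before iterating, it is worth verifying that the stated exact solution really solves the system — a one-line check (the arrays are redefined here so the snippet is self-contained):

```python
import numpy as np

A = np.array([[9., -1., -1.], [-1., 10., -1.], [-1., -1., 15.]])
b = np.array([7., 8., 13.])
x_star = np.array([1., 1., 1.])

# the residual of the claimed exact solution should vanish
assert np.allclose(A @ x_star, b)
```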
**Naive Choice**
Choosing $C = I$, then
$$g(x) = x - (Ax - b),$$
and the fixed-point iteration
$$x^{(k+1)} = (I - A)x^{(k)} + b \quad k = 0,1,2,\ldots. $$
Let the initial guess be $x^{(0)} = [0, 0, 0]^T$.
```
C = np.eye(3)
x0 = np.zeros(3)
x_star = np.array([1.,1.,1.])
w = interactive(IterC, A=fixed(A), b=fixed(b), C=fixed(C), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
```
**Best Choice (theoretically)**
Choosing $C = A^{-1}$, then
$$g(x) = x - A^{-1}(Ax - b),$$
and the fixed-point iteration
$$x^{(k+1)} = A^{-1}b \quad k = 0,1,2,\ldots. $$
* This is equivalent to solving $Ax = b$ directly.
* However, it hints that $C$ should be close to $A^{-1}$.
**First Approach**
Let $D$ denote the main diagonal of $A$, $L$ denote the lower triangle of $A$ (entries below the main diagonal), and $U$ denote the upper triangle (entries above the main diagonal). Then $A = L + D + U$
Choosing $C = \text{diag}(A)^{-1} = D^{-1}$, then
$$g(x) = x - D^{-1}(Ax - b),$$
and the fixed-point iteration
$$Dx^{(k+1)} = b - (L + U)x^{(k)} \quad k = 0,1,2,\ldots. $$
```
C = np.diag(1./np.diag(A))
x0 = np.zeros(np.size(b))
#x0 = np.array([0,1.,0])
x_star = np.array([1.,1.,1.])
#IterC(A, b, C, x0, x_star, 10)
w = interactive(IterC, A=fixed(A), b=fixed(b), C=fixed(C), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
```
## Jacobi Method
### Matrix Form:
$$
x^{(k+1)} = x^{(k)} - D^{-1}(Ax^{(k)} - b)
$$
or
$$
Dx^{(k+1)} = b - (L+U)x^{(k)}
$$
### Algorithm
$$
x^{(k+1)}_i = \frac{b_i - \sum\limits_{j < i}a_{ij}x^{(k)}_j - \sum\limits_{j > i}a_{ij}x^{(k)}_j}{a_{ii}}
$$
```
def Jacobi(A, b, x0, x_star, iters):
x_old = np.copy(x0)
x_new = np.zeros(np.size(x0))
print (0, x_old, np.linalg.norm(x_star-x_old,np.inf))
for k in range(iters):
for i in range(np.size(x0)):
x_new[i] = (b[i] - np.dot(A[i,:i],x_old[:i]) - np.dot(A[i,i+1:],x_old[i+1:]))/A[i,i]
print (k+1, x_new, np.linalg.norm(x_star-x_new,np.inf))
x_old = np.copy(x_new)
w = interactive(Jacobi, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
```
**Second Approach**
Let $D$ denote the main diagonal of $A$, $L$ denote the lower triangle of $A$ (entries below the main diagonal), and $U$ denote the upper triangle (entries above the main diagonal). Then $A = L + D + U$
Choosing $C = (L + D)^{-1}$, then
$$g(x) = x - (L + D)^{-1}(Ax - b),$$
and the fixed-point iteration
$$(L + D)x^{(k+1)} = b - Ux^{(k)} \quad k = 0,1,2,\ldots. $$
```
def GS(A, b, x0, x_star, iters):
x = np.copy(x0)
print (0, x, np.linalg.norm(x_star-x,np.inf))
for k in range(iters):
for i in range(np.size(x0)):
x[i] = (b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i+1:],x[i+1:]))/A[i,i]
print (k+1, x, np.linalg.norm(x_star-x,np.inf))
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
```
## Gauss-Seidel Method
### Algorithm
$$
x^{(k+1)}_i = \frac{b_i - \sum\limits_{j < i}a_{ij}x^{(k+1)}_j - \sum\limits_{j > i}a_{ij}x^{(k)}_j}{a_{ii}}
$$
### Matrix Form:
$$
x^{(k+1)} = x^{(k)} - (L+D)^{-1}(Ax^{(k)} - b)
$$
or
$$
(L+D)x^{(k+1)} = b - Ux^{(k)}
$$
**Example 2**
\begin{equation*}
A = \left[\begin{array}{ccc} 3& 1 & -1 \\ 2 & 4 & 1 \\ -1 & 2& 5\end{array}\right],\quad b = \left[\begin{array}{c} 4 \\ 1 \\ 1\end{array}\right],
\end{equation*}
has the exact solution $x^{*} = {[2, -1, 1]}^T$
```
A = np.array([[3, 1, -1],[2,4,1],[-1,2,5]])
b = np.array([4,1,1])
x0 = np.zeros(np.size(b))
x_star = np.array([2.,-1.,1.])
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=40,value=0))
display(w)
```
**Example 3**
\begin{equation*}
A = \left[\begin{array}{ccc} 1& 2 & -2 \\ 1 & 1 & 1 \\ 2 & 2& 1\end{array}\right],\quad b = \left[\begin{array}{c} 7 \\ 8 \\ 13\end{array}\right],
\end{equation*}
has the exact solution $x^{*} = {[-3, 8, 3]}^T$
```
A = np.array([[1, 2, -2],[1,1,1],[2,2,1]])
b = np.array([7,8,13])
#x0 = np.zeros(np.size(b))
x0 = np.array([-1,1,1])
x_star = np.array([-3.,8.,3.])
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
B = np.eye(3) - np.dot(np.diag(1./np.diag(A)),A)
print(B)
print (np.linalg.eig(B))
```
**Example 4**
\begin{equation*}
A = \left[\begin{array}{cc} 1& 2 \\ 3 & 1 \end{array}\right],\quad b = \left[\begin{array}{c} 5 \\ 5\end{array}\right],
\end{equation*}
has the exact solution $x^{*} = {[1, 2]}^T$
or
\begin{equation*}
A = \left[\begin{array}{cc} 3& 1 \\ 1 & 2 \end{array}\right],\quad b = \left[\begin{array}{c} 5 \\ 5\end{array}\right],
\end{equation*}
```
#A = np.array([[1, 2],[3,1]])
A = np.array([[3, 1],[1,2]])
b = np.array([5,5])
#x0 = np.zeros(np.size(b))
x0 = np.array([0,0])
x_star = np.array([1.,2.,])
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
```
**Example 5**
Are the Jacobi and Gauss-Seidel iterations convergent for the following coefficient matrices?
\begin{equation*}
A_1 = \left[\begin{array}{ccc} 3& 0 & 4 \\ 7 & 4 & 2 \\ -1 & 1 & 2\end{array}\right],\quad A_2 = \left[\begin{array}{ccc} -3& 3 & -6 \\ -4 & 7 & -8 \\ 5 & 7 & -9\end{array}\right],
\end{equation*}
* Consider the **spectral radius** of the iterative matrix
* $B_J = -D^{-1}(L+U)$ and $B_{GS} = -(L+D)^{-1}U$
```
def Is_Jacobi_Gauss(A):
L = np.tril(A,-1)
U = np.triu(A,1)
D = np.diag(np.diag(A))
B_J = np.dot(np.diag(1./np.diag(A)), L+U)
B_GS = np.dot(np.linalg.inv(L+D),U)
rho_J = np.linalg.norm(np.linalg.eigvals(B_J), np.inf)
rho_GS = np.linalg.norm(np.linalg.eigvals(B_GS), np.inf)
print ("Spectral Radius")
print ("Jacobi: ", rho_J)
    print ("Gauss-Seidel: ", rho_GS)
A1 = np.array([[3, 0, 4],[7, 4, 2], [-1,1,2]])
A2 = np.array([[-3, 3, -6], [-4, 7, -8], [5, 7, -9]])
Is_Jacobi_Gauss(A2)
```
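The same idea explains the ordering effect in Example 4: swapping the rows changes $B_J = -D^{-1}(L+U)$ and hence its spectral radius. A standalone sketch:

```python
import numpy as np

def jacobi_spectral_radius(A):
    """rho(B_J) with B_J = -D^{-1}(L + U); Jacobi converges iff this is < 1."""
    off_diag = A - np.diag(np.diag(A))
    B_J = -np.diag(1. / np.diag(A)) @ off_diag
    return np.max(np.abs(np.linalg.eigvals(B_J)))

# Example 4: the same equations in two row orderings
print(jacobi_spectral_radius(np.array([[1., 2.], [3., 1.]])))  # sqrt(6) > 1: diverges
print(jacobi_spectral_radius(np.array([[3., 1.], [1., 2.]])))  # sqrt(1/6) < 1: converges
```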
## Successive Over-Relaxation (SOR)
### Algorithm
$$
x^{(k+1)}_i = x^{(k)}_i + \omega \frac{b_i - \sum\limits_{j < i}a_{ij}x^{(k+1)}_j - \sum\limits_{j \geq i}a_{ij}x^{(k)}_j}{a_{ii}}
$$
### Matrix Form:
$$
x^{(k+1)} = x^{(k)} - \omega(\omega L+D)^{-1}(Ax^{(k)} - b)
$$
or
$$
(\omega L+D)x^{(k+1)} = ((1-\omega)D - \omega U)x^{(k)} + \omega b
$$
```
def SOR(A, b, x0, x_star, omega, iters):
x = np.copy(x0)
print (0, x, np.linalg.norm(x_star-x,np.inf))
for k in range(iters):
for i in range(np.size(x0)):
x[i] = x[i] + omega*(b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i:],x[i:]))/A[i,i]
print (k+1, x, np.linalg.norm(x_star-x,np.inf))
def SOR2(A, b, x0, x_star, omega, iters):
x = np.copy(x0)
for k in range(iters):
for i in range(np.size(x0)):
x[i] = x[i] + omega*(b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i:],x[i:]))/A[i,i]
return (np.linalg.norm(x_star-x,np.inf))
def SOR3(A, b, x0, x_star, omega, iters):
x = np.copy(x0)
print (0, np.linalg.norm(x_star-x,np.inf))
for k in range(iters):
for i in range(np.size(x0)):
x[i] = x[i] + omega*(b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i:],x[i:]))/A[i,i]
print (k+1, np.linalg.norm(x_star-x,np.inf))
A = np.array([[9., -1., -1.],[-1.,10.,-1.],[-1.,-1.,15.]])
b = np.array([7.,8.,13.])
x0 = np.array([0.,0.,0.])
x_star = np.array([1.,1.,1.])
omega = 1.01
w = interactive(SOR, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), omega=fixed(omega), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
```
**Example 6**
\begin{equation*}
A = \left[\begin{array}{ccc} 2& -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1& 2\end{array}\right],\quad b = \left[\begin{array}{c} 1 \\ 0 \\ 1.8\end{array}\right],
\end{equation*}
has the exact solution $x^{*} = {[1.2, 1.4, 1.6]}^T$
```
A = np.array([[2, -1, 0],[-1, 2, -1], [0, -1, 2]])
b = np.array([1., 0, 1.8])
x0 = np.array([1.,1.,1.])
x_star = np.array([1.2,1.4,1.6])
omega = 1.2
w = interactive(SOR, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), omega=fixed(omega), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
num = 21
omega = np.linspace(0.8, 1.8, num)
err1 = np.zeros(num)
for i in range(num):
err1[i] = SOR2(A, b, x0, x_star, omega[i], 10)
print (err1)
plt.plot(omega, np.log10(err1), 'o')
```
**Example 7**
\begin{equation*}
A = \left[\begin{array}{cccc} -4& 1 & 1 & 1 \\ 1 & -4 & 1 & 1 \\ 1 & 1& -4 &1 \\ 1 & 1 &1 & -4\end{array}\right],\quad b = \left[\begin{array}{c} 1 \\ 1 \\ 1 \\ 1\end{array}\right],
\end{equation*}
has the exact solution $x^{*} = {[-1, -1, -1, -1]}^T$
```
A = np.array([[-4, 1, 1, 1],[1, -4, 1, 1], [1, 1, -4, 1], [1, 1, 1, -4]])
b = np.array([1, 1, 1, 1])
x0 = np.zeros(np.size(b))
x_star = np.array([-1,-1,-1,-1])
omega = 1.25
w = interactive(SOR, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), omega=fixed(omega), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
w = interactive(GS, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=100,value=0))
display(w)
num = 21
omega = np.linspace(0.8, 1.8, num)
err1 = np.zeros(num)
for i in range(num):
err1[i] = SOR2(A, b, x0, x_star, omega[i], 10)
print (err1)
plt.plot(omega, np.log10(err1), 'o')
```
**Example 8**
\begin{equation*}
A=\begin{pmatrix}{3} & {-1} & {0} & 0 & 0 & \frac{1}{2} \\ {-1} & {3} & {-1} & {0} & \frac{1}{2} & 0\\ {0} & {-1} & {3} & {-1} & {0} & 0 \\ 0& {0} & {-1} & {3} & {-1} & {0} \\ {0} & \frac{1}{2} & {0} & {-1} & {3} & {-1} \\ \frac{1}{2} & {0} & 0 & 0 & {-1} & {3}\end{pmatrix},\,\,b=\begin{pmatrix}\frac{5}{2} \\ \frac{3}{2} \\ 1 \\ 1 \\ \frac{3}{2} \\ \frac{5}{2} \end{pmatrix}
\end{equation*}
has the exact solution $x^{*} = {[1, 1, 1, 1, 1, 1]}^T$
```
n0 = 6
A = 3*np.eye(n0) - np.diag(np.ones(n0-1),-1) - np.diag(np.ones(n0-1),+1)
for i in range(n0):
if (abs(n0-1 - 2*i) > 1):
A[i, n0-1-i] = - 1/2
print (A)
x_star = np.ones(n0)
b = np.dot(A, x_star)
x0 = np.zeros(np.size(b))
omega = 1.25
w = interactive(SOR, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), omega=fixed(omega), iters=widgets.IntSlider(min=0,max=20,value=0))
display(w)
num = 21
omega = np.linspace(0.8, 1.8, num)
err1 = np.zeros(num)
for i in range(num):
err1[i] = SOR2(A, b, x0, x_star, omega[i], 10)
print (err1)
plt.plot(omega, np.log10(err1), 'o')
w = interactive(Jacobi, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), iters=widgets.IntSlider(min=0,max=100,value=0))
display(w)
```
## Sparse Matrix Computations
A coefficient matrix is called sparse if many of the matrix entries are known to be zero. Often, of the $n^2$ eligible entries in a sparse matrix, only $\mathcal{O}(n)$ of them are nonzero. A full matrix is the opposite, where few entries may be assumed to be zero.
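In practice, sparse matrices are stored in formats that keep only the nonzero entries. As an illustration (using `scipy.sparse`; `scipy` is already a dependency of a later cell), the tridiagonal part of the Example 9 matrix needs only $\mathcal{O}(n)$ storage:

```python
import numpy as np
from scipy.sparse import diags

n0 = 10000
# only the three diagonals are stored: O(n) values instead of n^2 entries
A_sparse = diags([-np.ones(n0 - 1), 3 * np.ones(n0), -np.ones(n0 - 1)],
                 offsets=[-1, 0, 1], format="csr")
print(A_sparse.nnz, "stored nonzeros out of", n0 * n0, "matrix entries")
```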
**Example 9**
Consider the $n$-equation version of
\begin{equation*}
A=\begin{pmatrix}{3} & {-1} & {0} & 0 & 0 & \frac{1}{2} \\ {-1} & {3} & {-1} & {0} & \frac{1}{2} & 0\\ {0} & {-1} & {3} & {-1} & {0} & 0 \\ 0& {0} & {-1} & {3} & {-1} & {0} \\ {0} & \frac{1}{2} & {0} & {-1} & {3} & {-1} \\ \frac{1}{2} & {0} & 0 & 0 & {-1} & {3}\end{pmatrix},
\end{equation*}
which has the exact solution $x^{*} = {[1, 1,\ldots, 1]}^T$ with $b = A x^{*}$.
* First, let us take a look at the matrix $A$
```
n0 = 10000
A = 3*np.eye(n0) - np.diag(np.ones(n0-1),-1) - np.diag(np.ones(n0-1),+1)
for i in range(n0):
if (abs(n0-1 - 2*i) > 1):
A[i, n0-1-i] = - 1/2
#plt.spy(A)
#plt.show()
```
* What does the $PA = LU$ factorization of the above matrix $A$ look like?
* Are the $L$ and $U$ matrices still sparse?
```
import scipy.linalg
#P, L, U = scipy.linalg.lu(A)
#plt.spy(L)
#plt.show()
```
Gaussian elimination applied to a sparse matrix usually causes **fill-in**, where the coefficient matrix changes from sparse to full due to the necessary row operations. For this reason, the efficiency of Gaussian elimination and its $PA = LU$ implementation become questionable for sparse matrices, leaving iterative methods as a feasible alternative.
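Fill-in is easy to observe directly by counting nonzeros before and after LU factorization of a small version of the Example 9 matrix (a standalone sketch; the exact amount of fill depends on the pivoting):

```python
import numpy as np
import scipy.linalg

# small version of the Example 9 matrix: tridiagonal plus a partial anti-diagonal
n0 = 50
A = 3 * np.eye(n0) - np.diag(np.ones(n0 - 1), -1) - np.diag(np.ones(n0 - 1), 1)
for i in range(n0):
    if abs(n0 - 1 - 2 * i) > 1:
        A[i, n0 - 1 - i] = -1 / 2

P, L, U = scipy.linalg.lu(A)
# elimination creates nonzeros in L and U where A had zeros
print("nnz(A)          =", np.count_nonzero(A))
print("nnz(L) + nnz(U) =", np.count_nonzero(L) + np.count_nonzero(U))
```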
* Let us solve it with SOR method
```
x_star = np.ones(n0)
b = np.dot(A, x_star)
x0 = np.zeros(np.size(b))
omega = 1.25
w = interactive(SOR3, A=fixed(A), b=fixed(b), x0=fixed(x0), x_star=fixed(x_star), omega=fixed(omega), iters=widgets.IntSlider(min=0,max=200,value=0, step=10))
display(w)
```
## Application for Solving Laplace's Equation
### Laplace's equation
Consider the Laplace's equation given as
$$
\nabla^2 u = 0,\quad\quad (x,y) \in D,
$$
where $\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$, and the boundary conditions are given as

### Finite Difference Approximation
Here, we use a rectangular grid $(x_i,y_j)$, where
$$
x_i = i\Delta x, \,\,\text{for }\, i = 0,1,\ldots,N+1;\quad y_j = j\Delta y,\,\,\text{for }\, j = 0,1,\ldots,M+1.
$$
Five-points scheme:
$$
-\lambda^2 u_{i+1,j} + 2(1+\lambda^2)u_{i,j} - \lambda^2u_{i-1,j} - u_{i,j+1} - u_{i,j-1} = 0,\quad\text{for}\,\, i = 1,\ldots,N,\,\, j = 1,\ldots,M,
$$
where $\lambda = \frac{\Delta y}{\Delta x}$. The boundary conditions are
- $x = 0: u_{0,j} = g_L(y_j), \quad\text{for }\, j = 1,\ldots,M$,
- $x = a: u_{N+1,j} = g_R(y_j), \quad\text{for }\, j = 1,\ldots,M$,
- $y = 0: u_{i,0} = g_B(x_i), \quad\text{for }\, i = 1,\ldots,N$,
- $y = b: u_{i,M+1} = g_T(x_i), \quad\text{for }\, i = 1,\ldots,N$.
```
def generate_TD(N, dx, dy):
T = np.zeros([N,N])
a = - (dy/dx)**2
b = 2*(1 - a)
for i in range(N):
T[i,i] += b
if (i < N-1):
T[i,i+1] += a
if (i > 0):
T[i,i-1] += a
D = -np.identity(N)
return T, D
def assemble_matrix_A(dx, dy, N, M):
T, D = generate_TD(N, dx, dy)
A = np.zeros([N*M, N*M])
for j in range(M):
A[j*N:(j+1)*N,j*N:(j+1)*N] += T
if (j < M-1):
A[j*N:(j+1)*N,(j+1)*N:(j+2)*N] += D
if (j > 0):
A[j*N:(j+1)*N,(j-1)*N:j*N] += D
return A
N = 4
M = 4
dx = 1./(N+1)
dy = 1./(M+1)
T, D = generate_TD(N, dx, dy)
#print (T)
A = assemble_matrix_A(dx, dy, N, M)
#print (A)
plt.spy(A)
plt.show()
# Set boundary conditions
def gL(y):
return 0.
def gR(y):
return 0.
def gB(x):
return 0.
def gT(x):
return 1.
#return x*(1-x)*(4./5-x)*np.exp(6*x)
def assemble_vector_b(x, y, dx, dy, N, M, gL, gR, gB, gT):
b = np.zeros(N*M)
# Left BCs
for j in range(M):
        b[j*N] += (dy/dx)**2*gL(y[j+1])
# Right BCs
# b +=
# Bottom BCs
# b +=
# Top BCs:
for i in range(N):
b[(M-1)*N+i] += gT(x[i+1])
return b
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import axes3d
def Laplace_solver(a, b, N, M, gL, gR, gB, gT):
dx = b/(M+1)
dy = a/(N+1)
x = np.linspace(0, a, N+2)
y = np.linspace(0, b, M+2)
A = assemble_matrix_A(dx, dy, N, M)
b = assemble_vector_b(x, y, dx, dy, N, M, gL, gR, gB, gT)
v = np.linalg.solve(A,b)
# add boundary points + plotting
u = np.zeros([(N+2),(M+2)])
#u[1:(N+1),1:(M+1)] = np.reshape(v, (N, M))
# Top BCs
for i in range(N+2):
u[i,M+1] = gT(x[i])
u = np.transpose(u)
u[1:(M+1),1:(N+1)] = np.reshape(v, (M, N))
X, Y = np.meshgrid(x, y)
#Z = np.sin(2*np.pi*X)*np.sin(2*np.pi*Y)
fig = plt.figure()
#ax = plt.axes(projection='3d')
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.plot_surface(X, Y, u, rstride=1, cstride=1,
cmap='viridis', edgecolor='none')
ax.set_title('surface')
plt.show()
Laplace_solver(1, 1, 40, 40, gL, gR, gB, gT)
def Jacobi_tol(A, b, x0, tol):
x_old = np.copy(x0)
x_new = np.zeros(np.size(x0))
for i in range(np.size(x0)):
x_new[i] = (b[i] - np.dot(A[i,:i],x_old[:i]) - np.dot(A[i,i+1:],x_old[i+1:]))/A[i,i]
iters = 1
while ((np.linalg.norm(x_new-x_old,np.inf)) > tol):
x_old = np.copy(x_new)
for i in range(np.size(x0)):
x_new[i] = (b[i] - np.dot(A[i,:i],x_old[:i]) - np.dot(A[i,i+1:],x_old[i+1:]))/A[i,i]
iters += 1
return x_new, iters
def GS_tol(A, b, x0, tol):
x_old = np.copy(x0)
x = np.copy(x0)
for i in range(np.size(x0)):
x[i] = (b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i+1:],x[i+1:]))/A[i,i]
iters = 1
while ((np.linalg.norm(x-x_old,np.inf)) > tol):
x_old = np.copy(x)
for i in range(np.size(x0)):
x[i] = (b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i+1:],x[i+1:]))/A[i,i]
iters += 1
return x, iters
def SOR_tol(A, b, x0, omega, tol):
x_old = np.copy(x0)
x = np.copy(x0)
for i in range(np.size(x0)):
x[i] = x[i] + omega*(b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i:],x[i:]))/A[i,i]
iters = 1
while ((np.linalg.norm(x-x_old,np.inf)) > tol):
x_old = np.copy(x)
for i in range(np.size(x0)):
x[i] = x[i] + omega*(b[i] - np.dot(A[i,:i],x[:i]) - np.dot(A[i,i:],x[i:]))/A[i,i]
iters += 1
return x, iters
def CG_tol(A, b, x0, x_star, tol):
r_new = b - np.dot(A, x0)
    r_old = np.zeros(np.size(x0))
d_old = np.zeros(np.size(x0))
x = np.copy(x0)
iters = 0
while ((np.linalg.norm(x-x_star,np.inf)) > tol):
if (iters == 0):
d_new = np.copy(r_new)
else:
beta = np.dot(r_new,r_new)/np.dot(r_old,r_old)
d_new = r_new + beta*d_old
Ad = np.dot(A, d_new)
alpha = np.dot(r_new,r_new)/np.dot(d_new,Ad)
x += alpha*d_new
d_old = d_new
r_old = r_new
r_new = r_old - alpha*Ad
iters += 1
return x, iters
def Iterative_solver(a, b, N, M, gL, gR, gB, gT, tol):
dx = b/(M+1)
dy = a/(N+1)
x = np.linspace(0, a, N+2)
y = np.linspace(0, b, M+2)
A = assemble_matrix_A(dx, dy, N, M)
b = assemble_vector_b(x, y, dx, dy, N, M, gL, gR, gB, gT)
v = np.linalg.solve(A,b)
#tol = 1.e-8
v0 = np.zeros(np.size(b))
#v_J, iters = Jacobi_tol(A, b, v0, tol)
#print ("Jacobi Method: %4d %7.2e" %(iters, np.linalg.norm(v - v_J, np.inf)))
#v_GS, iters = GS_tol(A, b, v0, tol)
#print ("Gauss Seidel : %4d %7.2e" %(iters, np.linalg.norm(v - v_GS, np.inf)))
omega = 2./(1 + np.sin(np.pi*dx))
print ("omega = ", omega)
v_SOR, iters = SOR_tol(A, b, v0, omega, tol)
print ("SOR Method : %4d %7.2e" %(iters, np.linalg.norm(v - v_SOR, np.inf)))
v_CG, iters = CG_tol(A, b, v0, v, tol)
print ("CG Method : %4d %7.2e" %(iters, np.linalg.norm(v - v_CG, np.inf)))
Iterative_solver(1, 1, 80, 80, gL, gR, gB, gT, 1.e-4)
```
# Kesten Processes and Firm Dynamics
<a id='index-0'></a>
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import quantecon as qe
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
import yfinance as yf
import pandas as pd
s = yf.download('^IXIC', '2006-1-1', '2019-11-1')['Adj Close']
r = s.pct_change()
fig, ax = plt.subplots()
ax.plot(r, alpha=0.7)
ax.set_ylabel('returns', fontsize=12)
ax.set_xlabel('date', fontsize=12)
plt.show()
μ = -0.5
σ = 1.0
def kesten_ts(ts_length=100):
x = np.zeros(ts_length)
for t in range(ts_length-1):
a = np.exp(μ + σ * np.random.randn())
b = np.exp(np.random.randn())
x[t+1] = a * x[t] + b
return x
fig, ax = plt.subplots()
num_paths = 10
np.random.seed(12)
for i in range(num_paths):
ax.plot(kesten_ts())
ax.set(xlabel='time', ylabel='$X_t$')
plt.show()
μ_a = -0.5 # location parameter for a
σ_a = 0.1 # scale parameter for a
μ_b = 0.0 # location parameter for b
σ_b = 0.5 # scale parameter for b
μ_e = 0.0 # location parameter for e
σ_e = 0.5 # scale parameter for e
s_bar = 1.0 # threshold
T = 500 # sampling date
M = 1_000_000 # number of firms
s_init = 1.0 # initial condition for each firm
α_0 = 1e-5
α_1 = 0.1
β = 0.9
years = 15
days = years * 250
def garch_ts(ts_length=days):
σ2 = 0
r = np.zeros(ts_length)
for t in range(ts_length-1):
ξ = np.random.randn()
σ2 = α_0 + σ2 * (α_1 * ξ**2 + β)
r[t] = np.sqrt(σ2) * np.random.randn()
return r
fig, ax = plt.subplots()
np.random.seed(12)
ax.plot(garch_ts(), alpha=0.7)
ax.set(xlabel='time', ylabel='$r_t$')  # the series plotted is the returns, not the variance
plt.show()
from numba import njit, prange
from numpy.random import randn
@njit(parallel=True)
def generate_draws(μ_a=-0.5,
σ_a=0.1,
μ_b=0.0,
σ_b=0.5,
μ_e=0.0,
σ_e=0.5,
s_bar=1.0,
T=500,
M=1_000_000,
s_init=1.0):
draws = np.empty(M)
for m in prange(M):
s = s_init
for t in range(T):
if s < s_bar:
new_s = np.exp(μ_e + σ_e * randn())
else:
a = np.exp(μ_a + σ_a * randn())
b = np.exp(μ_b + σ_b * randn())
new_s = a * s + b
s = new_s
draws[m] = s
return draws
data = generate_draws()
fig, ax = plt.subplots()
qe.rank_size_plot(data, ax, c=0.01)
plt.show()
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [writeup template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
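To make the color-selection idea concrete, here is a minimal pure-NumPy sketch of what `cv2.inRange` computes (the helper name `in_range` and the toy pixel values are ours, for illustration only — the real OpenCV call is what the pipeline below uses):

```python
import numpy as np

def in_range(img, lower, upper):
    """Pure-NumPy analogue of cv2.inRange: 255 where every channel of a
    pixel lies within [lower, upper], 0 elsewhere."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = np.all((img >= lower) & (img <= upper), axis=-1)
    return (inside * 255).astype(np.uint8)

# tiny 1x3 "image": one yellow-ish, one white, one dark pixel (HSV-like triples)
img = np.array([[[30, 200, 200], [0, 0, 255], [10, 10, 10]]], dtype=np.uint8)
mask = in_range(img, [20, 150, 150], [40, 255, 255])
print(mask)  # only the first pixel falls in the yellow range: [[255 0 0]]
```

The resulting binary mask is exactly what `cv2.bitwise_and` then uses to keep only the matching pixels.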
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def hsv(image):
return cv2.cvtColor(image,cv2.COLOR_RGB2HSV)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
x_size = img.shape[1]
y_size = img.shape[0]
#creating an array using points from houghspace
lines_slope_intercept = np.zeros(shape=(len(lines),2))
for index,line in enumerate(lines):
for x1,y1,x2,y2 in line:
#calculating slope and intercepts
slope = (y2-y1)/(x2-x1)
intercept = y1 - (x1 * slope)
#storing the slope and intercept in a list
lines_slope_intercept[index]=[slope,intercept]
#finding max and min slope lines
max_slope_line = lines_slope_intercept[lines_slope_intercept.argmax(axis=0)[0]]
min_slope_line = lines_slope_intercept[lines_slope_intercept.argmin(axis=0)[0]]
left_slopes = []
left_intercepts = []
right_slopes = []
right_intercepts = []
for line in lines_slope_intercept:
if abs(line[0] - max_slope_line[0]) < 0.15 and abs(line[1] - max_slope_line[1]) < (0.15 * x_size):
left_slopes.append(line[0])
left_intercepts.append(line[1])
elif abs(line[0] - min_slope_line[0]) < 0.15 and abs(line[1] - min_slope_line[1]) < (0.15 * x_size):
right_slopes.append(line[0])
right_intercepts.append(line[1])
# left and right lines are averages of these slopes and intercepts, extrapolate lines to edges and center*
new_lines = np.zeros(shape=(1,2,4), dtype=np.int32)
if len(left_slopes) > 0:
left_line = [sum(left_slopes)/len(left_slopes),sum(left_intercepts)/len(left_intercepts)]
left_bottom_x = (y_size - left_line[1])/left_line[0]
left_top_x = (y_size*.575 - left_line[1])/left_line[0]
if (left_bottom_x >= 0):
new_lines[0][0] =[left_bottom_x,y_size,left_top_x,y_size*.575]
if len(right_slopes) > 0:
right_line = [sum(right_slopes)/len(right_slopes),sum(right_intercepts)/len(right_intercepts)]
right_bottom_x = (y_size - right_line[1])/right_line[0]
right_top_x = (y_size*.575 - right_line[1])/right_line[0]
if (right_bottom_x <= x_size):
new_lines[0][1]=[right_bottom_x,y_size,right_top_x,y_size*.575]
for line in new_lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
#reading in an image
for index, img in enumerate(os.listdir("test_images/")):
image = mpimg.imread('test_images/' + img)
#print(image.shape)
gray_img = grayscale(image)
hsv_img = hsv(image)
# define range of color in HSV
lower_yellow = np.array([20,150,150])
upper_yellow = np.array([40,255,255])
lower_white = np.array([0,0,230])
upper_white = np.array([255,255,255])
# Threshold the HSV image to get only yellow/white
yellow_mask = cv2.inRange(hsv_img, lower_yellow, upper_yellow)
white_mask = cv2.inRange(hsv_img, lower_white, upper_white)
# Bitwise-OR mask and original image
full_mask = cv2.bitwise_or(yellow_mask, white_mask)
subdued_gray = (gray_img / 2).astype('uint8')
boosted_lanes = cv2.bitwise_or(subdued_gray, full_mask)
#defining kernel size for gaussian smoothing/blurring
kernel_size = 5
blurred_img = gaussian_blur(boosted_lanes,kernel_size)
#defining threshold for canny edge detection
canny_low_threshold = 60
canny_high_threshold = 150
edges_img = canny(blurred_img,canny_low_threshold,canny_high_threshold)
#defining vertices for fillpoly
x = edges_img.shape[1]
y = edges_img.shape[0]
vertices = np.array([[(0,y),(450, 290), (490, 290), (x,y)]], dtype=np.int32)
masked_image = region_of_interest(edges_img, vertices)
#defining parameters for hough transform
rho = 2.5
theta = np.pi/180
threshold = 68
min_line_length = 70
max_line_gap = 250
hough_image = hough_lines(masked_image,rho,theta,threshold,min_line_length,max_line_gap)
result = weighted_img(hough_image,image)
fig = plt.figure(figsize=(6,10))
plt.imshow(result, cmap="gray")
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
gray_img = grayscale(image)
hsv_img = hsv(image)
# define range of color in HSV
lower_yellow = np.array([20,150,150])
upper_yellow = np.array([40,255,255])
lower_white = np.array([0,0,230])
upper_white = np.array([255,255,255])
# Threshold the HSV image to get only yellow/white
yellow_mask = cv2.inRange(hsv_img, lower_yellow, upper_yellow)
white_mask = cv2.inRange(hsv_img, lower_white, upper_white)
# Bitwise-OR mask and original image
full_mask = cv2.bitwise_or(yellow_mask, white_mask)
subdued_gray = (gray_img / 2).astype('uint8')
boosted_lanes = cv2.bitwise_or(subdued_gray, full_mask)
#defining kernel size for gaussian smoothing/blurring
kernel_size = 5
blurred_img = gaussian_blur(boosted_lanes,kernel_size)
#defining threshold for canny edge detection
canny_low_threshold = 60
canny_high_threshold = 150
edges_img = canny(blurred_img,canny_low_threshold,canny_high_threshold)
#defining vertices for fillpoly
x = edges_img.shape[1]
y = edges_img.shape[0]
vertices = np.array([[(0,y),(450, 290), (490, 290), (x,y)]], dtype=np.int32)
masked_image = region_of_interest(edges_img, vertices)
#defining parameters for hough transform
rho = 2.5
theta = np.pi/180
threshold = 68
min_line_length = 70
max_line_gap = 250
hough_image = hough_lines(masked_image,rho,theta,threshold,min_line_length,max_line_gap)
result = weighted_img(hough_image,image)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
```
import subprocess
import shlex
import pandas as pd
import numpy as np
from astropy.table import Table
from astropy.table import Column
import os
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from matplotlib.ticker import MultipleLocator
import glob
from matplotlib import pyplot
import matplotlib.gridspec as gridspec
import gc
basedir = "/home/xhall/Documents/"
RedshiftClass = Table.from_pandas(pd.read_csv(basedir + "NewZTF/ML_sample_snid200.csv"))
ML_sample_snid_examples = Table.from_pandas(pd.read_csv(basedir + "NewZTF/ML_sample_snid_brightexamples.csv"))
sample_2018 = Table.from_pandas(pd.read_csv(basedir + "NewZTF/sample_2018/ML_sample_snid_2018.csv"))
source = basedir + "NewZTF/sample_2018/SNIDoutput/"
output = basedir + "NewZTF/sample_2018/ImageOutput/"
def read_tables(files):
matches_files = files[0:len(files)-1]
spectra = Table.read(files[-1], format = "ascii", names = ["wavelength", "flux"])
matches = []
for i in matches_files:
input_data = open(i,'r').readlines()[0].split()
row = [[int(input_data[3][0]), input_data[4],input_data[5][1::],float(input_data[-3].split("-")[-1]),float(input_data[-1])]]
row.append(Table.read(i, format = "ascii", names = ["redshifted_wavelength", "flux"]))
matches.append(row)
return matches, spectra
def plot_box_spec(wave, flux):
flux_plot = np.repeat(flux, 2)
wv_plot = wave.copy()
wv_plot[:-1] += np.diff(wave)/2
wv_plot = np.append(wave[0]-(wave[1]-wave[0])/2,
np.append(np.repeat(wv_plot[0:-1], 2),
wave[-1]+(wave[-1]-wave[-2])/2))
return wv_plot, flux_plot
def specplot(x,y,xi,yi,snid_type,fname,output,best_num, z_template, z_template_unc, z_snid,
spec_num, show_redshift=False):
fig, ax = plt.subplots(figsize=(8,4.5))
ax.plot(xi,yi,color='#32384D',alpha=0.5,
label='New SN')
ax.plot(x,y,color='#217CA3',
label='SNID template', lw=3)
if show_redshift:
ax.plot(x[-3],y[-3],color='white',lw=0,
label=r'$z_\mathrm{} = $ {:.3f}$\,\pm\,${:.3f}'.format("{SNID}", z_template, z_template_unc))
ax.text(0.78, 0.955, r'$z_\mathrm{} = ${:.4f}'.format("{SN}", z_snid),
va='center',
fontsize=15, transform=plt.gcf().transFigure)
else:
ax.text(0.78, 0.955, 'Match #{:d}'.format(spec_num),
va='center',
fontsize=15, transform=plt.gcf().transFigure)
ax.plot(x[-3],y[-3],color='#217CA3', lw=3)
ax.set_xlabel(r'Rest Frame Wavelength ($\mathrm{\AA}$)', fontsize=17)
ax.set_ylabel('Relative Flux', fontsize=17)
ax.tick_params(which='both',labelsize=15)
ax.grid(axis='x', color='0.7', ls=':')
ax.xaxis.set_minor_locator(MultipleLocator(250))
ax.set_yticklabels([])
ax.text(0.105, 0.955, 'SNID type: ',
va='center',
fontsize=15, transform=plt.gcf().transFigure)
ax.text(0.245, 0.955, snid_type,
color='#217CA3', weight='bold', va='center',
fontsize=23, transform=plt.gcf().transFigure)
ax.legend(fancybox=True)
fig.subplots_adjust(left=0.055,right=0.975,top=0.925,bottom=0.145)
fig.savefig(output + 'snidfits_emclip_' + fname + "_" + str(best_num) + '.png', dpi = 600)
plt.close(fig)
def plot_best_5(source, output, spectra_name, z_snid):
source_folder = source + spectra_name
files = np.sort(glob.glob(source_folder+"/*.dat"))
if(len(files)==0):
print(spectra_name)
return -1
matches, spectra = read_tables(files)
for spec_num, i in enumerate(matches):
z = i[0][3]
snid_type = i[0][2][:-1]
xi, yi = plot_box_spec(spectra["wavelength"], spectra["flux"])
xi /= (1+z)
x, y = i[1]["redshifted_wavelength"] / (1+z), i[1]["flux"]
specplot(x,y,xi,yi,snid_type,spectra_name,output,i[0][0], z, i[0][4], z_snid, spec_num)
sample_2018[0]
counter = 0
for i in sample_2018:
spectra_name = i["Version"].split(".")[0]
z_snid = i["z_snid"]
plot_best_5(source,output,spectra_name,z_snid)
gc.collect()
if(counter%20 == 0):
print(counter)
counter += 1
break  # process only the first object for now; remove this line to run the full sample
pngs = glob.glob(output + "/*.png")
len(pngs)
len(sample_2018)*5
glob.glob(source + "/ZTF18aaxdrjn_20180531_P60_v1/*.*")
read_tables(np.sort(glob.glob(source + "/ZTF18aaxdrjn_20180531_P60_v1/*.dat")))
glob.glob(source + "/ZTF18aabssth_20180309_P60_v1/*.dat")
plot_best_5("/home/xhall/Documents/RandomSNID/","/home/xhall/Documents/RandomSNID/","lris20201012_ZTF20acdehpz",0.1751)
plt.boxplot([2 * -1, 2 * 1])
plt.hlines(.5,.75,1.25, color = "Blue")
```
```
import holoviews as hv
hv.extension('bokeh')
hv.opts.defaults(hv.opts.Curve(width=500),
hv.opts.Histogram(width=500),
hv.opts.HLine(alpha=0.5, color='r', line_dash='dashed'))
import numpy as np
import scipy.stats
```
# Markov Chains
## Introduction
In the previous lesson we studied random walks and defined what a stochastic process is. In what follows we restrict ourselves to stochastic processes that can only take values from a discrete set $\mathcal{S}$, at times $n>0$ that are also discrete.
We will call $\mathcal{S}=\{1, 2, \ldots, M\}$ the set of **states** of the process. Each particular state is usually denoted by a natural number.
Recall that for a stochastic process to be considered a **Markov chain** it must satisfy
$$
P(X_{n+1}|X_{n}, X_{n-1}, \ldots, X_{1}) = P(X_{n+1}|X_{n})
$$
which is known as the Markov property.
:::{important}
In a Markov chain the future state is independent of the past once the present state is known.
:::
## Transition matrix
If the Markov chain has discrete states and is homogeneous, we can write
$$
P(X_{n+1}=j|X_{n}=i) = P_{ij},
$$
where homogeneous means that the probability of transitioning from one state to another does not change over time. The probability $P_{ij}$ is usually called the "one-step" transition probability.
The set of all combinations $P_{ij}$ for $i,j \in \mathcal{S}$ forms a square $M \times M$ matrix known as the transition matrix
$$
P = \begin{pmatrix} P_{11} & P_{12} & \ldots & P_{1M} \\
P_{21} & P_{22} & \ldots & P_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
P_{M1} & P_{M2} & \ldots & P_{MM}\end{pmatrix}
$$
where every row must always sum to 1,
$$
\sum_{j \in \mathcal{S}} P_{ij} = 1,
$$
and every entry must satisfy $P_{ij} \in [0, 1]$.
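These conditions are easy to check numerically. A minimal sketch (the helper name `is_stochastic` is ours):

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check that P is a valid (row-)stochastic matrix:
    square, entries in [0, 1], and every row summing to 1."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        return False
    if np.any(P < 0) or np.any(P > 1):
        return False
    return np.allclose(P.sum(axis=1), 1.0, atol=tol)

P_ok = np.array([[0.70, 0.30],
                 [0.45, 0.55]])
P_bad = np.array([[0.70, 0.40],   # first row sums to 1.1
                  [0.45, 0.55]])
print(is_stochastic(P_ok))   # True
print(is_stochastic(P_bad))  # False
```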
A transition matrix, or stochastic matrix, can be represented as a directed graph whose vertices are the states and whose edges are the transition probabilities, or weights.
The following is an example graph for a four-state system in which all transitions are equivalent and equal to $1/2$. Transitions with probability $0$ are not shown.
<img src="images/markov2.png" width="300">
Now consider the following example
<img src="images/markov-ruin.png" width="400">
:::{note}
Once we leave state $0$ or state $3$ we can no longer return to them.
:::
States to which we cannot return are known as **transient** states. Conversely, states to which we do have the possibility of returning are called **recurrent** states.
In general, when a chain has states to which one cannot return, the chain is said to be **reducible**. Conversely, if we can return to every state, the chain is said to be **irreducible**.
:::{note}
A reducible chain can be "split" to create irreducible chains.
:::
In the example above we can separate $\{0\}$, $\{1,2\}$ and $\{3\}$ into three irreducible chains [^ruina]
[^ruina]: The Markov chain above models a problem known as the [gambler's ruin](https://en.wikipedia.org/wiki/Gambler%27s_ruin); you can read about it [here](http://manjeetdahiya.com/posts/markov-chains-gamblers-ruin/)
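As a hedged illustration, assume the fair-coin version of this chain (our assumption: from each interior state we move left or right with probability $1/2$, and states $0$ and $3$ are absorbing). Raising $P$ to a large power shows all probability mass ending up in the absorbing states:

```python
import numpy as np

# Gambler's-ruin chain on states {0, 1, 2, 3}: 0 and 3 are absorbing,
# interior states move left/right with probability 1/2 (assumed values).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

# After many steps, mass in the transient states {1, 2} vanishes:
Pn = np.linalg.matrix_power(P, 100)
print(np.round(Pn[1], 3))  # starting from state 1: ~[0.667, 0, 0, 0.333]
```

The limiting row agrees with the classical gambler's-ruin result: starting from state $1$, ruin (absorption at $0$) happens with probability $2/3$.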
## Example: A two-state chain
Say we want to predict the weather in Valdivia using a Markov chain. We will therefore assume that tomorrow's weather is perfectly predictable from today's weather. Consider two states
- $s_A$: Rainy
- $s_B$: Sunny
with conditional probabilities $P(s_A|s_A) = 0.7$, $P(s_B|s_A) = 0.3$, $P(s_A|s_B) = 0.45$ and $P(s_B|s_B) = 0.55$. In this case the transition matrix is
$$
P = \begin{pmatrix} P(s_A|s_A) & P(s_B|s_A) \\ P(s_A|s_B) & P(s_B|s_B) \end{pmatrix} = \begin{pmatrix} 0.7 & 0.3 \\ 0.45 & 0.55 \end{pmatrix}
$$
which can also be visualized as a transition map
<img src="images/markov1.png" width="500">
If it is sunny today, what is the probability of rain tomorrow? In three days? In a week?
Let us use `Python` and the transition matrix to answer these questions. First we write the transition matrix as a NumPy `ndarray`
```
P = np.array([[0.70, 0.30],
[0.45, 0.55]])
```
Second, we create an initial state vector
```
s0 = np.array([0, 1]) # sunny initial state
```
Then, the probabilities for tomorrow given that today is sunny can be computed as
$$
s_1 = s_0 P
$$
which is known as a one-step transition
```
np.dot(s0, P)
```
The probabilities three days from now can be computed as
$$
s_3 = s_2 P = s_1 P^2 = s_0 P^3
$$
which is known as a three-step transition. We only need to raise the matrix to the third power and multiply by the initial state
```
np.dot(s0, np.linalg.matrix_power(P, 3))
```
The forecast for one week from now is then the seven-step transition
```
np.dot(s0, np.linalg.matrix_power(P, 7))
```
Note that the state of our system begins to converge
```
np.dot(s0, np.linalg.matrix_power(P, 1000))
```
This is known as the stationary state of the chain.
## Stationary state of the Markov chain
If the Markov chain converges to a state, that state is called a **stationary state**. A chain can have more than one stationary state.
By definition, a stationary state satisfies
$$
s P = s
$$
which corresponds to an eigenvalue/eigenvector problem.
:::{note}
The stationary states are the eigenvectors of the system
:::
For the previous example we had
$$
\begin{pmatrix} s_1 & s_2 \end{pmatrix} P = \begin{pmatrix} s_1 & s_2 \end{pmatrix}
$$
which yields the following equations
$$
0.7 s_1 + 0.45 s_2 = s_1
$$
$$
0.3 s_1 + 0.55 s_2 = s_2
$$
Both tell us that $s_2 = \frac{2}{3} s_1$. If we also impose the normalization $s_1 + s_2 = 1$ we can solve and obtain
- $s_1 = 3/5 = 0.6$
- $s_2 = 0.4$
which is what we saw before. This tells us that it will rain on 60% of days, and the remaining 40% will be sunny.
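The same hand calculation can be done numerically: the stationary distribution is a left eigenvector of $P$ with eigenvalue $1$, i.e. a right eigenvector of $P^T$. A minimal sketch:

```python
import numpy as np

P = np.array([[0.70, 0.30],
              [0.45, 0.55]])

# Stationary distributions solve s P = s, so they are right eigenvectors
# of P transposed associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))  # locate the eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                      # normalize so the entries sum to 1
print(pi)  # ~[0.6, 0.4], matching the hand calculation
```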
## Transition probability after n steps
An interesting question to answer with a Markov chain is
> What is the probability of reaching state $j$, given that I am in state $i$, in exactly $n$ steps?
Consider for example
<img src="images/markov3.png" width="400">
where the transition matrix is clearly
$$
P = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\
1/8 & 3/4 & 1/8 \\
1/4 & 1/4 & 1/2\end{pmatrix}
$$
For this particular example,
> What is the probability of reaching state $2$ from state $0$ in 2 steps?
We can work this out as
$$
\begin{pmatrix} P_{00} & P_{01} & P_{02} \end{pmatrix} \begin{pmatrix} P_{02} \\ P_{12} \\ P_{22} \end{pmatrix} = P_{00}P_{02} + P_{01}P_{12} + P_{02}P_{22} = 0.28125
$$
which corresponds to the element in row $0$ and column $2$ of the matrix $P^2$
```
P = np.array([[1/2, 1/4, 1/4],
[1/8, 3/4, 1/8],
[1/4, 1/4, 1/2]])
np.dot(P, P)[0, 2]
```
:::{important}
In general, the probability of reaching state $j$ from state $i$ in $n$ steps equals the element in row $i$ and column $j$ of the matrix $P^n$
:::
What happens as $n$ tends to infinity?
```
display(np.linalg.matrix_power(P, 3),
np.linalg.matrix_power(P, 5),
np.linalg.matrix_power(P, 100))
```
Every row converges to the same values. This set of probabilities, denoted $\pi$, is known as the stationary distribution of the Markov chain. Note that the rows of $P^\infty$ converge only if the chain is irreducible (and aperiodic).
The element $\pi_j$ (that is, $P_{ij}^\infty$) gives the probability of being in state $j$ after infinitely many steps. Note that the index $i$ no longer matters: the starting point is no longer relevant.
## General algorithm to simulate a discrete Markov chain
Assuming we have a system with a discrete set of states $\mathcal{S}$ and that we know the transition probability matrix $P$, we can simulate its evolution with the following algorithm
1. Set $n=0$ and pick an initial state $X_n = i$
1. For $n = 1,2,\ldots,T$
    1. Take the row of $P$ corresponding to the current state $X_n$, that is $P[X_n, :]$
    1. Generate $X_{n+1}$ by sampling from a multinomial distribution with probability vector equal to the selected row
Here $T$ is the horizon of the simulation. Next we will see how to simulate a discrete Markov chain using Python.
Say we have a three-state chain and that the row of $P$ associated with $X_n$ is $[0.7, 0.2, 0.1]$. We can use `scipy.stats.multinomial` to draw a multinomial random variable and then take the argmax to obtain the index of the state $X_{n+1}$
```
import numpy as np
import scipy.stats

np.argmax(scipy.stats.multinomial.rvs(n=1, p=[0.7, 0.2, 0.1], size=1), axis=1)
```
If we repeat this 100 times, we obtain the following distribution for $X_{n+1}$
```
x = np.argmax(scipy.stats.multinomial.rvs(n=1, p=[0.7, 0.2, 0.1], size=100), axis=1)
counts, edges = np.histogram(x, range=(np.amin(x)-0.5, np.amax(x)+0.5), bins=len(np.unique(x)))
hv.Histogram((counts, edges), kdims='x', vdims='Frequency').opts(xticks=[0, 1, 2])
```
This matches the row of $P$ that we used.
Now that we know how to obtain the next state, let's try something a bit more involved.
Consider the weather-prediction example and let's simulate 1000 chains over a horizon of 10 steps
```
P = np.array([[0.70, 0.30],
              [0.45, 0.55]])
n_chains = 1000
horizon = 10
states = np.zeros(shape=(n_chains, horizon), dtype='int')
states[:, 0] = 1  # Initial state for every chain
for i in range(n_chains):
    for j in range(1, horizon):
        states[i, j] = np.argmax(scipy.stats.multinomial.rvs(n=1, p=P[states[i, j-1], :], size=1))
```
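As an aside, the multinomial-plus-argmax draw above is equivalent to sampling the next state index directly. A sketch using `numpy.random.Generator.choice` (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.70, 0.30],
              [0.45, 0.55]])

state = 1            # initial state
trajectory = [state]
for _ in range(9):   # horizon of 10 steps in total
    # draw the next state from the row of P indexed by the current state
    state = rng.choice(len(P), p=P[state])
    trajectory.append(state)
print(trajectory)
```

This avoids building a one-hot multinomial sample just to recover its index.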
The first three simulations are shown below as time series
```
p = []
for i in range(3):
    p.append(hv.Curve((states[i, :]), 'n', 'State').opts(yticks=[0, 1]))
hv.Overlay(p)
```
The most likely state at each step is shown below
```
n_states = len(np.unique(states))
hist = np.zeros(shape=(horizon, n_states))
for j in range(horizon):
    hist[j, :] = np.array([sum(states[:, j] == s) for s in range(n_states)])
hv.Curve((np.argmax(hist, axis=1)), 'n', 'Most likely state').opts(yticks=[0, 1])
```
## Law of large numbers for non-i.i.d. variables
We previously saw that the average of $N$ independent and identically distributed (iid) variables converges to its expected value when $N$ is large.
For example
$$
\lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^N X_i = \mu
$$
In this lesson we saw that the Markov chain, a stochastic process where the iid assumption does not hold, can in certain cases have a stationary distribution.
:::{note}
The **stationary distribution** $\pi$ of a Markov chain with transition matrix $P$ is such that $\pi P = \pi$
:::
**Ergodic theorem:** An irreducible and aperiodic Markov chain has a unique stationary distribution $\pi$, independent of the initial state, which satisfies
$$
\lim_{n\to \infty} s_j(n) = \pi_j
$$
where the components of $\pi$ represent the fraction of time the chain spends in each state after it has been observed for a long time.
:::{important}
The limit of observing the chain for a long time is analogous to computing static statistics over large samples. This is the equivalent of the law of large numbers for the Markov chain case.
:::
### Historical notes
- **The first law of large numbers:** [Jacob Bernoulli](https://en.wikipedia.org/wiki/Jacob_Bernoulli) proved the first version of the law of large numbers in his Ars Conjectandi of 1713. This first version assumes the random variables are iid. Bernoulli was a firm believer in fate: he opposed free will and advocated determinism in random phenomena.
- **The second law of large numbers:** In 1913 the Russian mathematician [Andrei Markov](https://en.wikipedia.org/wiki/Andrey_Markov) celebrated the bicentennial of Bernoulli's famous proof by organizing a symposium at which he presented his new version of the law of large numbers, which applies to the class of stochastic processes we now call Markov processes, thereby extending Bernoulli's result to a non-iid case.
- **The dispute between Markov and Nekrasov:** At the time, Markov was in a dispute with another Russian mathematician, [Pavel Nekrasov](https://en.wikipedia.org/wiki/Pavel_Nekrasov). Nekrasov had previously published that "independence is a necessary condition for the law of large numbers to hold". Nekrasov maintained that human behavior, not being iid, could not be governed by the law of large numbers, that is, that humans act voluntarily and with free will. Markov reacted to this claim by developing a counterexample that ended up becoming what we now know as Markov processes.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
np.random.seed(0)
```
# Model definition
Let *k-nearest neighbors* of a new example $x$ be the $k$ examples out of the training set $X$ that minimize the distance function $d$ between $x$ and themselves.
For *classification* we can take the k-nearest neighbors of $x$ and assign the most popular class between them to $x$.
For *regression* we can take the k-nearest neighbors of $x$ and assign the average of these data points' targets to $x$. We could also use an inverse distance weighted average.
```
def get_k_nearest_neighbors(x, X, y, dist, k):
    sorted_X = sorted(zip(X, y), key=dist(x))
    return list(zip(*sorted_X[:k]))  # [(training examples), (corresponding targets)]
```
Let A and B be two $n$-row column vectors.
Let's define a few distance functions:
1. Euclidean distance: $d(A, B) = \Vert {A - B}\Vert_2 = \sqrt{\displaystyle \sum_{i=1}^{n}(A_i - B_i)^2}$
2. Manhattan distance: $d(A, B) = \Vert {A - B}\Vert_1 = \displaystyle \sum_{i=1}^{n} \vert A_i - B_i \vert$
3. Chebyshev distance: $d(A, B) = \displaystyle \max_{i} \vert A_i - B_i \vert$
```
def d_euclidean(x):
    def d(Xi):
        return np.sqrt(np.sum((x - Xi[0]) ** 2))
    return d

def d_manhattan(x):
    def d(Xi):
        return np.sum(np.abs(x - Xi[0]))
    return d

def d_chebyshev(x):
    def d(Xi):
        return np.max(np.abs(x - Xi[0]))
    return d
```
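A quick numeric check of the three definitions on a concrete pair of vectors:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 6.0, 3.0])
d_e = np.sqrt(np.sum((A - B) ** 2))  # Euclidean
d_m = np.sum(np.abs(A - B))          # Manhattan
d_c = np.max(np.abs(A - B))          # Chebyshev
print(d_e, d_m, d_c)  # 5.0 7.0 4.0
```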
Let's define the classification and regression functions now.
Let $X_{train}$ be the training set ($X$ in previous cells), $X_{test}$ be the test set (each row contains an example to classify), $y_{train}$ be the targets for the training set.
```
from scipy.stats import mode

def knn_classification(X_train, y_train, X_test, dist=d_euclidean, k=3):
    classes = []
    for x in X_test:
        k_nearest_neighbors, targets = get_k_nearest_neighbors(x, X_train, y_train, dist, k)
        classes.append(mode(targets)[0][0])
    return np.array(classes).reshape(-1, 1)

def knn_regression(X_train, y_train, X_test, dist=d_euclidean, k=3):
    avg_targets = []
    for x in X_test:
        k_nearest_neighbors, targets = get_k_nearest_neighbors(x, X_train, y_train, dist, k)
        avg_targets.append(np.mean(targets))
    return np.array(avg_targets).reshape(-1, 1)
```
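The inverse-distance-weighted variant mentioned earlier can be sketched as follows; `knn_idw_regression`, the toy data, and the `eps` smoothing term are illustrative additions, not part of the original code:

```python
import numpy as np

def knn_idw_regression(X_train, y_train, X_test, k=3, eps=1e-12):
    """Inverse-distance-weighted KNN regression (hypothetical variant)."""
    preds = []
    for x in X_test:
        d = np.sqrt(np.sum((X_train - x) ** 2, axis=1))  # Euclidean distance to every training point
        idx = np.argsort(d)[:k]                          # indices of the k nearest neighbors
        w = 1.0 / (d[idx] + eps)                         # inverse-distance weights (eps avoids division by zero)
        preds.append(np.sum(w * y_train[idx].ravel()) / np.sum(w))
    return np.array(preds).reshape(-1, 1)

X_toy = np.array([[0.0], [1.0], [2.0], [3.0]])
y_toy = np.array([0.0, 1.0, 4.0, 9.0])
pred = knn_idw_regression(X_toy, y_toy, np.array([[1.1]]), k=2)
print(pred)  # much closer to 1.0 than to 4.0, since x=1.1 is far nearer to the x=1 neighbor
```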
# K-Nearest Neighbors in practice
## Classification
### 1. Generating data
```
from sklearn.datasets import make_blobs
X_train, y_train = make_blobs(n_samples=100, centers=3, n_features=2, random_state=1)
sns.scatterplot(x=0, y=1, hue=y_train, data=pd.DataFrame(X_train))
X_test = np.array([[-10, -1], [0, 0], [-6, -10], [-8, -6], [-5, 0]]) # some random points on the scatterplot
plt.scatter(x=X_test[:, 0], y=X_test[:, 1], marker='X', s=20 ** 2)
plt.show()
```
### 2. Training the model
```
sns.scatterplot(x=0, y=1, hue=y_train, data=pd.DataFrame(X_train))
y_test = knn_classification(X_train, y_train, X_test)
sns.scatterplot(x=0, y=1, hue=y_test.reshape(-1), data=pd.DataFrame(X_test), legend=False, marker='X', s=20 ** 2)
plt.show()
```
## Regression
### 1. Generating data
```
m = 50
X_train = np.linspace(-5, 5, m).reshape(-1, 1)
y_train = -4 * X_train ** 2 - 3.5 * X_train + 7.2
noise = np.random.normal(0, 10, m).reshape(-1, 1)
y_train += noise
plt.plot(X_train, y_train, 'b.')
plt.show()
```
### 2. Training the model
```
plt.plot(X_train, y_train, 'b.')
y_test = knn_regression(X_train, y_train, X_train)
plt.plot(X_train, y_test, 'r')
plt.show()
```
# The R Programming Language
1. **R**: Popular **open-source programming language** for statistical analysis
2. Widely used in statistics and econometrics
3. **User-friendly and powerful IDE**: [RStudio](https://www.rstudio.com/)
4. Basic functionalities of **R** can be extended by **packages**
5. Large number of packages available on the
[Comprehensive R Archive Network](https://cran.r-project.org/) (CRAN)
6. **Goal of this presentation:** Illustrate how to use `R` for the estimation of a
Poisson regression model
```
# install.packages("psych")
# install.packages("wooldridge")
# install.packages("xtable")
```
## Count data models
**Count data** models are used to explain dependent variables that are natural
numbers, i.e., positive integers such that $y_i \in \mathbb{N}$, where
$\mathbb{N} = \{0, 1, 2,\ldots\}$.
Count data models are frequently used in economics to study **countable events**:
Number of years of education, number of patent applications filed by companies,
number of doctor visits, number of crimes committed in a given city, etc.
The **Poisson model** is a popular count data model.
## Poisson regression model
Given a parameter $\lambda_i > 0$, the **Poisson model** assumes that the
probability of observing $Y_i=y_i$, where $y_i\in\mathbb{N}$, is equal to:
$$Prob(Y_i = y_i \mid \lambda_i) = \frac{\lambda_i^{y_i}\exp\{-\lambda_i\}}{y_i!},$$
for $i=1,\ldots,N$.
The mean and the variance of $Y_i$ are equal to the parameter $\lambda_i$:
$$E(Y_i\mid\lambda_i) = V(Y_i\mid\lambda_i) = \lambda_i,$$
implying *equi-dispersion* of the data.
To control for **observed characteristics**, the parameter $\lambda_i$ can be
parametrized as follows (implying $\lambda_i > 0$):
$$E(Y_i|X_i,\beta) \equiv \lambda_i = \exp\{X_i'\beta\},$$
where $X_i$ is a vector containing the covariates.
## Simulating data
`R` function simulating data from Poisson regression model:
```
simul_poisson <- function(n, beta) {
  k <- length(beta)                # number of covariates
  x <- replicate(k - 1, rnorm(n))  # simulate covariates
  x <- cbind(1, x)                 # for intercept term
  lambda <- exp(x %*% beta)        # individual means
  y <- rpois(n, lambda)            # simulate count
  return(data.frame(y, x))         # return variables
}
```
Using function to generate data:
```
set.seed(123)
nobs <- 1000
beta <- c(-.5, .4, -.7)
data <- simul_poisson(nobs, beta)
```
## Data description
Descriptive statistics:
```
# extract variables of interest from data set
y <- data[, 1]
x <- as.matrix(data[, 2:4])
# descriptive statistics
library(psych)
describe(data)
```
## Data Description
Histogram of count variable:
```
barplot(table(y))
```
## Data Description
Relationship between count variable and covariates:
```
par(mfrow = c(1, 2))
plot(y, x[, 2])
plot(y, x[, 3])
```
## Likelihood Function and ML Estimator
Individual contribution to the likelihood function:
$$L_i(\beta;y_i,x_i) = \frac{\exp\{y_ix_i\beta\}\exp\{-\exp\{x_i\beta\}\}}{y_i!}$$
Individual log-Likelihood function:
$$\ell_i(\beta;y_i,x_i) = \log L_i(\beta;y_i,x_i)
= y_ix_i\beta - \exp\{x_i\beta\} - \log(y_i!)$$
Maximum Likelihood Estimator:
$$\hat{\beta}_{\text{MLE}} = \arg\max_{\beta} \sum_{i=1}^N \ell(\beta;y,X)$$
Optimization (using *minimization* of objective function):
$$\hat{\beta}_{\text{MLE}} = \arg\min_{\beta} Q(\beta;y,X) \qquad
Q(\beta;y,X) = -\frac{1}{N}\sum_{i=1}^N \ell_i(\beta;y_i,x_i)$$
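For reference, the gradient of the individual log-likelihood, which is useful for checking gradient-based optimizers, follows directly from the expression for $\ell_i$ above:
$$
\frac{\partial \ell_i(\beta;y_i,x_i)}{\partial \beta} = \left(y_i - \exp\{x_i\beta\}\right)x_i'
$$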
## Coding the Objective Function
```
# Objective function of Poisson regression model
obj_poisson <- function(beta, y, x) {
  lambda <- x %*% beta
  llik <- y * lambda - exp(lambda) - lfactorial(y)
  return(-mean(llik))
}

# Evaluating objective function
beta0 <- c(1, 2, 3)
obj_poisson(beta0, y, x)
```
## Maximizing the Objective Function
Set starting values:
```
beta0 <- rep(0, length(beta))
```
Optimize using quasi-Newton method (BFGS algorithm):
```
opt <- optim(beta0, obj_poisson, method = "BFGS",
y = y, x = x)
```
Show results:
```
cat("ML estimates:", opt$par,
"\nObjective function:", opt$value, "\n")
```
## Comparing Results to Built-in Function
```
opt_glm <- glm(y ~ 0 + x, family = poisson)
summary(opt_glm)
```
## Comparing Results to Built-in Function
Collect results from the two approaches to compare them:
```
res <- cbind("True" = beta, "MLE" = opt$par,
"GLM" = opt_glm$coefficients)
row.names(res) <- c("constant", "x1", "x2")
res
```
**Question:** Our results (`MLE`) are virtually the same as those obtained
with the built-in function `glm()`, but not identical. Where do the small
differences come from?
## Empirical Illustration
**Goal:** Investigate the determinants of fertility.
Poisson regression model used to estimate the relationship between explanatory
variables and count outcome variable.
Both our estimator coded from scratch and `R` built-in function will be used.
## Data
**Source:** Botswana's 1988 Demographic and Health Survey.
Data set borrowed from Wooldridge:
```
library(wooldridge)
data(fertil2)
```
Outcome variable: Total number of living children:
```
y_lab <- "children"
```
Explanatory variables: Education, age, marital status, living in urban area,
having electricity/TV at home:
```
x_lab <- c("educ", "age", "agesq", "evermarr", "urban",
"electric", "tv")
```
## Loading data
Selecting variables and removing missing values:
```
data <- fertil2[, c(y_lab, x_lab)]
data <- na.omit(data)
```
Show first 6 observations on first 8 variables:
```
head(data[, 1:8], n = 6)
```
## Descriptive Statistics
```
library(psych)
describe(data)
```
## Plot
```
attach(data)
par(mfrow = c(1, 2))
blue_transp <- adjustcolor("blue", alpha.f = 0.1)
plot(age, children, pch = 19, col = blue_transp)
plot(educ, children, pch = 19, col = blue_transp)
```
## MLE of the Poisson Model
Maximum likelihood function using built-in function `glm()`:
```
mle <- glm(children ~ educ + age + agesq + evermarr +
urban + electric + tv,
family = "poisson", data = data)
```
Maximum likelihood function using our own function:
```
y <- data[, y_lab]
x <- as.matrix(data[, x_lab])
x <- cbind(1, x) # for intercept term
beta0 <- rep(0, ncol(x)) # starting values
opt <- optim(beta0, obj_poisson, method = "BFGS",
y = y, x = x)
```
## MLE of the Poisson Model
**Results different from `glm()`?**
Optimization algorithms are iterative methods that rely on different criteria
to determine if/when the optimum has been reached.
**For example:** Change in the objective function, change in the parameter values,
change in the gradient, step size, etc.
*[More in Advanced Microeconometrics course].*
**Try to adjust tuning parameters**, for example add
`control = list(ndeps = rep(1e-8, ncol(x)))` to `optim()` to change step size
of gradient approximation.
## Summarizing the Empirical Results
```
summary(mle)
```
## Fitted Values
```
plot(density(mle$fitted.values),
main = "Density of fitted mean values")
```
## Formatting the results
```
library(xtable)
xtable(mle)
```
```
####################################################################################################
# Copyright 2019 Srijan Verma and EMBL-European Bioinformatics Institute
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
```
# Ensembl numeric data extraction (negative)
## The function below reads Ensembl IDs from a .csv file and converts them into a Python list in JSON format
```
import pandas as pd
import numpy as np
import json

def csv_to_id(path):
    df = pd.read_csv(path)
    ids = df.TEST_neg.tolist()
    for loc in ids:
        loc = str(loc)  # 'nan' is converted to a string for the comparison below
        if loc != 'nan':
            cleaned_ids.append(loc)
    cleaned = json.dumps(cleaned_ids)
    correct_format = "{" + '"ids": ' + cleaned + "}"
    return correct_format

cleaned_ids = []
path = '/Training set.example.csv'
cleaned_IDs = csv_to_id(path)
#print(cleaned_IDs)
```
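As a side note, the manual string concatenation in `csv_to_id` can be avoided by serializing a dict in one step; a sketch using the two example IDs that appear in the comments of the next cell:

```python
import json

ids = ["ENSG00000255689", "ENSG00000254443"]  # example Ensembl IDs
payload = json.dumps({"ids": ids})             # builds the whole {"ids": [...]} string at once
print(payload)  # {"ids": ["ENSG00000255689", "ENSG00000254443"]}
```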
## Passing the list to the Ensembl REST API to get a JSON response
```
# Single request, multiple IDs
import requests, sys
import json, urllib
server = "https://rest.ensembl.org"
ext = '/lookup/id/?format=full;expand=1;utr=1;phenotypes=1'
#ext = '/lookup/id/?
headers = {'Content-Type' : 'application/json', "Accept" : 'application/json'}
#'{"ids" : ["ENSG00000255689", "ENSG00000254443"]}'
#cleaned_IDs = {"ids": ["ENSG00000255689", "ENSG00000254443"]}
r = requests.post(server+ext,headers=headers, data='{0}'.format(cleaned_IDs))
print(str(r))
print(type(r))
decoded = r.json()
#print(repr(decoded))
```
## Saving the JSON response to the local machine and then loading the .json file
```
import json

with open('/negative_data.json', 'w') as outfile:
    json.dump(decoded, outfile, indent=4, sort_keys=True)
with open('/negative_data.json') as access_json:
    read_content = json.load(access_json)
```
## The 'read_content' variable contains the JSON response received
```
gene_display_name = []
gene_start = []
gene_end = []
gene_strand = []
gene_seq_region_name = []
gene_biotype = []
```
## The function below [get_gene_data()] extracts the following 'gene' data:
1. gene display_name
2. gene start
3. gene end
4. gene strand
5. gene seq_region_name
6. gene biotype
```
def get_gene_data():
    count = 0
    for i in range(len(cleaned_ids)):
        gene_display_name.append(read_content[cleaned_ids[i]]['display_name'])
        gene_start.append(read_content[cleaned_ids[i]]['start'])
        gene_end.append(read_content[cleaned_ids[i]]['end'])
        gene_strand.append(read_content[cleaned_ids[i]]['strand'])
        gene_seq_region_name.append(read_content[cleaned_ids[i]]['seq_region_name'])
        gene_biotype.append(read_content[cleaned_ids[i]]['biotype'])
        if cleaned_ids[i] in read_content:
            count = count + 1
    print(count)

get_gene_data()
print('No. of contents of gene_start is {0}'.format(len(gene_start)))
print('No. of contents of gene_end is {0}'.format(len(gene_end)))
print('No. of contents of gene_strand is {0}'.format(len(gene_strand)))
print('No. of contents of gene_seq_region_name is {0}'.format(len(gene_seq_region_name)))
print('No. of contents of gene_display_name is {0}'.format(len(gene_display_name)))
print('No. of contents of gene_biotype is {0}'.format(len(gene_biotype)))
no_of_transcripts = []
gene_ids_for_transcripts = []
```
## The function below [get_no_of_transcripts()] counts the transcripts in each gene
```
def get_no_of_transcripts():
    for i in range(len(cleaned_ids)):
        no_of_transcripts.append(len(read_content[cleaned_ids[i]]['Transcript']))
        for k in range(len(read_content[cleaned_ids[i]]['Transcript'])):
            gene_ids_for_transcripts.append(cleaned_ids[i])
    for j in range(len(cleaned_ids)):
        print('No. of transcripts in gene "{0}" are {1}'.format(cleaned_ids[j], no_of_transcripts[j]))

get_no_of_transcripts()
#read_content[cleaned_ids[0]]['Transcript'][0]
transcript_id = []
transcript_start = []
transcript_end = []
transcript_biotype = []
#gene_ids_for_transcripts
```
## The function below [get_transcript_data()] extracts the following 'transcript' data:
1. transcript id
2. transcript start
3. transcript end
4. transcript biotype
```
def get_transcript_data():
    for i in range(len(cleaned_ids)):
        for j in range(len(read_content[cleaned_ids[i]]['Transcript'])):
            transcript_id.append(read_content[cleaned_ids[i]]['Transcript'][j]['id'])
            transcript_start.append(read_content[cleaned_ids[i]]['Transcript'][j]['start'])
            transcript_end.append(read_content[cleaned_ids[i]]['Transcript'][j]['end'])
            transcript_biotype.append(read_content[cleaned_ids[i]]['Transcript'][j]['biotype'])
    for k in range(len(gene_ids_for_transcripts)):
        print('Transcript "{0}" of gene ID "{1}" has start and end as : "{2}" & "{3}"'.format(transcript_id[k], gene_ids_for_transcripts[k], transcript_start[k], transcript_end[k]))

get_transcript_data()
print(len(transcript_id))
print(len(transcript_start))
print(len(transcript_end))
print(len(gene_ids_for_transcripts))
len(read_content[cleaned_ids[0]]['Transcript'][0]["Exon"])
no_of_exons = []
transcript_ids_for_exons = []
```
## The function below [get_no_of_exons()] counts the exons in each transcript
```
def get_no_of_exons():
    for i in range(len(cleaned_ids)):
        for j in range(len(read_content[cleaned_ids[i]]['Transcript'])):
            no_of_exons.append(len(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"]))
            for k in range(len(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"])):
                transcript_ids_for_exons.append(read_content[cleaned_ids[i]]['Transcript'][j]['id'])
    for l in range(len(cleaned_ids)):
        print('No. of exons in transcript "{0}" are {1}'.format(transcript_id[l], no_of_exons[l]))

len(read_content[cleaned_ids[0]]['Transcript'][0]["Exon"])
get_no_of_exons()
sum(no_of_exons)
len(transcript_ids_for_exons)
#read_content[cleaned_ids[0]]['Transcript'][0]["Exon"][0]
exon_id = []
exon_start = []
exon_end = []
gene_ids_for_exons = []
```
## The function below [get_exon_data()] extracts the following 'exon' data:
1. exon id
2. exon start
3. exon end
```
def get_exon_data():
    for i in range(len(cleaned_ids)):
        for j in range(len(read_content[cleaned_ids[i]]['Transcript'])):
            for k in range(len(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"])):
                exon_id.append(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['id'])
                exon_start.append(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['start'])
                exon_end.append(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['end'])
                gene_ids_for_exons.append(cleaned_ids[i])
    for l in range(len(transcript_ids_for_exons)):
        print('Exon "{0}" of Transcript ID "{1}" having gene ID "{2}" has start and end as : "{3}" & "{4}"'.format(exon_id[l], transcript_ids_for_exons[l], gene_ids_for_exons[l], exon_start[l], exon_end[l]))

get_exon_data()
len(exon_id)
len(gene_ids_for_exons)
transcript_len = []
```
## The function below [get_transcript_length()] calculates the length of each transcript
```
def get_transcript_length():
    for i in range(len(cleaned_ids)):
        for j in range(len(read_content[cleaned_ids[i]]['Transcript'])):
            total_exon_len = 0
            for k in range(len(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"])):
                start = read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['start']
                end = read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['end']
                total_exon_len = total_exon_len + (end - start + 1)
            transcript_len.append(total_exon_len)
    for k in range(len(transcript_id)):
        print('Transcript ID "{0}" has length of {1} bps'.format(transcript_id[k], transcript_len[k]))

len(transcript_id)
get_transcript_length()
len(transcript_len)
transcript_len[-1]
transcript_id[-1]
exon_len = []
```
## The function below [get_exon_length()] calculates the length of each exon
```
def get_exon_length():
    for i in range(len(cleaned_ids)):
        for j in range(len(read_content[cleaned_ids[i]]['Transcript'])):
            for k in range(len(read_content[cleaned_ids[i]]['Transcript'][j]["Exon"])):
                start = read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['start']
                end = read_content[cleaned_ids[i]]['Transcript'][j]["Exon"][k]['end']
                exon_len.append(end - start + 1)
    for k in range(len(exon_id)):
        print('Exon ID "{0}" has length of {1} bps'.format(exon_id[k], exon_len[k]))

get_exon_length()
len(exon_len)
len(exon_id)
```
## Exporting gene data to gene_data.csv file
```
import csv
import pandas as pd

header = ['SNO', 'Gene ID', 'Display Name', 'Biotype', 'Start', 'End', 'Strand', 'Seq region Name', 'No. of Transcripts']
path = '/negative_data/gene_data.csv'
with open(path, 'wt', newline='') as file:
    writer = csv.writer(file, delimiter=',')
    writer.writerow(i for i in header)
s_no = []
for i in range(len(cleaned_ids)):
    s_no.append(i + 1)
df = pd.read_csv(path)
df[df.columns[0]] = s_no
df[df.columns[1]] = cleaned_ids
df[df.columns[2]] = gene_display_name
df[df.columns[3]] = gene_biotype
df[df.columns[4]] = gene_start
df[df.columns[5]] = gene_end
df[df.columns[6]] = gene_strand
df[df.columns[7]] = gene_seq_region_name
df[df.columns[8]] = no_of_transcripts
df.to_csv(path)
```
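The write-then-patch pattern above (write a header-only CSV, read it back, assign columns) can be replaced by building the `DataFrame` in one step; the values below are made up purely for illustration:

```python
import pandas as pd

# hypothetical values standing in for the lists built earlier
gene_ids = ["ENSG00000255689", "ENSG00000254443"]
gene_start = [100, 200]
gene_end = [150, 260]

df = pd.DataFrame({"SNO": range(1, len(gene_ids) + 1),
                   "Gene ID": gene_ids,
                   "Start": gene_start,
                   "End": gene_end})
# df.to_csv("gene_data.csv", index=False)  # single write, no read-back needed
print(df.shape)  # (2, 4)
```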
## Exporting transcript data to transcript_data.csv file
```
import csv
import pandas as pd

header = ['SNO', 'Gene ID', 'Transcript ID', 'Biotype', 'Transcript Start', 'Transcript End', 'Transcript Length', 'No. of Exons']
path = '/negative_data/transcript_data.csv'
with open(path, 'wt', newline='') as file:
    writer = csv.writer(file, delimiter=',')
    writer.writerow(i for i in header)
s_no = []
for i in range(len(transcript_id)):
    s_no.append(i + 1)
df = pd.read_csv(path)
df[df.columns[0]] = s_no
df[df.columns[1]] = gene_ids_for_transcripts
df[df.columns[2]] = transcript_id
df[df.columns[3]] = transcript_biotype
df[df.columns[4]] = transcript_start
df[df.columns[5]] = transcript_end
df[df.columns[6]] = transcript_len
df[df.columns[7]] = no_of_exons
df.to_csv(path)
```
## Exporting exon data to exon_data.csv file
```
import csv
import pandas as pd

header = ['SNO', 'Gene ID', 'Transcript ID', 'Exon ID', 'Exon Start', 'Exon End', 'Exon Length']
path = '/negative_data/exon_data.csv'
with open(path, 'wt', newline='') as file:
    writer = csv.writer(file, delimiter=',')
    writer.writerow(i for i in header)
s_no = []
for i in range(len(exon_id)):
    s_no.append(i + 1)
df = pd.read_csv(path)
df[df.columns[0]] = s_no
df[df.columns[1]] = gene_ids_for_exons
df[df.columns[2]] = transcript_ids_for_exons
df[df.columns[3]] = exon_id
df[df.columns[4]] = exon_start
df[df.columns[5]] = exon_end
df[df.columns[6]] = exon_len
df.to_csv(path)
```
# Running attribute inference attacks on Regression Models
In this tutorial we will show how to run black-box attribute inference attacks on a regression model. This will be demonstrated on the diabetes dataset, which is loaded below via ART's `load_diabetes` utility.
## Preliminaries
In order to mount a successful attribute inference attack, the attacked feature must be categorical, with a relatively small number of possible values (preferably binary).
In the case of the diabetes dataset, the sensitive feature we want to infer is the 'sex' feature, which is a binary feature.
## Load data
```
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
from art.utils import load_diabetes
(x_train, y_train), (x_test, y_test), _, _ = load_diabetes(test_set=0.5)
```
## Train decision tree model
```
from sklearn.tree import DecisionTreeRegressor
from art.estimators.regression.scikitlearn import ScikitlearnRegressor
model = DecisionTreeRegressor()
model.fit(x_train, y_train)
art_regressor = ScikitlearnRegressor(model)
print('Base model score: ', model.score(x_test, y_test))
```
## Attack
### Black-box attack
The black-box attack basically trains an additional classifier (called the attack model) to predict the attacked feature's value from the remaining n-1 features as well as the original (attacked) model's predictions.
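The idea can be sketched without ART at all: any classifier can serve as the attack model, mapping (remaining features, target model's prediction) to the sensitive feature. A toy illustration with synthetic data and a 1-nearest-neighbor attack model; all names and data here are hypothetical, not ART's API:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 2 visible features whose means leak a binary sensitive feature
n = 200
sensitive = rng.integers(0, 2, n)
visible = rng.normal(sensitive[:, None], 1.0, size=(n, 2))
target_model_pred = visible.sum(axis=1)  # stand-in for the attacked model's output
attack_inputs = np.column_stack([visible, target_model_pred])

# 1-NN attack model: predict the sensitive bit of the nearest "training" row
train, test = attack_inputs[:100], attack_inputs[100:]
train_s, test_s = sensitive[:100], sensitive[100:]
nearest = np.argmin(((test[:, None, :] - train[None, :, :]) ** 2).sum(-1), axis=1)
acc = np.mean(train_s[nearest] == test_s)
print(acc)  # above the 0.5 chance level, because the visible features leak the sensitive bit
```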
#### Train attack model
```
import numpy as np
from art.attacks.inference.attribute_inference import AttributeInferenceBlackBox
attack_train_ratio = 0.5
attack_train_size = int(len(x_train) * attack_train_ratio)
attack_x_train = x_train[:attack_train_size]
attack_y_train = y_train[:attack_train_size]
attack_x_test = x_train[attack_train_size:]
attack_y_test = y_train[attack_train_size:]
attack_feature = 1 # sex
# get original model's predictions
attack_x_test_predictions = np.array([np.argmax(arr) for arr in art_regressor.predict(attack_x_test)]).reshape(-1,1)
# only attacked feature
attack_x_test_feature = attack_x_test[:, attack_feature].copy().reshape(-1, 1)
# training data without attacked feature
attack_x_test = np.delete(attack_x_test, attack_feature, 1)
bb_attack = AttributeInferenceBlackBox(art_regressor, attack_feature=attack_feature)
# train attack model
bb_attack.fit(attack_x_train)
```
#### Infer sensitive feature and check accuracy
```
# get inferred values
values = [-0.88085106, 1.]
inferred_train_bb = bb_attack.infer(attack_x_test, pred=attack_x_test_predictions, values=values)
# check accuracy
train_acc = np.sum(inferred_train_bb == np.around(attack_x_test_feature, decimals=8).reshape(1,-1)) / len(inferred_train_bb)
print(train_acc)
```
This means that for 56% of the training set, the attacked feature is inferred correctly using this attack.
Now let's check the precision and recall:
```
def calc_precision_recall(predicted, actual, positive_value=1):
    score = 0  # both predicted and actual are positive
    num_positive_predicted = 0  # predicted positive
    num_positive_actual = 0  # actual positive
    for i in range(len(predicted)):
        if predicted[i] == positive_value:
            num_positive_predicted += 1
        if actual[i] == positive_value:
            num_positive_actual += 1
        if predicted[i] == actual[i]:
            if predicted[i] == positive_value:
                score += 1
    if num_positive_predicted == 0:
        precision = 1
    else:
        precision = score / num_positive_predicted  # the fraction of predicted "Yes" responses that are correct
    if num_positive_actual == 0:
        recall = 1
    else:
        recall = score / num_positive_actual  # the fraction of "Yes" responses that are predicted correctly
    return precision, recall

print(calc_precision_recall(inferred_train_bb, np.around(attack_x_test_feature, decimals=8), positive_value=1.))
```
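A quick hand check of the precision/recall logic above on a four-sample toy vector:

```python
# toy check: 2 true positives, 1 false positive, 1 false negative
predicted = [1, 1, 1, 0]
actual = [1, 1, 0, 1]
tp = sum(p == a == 1 for p, a in zip(predicted, actual))
precision = tp / sum(predicted)  # 2/3: fraction of predicted positives that are correct
recall = tp / sum(actual)        # 2/3: fraction of actual positives that are found
print(precision, recall)
```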
To verify the significance of these results, we now run a baseline attack that uses only the remaining features to try to predict the value of the attacked feature, with no use of the model itself.
```
from art.attacks.inference.attribute_inference import AttributeInferenceBaseline
baseline_attack = AttributeInferenceBaseline(attack_feature=attack_feature)
# train attack model
baseline_attack.fit(attack_x_train)
# infer values
inferred_train_baseline = baseline_attack.infer(attack_x_test, values=values)
# check accuracy
baseline_train_acc = np.sum(inferred_train_baseline == np.around(attack_x_test_feature, decimals=8).reshape(1,-1)) / len(inferred_train_baseline)
print(baseline_train_acc)
```
In this case, the black-box attack does not do better than the baseline.
<h1><center>ERM with DNN under penalty of Equalized Odds</center></h1>
We implement here a regular Empirical Risk Minimization (ERM) of a Deep Neural Network (DNN) penalized to enforce an Equalized Odds constraint. More formally, given a dataset of size $n$ consisting of context features $x$, target $y$ and a sensitive information $z$ to protect, we want to solve
$$
\text{argmin}_{h\in\mathcal{H}}\frac{1}{n}\sum_{i=1}^n \ell(y_i, h(x_i)) + \lambda \chi^2|_1
$$
where $\ell$ is for instance the MSE and the penalty is
$$
\chi^2|_1 = \left\lVert\chi^2\left(\hat{\pi}(h(x)|y, z|y), \hat{\pi}(h(x)|y)\otimes\hat{\pi}(z|y)\right)\right\rVert_1
$$
where $\hat{\pi}$ denotes the empirical density estimated through a Gaussian KDE.
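For reference, the $\chi^2$ divergence between two densities $p$ and $q$ appearing in the penalty is
$$
\chi^2(p, q) = \int \frac{\left(p(u) - q(u)\right)^2}{q(u)}\, du
$$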
### The dataset
We use here the _communities and crimes_ dataset that can be found on the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/datasets/communities+and+crime). Non-predictive information, such as city name, state... have been removed and the file is at the arff format for ease of loading.
```
import sys, os
sys.path.append(os.path.abspath(os.path.join('../..')))
from examples.data_loading import read_dataset
x_train, y_train, z_train, x_test, y_test, z_test = read_dataset(name='crimes', fold=1)
n, d = x_train.shape
```
### The Deep Neural Network
We define a very simple DNN for regression here
```
from torch import nn
import torch.nn.functional as F

class NetRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(NetRegression, self).__init__()
        size = 50
        self.first = nn.Linear(input_size, size)
        self.fc = nn.Linear(size, size)
        self.last = nn.Linear(size, num_classes)

    def forward(self, x):
        out = F.selu(self.first(x))
        out = F.selu(self.fc(out))
        out = self.last(out)
        return out
```
### The fairness-inducing regularizer
We implement now the regularizer. The empirical densities $\hat{\pi}$ are estimated using a Gaussian KDE. The L1 functional norm is taken over the values of $y$.
$$
\chi^2|_1 = \left\lVert\chi^2\left(\hat{\pi}(x|z, y|z), \hat{\pi}(x|z)\otimes\hat{\pi}(y|z)\right)\right\rVert_1
$$
This is used to enforce the conditional independence $X \perp Y \,|\, Z$.
In practice, we will want to enforce $\text{prediction} \perp \text{sensitive} \,|\, \text{target}$.
```
import torch

from facl.independence.density_estimation.pytorch_kde import kde
from facl.independence.hgr import chi_2_cond

def chi_squared_l1_kde(X, Y, Z):
    return torch.mean(chi_2_cond(X, Y, Z, kde))
```
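As a sanity check on what conditional independence means here, a small NumPy illustration (synthetic data, with a narrow bin on the conditioning variable as a crude stand-in for the KDE machinery): two variables driven by a common factor are strongly correlated marginally, but nearly uncorrelated once we condition on that factor.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=5000)               # conditioning variable
x = z + 0.3 * rng.normal(size=5000)
y = z + 0.3 * rng.normal(size=5000)

marginal = np.corrcoef(x, y)[0, 1]      # strong marginal dependence
in_bin = np.abs(z) < 0.05               # crude conditioning: narrow bin of z
conditional = np.corrcoef(x[in_bin], y[in_bin])[0, 1]
print(marginal, conditional)
```

The penalty above pushes the trained model toward the second regime: prediction and sensitive attribute may co-vary, but not beyond what the target explains.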
### The fairness-penalized ERM
We now implement the full learning loop. The regression loss used is the quadratic loss with a L2 regularization and the fairness-inducing penalty.
```
import torch
import numpy as np
import torch.utils.data as data_utils
def regularized_learning(x_train, y_train, z_train, model, fairness_penalty, lr=1e-5, num_epochs=10):
    # wrap dataset in torch tensors
    Y = torch.tensor(y_train.astype(np.float32))
    X = torch.tensor(x_train.astype(np.float32))
    Z = torch.tensor(z_train.astype(np.float32))
    dataset = data_utils.TensorDataset(X, Y, Z)
    dataset_loader = data_utils.DataLoader(dataset=dataset, batch_size=200, shuffle=True)
    # mse regression objective
    data_fitting_loss = nn.MSELoss()
    # stochastic optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=0.01)

    for j in range(num_epochs):
        for i, (x, y, z) in enumerate(dataset_loader):
            def closure():
                optimizer.zero_grad()
                outputs = model(x).flatten()
                loss = data_fitting_loss(outputs, y)
                loss += fairness_penalty(outputs, z, y)
                loss.backward()
                return loss
            optimizer.step(closure)
    return model
```
### Evaluation
For the evaluation on the test set, we compute two metrics: the MSE (accuracy) and HGR$|_\infty$ (fairness).
```
from facl.independence.hgr import hgr_cond
def evaluate(model, x, y, z):
    Y = torch.tensor(y.astype(np.float32))
    Z = torch.tensor(z.astype(np.float32))
    X = torch.tensor(x.astype(np.float32))
    prediction = model(X).detach().flatten()
    loss = nn.MSELoss()(prediction, Y)
    hgr_infty = np.max(hgr_cond(prediction, Z, Y, kde))
    return loss.item(), hgr_infty
```
### Running everything together
```
model = NetRegression(d, 1)
num_epochs = 20
lr = 1e-5
# $\chi^2|_1$
penalty_coefficient = 1.0
penalty = chi_squared_l1_kde
model = regularized_learning(x_train, y_train, z_train, model=model,
                             fairness_penalty=penalty, lr=lr, num_epochs=num_epochs)
mse, hgr_infty = evaluate(model, x_test, y_test, z_test)
print("MSE:{} HGR_infty:{}".format(mse, hgr_infty))
```
| github_jupyter |
# Extract Wind Information from NREL WKT
By Mauricio Hernandez
Goal(s):
- Collect and download data from a set of wind stations using the NREL API Wind Toolkit Data Downloads.
- Get insights from wind speed and wind direction data
---
See documentation at: https://developer.nrel.gov/docs/wind/wind-toolkit/mexico-wtk.download/ <br>
See examples at: https://developer.nrel.gov/docs/wind/wind-toolkit/mexico-wtk.download/#examples
```
#Import libraries
import requests
#Read API_key
with open('NREL_API_Key.txt') as f:
    line = f.readline()
api_key = line.replace('\n', '')
# Alternatively, hardcode the key directly (this overrides the value read above):
api_key = 'RSbHTzGc9ChRkhKO3twc63rgZQ18Hkabcm67ca6o'
```
*Define longitude and latitude and other parameters*
```
lon = -100.432474084658
lat = 20.8333616168693
email = 'mmh54@duke.edu'
attr = 'windspeed_80m,winddirection_80m'
year = '2010'
```
## Option 1: Send HTTP requests
```
#Request format
#https://developer.nrel.gov//api/wind-toolkit/v2/wind/mexico-wtk-download.format?parameters
try:
    url = "https://developer.nrel.gov/api/wind-toolkit/v2/wind/mexico-wtk-download.json?api_key=%s&attributes=%s&names=%s&utc=false&leap_day=true&email=%s&wkt=POINT(%f %f)" % (api_key, attr, year, email, lon, lat)
    r = requests.get(url)
    print("HTML:\n", r.text)
except Exception:
    print("Invalid URL or an error occurred while making the GET request to the specified URL")
```
## Option 2: POST request, for when a very large WKT value is required
```
url = "https://developer.nrel.gov/api/wind-toolkit/v2/wind/mexico-wtk-download.json?api_key=%s" % (api_key)
polygon = '(-100.3555 20.5888, -100.3555 20.3444, -100.4555 20.3444, -100.3555 20.5888)'
#POLYGON instead of point
payload = 'attributes=%s&names=2014&utc=false&leap_day=true&email=%s&wkt=POLYGON(%s)' % (attr, email, polygon)
headers = {
'content-type': "application/x-www-form-urlencoded",
'cache-control': "no-cache"
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
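Whichever option is used, the reply body is text that can be handed straight to pandas. This hedged sketch parses a made-up, simplified reply (real WTK CSV downloads carry metadata header rows, so a `skiprows` argument is typically needed in practice):

```python
import io

import pandas as pd

# Stand-in for response.text from the requests above (layout assumed):
csv_text = "windspeed_80m,winddirection_80m\n5.2,270\n6.1,265\n"
df = pd.read_csv(io.StringIO(csv_text))
print(df['windspeed_80m'].mean())  # 5.65
```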
---
## Activity:
1. Use the client's location or your location and download the following data for the assigned year (2007-2014):
- windspeed_10m, windspeed_40m, windspeed_60m, windspeed_80m, windspeed_100m, winddirection_10m, winddirection_40m, winddirection_60m, winddirection_80m, winddirection_100m.
2. Obtain the descriptive statistics of the annual wind speed values grouped by height, i.e. average wind speed at 10 meters in 2007, average wind speed at 80 meters in 2007, and so on. Based on the data, answer the following questions:
- Does the average wind speed increase or decrease as the height increases?
- Does the variability of the wind speed increase or decrease as the height increases?
3. From step 2, select the data with the maximum and minimum annual average speeds (e.g. heights of 10m and 60m) and obtain the descriptive statistics of the wind directions. Compare the median values from each data subset: are they similar?
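A possible starting point for the grouped statistics, sketched on synthetic data (the column names follow the attribute list above; the distributions are invented purely to show the reshape-then-groupby pattern):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hours = 8760  # one year of hourly records
df = pd.DataFrame({
    'windspeed_10m': rng.gamma(2.0, 2.0, hours),
    'windspeed_80m': rng.gamma(2.5, 2.5, hours),
})
# Long format lets us group by measurement height:
long = df.melt(var_name='height', value_name='speed')
stats = long.groupby('height')['speed'].agg(['mean', 'std', 'median'])
print(stats)
```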
| github_jupyter |
# Estimating The Mortality Rate For COVID-19
> Using Country-Level Covariates To Correct For Testing & Reporting Biases And Estimate a True Mortality Rate.
- author: Joseph Richards
- image: images/corvid-mortality.png
- comments: true
- categories: [MCMC, mortality]
- permalink: /covid-19-mortality-estimation/
- toc: true
```
#hide
# ! pip install pymc3 arviz xlrd
#hide
# Setup and imports
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
from IPython.display import display, Markdown
#hide
# constants
ignore_countries = [
'Others',
'Cruise Ship'
]
cpi_country_mapping = {
'United States of America': 'US',
'China': 'Mainland China'
}
wb_country_mapping = {
'United States': 'US',
'Egypt, Arab Rep.': 'Egypt',
'Hong Kong SAR, China': 'Hong Kong',
'Iran, Islamic Rep.': 'Iran',
'China': 'Mainland China',
'Russian Federation': 'Russia',
'Slovak Republic': 'Slovakia',
'Korea, Rep.': 'Korea, South'
}
wb_covariates = [
('SH.XPD.OOPC.CH.ZS',
'healthcare_oop_expenditure'),
('SH.MED.BEDS.ZS',
'hospital_beds'),
('HD.HCI.OVRL',
'hci'),
('SP.POP.65UP.TO.ZS',
'population_perc_over65'),
('SP.RUR.TOTL.ZS',
'population_perc_rural')
]
#hide
# data loading and manipulation
from datetime import datetime
import os
import numpy as np
import pandas as pd
def get_all_data():
    '''
    Main routine that grabs all COVID and covariate data and
    returns them as a single dataframe that contains:
    * count of cumulative cases and deaths by country (by today's date)
    * days since first case for each country
    * CPI gov't transparency index
    * World Bank data on population, healthcare, etc. by country
    '''
    all_covid_data = _get_latest_covid_timeseries()
    covid_cases_rollup = _rollup_by_country(all_covid_data['Confirmed'])
    covid_deaths_rollup = _rollup_by_country(all_covid_data['Deaths'])
    todays_date = covid_cases_rollup.columns.max()
    # Create DataFrame with today's cumulative case and death count, by country
    df_out = pd.DataFrame({'cases': covid_cases_rollup[todays_date],
                           'deaths': covid_deaths_rollup[todays_date]})
    _clean_country_list(df_out)
    _clean_country_list(covid_cases_rollup)
    # Add observed death rate:
    df_out['death_rate_observed'] = df_out.apply(
        lambda row: row['deaths'] / float(row['cases']),
        axis=1)
    # Add covariate for days since first case
    df_out['days_since_first_case'] = _compute_days_since_first_case(
        covid_cases_rollup)
    # Add CPI covariate:
    _add_cpi_data(df_out)
    # Add World Bank covariates:
    _add_wb_data(df_out)
    # Drop any country w/o covariate data:
    num_null = df_out.isnull().sum(axis=1)
    to_drop_idx = df_out.index[num_null > 1]
    print('Dropping %i/%i countries due to lack of data' %
          (len(to_drop_idx), len(df_out)))
    df_out.drop(to_drop_idx, axis=0, inplace=True)
    return df_out, todays_date
def _get_latest_covid_timeseries():
    ''' Pull latest time-series data from JHU CSSE database '''
    repo = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/'
    data_path = 'csse_covid_19_data/csse_covid_19_time_series/'
    all_data = {}
    for status in ['Confirmed', 'Deaths', 'Recovered']:
        file_name = 'time_series_19-covid-%s.csv' % status
        all_data[status] = pd.read_csv(
            '%s%s%s' % (repo, data_path, file_name))
    return all_data
def _rollup_by_country(df):
    '''
    Roll up each raw time-series by country, adding up the cases
    across the individual states/provinces within the country
    :param df: Pandas DataFrame of raw data from CSSE
    :return: DataFrame of country counts
    '''
    gb = df.groupby('Country/Region')
    df_rollup = gb.sum()
    df_rollup.drop(['Lat', 'Long'], axis=1, inplace=True, errors='ignore')
    # Drop dates with all 0 count data
    df_rollup.drop(df_rollup.columns[df_rollup.sum(axis=0) == 0],
                   axis=1,
                   inplace=True)
    # Convert column strings to dates:
    idx_as_dt = [datetime.strptime(x, '%m/%d/%y') for x in df_rollup.columns]
    df_rollup.columns = idx_as_dt
    return df_rollup
def _clean_country_list(df):
    ''' Clean up input country list in df '''
    # handle recent changes in country names:
    country_rename = {
        'Hong Kong SAR': 'Hong Kong',
        'Taiwan*': 'Taiwan',
        'Czechia': 'Czech Republic',
        'Brunei': 'Brunei Darussalam',
        'Iran (Islamic Republic of)': 'Iran',
        'Viet Nam': 'Vietnam',
        'Russian Federation': 'Russia',
        'Republic of Korea': 'South Korea',
        'Republic of Moldova': 'Moldova',
        'China': 'Mainland China'
    }
    df.rename(country_rename, axis=0, inplace=True)
    df.drop(ignore_countries, axis=0, inplace=True, errors='ignore')
def _compute_days_since_first_case(df_cases):
    ''' Compute the country-wise days since first confirmed case
    :param df_cases: country-wise time-series of confirmed case counts
    :return: Series of country-wise days since first case
    '''
    date_first_case = df_cases[df_cases > 0].idxmin(axis=1)
    days_since_first_case = date_first_case.apply(
        lambda x: (df_cases.columns.max() - x).days)
    # Add 1 month for China, since outbreak started late 2019:
    days_since_first_case.loc['Mainland China'] += 30
    return days_since_first_case
def _add_cpi_data(df_input):
    '''
    Add the Government transparency (CPI - corruption perceptions index)
    data (by country) as a column in the COVID cases dataframe.
    :param df_input: COVID-19 data rolled up country-wise
    :return: None, add CPI data to df_input in place
    '''
    cpi_data = pd.read_excel(
        'https://github.com/jwrichar/COVID19-mortality/blob/master/data/CPI2019.xlsx?raw=true',
        skiprows=2)
    cpi_data.set_index('Country', inplace=True, drop=True)
    cpi_data.rename(cpi_country_mapping, axis=0, inplace=True)
    # Add CPI score to input df:
    df_input['cpi_score_2019'] = cpi_data['CPI score 2019']
def _add_wb_data(df_input):
    '''
    Add the World Bank data covariates as columns in the COVID cases dataframe.
    :param df_input: COVID-19 data rolled up country-wise
    :return: None, add World Bank data to df_input in place
    '''
    wb_data = pd.read_csv(
        'https://raw.githubusercontent.com/jwrichar/COVID19-mortality/master/data/world_bank_data.csv',
        na_values='..')
    for (wb_name, var_name) in wb_covariates:
        wb_series = wb_data.loc[wb_data['Series Code'] == wb_name]
        wb_series.set_index('Country Name', inplace=True, drop=True)
        wb_series.rename(wb_country_mapping, axis=0, inplace=True)
        # Add WB data:
        df_input[var_name] = _get_most_recent_value(wb_series)
def _get_most_recent_value(wb_series):
    '''
    Get most recent non-null value for each country in the World Bank
    time-series data
    '''
    ts_data = wb_series[wb_series.columns[3::]]

    def _helper(row):
        row_nn = row[row.notnull()]
        if len(row_nn):
            return row_nn[-1]
        else:
            return np.nan

    return ts_data.apply(_helper, axis=1)
#hide
# Load the data (see source/data.py):
df, todays_date = get_all_data()
# Impute NA's column-wise:
df = df.apply(lambda x: x.fillna(x.mean()),axis=0)
```
# Observed mortality rates
```
#collapse-hide
display(Markdown('Data as of %s' % todays_date))
reported_mortality_rate = df['deaths'].sum() / df['cases'].sum()
display(Markdown('Overall reported mortality rate: %.2f%%' % (100.0 * reported_mortality_rate)))
df_highest = df.sort_values('cases', ascending=False).head(15)
mortality_rate = pd.Series(
data=(df_highest['deaths']/df_highest['cases']).values,
index=map(lambda x: '%s (%i cases)' % (x, df_highest.loc[x]['cases']),
df_highest.index))
ax = mortality_rate.plot.bar(
figsize=(14,7), title='Reported Mortality Rate by Country (countries w/ highest case counts)')
ax.axhline(reported_mortality_rate, color='k', ls='--')
plt.show()
```
# Model
Estimate the COVID-19 mortality rate, controlling for country factors.
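The model links country covariates to a mortality rate through a logit transform, as in logistic regression. A minimal NumPy sketch of that link (the baseline rate and coefficients below are made up for illustration):

```python
import numpy as np

def inv_logit(t):
    return np.exp(t) / (1.0 + np.exp(t))

mu_0 = 0.01                         # baseline mortality rate (illustrative)
beta = np.array([0.5, -0.3])        # covariate effects (illustrative)
covariates = np.array([0.2, -1.0])  # one country's normalized covariates

# Shift the baseline on the logit scale, then map back to a probability:
mu_country = inv_logit(np.log(mu_0 / (1 - mu_0)) + covariates @ beta)
print(mu_country)  # a valid rate in (0, 1)
```

Working on the logit scale guarantees the per-country rate stays a valid probability no matter how large the covariate effects are.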
```
#hide
import numpy as np
import pymc3 as pm
def initialize_model(df):
    # Normalize input covariates in a way that is sensible:
    # (1) days since first case: upper
    #     mu_0 to reflect asymptotic mortality rate months after outbreak
    _normalize_col(df, 'days_since_first_case', how='upper')
    # (2) CPI score: upper
    #     mu_0 to reflect scenario in absence of corrupt govts
    _normalize_col(df, 'cpi_score_2019', how='upper')
    # (3) healthcare OOP spending: mean
    #     not sure which way this will go
    _normalize_col(df, 'healthcare_oop_expenditure', how='mean')
    # (4) hospital beds: mean
    #     more beds, more healthcare and tests
    _normalize_col(df, 'hospital_beds', how='mean')
    # (5) hci = human capital index: mean
    #     HCI measures education/health
    _normalize_col(df, 'hci', how='mean')
    # (6) % over 65: mean
    #     mu_0 to reflect average world demographic
    _normalize_col(df, 'population_perc_over65', how='mean')
    # (7) % rural: mean
    #     mu_0 to reflect average world demographic
    _normalize_col(df, 'population_perc_rural', how='mean')

    n = len(df)
    covid_mortality_model = pm.Model()
    with covid_mortality_model:
        # Priors:
        mu_0 = pm.Beta('mu_0', alpha=0.3, beta=10)
        sig_0 = pm.Uniform('sig_0', lower=0.0, upper=mu_0 * (1 - mu_0))
        beta = pm.Normal('beta', mu=0, sigma=5, shape=7)
        sigma = pm.HalfNormal('sigma', sigma=5)
        # Model mu from country-wise covariates:
        # Apply logit transformation so logistic regression performed
        mu_0_logit = np.log(mu_0 / (1 - mu_0))
        mu_est = mu_0_logit + \
            beta[0] * df['days_since_first_case_normalized'].values + \
            beta[1] * df['cpi_score_2019_normalized'].values + \
            beta[2] * df['healthcare_oop_expenditure_normalized'].values + \
            beta[3] * df['hospital_beds_normalized'].values + \
            beta[4] * df['hci_normalized'].values + \
            beta[5] * df['population_perc_over65_normalized'].values + \
            beta[6] * df['population_perc_rural_normalized'].values
        mu_model_logit = pm.Normal('mu_model_logit',
                                   mu=mu_est,
                                   sigma=sigma,
                                   shape=n)
        # Transform back to probability space:
        mu_model = np.exp(mu_model_logit) / (np.exp(mu_model_logit) + 1)
        # tau_i, mortality rate for each country
        # Parametrize with (mu, sigma)
        # instead of (alpha, beta) to ease interpretability.
        tau = pm.Beta('tau', mu=mu_model, sigma=sig_0, shape=n)
        # tau = pm.Beta('tau', mu=mu_0, sigma=sig_0, shape=n)
        # Binomial likelihood:
        d_obs = pm.Binomial('d_obs',
                            n=df['cases'].values,
                            p=tau,
                            observed=df['deaths'].values)
    return covid_mortality_model
def _normalize_col(df, colname, how='mean'):
    '''
    Normalize an input column in one of 3 ways:
    * how=mean: unit normal N(0,1)
    * how=upper: normalize to [-1, 0] with highest value set to 0
    * how=lower: normalize to [0, 1] with lowest value set to 0
    Returns df modified in place with extra column added.
    '''
    colname_new = '%s_normalized' % colname
    if how == 'mean':
        mu = df[colname].mean()
        sig = df[colname].std()
        df[colname_new] = (df[colname] - mu) / sig
    elif how == 'upper':
        maxval = df[colname].max()
        minval = df[colname].min()
        df[colname_new] = (df[colname] - maxval) / (maxval - minval)
    elif how == 'lower':
        maxval = df[colname].max()
        minval = df[colname].min()
        df[colname_new] = (df[colname] - minval) / (maxval - minval)
#hide
# Initialize the model:
mod = initialize_model(df)
# Run the MCMC sampler:
with mod:
    trace = pm.sample(300, tune=100,
                      chains=3, cores=2)
#collapse-hide
n_samp = len(trace['mu_0'])
mu0_summary = pm.summary(trace).loc['mu_0']
print("COVID-19 Global Mortality Rate Estimation:")
print("Posterior mean: %0.2f%%" % (100*trace['mu_0'].mean()))
print("Posterior median: %0.2f%%" % (100*np.median(trace['mu_0'])))
lower = np.sort(trace['mu_0'])[int(n_samp*0.025)]
upper = np.sort(trace['mu_0'])[int(n_samp*0.975)]
print("95%% posterior interval: (%0.2f%%, %0.2f%%)" % (100*lower, 100*upper))
prob_lt_reported = sum(trace['mu_0'] < reported_mortality_rate) / len(trace['mu_0'])
print("Probability true rate less than reported rate (%.2f%%) = %.2f%%" %
(100*reported_mortality_rate, 100*prob_lt_reported))
print("")
# Posterior plot for mu0
print('Posterior probability density for COVID-19 mortality rate, controlling for country factors:')
ax = pm.plot_posterior(trace, var_names=['mu_0'], figsize=(18, 8), textsize=18,
credible_interval=0.95, bw=3.0, lw=3, kind='kde',
ref_val=round(reported_mortality_rate, 3))
```
## Magnitude and Significance of Factors
The following summarizes each country-level factor's contribution to bias in the reported COVID-19 mortality rate.
```
#collapse-hide
# Posterior summary for the beta parameters:
beta_summary = pm.summary(trace).head(7)
beta_summary.index = ['days_since_first_case', 'cpi', 'healthcare_oop', 'hospital_beds', 'hci', 'percent_over65', 'percent_rural']
beta_summary.reset_index(drop=False, inplace=True)
err_vals = ((beta_summary['hpd_3%'] - beta_summary['mean']).values,
(beta_summary['hpd_97%'] - beta_summary['mean']).values)
ax = beta_summary.plot(x='index', y='mean', kind='bar', figsize=(14, 7),
title='Posterior Distribution of Beta Parameters',
yerr=err_vals, color='lightgrey',
legend=False, grid=True,
capsize=5)
beta_summary.plot(x='index', y='mean', color='k', marker='o', linestyle='None',
ax=ax, grid=True, legend=False, xlim=plt.gca().get_xlim())
plt.savefig('../images/corvid-mortality.png')
```
# About This Analysis
This analysis was done by [Joseph Richards](https://twitter.com/joeyrichar)
In this project[^3], we attempt to estimate the true mortality rate[^1] for COVID-19 while controlling for country-level covariates[^2][^4] such as:
* age of outbreak in the country
* transparency of the country's government
* access to healthcare
* demographics such as age of population and rural vs. urban
Estimating a mortality rate lower than the overall reported rate likely implies that there has been **significant under-testing and under-reporting of cases globally**.
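The under-testing intuition can be checked with a short simulation (all numbers below are invented): if deaths are reliably counted but only a fraction of infections are ever confirmed, the naive deaths/cases ratio overstates the true rate by roughly the inverse of the detection fraction.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.01        # assumed true infection-fatality rate
n_infected = 100_000    # assumed true number of infections
detection = 0.25        # only 25% of infections get confirmed as cases

deaths = rng.binomial(n_infected, true_rate)     # deaths: mostly observed
confirmed = rng.binomial(n_infected, detection)  # cases: under-counted
observed_rate = deaths / confirmed
print(observed_rate)  # roughly true_rate / detection = 0.04
```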
## Interpretation of Country-Level Parameters
1. days_since_first_case - positive (very statistically significant). As time since outbreak increases, expected mortality rate **increases**, as expected.
2. cpi - negative (statistically significant). As government transparency increases, expected mortality rate **decreases**. This may mean that less transparent governments under-report cases, hence inflating the mortality rate.
3. healthcare avg. out-of-pocket spending - no significant trend.
4. hospital beds per capita - no significant trend.
5. Human Capital Index - no significant trend (slightly negative = mortality rates decrease with increased mobilization of the country)
6. percent over 65 - positive (statistically significant). As population age increases, the mortality rate also **increases**, as expected.
7. percent rural - no significant trend.
[^1]: As of March 10, the **overall reported mortality rate is 3.5%**. However, this figure does not account for **systematic biases in case reporting and testing**. The observed mortality of COVID-19 has varied widely from country to country (as of early March 2020). For instance, as of March 10, mortality rates have ranged from < 0.1% in places like Germany (1100+ cases) to upwards of 5% in Italy (9000+ cases) and 3.9% in China (80k+ cases).
[^2]: The point of our modelling work here is to **try to understand and correct for the country-to-country differences that may cause the observed discrepancies in COVID-19 country-wide mortality rates**. That way we can "undo" those biases and try to **pin down an overall *real* mortality rate**.
[^3]: Full details about the model are available at: https://github.com/jwrichar/COVID19-mortality
[^4]: The effects of these parameters are subject to change as more data are collected.
# Appendix: Model Diagnostics
The following trace plots help to assess the convergence of the MCMC sampler.
```
#hide_input
import arviz as az
az.plot_trace(trace, compact=True);
```
| github_jupyter |
Building the dataset of numerical data
```
#### STOP - ONLY if needed
# Allows printing full text
import pandas as pd
pd.set_option('display.max_colwidth', None)
#mid_keywords = best_keywords(data, 1, 0.49, 0.51) # same as above, but for average papers
#low_keywords = best_keywords(data, 1, 0.03, 0.05) # same as above, but for poor papers
### PUT MAIN HERE ###
# Machine Learning Challenge
# Course: Machine Learning (880083-M-6)
# Group 58
##########################################
# Import packages #
##########################################
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
import yake #NOTE: with Anaconda: conda install -c conda-forge yake
##########################################
# Import self-made functions #
##########################################
from CODE.data_preprocessing.split_val import split_val
from CODE.data_preprocessing.find_outliers_tukey import find_outliers_tukey
#feature based on the title of the paper
from CODE.features.length_title import length_title
# features based on 'field_of_study' column
from CODE.features.field_variety import field_variety
from CODE.features.field_popularity import field_popularity
from CODE.features.field_citations_avarage import field_citations_avarage
# features based on the topics of the paper
from CODE.features.topic_citations_avarage import topic_citations_avarage
from CODE.features.topic_variety import topics_variety
from CODE.features.topic_popularity import topic_popularity
# features based on the abstract of the paper
from CODE.features.keywords import best_keywords
from CODE.features.abst_words import abst_words
from CODE.features.abst_words import abst_count
# features based on the venue of the paper
from CODE.features.venue_popularity import venue_popularity
from CODE.features.venue_citations import venues_citations
from CODE.features.age import age
# features based on the authors of the paper
from CODE.features.author_h_index import author_h_index
from CODE.features.paper_h_index import paper_h_index
from CODE.features.team_size import team_size
from CODE.features.author_database import author_database
##########################################
# Load datasets #
##########################################
# Main datasets
data = pd.read_json('DATA/train.json') # Training set
test = pd.read_json('DATA/test.json') # Test set
# Author-centric datasets
# These datasets were made using our self-made functions 'citations_per_author' (for the author_citation_dic)
# These functions took a long time to make (ballpark ~10 minutes on a laptop in 'silent mode'), so instead we
# decided to run this function once, save the data, and reload the datasets instead of running the function again.
import pickle
with open('my_dataset1.pickle', 'rb') as dataset:
    author_citation_dic = pickle.load(dataset)
with open('my_dataset2.pickle', 'rb') as dataset2:
    author_db = pickle.load(dataset2)
##########################################
# Missing values handling #
##########################################
# Missing values for feature 'fields_of_study'
data.loc[data['fields_of_study'].isnull(), 'fields_of_study'] = ""
# Missing values for feature 'title'
data.loc[data['title'].isnull(), 'title'] = ""
# Missing values for feature 'abstract'
data.loc[data['abstract'].isnull(), 'abstract'] = ""
# Missing values for features 'authors'
data.loc[data['authors'].isnull(), 'authors'] = ""
# Missing values for feature 'venue'
data.loc[data['venue'].isnull(), 'venue'] = ""
# Missing values for feature 'year'
# data.loc[data['fields_of_study'].isnull(), 'fields_of_study'] = mean(year)
# Take mean by venue instead
# If venue not known, take something else?
# Missing values for feature 'references'
data.loc[data['references'].isnull(), 'references'] = ""
# Missing values for feature 'topics'
data.loc[data['topics'].isnull(), 'topics'] = ""
# Missing values for feature 'is_open_access'
#data.loc[data['is_open_access'].isnull(), 'is_open_access'] = ""
# Take most frequent occurrence for venue
# If venue not known, do something else?
##########################################
# Create basic numeric df #
##########################################
end = len(data)
num_X = data.loc[ 0:end+1 , ('doi', 'citations', 'year', 'references') ] ##REMOVE DOI
##########################################
# Feature creation #
##########################################
"""
FEATURE DATAFRAME: num_X
ALL: After writing a funtion to create a feature, please incorporate your new feature as a column on the dataframe below.
This is the dataframe we will use to train the models.
DO NOT change the order in this section if at all possible
"""
num_X['title_length'] = length_title(data) # returns a numbered series
num_X['field_variety'] = field_variety(data) # returns a numbered series
num_X['field_popularity'] = field_popularity(data) # returns a numbered series
# num_X['field_citations_avarage'] = field_citations_avarage(data) # returns a numbered series
num_X['team_sz'] = team_size(data) # returns a numbered series
num_X['topic_var'] = topics_variety(data) # returns a numbered series
num_X['topic_popularity'] = topic_popularity(data) # returns a numbered series
num_X['topic_citations_avarage'] = topic_citations_avarage(data) # returns a numbered series
num_X['venue_popularity'], num_X['venue'] = venue_popularity(data) # returns a numbered series and a pandas.Series of the 'venues' column reformatted
num_X['open_access'] = pd.get_dummies(data["is_open_access"], drop_first = True) # returns pd.df (True = 1)
num_X['age'] = age(data) # returns a numbered series. Needs to be called upon AFTER the venues have been reformed (from venue_frequency)
num_X['venPresL'] = venues_citations(data) # returns a numbered series. Needs to be called upon AFTER the venues have been reformed (from venue_frequency)
keywords = best_keywords(data, 1, 0.954, 0.955) # from [data set] get [integer] keywords from papers btw [lower bound] and [upper bound] quantiles; returns list
num_X['has_keyword'] = abst_words(data, keywords)#returns a numbered series: 1 if any of the words is present in the abstract, else 0
num_X['keyword_count'] = abst_count(data, keywords) # same as above, only a count (not bool)
# Author H-index
author_db, reformatted_authors = author_database(data)
data['authors'] = reformatted_authors
num_X['h_index'] = paper_h_index(data, author_citation_dic) # Returns a numbered series. Must come after author names have been reformatted.
field_avg_cit = num_X.groupby('field_variety').citations.mean()
for field, field_avg in zip(field_avg_cit.index, field_avg_cit):
    num_X.loc[num_X['field_variety'] == field, 'field_cit'] = field_avg
"""
END do not reorder
"""
##########################################
# Deal with specific missing values #
##########################################
# Open_access, thanks to jreback (27th of July 2016) https://github.com/pandas-dev/pandas/issues/13809
OpAc_by_venue = num_X.groupby('venue').open_access.apply(lambda x: x.mode()) # Take mode for each venue
OpAc_by_venue = OpAc_by_venue.to_dict()
missing_OpAc = num_X.loc[num_X['open_access'].isnull(),]
for i, i_paper in missing_OpAc.iterrows():
    venue = i_paper['venue']
    doi = i_paper['doi']
    index = num_X[num_X['doi'] == doi].index[0]
    if venue in OpAc_by_venue.keys():  # If a known venue, append the most frequent value for that venue
        num_X.loc[index, 'open_access'] = OpAc_by_venue[venue]  # Set most frequent occurrence (use .loc to avoid chained assignment)
    else:  # Else take most occurring value in entire dataset
        num_X.loc[index, 'open_access'] = num_X.open_access.mode()[0]  # Thanks to BENY (2nd of February, 2018) https://stackoverflow.com/questions/48590268/pandas-get-the-most-frequent-values-of-a-column
### Drop columns containing just strings
num_X = num_X.drop(['venue', 'doi', 'field_variety'], axis = 1)
num_X = num_X.dropna()
##########################################
# Train/val split #
##########################################
## train/val split
X_train, X_val, y_train, y_val = split_val(num_X, target_variable = 'citations')
"""
INSERT outlier detection on X_train here - ALBERT
"""
##########################################
# Outlier detection #
##########################################
### MODEL code for outlier detection
### names: X_train, X_val, y_train, y_val
# print(list(X_train.columns))
out_y = (find_outliers_tukey(x = y_train['citations'], top = 93, bottom = 0))[0]
out_rows = out_y
# out_X = (find_outliers_tukey(x = X_train['team_sz'], top = 99, bottom = 0))[0]
# out_rows = out_y + out_X
out_rows = sorted(list(set(out_rows)))
# print("X_train:")
# print(X_train.shape)
X_train = X_train.drop(labels = out_rows)
# print(X_train.shape)
# print()
# print("y_train:")
# print(y_train.shape)
y_train = y_train.drop(labels = out_rows)
# print(y_train.shape)
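# For reference: the self-made find_outliers_tukey presumably works in the
# spirit of Tukey's fences. A hedged, percentile-based sketch (illustrative
# only -- not the project's implementation):
import numpy as np
import pandas as pd

def tukey_outlier_indices(x, top=93, bottom=0):
    # Flag index labels whose values fall outside the given percentile bounds
    lo, hi = np.percentile(x, [bottom, top])
    return sorted(x.index[(x < lo) | (x > hi)])

# Example: tukey_outlier_indices(pd.Series([1, 2, 3, 100])) flags the extreme row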
# Potential features to get rid of: team_sz
##########################################
# Model implementations #
##########################################
"""
IMPLEMENT models here
NOTE: Please do not write over X_train, X_val, y_train, y_val in your model - make new variables if needed
"""
#-----------simple regression, all columns
"""
MODEL RESULTS:
R2: 0.03724
MSE: 33.38996
"""
#-----------logistic regression, all columns
"""
MODEL RESULTS:
R2: 0.006551953988217396
MSE: 34.07342328208346
"""
#-----------SGD regression, all columns
"""
# MODEL RESULTS:
# Best outcome: ('constant', 0.01, 'squared_error', 35.74249957361433, 0.04476790061780822)
"""
#-----------polynomial regression, all columns
"""
"""
#model.fit(X_train, y_train)
#print('Best score: ', model.best_score_)
#print('Best parameters: ', model.best_params_)
#y_pred = model.predict(X_val)
#from sklearn.metrics import r2_score
#print(r2_score(y_val,y_pred))
# import json
#with open("sample.json", "w") as outfile:
#json.dump(dictionary, outfile)
"""
-----------------------------------------------------------------------------------------------------------
------------------------------ LETS EXPLORE!!! ------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------
"""
"""
"""
### FOR: exploring the new dataframe with numerical columns
# --> NOTE: it would be more efficient to combine these first and only expand the df once (per addition type)
num_X
### FOR: explore data train/val split (should be 6470 train rows and 3188 validation rows)
# names: X_train, X_val, y_train, y_val
print("number of keywords:", len(keywords))
print("total train rows:", X_train.shape)
print("numer w keyword:", sum(X_train['has_keyword']))
print()
print(keywords)
#X_val
#y_train
#y_val
#6210 of 6313
#6136 (of 6313) for 1 keyword from the top 1% of papers
#4787 for 2 keywords from top .01% of papers (correlation: 0.036)
#2917 for 1 keyword from top .01% of papers (correlation: 0.049)
"""
Look at some correlations - full num_X
"""
# names: X_train, X_val, y_train, y_val
# From: https://www.kaggle.com/ankitjha/comparing-regression-models
import seaborn as sns
corr_mat = num_X.corr(method='pearson')
plt.figure(figsize=(20,10))
sns.heatmap(corr_mat,vmax=1,square=True,annot=True,cmap='cubehelix')
"""
Look at some correlations - X_train
NOTE: there is no y here
"""
# names: X_train, X_val, y_train, y_val
#temp = y_train hstack X_train
# From: https://www.kaggle.com/ankitjha/comparing-regression-models
corr_mat = X_train.corr(method='pearson')
plt.figure(figsize=(20,10))
sns.heatmap(corr_mat,vmax=1,square=True,annot=True,cmap='cubehelix')
"""
-----------------------------------------------------------------------------------------------------------
------------------------- LETS CODE!!! --------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------
"""
"""
"""
print(list(X_train.columns))
"""
Choose your columns
"""
#X_train_small = X_train.loc[ : , 'topic_var':'h_index'].copy()
#X_val_small = X_val.loc[ : , 'topic_var':'h_index'].copy()
drops = ['year', 'team_sz', 'has_keyword']
X_train_small = X_train.copy()
X_train_small.drop(drops, inplace = True, axis=1)
X_val_small = X_val.copy()
X_val_small.drop(drops, inplace = True, axis=1)
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_absolute_error
from CODE.models.regression import simple_linear
from CODE.models.regression import log_reg
summaries = list(X_train.columns)
print(summaries)
for i in range(len(summaries)):
    # fs = summaries[:i] + summaries[i+1:]
    X_train_small = X_train.copy()
    X_val_small = X_val.copy()
    drops = summaries[i]
    X_train_small.drop(drops, inplace = True, axis=1)
    X_val_small.drop(drops, inplace = True, axis=1)
    print("dropped:", summaries[i])
    # simple_linear(X_train_small, y_train, X_val_small, y_val) # dropping venue_popularity helps a tiny bit
    log_reg(X_train_small, y_train, X_val_small, y_val)
# print('r2:', r2_score(y_val, y_pred_val)) # 0.006551953988217396
# print("MAE:", mean_absolute_error(y_val, y_pred_val)) # 34.07342328208346
# print()
# helps to drop: year, field_popularity, team_size, topic_var, age, has_keyword, keyword_count
# hurts to drop: references, title length, topic_popularity, topic_citations_average, venue_popularity(!),
# venPresL(!), h_index(!), field_cit
X_train_small
#X_val_small
def abst_categories(the_data, keywords, mid_keywords, low_keywords):
    # Count keyword-tier hits per abstract and record the strongest tier.
    abst = the_data['abstract']
    abst_key = []
    for i in abst:
        if i is None:
            abst_key.append(0)
            continue
        high = sum(1 for word in keywords if word in i.lower())
        mid = sum(1 for word in mid_keywords if word in i.lower())
        low = sum(1 for word in low_keywords if word in i.lower())
        # NOTE: the original loop computed these counts but never recorded
        # them; appending the strongest non-empty tier (3 = high, 2 = mid,
        # 1 = low, 0 = none) is one plausible completion of the
        # commented-out argmax idea.
        if high:
            abst_key.append(3)
        elif mid:
            abst_key.append(2)
        elif low:
            abst_key.append(1)
        else:
            abst_key.append(0)
    return pd.Series(abst_key)
print(sum(abst_categories (data, keywords, mid_keywords, low_keywords))) #9499 rows
"""
Remove outliers
NOTE: can't rerun this code without restarting the kernel
"""
#names: X_train, X_val, y_train, y_val
#print(list(X_train.columns))
# print("citations:", find_outliers_tukey(x = y_train['citations'], top = 93, bottom = 0))
# print("year:", find_outliers_tukey(X_train['year'], top = 74, bottom = 25)) # seems unnecessary
# print("references:", find_outliers_tukey(X_train['references'], top = 90, bottom = 10)) # seems unnecessary
# print("team_size:", find_outliers_tukey(X_train['team_size'], top = 99, bottom = 0)) # Meh
# print("topic_variety:", find_outliers_tukey(X_train['topic_variety'], top = 75, bottom = 10)) # not much diff btw top and normal
# print("age:", find_outliers_tukey(X_train['age'], top = 90, bottom = 10)) # Meh
# print("open_access:", find_outliers_tukey(X_train['open_access'], top = 100, bottom = 0)) # Not necessary: boolean
# print("has_keyword:", find_outliers_tukey(X_train['has_keyword'], top = 100, bottom = 0)) # Not necessary: boolean
# print("title_length:", find_outliers_tukey(X_train['title_length'], top = 90, bottom = 10)) # Meh
# print("field_variety:", find_outliers_tukey(X_train['field_variety'], top = 90, bottom = 10)) # seems unnecessary
# print("venue_freq:", find_outliers_tukey(X_train['venue_freq'], top = 90, bottom = 10)) # seems unnecessary
out_y = (find_outliers_tukey(x = y_train['citations'], top = 95, bottom = 0))[0]
#out_X = (find_outliers_tukey(x = X_train['team_size'], top = 99, bottom = 0))[0]
out_rows = out_y
#out_rows = out_y + out_X
out_rows = sorted(list(set(out_rows)))
print("X_train:")
print(X_train.shape)
X_train = X_train.drop(labels = out_rows)
print(X_train.shape)
print()
print("y_train:")
print(y_train.shape)
y_train = y_train.drop(labels = out_rows)
print(y_train.shape)
X_train
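# NOTE: the cells above call find_outliers_tukey, which is defined elsewhere
# in the notebook. A minimal sketch consistent with how it is called here
# (percentile-based Tukey fences; the first element of the returned tuple is
# the list of outlier row labels) -- an assumption, not the notebook's
# actual implementation:
import numpy as np
import pandas as pd

def find_outliers_tukey(x, top=75, bottom=25):
    q_bottom = np.percentile(x, bottom)
    q_top = np.percentile(x, top)
    iqr = q_top - q_bottom
    floor = q_bottom - 1.5 * iqr
    ceiling = q_top + 1.5 * iqr
    mask = (x < floor) | (x > ceiling)
    return list(x.index[mask]), list(x[mask])

# Example: one extreme value gets flagged by its row label
print(find_outliers_tukey(pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1000]))[0])  # [10]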
# Create a mini version of the main 'data' dataframe
import pandas as pd
import numpy as np
# %pwd
# %cd C:\Users\r_noc\Desktop\Python\GIT\machinelearning
play = data.sample(100, replace = False, axis = 0, random_state = 123)
print(play.shape)
# print(play['abstract'])
print(list(play.columns))
# play['has_keyword'] = np.nan
# print(play.shape)
# play
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_z = scaler.fit_transform(X_train_small)
X_val_z = scaler.transform(X_val_small)
polynomial_features = PolynomialFeatures(degree = 2)
x_train_poly = polynomial_features.fit_transform(X_train_z)
x_val_poly = polynomial_features.transform(X_val_z)
model = LinearRegression()  # PoissonRegressor was imported here but never used
model.fit(x_train_poly, y_train)
y_poly_pred = model.predict(x_val_poly)
print(r2_score(y_val, y_poly_pred)) # -0.04350391168707901
print(mean_absolute_error(y_val, y_poly_pred)) # 32.65668266590838
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_z = scaler.fit_transform(X_train_small)
X_val_z = scaler.transform(X_val_small)
poly = PolynomialFeatures(degree = 2)
X_poly = poly.fit_transform(X_train_z)
model2 = LinearRegression()
model2.fit(X_poly, y_train)
y_pred_val = model2.predict(poly.transform(X_val_z))  # transform (not fit_transform) the validation set
print(r2_score(y_val, y_pred_val)) #0.03724015197555319
print(mean_absolute_error(y_val, y_pred_val)) #33.38996938585591
#names: X_train, X_val, y_train, y_val
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
scaler = StandardScaler()
X_train_z = scaler.fit_transform(X_train_small)
X_val_z = scaler.transform(X_val_small)
y_ravel = np.ravel(y_train)
lr = [ 1.1, 1, .1, .01, .001, .0001]
settings = []
for learning_rate in ['constant', 'optimal', 'invscaling']:
    for loss in ['squared_error', 'huber']:
        for eta0 in lr:
            model = SGDRegressor(learning_rate=learning_rate, eta0=eta0, loss=loss, random_state=666, max_iter=5000)
            model.fit(X_train_z, y_ravel)
            y_pred = model.predict(X_val_z)
            mae = mean_absolute_error(y_val, y_pred)
            r2 = r2_score(y_val, y_pred)
            settings.append((learning_rate, eta0, loss, mae, r2))
            print(settings[-1])
# Best outcome: ('constant', 0.01, 'squared_error', 35.74249957361433, 0.04476790061780822)
# With small: ('invscaling', 1, 'squared_error', 48.92137807970932, 0.05128477811871335)
X_train
```
```
import requests
import json
headers = {'content-type': 'application/json'}
url = 'https://nid.naver.com/nidlogin.login'
data = {"eventType": "AAS_PORTAL_START", "data": {"id": "lafamila", "pw": "als01060"}}
#params = {'sessionKey': '9ebbd0b25760557393a43064a92bae539d962103', 'format': 'xml', 'platformId': 1}
#requests.post(url, params=params, data=json.dumps(data), headers=headers)
source = requests.post(url, data=json.dumps(data), headers=headers)
```
<b>params</b> is for GET-style URL parameters, <b>data</b> is for POST-style body information
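To make the distinction concrete, here is a small sketch using `requests.Request.prepare()` — no request is actually sent, and the `example.com` URLs are placeholders:

```python
import requests

# `params` are encoded into the URL query string (GET style)
get_req = requests.Request('GET', 'https://example.com/search', params={'q': 'naver'}).prepare()
print(get_req.url)  # https://example.com/search?q=naver

# `data` is form-encoded into the request body (POST style)
post_req = requests.Request('POST', 'https://example.com/login', data={'id': 'user'}).prepare()
print(post_req.body)  # id=user
```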
```
form = """
<form name="frmNIDLogin" id="frmNIDLogin" action="https://nid.naver.com/nidlogin.login" method="post" target="_top">
<input name="enctp" id="enctp" type="hidden" value="1">
<input name="encpw" id="encpw" type="hidden" value="">
<input name="encnm" id="encnm" type="hidden" value="">
<input name="svctype" id="svctype" type="hidden" value="0">
<input name="url" id="url" type="hidden" value="https://www.naver.com/">
<input name="enc_url" id="enc_url" type="hidden" value="https%3A%2F%2Fwww.naver.com%2F">
<input name="postDataKey" id="postDataKey" type="hidden" value="">
<input name="nvlong" id="nvlong" type="hidden" value="">
<input name="saveID" id="saveID" type="hidden" value="">
<input name="smart_level" id="smart_level" type="hidden" value="1">
<fieldset>
<legend class="blind">로그인</legend>
<div class="htmlarea" id="flasharea" style="visibility: hidden;"><object width="148" height="67" id="flashlogin" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="https://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" style="visibility: hidden;"><param name="allowScriptAccess" value="always"><param name="quality" value="high"><param name="menu" value="false"><param name="movie" value="https://static.nid.naver.com/loginv3/commonLoginF_201505.swf"><param name="wmode" value="window"><param name="bgcolor" value="#f7f7f7"><param name="FlashVars" value="null"><param name="allowFullScreen" value="false"><embed name="flashlogin" width="148" height="67" align="middle" pluginspage="https://www.macromedia.com/go/getflashplayer" src="https://static.nid.naver.com/loginv3/commonLoginF_201505.swf" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="false" bgcolor="#f7f7f7" flashvars="null" menu="false" wmode="window" quality="high"></object>
<div class="error_box_v2" id="div_capslock2" style="left: -14px; top: 59px; display: none; position: absolute;">
<p><strong>Caps Lock</strong>이 켜져 있습니다.</p>
</div>
</div>
<div class="htmlarea" id="htmlarea" style="display: block;">
<div class="input_box"><label class="lbl_in" id="label_id" for="id">아이디</label><input name="id" title="아이디" class="int" id="id" accesskey="L" style="-ms-ime-mode: disabled;" type="text" maxlength="41" placeholder="아이디"></div>
<div class="input_box"><label class="lbl_in" id="label_pw" for="pw">비밀번호</label><input name="pw" title="비밀번호" class="int" id="pw" type="password" maxlength="16" placeholder="비밀번호">
<div class="error_box_v2" id="div_capslock" style="display: none;">
<p><strong>Caps Lock</strong>이 켜져 있습니다.</p>
</div>
</div>
</div>
<div class="chk_id_login">
<input title="로그인 상태유지" class="chk_login" id="chk_log" type="checkbox">
<label class="lbl_long" id="lbl_long" for="chk_log"><i class="ico_chk"></i>로그인 상태 유지</label>
</div>
<div class="login_help">
<div class="chk_ip"><a title="" id="ip_guide" href="https://static.nid.naver.com/loginv3/help_ip.html" target="_blank">IP보안</a> <span class="ip_box"><input title="IP 보안이 켜져 있습니다. IP보안을 사용하지 않으시려면 선택을 해제해주세요." class="chb_b" id="ckb_type" type="checkbox"><label class="lbl_type on" id="lbl_type" for="ckb_type">IP보안 체크</label></span></div>
</div>
<span class="btn_login"><input title="로그인" type="submit" value="로그인"></span>
<a class="btn_dis" href="https://nid.naver.com/nidlogin.login?mode=number&svctype=&logintp=&viewtype=&url=https://www.naver.com" target="_top">일회용 로그인</a>
<p class="btn_lnk">
<a class="btn_join" href="https://nid.naver.com/nidregister.form?url=https://www.naver.com" target="_blank">회원가입</a>
<a class="btn_id" href="https://nid.naver.com/user/help.nhn?todo=idinquiry" target="_blank">아이디<span class="blind">찾기</span></a>/<a href="https://nid.naver.com/nidreminder.form" target="_blank">비밀번호 찾기</a>
</p>
</fieldset>
</form>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(form, 'html.parser')
values = soup.find_all('input')
datas = {}
for val in values:
    inputs = str(val).split("\n")[0]
    inp = BeautifulSoup(inputs, 'html.parser')
    if "name" in str(inp):
        name = inp.find('input')['name']  # already str in Python 3; the .decode/.encode chain was Python 2
        if "value" not in str(inp):
            datas[name] = input(name)  # raw_input in the original Python 2 code
        else:
            datas[name] = inp.find('input')['value']
print(datas)
import requests
import json
headers = {'content-type': 'application/json'}
url = 'https://nid.naver.com/nidlogin.login'
data = {"data": datas}
#params = {'sessionKey': '9ebbd0b25760557393a43064a92bae539d962103', 'format': 'xml', 'platformId': 1}
#requests.post(url, params=params, data=json.dumps(data), headers=headers)
source = requests.post(url, data=json.dumps(data), headers=headers)
print(source.text)
#https://gist.github.com/blmarket/9012444
```
# 1A See Handwritten Notes
# 1B
```
import os
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = [12,8]
# sine wave with noise, aka swell with wind chop
arr = np.linspace(0, 20, 50)
ts = np.sin(arr) * 5 + np.random.uniform(-1, 1, len(arr))
plt.plot(arr, ts, 'r+')
plt.show()
data = pd.DataFrame(data={'Time': arr, 'Wave Height': ts})
x = data['Time'].to_numpy().astype(float)  # .as_matrix() was removed in pandas 1.0
y = data['Wave Height'].to_numpy().astype(float)
X = np.column_stack([np.ones_like(x),x])
w = np.linalg.solve(np.dot(X.T,X),np.dot(X.T,y))
xhat = np.linspace(x.min(),x.max(),101)
Xhat = np.column_stack([np.ones_like(xhat),xhat])
yhat = np.dot(Xhat,w)
x_tilde = 2*(x - x.min())/(x.max()-x.min()) - 1
xhat = 2*(xhat - x.min())/(x.max()-x.min()) - 1
x = x_tilde
degree = 10
X = np.vander(x,degree+1,increasing=True)
gamma = 1e-4
Eye = np.eye(X.shape[1])
Eye[0,0] = 0
w = np.linalg.solve(np.dot(X.T,X),np.dot(X.T,y))
Xhat = np.vander(xhat,degree+1,increasing=True)
yhat = np.dot(Xhat,w)
avg_rmse = np.sqrt(np.sum((np.dot(X,w) - y)**2)/len(y))
print(avg_rmse)
plt.plot(xhat,yhat,'k-')
plt.plot(x,y,'ro')
plt.show()
```
## 1C
```
test_path = 'PC1_test.csv'
training_path = 'PC1_training.csv'
degree = 3
test_series = pd.read_csv(test_path, names=['Test'])
training_series = pd.read_csv(training_path, names=['Training'])
y = training_series['Training'].values
x = training_series['Training'].index
x_tilde = 2*(x - x.min())/(x.max()-x.min()) - 1
xhat = 2*(xhat - x.min())/(x.max()-x.min()) - 1
x = x_tilde
X = np.vander(x, degree+1, increasing=True)
gamma = 1e-4
Eye = np.eye(X.shape[1])
Eye[0,0] = 0
w = np.linalg.solve(np.dot(X.T,X) + gamma*Eye, np.dot(X.T,y))
Xhat = np.vander(xhat,degree+1,increasing=True)
yhat = np.dot(Xhat, w)
avg_rmse = np.sqrt(np.sum((np.dot(X,w) - y)**2)/len(y))
plt.plot(x,y,'ro')
plt.plot(xhat,yhat,'k-')
plt.show()
```
# 1D
```
training_rmse = []
for degree in range(0, 15):
    X = np.vander(x, degree+1, increasing=True)
    w = np.linalg.solve(np.dot(X.T,X), np.dot(X.T,y))
    Xhat = np.vander(xhat,degree+1,increasing=True)
    yhat = np.dot(Xhat, w)
    avg_rmse = np.sqrt(np.sum((np.dot(X,w) - y)**2)/len(y))
    training_rmse.append(avg_rmse)
yt = test_series['Test'].values
xt = test_series['Test'].index
test_rmse = []
for degree in range(0, 15):
    X = np.vander(xt, degree+1, increasing=True)  # use the test x (xt), not the training x
    w = np.linalg.solve(np.dot(X.T,X), np.dot(X.T,yt))
    Xhat = np.vander(xhat,degree+1,increasing=True)
    yhat = np.dot(Xhat, w)
    avg_rmse = np.sqrt(np.sum((np.dot(X,w) - yt)**2)/len(yt))
    test_rmse.append(avg_rmse)
rmse_hat = list(range(0, 15))
plt.ylabel('Avg. RMSE')
plt.xlabel('Polynomial Order')
plt.plot(rmse_hat, test_rmse, 'k-')
plt.plot(rmse_hat, training_rmse, 'b-')
plt.show()
```
# 1E
```
degree = 15
gamma_arr = list(np.logspace(-5, 1, 10))
training_rmse = []
for gamma in np.logspace(-5, 1, 10):
    X = np.vander(x, degree+1, increasing=True)
    Eye = np.eye(X.shape[1])
    Eye[0,0] = 0
    w = np.linalg.solve(np.dot(X.T,X) + gamma*Eye,np.dot(X.T,y))
    avg_rmse = np.sqrt(np.sum((np.dot(X,w) - y)**2)/len(y))
    training_rmse.append(avg_rmse)
test_rmse = []
for gamma in np.logspace(-5, 1, 10):
    X = np.vander(xt, degree+1, increasing=True)
    Eye = np.eye(X.shape[1])
    Eye[0,0] = 0
    w = np.linalg.solve(np.dot(X.T,X) + gamma*Eye,np.dot(X.T,yt))
    avg_rmse = np.sqrt(np.sum((np.dot(X,w) - yt)**2)/len(yt))
    test_rmse.append(avg_rmse)
plt.ylabel('Avg. RMSE')
plt.xlabel('Gamma')
plt.semilogx(gamma_arr, training_rmse, 'b-')
plt.semilogx(gamma_arr, test_rmse, 'k-')
plt.show()
```
# 1F
```
from sklearn import linear_model
N = 11
degree = 15
gamma_arr = list(np.logspace(-5, 1, 10))
training_rmse = []
for gamma in np.logspace(-5, 1, 10):
    X = np.vander(x,degree)[:,1:]
    lasso = linear_model.Lasso(alpha=gamma,max_iter=100000)
    lasso.fit(X,y)
    rmse = np.sqrt(np.sum((lasso.predict(X) - y)**2)/N)
    training_rmse.append(rmse)
test_rmse = []
for gamma in np.logspace(-5, 1, 10):
    X = np.vander(xt,degree)[:,1:]
    lasso = linear_model.Lasso(alpha=gamma,max_iter=100000)
    lasso.fit(X,yt)
    rmse = np.sqrt(np.sum((lasso.predict(X) - yt)**2)/N)
    test_rmse.append(rmse)
plt.ylabel('Avg. RMSE')
plt.xlabel('Gamma')
plt.semilogx(gamma_arr, training_rmse, 'b-')
plt.semilogx(gamma_arr, test_rmse, 'k-')
plt.show()
```
# 1G
```
import numpy.polynomial.legendre as leg
training_rmse = []
for degree in range(0, 20):
    X = np.vander(x, degree+1, increasing=True)
    w = np.linalg.solve(np.dot(X.T,X), np.dot(X.T,y))
    Xhat = np.vander(xhat,degree+1,increasing=True)
    yhat = np.dot(Xhat, w)
    avg_rmse = np.sqrt(np.sum((np.dot(X,w) - y)**2)/len(y))
    training_rmse.append(avg_rmse)
legendre_rmse = []
for degree in range(0, 20):
    X = leg.legvander(x,degree)
    w = np.linalg.solve(np.dot(X.T,X),np.dot(X.T,y))
    avg_rmse = np.sqrt(np.sum((np.dot(X,w) - y)**2)/len(y))
    legendre_rmse.append(avg_rmse)
rmse_hat = list(range(0, 20))
plt.ylabel('Avg. RMSE')
plt.xlabel('Polynomial Order')
plt.plot(rmse_hat, legendre_rmse, 'ko')
plt.plot(rmse_hat, training_rmse, 'b-')
plt.show()
```
# 3A
We can ignore the denominator because the probability of a class is proportional to the numerator, and the denominator is the same for every class, so the classifier can simply pick the class with the highest-valued numerator without normalizing each case.
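A tiny numeric check of this argument: dividing every class score by the same evidence term cannot change which class wins the argmax. The scores below are hypothetical prior-times-likelihood values for three classes.

```python
import numpy as np

scores = np.array([0.02, 0.10, 0.05])   # unnormalized: prior * likelihood per class
posteriors = scores / scores.sum()      # normalized by the shared denominator

print(np.argmax(scores))      # 1
print(np.argmax(posteriors))  # 1 -- same winner, so normalization is unnecessary
```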
# 3B
```
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
import matplotlib.pyplot as plt
import numpy as np
import pprint
digits = load_digits()
X = np.round(digits.data/16.)
y = digits.target
print(y)
n = y.shape[0]
X, X_test, y, y_test = train_test_split(X, y, test_size=0.33,
random_state=42)
print(len(y_test))
m = X.shape[0]
m_test = X_test.shape[0]
N = 10
d = X.shape[1]
mu_array = np.zeros((N, d))       # per-class feature means (the original (n, N) shape looks transposed)
sigma2_array = np.zeros((N, d))   # per-class feature variances
prior_array = np.zeros((N))
# NOTE: the loop below was left unfinished in the original ("for i in");
# this is one plausible completion estimating per-class statistics:
for i in range(N):
    X_i = X[y == i]
    mu_array[i] = X_i.mean(axis=0)
    sigma2_array[i] = X_i.var(axis=0)
    prior_array[i] = X_i.shape[0] / m
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
```
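The cell above imports `BernoulliNB` but never fits it. For comparison with the manual parameter estimates, a minimal sketch of the library classifier (on synthetic binary data standing in for the real train/test split) might be:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 64)).astype(float)  # 0/1 features, like np.round(digits.data/16.)
y = rng.integers(0, 10, size=200)

clf = BernoulliNB()
clf.fit(X, y)

# One log-prior per observed class, one feature log-probability per (class, pixel)
print(clf.class_log_prior_.shape[0] == len(np.unique(y)))  # True
print(clf.feature_log_prob_.shape[1])                      # 64
```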
# NumPy
NumPy is incredibly fast, as it has bindings to C libraries. For more info on why you would want to use arrays instead of lists, check out this great [StackOverflow post](http://stackoverflow.com/questions/993984/why-numpy-instead-of-python-lists).
```
import numpy as np
```
# NumPy Arrays
NumPy arrays are the main way in which NumPy is used.<br/>
NumPy arrays essentially come in two flavors: vectors and matrices.<br/>
Vectors are strictly 1-dimensional (1D) arrays and matrices are 2D (but you should note a matrix can still have only one row or one column).
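For example, the same row of numbers can be a 1D vector or a 2D one-row matrix, distinguished by `ndim` and `shape`:

```python
import numpy as np

vec = np.array([1, 2, 3])     # strictly 1-dimensional
mat = np.array([[1, 2, 3]])   # 2-dimensional, with a single row

print(vec.ndim, vec.shape)  # 1 (3,)
print(mat.ndim, mat.shape)  # 2 (1, 3)
```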
## Creating NumPy Arrays
### From a Python List
We can create an array by directly converting a list or list of lists:
```
my_list = [1,2,3]
my_list
np.array(my_list)
my_matrix = [[1,2,3],[4,5,6],[7,8,9]]
my_matrix
np.array(my_matrix)
```
## Built-in Methods
There are lots of built-in ways to generate arrays.
### arange
Return evenly spaced values within a given interval. [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.arange.html)]
```
np.arange(0,10)
np.arange(0,11,2)
```
### zeros and ones
Generate arrays of zeros or ones. [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.zeros.html)]
```
np.zeros(3)
np.zeros((5,5))
np.ones(3)
np.ones((3,3))
```
### linspace
Return evenly spaced numbers over a specified interval. [[reference](https://www.numpy.org/devdocs/reference/generated/numpy.linspace.html)]
```
np.linspace(0,10,3)
np.linspace(0,5,20)
```
<font color=green>Note that `.linspace()` *includes* the stop value. To obtain an array of common fractions, increase the number of items:</font>
```
np.linspace(0,5,21)
```
### eye
Creates an identity matrix [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.eye.html)]
```
np.eye(4)
```
## Random
Numpy also has lots of ways to create random number arrays:
### rand
Creates an array of the given shape and populates it with random samples from a uniform distribution over ``[0, 1)``. [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.rand.html)]
```
np.random.rand(2)
np.random.rand(5,5)
```
### randn
Returns a sample (or samples) from the "standard normal" distribution [σ = 1]. Unlike **rand** which is uniform, values closer to zero are more likely to appear. [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.randn.html)]
```
np.random.randn(2)
np.random.randn(5,5)
```
### randint
Returns random integers from `low` (inclusive) to `high` (exclusive). [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.randint.html)]
```
np.random.randint(1,100)
np.random.randint(1,100, (10, 10))
```
### seed
Can be used to set the random state, so that the same "random" results can be reproduced. [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.seed.html)]
```
np.random.seed(42)
np.random.rand(4)
np.random.seed(42)
np.random.rand(4)
```
## Array Attributes and Methods
Let's discuss some useful attributes and methods for an array:
```
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)
arr
ranarr
```
## Reshape
Returns an array containing the same data with a new shape. [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.reshape.html)]
```
arr.reshape(5,5)
```
### max, min, argmax, argmin
These are useful methods for finding max or min values. Or to find their index locations using argmin or argmax
```
ranarr
ranarr.max()
ranarr.argmax()
ranarr.min()
ranarr.argmin()
```
## Shape
Shape is an attribute that arrays have (not a method): [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.ndarray.shape.html)]
```
# Vector
arr.shape
# Notice the two sets of brackets
arr.reshape(1,25)
arr.reshape(1,25).shape
arr.reshape(25,1)
arr.reshape(25,1).shape
```
### dtype
You can also grab the data type of the object in the array: [[reference](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.ndarray.dtype.html)]
```
arr.dtype
arr2 = np.array([1.2, 3.4, 5.6])
arr2.dtype
```
<a href="https://colab.research.google.com/github/Scott-Huston/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/LS_DS_123_Make_Explanatory_Visualizations.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Make Explanatory Visualizations
### Objectives
- identify misleading visualizations and how to fix them
- use Seaborn to visualize distributions and relationships with continuous and discrete variables
- add emphasis and annotations to transform visualizations from exploratory to explanatory
- remove clutter from visualizations
### Links
- [How to Spot Visualization Lies](https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/)
- [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)
- [Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)
- [Seaborn example gallery](http://seaborn.pydata.org/examples/index.html) & [tutorial](http://seaborn.pydata.org/tutorial.html)
- [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/)
- [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked)
- [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)
# Avoid Misleading Visualizations
Did you find/discuss any interesting misleading visualizations in your Walkie Talkie?
## What makes a visualization misleading?
[5 Ways Writers Use Misleading Graphs To Manipulate You](https://venngage.com/blog/misleading-graphs/)
## Two y-axes
<img src="https://kieranhealy.org/files/misc/two-y-by-four-sm.jpg" width="800">
Other Examples:
- [Spurious Correlations](https://tylervigen.com/spurious-correlations)
- <https://blog.datawrapper.de/dualaxis/>
- <https://kieranhealy.org/blog/archives/2016/01/16/two-y-axes/>
- <http://www.storytellingwithdata.com/blog/2016/2/1/be-gone-dual-y-axis>
## Y-axis doesn't start at zero.
<img src="https://i.pinimg.com/originals/22/53/a9/2253a944f54bb61f1983bc076ff33cdd.jpg" width="600">
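One way to guard against this in matplotlib is to pin the bar chart's baseline at zero explicitly. A minimal sketch (using the non-interactive Agg backend so nothing needs a display):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(['A', 'B'], [98, 100])
ax.set_ylim(bottom=0)  # keep bar lengths proportional to the values
print(ax.get_ylim()[0])  # 0.0
```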
## Pie Charts are bad
<img src="https://i1.wp.com/flowingdata.com/wp-content/uploads/2009/11/Fox-News-pie-chart.png?fit=620%2C465&ssl=1" width="600">
## Pie charts that omit data are extra bad
- A guy makes a misleading chart that goes viral
What does this chart imply at first glance? You don't want your user to have to do a lot of work in order to interpret your graph correctly. You want the first-glance conclusions to be the correct ones.
<img src="https://pbs.twimg.com/media/DiaiTLHWsAYAEEX?format=jpg&name=medium" width='600'>
<https://twitter.com/michaelbatnick/status/1019680856837849090?lang=en>
- It gets picked up by overworked journalists (assuming incompetency before malice)
<https://www.marketwatch.com/story/this-1-chart-puts-mega-techs-trillions-of-market-value-into-eye-popping-perspective-2018-07-18>
- Even after the chart's implications have been refuted, it's hard to stop a bad (although compelling) visualization from being passed around.
<https://www.linkedin.com/pulse/good-bad-pie-charts-karthik-shashidhar/>
**["yea I understand a pie chart was probably not the best choice to present this data."](https://twitter.com/michaelbatnick/status/1037036440494985216)**
## Pie Charts that compare unrelated things are next-level extra bad
<img src="http://www.painting-with-numbers.com/download/document/186/170403+Legalizing+Marijuana+Graph.jpg" width="600">
## Be careful about how you use volume to represent quantities:
radius vs diameter vs volume
<img src="https://static1.squarespace.com/static/5bfc8dbab40b9d7dd9054f41/t/5c32d86e0ebbe80a25873249/1546836082961/5474039-25383714-thumbnail.jpg?format=1500w" width="600">
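The problem is quantitative: if an encoding doubles a circle's radius to represent a 2x change, the area the reader actually perceives grows 4x, and a sphere's volume grows 8x:

```python
import math

r1, r2 = 1.0, 2.0  # underlying quantities differ by 2x

area_ratio = (math.pi * r2**2) / (math.pi * r1**2)
volume_ratio = ((4/3) * math.pi * r2**3) / ((4/3) * math.pi * r1**3)

print(area_ratio)    # 4.0
print(volume_ratio)  # 8.0
```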
## Don't cherrypick timelines or specific subsets of your data:
<img src="https://wattsupwiththat.com/wp-content/uploads/2019/02/Figure-1-1.png" width="600">
Look how specifically the writer has selected what years to show in the legend on the right side.
<https://wattsupwiththat.com/2019/02/24/strong-arctic-sea-ice-growth-this-year/>
Try the tool that was used to make the graphic for yourself
<http://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/>
## Use Relative units rather than Absolute Units
<img src="https://imgs.xkcd.com/comics/heatmap_2x.png" width="600">
## Avoid 3D graphs unless having the extra dimension is effective
Usually you can split 3D graphs into multiple 2D graphs.
3D graphs that are interactive can be very cool. (See Plotly and Bokeh)
<img src="https://thumbor.forbes.com/thumbor/1280x868/https%3A%2F%2Fblogs-images.forbes.com%2Fthumbnails%2Fblog_1855%2Fpt_1855_811_o.jpg%3Ft%3D1339592470" width="600">
## Don't go against typical conventions
<img src="http://www.callingbullshit.org/twittercards/tools_misleading_axes.png" width="600">
# Tips for choosing an appropriate visualization:
## Use Appropriate "Visual Vocabulary"
[Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)
## What are the properties of your data?
- Is your primary variable of interest continuous or discrete?
- Is it in wide or long (tidy) format?
- Does your visualization involve multiple variables?
- How many dimensions do you need to include on your plot?
Can you express the main idea of your visualization in a single sentence?
How hard does your visualization make the user work in order to draw the intended conclusion?
## Which Visualization tool is most appropriate?
[Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)
## Anatomy of a Matplotlib Plot
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator, MultipleLocator, FuncFormatter
np.random.seed(19680801)
X = np.linspace(0.5, 3.5, 100)
Y1 = 3+np.cos(X)
Y2 = 1+np.cos(1+X/0.75)/2
Y3 = np.random.uniform(Y1, Y2, len(X))
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(1, 1, 1, aspect=1)
def minor_tick(x, pos):
    if not x % 1.0:
        return ""
    return "%.2f" % x
ax.xaxis.set_major_locator(MultipleLocator(1.000))
ax.xaxis.set_minor_locator(AutoMinorLocator(4))
ax.yaxis.set_major_locator(MultipleLocator(1.000))
ax.yaxis.set_minor_locator(AutoMinorLocator(4))
ax.xaxis.set_minor_formatter(FuncFormatter(minor_tick))
ax.set_xlim(0, 4)
ax.set_ylim(0, 4)
ax.tick_params(which='major', width=1.0)
ax.tick_params(which='major', length=10)
ax.tick_params(which='minor', width=1.0, labelsize=10)
ax.tick_params(which='minor', length=5, labelsize=10, labelcolor='0.25')
ax.grid(linestyle="--", linewidth=0.5, color='.25', zorder=-10)
ax.plot(X, Y1, c=(0.25, 0.25, 1.00), lw=2, label="Blue signal", zorder=10)
ax.plot(X, Y2, c=(1.00, 0.25, 0.25), lw=2, label="Red signal")
ax.plot(X, Y3, linewidth=0,
marker='o', markerfacecolor='w', markeredgecolor='k')
ax.set_title("Anatomy of a figure", fontsize=20, verticalalignment='bottom')
ax.set_xlabel("X axis label")
ax.set_ylabel("Y axis label")
ax.legend()
def circle(x, y, radius=0.15):
    from matplotlib.patches import Circle
    from matplotlib.patheffects import withStroke
    circle = Circle((x, y), radius, clip_on=False, zorder=10, linewidth=1,
                    edgecolor='black', facecolor=(0, 0, 0, .0125),
                    path_effects=[withStroke(linewidth=5, foreground='w')])
    ax.add_artist(circle)
def text(x, y, text):
    ax.text(x, y, text, backgroundcolor="white",
            ha='center', va='top', weight='bold', color='blue')
# Minor tick
circle(0.50, -0.10)
text(0.50, -0.32, "Minor tick label")
# Major tick
circle(-0.03, 4.00)
text(0.03, 3.80, "Major tick")
# Minor tick
circle(0.00, 3.50)
text(0.00, 3.30, "Minor tick")
# Major tick label
circle(-0.15, 3.00)
text(-0.15, 2.80, "Major tick label")
# X Label
circle(1.80, -0.27)
text(1.80, -0.45, "X axis label")
# Y Label
circle(-0.27, 1.80)
text(-0.27, 1.6, "Y axis label")
# Title
circle(1.60, 4.13)
text(1.60, 3.93, "Title")
# Blue plot
circle(1.75, 2.80)
text(1.75, 2.60, "Line\n(line plot)")
# Red plot
circle(1.20, 0.60)
text(1.20, 0.40, "Line\n(line plot)")
# Scatter plot
circle(3.20, 1.75)
text(3.20, 1.55, "Markers\n(scatter plot)")
# Grid
circle(3.00, 3.00)
text(3.00, 2.80, "Grid")
# Legend
circle(3.70, 3.80)
text(3.70, 3.60, "Legend")
# Axes
circle(0.5, 0.5)
text(0.5, 0.3, "Axes")
# Figure
circle(-0.3, 0.65)
text(-0.3, 0.45, "Figure")
color = 'blue'
ax.annotate('Spines', xy=(4.0, 0.35), xytext=(3.3, 0.5),
weight='bold', color=color,
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color=color))
ax.annotate('', xy=(3.15, 0.0), xytext=(3.45, 0.45),
weight='bold', color=color,
arrowprops=dict(arrowstyle='->',
connectionstyle="arc3",
color=color))
ax.text(4.0, -0.4, "Made with http://matplotlib.org",
fontsize=10, ha="right", color='.5')
plt.show()
```
# Making Explanatory Visualizations with Seaborn
Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/)
```
from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
example = Image(url=url, width=400)
display(example)
```
Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel
Links
- [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/)
- [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked)
- [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)
## Make prototypes
This helps us understand the problem
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11))
fake.plot.bar(color='C1', width=0.9);
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9);
```
## Annotate with text
```
plt.style.use('fivethirtyeight')
fig = plt.figure()
fig.patch.set_facecolor('white')
ax = fake.plot.bar(color='#ED713A', width = .9)
ax.set(facecolor = 'white')
ax.text(x=-2,y = 46, s="'An Inconvenient Sequel: Truth To Power' is divisive", fontweight = 'bold')
ax.text(x=-2, y = 43, s = 'IMDb ratings for the film as of Aug. 29')
ax.set_xticklabels(range(1,11), rotation = 0, color = '#A3A3A3')
ax.set_yticks(range(0,50,10))
ax.set_yticklabels(['0', '10', '20', '30', '40%'], color = '#A3A3A3')
plt.ylabel('Percent of total votes', fontweight = 'bold', fontsize = '12')
plt.xlabel('Rating', fontweight = 'bold', fontsize = '12')
```
## Reproduce with real data
```
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')
pd.set_option('display.max_columns', 50)
print(df.shape)
df.head(20)
df.sample(1).T
df.tail()
df.dtypes
df['timestamp'] = pd.to_datetime(df['timestamp'])
df.timestamp.describe()
df.dtypes
df.set_index(df['timestamp'], inplace = True)
df['2017-08-29']
lastday = df['2017-08-29']
lastday_filtered = lastday[lastday['category']=='IMDb users']
lastday_filtered.tail(30)
df.category.value_counts()
lastday_filtered.respondents.plot()
plt.show()
final = lastday_filtered.tail(1)
final.T
pct_columns = ['1_pct', '2_pct', '3_pct', '4_pct', '5_pct','6_pct','7_pct','8_pct','9_pct','10_pct']
final = final[pct_columns]
final.T
plot_data = final.T
plot_data.index = range(1,11)
plot_data
plt.style.use('fivethirtyeight')
fig = plt.figure()
fig.patch.set_facecolor('white')
ax = plot_data.plot.bar(color='#ED713A', width = .9, legend = False)
ax.set(facecolor = 'white')
ax.text(x=-2,y = 46, s="'An Inconvenient Sequel: Truth To Power' is divisive", fontweight = 'bold')
ax.text(x=-2, y = 43, s = 'IMDb ratings for the film as of Aug. 29')
ax.set_xticklabels(range(1,11), rotation = 0, color = '#A3A3A3')
ax.set_yticks(range(0,50,10))
ax.set_yticklabels(['0', '10', '20', '30', '40%'], color = '#A3A3A3')
plt.ylabel('Percent of total votes', fontweight = 'bold', fontsize = '12')
plt.xlabel('Rating', fontweight = 'bold', fontsize = '12', labelpad = 15)
plt.show()
```
# ASSIGNMENT
Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit).
# STRETCH OPTIONS
#### 1) Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/).
#### 2) Reproduce one of the following using a library other than Seaborn or Matplotlib.
For example:
- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) (try the [`altair`](https://altair-viz.github.io/gallery/index.html#maps) library)
- [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) (try the [`statsmodels`](https://www.statsmodels.org/stable/index.html) library)
- or another example of your choice!
#### 3) Make more charts!
Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary).
Find the chart in an example gallery of a Python data visualization library:
- [Seaborn](http://seaborn.pydata.org/examples/index.html)
- [Altair](https://altair-viz.github.io/gallery/index.html)
- [Matplotlib](https://matplotlib.org/gallery.html)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes.
Take notes. Consider sharing your work with your cohort!
```
# Stretch option #1
!pip install pandas==0.23.4
import pandas as pd
from IPython.display import display, Image
# url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
# example = Image(url=url, width=400)
# example = Image(filename = '/Users/scotthuston/Desktop/FTE_image')
# display(example)
FTE = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/checking-our-work-data/master/mlb_games.csv')
FTE.head()
prob1_bins = pd.cut(FTE['prob1'],13)
ct = pd.crosstab(FTE['prob1_outcome'], [prob1_bins])
# FTE.boxplot(column = 'prob1')
df1 = FTE[FTE['prob1'] <= .278]
df2 = FTE[(FTE['prob1'] <= .322) & (FTE['prob1']>.278)]
df3 = FTE[(FTE['prob1'] <= .367) & (FTE['prob1']>.322)]
df4 = FTE[(FTE['prob1'] <= .411) & (FTE['prob1']>.367)]
df5 = FTE[(FTE['prob1'] <= .456) & (FTE['prob1']>.411)]
df6 = FTE[(FTE['prob1'] <= .501) & (FTE['prob1']>.456)]
df7 = FTE[(FTE['prob1'] <= .545) & (FTE['prob1']>.501)]
df8 = FTE[(FTE['prob1'] <= .59) & (FTE['prob1']>.545)]
df9 = FTE[(FTE['prob1'] <= .634) & (FTE['prob1']>.59)]
df10 = FTE[(FTE['prob1'] <= .679) & (FTE['prob1']>.634)]
df11= FTE[(FTE['prob1'] <= .723) & (FTE['prob1']>.679)]
df12 = FTE[(FTE['prob1'] <= .768) & (FTE['prob1']>.723)]
df13 = FTE[(FTE['prob1'] <= .812) & (FTE['prob1']>.768)]
df1.head()
df2.head(10)
import matplotlib.pyplot as plt
import seaborn as sns
plt.errorbar(df1['prob1'],df1['prob1_outcome'], xerr = df1['prob1_outcome']-df1['prob1'])
sns.set(style="darkgrid")
lst = []
for i in range(len(df2.prob1_outcome)):
    lst.append(1)
sns.pointplot(x=lst, y="prob1_outcome", data=df2)
# df2['prob1_outcome']
```
# Bloodmeal Calling
In this notebook, we analyze contigs from each bloodfed mosquito sample whose LCA falls within *Vertebrata*. The potential bloodmeal call is the lowest taxonomic group consistent with the LCAs of all such contigs in a sample.
```
import pandas as pd
import numpy as np
from ete3 import NCBITaxa
import boto3
import tempfile
import subprocess
import os
import io
import re
import time
import json
ncbi = NCBITaxa()
df = pd.read_csv('../../figures/fig3/all_contigs_df.tsv', sep='\t',
dtype={'taxid': int})  # np.int was removed in NumPy 1.24; plain int is equivalent
df = df[df['group'] == 'Metazoa']
def taxid2name(taxid):
return ncbi.get_taxid_translator([taxid])[taxid]
```
There is a partial order on taxa: $a < b$ if $a$ is an ancestor of $b$. A taxon $t$ is admissible as a bloodmeal call for a given sample if it is consistent with every *Vertebrata* LCA taxon $b$: $t < b$ or $b < t$ for all $b$. That is, a taxon $t$ is admissible if $t \in lineage(b)$ or $b \in lineage(t)$ for all $b$.
We will report the lowest admissible taxon for each sample.
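As a toy illustration of this rule, here is the same "drop ancestors, then take the LCA of what remains" computation on hand-written lineages (the taxon names and sample are hypothetical, and no ete3 or NCBI taxonomy database is required):

```python
# Hypothetical root -> leaf lineages for three contig LCAs in one sample.
lineages = [
    ["root", "Vertebrata", "Mammalia", "Carnivora", "Canidae"],
    ["root", "Vertebrata", "Mammalia", "Carnivora"],
    ["root", "Vertebrata", "Mammalia", "Carnivora", "Felidae"],
]
# LCAs that are strict ancestors of another LCA can be ignored; the lowest
# admissible taxon is the lowest common ancestor of the remaining lineages.
non_leaf = {t for lin in lineages for t in lin[:-1]}
leaf_lineages = [lin for lin in lineages if lin[-1] not in non_leaf]
common = set(leaf_lineages[0])
for lin in leaf_lineages[1:]:
    common &= set(lin)
lowest = [t for t in leaf_lineages[0] if t in common][-1]
print(lowest)  # Carnivora
```

Here Canidae and Felidae are incomparable, so neither is admissible on its own; their common ancestor Carnivora is the lowest taxon consistent with all three LCAs.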
```
def get_lowest_admissable_taxon(taxa):
lineages = [ncbi.get_lineage(taxid) for taxid in taxa]
if len(lineages) == 0:
return 0
all_taxa = np.unique([taxid for lineage in lineages for taxid in lineage])
non_leaf_taxa = np.unique([taxid for lineage in lineages for taxid in lineage[:-1]])
leaf_taxa = [taxid for taxid in all_taxa if taxid not in non_leaf_taxa]
leaf_lineages = [ncbi.get_lineage(taxid) for taxid in leaf_taxa]
leaf_common_ancestors = set.intersection(*[set(l) for l in leaf_lineages])
lca = [taxid for taxid in leaf_lineages[0] if taxid in leaf_common_ancestors][-1]
return lca
def filter_taxon(taxid, exclude = [], # drop these taxa
exclude_children = [], # drop children of these taxa
parent=None # only keep children of the parent
):
if taxid in exclude:
return False
lineage = ncbi.get_lineage(taxid)
exclude_children = set(exclude_children)
if len(set(lineage) & set(exclude_children)) > 0:
return False
if parent and parent not in lineage:
return False
return True
vertebrate_taxid = 7742
primate_taxid = 9443
euarchontoglires_taxid = 314146
df['filter_taxon'] = df['taxid'].apply(lambda x: filter_taxon(x,
exclude = [euarchontoglires_taxid],
exclude_children = [primate_taxid],
parent = vertebrate_taxid))
```
How many nonprimate vertebrate contigs per sample? 1 to 11.
```
%pprint
sorted(df[df['filter_taxon']].groupby('sample').count()['taxid'])
sorted(df[df['filter_taxon']].groupby('sample')['reads'].sum())
lowest_admissable_taxa = []
for sample in df['sample'].unique():
taxid = get_lowest_admissable_taxon(df[(df['sample'] == sample) & df['filter_taxon']]['taxid'])
name = taxid2name(taxid) if taxid else "NA"
lowest_admissable_taxa.append({'sample': sample, 'name': name, 'taxid': taxid})
lowest_admissable_taxa = pd.DataFrame(lowest_admissable_taxa).sort_values('sample')
lowest_admissable_taxa = lowest_admissable_taxa[['sample', 'taxid', 'name']]
lowest_admissable_taxa.head()
partition = "Pecora Carnivora Homininae Rodentia Leporidae Aves".split()
partition = ncbi.get_name_translator(partition)
partition = {v[0]: k for k, v in partition.items()}
def get_category(taxid):
if not taxid:
return None
lineage = ncbi.get_lineage(taxid)
for k in partition:
    if k in lineage:
        return partition[k]
return 'NA'
```
The ranks of the categories are:
```
ncbi.get_rank(partition.keys())
bloodmeal_calls = lowest_admissable_taxa
bloodmeal_calls['category'] = bloodmeal_calls['taxid'].apply(get_category)
bloodmeal_calls = bloodmeal_calls[bloodmeal_calls['category'] != 'NA']
bloodmeal_calls = bloodmeal_calls[bloodmeal_calls['name'] != 'NA']
bloodmeal_calls = bloodmeal_calls[['sample', 'category', 'name']]
bloodmeal_calls = bloodmeal_calls.sort_values('sample')
bloodmeal_calls = bloodmeal_calls.rename(columns={'sample': 'Sample',
'category': 'Bloodmeal Category',
'name': 'Bloodmeal Call'})
metadata = pd.read_csv('../../data/metadata/CMS001_CMS002_MergedAnnotations.csv')
metadata = metadata[['NewIDseqName', 'Habitat', 'collection_lat', 'collection_long', 'ska_genus', 'ska_species']].rename(
columns = {'NewIDseqName': 'Sample',
'ska_genus': 'Genus',
'ska_species': 'Species',
'collection_lat': 'Lat',
'collection_long': 'Long'})
bloodmeal_calls = bloodmeal_calls.merge(metadata, on='Sample', how='left')
bloodmeal_calls.to_csv(
'../../figures/fig4/bloodmeal_calls.csv', index=False)
```
# LAB 4b: Create Keras DNN model.
**Learning Objectives**
1. Set CSV Columns, label column, and column defaults
1. Make dataset of features and label from CSV files
1. Create input layers for raw features
1. Create feature columns for inputs
1. Create DNN dense hidden layers and output layer
1. Create custom evaluation metric
1. Build DNN model tying all of the pieces together
1. Train and evaluate
## Introduction
In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4b_keras_dnn_babyweight.ipynb).
## Load necessary libraries
```
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
```
## Verify CSV files exist
In the seventh lab of this series [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
```
%%bash
ls *.csv
%%bash
head -5 *.csv
```
## Create Keras model
### Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
```
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
```
### Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel by using `tf.data.experimental.make_csv_dataset`. This creates a CSV dataset object. However, we will need to divide the columns into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
```
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Prefetch to overlap the input pipeline with training
# (buffer_size can also be tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(buffer_size=1)
return dataset
```
### Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
```
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]})
return inputs
```
### Create feature columns for inputs.
Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
```
def categorical_fc(name, values):
"""Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Indicator column of categorical feature.
"""
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
return tf.feature_column.indicator_column(categorical_column=cat_column)
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
feature_columns = {
colname : tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
feature_columns["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
feature_columns["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
return feature_columns
```
### Create DNN dense hidden layers and output layer.
So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and ending with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
```
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
    # Create two hidden layers of [64, 32] just like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation="relu", name="h1")(inputs)
h2 = tf.keras.layers.Dense(32, activation="relu", name="h2")(h1)
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(h2)
return output
```
### Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
```
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
```
### Build DNN model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
```
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
    # The constructor for DenseFeatures takes a list of dense feature columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
```
We can visualize the DNN using the Keras plot_model utility.
```
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
```
## Run and evaluate model
### Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
trainds = load_dataset(
pattern="train*",
batch_size=TRAIN_BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
pattern="eval*",
batch_size=1000,
mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback])
```
### Visualize loss curve
```
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
```
### Save the model
```
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
```
## Monitor and experiment with training
To begin TensorBoard from within AI Platform Notebooks, click the + symbol in the top left corner and select the **Tensorboard** icon to create a new TensorBoard. Before you click make sure you are in the directory of your TensorBoard log_dir.
In TensorBoard, look at the learned embeddings. Are they getting clustered? How about the weights for the hidden layers? What if you run this longer? What happens if you change the batchsize?
## Lab Summary:
In this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. Then, we constructed a tf.data Dataset of features and the label from the CSV files and created inputs layers for the raw features. Next, we set up feature columns for the model inputs and built a deep neural network in Keras. We created a custom evaluation metric and built our DNN model. Finally, we trained and evaluated our model.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Part 1. The k-Nearest Neighbor (kNN) Classifier
The kNN classifier:
- During training, it receives the data and simply memorizes it
- During testing, each test image is compared against every training image; the final label is derived from the labels of the k nearest training images
- The value of k is chosen via cross-validation.
This first exercise is a warm-up. It aims to build an understanding of the image classification pipeline and cross-validation, and to give you practice writing efficient vectorized code and an appreciation of its speed.
```
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
```
We want to classify the data with kNN. Recall that this process breaks down into two steps:
1. Compute the distance between every training example and every test example.
2. Given these distances, for each test example find the k nearest training examples and let them vote for the final label.
Start by computing the distance matrix between all training and all test examples. For example, if you have **Ntr** training examples and **Nte** test examples, this stage should produce a matrix of **Nte x Ntr** elements, where element (i, j) is the distance between the i-th test example and the j-th training example.
Open the file `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` there, using a (highly inefficient) nested loop over all (test, train) pairs, computing one matrix element per iteration.
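For orientation, here is a minimal sketch of what the two-loop version could look like, assuming the L2 (Euclidean) distance; the function name is illustrative, and the graded signature lives in `k_nearest_neighbor.py`:

```python
import numpy as np

def two_loop_l2(X_test, X_train):
    """Naive pairwise L2 distances: one matrix element per inner iteration."""
    num_test, num_train = X_test.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
    return dists
```

This is O(Nte * Ntr) Python-level iterations, which is exactly why the later parts of the exercise ask you to vectorize it.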
```
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
```
**Question 1** Note the structured patterns in the distance matrix: some rows and columns are distinctly brighter. (In the default color scheme, black indicates small distances and white indicates large ones.)
- What in the data causes the distinctly bright rows?
- What causes the columns?
**Your answer**: The difference between rows and columns is that one corresponds to the test set and the other to the training set, so the reasoning below is given for rows.
A "white row" means that the given test example is far from all training examples. Since the examples are vectors representing the original images, it is natural to assume this happens because some examples differ in features tied purely to the values of the vector components (i.e., to color intensity). A simple example: a plane is often easy to tell apart from a frog, because a plane is mostly white and blue while a frog is green and brown. The problem likely arises because we did not reshuffle the data when sampling it, and the original data contained some non-obvious dependencies between image indices.
Next, implement the function `predict_labels` and run the code below to get the accuracy for `k = 1`.
```
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should get roughly `27%` accuracy. Now let's try a larger value of `k`, for example `k = 5`:
```
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
This should be slightly better than with `k = 1`.
**Question 2**
We could also try a different distance metric, for example L1.
The performance of a nearest-neighbor classifier with L1 distance will not change if (select all that apply):
1. The data is preprocessed by subtracting the mean.
2. The data is preprocessed by subtracting the mean and dividing by the variance.
3. The coordinate axes of the data are rotated.
4. None of the above.
**Your answer**: Statements #1 and #2 are correct.
_Details for each item:_
1. $||x - y|| = ||(x - mean) - (y - mean)||$
The distance matrix therefore contains the same values, so the predictions, and hence the accuracy, do not change.
2. $\cfrac{||x - y||}{variance} = \left|\left|\cfrac{x - mean}{variance} - \cfrac{y - mean}{variance}\right|\right|$
In this case the distance matrix is scaled down by a factor of `variance`, which does not affect which neighbors are nearest, so the accuracy stays the same.
3. The `L1` metric is not rotation-invariant.

When the axes are rotated, the distance matrix changes, so the classifier's predictions (and hence its accuracy) may change.
Now let's speed up the distance matrix computation by using only a single loop. Implement the function `compute_distances_one_loop` and run the code below.
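As a sketch of the idea (again with an illustrative name), the single loop can broadcast one test row against the entire training matrix at once:

```python
import numpy as np

def one_loop_l2(X_test, X_train):
    """Pairwise L2 distances with one Python loop over test rows."""
    dists = np.zeros((X_test.shape[0], X_train.shape[0]))
    for i in range(X_test.shape[0]):
        # (Ntr, D) - (D,) broadcasts the i-th test row over all training rows
        dists[i, :] = np.sqrt(np.sum((X_train - X_test[i]) ** 2, axis=1))
    return dists
```

The inner loop over training examples is replaced by a single vectorized NumPy expression per test row.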
```
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
```
Finally, implement the fully vectorized version, `compute_distances_no_loops`
```
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
```
It remains to compare the performance of all three implementations
```
# Let's compare how fast the implementations are
def time_function(f, *args):
    """
    Call a function f with args and return the time (in seconds) that it took to execute.
    """
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
```
### Cross-validation
We built the classifier with the default k = 5. Now let's choose the best value of this hyperparameter using k-fold cross-validation. Split the training data into folds and, for each fold, compute the accuracy when that fold is held out as the validation set.
```
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
for k in k_choices:
    accuracies = []
    for val_fold in range(num_folds):
        classifier.train(np.delete(X_train_folds, val_fold, 0).reshape(-1, X_train_folds[0].shape[1]),
                         np.delete(y_train_folds, val_fold, 0).reshape(-1,))
        y_val_pred = classifier.predict(X_train_folds[val_fold], k=k)
        num_correct = np.sum(y_val_pred == y_train_folds[val_fold])
        # Divide by the size of the held-out fold (not num_test) to get
        # the validation accuracy for this fold
        accuracies.append(float(num_correct) / len(y_train_folds[val_fold]))
    k_to_accuracies[k] = accuracies
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
```
Let's plot accuracy as a function of k
```
# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
```
Finally, pick the best value of k and retrain the classifier on all of the training data.
```
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
**Question 3**
Which of the following statements about the $k$-Nearest Neighbor ($k$-NN) classifier are true, and for all values of $k$?
1. The accuracy on the training data will always be higher for 1-NN than for 5-NN.
2. The accuracy on the test data will always be higher for 1-NN than for 5-NN.
3. The decision boundaries of a k-NN classifier are linear.
4. The time needed to classify a test example with a k-NN classifier grows with the size of the training set.
5. None of the above.
**Your answer**: Statements #1 and #4 are correct.
_Details for each option_:
1. With `1-NN`, `predict` considers a single label from the training set. If we feed in a training example, its nearest training point is that point itself, so the training set is classified without errors. With `5-NN`, `predict` considers 5 labels, which can shift the prediction and lower the training accuracy.
2. Consider the following case: the training set contains one object of the first class and 100 objects of the second class, while the test set contains 10 objects of the second class whose nearest training point is the object of the first class. Then `1-NN` misclassifies every test object, while `5-NN` classifies them correctly.

3. First consider a training set of just two objects with different labels. The classifier splits the plane into two parts along the perpendicular bisector between them: points on the first object's side get its label, and points on the other side get the second object's label. In this example the decision boundary is linear.
However, looking at [this example](https://archive.ics.uci.edu/ml/datasets/iris), the decision boundaries are not linear overall (although pieces of them are certainly straight line segments).

We speak of hyperplanes here because the objects may have any number of features.
4. During `predict`, the distance from the test example to every object in the training set is computed, so the time is directly proportional to the training-set size.
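Point 1 can be verified with a tiny self-contained sketch (all names here are illustrative, not the assignment's API): a 1-NN classifier always scores 100% on its own training set, because each training point's nearest neighbor is itself, at distance zero.

```python
import numpy as np

# Synthetic data: 20 random 3-D points with random binary labels.
rng = np.random.default_rng(0)
X_train_toy = rng.normal(size=(20, 3))
y_train_toy = rng.integers(0, 2, size=20)

def predict_1nn(X):
    # L2 distances from each query row to every training row
    dists = np.linalg.norm(X[:, None, :] - X_train_toy[None, :, :], axis=2)
    return y_train_toy[np.argmin(dists, axis=1)]

# Each training point is its own nearest neighbor => perfect accuracy.
train_acc = np.mean(predict_1nn(X_train_toy) == y_train_toy)
print(train_acc)  # 1.0
```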
# Part 2. The SVM classifier
In this exercise you will:
- implement a fully vectorized **loss function** for the SVM classifier
- implement a fully vectorized expression for its **analytic gradient**
- **check your implementation** against the numerical gradient
- use a validation set **to tune the learning rate and regularization strength**
- **optimize** the loss function with **SGD**
- **visualize** the final learned weights
Load and preprocess the data
```
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
    del X_train, y_train
    del X_test, y_test
    print('Clear previously loaded data.')
except:
    pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print('Training data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
print('dev data shape: ', X_dev.shape)
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print(X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
```
## SVM Classifier
All of the following code should be implemented in the file **cs231n/classifiers/linear_svm.py**.
The function `svm_loss_naive` is already partially implemented for you: it computes the loss value itself in an inefficient, loop-based way.
```
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.000005)
print('loss: %f' % (loss, ))
```
The `grad` value returned by the function currently consists of zeros. Implement the gradient computation and add it to `svm_loss_naive`.
To verify the correctness of the computed gradient, compare it against a numerical gradient. The code below runs this check:
```
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad)
```
**Question 1**
Occasionally a dimension in the gradient check will not match exactly. What can cause such a discrepancy? Is it a cause for concern? Can you give a simple example where the gradient check fails in one of the dimensions? How can the frequency of such edge cases be influenced? *Hint: strictly speaking, the SVM loss is not differentiable*
**Your answer:**
Small mismatches come from the approximation error of the numerical gradient. The derivative is estimated with the central-difference formula
$$f'(x) \approx \cfrac{f(x+h) - f(x-h)}{2h}$$
whose truncation error is second order in `h`: with $h = 10^{-5}$ it is on the order of $10^{-10}$. Double-precision arithmetic, by contrast, has machine epsilon $2^{-52} \approx 2.2 \cdot 10^{-16}$, so the analytic gradient is computed far more accurately than the numerical estimate.
This problem is not serious: when updating the model parameters with gradient descent we will still move in the right direction.
A simple example where the check fails: take a function that is not differentiable at zero,
$$f(x) = x \cdot I\{x > 0\}$$
The analytic derivative is undefined at zero (one would probably have to pick some value for it there), while the central-difference formula gives
$$f'(0) \approx 0.5$$
A standard way to combat such problems is to use penalty and barrier functions.
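The example above can be checked numerically (a sketch with an illustrative step size): at the kink of $f(x) = x \cdot I\{x > 0\}$ the central difference returns 0.5, even though no two-sided derivative exists there.

```python
# f(x) = x * I{x > 0}, i.e. ReLU; not differentiable at x = 0.
f = lambda x: x * (x > 0)

h = 1e-5  # typical step size used in gradient checks
num_grad_at_kink = (f(0 + h) - f(0 - h)) / (2 * h)
print(num_grad_at_kink)  # 0.5
```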
Next, implement the vectorized version, `svm_loss_vectorized`.
```
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses should match but your vectorized implementation should be much faster.
print('difference: %f' % (loss_naive - loss_vectorized))
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss and gradient: computed in %fs' % (toc - tic))
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss and gradient: computed in %fs' % (toc - tic))
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('difference: %f' % difference)
```
### Stochastic Gradient Descent
We can now efficiently compute the loss and its gradient, and the gradient matches the numerical one. We are ready to optimize the loss.
Implement the function `LinearClassifier.train()` in the file `linear_classifier.py`
```
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4,
num_iters=1500, verbose=True)
toc = time.time()
print('That took %fs' % (toc - tic))
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
```
Now implement the `LinearClassifier.predict()` function
```
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print('training accuracy: %f' % (np.mean(y_train == y_train_pred), ))
y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred), ))
```
Tune the hyperparameters: the regularization strength and the learning rate
```
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = np.arange(1.6e-7, 1.8e-7, 0.3e-8)
regularization_strengths = np.arange(4e3, 6e3, 3e2)
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
for lr in learning_rates:
    for reg in regularization_strengths:
        clf = LinearSVM()
        clf.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1000)
        y_train_pred = clf.predict(X_train)
        train_acc = np.mean(y_train == y_train_pred)
        y_val_pred = clf.predict(X_val)
        val_acc = np.mean(y_val == y_val_pred)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = clf
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
```
It remains to visualize the learned weights for each class
```
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    # Rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
```
# Part 3. The Softmax classifier
In this exercise you will:
- implement a fully vectorized **loss function** for the Softmax classifier
- implement a fully vectorized expression for its **analytic gradient**
- **check your implementation** against the numerical gradient
- use a validation set **to tune the learning rate and regularization strength**
- **optimize** the loss function with **SGD**
- **visualize** the final learned weights
Note: the code written in Part 2 is required here.
```
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
    # subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    mask = np.random.choice(num_training, num_dev, replace=False)
    X_dev = X_train[mask]
    y_dev = y_train[mask]
    # Preprocessing: reshape the image data into rows
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))
    X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image
    # add bias dimension and transform into columns
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
    del X_train, y_train
    del X_test, y_test
    print('Clear previously loaded data.')
except:
    pass
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
```
## Softmax Classifier
The code for this section should be written in the file **cs231n/classifiers/softmax.py**.
First, implement the function `softmax_loss_naive`
```
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
```
**Question 1**
Why do we expect the loss to be close to -log(0.1)? Explain briefly.
**Your answer**: At initialization all classes have nearly identical scores (close to 0), so the `softmax` values are nearly uniform. With 10 classes we get
$$Loss \approx -\log\left(\cfrac{e^{0}}{\sum_{1}^{10}e^{0}}\right) = -\log(0.1)$$
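A quick numeric check of this reasoning (a standalone sketch, not part of the assignment code): with all-zero scores, the softmax distribution over the 10 classes is uniform, so the cross-entropy loss equals -log(0.1).

```python
import numpy as np

# All 10 class scores equal => uniform softmax probabilities of 0.1 each.
scores = np.zeros(10)
probs = np.exp(scores) / np.sum(np.exp(scores))
loss = -np.log(probs[0])  # any class index works; the distribution is uniform
print(loss, -np.log(0.1))  # both ~2.302585
```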
Extend your implementation so that it also returns the correct gradient. The cell below checks it against the numerical gradient.
```
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
```
Now implement the function `softmax_loss_vectorized`, which computes the same loss value and gradient using vectorized operations.
```
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
```
Use the validation set to tune the regularization strength and learning rate hyperparameters.
```
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = np.arange(1.6e-7, 1.8e-7, 0.3e-8)
regularization_strengths = np.arange(4e3, 6e3, 3e2)
for lr in learning_rates:
    for reg in regularization_strengths:
        clf = Softmax()
        clf.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1000)
        y_train_pred = clf.predict(X_train)
        train_acc = np.mean(y_train == y_train_pred)
        y_val_pred = clf.predict(X_val)
        val_acc = np.mean(y_val == y_val_pred)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_softmax = clf
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
```
Finally, compute the test accuracy of the best classifier.
```
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
```
**Question 2**
Is it possible that adding a new example to the training data leaves the SVM loss unchanged while the Softmax loss changes?
**Your answer**: Yes, it is possible.
_Details_: Suppose we are solving a three-class classification problem and the training set currently contains two examples (correct-class scores in bold; the total loss is the mean over the objects):
| Scores | Object 1 | Object 2 |
|---------|----------|----------|
| Class 1 | **3.2** | 2.2 |
| Class 2 | 5.1 | 2.5 |
| Class 3 | -1.7 | **-3.1** |
| Object Loss | Object 1 | Object 2 |
|--------------|----------|----------|
| SVM Loss | 2.9 | 12.9 |
| Softmax Loss | 2.04 | 6.16 |
| Total Loss | SVM | Softmax |
|------------|---------|---------|
| Value | 7.9 | 4.1 |
Now add one more object to the set:
| Scores | Object 1 | Object 2 | Object 3 |
|---------|----------|----------|----------|
| Class 1 | **3.2** | 2.2 | 7.0 |
| Class 2 | 5.1 | 2.5 | **1.0** |
| Class 3 | -1.7 | **-3.1** | 0.9 |
| Object Loss | Object 1 | Object 2 | Object 3 |
|--------------|----------|----------|----------|
| SVM Loss | 2.9 | 12.9 | 7.9 |
| Softmax Loss | 2.04 | 6.16 | 6.0 |
| Total Loss | SVM | Softmax |
|------------|---------|---------|
| Value | 7.9 | 4.73 |
As we can see, the total `SVM` loss stayed the same, while the total `Softmax` loss changed.
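The tables above can be reproduced with a short NumPy sketch (the `svm_loss`/`softmax_loss` helpers below are illustrative one-example losses, not the assignment's vectorized API):

```python
import numpy as np

def svm_loss(scores, correct, margin=1.0):
    # Multiclass hinge loss for a single example
    m = np.maximum(0, scores - scores[correct] + margin)
    m[correct] = 0
    return m.sum()

def softmax_loss(scores, correct):
    # Cross-entropy loss for a single example (shifted for numerical stability)
    shifted = scores - scores.max()
    return -shifted[correct] + np.log(np.exp(shifted).sum())

# (scores, index of the correct class) for each object from the tables
objects = [(np.array([3.2, 5.1, -1.7]), 0),
           (np.array([2.2, 2.5, -3.1]), 2)]
new_obj = (np.array([7.0, 1.0, 0.9]), 1)

for name, loss_fn in [('SVM', svm_loss), ('Softmax', softmax_loss)]:
    before = np.mean([loss_fn(s, c) for s, c in objects])
    after = np.mean([loss_fn(s, c) for s, c in objects + [new_obj]])
    print('%s: %.2f -> %.2f' % (name, before, after))
# SVM: 7.90 -> 7.90
# Softmax: 4.10 -> 4.73
```

The new object's SVM loss happens to equal the old mean (7.9), so the mean is unchanged; the Softmax loss of any new example is strictly positive and almost never equals the old mean exactly.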
It remains to visualize the learned weights for each class
```
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    # Rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
```
```
%env CUDA_VISIBLE_DEVICES=1
DATA_DIR='/home/HDD6TB/datasets/emotions/zoom/'
import os
from PIL import Image
import cv2
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier,RandomForestRegressor
from sklearn import svm,metrics,preprocessing
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.metrics.pairwise import pairwise_distances
from collections import defaultdict
import os
import random
import numpy as np
from tqdm import tqdm
import time
import pickle
import pandas as pd
import matplotlib.pyplot as plt
compare_filenames=lambda x: int(os.path.splitext(x)[0])
video_path=os.path.join(DATA_DIR,'videos/4.mp4')
print(video_path)
faces_path=os.path.join(DATA_DIR,'faces/mtcnn_new/4')
```
# Face detection + OCR
```
import tensorflow as tf
print(tf.__version__)
from tensorflow.compat.v1.keras.backend import set_session
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess=tf.compat.v1.Session(config=config)
set_session(sess)
from facial_analysis import FacialImageProcessing
imgProcessing=FacialImageProcessing(False)
import numpy as np
import cv2
import math
from skimage import transform as trans
def get_iou(bb1, bb2):
    """
    Calculate the Intersection over Union (IoU) of two bounding boxes.

    Parameters
    ----------
    bb1 : array
        order: {'x1', 'y1', 'x2', 'y2'}
        The (x1, y1) position is at the top left corner,
        the (x2, y2) position is at the bottom right corner
    bb2 : array
        order: {'x1', 'y1', 'x2', 'y2'}
        The (x1, y1) position is at the top left corner,
        the (x2, y2) position is at the bottom right corner

    Returns
    -------
    float
        in [0, 1]
    """
    # determine the coordinates of the intersection rectangle
    x_left = max(bb1[0], bb2[0])
    y_top = max(bb1[1], bb2[1])
    x_right = min(bb1[2], bb2[2])
    y_bottom = min(bb1[3], bb2[3])
    if x_right < x_left or y_bottom < y_top:
        return 0.0
    # The intersection of two axis-aligned bounding boxes is always an
    # axis-aligned bounding box
    intersection_area = (x_right - x_left) * (y_bottom - y_top)
    # compute the area of both AABBs
    bb1_area = (bb1[2] - bb1[0]) * (bb1[3] - bb1[1])
    bb2_area = (bb2[2] - bb2[0]) * (bb2[3] - bb2[1])
    # compute the intersection over union by taking the intersection
    # area and dividing it by the sum of prediction + ground-truth
    # areas - the intersection area
    iou = intersection_area / float(bb1_area + bb2_area - intersection_area)
    return iou
#print(get_iou([10,10,20,20],[15,15,25,25]))
def preprocess(img, bbox=None, landmark=None, **kwargs):
    M = None
    image_size = [224, 224]
    src = np.array([
        [30.2946, 51.6963],
        [65.5318, 51.5014],
        [48.0252, 71.7366],
        [33.5493, 92.3655],
        [62.7299, 92.2041]], dtype=np.float32)
    if image_size[1] == 224:
        src[:, 0] += 8.0
        src *= 2
    if landmark is not None:
        dst = landmark.astype(np.float32)
        tform = trans.SimilarityTransform()
        tform.estimate(dst, src)
        M = tform.params[0:2, :]
        #M = cv2.estimateRigidTransform(dst.reshape(1,5,2), src.reshape(1,5,2), False)
    if M is None:
        if bbox is None:  # use center crop
            det = np.zeros(4, dtype=np.int32)
            det[0] = int(img.shape[1] * 0.0625)
            det[1] = int(img.shape[0] * 0.0625)
            det[2] = img.shape[1] - det[0]
            det[3] = img.shape[0] - det[1]
        else:
            det = bbox
        margin = 0  # kwargs.get('margin', 44)
        bb = np.zeros(4, dtype=np.int32)
        bb[0] = np.maximum(det[0] - margin // 2, 0)
        bb[1] = np.maximum(det[1] - margin // 2, 0)
        bb[2] = np.minimum(det[2] + margin // 2, img.shape[1])
        bb[3] = np.minimum(det[3] + margin // 2, img.shape[0])
        ret = img[bb[1]:bb[3], bb[0]:bb[2], :]
        if len(image_size) > 0:
            ret = cv2.resize(ret, (image_size[1], image_size[0]))
        return ret
    else:  # do align using landmark
        assert len(image_size) == 2
        warped = cv2.warpAffine(img, M, (image_size[1], image_size[0]), borderValue=0.0)
        return warped
import os
import pytesseract
from tqdm import tqdm
if not os.path.exists(faces_path):
os.mkdir(faces_path)
cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS)
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print('total_frames:',total_frames)
cap.set(cv2.CAP_PROP_POS_FRAMES,1)
frame_count = 0
counter=0
bboxes,all_text=[],[]
for frame_count in tqdm(range(total_frames-1)):
ret, frame_bgr = cap.read()
counter+=1
if not ret:
#cap.release()
#break
continue
frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
bounding_boxes, points = imgProcessing.detect_faces(frame)
points = points.T
if len(bounding_boxes)!=0:
sorted_indices=bounding_boxes[:,0].argsort()
bounding_boxes=bounding_boxes[sorted_indices]
points=points[sorted_indices]
faces_folder=os.path.join(faces_path, str(counter))
if not os.path.exists(faces_folder):
os.mkdir(faces_folder)
for i,b in enumerate(bounding_boxes):
outfile=os.path.join(faces_folder, str(i)+'.png')
if not os.path.exists(outfile):
if True: # landmark alignment disabled; use the bounding-box crop only
p=None
else:
p=points[i]
p = p.reshape((2,5)).T
face_img=preprocess(frame_bgr,b,p)
if np.prod(face_img.shape)==0:
print('Empty face ',b,' found for ',filename)
continue
cv2.imwrite(outfile, face_img)
bboxes.append(bounding_boxes)
frame = cv2.resize(frame, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_LINEAR)
results=pytesseract.image_to_data(frame,lang='rus+eng',output_type=pytesseract.Output.DICT)
frame_text=[]
for i in range(0, len(results["text"])):
x = results["left"][i]
y = results["top"][i]
w = results["width"][i]
h = results["height"][i]
text = results["text"][i].strip()
conf = float(results["conf"][i])
if conf > 0 and len(text)>1:
frame_text.append((text,int(x/frame.shape[1]*frame_bgr.shape[1]),int(y/frame.shape[0]*frame_bgr.shape[0]),
int(w/frame.shape[1]*frame_bgr.shape[1]),int(h/frame.shape[0]*frame_bgr.shape[0])))
all_text.append(frame_text)
cap.release()
```
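As a quick sanity check of the IoU computation above, a standalone re-implementation (the helper name `iou` here is hypothetical, mirroring `get_iou`) gives the expected value for two partially overlapping boxes:

```python
# Standalone sketch mirroring get_iou above, for a quick sanity check.
def iou(bb1, bb2):
    # Intersection rectangle of two [x1, y1, x2, y2] boxes
    x_left, y_top = max(bb1[0], bb2[0]), max(bb1[1], bb2[1])
    x_right, y_bottom = min(bb1[2], bb2[2]), min(bb1[3], bb2[3])
    if x_right < x_left or y_bottom < y_top:
        return 0.0
    inter = (x_right - x_left) * (y_bottom - y_top)
    area1 = (bb1[2] - bb1[0]) * (bb1[3] - bb1[1])
    area2 = (bb2[2] - bb2[0]) * (bb2[3] - bb2[1])
    return inter / float(area1 + area2 - inter)

# Two 10x10 boxes overlapping in a 5x5 region: IoU = 25 / (100 + 100 - 25)
print(iou([10, 10, 20, 20], [15, 15, 25, 25]))  # → ~0.1429
```

The commented-out check in the cell above (`get_iou([10,10,20,20],[15,15,25,25])`) should print the same value.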
## Text processing
```
def combine_words(photo_text):
#print(photo_text)
if len(photo_text)>0:
new_text=[photo_text[0]]
for word_ind in range(1,len(photo_text)):
prev_text,x1,y1,w1,h1=new_text[-1]
center1_x,center1_y=x1+w1,y1+h1/2
cur_text,x2,y2,w2,h2=photo_text[word_ind]
center2_x,center2_y=x2,y2+h2/2
dist=abs(center1_x-center2_x)+abs(center1_y-center2_y)
#print(prev_text,cur_text,dist)
if dist>=7: #0.01:
new_text.append(photo_text[word_ind])
else:
new_text[-1]=(prev_text+' '+cur_text,x1,y1,x2+w2-x1,y2+h2-y1)
else:
new_text=[]
return new_text
def get_closest_texts(bboxes,photo_text):
best_texts,best_distances=[],[]
for (x1,y1,x2,y2,_) in bboxes:
face_x,face_y=x1,y2
#print(x1,y1,x2,y2)
best_dist=10000
best_text=''
for (text,x,y,w,h) in photo_text:
if y>y1:
dist_y=abs(face_y-y)
if face_x<x:
dist_x=x-face_x
elif face_x>x+w:
dist_x=face_x-x-w
else:
dist_x=0
#print(text,dist_x, dist_y,x,y,w,h)
if dist_x<best_dist and dist_y<1.5*(y2-y1):
best_dist=dist_x
best_text=text
#print(best_text,best_dist,(x2-x1))
if best_dist>=(x2-x1)*2:
best_text=''
if best_text!='':
for i,prev_txt in enumerate(best_texts):
if prev_txt==best_text:
if best_distances[i]<best_dist:
best_text=''
break
else:
best_texts[i]=''
best_texts.append(best_text)
best_distances.append(best_dist)
return best_texts
```
# FaceId
```
import torch
from PIL import Image
from torchvision import datasets, transforms
print(f"Torch: {torch.__version__}")
device = 'cuda'
import timm
model=timm.create_model('tf_efficientnet_b0_ns', pretrained=False)
model.classifier=torch.nn.Identity()
model.load_state_dict(torch.load('../models/pretrained_faces/state_vggface2_enet0_new.pt'))
model=model.to(device)
model.eval()
test_transforms = transforms.Compose(
[
transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
]
)
embeddings=[]
i=0
for filename in tqdm(sorted(os.listdir(faces_path), key=compare_filenames)):
faces_dir=os.path.join(faces_path,filename)
imgs=[]
for img_name in sorted(os.listdir(faces_dir), key=compare_filenames):
img = Image.open(os.path.join(faces_dir,img_name))
img_tensor = test_transforms(img)
imgs.append(img_tensor)
if len(imgs)>0:
scores = model(torch.stack(imgs, dim=0).to(device))
scores=scores.data.cpu().numpy()
else:
scores=[]
embeddings.append(scores)
if len(scores)!=len(bboxes[i]):
print('Error',videoname,filename,i,len(scores),len(bboxes[i]))
i+=1
print(len(embeddings))
```
## Faces only
```
from sklearn import preprocessing
from sklearn.metrics import pairwise_distances
face_files=[]
subjects=None
X_recent_features=None
for i,filename in enumerate(sorted(os.listdir(faces_path), key=compare_filenames)):
f=preprocessing.normalize(embeddings[i],norm='l2')
if X_recent_features is None:
for face_ind in range(len(f)):
face_files.append([(i,filename,face_ind)])
X_recent_features=f
else:
dist_matrix=pairwise_distances(f,X_recent_features)
sorted_indices=dist_matrix.argsort(axis=1)
for face_ind,sorted_inds in enumerate(sorted_indices):
closest_ind=sorted_inds[0]
min_dist=dist_matrix[face_ind][closest_ind]
if min_dist<0.85 or (len(sorted_inds)>1 and min_dist<dist_matrix[face_ind][sorted_inds[1]]-0.1):
X_recent_features[closest_ind]=f[face_ind]
face_files[closest_ind].append((i,filename,face_ind))
else:
face_files.append([(i,filename,face_ind)])
X_recent_features=np.concatenate((X_recent_features,[f[face_ind]]),axis=0)
print(len(face_files), [len(files) for files in face_files])
```
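The incremental grouping above boils down to greedy online clustering: each new face embedding either joins its nearest existing track (when the distance is below a threshold) or starts a new one. A minimal self-contained sketch of that idea, with the 0.85 threshold from the cell above and everything else (names, toy embeddings) illustrative:

```python
import numpy as np

def assign_tracks(frames_embeddings, threshold=0.85):
    """Greedily match each face embedding to its closest existing track, or start a new one."""
    tracks = []        # one representative (most recently seen) embedding per track
    memberships = []   # list of (frame_index, face_index) tuples per track
    for i, frame in enumerate(frames_embeddings):
        for face_ind, emb in enumerate(frame):
            matched = False
            if tracks:
                dists = np.linalg.norm(np.asarray(tracks) - emb, axis=1)
                closest = int(dists.argmin())
                if dists[closest] < threshold:
                    tracks[closest] = emb              # refresh the track's representative
                    memberships[closest].append((i, face_ind))
                    matched = True
            if not matched:
                tracks.append(emb)
                memberships.append([(i, face_ind)])
    return memberships

# Two frames: the second frame's single face is close to the first track, so it joins it.
frames = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[0.9, 0.1]])]
print(assign_tracks(frames))  # → [[(0, 0), (1, 0)], [(0, 1)]]
```

The notebook's version additionally refreshes the matched track's embedding with L2-normalized features and (in the next section) gates matches by bounding-box IoU.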
## Faces+bboxes
```
def get_square(bb):
return abs((bb[2]-bb[0])*(bb[3]-bb[1]))
SQUARE_THRESHOLD=900
face_files=[]
subjects=None
X_recent_features=None
recent_bboxes=[]
for i,filename in enumerate(sorted(os.listdir(faces_path), key=compare_filenames)):
f=preprocessing.normalize(embeddings[i],norm='l2')
if X_recent_features is None:
large_face_indices=[]
for face_ind in range(len(f)):
if get_square(bboxes[i][face_ind])>SQUARE_THRESHOLD:
large_face_indices.append(face_ind)
recent_bboxes.append(bboxes[i][face_ind])
face_files.append([(i,filename,face_ind)])
if len(large_face_indices)>0:
X_recent_features=f[np.array(large_face_indices)]
#print(X_recent_features.shape)
#recent_bboxes=list(deepcopy(bboxes[i]))
else:
matched_faces=[]
for face_ind,face_bbox in enumerate(bboxes[i]):
closest_ind=-1
best_iou=0
for ind, bbox in enumerate(recent_bboxes):
iou=get_iou(face_bbox,bbox)
if iou>best_iou:
best_iou=iou
closest_ind=ind
if best_iou>0.15:
d=np.linalg.norm(f[face_ind]-X_recent_features[closest_ind])
if d<1.0:
X_recent_features[closest_ind]=f[face_ind]
face_files[closest_ind].append((i,filename,face_ind))
recent_bboxes[closest_ind]=bboxes[i][face_ind]
matched_faces.append(face_ind)
if len(matched_faces)<len(bboxes[i]):
dist_matrix=pairwise_distances(f,X_recent_features)
sorted_indices=dist_matrix.argsort(axis=1)
for face_ind,sorted_inds in enumerate(sorted_indices):
if face_ind in matched_faces or get_square(bboxes[i][face_ind])<=SQUARE_THRESHOLD:
continue
closest_ind=sorted_inds[0]
min_dist=dist_matrix[face_ind][closest_ind]
if min_dist<0.85:# or (len(sorted_inds)>1 and min_dist<dist_matrix[face_ind][sorted_inds[1]]-0.1):
X_recent_features[closest_ind]=f[face_ind]
face_files[closest_ind].append((i,filename,face_ind))
recent_bboxes[closest_ind]=bboxes[i][face_ind]
else:
face_files.append([(i,filename,face_ind)])
X_recent_features=np.concatenate((X_recent_features,[f[face_ind]]),axis=0)
recent_bboxes.append(bboxes[i][face_ind])
#print(filename,i,X_recent_features.shape,face_ind,closest_ind,dist_matrix[face_ind][closest_ind])
#print(dist_matrix)
print(len(face_files), [len(files) for files in face_files])
```
## Text + faces
```
import editdistance
def levenstein(txt1,txt2):
if txt1=='' or txt2=='':
return 1
#return editdistance.eval(txt1,txt2)
return (editdistance.eval(txt1,txt2))/(max(len(txt1),len(txt2)))
def get_name(name2count):
#print(name2count)
return max(name2count, key=name2count.get)
face_files=[]
recent_texts=[]
X_recent_features=[]
for i,filename in enumerate(sorted(os.listdir(faces_path), key=compare_filenames)):
photo_text=combine_words(all_text[i])
best_texts=get_closest_texts(bboxes[i],photo_text)
f=preprocessing.normalize(embeddings[i],norm='l2')
if len(recent_texts)==0:
for face_ind,txt in enumerate(best_texts):
if len(txt)>=4:
recent_texts.append({txt:1})
face_files.append([(i,filename,face_ind)])
X_recent_features.append(f[face_ind])
else:
for face_ind,txt in enumerate(best_texts):
if len(txt)>=4:
closest_ind=-1
best_d_txt=1
for ind,recent_text_set in enumerate(recent_texts):
d_txt=min([levenstein(txt,recent_text) for recent_text in recent_text_set])
if d_txt<best_d_txt:
best_d_txt=d_txt
closest_ind=ind
face_dist=np.linalg.norm(X_recent_features[closest_ind]-f[face_ind])
if (best_d_txt<=0.45 and face_dist<=1.0) or face_dist<=0.8:
if txt in recent_texts[closest_ind]:
recent_texts[closest_ind][txt]+=1
else:
recent_texts[closest_ind][txt]=1
face_files[closest_ind].append((i,filename,face_ind))
X_recent_features[closest_ind]=f[face_ind]
elif best_d_txt>0.45:
recent_texts.append({txt:1})
face_files.append([(i,filename,face_ind)])
X_recent_features.append(f[face_ind])
#print(videoname,filename,i,face_ind,face_dist,txt,best_d_txt,recent_texts[closest_ind])
subjects=[get_name(name2count) for name2count in recent_texts]
```
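The normalized edit distance used above can be reproduced without the `editdistance` dependency; a plain dynamic-programming Levenshtein (an illustrative sketch, not the notebook's code) behaves the same way:

```python
def norm_levenshtein(a, b):
    """Edit distance divided by the longer length, in [0, 1]; an empty input counts as maximally distant."""
    if a == '' or b == '':
        return 1
    # Classic DP: prev[j] holds the distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1] / max(len(a), len(b))

print(norm_levenshtein('IVANOV', 'IVAN0V'))  # one substitution over 6 chars → ~0.167
```

Dividing by the longer string's length keeps OCR name variants of different lengths comparable under the 0.45 threshold used above.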
---------------
```
import random
import matplotlib.pyplot as plt
plt_ind=1
minNoPhotos=20
min_num_files=100
no_clusters=len([i for i,files in enumerate(face_files) if len(files)>min_num_files])
plt.figure(figsize=(10,10))
for i,files in enumerate(face_files):
if len(files)>min_num_files:
print(i,len(files),files[0])
for j in range(minNoPhotos):
f=random.choice(files)
fpath=os.path.join(faces_path,f[1],str(f[2])+'.png')
plt.subplot(no_clusters,minNoPhotos,plt_ind)
if j==0 and subjects is not None:
plt.title(subjects[i])
plt.imshow(Image.open(fpath))
plt.axis('off')
plt_ind+=1
plt.show()
```
# Emotions
```
if False: # set to True to use the larger enet_b2_8 model
model_name='enet_b2_8'
IMG_SIZE=260 #224 #
else:
model_name='enet_b0_8_best_afew'
IMG_SIZE=224
PATH='../models/affectnet_emotions/'+model_name+'.pt'
test_transforms = transforms.Compose(
[
transforms.Resize((IMG_SIZE,IMG_SIZE)),
#transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
]
)
feature_extractor_model = torch.load(PATH)
classifier_weights=feature_extractor_model.classifier[0].weight.cpu().data.numpy()
classifier_bias=feature_extractor_model.classifier[0].bias.cpu().data.numpy()
print(classifier_weights.shape,classifier_weights)
print(classifier_bias.shape,classifier_bias)
feature_extractor_model.classifier=torch.nn.Identity()
feature_extractor_model.eval()
def get_probab(features):
x=np.dot(features,np.transpose(classifier_weights))+classifier_bias
#print(x)
e_x = np.exp(x - np.max(x,axis=1,keepdims=True))
return e_x / e_x.sum(axis=1)[:,None]
if len(classifier_bias)==7:
idx_to_class={0: 'Anger', 1: 'Disgust', 2: 'Fear', 3: 'Happiness', 4: 'Neutral', 5: 'Sadness', 6: 'Surprise'}
INTERESTING_STATES=[0,1,2,3,6]
else:
idx_to_class={0: 'Anger', 1: 'Contempt', 2: 'Disgust', 3: 'Fear', 4: 'Happiness', 5: 'Neutral', 6: 'Sadness', 7: 'Surprise'}
INTERESTING_STATES=[0,2,3,4,7]
print(idx_to_class)
X_global_features,X_scores=[],[]
for filename in tqdm(sorted(os.listdir(faces_path), key=compare_filenames)):
faces_dir=os.path.join(faces_path,filename)
imgs=[]
for img_name in sorted(os.listdir(faces_dir), key=compare_filenames):
img = Image.open(os.path.join(faces_dir,img_name))
img_tensor = test_transforms(img)
if img.size:
imgs.append(img_tensor)
if len(imgs)>0:
features = feature_extractor_model(torch.stack(imgs, dim=0).to(device))
features=features.data.cpu().numpy()
scores=get_probab(features)
#print(videoname,filename,features.shape,scores.shape)
X_global_features.append(features)
X_scores.append(scores)
```
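The `get_probab` helper above reapplies the saved classifier head outside the network: a single affine map of the extracted features followed by a row-wise softmax. A minimal numpy sketch with toy shapes (all sizes and values here are illustrative, not the model's real dimensions):

```python
import numpy as np

def softmax_rows(x):
    # Subtract each row's max for numerical stability, then normalize row-wise
    e_x = np.exp(x - x.max(axis=1, keepdims=True))
    return e_x / e_x.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 16))   # 4 faces, 16-dim features (toy sizes)
weights = rng.normal(size=(8, 16))    # 8 emotion classes
bias = rng.normal(size=8)
probs = softmax_rows(features @ weights.T + bias)
print(probs.shape, probs.sum(axis=1))  # (4, 8); each row sums to 1
```

Keeping the head as plain numpy arrays lets the notebook reuse one forward pass of the feature extractor for both the stored global features and the per-class scores.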
# Create gifs
```
from IPython import display
from PIL import Image, ImageFont, ImageDraw
min_num_files=100
unicode_font = ImageFont.truetype("DejaVuSans.ttf", 8)
gif=[]
no_clusters=len([i for i,files in enumerate(face_files) if len(files)>min_num_files])
for subject_ind,files in enumerate(face_files):
if len(files)>min_num_files:
print(len(files),files[0])
prev_filename_ind=-1
start_i=0
current_scores,current_features=[],[]
current_emotion=-1
emotion2longest_sequence={}
for i,(file_ind,filename,face_ind) in enumerate(files):
filename_ind=int(filename)
if prev_filename_ind==-1:
prev_filename_ind=filename_ind-1
new_emotion=np.argmax(X_scores[file_ind][face_ind])
#print('check',prev_filename_ind,filename_ind-1, new_emotion,current_emotion)
if prev_filename_ind!=filename_ind-1 or new_emotion!=current_emotion or new_emotion not in INTERESTING_STATES:
if len(current_scores)>=10:
emotion=np.argmax(np.mean(current_scores,axis=0))
if emotion in emotion2longest_sequence:
if emotion2longest_sequence[emotion][0]<len(current_scores):
emotion2longest_sequence[emotion]=(len(current_scores),start_i,i-1)
else:
emotion2longest_sequence[emotion]=(len(current_scores),start_i,i-1)
#print(start_i,i-1,idx_to_class[emotion])
start_i=i
current_scores,current_features=[],[]
prev_filename_ind=filename_ind
current_emotion=new_emotion
current_scores.append(X_scores[file_ind][face_ind])
current_features.append(X_global_features[file_ind][face_ind])
if len(emotion2longest_sequence)>0:
for emotion, (_,start_i, end_i) in emotion2longest_sequence.items():
print(idx_to_class[emotion],start_i,end_i,len(files))
for i in range(start_i,min(start_i+20,end_i)+1):
#print(files[i])
fpath=os.path.join(faces_path,files[i][1],str(files[i][2])+'.png')
img=Image.open(fpath)
img = img.resize((112,112), Image.LANCZOS)  # ANTIALIAS alias was removed in Pillow 10
draw = ImageDraw.Draw(img)
draw.text((0, 0), subjects[subject_ind], align ="left", font=unicode_font,fill=(0,0,255,255))
draw.text((0, 10), idx_to_class[emotion], align ="left",font=unicode_font, fill=(0,255,0,255))
gif.append(img.convert("P",palette=Image.ADAPTIVE))
if False:
for img in gif:
display.clear_output(wait=True)
plt.axis('off')
plt.imshow(img)
plt.show()
if True and len(gif)>0:
gif[0].save('emo.gif', save_all=True,optimize=False, append_images=gif[1:],disposal=2)
```
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import json # to read json
def squad_json_to_dataframe_train(input_file_path, record_path = ['data','paragraphs','qas','answers'],
verbose = 1):
"""
input_file_path: path to the squad json file.
record_path: path to deepest level in json file default value is
['data','paragraphs','qas','answers']
verbose: 0 to suppress it default is 1
"""
if verbose:
print("Reading the json file")
file = json.loads(open(input_file_path).read())
if verbose:
print("processing...")
# parsing different level's in the json file
js = pd.json_normalize(file, record_path)
m = pd.json_normalize(file, record_path[:-1])
r = pd.json_normalize(file, record_path[:-2])
#combining it into single dataframe
idx = np.repeat(r['context'].values, r.qas.str.len())
ndx = np.repeat(m['id'].values,m['answers'].str.len())
m['context'] = idx
js['q_idx'] = ndx
main = pd.concat([ m[['id','question','context']].set_index('id'),js.set_index('q_idx')], axis=1, sort=False).reset_index()
main['c_id'] = main['context'].factorize()[0]
if verbose:
print("shape of the dataframe is {}".format(main.shape))
print("Done")
return main
!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json
input_file_path = 'train-v2.0.json'
record_path = ['data','paragraphs','qas','answers']
train = squad_json_to_dataframe_train(input_file_path=input_file_path,record_path=record_path)
train.head()
contexts = train['context'].unique()
contexts_size = contexts.size
print('The total number of unique contexts are = ', contexts_size)
import random
total_text = []
number_of_contexts = 30 # Set the number of contexts you want to concatenate to create the data
for index, row in train.iterrows():
text = ""
for i in range(number_of_contexts):
curr_ques = row['question']
curr_context = row['context']
random_int = random.randint(0, contexts_size-1)
context_text = contexts[random_int]
if context_text!=curr_context:
text=text+context_text
if i==number_of_contexts/2:
text=text+curr_context
total_text.append(text)
train['total_text']=total_text
train.head()
final_data = train[['question', 'context', 'total_text','text']].copy()
final_data = final_data.rename(columns={'text': 'answer'})
final_data.head()
from google.colab import files
final_data.to_csv('final_data.csv')
files.download('final_data.csv')
!wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json
def squad_json_to_dataframe_dev(input_file_path, record_path = ['data','paragraphs','qas','answers'],
verbose = 1):
"""
input_file_path: path to the squad json file.
record_path: path to deepest level in json file default value is
['data','paragraphs','qas','answers']
verbose: 0 to suppress it default is 1
"""
if verbose:
print("Reading the json file")
file = json.loads(open(input_file_path).read())
if verbose:
print("processing...")
# parsing different level's in the json file
js = pd.json_normalize(file, record_path)
m = pd.json_normalize(file, record_path[:-1])
r = pd.json_normalize(file, record_path[:-2])
#combining it into single dataframe
idx = np.repeat(r['context'].values, r.qas.str.len())
# ndx = np.repeat(m['id'].values,m['answers'].str.len())
m['context'] = idx
# js['q_idx'] = ndx
main = m[['id','question','context','answers']].set_index('id').reset_index()
main['c_id'] = main['context'].factorize()[0]
if verbose:
print("shape of the dataframe is {}".format(main.shape))
print("Done")
return main
input_file_path = 'dev-v2.0.json'
record_path = ['data','paragraphs','qas','answers']
verbose = 0
dev = squad_json_to_dataframe_dev(input_file_path=input_file_path,record_path=record_path)
dev.head()
```
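The `record_path` flattening that both loaders rely on can be seen on a toy SQuAD-shaped dict: `pd.json_normalize` walks the nested lists down to the requested level (the dict below is made up for illustration):

```python
import pandas as pd

toy = {"data": [{"title": "T", "paragraphs": [{
    "context": "Paris is the capital of France.",
    "qas": [{"id": "q1", "question": "What is the capital of France?",
             "answers": [{"text": "Paris", "answer_start": 0}]}]}]}]}

# Deepest level: one row per answer
js = pd.json_normalize(toy["data"], record_path=["paragraphs", "qas", "answers"])
# One level up: one row per question (keeps 'id' and 'question')
m = pd.json_normalize(toy["data"], record_path=["paragraphs", "qas"])
print(js.columns.tolist())  # ['text', 'answer_start']
print(m[["id", "question"]].to_dict("records"))
```

The loaders then stitch the levels back together by repeating each context over its questions (`np.repeat`) and joining on the question id.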
<table align="center" width=100%>
<tr>
<td width="15%">
<img src="edaicon.png">
</td>
<td>
<div align="center">
<font color="#21618C" size=24px>
<b>Exploratory Data Analysis
</b>
</font>
</div>
</td>
</tr>
</table>
## Problem Statement
This Zomato exploratory data analysis helps foodies find the best and most value-for-money restaurants in their locality, as well as restaurants serving their preferred cuisines.
## Data Definition
**res_id**: The code given to a restaurant (Categorical)
**name**: Name of the restaurant (Categorical)
**establishment**: Represents the type of establishment (Categorical)
**url**: The website of the restaurant (Categorical)
**address**: The address of the restaurant (Categorical)
**city**: City in which the restaurant is located (Categorical)
**city_id**: The code given to a city (Categorical)
**locality**: Locality of the restaurant (Categorical)
**latitude**: Latitude of the restaurant (Categorical)
**longitude**: Longitude of the restaurant (Categorical)
**zipcode**: Zipcode of the city in which the restaurant is located (Categorical)
**country_id**: Country code of the country in which the restaurant is located (Categorical)
**locality_verbose**: Locality along with the city in which the restaurant is located (Categorical)
**cuisines**: The cuisines a restaurant serves (Categorical)
**timings**: The working hours of a restaurant (Categorical)
**average_cost_for_two**: The average amount expected for 2 people (Numerical)
**price_range**: The categories for average cost (Categories - 1,2,3,4) (Categorical)
**currency**: The currency in which a customer pays (Categorical)
**highlights**: The facilities of the restaurant (Categorical)
**aggregate_rating**: The overall rating a restaurant has got (Numerical)
**rating_text**: Categorized ratings (Categorical)
**votes**: Number of votes received by the restaurant from customers (Numerical)
**photo_count**: The number of photos of a restaurant (Numerical)
**opentable_support**: Restaurant reservation from Opentable (Categorical)
**delivery**: Whether the restaurant delivers an order or not (Categorical)
**takeaway**: Whether the restaurant allows takeaway of an order or not (Categorical)
## Table of Contents
1. **[Import Libraries](#import_lib)**
2. **[Set Options](#set_options)**
3. **[Read Data](#Read_Data)**
4. **[Understand and Prepare the Data](#Understand_Data)**
5. **[Understand the variables](#Understanding_variables)**
6. **[Check for Missing Values](#missing)**
7. **[Study Correlation](#correlation)**
8. **[Detect Outliers](#outliers)**
9. **[Create a new variable 'region'](#region)**
10. **[Some more analysis](#more)**
<a id='import_lib'></a>
## 1. Import Libraries
<table align ="left">
<tr>
<td width="8%">
<img src="todo.png">
</td>
<td>
<div align="left", style="font-size:120%">
<font color="#21618C">
<b> Import the required libraries and functions
</b>
</font>
</div>
</td>
</tr>
</table>
```
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
```
<a id='set_options'></a>
## 2. Set Options
<table align="left">
<tr>
<td width="8%">
<img src="todo.png">
</td>
<td>
<div align="left", style="font-size:120%">
<font color="#21618C">
<b>Make necessary changes to :<br><br>
Set the working directory
</b>
</font>
</div>
</td>
</tr>
</table>
```
os.chdir('C:\\Users\\Kejri\\Downloads\\files\\Capstone')
os.getcwd()
```
<a id='Read_Data'></a>
## 3. Read Data
```
df_restaurants = pd.read_csv('ZomatoRestaurantsIndia.csv')
```
<a id='Understand_Data'></a>
## 4. Understand and Prepare the Data
Well-prepared data proves beneficial for analysis, as it limits the errors and inaccuracies that can occur during analysis; processed data is also more accessible to users.<br> <br>
Data understanding is the process of getting familiar with the data: identifying data types, discovering first insights, and detecting interesting subsets to form hypotheses about hidden information. Data preparation, in turn, is the process of cleaning and transforming raw data before analysis. It is an important step that often involves reformatting the data and making corrections to it. <br> <br>
Data preparation is often a lengthy process, but it is an essential prerequisite for putting data in context, deriving insights, and eliminating bias resulting from poor data quality.
<table align="left">
<tr>
<td width="8%">
<img src="todo.png">
</td>
<td>
<div align="left", style="font-size:120%">
<font color="#21618C">
<b> Analyze and prepare data:<br>
1. Check dimensions of the dataframe <br>
2. View the head of the data<br>
3. Note the redundant variables and drop them <br>
4. Check the data types. Refer to data definition to ensure your data types are correct. If data types are not as per business context, change the data types as per requirement <br>
5. Check for duplicates<br>
Note: Exploring data is an art, and gaining expertise in this area takes continued practice
</b>
</font>
</div>
</td>
</tr>
</table>
### -------------------------*** Provide the inferences from the output of every code cell executed. ***----------------------------
**1. Check dimensions of the dataframe in terms of rows and columns**
```
df_restaurants.shape
df_restaurants.info()
print('The dataframe has 211944 rows and 26 columns. Also, some columns, such as \'establishment\', \'zipcode\' and \'highlights\', have fewer non-null rows.')
```
**2. View the head of the data**
```
df_restaurants.head(5)
print('Each locality has a latitude and longitude and lies in a certain city and country; each row holds a restaurant name, an address, a url to its website, and many more such variables')
```
**3. Note the redundant variables and drop them**
```
df_restaurants.columns.duplicated().sum()
#i=0
#for var in df_restaurants_copy.columns:
# print(i+1,". ",var, ": ", df_restaurants_copy[var].unique())
# i+=1
df_restaurants['currency'].unique(), df_restaurants['currency'].shape[0]
df_restaurants['country_id'].unique(), df_restaurants['currency'].shape[0]
df_restaurants['opentable_support'].isna().sum(), df_restaurants['currency'].shape[0]
df_restaurants_copy = df_restaurants.drop(['locality_verbose','currency','country_id','takeaway','res_id'], axis=1).copy()
print('We have two separate columns \'locality\' and \'city\' from which \'locality_verbose\' can be derived later, so we drop \'locality_verbose\' (we might later need city- or locality-wise analysis). Since \'currency\', \'country_id\' and \'takeaway\' have the same value for all restaurants (Rs., 1 and -1 respectively), we drop them as well. Finally, \'res_id\' is not needed since there is no other dataframe to map to and this dataframe already has a default index, so we drop it too.')
```
**4. Check the data types. Refer to data definition to ensure your data types are correct. If data types are not as per business context, change the data types as per requirement**
```
df_restaurants_copy.dtypes
print('Here, we have searched for datatypes that have been described as numerical in data definition, but wrongly made categorical in dataframe. No such variable found. Regarding variables that have been marked as categorical in data definition, but are not \'object\' data type in dataframe, all such variables can be excluded in numerical calculations on dataframe, so no need to convert them to object.')
```
#### Change the incorrect data type
```
#df_restaurants_copy[['city_id','latitude','longitude','price_range','opentable_support','delivery']] = df_restaurants_copy[['city_id','latitude','longitude','price_range','opentable_support','delivery']].astype('object')
print('All datatypes which are numerical in data definition, are also numerical in dataframe')
```
**5. Check for Duplicates**
```
df_restaurants_copy.duplicated().sum()
df_restaurants_copy[df_restaurants_copy.duplicated(keep="last")]
#df_restaurants_copy[df_restaurants_copy['name']=='Peshawri - ITC Mughal']
df_restaurants_copy.drop_duplicates(keep='first',inplace=True)
df_restaurants_copy.shape
df_restaurants_copy[df_restaurants_copy['name']=='Peshawri - ITC Mughal']
print('Dropped all duplicate rows, retaining the first one')
```
<a id = 'Understanding_variables'> </a>
## 5. Understand the variables
**1. Variable 'name'**
```
df_restaurants_copy['name'].isna().sum()
df_restaurants_copy['name'].duplicated().sum()
df_restaurants_copy['name'].unique()
```
**2. Variable 'establishment'**
```
df_restaurants_copy['establishment'].isna().sum()
df_restaurants_copy['establishment'].duplicated().sum()
df_restaurants_copy['establishment'].unique()
```
**3. Variable 'city'**
```
df_restaurants_copy['city'].isna().sum()
df_restaurants_copy['city'].duplicated().sum()
for var in df_restaurants_copy['city'].unique():
print(var)
df_restaurants_copy['city'].unique()
```
**Let us find the count of restaurants in each city**
```
df_restaurants_copy['name'].isna().sum()
df_restaurants_copy.groupby(['city'])[['city','name']].head()
for var in df_restaurants_copy['city'].unique():
print(var , ' has ', len(df_restaurants_copy[df_restaurants_copy['city'] == var]['name']), ' restaurants.')
```
**4. Variable 'locality'**
```
df_restaurants_copy['locality'].isna().sum()
df_restaurants_copy['locality'].duplicated().sum()
df_restaurants_copy['locality'].unique()
```
**5. Variable 'latitude'**
From the variable 'latitude', we know the latitudinal location of the restaurant.
The latitudinal extent of India is from 8°4′N to 37°6′N.
We must check whether we have any points beyond this extent.
```
df_restaurants_copy['latitude'].isna().sum()
df_restaurants_copy['latitude'].duplicated().sum()
len(df_restaurants_copy[(df_restaurants_copy['latitude'] < 8.066667) | (df_restaurants_copy['latitude'] > 37.1)])
```
- We need to replace all these values with NaN's.
```
def replacement(x):
if((x < 8.066667) | (x > 37.1)):
return np.nan
else:
return x
df_restaurants_copy['latitude'] = df_restaurants_copy['latitude'].transform(lambda x: replacement(x))
```
- Check if the values are replaced by NaN's.
```
len(df_restaurants_copy[(df_restaurants_copy['latitude'] < 8.066667) | (df_restaurants_copy['latitude'] > 37.1)])
df_restaurants_copy['latitude'].isna().sum()
print('Now, the number of NaN values is the same as the count in the result above, i.e. 955, which means all the qualifying values have been replaced with NaN')
```
- We see all the values are replaced by NaN's
```
df_restaurants_copy[df_restaurants_copy['latitude'].isna()]['latitude'].head()
```
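The same replacement can be done without an element-wise `transform`: `Series.where` keeps values where a mask holds and writes NaN elsewhere, which is vectorized. A sketch on a toy series (the values here are made up):

```python
import pandas as pd

# Toy latitudes; 45.0 and -5.0 fall outside India's latitudinal extent
lat = pd.Series([12.97, 45.0, 28.61, -5.0])
cleaned = lat.where(lat.between(8.066667, 37.1))  # out-of-range values become NaN
print(cleaned.isna().sum())  # → 2
```

On a dataframe of this size, the vectorized mask is noticeably faster than applying a Python function per element.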
**6. Variable 'longitude'**
From the variable 'longitude', we know the longitudinal location of the restaurant.
The longitudinal extent of India is from 68°7′E to 97°25′E.
We must check whether we have any points beyond this extent.
```
df_restaurants_copy['longitude'].isna().sum()
df_restaurants_copy['longitude'].duplicated().sum()
len(df_restaurants_copy[(df_restaurants_copy['longitude'] < 68.1166667) | (df_restaurants_copy['longitude'] > 97.41666667)])
```
- We need to replace all these values with NaN's.
```
def replacement2(x):
if((x < 68.1166667) | (x > 97.41666667)):
return np.nan
else:
return x
df_restaurants_copy['longitude'] = df_restaurants_copy['longitude'].transform(lambda x: replacement2(x))
```
- Check if the values are replaced by NaN's.
```
len(df_restaurants_copy[(df_restaurants_copy['longitude'] < 68.1166667) | (df_restaurants_copy['longitude'] > 97.41666667)])
df_restaurants_copy['longitude'].isna().sum()
print('Now, the number of NaN values is the same as the count in the result above, i.e. 957, which means all the qualifying values have been replaced with NaN')
```
- From the variables 'latitude' and 'longitude', plot the locations of the restaurants.
```
df_temp = df_restaurants_copy[['latitude','longitude']].dropna()
df_temp.isna().sum().sum()
fig, ax = plt.subplots(figsize=(6,10))
plt.scatter(df_temp['longitude'],df_temp['latitude'])
!pip install gmplot
import gmplot
#lat, long, zoom
google_map = gmplot.GoogleMapPlotter(28.7041,77.1025, 5, apikey="" )
#google_map.apikey = ""
google_map.scatter(df_temp['latitude'],df_temp['longitude'], '#cb202d', size = 35, marker = False)
google_map.draw("location.html")
import webbrowser
webbrowser.open('location.html')
```
**7. Variable 'cuisines'**
```
df_restaurants_copy['cuisines'].isna().sum()
df_restaurants_copy['cuisines'].duplicated().sum()
```
- To find the unique cuisines, we write a small user-defined function.
```
df_restaurants_copy['cuisines']
def cuisines(x):
x=x.dropna()
x=np.asarray(x.transform(lambda x: x.split(", ")).to_numpy())
x= pd.Series(np.concatenate(x, axis=0))
print(x.unique())
cuisines(df_restaurants_copy['cuisines'])
cuisines(df_restaurants_copy[df_restaurants_copy['city'] == 'Agra']['cuisines'])
cuisines(df_restaurants_copy[df_restaurants_copy['city'] == 'Srinagar']['cuisines'])
cuisines(df_restaurants_copy[(df_restaurants_copy['city'] == 'Srinagar') & (df_restaurants_copy['name'] == 'Winterfell Cafe')]['cuisines'])
cuisines(df_restaurants_copy[(df_restaurants_copy['city'] == 'Srinagar') & (df_restaurants_copy['name'] == 'Nathus Sweets')]['cuisines'])
```
- Find out the frequency of each cuisine.
```
def cuisines_freq(x):
x=x.dropna()
x=np.asarray(x.transform(lambda x: x.split(", ")).to_numpy())
x= pd.Series(np.concatenate(x, axis=0))
print(x.value_counts())
cuisines_freq(df_restaurants_copy['cuisines'])
cuisines_freq(df_restaurants_copy[df_restaurants_copy['city'] == 'Srinagar']['cuisines'])
```
**8. Variable 'average_cost_for_two'**
```
df_restaurants_copy['average_cost_for_two'].isna().sum()
df_restaurants_copy['average_cost_for_two'].duplicated().sum()
len(df_restaurants_copy['average_cost_for_two'])
df_restaurants_copy['average_cost_for_two'].min(), round(df_restaurants_copy['average_cost_for_two'].mean(),2), df_restaurants_copy['average_cost_for_two'].max()
```
**9. Variable 'price_range'**
```
df_restaurants_copy['price_range'].isna().sum()
df_restaurants_copy['price_range'].duplicated().sum()
df_restaurants_copy['price_range'].unique()
```
- Visualize an exploded pie chart.
```
labels = 1,2,3,4
sizes = [len(df_restaurants_copy[df_restaurants_copy['price_range']==1]['price_range']), len(df_restaurants_copy[df_restaurants_copy['price_range']==2]['price_range']), len(df_restaurants_copy[df_restaurants_copy['price_range']==3]['price_range']), len(df_restaurants_copy[df_restaurants_copy['price_range']==4]['price_range'])]
explode = (0, 0, 0, 0.3)
fig, ax = plt.subplots()
ax.pie(sizes, labels=labels, explode=explode, autopct='%1.1f%%', shadow=True, startangle=90, radius = 2)
plt.show()
```
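The four separate `len(df[df['price_range'] == k])` filters above can be computed in one pass with `value_counts`; a minimal sketch on a toy frame (`df_demo` is a hypothetical stand-in for `df_restaurants_copy`):

```python
import pandas as pd

# Toy frame standing in for df_restaurants_copy (assumption: same column name)
df_demo = pd.DataFrame({'price_range': [1, 1, 2, 3, 4, 4, 4]})

# One pass instead of four separate len(...) filters
counts = df_demo['price_range'].value_counts().sort_index()
print(counts.tolist())   # → [2, 1, 1, 3]

# counts.plot.pie(explode=(0, 0, 0, 0.3), autopct='%1.1f%%') draws the same chart
```

The resulting Series can be passed directly to `ax.pie` in place of the hand-built `sizes` list.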
**10. Variable 'highlights'**
```
df_restaurants_copy['highlights'].isna().sum()
df_restaurants_copy['highlights'].duplicated().sum()
df_restaurants_copy['highlights'].head()
```
- Write a small function to count how many times each facility appears in 'highlights'.
```
def highlights_freq(x):
x=x.dropna()
x=np.asarray(x.transform(lambda x: x.split(", ")).to_numpy())
x= pd.Series(np.concatenate(x, axis=0))
print(x.value_counts())
highlights_freq(df_restaurants_copy['highlights'])
```
- Now we find which facility occurs most often in the data.
```
def highlights_freq_max(x):
x=x.dropna()
x=np.asarray(x.transform(lambda x: x.split(", ")).to_numpy())
x= pd.Series(np.concatenate(x, axis=0))
print(x.value_counts().head(1))
highlights_freq_max(df_restaurants_copy['highlights'])
```
**11. Variable 'aggregate_rating'**
```
df_restaurants_copy['aggregate_rating'].isna().sum()
df_restaurants_copy['aggregate_rating'].duplicated().sum()
df_restaurants_copy['aggregate_rating'].head()
round(df_restaurants_copy['aggregate_rating'].mean(),2)
```
**12. Variable 'rating_text'**
```
df_restaurants_copy['rating_text'].isna().sum()
df_restaurants_copy['rating_text'].duplicated().sum()
df_restaurants_copy['rating_text'].head()
df_restaurants_copy['rating_text'].unique()
df_restaurants_copy[df_restaurants_copy['rating_text'] == "Very Good"]['aggregate_rating'].unique()
df_restaurants_copy[df_restaurants_copy['rating_text'] == "Excellent"]['aggregate_rating'].unique()
df_restaurants_copy[df_restaurants_copy['rating_text'] == "Good"]['aggregate_rating'].unique()
df_restaurants_copy[df_restaurants_copy['rating_text'] == "Average"]['aggregate_rating'].unique()
df_restaurants_copy[df_restaurants_copy['rating_text'] == "Not rated"]['aggregate_rating'].unique()
df_restaurants_copy[df_restaurants_copy['rating_text'] == "Poor"]['aggregate_rating'].unique()
```
Creating a new feature for a better understanding of ratings
```
def ratings(x):
if x>=4.5:
return "Excellent"
elif ((x<4.5) & (x>=4)):
return "Very Good"
elif ((x<4) & (x>=3.5)):
return 'Good'
elif ((x<3.5) & (x>=2.5)):
return 'Average'
elif ((x<2.5) & (x>0)):
return 'Poor'
else:
return 'Not Rated'
df_restaurants_copy['new_ratings'] = np.nan
df_restaurants_copy['new_ratings'].head()
df_restaurants_copy['new_ratings'] = df_restaurants_copy['aggregate_rating'].transform(lambda x: ratings(x))
df_restaurants_copy['rating_text'].unique()
df_restaurants_copy['new_ratings'].unique()
df_restaurants_copy['new_ratings'].head()
df_restaurants_copy = df_restaurants_copy.drop(['rating_text'], axis=1)
```
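The chained `if/elif` ladder in `ratings` can also be expressed with `pd.cut`; a sketch on toy data (the bin edges mirror the thresholds in the function above, and `agg` is a hypothetical stand-in for the rating column):

```python
import numpy as np
import pandas as pd

# Toy ratings standing in for df_restaurants_copy['aggregate_rating']
agg = pd.Series([4.7, 4.2, 3.8, 3.0, 1.9, 0.0])

# Bin edges mirror the if/elif ladder in ratings(); right=False -> [low, high)
labels = pd.cut(agg, bins=[0, 2.5, 3.5, 4.0, 4.5, np.inf], right=False,
                labels=['Poor', 'Average', 'Good', 'Very Good', 'Excellent'])

# ratings() maps x <= 0 to 'Not Rated', but cut puts 0.0 in [0, 2.5) -- patch it
labels = labels.astype(object).where(agg > 0, 'Not Rated')
print(labels.tolist())
# → ['Excellent', 'Very Good', 'Good', 'Average', 'Poor', 'Not Rated']
```

This avoids calling a Python function once per row via `transform`.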
**13. Variable 'votes'**
```
df_restaurants_copy['votes'].isna().sum()
df_restaurants_copy['votes'].duplicated().sum()
df_restaurants_copy['votes'].min(), round(df_restaurants_copy['votes'].mean(),2), df_restaurants_copy['votes'].max()
df_restaurants_copy['votes'].head()
```
**14. Variable 'photo_count'**
```
df_restaurants_copy['photo_count'].isna().sum()
df_restaurants_copy['photo_count'].duplicated().sum()
df_restaurants_copy['photo_count'].min(), round(df_restaurants_copy['photo_count'].mean(),2), df_restaurants_copy['photo_count'].max()
```
**15. Variable 'delivery'**
```
df_restaurants_copy['delivery'].isna().sum()
df_restaurants_copy['delivery'].duplicated().sum()
df_restaurants_copy['delivery'].unique()
```
<a id ='missing'></a>
## 6. Check for missing values
```
df_restaurants_copy.isna().sum()
df_restaurants_copy.isna().sum().sum()
```
**6. Study summary statistics**
Let us check the summary statistics for numerical variables.
```
df_restaurants_copy[["average_cost_for_two","aggregate_rating","votes","photo_count"]].describe()
print('Sum')
df_restaurants_copy[["average_cost_for_two","aggregate_rating","votes","photo_count"]].sum()
print('Mode')
df_restaurants_copy[["average_cost_for_two","aggregate_rating","votes","photo_count"]].mode()
```
<a id = 'correlation'> </a>
## 7. Study correlation
```
df_restaurants_copy[["average_cost_for_two","aggregate_rating","votes","photo_count"]].duplicated().sum()
df_temp = df_restaurants_copy[["average_cost_for_two","aggregate_rating","votes","photo_count"]].drop_duplicates()
df_temp_copy = df_temp.copy()
df_temp_copy2 = df_temp.copy()
df_temp.shape
df_temp.isna().sum()
df_temp.corr()
#ax.get_ylim()
ax=sns.heatmap(df_temp.corr(), annot=True, vmin=-1, vmax=1, center= 0, cmap= 'coolwarm')
ax.set_ylim(4.0, 0)
print('Without removing the outliers, the restaurants that have more number of photos also have more number of votes')
def replace_outliers(x):
Q1 = df_temp[x].quantile(0.25)
Q3 = df_temp[x].quantile(0.75)
IQR = Q3 - Q1
print("Outliers Number in ", x, ": ", ((df_temp[x] < (Q1 - 1.5 * IQR)) | (df_temp[x] > (Q3 + 1.5 * IQR))).sum(), "out of ", df_temp[x].shape[0])
##Replaced outliers in HDI for year
whisker1=Q1-1.5*IQR
for i in (np.where((df_temp[x] < whisker1))):
df_temp.iloc[i, df_temp.columns.get_loc(x)]= whisker1
whisker2=Q3+1.5*IQR
for i in (np.where((df_temp[x] > whisker2))):
df_temp.iloc[i, df_temp.columns.get_loc(x)]= whisker2
print('Outliers left: ',len(np.where((((df_temp[x] <(Q1-1.5*IQR)) | (df_temp[x] >(Q3+1.5*IQR)))))[0]))
replace_outliers('average_cost_for_two')
replace_outliers('aggregate_rating')
replace_outliers('votes')
replace_outliers('photo_count')
#ax.get_ylim()
ax=sns.heatmap(df_temp.corr(), annot=True, vmin=-1, vmax=1, center= 0, cmap= 'coolwarm')
ax.set_ylim(4.0, 0)
print('After replacing outliers with whiskers, new correlations have been found')
def del_outliers(x):
Q1 = df_temp[x].quantile(0.25)
Q3 = df_temp[x].quantile(0.75)
IQR = Q3 - Q1
print("Outliers Number in (rows being dropped)", x, ": ", ((df_temp[x] < (Q1 - 1.5 * IQR)) | (df_temp[x] > (Q3 + 1.5 * IQR))).sum(), "out of ", df_temp[x].shape[0])
whisker1=Q1-1.5*IQR
whisker2=Q3+1.5*IQR
##Deleting rows having outliers in HDI for year or suicides_no based on IQR Score method
for i in (np.where((df_temp_copy[x] < whisker1) | (df_temp_copy[x] > whisker2))):
df_temp_copy.drop(df_temp_copy.index[i],inplace=True)
print('Outliers left: ', len(np.where((((df_temp_copy[x] <(Q1-1.5*IQR)) | (df_temp_copy[x] >(Q3+1.5*IQR)))))[0]))
del_outliers('average_cost_for_two')
del_outliers('aggregate_rating')
del_outliers('votes')
del_outliers('photo_count')
#ax.get_ylim()
ax=sns.heatmap(df_temp_copy.corr(), annot=True, vmin=-1, vmax=1, center= 0, cmap= 'coolwarm')
ax.set_ylim(4.0, 0)
print('After deleting rows having outliers, new correlations have been found')
```
<a id='outliers'> </a>
## 8. Detect outliers
```
def detect_outliers(x):
Q1 = df_temp_copy2[x].quantile(0.25)
Q3 = df_temp_copy2[x].quantile(0.75)
IQR = Q3 - Q1
print("Number of Outliers in ", x, ": ", ((df_temp_copy2[x] < (Q1 - 1.5 * IQR)) | (df_temp_copy2[x] > (Q3 + 1.5 * IQR))).sum(), "out of ", df_temp_copy2[x].shape[0])
#whisker1=Q1-1.5*IQR
#whisker2=Q3+1.5*IQR
sns.boxplot(x=df_temp_copy2[x])
detect_outliers('average_cost_for_two')
detect_outliers('aggregate_rating')
detect_outliers('votes')
detect_outliers('photo_count')
```
<a id='region'> </a>
## 9. Create a new variable 'region'
Create a variable 'region' with five categories: 'northern', 'eastern', 'southern', 'western' and 'central'. To do so, use the 'city' column and group all cities belonging to the same region.
```
#Manually created an excel file from data available at the source url: http://www.indianventurez.com/city_list.htm
df=pd.read_excel('cities.xlsx')
df.head()
df=df.dropna()
df['Category'] = df['Category'].replace('WEST INDIA\xa0',"WEST INDIA").replace('SOUTH INDIA\xa0',"SOUTH INDIA").replace('NORTH-EAST INDIA',"EAST INDIA").replace('NORTH EAST INDIA',"EAST INDIA").replace('WESTERN REGION',"WEST INDIA")
df['Category'].unique()
df['CITY']= df['CITY'].transform(lambda x: x.title())
df.head()
df[df['Category']=='NORTH INDIA']['CITY'].values
df[df['Category']=='EAST INDIA']['CITY'].values
df[df['Category']=='SOUTH INDIA']['CITY'].values
df[df['Category']=='WEST INDIA']['CITY'].values
df[df['Category']=='CENTRAL INDIA']['CITY'].values
northern=['Agra', 'Allahabad', 'Almora', 'Ambala', 'Amritsar', 'Auli',
'Baddi', 'Badrinath', 'Balrampur', 'Bareilly', 'Betalghat',
'Bhimtal', 'Binsar', 'Chail', 'Chamba', 'Chandigarh',
'Corbett National Park', 'Dalhousie', 'Dehradun', 'Dharamshala',
'Faridabad', 'Firozabad', 'Gangotri', 'Garhmukteshwar', 'Garhwal',
'Ghaziabad', 'Greater Noida', 'Gulmarg', 'Gurgaon', 'Hansi',
'Haridwar', 'Jalandhar', 'Jammu', 'Jhansi', 'Kanatal', 'Kargil',
'Karnal', 'Kasauli', 'Kashipur', 'Katra', 'Kausani', 'Kaza',
'Kedarnath', 'Khajjiar', 'Kufri', 'Kullu', 'Kushinagar', 'Leh',
'Lucknow', 'Ludhiana', 'Manali', 'Manesar', 'Marchula', 'Mathura',
'Mcleodganj', 'Mohali', 'Moradabad', 'Mukteshwar', 'Mussoorie',
'Nahan', 'Nainital', 'Naldhera', 'New Delhi', 'Noida', 'Palampur',
'Pahalgam', 'Panchkula', 'Pantnagar', 'Parwanoo', 'Patiala',
'Pathankot', 'Patnitop', 'Phagwara', 'Pinjore', 'Pragpur',
'Rai Bareilly', 'Ram Nagar', 'Ranikhet', 'Rishikesh', 'Sattal',
'Shimla', 'Solan', 'Sonauli', 'Srinagar', 'Udhampur', 'Uttarkashi',
'Varanasi', 'Yamunotri','Zirakpur','Nayagaon','Meerut']
eastern=['Agartala', 'Aizwal', 'Barbil', 'Berhampur', 'Bhilai',
'Bhubaneshwar', 'Bodhgaya', 'Cuttack', 'Darjeeling', 'Dibrugarh',
'Digha', 'Dooars', 'Durgapur', 'Gangtok', 'Gaya', 'Gorakhpur',
'Guwahati', 'Imphal', 'Jamshedpur', 'Jorhat', 'Kalimpong',
'Kanpur', 'Kaziranga', 'Kolkata', 'Kurseong', 'Lachung',
'Mandormoni', 'Patna', 'Pelling', 'Puri', 'Raichak', 'Rajgir',
'Ranchi', 'Ravangla', 'Rishyap', 'Rourkela', 'Shillong',
'Shimlipal', 'Siliguri', 'Sunderban', 'Tarapith', 'Yuksom', 'Howrah', 'Kharagpur']
southern=['Alleppey', 'Ashtamudi', 'Bandipur', 'Bangalore', 'Belgaum',
'Calicut', 'Canannore', 'Chennai', 'Chikmagalur', 'Coimbatore',
'Coonoor', 'Coorg', 'Dandeli', 'Gokharna', 'Guruvayoor', 'Halebid',
'Hampi', 'Hassan', 'Hospet', 'Hosur', 'Hubli', 'Hyderabad',
'Idukki', 'Kabini', 'Kanchipuram', 'Kanyakumari', 'Karur',
'Karwar', 'Kasargod', 'Kochin', 'Kodaikanal', 'Kollam', 'Kotagiri',
'Kottayam', 'Kovalam', 'Kumarakom', 'Kumbakonam', 'Kumily',
'Lakshadweep', 'Madurai', 'Mahabalipuram', 'Malappuram', 'Malpe',
'Mararri', 'Mangalore', 'Munnar', 'Mysore', 'Nadukani',
'Nagapattinam', 'Nagarhole', 'Nilgiri', 'Ooty', 'Pallakad',
'Pondicherry', 'Poovar', 'Port Blair', 'Puttaparthi',
'Rajahmundry', 'Rameshwaram', 'Ranny', 'Salem', 'Secunderabad',
'Sharavanbelgola', 'Shivanasamudra', 'Sivaganga District',
'Tanjore', 'Thekkady', 'Thirvannamalai', 'Thiruvananthapuram',
'Tiruchirapalli', 'Tirupur', 'Tirupati', 'Thrissur', 'Udupi',
'Vagamon', 'Varkala', 'Velankanni', 'Vellore', 'Vijayawada',
'Vishakapatnam', 'Wayanad', 'Yercaud','Alappuzha', 'Amravati', 'Guntur',
'Kochi', 'Manipal', 'Palakkad', 'Puducherry', 'Trichy','Trivandrum', 'Vizag']
western=['Ahmedabad', 'Ajmer', 'Alibaug', 'Alsisar', 'Alwar', 'Anand',
'Ankleshwar', 'Aurangabad', 'Balasinor', 'Bambora', 'Behror',
'Bharatpur', 'Bhandardara', 'Bharuch', 'Bhavangadh', 'Bhavnagar',
'Bhuj', 'Bikaner', 'Bundi', 'Chiplun', 'Chittorgarh', 'Dabhosa',
'Daman', 'Dapoli', 'Dausa', 'Diu', 'Dive Agar', 'Durshet',
'Dwarka', 'Ganapatipule', 'Gandhidham', 'Gandhinagar', 'Goa',
'Gondal', 'Igatpuri', 'Jaipur', 'Jaisalmer', 'Jalgaon',
'Jambugodha', 'Jamnagar', 'Jawhar', 'Jodhpur', 'Jojawar',
'Junagadh', 'Karjat', 'Kashid', 'Khandala', 'Khimsar', 'Kolhapur',
'Kota', 'Kumbalgarh', 'Lonavala', 'Lothal', 'Mahabaleshwar',
'Malshej Ghat', 'Malvan', 'Mandavi', 'Mandawa', 'Manmad',
'Matheran', 'Mount Abu', 'Morbi', 'Mumbai', 'Mundra',
'Murud Janjira', 'Nagaur Fort', 'Nagothane', 'Nagpur', 'Nanded',
'Napne', 'Nasik', 'Navi Mumbai', 'Neral', 'Osian', 'Palanpur',
'Pali', 'Palitana', 'Panchgani', 'Panhala', 'Panvel', 'Pench',
'Phalodi', 'Porbandar', 'Poshina', 'Pune', 'Puskhar', 'Rajasthan',
'Rajkot', 'Rajpipla', 'Rajsamand', 'Ramgarh', 'Ranakpur',
'Ranthambore', 'Ratnagiri', 'Rohetgarh', 'Sajan', 'Saputara',
'Sasan Gir', 'Sawai Madhopur', 'Sawantwadi', 'Shirdi', 'Siana',
'Silvassa', 'Surat', 'Tapola', 'Thane', 'Udaipur', 'Vadodara',
'Vapi', 'Veraval', 'Vikramgadh', 'Wankaner','Nashik','Neemrana','Pushkar']
central=['Amla', 'Bandhavgarh', 'Bhopal', 'Chitrakoot', 'Gwalior', 'Indore',
'Jabalpur', 'Kanha', 'Khajuraho', 'Orchha', 'Pachmarhi', 'Panna',
'Raipur', 'Ujjain']
def region(x):
#northern=['Delhi','Jaipur','Lucknow','Kanpur','Ghaziabad','Ludhiana','Agra','Allahabad','Faridabad','Meerut','Varanasi','Srinagar','Amritsar','Jodhpur','Chandigarh','Kota','Bareily','Moradabad','Gurgaon','Aligarh','Jalandhar','Saharanpur','Gorakhpur','Bikaner','Noida','Firozabad','Dehradun','Ajmer','Lonni','Jhansi','Jammu']
if x in northern:
return 'northern'
elif x in eastern:
return 'eastern'
elif x in southern:
return 'southern'
elif x in western:
return 'western'
elif x in central:
return 'central'
else:
return np.nan
df_restaurants_copy['region'] = np.nan
df_restaurants_copy['region'].head()
df_restaurants_copy['city'].isna().sum()
df_restaurants_copy['region'] = df_restaurants_copy['city'].transform(lambda x: region(x))
df_restaurants_copy['region'].unique()
df_restaurants_copy[df_restaurants_copy['region'].isna()]['city'].unique()
print('Let\'s add these leftover cities manually to their respective lists')
df_restaurants_copy['region'].unique()
df_restaurants_copy.groupby('region')[['region','city']].head(2)
df_restaurants_copy.groupby('region')['city'].first()
```
<a id='more'> </a>
## 10. Some more Analysis
<b>Let us explore the data some more, now that we have filled in or removed the missing values. <br>
We now conduct analysis to compare the regions.</b>
### 1. To find which cities have expensive restaurants
```
#METHOD 1: Based on average 'average cost for two' of all restaurants per city for cities which have expensive restaurants
def detect_res(x, y):
Q1 = df_restaurants_copy[x].quantile(0.25)
Q3 = df_restaurants_copy[x].quantile(0.75)
IQR = Q3 - Q1
if y==1:
return df_restaurants_copy[df_restaurants_copy[x] > (Q3 + 1.5 * IQR)][['city','latitude','longitude','average_cost_for_two']].drop_duplicates(keep="first")
else:
return df_restaurants_copy[df_restaurants_copy[x] > (Q3 + 1.5 * IQR)][['city','latitude','longitude','average_cost_for_two']].groupby(['city']).mean().sort_values(by="average_cost_for_two",ascending=False).reset_index().drop_duplicates(keep="first").reset_index()
print("The cities which have expensive restaurants: \n", detect_res('average_cost_for_two',1)['city'].unique())
print(len(detect_res('average_cost_for_two',1)['city'].unique())," out of ", len(df_restaurants_copy['city'].unique())," cities have expensive restaurants")
#detect_res('average_cost_for_two',2)
```
- Plot the cities which have the costliest restaurants.
```
fig, ax = plt.subplots(figsize=(6,10))
plt.scatter(detect_res('average_cost_for_two',2)['longitude'],detect_res('average_cost_for_two',2)['latitude'])
detect_res('average_cost_for_two',2).head(5)
fig, ax = plt.subplots(figsize=(6,25))
#plt.xticks(rotation=90)
sns.barplot(y=detect_res('average_cost_for_two',2)['city'], x=detect_res('average_cost_for_two',2)['average_cost_for_two'])
detect_res('average_cost_for_two',2)[detect_res('average_cost_for_two',2)['average_cost_for_two']>=2000]
fig, ax = plt.subplots(figsize=(6,8))
ax = sns.barplot(y=detect_res('average_cost_for_two',2)[detect_res('average_cost_for_two',2)['average_cost_for_two']>=2000]['city'], x=detect_res('average_cost_for_two',2)[detect_res('average_cost_for_two',2)['average_cost_for_two']>=2000]['average_cost_for_two'])
fig, ax = plt.subplots(figsize=(2,4))
ax = plt.scatter(y=detect_res('average_cost_for_two',2)[detect_res('average_cost_for_two',2)['average_cost_for_two']>=2000]['longitude'], x=detect_res('average_cost_for_two',2)[detect_res('average_cost_for_two',2)['average_cost_for_two']>=2000]['latitude'])
#Method 2: Based on cities having at least 1 expensive restaurant:
df_restaurants_copy[df_restaurants_copy['average_cost_for_two']>=6000]['city'].unique()
df1 = df_restaurants_copy[df_restaurants_copy['average_cost_for_two']>=6000].drop_duplicates().groupby('city')[['latitude','longitude','average_cost_for_two']].mean().sort_values(by="average_cost_for_two", ascending=False).reset_index()
df1
fig, ax = plt.subplots(figsize=(6,8))
ax = sns.barplot(x=df1['average_cost_for_two'],y=df1['city'])
#METHOD 3: MEAN AVERAGE COST FOR TWO INCLUDING BOTH EXPENSIVE AND NON-EXPENSIVE RESTAURANTS
df2 = df_restaurants_copy.drop_duplicates().groupby('city')[['latitude','longitude','average_cost_for_two']].mean().sort_values(by="average_cost_for_two", ascending=False).head(10).reset_index()
df2
fig, ax = plt.subplots(figsize=(4,6))
ax = sns.barplot(x=df2['average_cost_for_two'],y=df2['city'])
#METHOD 4: MEDIAN AVERAGE COST FOR TWO INCLUDING BOTH EXPENSIVE AND NON-EXPENSIVE RESTAURANTS
df2 = df_restaurants_copy.drop_duplicates().groupby('city')[['latitude','longitude','average_cost_for_two']].median().sort_values(by="average_cost_for_two", ascending=False).head(10).reset_index()
df2
print('METHOD 4')
fig, ax = plt.subplots(figsize=(4,6))
ax = sns.barplot(x=df2['average_cost_for_two'],y=df2['city'])
#METHOD 5: MEAN AVERAGE COST FOR TWO INCLUDING EXPENSIVE RESTAURANTS ONLY
df2 = df_restaurants_copy.drop_duplicates().sort_values(by="average_cost_for_two", ascending=False).head(20).groupby('city')[['latitude','longitude','average_cost_for_two']].mean().sort_values(by="average_cost_for_two", ascending=False).reset_index()
df2
print('METHOD 5')
fig, ax = plt.subplots(figsize=(4,6))
ax = sns.barplot(x=df2['average_cost_for_two'],y=df2['city'])
```
### 2. Comparing regions
### 2a. Highlights available in restaurants for different regions
To aid our analysis, we define the regions as northern, eastern, western and southern.
We first select the unique facilities available in each region and sort them by frequency.
```
def highlights_sort(x):
x=x.dropna()
x=np.asarray(x.transform(lambda x: x.split(", ")).to_numpy())
x= pd.Series(np.concatenate(x, axis=0))
z = x.value_counts().reset_index()
z = z.rename(columns={'index': 'highlights', 0: 'frequency'})
return z
```
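The split/concatenate dance in `highlights_sort` (and the similar cuisine helpers) has a compact pandas equivalent: `str.split` followed by `explode` turns each comma-separated cell into one row per item, and `value_counts` does the rest. A sketch on toy data:

```python
import pandas as pd

# Toy stand-in for df_restaurants_copy['highlights']
highlights = pd.Series(['Wifi, Cash', 'Cash', None, 'Wifi, Card'])

freq = (highlights.dropna()
                  .str.split(', ')
                  .explode()        # one row per facility
                  .value_counts())
print({k: int(v) for k, v in sorted(freq.items())})
# → {'Card': 1, 'Cash': 2, 'Wifi': 2}
```

`freq.reset_index()` then yields the same two-column frame that `highlights_sort` returns.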
**Highlights of the northern region**
```
print(highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['highlights']))
```
**Highlights of the eastern region**
```
print(highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "eastern"]['highlights']))
```
**Highlights of the southern region**
```
print(highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "southern"]['highlights']))
```
**Highlights of the western region**
```
print(highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "western"]['highlights']))
```
#### Plot the barplot for different regions
We shall now plot the graphs for top 10 highlights.
```
print('Northern: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['highlights'])['highlights'].head(10), x=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['highlights'])['frequency'].head(10))
print('Western:')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "western"]['highlights'])['highlights'].head(10), x=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "western"]['highlights'])['frequency'].head(10))
print('Eastern: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "eastern"]['highlights'])['highlights'].head(10), x=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "eastern"]['highlights'])['frequency'].head(10))
print('Southern: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "southern"]['highlights'])['highlights'].head(10), x=highlights_sort(df_restaurants_copy[df_restaurants_copy['region'] == "southern"]['highlights'])['frequency'].head(10))
```
### 2b. Cuisines available in restaurants for different regions
```
def cuisines_freq2(x):
x=x.dropna()
x=np.asarray(x.transform(lambda x: x.split(", ")).to_numpy())
x= pd.Series(np.concatenate(x, axis=0))
z = x.value_counts().reset_index()
z = z.rename(columns={'index': 'cuisines', 0: 'frequency'})
return z
```
**Cuisines in the northern region**
```
print(cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['cuisines']))
```
**Cuisines in the eastern region**
```
print(cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "eastern"]['cuisines']))
```
**Cuisines in the southern region**
```
print(cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "southern"]['cuisines']))
```
**Cuisines in the western region**
```
print(cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "western"]['cuisines']))
```
- Plot bar plots for the top 10 cuisines served in each of the four regions.
```
print('Northern: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['cuisines'])['cuisines'].head(10), x=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['cuisines'])['frequency'].head(10))
print('Western: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "western"]['cuisines'])['cuisines'].head(10), x=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "western"]['cuisines'])['frequency'].head(10))
print('Eastern: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "eastern"]['cuisines'])['cuisines'].head(10), x=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "eastern"]['cuisines'])['frequency'].head(10))
print('Southern: ')
fig, ax = plt.subplots(figsize=(6,6))
sns.barplot(y=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "southern"]['cuisines'])['cuisines'].head(10), x=cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "southern"]['cuisines'])['frequency'].head(10))
```
### 3. The Northern Region
**Now we shall consider only the northern region**
**1. The top 10 cuisines served in Restaurants**
```
print(cuisines_freq2(df_restaurants_copy[df_restaurants_copy['region'] == "northern"]['cuisines']).head(10))
```
**2. Do restaurants with more photo counts and votes have better rating?**
```
df_temp1 = df_restaurants_copy[df_restaurants_copy['region']=='northern'][["aggregate_rating","votes","photo_count"]].copy()
df_temp1.duplicated().sum()
df_temp1 = df_temp1.drop_duplicates()
df_temp1.isna().sum()
df_temp1.head()
df_temp1.corr().iloc[1:,0]
print('We need not always delete outliers. Without treating outliers, we see a very small positive correlation between "votes and aggregate_rating" and "photo_count and aggregate_rating".\nClearly, more votes and a higher photo_count have a small but positive impact on the aggregate rating.\nSo the answer is: very likely, yes! Maybe an indirect effect is at work here. Let\'s understand how this happens below:')
df_votes = df_temp1.groupby('aggregate_rating').sum().sort_values(by="votes", ascending=False)['votes'].reset_index()
df_votes
df_photo = df_temp1.groupby('aggregate_rating').sum().sort_values(by="photo_count", ascending=False)['photo_count'].reset_index()
df_photo
```
- Plot bar plots and box plots for the tables above.
```
print("Categorical distribution plot between aggregate_rating and votes: ")
fig, ax = plt.subplots(figsize=(10,5))
ax = sns.barplot(y="votes", x="aggregate_rating", data=df_votes)
print('So, it is clear that the maximum number of votes is for ratings between 3.7 and 4.6')
print("Categorical distribution plot between aggregate_rating and photo_count: ")
fig, ax = plt.subplots(figsize=(10,5))
ax = sns.barplot(y="photo_count", x="aggregate_rating", data=df_photo)
print('Almost the same trend holds here: the maximum photo count is for ratings between 3.9 and 4.5')
print('Now let\'s draw boxplot for each variable:')
fig, ax = plt.subplots(figsize=(7,3))
sns.boxplot(x=df_temp1['aggregate_rating'])
fig, ax = plt.subplots(figsize=(15,2))
sns.boxplot(x=df_temp1['votes'])
fig, ax = plt.subplots(figsize=(15,2))
sns.boxplot(x=df_temp1['photo_count'])
```
### 4. The Mumbai city
Consider the city of Mumbai and get better insights into its restaurants.
```
df_mumbai = df_restaurants_copy[df_restaurants_copy['city']=='Mumbai'].drop_duplicates(keep="first").copy()
df_mumbai.head(2)
```
**1. Expensive restaurants in Mumbai**
- Define the costliest restaurants as those whose average cost for two people exceeds Rs. 5000.
- Plot the costliest restaurants based on their average cost for two.
```
df_m_expensive = df_mumbai[df_mumbai['average_cost_for_two']>5000].sort_values(by="average_cost_for_two", ascending=False).reset_index()
df_m_expensive.head(2)
fig, ax = plt.subplots(figsize=(6,5))
ax = sns.barplot(y="name", x="average_cost_for_two", data=df_m_expensive)
```
**2. To find the top 20 cuisines of Mumbai**
- select unique cuisines available at restaurants in Mumbai
- sort cuisines based on frequency
```
print(cuisines_freq2(df_mumbai['cuisines']))
```
**3. To find the popular localities in Mumbai**
```
df_popular = df_mumbai.groupby('locality')['votes'].sum().sort_values(ascending=False).reset_index().head(10)
df_popular
fig, ax = plt.subplots(figsize=(6,5))
ax = sns.barplot(y="locality", x="votes", data=df_popular)
```
**4. Check for relationship between 'aggregate_rating' and 'average_cost_for_two'**
```
df_mumbai[['aggregate_rating','average_cost_for_two']].corr().iloc[0,1]
print('Weak Positive Correlation exists between the two as shown below:')
df_mumbai_agg = df_mumbai.groupby('aggregate_rating').sum().sort_values(by="average_cost_for_two", ascending=False)['average_cost_for_two'].reset_index()
df_mumbai_agg
fig, ax = plt.subplots(figsize=(10,5))
ax = sns.barplot(y="average_cost_for_two", x="aggregate_rating", data=df_mumbai_agg)
```
**5. Multiple box plot for photo_counts based on establishment type.**
```
fig, ax = plt.subplots(figsize=(10,15))
ax = sns.boxplot(x="photo_count", y="establishment", data=df_mumbai)
```
**6. Check for payments method offered in restaurants**
```
payments = ['Cash','Debit Card','Credit Card','Digital Payments Accepted', 'Alipay Accepted']
def get_payment_method(x):
val=""
x=x.split(", ")
for var in x:
if var in payments:
val = val + ", " + var
else:
continue
if val=="":
return val
else:
return val[2:]
df_payments = df_mumbai[['name','highlights','latitude','longitude']].drop_duplicates().copy()
for i in range(df_payments['highlights'].shape[0]):
df_payments.iloc[i,df_payments.columns.get_loc('highlights')] = get_payment_method(df_payments.iloc[i,df_payments.columns.get_loc('highlights')])
df_payments = df_payments.rename(columns={'name': 'restaurant', 'highlights': 'payment methods'})
df_payments[['restaurant','payment methods','latitude','longitude']].head(10)
print('These restaurants accept Only Cash (So, maybe take enough cash while visiting them):')
df_payments[df_payments['payment methods']=='Cash'][['restaurant','payment methods','latitude','longitude']]
print('verify for first restaurant:\n')
df_mumbai[df_mumbai['name']=='Drinkery 51'].drop_duplicates().iloc[0,df_mumbai[df_mumbai['name']=='Drinkery 51'].columns.get_loc('highlights')]
df_payments[df_payments['restaurant']=='Drinkery 51'].drop_duplicates()['payment methods']
```
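The row-by-row `iloc` loop above can be avoided: `get_payment_method` is essentially filtering tokens against a set, which `Series.apply` expresses in one line. A sketch on toy data (the `payments` set mirrors the list above):

```python
import pandas as pd

payments = {'Cash', 'Debit Card', 'Credit Card',
            'Digital Payments Accepted', 'Alipay Accepted'}

# Toy stand-in for df_mumbai['highlights']
highlights = pd.Series(['Cash, Wifi, Debit Card', 'Delivery Only', 'Credit Card'])

# Keep only the tokens that are payment methods, preserving their order
methods = highlights.apply(
    lambda h: ', '.join(t for t in h.split(', ') if t in payments))
print(methods.tolist())
# → ['Cash, Debit Card', '', 'Credit Card']
```

Using a `set` for `payments` also makes each membership test O(1) instead of a list scan.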
- select the unique facilities available at restaurants in the western region of Mumbai
- sort the facilities by frequency
```
#Western Region of Mumbai
print("Latitudinal extent of Mumbai according to the data available: ",df_mumbai['latitude'].min()," degrees N to ",df_mumbai['latitude'].max()," degrees N")
print("\nWe assume that the left 35% and middle 50% is the Western Region, with latitude from ",df_mumbai['latitude'].min()," degrees N to ",df_mumbai['latitude'].quantile(0.35)," degrees N and longitude from ",df_mumbai['longitude'].quantile(0.25)," degrees E to ", df_mumbai['longitude'].quantile(0.75)," degrees E")
#Unique facilities (sorted)
print(highlights_sort(df_mumbai[(df_mumbai['latitude'] < df_mumbai['latitude'].quantile(0.35)) & (df_mumbai['longitude'] < df_mumbai['longitude'].quantile(0.75)) & (df_mumbai['longitude'] > df_mumbai['longitude'].quantile(0.25))]['highlights']))
df_mumbai_unique = highlights_sort(df_mumbai[(df_mumbai['latitude'] < df_mumbai['latitude'].quantile(0.35)) & (df_mumbai['longitude'] < df_mumbai['longitude'].quantile(0.75)) & (df_mumbai['longitude'] > df_mumbai['longitude'].quantile(0.25))]['highlights']).copy()
df_not_mumbai = df_restaurants_copy[df_restaurants_copy['city']!='Mumbai'].drop_duplicates(keep="first").copy()
df_not_mumbai.head(2)
df_not_mumbai_unique = highlights_sort(df_not_mumbai['highlights']).copy()
df_not_mumbai_unique.head()
val=""
for var in df_mumbai_unique['highlights'].values:
if var in df_not_mumbai_unique['highlights'].values:
continue
else:
val = val + ", " + var
val = val[2:]
if val =="":
val="None"
print("Values exclusive to Mumbai are: ",val)
print("Thank you :)")
```
# K Nearest Neighbors
This notebook uses scikit-learn's kNN model to train classifiers on images of people's faces and on images of handwritten digits.
```
# import libraries
import numpy as np
from scipy.io import loadmat
from scipy.stats import mode
%matplotlib inline
import matplotlib.pyplot as plt
# settings
seed = 421
np.random.seed(seed)
def loaddata(filename: str):
"""This function returns X,y training and testing data from the given filename."""
data = loadmat(filename)
X_train = data['xTr']
y_train = np.round(data['yTr'])
X_test = data['xTe']
y_test = np.round(data['yTe'])
return X_train.T, y_train.T, X_test.T, y_test.T
X_train, y_train, X_test, y_test = loaddata('../data/faces.mat')
def plotdata(X, xdim=38, ydim=31):
n, d = X.shape
f, axes = plt.subplots(1, n, sharey=True)
f.set_figwidth(10 * n)
f.set_figheight(n)
if n > 1:
for i in range(n):
axes[i].imshow(X[i,:].reshape(ydim, xdim).T, cmap=plt.cm.binary_r)
else:
axes.imshow(X[0,:].reshape(ydim, xdim).T, cmap=plt.cm.binary_r)
plt.figure(figsize=(11,8))
plotdata(X_train[:9,:])
# get unique face labels
print(np.unique(y_train))
def subsetdata(X, y, c):
"""This function returns the X features for y == class c."""
mask = np.squeeze(y == c)
sample = X[mask,:]
return sample
# test function
sample = subsetdata(X_train, y_train, 35)
plotdata(sample)
# import sklearn model
from sklearn.neighbors import KNeighborsClassifier
# build and fit a k=1 nearest neighbor model
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train.ravel())
# import scoring function
from sklearn.metrics import accuracy_score
# get the performance on the test data
score = accuracy_score(y_test, clf.predict(X_test))
print('Accuracy score = {:.2%}'.format(score))
# see performance for a few cases
for c in range(1, 40, 10):
    sample = subsetdata(X_test, y_test, c)
    preds = clf.predict(sample)
    print(f'Actual class = {c} Predictions = {preds}')
    plotdata(sample)
```
## Repeat the process with the digit data
```
# load the training and testing sets
X_train, y_train, X_test, y_test = loaddata('../data/digits.mat')
# preview some samples
plt.figure(figsize=(11,8))
plotdata(X_train[:9,:], ydim=16, xdim=16)
# get the class labels
print(np.unique(y_train))
# preview '7' images
sample = subsetdata(X_train, y_train, 7)
plotdata(sample[:7], ydim=16, xdim=16)
# make and fit an instance of a knn model with k=1
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train.ravel())
# compute and print accuracy on test set
score = accuracy_score(y_test, clf.predict(X_test))
print('Accuracy score = {:.2%}'.format(score))
# see performance
for c in range(0, 9, 4):
    sample = subsetdata(X_test, y_test, c)
    preds = clf.predict(sample)
    print(f'Actual class = {c} Predictions = {preds}')
    plotdata(sample[:5], ydim=16, xdim=16)
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def dydx(x,y):
    # Set the derivatives.
    # y is a 2-element array: y[0] holds y and y[1] holds z.
    # Our equation is d^2y/dx^2 = -y,
    # so we can write dy/dx = z and dz/dx = -y.
    # declare an array
    y_derivs = np.zeros(2)
    # set dy/dx = z
    y_derivs[0] = y[1]
    # set dz/dx = -y
    y_derivs[1] = -1*y[0]
    # return the array of derivatives
    return y_derivs
```
The coupled derivatives to integrate are defined above; the 4th-order Runge-Kutta method follows below.
```
def rk4_mv_core(dydx,xi,yi,nv,h):
    # declare the k? arrays
    # (? is a wild card: it can be any digit from 0-9)
    k1 = np.zeros(nv)
    k2 = np.zeros(nv)
    k3 = np.zeros(nv)
    k4 = np.zeros(nv)
    # nv is the number of variables
    # each k gives us derivative estimates of the different functions we're trying to integrate
    # define x at 1/2 step
    x_ipoh = xi + 0.5*h
    # define x at 1 step
    x_ipo = xi + h
    # declare a temp y array
    y_temp = np.zeros(nv)
    # get k1 values
    y_derivs = dydx(xi,yi)
    k1[:] = h*y_derivs[:]
    # get k2 values
    y_temp[:] = yi[:] + 0.5*k1[:]
    y_derivs = dydx(x_ipoh,y_temp)
    k2[:] = h*y_derivs[:]
    # get k3 values
    y_temp[:] = yi[:] + 0.5*k2[:]
    y_derivs = dydx(x_ipoh,y_temp)
    k3[:] = h*y_derivs[:]
    # get k4 values
    y_temp[:] = yi[:] + k3[:]
    y_derivs = dydx(x_ipo,y_temp)
    k4[:] = h*y_derivs[:]
    # advance y by a step h
    yipo = yi + (k1 + 2*k2 + 2*k3 + k4)/6.
    # this is an array
    return yipo
```
An adaptive step-size driver for RK4:
```
def rk4_mv_ad(dydx,x_i,y_i,nv,h,tol):
    # safety scale: tells us how much our step is going to change by
    SAFETY = 0.9
    # maximum factor by which we are allowed to grow the step
    H_NEW_FAC = 2.0
    # set a max number of iterations because we're running a while loop
    imax = 10000
    # set iteration variable
    i = 0
    # create an error array of size nv, filled with twice our tolerance
    Delta = np.full(nv,2*tol)
    # remember the step
    h_step = h
    # adjust the step
    while(Delta.max()/tol > 1.0):
        # estimate the error by taking 1 step of size h vs. 2 steps of size h/2
        y_2 = rk4_mv_core(dydx,x_i,y_i,nv,h_step)
        y_1 = rk4_mv_core(dydx,x_i,y_i,nv,0.5*h_step)
        y_11 = rk4_mv_core(dydx,x_i+0.5*h_step,y_1,nv,0.5*h_step)
        # compute an error
        Delta = np.fabs(y_2 - y_11)
        # if the error is too large, take a smaller step
        if(Delta.max()/tol > 1.0):
            # our error is too large, decrease the step
            h_step *= SAFETY * (Delta.max()/tol)**(-0.25)
        # check the iteration count
        if(i>=imax):
            print("Too many iterations in rk4_mv_ad()")
            raise StopIteration("Ending after i = %d" % i)
        # iterate
        i+=1
    # next time, try to take a bigger step
    h_new = np.fmin(h_step * (Delta.max()/tol)**(-0.9), h_step*H_NEW_FAC)
    # return the answer, a new step, and the step we actually took
    return y_2, h_new, h_step
```
A wrapper for RK4:
```
def rk4_mv(dydx,a,b,y_a,tol):
    # dydx is the derivative wrt x
    # a is the lower bound, b is the upper bound
    # y_a is the array of boundary conditions (the values of y at a)
    # tol is the tolerance for integrating y
    # define the starting step
    xi = a
    yi = y_a.copy()
    # an initial step size -- make it very small
    h = 1.0e-4 * (b-a)
    # set a max number of iterations, so if there are problems we will know about it
    imax = 10000
    # set iteration variable
    i = 0
    # set the number of coupled ODEs to the size of y_a
    nv = len(y_a)
    # set the initial conditions:
    # x is a single-element array containing a
    x = np.full(1,a)
    # y is a 2D array of shape (1, nv): one row along the x direction,
    # with nv columns (here nv = 2: one for y and one for z)
    y = np.full((1,nv),y_a)
    # set a flag because we're doing something iterative
    flag = 1
    # loop until we reach the right side (b)
    while(flag):
        # calculate y_i+1
        yi_new, h_new, h_step = rk4_mv_ad(dydx,xi,yi,nv,h,tol)
        # update the step
        h = h_new
        # prevent an overshoot
        if(xi+h_step>b):
            # take a smaller step
            h = b-xi
            # recalculate y_i+1
            yi_new, h_new, h_step = rk4_mv_ad(dydx,xi,yi,nv,h,tol)
            # break
            flag = 0
        # update values
        xi += h_step
        yi[:] = yi_new[:]
        # add the step to the arrays
        x = np.append(x,xi)
        y_new = np.zeros((len(x),nv))
        y_new[0:len(x)-1,:] = y
        y_new[-1,:] = yi[:]
        del y
        y = y_new
        # prevent too many iterations
        if(i>=imax):
            print("Max number of iterations reached")
            raise StopIteration("Iteration number = %d" % i)
        # iterate
        i += 1
        # output some information
        s = "i = %3d\tx = %9.8f\th = %9.8f\tb=%9.8f" % (i,xi, h_step, b)
        print(s)
        # break if the new xi is == b
        if(xi==b):
            flag = 0
    # return the answer
    return x,y
```
## Perform the integration
```
a = 0.0
b = 2.0 * np.pi
y_0 = np.zeros(2)
y_0[0] = 0.0
y_0[1] = 1.0
nv = 2
tolerance = 1.0e-6
#perform the integration
x,y = rk4_mv(dydx,a,b,y_0,tolerance)
```
## plot the result
```
plt.plot(x,y[:,0],'o',label='y(x)')
plt.plot(x,y[:,1],'o',label='dydx(x)')
xx = np.linspace(0,2.0*np.pi,1000)
plt.plot(xx,np.sin(xx),label='sin(x)')
plt.plot(xx,np.cos(xx),label='cos(x)')
plt.xlabel('x')
plt.ylabel('y, dy/dx')
plt.legend(frameon=False)
```
## Plot the error
Notice that the errors actually exceed our "tolerance". The tolerance bounds the *local* error of each adaptive step; those per-step errors accumulate over the whole integration, so the *global* error can be larger.
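This accumulation is easy to demonstrate with a small self-contained sketch of my own (using a fixed-step RK4 on the same equation $y'' = -y$, rather than the adaptive driver above): the error after many steps is far larger than the error of any single step.

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = h * f(x, y)
    k2 = h * f(x + 0.5*h, y + 0.5*k1)
    k3 = h * f(x + 0.5*h, y + 0.5*k2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6.0

# y'' = -y written as a first-order system: y' = z, z' = -y
f = lambda x, y: np.array([y[1], -y[0]])
h, n = 0.01, 1000                 # integrate from x = 0 to x = 10
y = np.array([0.0, 1.0])          # y(0) = 0, y'(0) = 1, so y(x) = sin(x)

# error of a single step vs. the exact solution
one_step_error = abs(rk4_step(f, 0.0, y, h)[0] - np.sin(h))

# accumulate n steps and measure the global error
x = 0.0
for _ in range(n):
    y = rk4_step(f, x, y, h)
    x += h
global_error = abs(y[0] - np.sin(x))
print(global_error > one_step_error)  # True: global error exceeds the per-step error
```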
```
sine = np.sin(x)
cosine = np.cos(x)
y_error = (y[:,0]-sine)
dydx_error = (y[:,1]-cosine)
plt.plot(x, y_error, label="y(x) Error")
plt.plot(x, dydx_error, label="dydx(x) Error")
plt.legend(frameon=False)
```
# Setup
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import itertools as it
import helpers_03
%matplotlib inline
```
# Neurons as Logic Gates
As an introduction to neural networks and their component neurons, we are going to look at using neurons to implement the most primitive logic computations: logic gates. Let's go!
##### The Sigmoid Function
The basic, classic activation function that we apply to neurons is the sigmoid function (sometimes just called *the* sigmoid): the standard logistic function.
$$
\sigma(x) = \frac{1}{1 + e^{-x}}
$$
$\sigma$ ranges over (0, 1). When the input $x$ is negative, $\sigma$ is close to 0. When $x$ is positive, $\sigma$ is close to 1. At $x=0$, $\sigma=0.5$.
We can implement this conveniently with NumPy.
```
def sigmoid(x):
    """Sigmoid function"""
    return 1.0 / (1.0 + np.exp(-x))
```
And plot it with matplotlib.
```
# Plot The sigmoid function
xs = np.linspace(-10, 10, num=100, dtype=np.float32)
activation = sigmoid(xs)
fig = plt.figure(figsize=(6,4))
plt.plot(xs, activation)
plt.plot(0,.5,'ro')
plt.grid(True, which='both')
plt.axhline(y=0, color='y')
plt.axvline(x=0, color='y')
plt.ylim([-0.1, 1.15])
```
## An Example with OR
##### OR Logic
A logic gate takes in two boolean (true/false or 1/0) inputs, and returns either a 0 or 1 depending on its rule. The truth table for a logic gate shows the outputs for each combination of inputs: (0, 0), (0, 1), (1,0), and (1, 1). For example, let's look at the truth table for an Or-gate:
<table>
<tr><th colspan="3">OR gate truth table</th></tr>
<tr><th colspan="2">Input</th><th>Output</th></tr>
<tr><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>1</td></tr>
</table>
##### OR as a Neuron
A neuron that uses the sigmoid activation function outputs a value between (0, 1). This naturally leads us to think about boolean values. Imagine a neuron that takes in two inputs, $x_1$ and $x_2$, and a bias term:
<img src="./images/logic01.png" width=50%/>
By limiting the inputs of $x_1$ and $x_2$ to be in $\left\{0, 1\right\}$, we can simulate the effect of logic gates with our neuron. The goal is to find the weights (represented by ? marks above), such that it returns an output close to 0 or 1 depending on the inputs. What weights should we use to output the same results as OR? Remember: $\sigma(z)$ is close to 0 when $z$ is largely negative (around -10 or less), and is close to 1 when $z$ is largely positive (around +10 or greater).
$$
z = w_1 x_1 + w_2 x_2 + b
$$
Let's think this through:
* When $x_1$ and $x_2$ are both 0, the only value affecting $z$ is $b$. Because we want the result for input (0, 0) to be close to zero, $b$ should be negative (at least -10) to get the very left-hand part of the sigmoid.
* If either $x_1$ or $x_2$ is 1, we want the output to be close to 1. That means the weights associated with $x_1$ and $x_2$ should be enough to offset $b$ to the point of causing $z$ to be at least 10 (i.e., to the far right part of the sigmoid).
Let's give $b$ a value of -10. How big do we need $w_1$ and $w_2$ to be? At least +20 will get us to +10 for just one of $\{w_1, w_2\}$ being on.
So let's try out $w_1=20$, $w_2=20$, and $b=-10$:
<img src="./images/logic02.png" width=50%/>
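As a quick sanity check (a minimal self-contained sketch, independent of the helper functions defined next), we can evaluate the sigmoid at these weights for all four input pairs and confirm that the rounded outputs match the OR truth table:

```python
import numpy as np

def sigmoid(x):
    # redefined here so the snippet stands on its own
    return 1.0 / (1.0 + np.exp(-x))

# z = w1*x1 + w2*x2 + b for every input pair; round the sigmoid to get 0 or 1
w1, w2, b = 20, 20, -10
for x1 in (0, 1):
    for x2 in (0, 1):
        out = round(sigmoid(w1 * x1 + w2 * x2 + b))
        print(x1, x2, '->', out)
```

The four printed rows reproduce the OR column: 0 only for input (0, 0), and 1 otherwise.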
##### Some Utility Functions
Since we're going to be making several example logic gates (from different sets of weights and biases), here are two helpers. The first takes our weights and biases and turns them into a two-argument function that we can use like `and(a,b)`. The second prints a truth table for a gate.
```
def logic_gate(w1, w2, b):
    '''logic_gate is a function which returns a function.
    The returned function takes two args and (hopefully)
    acts like a logic gate (and/or/not/etc.). Its behavior
    is determined by w1, w2, b. A longer, better name would be
    make_twoarg_logic_gate_function.'''
    def the_gate(x1, x2):
        return sigmoid(w1 * x1 + w2 * x2 + b)
    return the_gate

def test(gate):
    'Helper function to test out our weight functions.'
    for a, b in it.product(range(2), repeat=2):
        print("{}, {}: {}".format(a, b, np.round(gate(a, b))))
```
Let's see how we did. Here's the gold-standard truth table.
<table>
<tr><th colspan="3">OR gate truth table</th></tr>
<tr><th colspan="2">Input</th><th>Output</th></tr>
<tr><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>1</td></tr>
</table>
And our result:
```
or_gate = logic_gate(20, 20, -10)
test(or_gate)
```
This matches - great!
# Exercise 1
##### Part 1: AND Gate
Now you try finding the appropriate weight values for each truth table. Try not to guess and check. Think through it logically and try to derive values that work.
<table>
<tr><th colspan="3">AND gate truth table</th></tr>
<tr><th colspan="2">Input</th><th>Output</th></tr>
<tr><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>0</td></tr>
<tr><td>1</td><td>0</td><td>0</td></tr>
<tr><td>1</td><td>1</td><td>1</td></tr>
</table>
```
# Fill in the w1, w2, and b parameters such that the truth table matches
# and_gate = logic_gate()
# test(and_gate)
```
##### Part 2: NOR (Not Or) Gate
<table>
<tr><th colspan="3">NOR gate truth table</th></tr>
<tr><th colspan="2">Input</th><th>Output</th></tr>
<tr><td>0</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>1</td><td>0</td></tr>
<tr><td>1</td><td>0</td><td>0</td></tr>
<tr><td>1</td><td>1</td><td>0</td></tr>
</table>
```
# Fill in the w1, w2, and b parameters such that the truth table matches
# nor_gate = logic_gate()
# test(nor_gate)
```
##### Part 3: NAND (Not And) Gate
<table>
<tr><th colspan="3">NAND gate truth table</th></tr>
<tr><th colspan="2">Input</th><th>Output</th></tr>
<tr><td>0</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>0</td></tr>
</table>
```
# Fill in the w1, w2, and b parameters such that the truth table matches
# nand_gate = logic_gate()
# test(nand_gate)
```
## Solutions 1
# Limits of Single Neurons
If you've taken computer science courses, you may know that XOR gates are a key building block of computation: paired with an AND gate, they form a half-adder, the foundation of being able to add numbers together. Here's the truth table for XOR:
##### XOR (Exclusive Or) Gate
<table>
<tr><th colspan="3">XOR gate truth table</th></tr>
<tr><th colspan="2">Input</th><th>Output</th></tr>
<tr><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>0</td></tr>
</table>
Now the question is: can you create a set of weights such that a single neuron outputs this table? It turns out that you cannot. A single neuron can only draw one linear decision boundary through its inputs, and XOR's classes are not linearly separable. So individual neurons are out. Can we still use neurons to somehow form an XOR gate?
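A brute-force check makes this concrete (a sketch of my own, not part of the original notebook): rounding the sigmoid turns the neuron into a linear threshold unit, so we can scan a grid of candidate weights and biases and confirm that none of them reproduces XOR, while OR is found easily:

```python
import itertools as it
import numpy as np

def gate_outputs(w1, w2, b):
    """Rounded sigmoid outputs for all four boolean input pairs."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return [int(round(sig(w1 * a + w2 * c + b))) for a, c in it.product((0, 1), repeat=2)]

weights = range(-20, 21, 5)  # coarse grid of candidate weights/biases
xor_found = any(gate_outputs(w1, w2, b) == [0, 1, 1, 0]
                for w1, w2, b in it.product(weights, repeat=3))
or_found = any(gate_outputs(w1, w2, b) == [0, 1, 1, 1]
               for w1, w2, b in it.product(weights, repeat=3))
print('XOR found:', xor_found)  # False -- no single neuron matches XOR
print('OR found:', or_found)    # True -- e.g. w1=20, w2=20, b=-10
```

No grid, however fine, would change the XOR result: no single linear boundary separates {(0,1), (1,0)} from {(0,0), (1,1)}.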
What if we tried something more complex:
<img src="./images/logic03.png" width=60%/>
Here, we've got the inputs going to two separate gates: the top neuron is an OR gate, and the bottom is a NAND gate. The output of these gates is passed to another neuron, which is an AND gate. If you work out the outputs at each combination of input values, you'll see that this is an XOR gate!
```
# Make sure you have or_gate, nand_gate, and and_gate working from above
def xor_gate(a, b):
    c = or_gate(a, b)
    d = nand_gate(a, b)
    return and_gate(c, d)
test(xor_gate)
```
Thus, we can see how chaining together neurons can compose more complex models than we'd otherwise have access to.
# Learning a Logic Gate
We can use TensorFlow to try and teach a model to learn the correct weights and bias by passing in our truth table as training data.
```
# Create an empty Graph to place our operations in
logic_graph = tf.Graph()
with logic_graph.as_default():
    # Placeholder inputs for our a, b, and label training data
    x1 = tf.placeholder(tf.float32)
    x2 = tf.placeholder(tf.float32)
    label = tf.placeholder(tf.float32)
    # A placeholder for our learning rate, so we can adjust it
    learning_rate = tf.placeholder(tf.float32)
    # The Variables we'd like to learn: weights for a and b, as well as a bias term
    w1 = tf.Variable(tf.random_normal([]))
    w2 = tf.Variable(tf.random_normal([]))
    b = tf.Variable(0.0, dtype=tf.float32)
    # Use the built-in sigmoid function for our output value
    output = tf.nn.sigmoid(w1 * x1 + w2 * x2 + b)
    # We'll use the mean of squared errors as our loss function
    loss = tf.reduce_mean(tf.square(output - label))
    correct = tf.equal(tf.round(output), label)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    # Finally, we create a gradient descent training operation and an initialization operation
    train = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    init = tf.global_variables_initializer()

with tf.Session(graph=logic_graph) as sess:
    sess.run(init)
    # Training data for all combinations of inputs
    and_table = np.array([[0,0,0],
                          [1,0,0],
                          [0,1,0],
                          [1,1,1]])
    feed_dict = {x1: and_table[:,0],
                 x2: and_table[:,1],
                 label: and_table[:,2],
                 learning_rate: 0.5}
    for i in range(5000):
        l, acc, _ = sess.run([loss, accuracy, train], feed_dict)
        if i % 1000 == 0:
            print('loss: {}\taccuracy: {}'.format(l, acc))
    test_dict = {x1: and_table[:,0],   # [0.0, 1.0, 0.0, 1.0]
                 x2: and_table[:,1]}   # [0.0, 0.0, 1.0, 1.0]
    w1_val, w2_val, b_val, out = sess.run([w1, w2, b, output], test_dict)
    print('\nLearned weight for w1:\t {}'.format(w1_val))
    print('Learned weight for w2:\t {}'.format(w2_val))
    print('Learned weight for bias: {}\n'.format(b_val))
    print(np.column_stack((and_table[:,[0,1]], out.round().astype(np.uint8))))
    # FIXME! ARGH! use real python or numpy
    # idx = 0
    # for i in [0, 1]:
    #     for j in [0, 1]:
    #         print('{}, {}: {}'.format(i, j, np.round(out[idx])))
    #         idx += 1
```
# Exercise 2
You may recall that in week 2, we built a class `TF_GD_LinearRegression` that wrapped up the three steps of using a learning model: (1) build the model graph, (2) train/fit, and (3) test/predict. Above, we *did not* use that style of implementation, and you can see that things get messy quickly. We have model creation in one spot, and then training, testing, and output all mixed together (along with TensorFlow helper code like sessions, etc.). We can do better. Rework the code above into a class like `TF_GD_LinearRegression`.
## Solution 2
# Learning an XOR Gate
If we compose a two stage model, we can learn the XOR gate. You'll notice that defining the model itself is starting to get messy. We'll talk about ways of dealing with that next week.
```
class XOR_Graph:
    def __init__(self):
        # Create an empty Graph to place our operations in
        xor_graph = tf.Graph()
        with xor_graph.as_default():
            # Placeholder inputs for our a, b, and label training data
            self.x1 = tf.placeholder(tf.float32)
            self.x2 = tf.placeholder(tf.float32)
            self.label = tf.placeholder(tf.float32)
            # A placeholder for our learning rate, so we can adjust it
            self.learning_rate = tf.placeholder(tf.float32)
            # abbreviations! this section is the difference
            # from the LogicGate class above
            Var = tf.Variable; rn = tf.random_normal
            self.weights = [[Var(rn([])), Var(rn([]))],
                            [Var(rn([])), Var(rn([]))],
                            [Var(rn([])), Var(rn([]))]]
            self.biases = [Var(0.0, dtype=tf.float32),
                           Var(0.0, dtype=tf.float32),
                           Var(0.0, dtype=tf.float32)]
            sig1 = tf.nn.sigmoid(self.x1 * self.weights[0][0] +
                                 self.x2 * self.weights[0][1] +
                                 self.biases[0])
            sig2 = tf.nn.sigmoid(self.x1 * self.weights[1][0] +
                                 self.x2 * self.weights[1][1] +
                                 self.biases[1])
            self.output = tf.nn.sigmoid(sig1 * self.weights[2][0] +
                                        sig2 * self.weights[2][1] +
                                        self.biases[2])
            # We'll use the mean of squared errors as our loss function
            self.loss = tf.reduce_mean(tf.square(self.output - self.label))
            # Finally, we create a gradient descent training operation
            # and an initialization operation
            gdo = tf.train.GradientDescentOptimizer
            self.train = gdo(self.learning_rate).minimize(self.loss)
            correct = tf.equal(tf.round(self.output), self.label)
            self.accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
            init = tf.global_variables_initializer()
        self.sess = tf.Session(graph=xor_graph)
        self.sess.run(init)

    def fit(self, train_dict):
        loss, acc, _ = self.sess.run([self.loss, self.accuracy, self.train],
                                     train_dict)
        return loss, acc

    def predict(self, test_dict):
        # make a list of organized weights;
        # see tf.get_collection for more advanced ways to handle this
        all_trained = (self.weights[0] + [self.biases[0]] +
                       self.weights[1] + [self.biases[1]] +
                       self.weights[2] + [self.biases[2]])
        return self.sess.run(all_trained + [self.output], test_dict)
xor_table = np.array([[0,0,0],
                      [1,0,1],
                      [0,1,1],
                      [1,1,0]])
logic_model = XOR_Graph()
train_dict = {logic_model.x1: xor_table[:,0],
              logic_model.x2: xor_table[:,1],
              logic_model.label: xor_table[:,2],
              logic_model.learning_rate: 0.5}
print("training")
# note: this might get stuck in a local minimum because it's a
# small problem with no noise (yes, noise helps!)
# it can converge in one round of 1000, or it might stay
# stuck for all 10000
for i in range(10000):
    loss, acc = logic_model.fit(train_dict)
    if i % 1000 == 0:
        print('loss: {}\taccuracy: {}'.format(loss, acc))
print('loss: {}\taccuracy: {}'.format(loss, acc))
print("testing")
test_dict = {logic_model.x1: xor_table[:,0],
logic_model.x2: xor_table[:,1]}
results = logic_model.predict(test_dict)
wb_lrn, predictions = results[:-1], results[-1]
print(wb_lrn)
wb_lrn = np.array(wb_lrn).reshape(3,3)
# combine the predictions with the inputs and clean up the data
# round it and convert to unsigned 8 bit ints
out_table = np.column_stack((xor_table[:,[0,1]],
predictions)).round().astype(np.uint8)
print("results")
print('Learned weights/bias (L1):', wb_lrn[0])
print('Learned weights/bias (L2):', wb_lrn[1])
print('Learned weights/bias (L3):', wb_lrn[2])
print('Testing Table:')
print(out_table)
print("Correct?", np.allclose(xor_table, out_table))
```
# An Example Neural Network
So, now that we've worked with some primitive models, let's take a look at something a bit closer to what we'll work with moving forward: an actual neural network.
The following model accepts a 100 dimensional input, has a hidden layer depth of 300, and an output layer depth of 50. We use a sigmoid activation function for the hidden layer.
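As a shape check, the same architecture can be sketched as a plain NumPy forward pass (my own illustration, independent of the TensorFlow graph below; the random weights are just placeholders, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(100, 300)), np.zeros(300)   # hidden layer: 100 -> 300
W2, b2 = rng.normal(size=(300, 50)), np.zeros(50)     # output layer: 300 -> 50

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(4, 100))     # a batch of 4 examples, 100 features each
hidden = sigmoid(x @ W1 + b1)     # sigmoid activation on the hidden layer
output = hidden @ W2 + b2         # linear output layer
print(output.shape)               # (4, 50)
```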
```
nn1_graph = tf.Graph()
with nn1_graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 100])
    y = tf.placeholder(tf.float32, shape=[None])  # Labels, not used in this model
    with tf.name_scope('hidden1'):
        w = tf.Variable(tf.truncated_normal([100, 300]), name='W')
        b = tf.Variable(tf.zeros([300]), name='b')
        z = tf.matmul(x, w) + b
        a = tf.nn.sigmoid(z)
    with tf.name_scope('output'):
        w = tf.Variable(tf.truncated_normal([300, 50]), name='W')
        b = tf.Variable(tf.zeros([50]), name='b')
        z = tf.matmul(a, w) + b
        output = z
    with tf.name_scope('global_step'):
        global_step = tf.Variable(0, trainable=False, name='global_step')
        inc_step = tf.assign_add(global_step, 1, name='increment_step')
    with tf.name_scope('summaries'):
        for var in tf.trainable_variables():
            hist_summary = tf.summary.histogram(var.op.name, var)
        summary_op = tf.summary.merge_all()
    init = tf.global_variables_initializer()

tb_base_path = 'tbout/nn1_graph'
tb_path = helpers_03.get_fresh_dir(tb_base_path)
sess = tf.Session(graph=nn1_graph)
writer = tf.summary.FileWriter(tb_path, graph=nn1_graph)
sess.run(init)
summaries = sess.run(summary_op)
writer.add_summary(summaries)
writer.close()
sess.close()
```
# Exercise 3
Modify the template above to create your own neural network with the following features:
* Accepts input of length 200 (and allows for variable number of examples)
* First hidden layer depth of 800
* Second hidden layer depth of 600
* Third hidden layer depth of 400
* Output layer depth of 100
* Include histogram summaries of the variables
## Solution 3
# Data Science Bootcamp - The Bridge
## Pre-course
In this notebook we will cover, one by one, the basic concepts of Python. It consists of hands-on exercises accompanied by a theoretical explanation given by the instructor.
The following links are recommended for students who want to go deeper and reinforce concepts through exercises and examples:
- https://www.kaggle.com/learn/python
- https://facundoq.github.io/courses/aa2018/res/02_python.html
- https://www.w3resource.com/python-exercises/
- https://www.practicepython.org/
- https://es.slideshare.net/egutierrezru/python-paraprincipiantes
- https://www.sololearn.com/Play/Python#
- https://github.com/mhkmcp/Python-Bootcamp-from-Basic-to-Advanced
Advanced exercises:
- https://github.com/darkprinx/100-plus-Python-programming-exercises-extended/tree/master/Status (++)
- https://github.com/mahtab04/Python-Programming-Practice (++)
- https://github.com/whojayantkumar/Python_Programs (+++)
- https://www.w3resource.com/python-exercises/ (++++)
- https://github.com/fupus/notebooks-ejercicios (+++++)
A helpful tool, PythonTutor:
- http://pythontutor.com/
## 1. Variables and Types
### Strings
```
# Integer - int
x = 7
# String - a list of characters
x = "lorena"
print(x)
x = 7
print(x)
# built-in
type
x = 5
y = 7
z = x + y
print(z)
x = "'lorena'\" "
l = 'silvia----'
g = x + l
# Strings are concatenated
print(g)
print(g)
# type shows the type of the variable
type(g)
type(3)
# print is a function that takes several arguments, each separated by a comma. After each comma, 'print' adds a space.
# bad practice
print( g,z , 6, "cadena")
# good practice - PEP8
print(g, z, 6, "cadena")
u = "g"
silvia = "silvia tiene "
anos = " años"
suma = silvia + u + anos
print(suma)
n = 2
m = "3"
print(n + m)
# Convert from int to str
j = 2
print(j)
print(type(j))
j = str(j)
print(j)
print(type(j))
# Convert from int to str
j = 2
print(j)
print(type(j))
j = str(j) + " - " + silvia
print(j)
print(type(j))
k = 22
k = str(k)
print(k)
# Convert from str to int
lor = "98"
lor = int(lor)
print(lor)
# To get the length of a list of characters (a list)
mn = "lista de caracteres$%·$% "
# length
print(len(mn))
h = len(mn)
print(h + 7)
h = 8
print(h)
x = 2
print(x)
gabriel_vazquez = "Gabriel Vazquez"
print("Hello Python world!")
print("Nombre de compañero")
companero_clase = "Compañero123123"
print(companero_clase)
print(compañero)
x = (2 + 4) + 7
print(x)
# String, Integer, Float, List, None (NaN)
# str, int, float, list,
string_ = "23"
numero = 23
print(type(string_))
print(string_)
numero2 = 10
suma = numero + numero2
print(suma)
string2 = "33"
suma2 = string_ + string2
print(suma2)
m = (numero2 + int(string2))
print(m)
m = ((((65 + int("22")) * 2)))
m
print(type(int(string2)))
y = 22
y = str(y)
print(type(y))
string2 = int(string2)
print(type(string2))
string3 = "10"
numero_a_partir_de_string = int(string3)
print(numero_a_partir_de_string)
print(string3)
print(type(numero_a_partir_de_string))
print(type(string3))
h = "2"
int(h)
# decimals are floats. Python allows operations between int and float
x = 4
y = 4.2
print(x + y)
# Normal division (/) is always float
# Floor division (//) can be:
# - float if one (or both) of the two numbers is a float
# - int if both are int
j = 15
k = 4
division = j // k
print(division)
print(type(division))
num1 = 12
num2 = 3
suma = num1 + num2
resta = num1 - num2
multiplicacion = num1 * num2
division = num1 / num2
division_absoluta = num1 // num2
gabriel_vazquez = "Gabriel Vazquez"
print("suma:", suma)
print("resta:", resta)
print("multiplicacion:", multiplicacion)
print("division:", division)
print("division_absoluta:", division_absoluta)
print(type(division))
print(type(division_absoluta))
print(x)
j = "2"
j
print(j)
x = 2
j = 6
g = 4
h = "popeye"
# Jupyter notebook automatically displays the last line of a cell (the variable)
print(g)
print(j)
print(h)
x
int(5.6/2)
float(2)
g = int(5.6/2)
print(g)
5
x = int(5.6//2)
x
# I am a comment
# print("Hello Python world!")
# I am creating a variable with value 2
"""
This is another comment
"""
print(x)
x = 25
x = 76
x = "1"
message2 = "One of Python's strengths is its diverse community."
print(message2)
```
## Exercise:
### Create a new cell.
### Declare three variables:
- One named "edad" holding your age
- Another, "edad_companero_der", holding the age (as an integer) of your neighbor to the right
- Another, "suma_anterior", holding the sum of the two variables declared above
Print the variable "suma_anterior"
```
edad = 99
edad_companero_der = 30
suma_anterior = edad_companero_der + edad
print(suma_anterior)
h = 89 + suma_anterior
h
edad = 18
edad_companero_der = 29
suma_anterior = edad + edad_companero_der
suma_anterior
i = "hola"
o = i.upper()
o
o.lower()
name = "Ada Lovelace"
x = 2
print(name.upper())
print(name.lower())
print(name.upper)
print(name.upper())
x = 2
x = x + 1
x
x += 1
x
# int
x = 1
# float
y = 2.
# str
s = "string"
# type --> shows the type of the variable or value
print(type(x))
type(y)
type(s)
5 + 2
x = 2
x = x + 1
x += 1
x = 2
y = 4
print(x, y, "Pepito", "Hola")
s = "Hola soy Soraya:"
s + "789"
print(s, 98, 29, sep="")
print(s, 98, 29)
type( x )
2 + 6
```
## 2. Numbers and Operators
```
### Integers ###
x = 3
print("- Type of x:")
print(type(x))  # Print the type (or `class`) of x
print("- Value of x:")
print(x)  # Print a value
print("- x+1:")
print(x + 1)  # Addition: prints "4"
print("- x-1:")
print(x - 1)  # Subtraction; prints "2"
print("- x*2:")
print(x * 2)  # Multiplication; prints "6"
print("- x^2:")
print(x ** 2)  # Exponentiation; prints "9"
# Modifying x
x += 1
print("- x after modification:")
print(x)  # Prints "4"
x *= 2
print("- x after modification:")
print(x)  # Prints "8"
print("- 40 modulo x:")
print(40 % x)
print("- Several things on one line:")
print(1, 2, x, 5*2)  # print several things at once
# The modulo operator gives the remainder of the division between two numbers
2 % 2
3 % 2
4 % 5
numero = 99
numero % 2  # If the remainder is 0, the number is even. Otherwise, odd.
numero % 2
99 % 100
y = 2.5
print("- Type of y:")
print(type(y))  # Print the type of y
print("- Several floating point values:")
print(y, y + 1, y * 2.5, y ** 2)  # Print several floating point numbers
```
## Title
Write anything here
1. one
2. two
# INPUT
```
edad = input("Enter your age")
print("Diego is", edad, "years old")
# input reads a text entry of type String
num1 = int(input("Enter the first number"))
num2 = int(input("Enter the second number"))
print(num1 + num2)
```
## 3. The None Type
```
x = None
n = 5
s = "Cadena"
print(x + s)  # raises a TypeError: None cannot be added to a string
```
## 4. Lists and Collections
```
# A list of elements:
# Positions are counted starting from 0
s = "Cadena"
primer_elemento = s[0]
#ultimo_elemento = s[5]
ultimo_elemento = s[-1]
print(primer_elemento)
print(ultimo_elemento)
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
bicycles[0]
tamano_lista = len(bicycles)
tamano_lista
ultimo_elemento_por_posicion = tamano_lista - 1
bicycles[ultimo_elemento_por_posicion]
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
message = "My first bicycle was a " + bicycles[0]
print(bicycles)
print(message)
print(type(bicycles))
s = "String"
s.lower()
print(s.lower())
print(s)
s = s.lower()
s
# There are two kinds of methods:
# 1. Those that modify values in place, without us having to reassign the variable
# 2. Those that only return the result and do not modify the variable. We have to reassign explicitly if we want to change the variable.
cars = ['bmw', 'audi', 'toyota', 'subaru']
print(cars)
cars.reverse()
print(cars)
cars = ['bmw']
print(cars)
cars.reverse()
print(cars)
s = "Hola soy Clara"
print(s[::-1])
s
l = "hola"
len(l)
l[3]
# To access several elements, use the "[N:M]" notation. N is the first element to get; M is one past the last element to get. Example:
# To show positions 3 through 7, we specify: [3:8]
# If M is omitted, we get from N to the end.
# If N is omitted, we get from the beginning of the collection up to M
s[3:len(s)]
s[:3]
s[3:10]
motorcycles = ['honda', 'yamaha', 'suzuki', 'ducati']
print(motorcycles)
too_expensive = 'ducati'
motorcycles.remove(too_expensive)
print(motorcycles)
print(too_expensive + " is too expensive for me.")
# Appends a value to the last position of the list
motorcycles.append("ducati")
motorcycles
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'ducati']
lista[3]
lista.remove(8.9)
lista
lista
l = lista[1]
l
lista.remove(l)
lista
lista.remove(lista[2])
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'honda', 'ducati']
lista
# remove deletes the first element found that matches the value of the argument
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'honda', 'ducati']
lista.remove("honda")
lista
# Access position 1 of the element at position 2 of lista
lista[2][1]
lista[3][2]
p = lista.remove("honda")
print(p)
l = [2, 4, 6, 8]
l.reverse()
l
```
### Collections
1. Lists
2. Strings (collections of characters)
3. Tuples
4. Sets
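The four collection types listed above can be contrasted in a few lines (a minimal sketch of my own, not from the original notebook): lists are mutable, tuples are immutable, and sets keep only unique elements.

```python
# Lists are mutable: items can be added, removed, and replaced.
lista = [1, 2, 3]
lista[0] = 99
lista.append(4)
print(lista)                      # [99, 2, 3, 4]

# Tuples (like strings) are immutable: indexing works, assignment does not.
tupla = (1, 2, 3)
try:
    tupla[0] = 99
except TypeError:
    print("tuples are immutable")

# Sets are unordered and keep only unique elements.
conjunto = set([1, 2, 2, 3, 3, 3])
print(conjunto == {1, 2, 3})      # True
```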
```
# Lists --> mutable
lista = [2, 5, "caract", [9, "g", ["j"]]]
print(lista[-1][-1][-1])
lista[3][2][0]
lista.append("ultimo")
lista
# Tuples --> immutable
tupla = (2, 5, "caract", [9, "g", ["j"]])
tupla
s = "String"
s[2]
tupla[3].remove(9)
tupla
tupla[3][1].remove("j")
tupla
tupla2 = (2, 5, 'caract', ['g', ['j']])
tupla2[-1].remove(["j"])
tupla2
tupla2 = (2, 5, 'caract', ['g', ['j']])
tupla2[-1].remove("g")
tupla2
tupla2[-1].remove(["j"])
tupla2
if False == 0:
    print(0)
print(type(lista))
print(type(tupla))
# Updating lists
lista = [2, "6", ["k", "m"]]
lista[1] = 1
lista
lista = [2, "6", ["k", "m"]]
lista[2] = 0
lista
tupla = (2, "6", ["k", "m"])
tupla[2] = 0
tupla
tupla = (2, "6", ["k", "m"])
tupla[2][1] = 0
tupla
# Sets
conjunto = [2, 4, 6, "a", "z", "h", 2]
conjunto = set(conjunto)
conjunto
conjunto = ["a", "z", "h", 2, 2, 4, 6, True, True, False]
conjunto = set(conjunto)
conjunto
conjunto = ["a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False]
conjunto = set(conjunto)
conjunto
conjunto_tupla = ("a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False)
conjunto = set(conjunto_tupla)
conjunto
conjunto = {"a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False}
conjunto
s = "String"
lista_s = list(s)
lista_s
s = "String"
conj = {s}
conj
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = tuple(tupla)
tupla
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = (((((tupla)))))
tupla
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = tuple(tupla)
tupla
conjunto = {2, 5, "h"}
lista_con_conjunto = [conjunto]
lista_con_conjunto[0]
# We cannot index into the elements of a set
lista_con_conjunto[0][0]
lista = [1, 5, 6, True, 6]
set(lista)
lista = [True, 5, 6, 1, 6]
set(lista)
1 == True
```
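A short sketch of the set behavior shown above, including the `1 == True` subtlety:

```python
# Sets keep only unique values. Because True == 1 and False == 0 in Python,
# a boolean can be "absorbed" by an equal integer that is already in the set.
assert set([2, 4, 6, "a", 2]) == {2, 4, 6, "a"}
assert 1 == True and 0 == False
assert set([1, 5, 6, True, 6]) == {1, 5, 6}     # True collapses into 1
assert set([True, 5, 6, 1, 6]) == {True, 5, 6}  # equal sets either way
```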
## 5. Conditionals: if, elif, else
### Boolean
```
True
False
# Comparison operators
x = (1 == 1)
x
1 == 2
"a" == "a"
# Not equal
"a" != "a"
2 > 4
4 > 2
4 >= 4
4 > 4
4 <= 5
4 < 3
input()
1 == 1
"""
== -> Igualdad
!= -> Diferecia
< -> Menor que
> -> Mayor que
<= -> Menor o igual
>= -> Mayor o igual
"""
# and returns True only if ALL operands are True
True and False
# or returns True if at least ONE operand is True
(1 == 1) or (1 == 2)
(1 == 1) or (1 == 2) and (1 == 1)
(1 == 1) or ((1 == 2) and (1 == 1))
(1 == 1) and (1 == 2) and (1 == 1)
(1 == 1) and (1 == 2) and ((1 == 1) or (0 == 0))
# True and False and True
print("Yo soy\n Gabriel")
if 1 != 1:
    print("They are equal")
else:
    print("Did not enter the if")
if 1 == 1:
    print("They are equal")
else:
    print("Did not enter the if")
if 1 > 3:
    print("It is greater")
elif 2 == 2:
    print("It is equal")
elif 3 == 3:
    print("It is equal 2")
else:
    print("None of the above")
if 2 > 3:
    print(1)
else:
    print("First else")
if 2 == 2:
    print(2)
else:
    print("Second else")
if 2 > 3:
    print(1)
else:
    print("First else")
if 3 == 3:
    print(" 3 is 3 ")
if 2 == 2:
    print(2)
else:
    print("Second else")
if 2 > 3:
    print(1)
else:
    print("First else")
# -------
if 3 == 4:
    print(" 3 is 3 ")
# --------
if 2 == 2:
    print(2)
    print(5)
    x = 6
    print(x)
else:
    print("Second else")
if 2 > 3:
    print(1)
else:
    print("First else")
# -------
if 3 == 4:
    print(" 3 is 3 ")
# --------
if 2 == 2:
    print(2)
    print(5)
    x = 6
    print(x)
    # ------
    if x == 7:
        print("X equals 6")
    # ------
    y = 7
    print(y)
else:
    print("Second else")
if (not (1 == 1)):
    print("Hello")
if not None:
    print(1)
if "a":
    print(2)
if 0:
    print(0)
# Values that count as False in conditions:
# None
# False
# 0 (int or float)
# Any empty collection --> [], "", (), {}
# None does not act like a number when compared with another number
"""
True is treated as the numeric 1
"""
lista = []
if lista:
    print(4)
lista = ["1"]
if lista:
    print(4)
if 0.0:
    print(2)
if [] or False or 0 or None:
    print(4)
if [] and False or 0 or None:
    print(4)
if not ([] or False or 0 or None):
    print(4)
if [] or False or not 0 or None:
    print(True)
else:
    print(False)
x = True
y = False
x + y
x = True
y = False
str(x) + str(y)
# Note: the parameter y must be declared, otherwise calling
# funcion_condicion(y=4) raises a TypeError.
def funcion_condicion(y):
    if y > 4:
        print("It is greater than 4")
    else:
        print("It is not greater than 4")
funcion_condicion(y=4)
def funcion_primera(x):
    if x == 4:
        print("It is equal to 4")
    else:
        funcion_condicion(y=x)
funcion_primera(x=5)
def funcion_final(apellido):
    if len(apellido) > 5:
        print("Meets the condition")
    else:
        print("Does not meet the condition")
funcion_final(apellido="Vazquez")
```
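The truthiness rules above can be checked directly (a minimal sketch):

```python
# Falsy values in Python: None, False, zero, and empty collections.
falsy = [None, False, 0, 0.0, "", [], (), {}]
assert not any(bool(v) for v in falsy)

# Anything non-empty / non-zero is truthy.
assert bool("a") and bool([1]) and bool(-3)

# `not` flips truthiness, which is why `if not None:` runs its body.
assert (not None) is True
```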
# For and While Loops
```
lista = [1, "dos", 3, "Pepito"]
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
for p in lista:
    print(p)
altura1 = [1.78, 1.63, 1.75, 1.68]
altura2 = [2.00, 1.82]
altura3 = [1.65, 1.73, 1.75]
altura4 = [1.72, 1.71, 1.71, 1.62]
lista_alturas = [altura1, altura2, altura3, altura4]
print(lista_alturas[0][1])
for x in lista_alturas:
    print(x[3])
def mostrar_cada_elemento_de_lista(lista):
    for x in lista:
        print(x)
mostrar_cada_elemento_de_lista(lista=lista_alturas)
mostrar_cada_elemento_de_lista(lista=lista)
for x in lista_alturas:
    if len(x) > 2:
        print(x[2])
    else:
        print(x[1])
```
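The heading mentions `while` loops, which the cells above do not show; here is a minimal `while` sketch over an illustrative list of heights:

```python
# Summing heights with a while loop (illustrative data).
alturas = [1.78, 1.63, 1.75, 1.68]
total = 0.0
i = 0
while i < len(alturas):
    total += alturas[i]   # accumulate the current element
    i += 1                # advance the index, or the loop never ends
assert round(total, 2) == 6.84
```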
# `map` vs `apply`
Do you know the difference between the **`map`** and **`apply`** Series methods?
```
from IPython.display import IFrame
IFrame('http://etc.ch/RjoN', 400, 300)
IFrame('https://directpoll.com/r?XDbzPBd3ixYqg8VzFnGsNyv3rYRtjyM1R0g6HvOxV', 400, 300)
```
# Primary usage of `map` method
As the name implies, **`map`** can literally map one value to another in a Series. Pass it a dictionary (or another Series). Let's see an example:
```
import pandas as pd
import numpy as np
s = pd.Series(np.random.randint(1, 7, 10))
s
```
Create mapping dictionary
```
d = {1:'odd', 2:'even', 3:'odd', 4:'even', 5:'odd', 6:'even'}
s.map(d)
```
Works the same if you use a Series
```
s1 = pd.Series(d)
s1
s.map(s1)
```
#### `map` example with more data
Let's map the values of 1 million integers ranging from 1 to 100 to 'even/odd' strings
```
n = 1000000 # 1 million
s = pd.Series(np.random.randint(1, 101, n))
s.head()
```
Create the mapping
```
d = {i: 'odd' if i % 2 else 'even' for i in range(1, 101)}
print(d)
s.map(d).head(10)
```
### Exercise 1
<span style="color:green; font-size:16px">Can you use the **`apply`** method to do the same thing? Time the difference between the **`apply`** and **`map`**.</span>
```
# your code here
```
### `map` and `apply` can both take functions
Confusingly, both **`map`** and **`apply`** can accept a function that gets implicitly passed each value in the Series. The result of each operation is exactly the same.
```
a = s.apply(lambda x: 'odd' if x % 2 else 'even')
b = s.map(lambda x: 'odd' if x % 2 else 'even')
a.equals(b)
```
This dual functionality of **`map`** confuses users. It can accept a dictionary but it can also accept a function.
### Suggestion: only use `map` for literal mapping
It makes more sense to me that the **`map`** method only be used for one purpose and this is to map each value in a Series from one value to another with a dictionary or a Series.
### Use `apply` only for functions
**`apply`** must take a function and has more options than **`map`** when taking a function so it should be used when you want to apply a function to each value in a Series. There is no difference in speed between the two.
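The suggested division of labor can be sketched as follows (the Series values are illustrative):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# `map` for literal value-to-value mapping via a dictionary...
mapped = s.map({1: 'odd', 2: 'even', 3: 'odd', 4: 'even'})

# ...and `apply` for arbitrary functions.
applied = s.apply(lambda x: 'odd' if x % 2 else 'even')

assert mapped.equals(applied)
```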
### Exercise 2
<span style="color:green; font-size:16px">Use the **`map`** method with a two-item dictionary to convert the Series of integers to 'even/odd' strings. You will need to perform an operation on the Series first. Is this faster or slower than the results in exercise 1?</span>
```
# run this code first
n = 1000000 # 1 million
s = pd.Series(np.random.randint(1, 101, n))
# your code here
```
### Exercise 3
<span style="color:green; font-size:16px">Write a for-loop to convert each value in the Series to 'even/odd' strings. Time the operation.</span>
```
# your code here
```
# Vectorized if-then-else with NumPy `where`
The NumPy **`where`** function provides us with a vectorized if-then-else that is very fast. Let's convert the Series again to 'even/odd' strings.
```
s = pd.Series(np.random.randint(1, 101, n))
np.where(s % 2, 'odd', 'even')
%timeit np.where(s % 2, 'odd', 'even')
```
### Exercise 4
<span style="color:green; font-size:16px">Convert the values from 1-33 to 'low', 34-67 to 'medium' and the rest 'high'.</span>
```
# your code here
```
### There is a DataFrame/Series `where` method
There is a DataFrame/Series **`where`** method, but it works differently. You must pass it a boolean DataFrame/Series and it will preserve all the values that are True. The other values will by default be converted to missing, but you can specify a replacement value as well.
```
s.where(s > 50).head(10)
s.where(s > 50, other=-1).head(10)
```
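A minimal sketch contrasting NumPy's `where` with the Series `where` method (values are illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([10, 60, 30, 80])

# np.where: vectorized if-then-else, returns a plain array.
arr = np.where(s > 50, 'big', 'small')
assert list(arr) == ['small', 'big', 'small', 'big']

# Series.where: keeps the values where the condition is True and
# replaces the rest (NaN by default, or whatever `other=` specifies).
kept = s.where(s > 50, other=-1)
assert kept.tolist() == [-1, 60, -1, 80]
```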
# Do we really need `apply`?
As we saw from this last example, we could eliminate the need for the **`apply`** method. Most examples of code that use **`apply`** do not actually need it.
### `apply` doesn't really do anything
By itself, the **`apply`** method doesn't really do anything.
* For Series, it iterates over every single value and passes that value to a function that you must pass to **`apply`**.
* For a DataFrame, it iterates over each column or row as a Series and calls your passed function on that Series
Let's see a simple example of **`apply`** used to multiply each value of a Series by 2:
```
s = pd.Series(np.random.randint(1, 101, n))
s.apply(lambda x: x * 2).head()
(s * 2).head()
%timeit s.apply(lambda x: x * 2)
%timeit s * 2
```
### Use vectorized solution whenever possible
As you can see, the solution with **`apply`** was more than 2 orders of magnitude slower than the vectorized solution. A for-loop can be faster than **`apply`**.
```
%timeit pd.Series([v * 2 for v in s])
```
I like to call **`apply`** the **method of last resort**. There is rarely a reason to use it over other methods. Pandas and NumPy both provide a tremendous amount of functionality that covers nearly everything you need to do.
Always use pandas and NumPy methods first before anything else.
### Use-cases for `apply` on a Series
When there is no vectorized implementation in pandas, numpy or other scientific library, then you can use **`apply`**.
A simple example (that's not too practical) is finding the underlying data type of each value in a Series.
```
s = pd.Series(['a', {'TX':'Texas'}, 99, (0, 5)])
s
s.apply(type)
```
A more practical example might be from a library that doesn't work directly with arrays, like finding the edit distance between two strings from the NLTK library.
```
from nltk.metrics import edit_distance
edit_distance('Kaitlyn', 'Kaitlin')
s = pd.Series(['Kaitlyn', 'Katelyn', 'Kaitlin', 'Katelynn', 'Katlyn',
'Kaitlynn', 'Katelin', 'Katlynn', 'Kaitlin', 'Caitlyn', 'Caitlynn'])
s
```
Using **`apply`** here is correct
```
s.apply(lambda x: edit_distance(x, 'Kaitlyn'))
```
### Using `apply` on a DataFrame
By default **`apply`** will call the passed function on each individual column on a DataFrame. The column will be passed to the function as a Series.
```
df = pd.DataFrame(np.random.rand(100, 5), columns=['a', 'b', 'c', 'd', 'e'])
df.head()
df.apply(lambda s: s.max())
```
We can change the direction of the operation by setting the **`axis`** parameter to **`1`** or **`columns`**
```
df.apply(lambda s: s.max(), axis='columns').head(10)
```
#### Never actually perform these operations when a DataFrame method exists
Let's fix these two methods and time their differences
```
df.max()
df.max(axis='columns').head(10)
%timeit df.apply(lambda s: s.max())
%timeit df.max()
%timeit df.apply(lambda s: s.max(), axis='columns')
%timeit df.max(axis='columns')
```
5x and 70x faster and much more readable code
### Infected by the documentation
Unfortunately, pandas official documentation is littered with examples that don't need **`apply`**. Can you fix the following 2 misuses of **`apply`** [found here](http://pandas.pydata.org/pandas-docs/stable/10min.html#apply).
### Exercise 1
<span style="color:green; font-size:16px">Make the following idiomatic</span>
```
df.apply(np.cumsum).head()
# your code here
```
### Exercise 2
<span style="color:green; font-size:16px">Make the following idiomatic</span>
```
df.apply(lambda x: x.max() - x.min())
# your code here
```
### `apply` with `axis=1` is the slowest operation you can do in pandas
If you call **`apply`** with **`axis=1`** or identically with **`axis='columns'`** on a DataFrame, pandas will iterate row by row to complete your operation. Since there are almost always more rows than columns, this will be extremely slow.
### Exercise 3
<span style="color:green; font-size:16px">Add a column named **`distance`** to the following DataFrame that computes the euclidean distance between points **`(x1, y1)`** and **`(x2, y2)`**. Calculate it once with **`apply`** and again idiomatically using vectorized operations. Time the difference between them.</span>
```
# run this first
df = pd.DataFrame(np.random.randint(0, 20, (100000, 4)),
columns=['x1', 'y1', 'x2', 'y2'])
df.head()
# your code here
```
### Use-cases for apply on a DataFrame
DataFrames and Series have nearly all of their methods in common. For methods that only exist for Series, you might need to use **`apply`**.
```
weather = pd.DataFrame({'Houston': ['rainy', 'sunny', 'sunny', 'cloudy', 'rainy', 'sunny'],
'New York':['sunny', 'sunny', 'snowy', 'snowy', 'rainy', 'cloudy'],
'Seattle':['sunny', 'cloudy', 'cloudy', 'cloudy', 'cloudy', 'rainy'],
'Las Vegas':['sunny', 'sunny', 'sunny', 'sunny', 'sunny', 'sunny']})
weather
```
Counting the frequencies of each column is normally done by the Series **`value_counts`** method. It does not exist for DataFrames, so you can use it here with **`apply`**.
```
weather.apply(pd.value_counts)
%matplotlib inline
weather.apply(pd.value_counts).plot(kind='bar')
```
### Using `apply` with the Series accessors `str`, `dt` and `cat`
Pandas Series, depending on their data type, can access additional Series-only methods through **`str`**, **`dt`** and **`cat`** for string, datetime and categorical type columns.
```
weather.Houston.str.capitalize()
```
Since this method exists only for Series, you can use **`apply`** here to capitalize each column.
```
weather.apply(lambda x: x.str.capitalize())
```
This is one case where you can use the **`applymap`** method by directly using the string method on each value.
```
weather.applymap(str.capitalize)
employee = pd.read_csv('../data/employee.csv')
employee.head()
```
Select just the titles and departments
```
emp_title_dept = employee[['DEPARTMENT', 'POSITION_TITLE']]
emp_title_dept.head()
```
Let's find all the departments and titles that contain the word 'police'.
```
has_police = emp_title_dept.apply(lambda x: x.str.upper().str.contains('POLICE'))
has_police.head()
```
Let's use these boolean values to only select rows that have both values as **`True`**.
```
emp_title_dept[has_police.all(axis='columns')].head(10)
```
### How fast are the `str` accessor methods?
Not any faster than looping...
```
%timeit employee['POSITION_TITLE'].str.upper()
%timeit employee['POSITION_TITLE'].apply(str.upper)
%timeit pd.Series([x.upper() for x in employee['POSITION_TITLE']])
%timeit employee['POSITION_TITLE'].max()
%timeit employee['BASE_SALARY'].max()
%timeit employee['POSITION_TITLE'].values.max()
%timeit employee['BASE_SALARY'].values.max()
a_list = employee['POSITION_TITLE'].tolist()
%timeit max(a_list)
```
### Exercise 4
<span style="color:green; font-size:16px">The following example is from the documentation. Produce the same result without using apply by creating a function that it accepts a DataFrame and returns a DataFrame</span>
```
df = pd.DataFrame(np.random.randint(0, 20, (10, 4)),
columns=['x1', 'y1', 'x2', 'y2'])
df.head()
def subtract_and_divide(x, sub, divide=1):
    return (x - sub) / divide
df.apply(subtract_and_divide, args=(5,), divide=3)
# your code here
```
### Exercise 5
<span style="color:green; font-size:16px">Make the following idiomatic:</span>
```
college = pd.read_csv('../data/college.csv',
usecols=lambda x: 'UGDS' in x or x == 'INSTNM',
index_col='INSTNM')
college = college.dropna()
college.shape
college.head()
def max_race_count(s):
    max_race_pct = s.iloc[1:].max()
    return (max_race_pct * s.loc['UGDS']).astype(int)
college.apply(max_race_count, axis=1).head()
# your code here
```
# Tips for debugging `apply`
It is more difficult to debug code that uses **`apply`** with a custom function. This is because all the code in your custom function gets executed at once. You aren't stepping through the code one line at a time and checking the output.
### Using the `display` IPython function and print statements to inspect custom function
Let's say you didn't know what **`apply`** with **`axis='columns'`** was implicitly passing to the custom function.
```
# what the hell is x?
def func(x):
    return 1
college.apply(func, axis=1).head()
```
It's obvious that you need to know what object **`x`** is in **`func`**. One thing we can do is print out its type. To stop the output, we can force an error by calling **`raise`**.
```
# what the hell is x?
def func(x):
    print(type(x))
    raise
    return 1
college.apply(func, axis=1).head()
```
Ok, great. We know that **`x`** is a Series. Why did it get printed twice? It turns out that pandas calls your method twice on the first row/column to determine if it can take a fast path or not. This is a small implementation detail that shouldn't affect you unless your function is making references to variables out of scope.
Let's go one step further and display **`x`** on the screen
```
from IPython.display import display
# what the hell is x?
def func(x):
    display(x)
    raise
    return 1
college.apply(func, axis=1).head()
```
### Exercise 1
<span style="color:green; font-size:16px">Use the **`display`** function after each line in a custom function that gets used with **`apply`** and **`axis='columns'`** to find the population of the second highest race per school. Make sure you raise an exception or else you will have to kill your kernel because of the massive output.</span>
```
# your code here
```
### Exercise 2 - Very difficult
<span style="color:green; font-size:16px">Can you do this without using **`apply`**?</span>
```
# your code here
```
### Exercise 3
<span style="color:green; font-size:16px">When **`apply`** is called on a Series, what is the data type that gets passed to the function?</span>
```
# your code here
```
# Summary
* **`map`** is a Series method. I suggest using it by passing a dictionary/Series and NOT a function
* Use **`apply`** when you want to apply a function to each value of a Series or each row/column of a DataFrame
* You rarely need **`apply`** - Use only pandas and numpy functions first
* Using **`apply`** on a DataFrame with **`axis='columns'`** is the slowest operation in pandas
* You can use **`apply`** on a DataFrame when you need to call a method that is available only to Series (like **`value_counts`**)
* Debug apply by printing and using the **`display`** IPython function inside your custom function
```
%load_ext autoreload
%autoreload 2
%aimport utils_1_1
import pandas as pd
import numpy as np
import altair as alt
from altair_saver import save
import datetime
import dateutil.parser
from os.path import join
from constants_1_1 import SITE_FILE_TYPES
from utils_1_1 import (
get_site_file_paths,
get_site_file_info,
get_site_ids,
get_visualization_subtitle,
get_country_color_map,
)
from theme import apply_theme
from web import for_website
alt.data_transformers.disable_max_rows();  # Allow using more than 5,000 rows
data_release='2021-04-27'
df = pd.read_csv(join("..", "data", "Phase2.1SurvivalRSummariesPublic", "ToShare", "table.beta.mice.std.toShare.csv"))
print(df.head())
# Rename columns
df = df.rename(columns={"variable": "c", "beta": "v"})
consistent_date = {
'2020-03': 'Mar - Apr',
'2020-05': 'May - Jun',
'2020-07': 'Jul - Aug',
'2020-09': 'Sep - Oct',
'2020-11': 'Since Nov'
}
colors = ['#E79F00', '#0072B2', '#D45E00', '#CB7AA7', '#029F73', '#57B4E9']
sites = ['META', 'APHP', 'FRBDX', 'ICSM', 'BIDMC', 'MGB', 'UCLA', 'UMICH', 'UPENN', 'VA1', 'VA2', 'VA3', 'VA4', 'VA5']
site_colors = ['black', '#D45E00', '#0072B2', '#CB7AA7', '#E79F00', '#029F73', '#DBD03C', '#57B4E9', '#57B4E9', '#57B4E9', '#57B4E9', '#57B4E9']
sites = ['META', 'APHP', 'FRBDX', 'ICSM', 'UKFR', 'NWU', 'BIDMC', 'MGB', 'UCLA', 'UMICH', 'UPENN', 'UPITT', 'VA1', 'VA2', 'VA3', 'VA4', 'VA5']
site_colors = ['black', '#0072B2', '#0072B2', '#0072B2', '#0072B2', '#CB7AA7', '#D45E00', '#D45E00', '#D45E00', '#D45E00', '#D45E00', '#D45E00', '#D45E00', '#D45E00', '#D45E00','#D45E00','#D45E00']
df.siteid = df.siteid.apply(lambda x: x.upper())
print(df.siteid.unique().tolist())
group_map = {
'age18to25': 'Age',
'age26to49': 'Age',
'age70to79': 'Age',
'age80plus': 'Age',
'sexfemale': 'Sex',
'raceBlack': 'Race',
'raceAsian': 'Race',
'raceHispanic.and.Other': 'Race',
'CRP': 'Lab',
'albumin': 'Lab',
'TB': 'Lab',
"LYM": 'Lab',
"neutrophil_count" : 'Lab',
"WBC" : 'Lab',
"creatinine": 'Lab',
"AST": 'Lab',
"AA": 'Lab',
"DD": 'Lab',
'mis_CRP': 'Lab Mis.',
'mis_albumin': 'Lab Mis.',
'mis_TB': 'Lab Mis.',
"mis_LYM": 'Lab Mis.',
"mis_neutrophil_count" : 'Lab Mis.',
"mis_WBC" : 'Lab Mis.',
"mis_creatinine": 'Lab Mis.',
"mis_AST": 'Lab Mis.',
"mis_AA": 'Lab Mis.',
'mis_DD': 'Lab Mis.',
'charlson_score': 'Charlson Score',
'mis_charlson_score': 'Charlson Score',
}
df['g'] = df.c.apply(lambda x: group_map[x])
consistent_c = {
'age18to25': '18 - 25',
'age26to49': '26 - 49',
'age70to79': '70 - 79',
'age80plus': '80+',
'sexfemale': 'Female',
'raceBlack': 'Black',
'raceAsian': 'Asian',
'raceHispanic.and.Other': 'Hispanic and Other',
'CRP': 'Log CRP(mg/dL)',
'albumin': 'Albumin (g/dL)',
'TB': 'Total Bilirubin (mg/dL)',
"LYM": 'Lymphocyte Count (10*3/uL)',
"neutrophil_count" : 'Neutrophil Count (10*3/uL)',
"WBC" : 'White Blood Cell (10*3/uL)',
"creatinine": 'Creatinine (mg/dL)',
"AST": 'Log AST (U/L)',
"AA": 'AST/ALT',
"DD": 'Log D-Dimer (ng/mL)',
'mis_CRP': 'CRP not tested',
'mis_albumin': 'Albumin not tested',
'mis_TB': 'Total bilirubin not tested',
"mis_LYM": 'Lymphocyte count not tested',
"mis_neutrophil_count" : 'Neutrophil count not tested',
"mis_WBC" : 'White blood cell not tested',
"mis_creatinine": 'Creatinine not tested',
"mis_AST": 'AST not tested',
"mis_AA": 'ALT/AST not available',
'mis_DD': 'D-dimer not tested',
'charlson_score': 'Charlson Comorbidity Index',
'mis_charlson_score': 'Charlson comorbidity index not available',
}
df.c = df.c.apply(lambda x: consistent_c[x])
unique_g = df.g.unique().tolist()
print(unique_g)
unique_c = df.c.unique().tolist()
print(unique_c)
df
```
# All Sites
```
point=alt.OverlayMarkDef(filled=False, fill='white', strokeWidth=2)
def plot_lab(df=None, metric='cov'):
    d = df.copy()
    plot = alt.Chart(
        d
    ).mark_bar(
        # point=True,
        size=10,
        # opacity=0.3
    ).encode(
        y=alt.Y("c:N", title=None, axis=alt.Axis(labelAngle=0, tickCount=10), scale=alt.Scale(padding=1), sort=unique_c),
        x=alt.X("v:Q", title=None, scale=alt.Scale(zero=True, domain=[-3, 3], padding=2, nice=False, clamp=True)),
        # color=alt.Color("siteid:N", scale=alt.Scale(domain=sites, range=site_colors)),
        color=alt.Color("g:N", scale=alt.Scale(domain=unique_g, range=colors), title='Category'),
    ).properties(
        width=150,
        height=250
    )
    plot = plot.facet(
        column=alt.Column("siteid:N", header=alt.Header(title=None), sort=sites)
    ).resolve_scale(color='shared')
    plot = plot.properties(
        title={
            "text": [
                f"Coefficient"
            ],
            "dx": 120,
            "subtitle": [
                'Lab values are standardized by SD',
                get_visualization_subtitle(data_release=data_release, with_num_sites=False)
            ],
            "subtitleColor": "gray",
        }
    )
    return plot
plot = plot_lab(df=df)
# plot = alt.vconcat(*(
# plot_lab(df=df, lab=lab) for lab in unique_sites
# ), spacing=30)
plot = apply_theme(
plot,
axis_y_title_font_size=16,
title_anchor='start',
legend_orient='bottom',
legend_title_orient='left',
axis_label_font_size=14,
header_label_font_size=16,
point_size=100
)
plot
```
## Final Meta
```
def plot_lab(df=None, metric='cov'):
    d = df.copy()
    d = d[d.siteid == 'META']
    print(unique_c)
    plot = alt.Chart(
        d
    ).mark_point(
        # point=True,
        size=120,
        filled=True,
        opacity=1
    ).encode(
        y=alt.Y("c:N", title=None, axis=alt.Axis(labelAngle=0, tickCount=10, grid=True), scale=alt.Scale(padding=1), sort=unique_c),
        x=alt.X("v:Q", title="Hazard Ratio", scale=alt.Scale(zero=True, domain=[0, 3.6], padding=0, nice=False, clamp=True)),
        # color=alt.Color("siteid:N", scale=alt.Scale(domain=sites, range=site_colors)),
        color=alt.Color("g:N", scale=alt.Scale(domain=unique_g, range=colors), title='Category', legend=None),
    ).properties(
        width=550,
        height=400
    )
    line = alt.Chart(pd.DataFrame({'x': [1]})).mark_rule().encode(x='x', strokeWidth=alt.value(1), strokeDash=alt.value([2, 2]))
    tick = plot.mark_errorbar(
        opacity=0.7  # , color='black',
        # color=alt.Color("g:N", scale=alt.Scale(domain=unique_g, range=colors), title='Category')
    ).encode(
        y=alt.Y("c:N", sort=unique_c),
        x=alt.X("ci_l:Q", title="Hazard Ratio"),
        x2=alt.X2("ci_u:Q"),
        stroke=alt.value('black'),
        strokeWidth=alt.value(1)
    )
    plot = (line + tick + plot)
    # plot = plot.facet(
    #     column=alt.Column("siteid:N", header=alt.Header(title=None), sort=sites)
    # ).resolve_scale(color='shared')
    # plot = plot.properties(
    #     title={
    #         "text": [
    #             f"Meta-Analysis Of Coefficient"
    #         ],
    #         "dx": 120,
    #         "subtitle": [
    #             'Lab values are standardized by SD'
    #         ],
    #         "subtitleColor": "gray",
    #     }
    # )
    return plot
plot = plot_lab(df=df)
# plot = alt.vconcat(*(
# plot_lab(df=df, lab=lab) for lab in unique_sites
# ), spacing=30)
plot = apply_theme(
plot,
axis_y_title_font_size=16,
title_anchor='start',
#legend_orient='bottom',
#legend_title_orient='top',
axis_label_font_size=14,
header_label_font_size=16,
point_size=100
)
plot.display()
save(plot,join("..", "result", "final-beta-std-mice-meta.png"), scalefactor=8.0)
```
## Final country
```
def plot_beta(df=None, metric='cov', country=None):
    d = df.copy()
    d = d[d.siteid == country]
    plot = alt.Chart(
        d
    ).mark_point(
        # point=True,
        size=120,
        filled=True,
        opacity=1
        # opacity=0.3
    ).encode(
        y=alt.Y("c:N", title=None, axis=alt.Axis(labelAngle=0, tickCount=10, grid=True), scale=alt.Scale(padding=1), sort=unique_c),
        x=alt.X("v:Q", title="Hazard Ratio", scale=alt.Scale(zero=True, domain=[0, 4.6], padding=0, nice=False, clamp=True)),
        # color=alt.Color("siteid:N", scale=alt.Scale(domain=sites, range=site_colors)),
        color=alt.Color("g:N", scale=alt.Scale(domain=unique_g, range=colors), title='Category', legend=None),
    ).properties(
        width=750,
        height=550
    )
    line = alt.Chart(pd.DataFrame({'x': [1]})).mark_rule().encode(x='x', strokeWidth=alt.value(1), strokeDash=alt.value([2, 2]))
    tick = plot.mark_errorbar(
        opacity=0.7  # , color='black'
    ).encode(
        y=alt.Y("c:N", sort=unique_c),
        x=alt.X("ci_l:Q", title="Hazard Ratio"),
        x2=alt.X2("ci_u:Q"),
        stroke=alt.value('black'),
        strokeWidth=alt.value(1)
    )
    plot = (line + tick + plot)
    # plot = plot.facet(
    #     column=alt.Column("siteid:N", header=alt.Header(title=None), sort=sites)
    # ).resolve_scale(color='shared')
    plot = plot.properties(
        title={
            "text": [
                country.replace("META-", "")
            ],
            "dx": 160,
            # "subtitle": [
            #     'Lab values are standardized by SD'
            # ],
            # "subtitleColor": "gray",
        }
    )
    return plot
countrylist1 = ["META-USA", "META-FRANCE"]
countrylist2 = ["META-GERMANY", "META-SPAIN"]
plot1 = alt.hconcat(*(
plot_beta(df=df, country=country) for country in countrylist1
), spacing=30).resolve_scale(color='independent')
plot2 = alt.hconcat(*(
plot_beta(df=df, country=country) for country in countrylist2
), spacing=30).resolve_scale(color='independent')
plot=alt.vconcat(plot1, plot2)
#plot=plot1
plot = apply_theme(
plot,
axis_y_title_font_size=16,
title_anchor='start',
legend_orient='bottom',
legend_title_orient='left',
axis_label_font_size=14,
header_label_font_size=16,
point_size=100
)
plot.display()
save(plot,join("..", "result", "final-beta-std-mice-country.png"), scalefactor=8.0)
```
# Session 17: Recommendation system on your own
This script should allow you to build an interactive website from your own
dataset. If you run into any issues, please let us know!
## Step 1: Select the corpus
In the block below, insert the name of your corpus. There should
be images in the directory "images". If there is metadata, it should
be in the directory "data" with the name of the corpus as the file name.
Also, if there is metadata, there must be a column called filename (with
the filename to the image) and a column called title.
```
cn = "test"
```
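If you supply your own metadata, the CSV needs the `filename` and `title` columns noted above. A hedged sketch of what such a table might look like (the file names and titles here are made up):

```python
import pandas as pd

# Minimal metadata table: one row per image, with the two required columns.
meta = pd.DataFrame({
    'filename': ['img001.jpg', 'img002.jpg'],
    'title': ['First painting', 'Second painting'],
})
assert {'filename', 'title'} <= set(meta.columns)
# It would be saved as data/<corpus>.csv, e.g.:
# meta.to_csv('../data/test.csv', index=False)
```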
## Step 2: Read in the Functions
You need to read in all of the modules and functions below.
```
%pylab inline
import numpy as np
import scipy as sp
import pandas as pd
import sklearn
from sklearn import linear_model
import urllib
import os
from os.path import join
from keras.applications.vgg19 import VGG19
from keras.preprocessing import image
from keras.applications.vgg19 import preprocess_input, decode_predictions
from keras.models import Model
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
def check_create_metadata(cn):
mdata = join("..", "data", cn + ".csv")
if not os.path.exists(mdata):
exts = [".jpg", ".JPG", ".JPEG", ".png"]
fnames = [x for x in os.listdir(join('..', 'images', cn)) if get_ext(x) in exts]
df = pd.DataFrame({'filename': fnames, 'title': fnames})
df.to_csv(mdata, index=False)
def create_embed(corpus_name):
ofile = join("..", "data", corpus_name + "_vgg19_fc2.npy")
if not os.path.exists(ofile):
vgg19_full = VGG19(weights='imagenet')
vgg_fc2 = Model(inputs=vgg19_full.input, outputs=vgg19_full.get_layer('fc2').output)
df = pd.read_csv(join("..", "data", corpus_name + ".csv"))
output = np.zeros((len(df), 224, 224, 3))
for i in range(len(df)):
img_path = join("..", "images", corpus_name, df.filename[i])
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
output[i, :, :, :] = x
if (i % 100) == 0:
print("Loaded image {0:03d}".format(i))
output = preprocess_input(output)
img_embed = vgg_fc2.predict(output, verbose=True)
np.save(ofile, img_embed)
def rm_ext(s):
return os.path.splitext(s)[0]
def get_ext(s):
return os.path.splitext(s)[-1]
def clean_html():
if not os.path.exists(join("..", "html")):
os.makedirs(join("..", "html"))
if not os.path.exists(join("..", "html", "pages")):
os.makedirs(join("..", "html", "pages"))
for p in [x for x in os.listdir(join('..', 'html', 'pages')) if get_ext(x) in [".html", "html"]]:
os.remove(join('..', 'html', 'pages', p))
def load_data(cn):
X = np.load(join("..", "data", cn + "_vgg19_fc2.npy"))
return X
def write_header(f, cn, index=False):
loc = ""
if not index:
loc = "../"
f.write("<html>\n")
f.write(' <link rel="icon" href="{0:s}img/favicon.ico">\n'.format(loc))
f.write(' <title>Distant Viewing Tutorial</title>\n\n')
f.write(' <link rel="stylesheet" type="text/css" href="{0:s}css/bootstrap.min.css">'.format(loc))
f.write(' <link href="https://fonts.googleapis.com/css?family=Rubik+27px" rel="stylesheet">')
f.write(' <link rel="stylesheet" type="text/css" href="{0:s}css/dv.css">\n\n'.format(loc))
f.write("<body>\n")
f.write(' <div class="d-flex flex-column flex-md-row align-items-center p-3 px-md-4')
f.write('mb-3 bg-white border-bottom box-shadow">\n')
f.write(' <h4 class="my-0 mr-md-auto font-weight-normal">Distant Viewing Tutorial Explorer')
f.write('— {0:s}</h4>\n'.format(cn.capitalize()))
f.write(' <a class="btn btn-outline-primary" href="{0:s}index.html">Back to Index</a>\n'.format(loc))
f.write(' </div>\n')
f.write('\n')
def corpus_to_html(corpus):
pd.set_option('display.max_colwidth', -1)
tc = corpus.copy()
for index in range(tc.shape[0]):
fname = rm_ext(os.path.split(tc['filename'][index])[1])
title = rm_ext(tc['filename'][index])
s = "<a href='pages/{0:s}.html'>{1:s}</a>".format(fname, title)
tc.iloc[index, tc.columns.get_loc('title')] = s
tc = tc.drop(['filename'], axis=1)
return tc.to_html(index=False, escape=False, justify='center')
def create_index(cn, corpus):
f = open(join('..', 'html', 'index.html'), 'w')
write_header(f, cn=cn, index=True)
f.write(' <div style="padding:20px; max-width:1000px">\n')
f.write(corpus_to_html(corpus))
f.write(' </div>\n')
f.write("</body>\n")
f.close()
def get_infobox(corpus, item):
infobox = []
for k, v in corpus.iloc[item].to_dict().items():
if k != "filename":
infobox = infobox + ["<p><b>" + str(k).capitalize() + ":</b> " + str(v) + "</p>"]
return infobox
def save_metadata(f, cn, corpus, X, item):
    infobox = get_infobox(corpus, item)
    f.write("<div style='width: 1000px;'>\n")
    f.write("\n".join(infobox))
    if item > 0:
        link = rm_ext(os.path.split(corpus['filename'][item - 1])[-1])
        f.write("<p align='center'><a href='{0:s}.html'>&lt;&lt; previous image</a> \n".format(link))
    if item + 1 < X.shape[0]:
        link = rm_ext(os.path.split(corpus['filename'][item + 1])[-1])
        f.write(" <a href='{0:s}.html'>next image &gt;&gt;</a></p>\n".format(link))
    f.write("</div>\n")
def save_similar_img(f, cn, corpus, X, item):
    dists = np.sum(np.abs(X - X[item, :]), 1)
    idx = np.argsort(dists.flatten())[1:13]
    f.write("<div style='clear:both; width: 1000px; padding-top: 30px'>\n")
    f.write("<h4>Similar Images:</h4>\n")
    f.write("<div class='similar'>\n")
    for img_path in corpus['filename'][idx].tolist():
        hpath = rm_ext(os.path.split(img_path)[1])
        f.write('<a href="{0:s}.html"><img src="../../images/{1:s}/{2:s}" style="max-width: 150px; padding:5px"></a>\n'.format(hpath, cn, img_path))
    f.write("</div>\n")
    f.write("</div>\n")
def create_image_pages(cn, corpus, X):
    for item in range(X.shape[0]):
        img_path = corpus['filename'][item]
        url = os.path.split(img_path)[1]
        f = open(join('..', 'html', 'pages', rm_ext(url) + ".html"), 'w')
        write_header(f, cn, index=False)
        f.write("<div style='padding:25px'>\n")
        # Main image
        f.write("<div style='float: left; width: 610px;'>\n")
        f.write('<img src="../../images/{0:s}/{1:s}" style="max-width: 600px; max-height: 500px;">\n'.format(cn, img_path))
        f.write("</div>\n\n")
        # Main information box
        save_metadata(f, cn, corpus, X, item)
        # Similar
        save_similar_img(f, cn, corpus, X, item)
        f.write("</body>\n")
        f.close()
```
## Step 3: Create the embeddings
The next step is to create the embeddings. If there is no metadata, this code
will also create it.
```
check_create_metadata(cn)
create_embed(cn)
```
## Step 4: Create the website
Finally, create the website with the code below.
```
clean_html()
corpus = pd.read_csv(join("..", "data", cn + ".csv"))
X = load_data(cn)
create_index(cn, corpus)
create_image_pages(cn, corpus, X)
```
You should find a folder called `html`. Open that folder and double click on the
file `index.html`, opening it in a web browser (Chrome or Firefox preferred; Safari
should work too). Do not open it in Jupyter.
You will see a list of all of the available images from the corpus you selected.
Click on one and you'll get to an item page for that image. From there you can
see the image itself, available metadata, select the previous or next image in the
corpus, and view similar images from the VGG19 similarity measurement.
# K-Nearest Neighbors Algorithm
In this Jupyter Notebook we will focus on the $k$-nearest neighbours (KNN) algorithm. KNN is a data classification algorithm that attempts to determine what group a data point is in by looking at the data points around it.
Looking at one point on a grid and trying to determine whether it is in group A or B, the algorithm examines the labels of the points near it. The neighbourhood size is determined arbitrarily, but the point is to take a sample of the data. If the majority of those points are in group A, then it is likely that the data point in question is in A rather than B, and vice versa.
<br>
<img src="knn/example 1.png" height="30%" width="30%">
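The majority vote described above can be sketched in a few lines of Python; the neighbour labels here are made-up values for illustration only:

```
from collections import Counter

# Hypothetical labels of the 5 points nearest to the query point
neighbour_labels = ['A', 'B', 'A', 'A', 'B']

# The predicted group is the most common label among the neighbours
predicted = Counter(neighbour_labels).most_common(1)[0][0]
print(predicted)  # A
```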
# Imports
```
import numpy as np
from tqdm import tqdm_notebook
```
# How does it work?
We have a labeled data set $X_{train}$, and a new set $X$ that we want to classify based on previous classifications.
## Steps
### 1. Calculate the distance to all neighbours
### 2. Sort the neighbours (closest first)
### 3. Count probabilities of each class for the k nearest neighbours
### 4. The class with the highest probability is your prediction
# 1. Calculate distance to all neighbours
Depending on the problem, you should use a different distance metric.
<br>
For example we can use the Euclidean distance. The Euclidean distance is the "ordinary" straight-line distance between two points in $D$-dimensional space.
#### Definition
$d(p, q) = d(q, p) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \dots + (q_D - p_D)^2} = \sqrt{\sum_{d=1}^{D} (p_d - q_d)^2}$
#### Example
Distance in $R^2$
<img src="knn/euklidean_example.png" height="30%" width="30%">
$p = (4,6)$
<br>
$q = (1,2)$
<br>
$d(p, q) = \sqrt{(1-4)^2 + (2-6)^2} =\sqrt{9 + 16} = \sqrt{25} = 5 $
## Code
```
def get_euclidean_distance(A_matrix, B_matrix):
    """
    Function computes euclidean distance between matrix A and B

    Args:
        A_matrix (numpy.ndarray): Matrix size N1:D
        B_matrix (numpy.ndarray): Matrix size N2:D

    Returns:
        numpy.ndarray: Matrix size N1:N2
    """
    A_square = np.reshape(np.sum(A_matrix * A_matrix, axis=1), (A_matrix.shape[0], 1))
    B_square = np.reshape(np.sum(B_matrix * B_matrix, axis=1), (1, B_matrix.shape[0]))
    AB = A_matrix @ B_matrix.T
    C = -2 * AB + B_square + A_square
    return np.sqrt(C)
```
## Example Usage
```
X = np.array([[1,2,3] , [-4,5,-6]])
X_train = np.array([[0,0,0], [1,2,3], [4,5,6], [-4, 4, -6]])
print("X: {} Examples in {} Dimensional space".format(*X.shape))
print("X_train: {} Examples in {} Dimensional space".format(*X_train.shape))
print()
print("X:")
print(X)
print()
print("X_train")
print(X_train)
distance_matrix = get_euclidean_distance(X, X_train)
print("Distance Matrix shape: {}".format(distance_matrix.shape))
print("Distance between first example from X and first from X_train: {}".format(distance_matrix[0,0]))
print("Distance between first example from X and second from X_train: {}".format(distance_matrix[0,1]))
```
# 2. Sort neighbours
In order to find the best fitting class for each of our observations, we need to find the classes of its neighbours and sort them by distance, closest first.
## Code
```
def get_sorted_train_labels(distance_matrix, y):
    """
    Function sorts the y labels according to the distance matrix

    Args:
        distance_matrix (numpy.ndarray): Distance matrix between points from X and X_train, size N1:N2
        y (numpy.ndarray): vector of classes of the X_train points, size N2
    Returns:
        numpy.ndarray: labels matrix sorted according to distances to nearest neighbours, size N1:N2
    """
    order = distance_matrix.argsort(kind='mergesort')
    return np.squeeze(y[order])
```
## Example Usage
```
y_train = np.array([[1, 1, 2, 3]]).T
print("Labels array {} Examples in {} Dimensional Space".format(*y_train.shape))
print("Distance matrix shape {}".format(distance_matrix.shape))
sorted_train_labels = get_sorted_train_labels(distance_matrix, y_train)
print("Sorted train labels {} shape".format(sorted_train_labels.shape))
print("Closest 3 classes for first element from set X: {}".format(sorted_train_labels[0, :3]))
```
# 3. Count probabilities of each class for the k nearest neighbours
In order to find the best class for an observation $x$, we need to calculate the probability of it belonging to each class. In our case this is quite easy: we just need to count how many of the k nearest neighbours of observation $x$ belong to each class and then divide that count by k.
<br><br>
$p(y = class \mid x) = \frac{\sum_{i=1}^{k}(1 \text{ if } N_i = class \text{ else } 0)}{k}$, where $N_i$ is the $i$-th nearest neighbour.
## Code
```
def get_p_y_x_using_knn(y, k):
    """
    The function determines the probability distribution p(y|x)
    for each of the labels for objects from the set X,
    using the KNN classification learned on X_train

    Args:
        y (numpy.ndarray): Sorted matrix of nearest neighbours labels, size N1:N2
        k (int): number of nearest neighbours for the KNN algorithm
    Returns:
        numpy.ndarray: Matrix of probabilities for N1 points (from set X) of belonging to each class,
        size N1:C (where C is number of classes)
    """
    first_k_neighbors = y[:, :k]
    N1, N2 = y.shape
    classes = np.unique(y)
    number_of_classes = classes.shape[0]
    probabilities_matrix = np.zeros(shape=(N1, number_of_classes))
    for i, row in enumerate(first_k_neighbors):
        for j, value in enumerate(classes):
            probabilities_matrix[i][j] = list(row).count(value) / k
    return probabilities_matrix
```
## Example usage
```
print("Sorted train labels:")
print(sorted_train_labels)
probabilities_matrix = get_p_y_x_using_knn(y=sorted_train_labels, k=4)
print("Probability that the first element belongs to the 1st class: {:.2f}".format(probabilities_matrix[0,0]))
print("Probability that the first element belongs to the 3rd class: {:.2f}".format(probabilities_matrix[0,2]))
```
# 4. The class with the highest probability is your prediction
At the end we combine all the previous steps to get a prediction.
## Code
```
def predict(X, X_train, y_train, k, distance_function):
    """
    Function returns predictions for the new set X based on labels of points from X_train

    Args:
        X (numpy.ndarray): set of observations (points) that we want to label
        X_train (numpy.ndarray): set of labelled observations (points)
        y_train (numpy.ndarray): labels for X_train
        k (int): number of nearest neighbours for the KNN algorithm
        distance_function (callable): function computing the distance matrix between two sets of points
    Returns:
        (numpy.ndarray): label predictions for points from set X
    """
    distance_matrix = distance_function(X, X_train)
    sorted_labels = get_sorted_train_labels(distance_matrix=distance_matrix, y=y_train)
    p_y_x = get_p_y_x_using_knn(y=sorted_labels, k=k)
    number_of_classes = p_y_x.shape[1]
    # argmax on the reversed rows breaks ties in favour of the higher class index
    reversed_rows = np.fliplr(p_y_x)
    prediction = number_of_classes - (np.argmax(reversed_rows, axis=1) + 1)
    return prediction
```
## Example usage
```
prediction = predict(X, X_train, y_train, 3, get_euclidean_distance)
print("Predicted probabilities of classes for the first observation", probabilities_matrix[0])
print("Predicted class for the first observation", prediction[0])
print()
print("Predicted probabilities of classes for the second observation", probabilities_matrix[1])
print("Predicted class for the second observation", prediction[1])
```
# Accuracy
To see how well our KNN model works, we should compute its accuracy.
## Code
```
def count_accuracy(prediction, y_true):
    """
    Args:
        prediction (numpy.ndarray): predicted labels
        y_true (numpy.ndarray): true labels
    Returns:
        float: Predictions accuracy
    """
    N1 = prediction.shape[0]
    accuracy = np.sum(prediction == y_true) / N1
    return accuracy
```
## Example usage
```
y_true = np.array([[0, 2]])
prediction = predict(X, X_train, y_train, 3, get_euclidean_distance)
print("True classes: {}, accuracy {}%".format(y_true, count_accuracy(prediction, y_true) * 100))
```
# Find best k
The best k parameter is the one for which we obtain the highest accuracy.
## Code
```
def select_knn_model(X_validation, y_validation, X_train, y_train, k_values, distance_function):
    """
    Function returns the k parameter that best fits the validation points

    Args:
        X_validation (numpy.ndarray): set of validation data, size N1:D
        y_validation (numpy.ndarray): set of labels for validation data, size N1:1
        X_train (numpy.ndarray): set of training data, size N2:D
        y_train (numpy.ndarray): set of labels for training data, size N2:1
        k_values (list): list of int values of the k parameter that should be checked
        distance_function (callable): function computing the distance matrix between two sets of points
    Returns:
        int: k parameter that best fits the validation set
        list: accuracy for each checked k
    """
    accuracies = []
    for k in tqdm_notebook(k_values):
        prediction = predict(X_validation, X_train, y_train, k, distance_function)
        accuracy = count_accuracy(prediction, y_validation)
        accuracies.append(accuracy)
    best_k = k_values[accuracies.index(max(accuracies))]
    return best_k, accuracies
```
# Real World Example - Iris Dataset
<img src="knn/iris_example1.jpeg" height="60%" width="60%">
This is perhaps the best known database to be found in the pattern recognition literature. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
Each example contains 4 attributes
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
Predicted attribute: class of iris plant.
<img src="knn/iris_example2.png" height="70%" width="70%">
```
from sklearn import datasets
import matplotlib.pyplot as plt
iris = datasets.load_iris()
iris_X = iris.data
iris_y = iris.target
print("Iris: {} examples in {} dimensional space".format(*iris_X.shape))
print("First example in dataset:\n Sepal length: {}cm \n Sepal width: {}cm \n Petal length: {}cm \n Petal width: {}cm".format(*iris_X[0]))
print("Available classes", np.unique(iris_y))
```
## Prepare Data
In our data set we have 150 examples (50 examples of each class), and we have to divide it into 3 datasets.
1. Training data set, 90 examples. It will be used to find the k nearest neighbours.
2. Validation data set, 30 examples. It will be used to find the best k parameter, the one for which accuracy is highest.
3. Test data set, 30 examples. It will be used to check how well our model performs.
The data has to be shuffled (put in random order), because originally it is stored as 50 examples of class 0, followed by 50 of class 1 and 50 of class 2.
```
from sklearn.utils import shuffle
iris_X, iris_y = shuffle(iris_X, iris_y, random_state=134)
test_size = 30
validation_size = 30
training_size = 90
X_test = iris_X[:test_size]
X_validation = iris_X[test_size: (test_size+validation_size)]
X_train = iris_X[(test_size+validation_size):]
y_test = iris_y[:test_size]
y_validation = iris_y[test_size: (test_size+validation_size)]
y_train = iris_y[(test_size+validation_size):]
```
## Find best k parameter
```
k_values = [i for i in range(3,50)]
best_k, accuracies = select_knn_model(X_validation, y_validation, X_train, y_train, k_values, distance_function=get_euclidean_distance)
plt.plot(k_values, accuracies)
plt.xlabel('K parameter')
plt.ylabel('Accuracy')
plt.title('Accuracy for k nearest neighbors')
plt.grid()
plt.show()
```
## Count accuracy on the test set
```
prediction = predict(X_test, X_train, y_train, best_k, get_euclidean_distance)
accuracy = count_accuracy(prediction, y_test)
print("Accuracy for best k={}: {:.2f}%".format(best_k, accuracy*100))
```
# Real World Example - Mnist Dataset
MNIST is a popular database of handwritten digit images created for people who are new to machine learning. There are many courses on the internet that include a classification problem using the MNIST dataset.
This dataset contains 55000 images and labels. Each image is 28x28 pixels, but for the purpose of the classification task the images are flattened into 784x1 arrays $(28 \cdot 28 = 784)$. Summing up, our training set is a matrix of size [amount of images, 784]. We will split it into 49,000 training examples and 1,000 validation examples to choose the best k.
It also contains 5000 test images and labels, but for testing we will use only 1000 (due to time limitations; using all 5000 would take five times as long).
<h3>Mnist Data Example</h3>
<img src="knn/mnist_example.jpg" height="70%" width="70%">
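The flattening mentioned above is just a reshape. A quick sketch with a dummy image (not real MNIST data):

```
import numpy as np

img = np.zeros((28, 28))   # one dummy 28x28 grayscale image
flat = img.reshape(-1)     # flattened to a 784-element vector
print(flat.shape)  # (784,)
```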
Now we are going to download this dataset and split it into test and train sets.
```
import utils
import cv2
training_size = 49_000
validation_size = 1000
test_size = 1000
train_data, test = utils.get_mnist_dataset()
train_images, train_labels = train_data
test_images, test_labels = test
validation_images = train_images[training_size:training_size + validation_size]
train_images = train_images[:training_size]
validation_labels = train_labels[training_size:training_size + validation_size]
train_labels = train_labels[:training_size]
test_images = test_images[:test_size]
test_labels = test_labels[:test_size]
print("Training images matrix size: {}".format(train_images.shape))
print("Training labels matrix size: {}".format(train_labels.shape))
print("Validation images matrix size: {}".format(validation_images.shape))
print("Validation labels matrix size: {}".format(validation_labels.shape))
print("Testing images matrix size: {}".format(test_images.shape))
print("Testing labels matrix size: {}".format(test_labels.shape))
print("Possible labels {}".format(np.unique(test_labels)))
```
## Visualisation
Visualisation isn't necessary for the problem, but it helps us understand what we are doing.
```
from matplotlib.gridspec import GridSpec
def show_first_8(images):
    ax = []
    fig = plt.figure(figsize=(10, 10))
    gs = GridSpec(2, 4, wspace=0.0, hspace=-0.5)
    for i in range(2):
        for j in range(4):
            ax.append(fig.add_subplot(gs[i, j]))
    for i, axis in enumerate(ax):
        axis.imshow(images[i])
    plt.show()

first_8_images = train_images[:8]
resized = np.reshape(first_8_images, (-1, 28, 28))
print('First 8 images of train set:')
show_first_8(resized)
```
## Find best k parameter
```
k_values = [i for i in range(3, 50, 5)]
best_k, accuracies = select_knn_model(validation_images, validation_labels, train_images, train_labels, k_values,
distance_function=get_euclidean_distance)
plt.plot(k_values, accuracies)
plt.xlabel('K parameter')
plt.ylabel('Accuracy')
plt.title('Accuracy for k nearest neighbors')
plt.grid()
plt.show()
prediction = np.squeeze(predict(test_images, train_images, train_labels, best_k, get_euclidean_distance))
accuracy = count_accuracy(prediction, test_labels)
print("Accuracy on test set for best k={}: {:.2f}%".format(best_k, accuracy * 100))
```
# Sources
https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm - first visualisation image
https://en.wikipedia.org/wiki/Euclidean_distance - euclidean distance visualisation
https://rajritvikblog.wordpress.com/2017/06/29/iris-dataset-analysis-python/ - first iris image
https://rpubs.com/wjholst/322258 - second iris image
https://www.kaggle.com/pablotab/mnistpklgz - mnist dataset
# DAT210x - Programming with Python for DS
## Module2 - Lab5
Import and alias Pandas:
```
import pandas as pd
```
As per usual, load up the specified dataset, setting appropriate header labels.
```
# Note: only 8 elements in provided header names, so need to slice out the first index element from original dataset
col_names = ['education', 'age', 'capital-gain', 'race', 'capital-loss', 'hours-per-week', 'sex', 'classification']
# Import from csv. No headers.
df = pd.read_csv('./Datasets/census.data', sep=',', header=None)
# Take the actual data slice. The pandas df will handle the index, so slice it out of the imported data frame
df = df.iloc[:, 1:]
# Set proper column names
df.columns = col_names
# Double check everything worked by looking at the dataset
df.head(10)
df.dtypes
```
Excellent.
Now, use basic pandas commands to look through the dataset. Get a feel for it before proceeding!
Do the data-types of each column reflect the values you see when you look through the data using a text editor / spread sheet program? If you see `object` where you expect to see `int32` or `float64`, that is a good indicator that there might be a string or missing value or erroneous value in the column.
```
# Make capital-gain numeric. Accept NaN coercions
df.loc[:, 'capital-gain'] = pd.to_numeric(df.loc[:, 'capital-gain'], errors='coerce')
# Same with capital-loss
df.loc[:, 'capital-loss'] = pd.to_numeric(df.loc[:, 'capital-loss'], errors='coerce')
df.dtypes
```
Try using `your_data_frame['your_column'].unique()` or, equally, `your_data_frame.your_column.unique()` to see the unique values of each column and identify the rogue values.
If you find any values that should be properly encoded as NaNs, you can convert them using the `na_values` parameter when loading the dataframe, or alternatively use one of the other methods discussed in the reading.
```
# Replace value of 99999 with None/NaN
selector = df.loc[:, 'capital-gain'] == 99999
df.loc[selector, 'capital-gain'] = None
df['capital-gain'].unique()
# Example of unique() based queries. Found no other anomalies
df['classification'].unique()
```
Look through your data and identify any potential categorical features. Ensure you properly encode any ordinal and nominal types using the methods discussed in the chapter.
Be careful! Some features can be represented as either categorical or continuous (numerical). If you ever get confused, think to yourself what makes more sense generally---to represent such features with a continuous numeric type... or a series of categories?
```
# UNCOMMENT TO RUN THROUGH AGAIN
# ordered_education = ['Preschool', '1st-4th', '5th-6th', '7th-8th',
# '9th', '10th', '11th', '12th', 'HS-grad',
# 'Some-college', 'Bachelors', 'Masters', 'Doctorate']
# df.education = df.education.astype("category",
# ordered=True,
# categories=ordered_education).cat.codes
#
# ordered_classification = ['<=50K', '>50K']
# df.classification = df.classification.astype("category",
# ordered=True,
# categories=ordered_classification).cat.codes
df = pd.get_dummies(df,columns=['race'])
df
df = pd.get_dummies(df,columns=['sex'])
df
```
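Note that the commented-out `astype("category", ordered=..., categories=...)` call above uses an API that newer pandas versions have removed. A sketch of the equivalent ordinal encoding with `pd.CategoricalDtype`, shown here on a toy column rather than the census data:

```
import pandas as pd

toy = pd.DataFrame({'education': ['HS-grad', 'Preschool', 'Masters', 'HS-grad']})

# An ordered categorical dtype; .cat.codes maps the lowest level to 0
edu_dtype = pd.CategoricalDtype(
    categories=['Preschool', 'HS-grad', 'Masters'], ordered=True)
toy['education'] = toy['education'].astype(edu_dtype).cat.codes
print(toy['education'].tolist())  # [1, 0, 2, 1]
```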
Lastly, print out your dataframe!
```
print(df)
```
```
%load_ext autoreload
%autoreload 2
import pyrds
import pandas as pd
import requests
import datetime
from google.cloud import bigquery
client = bigquery.Client()
lookback = 7
today = datetime.datetime.today()
start_date = (today - datetime.timedelta(days=lookback)).strftime('%Y-%m-%d')
end_date = today.strftime('%Y-%m-%d')
```
Accounts created daily
```
url = f'https://data.ripple.com/v2/stats/?start={start_date}&end={end_date}&interval=day&family=metric&metrics=accounts_created'
res = requests.get(url)
xrp_accts = pd.DataFrame(res.json()['stats'])
xrp_accts
```
Network stats: number of nodes and validators
```
date_list = [x.strftime('%Y-%m-%dT%H:%M:%SZ') for x in list(pd.date_range((today - datetime.timedelta(days=lookback)).strftime('%Y%m%d'),today.strftime('%Y%m%d'),freq='1D'))]
node_list = []
for d in date_list:
    url = f'https://data.ripple.com/v2/network/topology?verbose=true&date={d}'
    res = requests.get(url)
    try:
        node_list.append(res.json()['node_count'])
    except Exception:
        node_list.append(None)
        print(f'Error with {d}')
nodes_df = pd.DataFrame({'date': date_list, 'nodes': node_list})
nodes_df
res = requests.get('https://data.ripple.com/v2/network/validators')
res.json()['count']
```
Ledger data: transaction count and value
Requires a GBQ account. You can provide service account JSON credentials as an argument. If you want to run as your google user from your PC, you should first install the [Google Cloud SDK](https://cloud.google.com/sdk/), then run:
`gcloud auth application-default login`
```
def gbq_query(query, query_params=None):
    """
    Run a query against Google BigQuery, returning a pandas dataframe of the result.

    Parameters
    ----------
    query: str
        The query string
    query_params: list, optional
        The query parameters to pass into the query string
    """
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig()
    job_config.query_parameters = query_params
    return client.query(query, job_config=job_config).to_dataframe()
query = """
select
    date(l.CloseTime) as `date`
    , t.TransactionType
    , count(1) as txn_count
    , sum(t.AmountXRP) / 1e6 as txn_value
from `xrpledgerdata.fullhistory.transactions` t
join `xrpledgerdata.fullhistory.ledgers` l
    on t.LedgerIndex = l.LedgerIndex
where t.TransactionResult = "tesSUCCESS"
    and date(l.CloseTime) >= CAST(@start_date AS DATE)
group by 1,2
order by 1 desc, 2
"""
query_params = [
    bigquery.ScalarQueryParameter("start_date", "STRING", start_date)
]
xrp = gbq_query(query,query_params)
xrp
```
# Quantization of Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Spectral Shaping of the Quantization Noise
The quantized signal $x_Q[k]$ can be expressed by the continuous amplitude signal $x[k]$ and the quantization error $e[k]$ as
\begin{equation}
x_Q[k] = \mathcal{Q} \{ x[k] \} = x[k] + e[k]
\end{equation}
According to the [introduced model](linear_uniform_quantization_error.ipynb#Model-for-the-Quantization-Error), the quantization noise can be modeled as uniformly distributed white noise. Hence, the noise is distributed over the entire frequency range. The basic concept of [noise shaping](https://en.wikipedia.org/wiki/Noise_shaping) is a feedback of the quantization error to the input of the quantizer. This way the spectral characteristics of the quantization noise can be modified, i.e. spectrally shaped. Introducing a generic filter $h[k]$ into the feedback loop yields the following structure

The quantized signal can be deduced from the block diagram above as
\begin{equation}
x_Q[k] = \mathcal{Q} \{ x[k] - e[k] * h[k] \} = x[k] + e[k] - e[k] * h[k]
\end{equation}
where the additive noise model from above has been introduced and it has been assumed that the impulse response $h[k]$ is normalized such that the magnitude of $e[k] * h[k]$ is below the quantization step $Q$. The overall quantization error is then
\begin{equation}
e_H[k] = x_Q[k] - x[k] = e[k] * (\delta[k] - h[k])
\end{equation}
The power spectral density (PSD) of the quantization error with noise shaping is calculated to
\begin{equation}
\Phi_{e_H e_H}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot \left| 1 - H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \right|^2
\end{equation}
Hence the PSD $\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the quantizer without noise shaping is weighted by $| 1 - H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2$. Noise shaping allows a spectral modification of the quantization error. The desired shaping depends on the application scenario. For some applications, high-frequency noise is less disturbing than low-frequency noise.
### Example - First-Order Noise Shaping
If the feedback of the error signal is delayed by one sample we get with $h[k] = \delta[k-1]$
\begin{equation}
\Phi_{e_H e_H}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \cdot \left| 1 - \mathrm{e}^{\,-\mathrm{j}\,\Omega} \right|^2
\end{equation}
For linear uniform quantization $\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sigma_e^2$ is constant. Hence, the spectral shaping constitutes a high-pass characteristic of first order. The following simulation evaluates the noise shaping quantizer of first order.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
%matplotlib inline
w = 8 # wordlength of the quantized signal
xmin = -1 # minimum of input signal
N = 32768 # number of samples
def uniform_midtread_quantizer_w_ns(x, Q):
    # limiter
    x = np.copy(x)
    idx = np.where(x <= -1)
    x[idx] = -1
    idx = np.where(x > 1 - Q)
    x[idx] = 1 - Q
    # linear uniform quantization with noise shaping
    xQ = Q * np.floor(x/Q + 1/2)
    e = xQ - x
    xQ = xQ - np.concatenate(([0], e[0:-1]))
    return xQ[1:]
# quantization step
Q = 1/(2**(w-1))
# compute input signal
np.random.seed(5)
x = np.random.uniform(size=N, low=xmin, high=(-xmin-Q))
# quantize signal
xQ = uniform_midtread_quantizer_w_ns(x, Q)
e = xQ - x[1:]
# estimate PSD of error signal
nf, Pee = sig.welch(e, nperseg=64)
# estimate SNR
SNR = 10*np.log10((np.var(x)/np.var(e)))
print('SNR = {:2.1f} dB'.format(SNR))
plt.figure(figsize=(10, 5))
Om = nf*2*np.pi
plt.plot(Om, Pee*6/Q**2, label='estimated PSD')
plt.plot(Om, np.abs(1 - np.exp(-1j*Om))**2, label='theoretic PSD')
plt.plot(Om, np.ones(Om.shape), label='PSD w/o noise shaping')
plt.title('PSD of quantization error')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\hat{\Phi}_{e_H e_H}(e^{j \Omega}) / \sigma_e^2$')
plt.axis([0, np.pi, 0, 4.5])
plt.legend(loc='upper left')
plt.grid()
```
**Exercise**
* The overall average SNR is lower than for the quantizer without noise shaping. Why?
Solution: The average power per frequency is lower than without noise shaping for frequencies below $\Omega = \frac{\pi}{3}$, where $|1 - \mathrm{e}^{-\mathrm{j}\,\Omega}|^2 = 1$. However, this comes at the cost of a larger average power per frequency for frequencies above $\Omega = \frac{\pi}{3}$. The average power of the quantization noise is given as the integral over the PSD of the quantization noise. It is larger for noise shaping and the resulting SNR is consequently lower. Noise shaping is nevertheless beneficial in applications where a lower quantization error in a limited frequency region is desired.
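A quick numeric check of this argument (assuming the white-noise model with $\sigma_e^2 = 1$): the shaped PSD weight $|1 - \mathrm{e}^{-\mathrm{j}\,\Omega}|^2 = 2 - 2\cos(\Omega)$ crosses the unshaped level of 1 at $\Omega = \pi/3$, and its average over one period is 2, i.e. twice the unshaped noise power.

```
import numpy as np

Om = np.linspace(-np.pi, np.pi, 100001)
weight = np.abs(1 - np.exp(-1j * Om))**2  # equals 2 - 2*cos(Om)

# the shaped weight equals the unshaped level of 1 at Om = pi/3
crossover = 2 - 2 * np.cos(np.pi / 3)

# the mean over a uniform grid approximates (1/(2*pi)) * integral over one period
avg_power = weight.mean()
print(crossover, avg_power)  # 1.0 and close to 2.0
```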
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
## Omega and Xi
To implement Graph SLAM, a matrix and a vector (omega and xi, respectively) are introduced. The matrix is square and labelled with all the robot poses ($x_i$) and all the landmarks ($L_i$). Every time you make an observation, for example, as you move between two poses by some distance `dx` and can relate those two positions, you can represent this as a numerical relationship in these matrices.
It's easiest to see how these work in an example. Below you can see a matrix representation of omega and a vector representation of xi.
<img src='images/omega_xi.png' width=20% height=20% />
Next, let's look at a simple example that relates 3 poses to one another.
* When you start out in the world most of these values are zeros or contain only values from the initial robot position
* In this example, you have been given constraints, which relate these poses to one another
* Constraints translate into matrix values
<img src='images/omega_xi_constraints.png' width=70% height=70% />
If you have ever solved linear systems of equations before, this may look familiar, and if not, let's keep going!
### Solving for x
To "solve" for all these x values, we can use linear algebra; all the values of x are in the vector `mu` which can be calculated as a product of the inverse of omega times xi.
<img src='images/solution.png' width=30% height=30% />
---
**You can confirm this result for yourself by executing the math in the cell below.**
```
import numpy as np
# define omega and xi as in the example
omega = np.array([[1,0,0],
[-1,1,0],
[0,-1,1]])
xi = np.array([[-3],
[5],
[3]])
# calculate the inverse of omega
omega_inv = np.linalg.inv(omega)
# calculate the solution, mu
mu = omega_inv @ xi
# print out the values of mu (x0, x1, x2)
print(mu)
```
## Motion Constraints and Landmarks
In the last example, the constraint equations, relating one pose to another were given to you. In this next example, let's look at how motion (and similarly, sensor measurements) can be used to create constraints and fill up the constraint matrices, omega and xi. Let's start with empty/zero matrices.
<img src='images/initial_constraints.png' width=35% height=35% />
This example also includes relationships between poses and landmarks. Say we move from x0 to x1 with a displacement `dx` of 5. Then we have created a motion constraint that relates x0 to x1, and we can start to fill up these matrices.
<img src='images/motion_constraint.png' width=50% height=50% />
In fact, the one constraint equation can be written in two ways. So, the motion constraint that relates x0 and x1 by the motion of 5 has affected the matrix, adding values for *all* elements that correspond to x0 and x1.
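As a sketch of how such a constraint lands in the matrices (using strength-1 constraints, one common Graph SLAM convention; the exact scheme in the project may differ), the `dx = 5` motion between x0 and x1 produces:

```
import numpy as np

n_poses = 3
omega = np.zeros((n_poses, n_poses))
xi = np.zeros((n_poses, 1))

def add_motion_constraint(omega, xi, i, j, dx):
    """Encode the constraint x_j - x_i = dx into omega and xi in place."""
    omega[i, i] += 1
    omega[j, j] += 1
    omega[i, j] -= 1
    omega[j, i] -= 1
    xi[i] -= dx
    xi[j] += dx

# moving from x0 to x1 with dx = 5 touches every element tied to x0 and x1
add_motion_constraint(omega, xi, 0, 1, 5)
print(omega)
print(xi)
```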
---
### 2D case
In these examples, we've been showing you change in only one dimension, the x-dimension. In the project, it will be up to you to represent x and y positional values in omega and xi. One solution could be to create an omega and xi that are 2x larger than the number of robot poses (that will be generated over a series of time steps) plus the number of landmarks, so that they can hold both x and y values for poses and landmark locations. I might suggest drawing out a rough solution to graph slam as you read the instructions in the next notebook; that always helps me organize my thoughts. Good luck!
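One possible sketch of the 2D bookkeeping described above, with two omega rows/columns (x and y) per pose and landmark. The interleaved indexing is just one choice, not the required layout:

```
import numpy as np

n_poses, n_landmarks = 3, 2
dim = 2 * (n_poses + n_landmarks)

omega = np.zeros((dim, dim))
xi = np.zeros((dim, 1))

def pose_index(i, axis):
    """Row/column of pose i; axis 0 is x, axis 1 is y."""
    return 2 * i + axis

# a 2D motion from pose 0 to pose 1 of (dx, dy) = (5, -2) adds one
# 1D-style motion constraint per axis
for axis, d in enumerate((5, -2)):
    a, b = pose_index(0, axis), pose_index(1, axis)
    omega[a, a] += 1
    omega[b, b] += 1
    omega[a, b] -= 1
    omega[b, a] -= 1
    xi[a] -= d
    xi[b] += d

print(omega[:4, :4])
print(xi[:4])
```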
# KNN
K-Nearest Neighbors is a non-parametric model that can be used for classification, binary and multinomial, and for regression. Being non-parametric means that it predicts an output looking at the training data instead of using some learned parameters. This means that the training set needs to be stored.
For the classification task, given a new data example the model looks at the k closest points in the stored dataset and takes the most popular label among these points as the predicted label. For the regression problem, the output value is an average of the k closest points.
The main hyperparameters of the model are:
* k: A higher value entails greater computational cost and smoother decision boundaries, a low value might result in overfitting.
* Similarity metric: The distance between points can be computed using Euclidean distance, cosine distance, or other similarity metrics
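For instance, the two metrics mentioned above can be computed for a pair of points with plain NumPy (this is independent of the `models` package imported below):

```
import numpy as np

p = np.array([4.0, 6.0])
q = np.array([1.0, 2.0])

# Euclidean distance: straight-line distance between the points
euclidean = np.sqrt(np.sum((p - q) ** 2))

# cosine distance: 1 minus the cosine of the angle between the vectors
cosine = 1 - (p @ q) / (np.linalg.norm(p) * np.linalg.norm(q))

print(euclidean)  # 5.0
print(cosine)     # small, since p and q point in similar directions
```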
```
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.append("..")
from models.knn import KNNClassifier, KNNRegressor
from utils.datasets import blobs_classification_dataset
from utils.visualization import plot_decision_boundary
```
## Multinomial classification
Contrary to logistic regression, the KNN classification algorithm can handle any number of labels. For instance:
```
(x_train, y_train), (x_test, y_test) = blobs_classification_dataset(features=2, classes=5, samples=500)
# Visualize
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train, cmap='jet')
plt.show()
# Initialize the model.
model = KNNClassifier(X=x_train, y=y_train)
# KNN doesn't need a fitting step. Predictions can be made "out of the box".
y_hat = model.predict(x_test)
acc = np.mean(y_hat==y_test)
print("Accuracy on the holdout set: %.2f" % acc)
# Visualize decision boundary
ax = plot_decision_boundary(model.predict,
x_range=[x_test[:, 0].min()-1, x_test[:, 0].max()+1],
y_range=[x_test[:, 1].min()-1, x_test[:, 1].max()+1], classes=5)
ax.scatter(x_test[:, 0], x_test[:, 1], c=y_test, cmap='jet', label='True classes')
plt.show()
```
### Non-linearly separable data
This model can also generalize well to other kinds of data distributions:
```
# Load new data
from utils.datasets import radial_classification_dataset
(x_train, y_train), (x_test, y_test) = radial_classification_dataset(classes=4, samples=200)
# Initialize the model.
model = KNNClassifier(X=x_train, y=y_train)
# Visualize decision boundary
ax = plot_decision_boundary(model.predict,
x_range=[x_train[:, 0].min()-1, x_train[:, 0].max()+1],
y_range=[x_train[:, 1].min()-1, x_train[:, 1].max()+1], classes=4)
ax.scatter(x_train[:, 0], x_train[:, 1], c=y_train, cmap='jet', label='True classes')
plt.show()
```
## Regression
The KNN algorithm can also be used to perform regression.
```
# Generate the data
x_train = np.random.random_sample(100)*10
y_train = np.sin(x_train) + np.random.randn(100)*0.01
# Plot data
plt.plot(x_train, y_train, 'o')
plt.title('Sampled data from $sin(x)$')
plt.show();
# Initialize regression model
model = KNNRegressor(X=x_train.reshape([-1, 1]), y=y_train)
# Plot regressed line over training data in the original range
x_axis = np.linspace(0, 10, 200)
plt.plot(x_train, y_train, 'o')
plt.plot(x_axis, model.predict(x_axis), color='red')
plt.show()
```
It may seem that the model learns the function very well; however, it only performs well where there is enough nearby training data. This becomes apparent when we try to predict values outside the original range.
```
# Plot regressed line over training data along a greater range
x_axis = np.linspace(-10, 20, 400)
plt.plot(x_train, y_train, 'o')
plt.plot(x_axis, model.predict(x_axis), color='red')
plt.show()
```
| github_jupyter |
```
import numpy as np
import pandas as pd
#Data Importing
df = pd.read_csv(r'C:\Users\Nibras\OneDrive\Desktop\covishieldtamilnadu.csv',header=None)
df.head(10)
#Data Cleaning#
headers = ["S.No", "Health_Unit_District","1stDosageCovishieldHCW","2ndDosageCovishieldHCW","1stDosageCovishieldFLW","2ndDosageCovishieldFLW","1stDoseCovishield18to44","2ndDoseCovishield18to44","1stDoseCovishield45to60Comorbidities","2ndDosageCovishield45to60Comorbidities","1stDosageCovishield60+Comorbidities","2ndDosageCovishield60+Comorbidities","Total1stDoseCovishield","Total2ndDoseCovishield","Date"]
print("headers\n", headers)
df.columns = headers
df.head(10)
df = df.iloc[1: , :]
df.head(9)
df.dtypes
df['S.No'] = df['S.No'].astype('int64')
df['1stDosageCovishieldHCW'] = df['1stDosageCovishieldHCW'].astype('int64')
df['2ndDosageCovishieldHCW'] = df['2ndDosageCovishieldHCW'].astype('int64')
df['1stDosageCovishieldFLW'] = df['1stDosageCovishieldFLW'].astype('int64')
df['2ndDosageCovishieldFLW'] = df['2ndDosageCovishieldFLW'].astype('int64')
df['1stDoseCovishield18to44'] = df['1stDoseCovishield18to44'].astype('int64')
df['2ndDoseCovishield18to44'] = df['2ndDoseCovishield18to44'].astype('int64')
df['1stDoseCovishield45to60Comorbidities'] = df['1stDoseCovishield45to60Comorbidities'].astype('int64')
df['2ndDosageCovishield45to60Comorbidities'] = df['2ndDosageCovishield45to60Comorbidities'].astype('int64')
df['1stDosageCovishield60+Comorbidities'] = df['1stDosageCovishield60+Comorbidities'].astype('int64')
df['2ndDosageCovishield60+Comorbidities'] = df['2ndDosageCovishield60+Comorbidities'].astype('int64')
df['Total1stDoseCovishield'] = df['Total1stDoseCovishield'].astype('int64')
df['Total2ndDoseCovishield'] = df['Total2ndDoseCovishield'].astype('int64')
df.dtypes
#data visualisation
import matplotlib.pyplot as plt
%matplotlib inline
df1=df.head(10)
df1.plot(x="Health_Unit_District", y="1stDoseCovishield18to44", kind="bar")
date_set = set(df1['Date'])
plt.figure()
for Date in date_set:
selected_data = df1.loc[df1['Date'] == Date]
plt.plot(selected_data['Health_Unit_District'], selected_data['1stDoseCovishield18to44'], label=Date)
plt.legend()
plt.show()
# Predicting a particular day's number using linear regression
from sklearn.model_selection import train_test_split
import datetime as dt
df['Date'] = pd.to_datetime(df['Date'])
df.head(3)
#converting date time to ordinal
df['Date']=df['Date'].map(dt.datetime.toordinal)
df.head(3)
x=df['Date'].tail(3)
y=df['1stDoseCovishield18to44'].tail(3)
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
import numpy as np
lr.fit(np.array(x_train).reshape(-1,1),np.array(y_train).reshape(-1,1))
df.tail(3)
y_pred=lr.predict(np.array(x_test).reshape(-1,1))
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test,y_pred)
lr.predict(np.array([[737937]])) # ordinal 737937, i.e. 27 May
# the actual number reported on 27 May was 5874
```
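As a sanity check on the ordinal encoding used above, `datetime.date.fromordinal` inverts `toordinal`, so we can recover the calendar date behind a value such as 737937:

```python
import datetime as dt

# fromordinal is the inverse of toordinal
d = dt.date.fromordinal(737937)
print(d)                        # -> 2021-05-27
print(d.toordinal() == 737937)  # -> True
```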
| github_jupyter |
<a href="https://colab.research.google.com/github/MuhammedAshraf2020/DNN-using-tensorflow/blob/main/DNN_using_tensorflow_ipynb.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#import libs
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.datasets.mnist import load_data
# prepare dataset
(X_train , y_train) , (X_test , y_test) = load_data()
X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
X_train = np.expand_dims(X_train, -1)
X_test = np.expand_dims(X_test, -1)
for i in range(0, 9):
plt.subplot(330 + 1 + i)
plt.imshow(X_train[i][: , : , 0], cmap=plt.get_cmap('gray'))
plt.show()
X_train = [X_train[i].ravel() for i in range(len(X_train))]
X_test = [X_test[i].ravel() for i in range(len(X_test))]
y_train = tf.keras.utils.to_categorical(y_train , num_classes = 10)
y_test = tf.keras.utils.to_categorical(y_test , num_classes = 10 )
#set parameter
n_input = 28 * 28
n_hidden_1 = 512
n_hidden_2 = 256
n_hidden_3 = 128
n_output = 10
learning_rate = 0.01
epochs = 50
batch_size = 128
tf.compat.v1.disable_eager_execution()
# weight intialization
X = tf.compat.v1.placeholder(tf.float32 , [None , n_input])
y = tf.compat.v1.placeholder(tf.float32 , [None , n_output])
def Weights_init(list_layers , stddiv):
Num_layers = len(list_layers)
weights = {}
bias = {}
for i in range( Num_layers-1):
weights["W{}".format(i+1)] = tf.Variable(tf.compat.v1.truncated_normal([list_layers[i] , list_layers[i+1]] , stddev = stddiv))
bias["b{}".format(i+1)] = tf.Variable(tf.compat.v1.truncated_normal([list_layers[i+1]]))
return weights , bias
list_param = [784 , 512 , 256 , 128 , 10]
weights , biases = Weights_init(list_param , 0.1)
def Model (X , nn_weights , nn_bias):
Z1 = tf.add(tf.matmul(X , nn_weights["W1"]) , nn_bias["b1"])
Z1_out = tf.nn.relu(Z1)
Z2 = tf.add(tf.matmul(Z1_out , nn_weights["W2"]) , nn_bias["b2"])
Z2_out = tf.nn.relu(Z2)
Z3 = tf.add(tf.matmul(Z2_out , nn_weights["W3"]) , nn_bias["b3"])
Z3_out = tf.nn.relu(Z3)
Z4 = tf.add(tf.matmul(Z3_out , nn_weights["W4"]) , nn_bias["b4"])
# return raw logits: softmax_cross_entropy_with_logits_v2 below applies
# the softmax itself, so applying it here would do it twice
return Z4
nn_layer_output = Model(X , weights , biases)
loss = tf.reduce_mean(tf.compat.v1.nn.softmax_cross_entropy_with_logits_v2(logits = nn_layer_output , labels = y))
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(loss)
init = tf.compat.v1.global_variables_initializer()
# Determining if the predictions are accurate
is_correct_prediction = tf.equal(tf.argmax(nn_layer_output , 1),tf.argmax(y, 1))
#Calculating prediction accuracy
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
saver = tf.compat.v1.train.Saver()
with tf.compat.v1.Session() as sess:
# initializing all the variables
sess.run(init)
total_batch = int(len(X_train) / batch_size)
for epoch in range(epochs):
avg_cost = 0
for i in range(total_batch):
batch_x , batch_y = X_train[i * batch_size : (i + 1) * batch_size] , y_train[i * batch_size : (i + 1) * batch_size]
_, c = sess.run([optimizer,loss], feed_dict={X: batch_x, y: batch_y})
avg_cost += c / total_batch
if(epoch % 10 == 0):
print("Epoch:", (epoch + 1), "train_cost =", "{:.3f} ".format(avg_cost) , end = "")
print("train_acc = {:.3f} ".format(sess.run(accuracy, feed_dict={X: X_train, y:y_train})) , end = "")
print("valid_acc = {:.3f}".format(sess.run(accuracy, feed_dict={X: X_test, y:y_test})))
saver.save(sess , save_path = "/content/Model.ckpt")
```
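For reference, the forward pass implemented by `Model` above (affine layers with ReLU, softmax on the output) can be sketched in plain NumPy; the weights here are random placeholders, not the trained ones:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stabilised
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
sizes = [784, 512, 256, 128, 10]  # mirrors list_param above
Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    # hidden layers: affine transform followed by ReLU
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = relu(x @ W + b)
    # output layer: affine transform followed by softmax
    return softmax(x @ Ws[-1] + bs[-1])

probs = forward(rng.random((2, 784)))
print(probs.shape)                           # -> (2, 10)
print(np.allclose(probs.sum(axis=1), 1.0))   # -> True
```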
| github_jupyter |
This notebook copies images and annotations from the original dataset in order to build an instance-detection set.
You can control how many images per pose to take (starting from some offset), how many instances to consider, and which classes to include.
We also edit the annotation files, because initially all annotations are labelled by instance rather than by category.
```
import os;
from shutil import copyfile
import xml.etree.ElementTree as ET
images_per_set = 102
start_after = 100
```
The following code can be used to generate both the training and the testing set:
```
classes = ['book1','book2','book3','book4','book5',
'cellphone1','cellphone2','cellphone3','cellphone4','cellphone5',
'mouse1','mouse2','mouse3','mouse4','mouse5',
'ringbinder1','ringbinder2','ringbinder3','ringbinder4','ringbinder5']
days = [5,5,5,5,5,3,3,3,3,3,7,7,7,7,7,3,3,3,3,3]
images_path ='C:/Users/issa/Documents/datasets/ICUB_Instance/test/'
annotations_path = 'C:/Users/issa/Documents/datasets/ICUB_Instance/test_ann/'
instances_list = [1,2,3,4,5]
for category_name,day_number in zip(classes,days):
in_pose_counter = 0;
j =0
for inst in instances_list:
j=0
dirs = ['D:\\2nd_Semester\\CV\\Project\\part1\\part1\\'+category_name+'\\'+category_name+str(inst)+'\\MIX\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\part1\\part1\\'+category_name+'\\'+category_name+str(inst)+'\\ROT2D\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\part1\\part1\\'+category_name+'\\'+category_name+str(inst)+'\\ROT3D\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\part1\\part1\\'+category_name+'\\'+category_name+str(inst)+'\\SCALE\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\part1\\part1\\'+category_name+'\\'+category_name+str(inst)+'\\TRANSL\\day'+str(day_number)+'\\left\\']
for dir in dirs:
i=0;
in_pose_counter = 0;
if(i>images_per_set):
break;
for innerSubDir,innerDirs,innerFiles in os.walk(dir):
for file in innerFiles:
i = i+1
if(i>images_per_set):
break;
in_pose_counter = in_pose_counter+1
if(in_pose_counter>start_after):
j = j+1
copyfile(dir+file,images_path+category_name+str(inst)+'_'+str(j)+'.jpg')
in_pose_counter =0
j =0
for inst in instances_list:
j=0
dirs = ['D:\\2nd_Semester\\CV\\Project\\Annotations_refined\Annotations_refined\\'+category_name+'\\'+category_name+str(inst)+'\\MIX\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\Annotations_refined\Annotations_refined\\'+category_name+'\\'+category_name+str(inst)+'\\ROT2D\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\Annotations_refined\Annotations_refined\\'+category_name+'\\'+category_name+str(inst)+'\\ROT3D\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\Annotations_refined\Annotations_refined\\'+category_name+'\\'+category_name+str(inst)+'\\SCALE\\day'+str(day_number)+'\\left\\',
'D:\\2nd_Semester\\CV\\Project\\Annotations_refined\Annotations_refined\\'+category_name+'\\'+category_name+str(inst)+'\\TRANSL\\day'+str(day_number)+'\\left\\']
for dir in dirs:
i=0;
in_pose_counter =0
if(i>images_per_set):
break;
for innerSubDir,innerDirs,innerFiles in os.walk(dir):
for file in innerFiles:
i = i+1
if(i>images_per_set):
break;
in_pose_counter = in_pose_counter+1
if(in_pose_counter>start_after):
j = j+1
outputPath = annotations_path+category_name+str(inst)+'_'+str(j)+'.xml'
copyfile(dir+file,outputPath)
```
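The annotation editing mentioned at the top (turning instance labels into category labels) can be sketched like this; it assumes Pascal-VOC-style files where each object carries a `<name>` tag, which may differ from the actual ICUB layout:

```python
import re
import xml.etree.ElementTree as ET

def relabel_instances(root):
    # rewrite instance labels such as 'book1' to their category 'book'
    for name in root.iter('name'):
        name.text = re.sub(r'\d+$', '', name.text)
    return root

# usage sketch on one copied annotation file:
# tree = ET.parse(annotations_path + 'book1_1.xml')
# relabel_instances(tree.getroot())
# tree.write(annotations_path + 'book1_1.xml')
```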
This part produces the train/val sets: we set the classes and the array of image indices per class, then split the array and write out the files.
```
import numpy as np
from sklearn.model_selection import train_test_split
x = np.arange(1,129)
classes = ['book1','book2','book3','book4','book5',
'cellphone1','cellphone2','cellphone3','cellphone4','cellphone5',
'mouse1','mouse2','mouse3','mouse4','mouse5',
'ringbinder1','ringbinder2','ringbinder3','ringbinder4','ringbinder5']
for category in classes:
xtrain,xtest,ytrain,ytest=train_test_split(x, x, test_size=0.25)
file = open("datasets//ICUB_Instance//train.txt","a")
for i in xtrain:
file.write(category+'_'+str(i)+'\n')
file.close()
file = open("datasets//ICUB_Instance//val.txt","a")
for i in xtest:
file.write(category+'_'+str(i)+'\n')
file.close()
```
This part generates the test-set list; no splitting is needed here.
```
import numpy as np
classes = ['book1','book2','book3','book4','book5',
'cellphone1','cellphone2','cellphone3','cellphone4','cellphone5',
'mouse1','mouse2','mouse3','mouse4','mouse5',
'ringbinder1','ringbinder2','ringbinder3','ringbinder4','ringbinder5']
x = np.arange(1,21)
file = open("datasets//ICUB_Instance//test.txt","a")
for category in classes:
for i in x:
file.write(category+'_'+str(i)+'\n')
file.close()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/krsmith/DS-Sprint-01-Dealing-With-Data/blob/master/module4-databackedassertions/LS_DS_114_Making_Data_backed_Assertions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lambda School Data Science - Making Data-backed Assertions
This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.
## Lecture - generating a confounding variable
The prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.
Let's use Python to generate data that actually behaves in this fashion!
```
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have twice as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.2 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site seems to have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
```
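One hedged answer to the stretch goal above (with toy data regenerated here so the snippet is self-contained) is to pass both variables as the column argument of the crosstab:

```python
import numpy as np
import pandas as pd

# regenerate a toy version of the user data
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'purchased': rng.random(1000) < 0.15,
    'time_on_site': rng.uniform(5, 600, 1000),
    'mobile': rng.random(1000) < 0.75,
})
time_bins = pd.cut(df['time_on_site'], 5)

# purchased vs. (device type, time bucket) in one table
ct = pd.crosstab(df['purchased'], [df['mobile'], time_bins],
                 normalize='columns')
print(ct)
```

With the device type held fixed in each column group, the time-on-site effect can be read off without the confounding.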
## Assignment - what's going on here?
Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.
Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
```
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/invegat/DS-Sprint-01-Dealing-With-Data/master/module4-databackedassertions/persons.csv")
df.head()
print(df.describe())
print(df.info())
# Looking at data with crosstabulation pd.crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'])
pd.crosstab(df['exercise_time'], [df['age'], df['weight']], normalize='columns')
# Use panda.cut() to organize data into bins
age_bins = pd.cut(df['age'], bins=[18,30,42,54,68,80]) # 5 age bins
time_bins = pd.cut(df['exercise_time'], bins=[0,75,150,225,300]) # 4 exercise-time bins
weight_bins = pd.cut(df['weight'], bins=[100,130,160,190,220])
# Now show the data with crosstab in bins
ct_persons = pd.crosstab(time_bins, [age_bins, weight_bins],normalize='columns')
ct_persons
import matplotlib.pyplot as plt
!pip install seaborn --upgrade
import seaborn as sns
print(sns.__version__)
# Seaborn Heatmap for age/weight by exercise time
fig, ax = plt.subplots(figsize=(16,10))
sns.heatmap(ct_persons,annot=True,linewidths=.5,cmap="YlGnBu", ax=ax);
#Older age, disregarding weight, tends to exercise for less time.
#Higher weight, disregarding age, tends to exercise for less time.
#Younger age and lower weight tends to exercise for the most time.
# Seaborn Heatmap for age/exercise time by weight
ct_weight = pd.crosstab(weight_bins, [age_bins, time_bins],normalize='columns')
fig, ax = plt.subplots(figsize=(16,10))
sns.heatmap(ct_weight,annot=True,linewidths=.5,cmap="YlGnBu", ax=ax);
# Seaborn Heatmap for weight/exercise time by age
ct_age = pd.crosstab(age_bins, [weight_bins, time_bins],normalize='columns')
fig, ax = plt.subplots(figsize=(16,10))
sns.heatmap(ct_age,annot=True,linewidths=.5,cmap="YlGnBu", ax=ax);
```
### Assignment questions
After you've worked on some code, answer the following questions in this text block:
1. What are the variable types in the data?
*All of the variables in this data are integers which represent a person's age, weight, and exercise time.*
2. What are the relationships between the variables?
*The lower the age, regardless of weight, the more time a person would spend exercising. The higher the weight, regardless of age, the less time a person would spend exercising. Younger age and lower weight tend to go with more exercise time.*
3. Which relationships are "real", and which spurious?
*I would say that age and weight are spurious.*
*REAL: Higher weight = less exercise time.*
*REAL: Higher age = less exercise time.*
## Stretch goals and resources
Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.
- [Spurious Correlations](http://tylervigen.com/spurious-correlations)
- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)
Stretch goals:
- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)
- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
| github_jupyter |
# Hyperparameter Ensembles for Robustness and Uncertainty Quantification
*Florian Wenzel, April 8th 2021. Licensed under the Apache License, Version 2.0.*
Recently, we proposed **Hyper-deep Ensembles** ([Wenzel et al., NeurIPS 2020](https://arxiv.org/abs/2006.13570)), a simple, yet powerful, extension of [deep ensembles](https://arxiv.org/abs/1612.01474). The approach works with any given deep network architecture and can therefore be easily integrated into (and improve) a machine learning system that is already used in production.
Hyper-deep ensembles improve the performance of a given deep network by forming an ensemble over multiple variants of that architecture where each member uses different hyperparameters. In this notebook we consider a ResNet-20 architecture with block-wise $\ell_2$-regularization parameters and a label smoothing parameter. We construct an ensemble of 4 members where each member uses a different set of hyperparameters. This leads to an ensemble of **diverse members**, i.e., members that are complementary in their predictions. The final ensemble greatly improves the prediction performance and the robustness of the model, e.g., in out-of-distribution settings.
Let's start with some boilerplate code for data loading and the model definition.
Requirements:
```bash
!pip install "git+https://github.com/google/uncertainty-baselines.git#egg=uncertainty_baselines"
```
```
import os
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import uncertainty_baselines as ub
def _ensemble_accuracy(labels, logits_list):
"""Compute the accuracy resulting from the ensemble prediction."""
per_probs = tf.nn.softmax(logits_list)
probs = tf.reduce_mean(per_probs, axis=0)
acc = tf.keras.metrics.SparseCategoricalAccuracy()
acc.update_state(labels, probs)
return acc.result()
def _ensemble_cross_entropy(labels, logits):
logits = tf.convert_to_tensor(logits)
ensemble_size = float(logits.shape[0])
labels = tf.cast(labels, tf.int32)
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=tf.broadcast_to(labels[tf.newaxis, ...], tf.shape(logits)[:-1]),
logits=logits)
nll = -tf.reduce_logsumexp(-ce, axis=0) + tf.math.log(ensemble_size)
return tf.reduce_mean(nll)
def greedy_selection(val_logits, val_labels, max_ens_size, objective='nll'):
"""Greedy procedure from Caruana et al. 2004, with replacement."""
assert_msg = 'Unknown objective type (received {}).'.format(objective)
assert objective in ('nll', 'acc', 'nll-acc'), assert_msg
# Objective that should be optimized by the ensemble. Arbitrary objectives,
# e.g., based on nll, acc or calibration error (or combinations of those) can
# be used.
if objective == 'nll':
get_objective = lambda acc, nll: nll
elif objective == 'acc':
get_objective = lambda acc, nll: acc
else:
get_objective = lambda acc, nll: nll-acc
best_acc = 0.
best_nll = np.inf
best_objective = np.inf
ens = []
def get_ens_size():
return len(set(ens))
while get_ens_size() < max_ens_size:
current_val_logits = [val_logits[model_id] for model_id in ens]
best_model_id = None
for model_id, logits in enumerate(val_logits):
acc = _ensemble_accuracy(val_labels, current_val_logits + [logits])
nll = _ensemble_cross_entropy(val_labels, current_val_logits + [logits])
obj = get_objective(acc, nll)
if obj < best_objective:
best_acc = acc
best_nll = nll
best_objective = obj
best_model_id = model_id
if best_model_id is None:
print('Ensemble could not be improved: Greedy selection stops.')
break
ens.append(best_model_id)
return ens, best_acc, best_nll
def parse_checkpoint_dir(checkpoint_dir):
"""Parse directory of checkpoints."""
paths = []
subdirectories = tf.io.gfile.glob(os.path.join(checkpoint_dir, '*'))
is_checkpoint = lambda f: ('checkpoint' in f and '.index' in f)
for subdir in subdirectories:
for path, _, files in tf.io.gfile.walk(subdir):
if any(f for f in files if is_checkpoint(f)):
latest_checkpoint_without_suffix = tf.train.latest_checkpoint(path)
paths.append(os.path.join(path, latest_checkpoint_without_suffix))
break
return paths
DATASET = 'cifar10'
TRAIN_PROPORTION = 0.95
BATCH_SIZE = 64
ENSEMBLE_SIZE = 4
# load data
ds_info = tfds.builder(DATASET).info
num_classes = ds_info.features['label'].num_classes
# test set
steps_per_eval = ds_info.splits['test'].num_examples // BATCH_SIZE
test_dataset = ub.datasets.get(
DATASET,
split=tfds.Split.TEST).load(batch_size=BATCH_SIZE)
# validation set
validation_percent = 1 - TRAIN_PROPORTION
val_dataset = ub.datasets.get(
dataset_name=DATASET,
split=tfds.Split.VALIDATION,
validation_percent=validation_percent,
drop_remainder=False).load(batch_size=BATCH_SIZE)
steps_per_val_eval = int(ds_info.splits['train'].num_examples *
validation_percent) // BATCH_SIZE # validation set
```
# Let's construct the hyper-deep ensemble over a ResNet-20 architecture
**This is the (simplified) hyper-deep ensembles construction pipeline**
> **1. Random search:** train several models on the train set using different (random) hyperparameters.
>
> **2. Ensemble construction:** on a validation set using a greedy selection method.
Remark:
*In this notebook we use a slightly simplified version of the pipeline compared to the approach of the original paper (where an additional stratification step is used). Additionally, after selecting the optimal hyperparameters, the ensemble performance can be improved even more by retraining the selected models on the full train set (i.e., this time not reserving a portion for validation). The simplified pipeline in this notebook is slightly less performant but easier to implement, and is similar to the ones used by Caruana et al., 2004 and Zaidi et al., 2020 in the context of neural architecture search.*
## Step 1: Random Hyperparameter Search
We start by training 100 different versions of the ResNet-20 using different $\ell_2$-regularization parameters and label smoothing parameters. Since this would take some time we have already trained the models using a standard training script (which can be found [here](https://github.com/google/uncertainty-baselines/blob/master/baselines/cifar/deterministic.py)) and directly load the checkpoints.
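The hyperparameter sampling behind that random search can be sketched as follows; the ranges below are illustrative assumptions, not the ones used for the actual checkpoints:

```python
import random

def sample_config(rng):
    # one random hyperparameter configuration per ResNet-20 training run
    return {
        'l2': 10 ** rng.uniform(-6, -2),           # log-uniform l2 strength
        'label_smoothing': rng.uniform(0.0, 0.2),
    }

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(5)]   # one model per config
for config in configs:
    print(config)
```

Each sampled configuration is trained independently; the resulting checkpoints form the model pool that the greedy selection in Step 2 draws from.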
```
# The model architecture we want to form the ensemble over
# here, we use the original ResNet-20 model by He et al. 2015
model = ub.models.wide_resnet(
input_shape=ds_info.features['image'].shape,
depth=22,
width_multiplier=1,
num_classes=num_classes,
l2=0.,
version=1)
# Load checkpoints
# Unfortunately, we can't release the checkpoints yet, but we hope to make
# them available soon. For the moment, you have to train the individual
# models yourself using the training script linked above.
CHECKPOINT_DIR = 'insert_path_to_checkpoints'
ensemble_filenames = parse_checkpoint_dir(CHECKPOINT_DIR)
model_pool_size = len(ensemble_filenames)
checkpoint = tf.train.Checkpoint(model=model)
print('Model pool size: {}'.format(model_pool_size))
```
## Step 2: Construction of the hyperparameter ensemble on the validation set
First we compute the logits of all models in our model pool on the validation set.
```
# Compute the logits on the validation set
val_logits, val_labels = [], []
for m, ensemble_filename in enumerate(ensemble_filenames):
# Enforce memory clean-up
tf.keras.backend.clear_session()
checkpoint.restore(ensemble_filename)
val_iterator = iter(val_dataset)
val_logits_m = []
for _ in range(steps_per_val_eval):
inputs = next(val_iterator)
features = inputs['features']
labels = inputs['labels']
val_logits_m.append(model(features, training=False))
if m == 0:
val_labels.append(labels)
val_logits.append(tf.concat(val_logits_m, axis=0))
if m == 0:
val_labels = tf.concat(val_labels, axis=0)
if m % 10 == 0 or m == model_pool_size - 1:
percent = (m + 1.) / model_pool_size
message = ('{:.1%} completion for prediction on validation set: '
'model {:d}/{:d}.'.format(percent, m + 1, model_pool_size))
print(message)
```
Now we are ready to construct the ensemble.
* In the first step, we take the best model (on the validation set) -> `model_1`
* In the second step, we fix `model_1` and try all models in our model pool and construct the ensemble `[model_1, model_2]`. We select the model `model_2` that leads to the highest performance gain.
* In the third step, we fix `model_1`, `model_2` and choose `model_3` to construct an ensemble `[model_1, model_2, model_3]` that leads to the highest performance gain over step 2.
* ... and so on, until the desired ensemble size is reached or no performance gain could be achieved anymore.
```
# Ensemble construction by greedy member selection on the validation set
selected_members, val_acc, val_nll = greedy_selection(val_logits, val_labels,
ENSEMBLE_SIZE,
objective='nll')
unique_selected_members = list(set(selected_members))
message = ('Members selected by greedy procedure: model ids = {} (with {} unique '
'member(s))').format(
selected_members, len(unique_selected_members))
print(message)
```
# Evaluation on the test set
Let's see how the **hyper-deep ensemble** performs on the test set.
```
# Evaluate the following metrics on the test step
metrics = {
'ensemble/negative_log_likelihood': tf.keras.metrics.Mean(),
'ensemble/accuracy': tf.keras.metrics.SparseCategoricalAccuracy(),
}
metrics_single = {
'single/negative_log_likelihood': tf.keras.metrics.SparseCategoricalCrossentropy(),
'single/accuracy': tf.keras.metrics.SparseCategoricalAccuracy(),
}
# compute logits for ensemble member on test set
logits_test = []
for m, member_id in enumerate(unique_selected_members):
ensemble_filename = ensemble_filenames[member_id]
checkpoint.restore(ensemble_filename)
logits = []
test_iterator = iter(test_dataset)
for _ in range(steps_per_eval):
features = next(test_iterator)['features']
logits.append(model(features, training=False))
logits_test.append(tf.concat(logits, axis=0))
logits_test = tf.convert_to_tensor(logits_test)
print('Completed computation of member logits on the test set.')
# compute test metrics
test_iterator = iter(test_dataset)
for step in range(steps_per_eval):
labels = next(test_iterator)['labels']
logits = logits_test[:, (step*BATCH_SIZE):((step+1)*BATCH_SIZE)]
labels = tf.cast(labels, tf.int32)
negative_log_likelihood = _ensemble_cross_entropy(labels, logits)
# per member output probabilities
per_probs = tf.nn.softmax(logits)
# ensemble output probabilities
probs = tf.reduce_mean(per_probs, axis=0)
metrics['ensemble/negative_log_likelihood'].update_state(
negative_log_likelihood)
metrics['ensemble/accuracy'].update_state(labels, probs)
# for comparison compute performance of the best single model
# this is by definition the first model that was selected by the greedy
# selection method
logits_single = logits_test[0, (step*BATCH_SIZE):((step+1)*BATCH_SIZE)]
probs_single = tf.nn.softmax(logits_single)
metrics_single['single/negative_log_likelihood'].update_state(labels, logits_single)
metrics_single['single/accuracy'].update_state(labels, probs_single)
percent = (step + 1) / steps_per_eval
if step % 25 == 0 or step == steps_per_eval - 1:
message = ('{:.1%} completion final test prediction'.format(percent))
print(message)
ensemble_results = {name: metric.result() for name, metric in metrics.items()}
single_results = {name: metric.result() for name, metric in metrics_single.items()}
```
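The `_ensemble_cross_entropy` helper used in the cell above is not shown in this excerpt. Assuming it computes the negative log-likelihood of the equal-weight mixture of the members (the standard definition for ensembles), a NumPy sketch would be:

```python
import numpy as np

def ensemble_cross_entropy(labels, logits):
    # logits: (num_members, batch, num_classes); labels: (batch,) int class ids.
    # Returns the NLL of the equal-weight mixture, computed stably in log space.
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)
    m = logits.shape[0]
    # Per-member log-softmax
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Per-member log-likelihood of the true label
    member_ll = log_probs[:, np.arange(len(labels)), labels]  # (m, batch)
    # log(mean of member probabilities) via a stable log-sum-exp
    a = member_ll.max(axis=0)
    mixture_ll = a + np.log(np.exp(member_ll - a).sum(axis=0)) - np.log(m)
    return -mixture_ll.mean()
```

The version in the notebook operates on TensorFlow tensors, but the computed quantity should be the same.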
## Here is the final ensemble performance
We gained almost 2 percentage points in terms of accuracy over the best single model!
```
print('Ensemble performance:')
for m, val in ensemble_results.items():
    print(' {}: {}'.format(m, val))
print('\nFor comparison:')
for m, val in single_results.items():
    print(' {}: {}'.format(m, val))
```
## Hyper-deep ensembles as a strong baseline
We have seen that **hyper-deep ensembles** can lead to significant performance gains and can be easily implemented in your existing machine learning pipeline. Moreover, we hope that other researchers can benefit from using **hyper-deep ensembles** as a competitive, yet simple-to-implement, baseline. Even though **hyper-deep ensembles** might be more expensive than single-model methods, they show how much can be gained by introducing more diversity into the predictions.
## Hyper-deep ensembles can make your ML pipeline more robust
**Don't throw away your precious models!**
In many settings where we use a standard (single model) deep neural network, we usually start with a hyperparameter search. Typically, we select the model with the best hyperparameters and throw away all the others. Here, we show that you can get a much more performant system by combining multiple models from the hyperparameter search.
**What's the additional cost?**
In most cases you already get a significant performance boost if you combine 4 models. The main additional cost (provided you have already done the hyperparameter search) is that your model is now 4x larger (more memory) and 4x slower at prediction time (if not parallelized). Often the performance boost justifies this increased cost. If you can't afford the additional cost, check out **hyper-batch ensembles**, an efficient version that amortizes hyper-deep ensembles **within a single model** (see our [paper](https://arxiv.org/abs/2006.13570)).
## Pointers to additional resources
* The full code for the extended **hyper-deep ensembles** pipeline and the code for the experiments in our paper can be found in the [Uncertainty Baselines](https://github.com/google/uncertainty-baselines/blob/master/baselines/cifar/hyperdeepensemble.py) repository.
* Our efficient version **hyper-batch ensembles**, which amortizes hyper-deep ensembles within a single model, is implemented as a Keras layer and can be found in [Edward2](https://github.com/google/edward2).
## For questions reach out to
Florian Wenzel ([florianwenzel@google.com](mailto:florianwenzel@google.com)) \
Rodolphe Jenatton ([rjenatton@google.com](mailto:rjenatton@google.com))
### Reference
If you use parts of this pipeline for your projects or papers we would be happy if you would cite our paper.
> Florian Wenzel, Jasper Snoek, Dustin Tran and Rodolphe Jenatton (2020).
> [Hyperparameter Ensembles for Robustness and Uncertainty Quantification](https://arxiv.org/abs/2006.13570).
> In _Neural Information Processing Systems_.
```none
@inproceedings{wenzel2020good,
author = {Florian Wenzel and Jasper Snoek and Dustin Tran and Rodolphe Jenatton},
title = {Hyperparameter Ensembles for Robustness and Uncertainty Quantification},
booktitle = {Neural Information Processing Systems},
year = {2020},
}
```
# Validating Multi-View Spherical KMeans by Replicating Paper Results
Here we will validate the implementation of multi-view spherical kmeans by replicating the right side of figure 3 from the Multi-View Clustering paper by Bickel and Scheffer.
```
import sklearn
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
import scipy as scp
from scipy import sparse
import mvlearn
from mvlearn.cluster.mv_spherical_kmeans import MultiviewSphericalKMeans
from joblib import Parallel, delayed
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter('ignore') # Ignore warnings
```
### A function to recreate the artificial dataset from the paper
The experiment in the paper used the 20 Newsgroup dataset, which consists of around 18000 newsgroups posts on 20 topics. This dataset can be obtained from scikit-learn. To create the artificial dataset used in the experiment, 10 of the 20 classes from the 20 newsgroups dataset were selected and grouped into 2 groups of 5 classes, and then encoded as tfidf vectors. These now represented the 5 multi-view classes, each with 2 views (one from each group). 200 examples were randomly sampled from each of the 20 newsgroups, producing 1000 concatenated examples uniformly distributed over the 5 classes.
```
NUM_SAMPLES = 200

# Load in the vectorized newsgroup data from the scikit-learn package
news = fetch_20newsgroups(subset='all')
all_data = np.array(news.data)
all_targets = np.array(news.target)
class_names = news.target_names

# A function to get the 20 newsgroups data
def get_data():
    # Set class pairings as described in the multi-view clustering paper
    view1_classes = ['comp.graphics', 'rec.motorcycles', 'sci.space',
                     'rec.sport.hockey', 'comp.sys.ibm.pc.hardware']
    view2_classes = ['rec.autos', 'sci.med', 'misc.forsale',
                     'soc.religion.christian', 'comp.os.ms-windows.misc']

    # Create lists to hold data and labels for each of the 5 classes across 2 different views
    labels = [num for num in range(len(view1_classes)) for _ in range(NUM_SAMPLES)]
    labels = np.array(labels)
    view1_data = list()
    view2_data = list()

    # Randomly sample 200 items from each of the selected classes in view1
    for ind in range(len(view1_classes)):
        class_num = class_names.index(view1_classes[ind])
        class_data = all_data[(all_targets == class_num)]
        indices = np.random.choice(class_data.shape[0], NUM_SAMPLES)
        view1_data.append(class_data[indices])
    view1_data = np.concatenate(view1_data)

    # Randomly sample 200 items from each of the selected classes in view2
    for ind in range(len(view2_classes)):
        class_num = class_names.index(view2_classes[ind])
        class_data = all_data[(all_targets == class_num)]
        indices = np.random.choice(class_data.shape[0], NUM_SAMPLES)
        view2_data.append(class_data[indices])
    view2_data = np.concatenate(view2_data)

    # Vectorize the data
    vectorizer = TfidfVectorizer()
    view1_data = vectorizer.fit_transform(view1_data)
    view2_data = vectorizer.fit_transform(view2_data)

    # Shuffle and normalize vectors
    shuffled_inds = np.random.permutation(NUM_SAMPLES * len(view1_classes))
    view1_data = sparse.vstack(view1_data)
    view2_data = sparse.vstack(view2_data)
    view1_data = np.array(view1_data[shuffled_inds].todense())
    view2_data = np.array(view2_data[shuffled_inds].todense())
    magnitudes1 = np.linalg.norm(view1_data, axis=1)
    magnitudes2 = np.linalg.norm(view2_data, axis=1)
    magnitudes1[magnitudes1 == 0] = 1
    magnitudes2[magnitudes2 == 0] = 1
    magnitudes1 = magnitudes1.reshape((-1, 1))
    magnitudes2 = magnitudes2.reshape((-1, 1))
    view1_data /= magnitudes1
    view2_data /= magnitudes2
    labels = labels[shuffled_inds]
    return view1_data, view2_data, labels
```
### Function to compute cluster entropy
The function below is used to calculate the total clustering entropy using the formula described in the paper.
```
def compute_entropy(partitions, labels, k, num_classes):
    total_entropy = 0
    num_examples = partitions.shape[0]
    for part in range(k):
        labs = labels[partitions == part]
        part_size = labs.shape[0]
        part_entropy = 0
        for cl in range(num_classes):
            prop = np.sum(labs == cl) * 1.0 / part_size
            ent = 0
            if prop != 0:
                ent = -prop * np.log2(prop)
            part_entropy += ent
        part_entropy = part_entropy * part_size / num_examples
        total_entropy += part_entropy
    return total_entropy
```
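As a sanity check, the formula can be exercised on two extreme cases. The snippet below is a compact, self-contained restatement of the entropy computation above (with a guard for empty partitions added):

```python
import numpy as np

def clustering_entropy(partitions, labels, k, num_classes):
    # Size-weighted Shannon entropy of the class distribution inside
    # each partition (same formula as compute_entropy above).
    n = len(partitions)
    total = 0.0
    for part in range(k):
        labs = labels[partitions == part]
        if len(labs) == 0:
            continue  # skip empty partitions
        props = np.bincount(labs, minlength=num_classes) / len(labs)
        props = props[props > 0]
        total += -(props * np.log2(props)).sum() * len(labs) / n
    return total

# A perfect clustering has zero entropy; a maximally mixed one has one bit
print(clustering_entropy(np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1]), 2, 2))  # 0.0
print(clustering_entropy(np.array([0, 1, 0, 1]), np.array([0, 0, 1, 1]), 2, 2))  # 1.0
```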
### Functions to Initialize Centroids and Run Experiment
The randSpherical function initializes the cluster centroids by taking a uniform random sample of points on the surface of a unit hypersphere. The getEntropies function runs Multi-View Spherical KMeans clustering on the data with n_clusters from 1 to 10, once each. This function essentially runs one trial of the experiment.
```
def randSpherical(n_clusters, n_feat1, n_feat2):
    c_centers1 = np.random.normal(0, 1, (n_clusters, n_feat1))
    c_centers1 /= np.linalg.norm(c_centers1, axis=1).reshape((-1, 1))
    c_centers2 = np.random.normal(0, 1, (n_clusters, n_feat2))
    c_centers2 /= np.linalg.norm(c_centers2, axis=1).reshape((-1, 1))
    return [c_centers1, c_centers2]

def getEntropies():
    v1_data, v2_data, labels = get_data()
    entropies = list()
    for num in range(1, 11):
        centers = randSpherical(num, v1_data.shape[1], v2_data.shape[1])
        kmeans = MultiviewSphericalKMeans(n_clusters=num, init=centers, n_init=1)
        pred = kmeans.fit_predict([v1_data, v2_data])
        ent = compute_entropy(pred, labels, num, 5)
        entropies.append(ent)
    print('done')
    return entropies
```
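The Gaussian-then-normalize trick in `randSpherical` yields points uniformly distributed on the surface of the unit hypersphere, thanks to the rotation invariance of the standard Gaussian. A quick single-view property check (a self-contained restatement, not the function above):

```python
import numpy as np

def rand_spherical(n_clusters, n_feat):
    # Normalizing i.i.d. Gaussian draws gives a uniform sample on the
    # surface of the unit hypersphere (rotation invariance of the Gaussian)
    c = np.random.normal(0, 1, (n_clusters, n_feat))
    return c / np.linalg.norm(c, axis=1, keepdims=True)

centers = rand_spherical(5, 100)
print(np.allclose(np.linalg.norm(centers, axis=1), 1.0))  # True
```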
### Running multiple trials of the experiment
It was difficult to exactly reproduce the results from the Multi-View Clustering paper because the experimenters randomly sampled a subset of the 20 newsgroups samples to create the artificial dataset, and this random subset was not reported. Therefore, in an attempt to at least replicate the overall shape of the distribution of cluster entropy over the number of clusters, we resample and recreate the artificial dataset in each trial. Each trial thus consists of resampling and recreating the artificial dataset, and then running Multi-view Spherical KMeans clustering on that dataset for n_clusters 1 to 10, once each. We performed 80 such trials and the results are shown below.
```
# Do spherical kmeans and get entropy values for each k for multiple trials
n_workers = 10
n_trials = 80
mult_entropies1 = Parallel(n_jobs=n_workers)(
    delayed(getEntropies)() for i in range(n_trials))
```
### Experiment Results
We see the results of this experiment below. Here, we have more or less reproduced the shape of the distribution as seen in figure 3 from the Multi-view Clustering Paper.
```
mult_entropies1 = np.array(mult_entropies1)
ave_m_entropies = np.mean(mult_entropies1, axis=0)
std_m_entropies = np.std(mult_entropies1, axis=0)
x_values = list(range(1, 11))
plt.errorbar(x_values, ave_m_entropies, std_m_entropies, capsize=5, color = '#F46C12')
plt.xlabel('k')
plt.ylabel('Entropy')
plt.legend(['2 Views'])
plt.rc('axes', labelsize=12)
plt.show()
```
# StellarGraph Ensemble for link prediction
In this example, we use `stellargraph`'s `BaggingEnsemble` class of [GraphSAGE](http://snap.stanford.edu/graphsage/) models to predict citation links in the Cora dataset (see below). The `BaggingEnsemble` class brings ensemble learning to `stellargraph`'s graph neural network models, e.g., `GraphSAGE`, quantifying prediction variance and potentially improving prediction accuracy.
The problem is treated as a supervised link prediction problem on a homogeneous citation network with nodes representing papers (with attributes such as binary keyword indicators and categorical subject) and links corresponding to paper-paper citations.
To address this problem, we build a base `GraphSAGE` model with the following architecture. First we build a two-layer GraphSAGE model that takes labeled `(paper1, paper2)` node pairs corresponding to possible citation links, and outputs a pair of node embeddings for the `paper1` and `paper2` nodes of the pair. These embeddings are then fed into a link classification layer, which first applies a binary operator to those node embeddings (e.g., concatenating them) to construct the embedding of the potential link. The link embeddings thus obtained are passed through the dense link classification layer to obtain link predictions - the probability for these candidate links to actually exist in the network. The entire model is trained end-to-end by minimizing the loss function of choice (e.g., binary cross-entropy between predicted link probabilities and true link labels, with true/false citation links having labels 1/0) using stochastic gradient descent (SGD) updates of the model parameters, with minibatches of 'training' links fed into the model.
Finally, using our base model, we create an ensemble with each model in the ensemble trained on a bootstrapped sample of the training data.
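The bootstrap idea behind `BaggingEnsemble`, where each member trains on a resample of the training links drawn with replacement, can be illustrated generically (a conceptual sketch, not `stellargraph` code):

```python
import numpy as np

def bootstrap_sample(n, rng):
    # Draw n indices with replacement; on average a bootstrap sample
    # contains about 1 - 1/e (~63%) of the unique training examples,
    # so each ensemble member sees a different subset of the data
    return rng.integers(0, n, size=n)

rng = np.random.default_rng(0)
idx = bootstrap_sample(10_000, rng)
print(len(np.unique(idx)) / 10_000)  # roughly 0.63
```

The diversity induced by these differing subsets is what lets the ensemble's prediction spread serve as an uncertainty signal.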
**References**
1. Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216
[cs.SI], 2017.
```
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
from tensorflow import keras
import os
import stellargraph as sg
from stellargraph.data import EdgeSplitter
from stellargraph.mapper import GraphSAGELinkGenerator
from stellargraph.layer import GraphSAGE, link_classification
from stellargraph import BaggingEnsemble
from sklearn import preprocessing, feature_extraction, model_selection
from stellargraph import globalvar
%matplotlib inline
def plot_history(history):
    def remove_prefix(text, prefix):
        # Strip the prefix if present (no-op otherwise)
        return text[text.startswith(prefix) and len(prefix):]

    figsize = (7, 5)
    c_train = 'b'
    c_test = 'g'
    metrics = sorted(set([remove_prefix(m, "val_") for m in list(history[0].history.keys())]))
    for m in metrics:
        # summarize history for metric m
        plt.figure(figsize=figsize)
        for h in history:
            plt.plot(h.history[m], c=c_train)
            plt.plot(h.history['val_' + m], c=c_test)
        plt.title(m)
        plt.ylabel(m)
        plt.xlabel('epoch')
        plt.legend(['train', 'validation'], loc='best')
        plt.show()

def load_cora(data_dir, largest_cc=False):
    g_nx = nx.read_edgelist(path=os.path.expanduser(os.path.join(data_dir, "cora.cites")))
    for edge in g_nx.edges(data=True):
        edge[2]['label'] = 'cites'

    # load the node attribute data
    cora_data_location = os.path.expanduser(os.path.join(data_dir, "cora.content"))
    node_attr = pd.read_csv(cora_data_location, sep='\t', header=None)
    values = {str(row.tolist()[0]): row.tolist()[-1] for _, row in node_attr.iterrows()}
    nx.set_node_attributes(g_nx, values, 'subject')

    if largest_cc:
        # Select the largest connected component. For clarity we ignore isolated
        # nodes and subgraphs; having these in the data does not prevent the
        # algorithm from running and producing valid results.
        g_nx_ccs = (g_nx.subgraph(c).copy() for c in nx.connected_components(g_nx))
        g_nx = max(g_nx_ccs, key=len)
        print("Largest subgraph statistics: {} nodes, {} edges".format(
            g_nx.number_of_nodes(), g_nx.number_of_edges()))

    feature_names = ["w_{}".format(ii) for ii in range(1433)]
    column_names = feature_names + ["subject"]
    node_data = pd.read_csv(os.path.join(data_dir, "cora.content"), sep='\t', header=None, names=column_names)
    node_data.index = node_data.index.map(str)
    node_data = node_data[node_data.index.isin(list(g_nx.nodes()))]
    return g_nx, node_data, feature_names
```
### Loading the CORA network data
**Downloading the CORA dataset:**
The dataset used in this demo can be downloaded from https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
The following is the description of the dataset:
> The Cora dataset consists of 2708 scientific publications classified into one of seven classes.
> The citation network consists of 5429 links. Each publication in the dataset is described by a
> 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary.
> The dictionary consists of 1433 unique words. The README file in the dataset provides more details.
Download and unzip the cora.tgz file to a location on your computer and set the `data_dir` variable to
point to the location of the dataset (the directory containing "cora.cites" and "cora.content").
```
data_dir = os.path.expanduser("~/data/cora")
```
Load the dataset
```
G, node_data, feature_names = load_cora(data_dir)
```
We need to convert node features that will be used by the model to numeric values that are required for GraphSAGE input. Note that all node features in the Cora dataset, except the categorical "subject" feature, are already numeric, and don't require the conversion.
```
if "subject" in feature_names:
    # Convert node features to numeric vectors
    feature_encoding = feature_extraction.DictVectorizer(sparse=False)
    node_features = feature_encoding.fit_transform(
        node_data[feature_names].to_dict("records")
    )
else:  # node features are already numeric, no further conversion is needed
    node_features = node_data[feature_names].values
```
Add node data to G:
```
for nid, f in zip(node_data.index, node_features):
    G.nodes[nid][globalvar.TYPE_ATTR_NAME] = "paper"  # specify node type
    G.nodes[nid]["feature"] = f
```
We aim to train a link prediction model, hence we need to prepare the train and test sets of links and the corresponding graphs with those links removed.
We are going to split our input graph into train and test graphs using the `EdgeSplitter` class in `stellargraph.data`. We will use the train graph for training the model (a binary classifier that, given two nodes, predicts whether a link between these two nodes should exist or not) and the test graph for evaluating the model's performance on hold out data.
Each of these graphs will have the same number of nodes as the input graph, but the number of links will differ (be reduced) as some of the links will be removed during each split and used as the positive samples for training/testing the link prediction classifier.
From the original graph G, extract a randomly sampled subset of test edges (true and false citation links) and the reduced graph G_test with the positive test edges removed:
```
# Define an edge splitter on the original graph G:
edge_splitter_test = EdgeSplitter(G)
# Randomly sample a fraction p=0.1 of all positive links, and same number of negative links, from G, and obtain the
# reduced graph G_test with the sampled links removed:
G_test, edge_ids_test, edge_labels_test = edge_splitter_test.train_test_split(
    p=0.1, method="global", keep_connected=True, seed=42
)
```
The reduced graph G_test, together with the test ground truth set of links (edge_ids_test, edge_labels_test), will be used for testing the model.
Now, repeat this procedure to obtain validation data that we are going to use for early stopping in order to prevent overfitting. From the reduced graph G_test, extract a randomly sampled subset of validation edges (true and false citation links) and the reduced graph G_val with the positive validation edges removed.
```
# Define an edge splitter on the reduced graph G_test:
edge_splitter_val = EdgeSplitter(G_test)
# Randomly sample a fraction p=0.1 of all positive links, and same number of negative links, from G_test, and obtain the
# reduced graph G_val with the sampled links removed:
G_val, edge_ids_val, edge_labels_val = edge_splitter_val.train_test_split(
    p=0.1, method="global", keep_connected=True, seed=100
)
```
We repeat this procedure one last time in order to obtain the training data for the model.
From the reduced graph G_val, extract a randomly sampled subset of train edges (true and false citation links) and the reduced graph G_train with the positive train edges removed:
```
# Define an edge splitter on the reduced graph G_val:
edge_splitter_train = EdgeSplitter(G_val)
# Randomly sample a fraction p=0.1 of all positive links, and same number of negative links, from G_val, and obtain the
# reduced graph G_train with the sampled links removed:
G_train, edge_ids_train, edge_labels_train = edge_splitter_train.train_test_split(
    p=0.1, method="global", keep_connected=True, seed=42
)
```
G_train, together with the train ground truth set of links (edge_ids_train, edge_labels_train), will be used for training the model.
Convert G_train, G_val, and G_test to StellarGraph objects (undirected, as required by GraphSAGE) for ML:
```
G_train = sg.StellarGraph(G_train, node_features="feature")
G_test = sg.StellarGraph(G_test, node_features="feature")
G_val = sg.StellarGraph(G_val, node_features="feature")
```
Summary of G_train, G_val, and G_test - note that they have the same set of nodes, only differing in their edge sets:
```
print(G_train.info())
print(G_test.info())
print(G_val.info())
```
### Specify global parameters
Here we specify some important parameters that control the type of ensemble model we are going to use. For example, we specify the number of models in the ensemble and the number of predictions per query point per model.
```
n_estimators = 5 # Number of models in the ensemble
n_predictions = 10 # Number of predictions per query point per model
```
Next, we create link generators for sampling and streaming train and test link examples to the model. The link generators essentially "map" pairs of nodes `(paper1, paper2)` to the input of GraphSAGE: they take minibatches of node pairs, sample 2-hop subgraphs with `(paper1, paper2)` head nodes extracted from those pairs, and feed them, together with the corresponding binary labels indicating whether those pairs represent true or false citation links, to the input layer of the GraphSAGE model, for SGD updates of the model parameters.
Specify the minibatch size (number of node pairs per minibatch) and the number of epochs for training the model:
```
batch_size = 20
epochs = 20
```
Specify the sizes of 1- and 2-hop neighbour samples for GraphSAGE. Note that the length of `num_samples` list defines the number of layers/iterations in the GraphSAGE model. In this example, we are defining a 2-layer GraphSAGE model:
```
num_samples = [20, 10]
```
### Create the generators for training
For training we create a generator on the `G_train` graph. The `shuffle=True` argument is given to the `flow` method to improve training.
```
generator = GraphSAGELinkGenerator(G_train, batch_size, num_samples)
train_gen = generator.flow(edge_ids_train,
                           edge_labels_train,
                           shuffle=True)
```
At test time we use the `G_test` graph and don't specify the `shuffle` argument (it defaults to `False`).
```
test_gen = GraphSAGELinkGenerator(G_test, batch_size, num_samples).flow(edge_ids_test,
                                                                        edge_labels_test)
val_gen = GraphSAGELinkGenerator(G_val, batch_size, num_samples).flow(edge_ids_val,
                                                                     edge_labels_val)
```
### Create the base GraphSAGE model
Build the model: a 2-layer GraphSAGE model acting as node representation learner, with a link classification layer on concatenated `(paper1, paper2)` node embeddings.
GraphSAGE part of the model, with hidden layer sizes of 20 for both GraphSAGE layers, a bias term, and a dropout rate of 0.5. (Dropout can be switched off by setting the rate to 0.)
Note that the length of layer_sizes list must be equal to the length of num_samples, as len(num_samples) defines the number of hops (layers) in the GraphSAGE model.
```
layer_sizes = [20, 20]
assert len(layer_sizes) == len(num_samples)
graphsage = GraphSAGE(
    layer_sizes=layer_sizes, generator=generator, bias=True, dropout=0.5
)

# Build the model and expose the input and output tensors.
x_inp, x_out = graphsage.build()
```
Final link classification layer that takes a pair of node embeddings produced by graphsage, applies a binary operator to them to produce the corresponding link embedding ('ip' for inner product; other options for the binary operator can be seen by running a cell with `?link_classification` in it), and passes it through a dense layer:
```
prediction = link_classification(
    output_dim=1, output_act="relu", edge_embedding_method='ip'
)(x_out)
```
Stack the GraphSAGE and prediction layers into a Keras model.
```
base_model = keras.Model(inputs=x_inp, outputs=prediction)
```
Now we create the ensemble based on `base_model` we just created.
```
model = BaggingEnsemble(model=base_model, n_estimators=n_estimators, n_predictions=n_predictions)
```
We need to `compile` the model specifying the optimiser, loss function, and metrics to use.
```
model.compile(
    optimizer=keras.optimizers.Adam(lr=1e-3),
    loss=keras.losses.binary_crossentropy,
    weighted_metrics=["acc"],
)
```
Evaluate the initial (untrained) ensemble of models on the train and test set:
```
init_train_metrics_mean, init_train_metrics_std = model.evaluate_generator(train_gen)
init_test_metrics_mean, init_test_metrics_std = model.evaluate_generator(test_gen)

print("\nTrain Set Metrics of the initial (untrained) model:")
for name, m, s in zip(model.metrics_names, init_train_metrics_mean, init_train_metrics_std):
    print("\t{}: {:0.4f}±{:0.4f}".format(name, m, s))

print("\nTest Set Metrics of the initial (untrained) model:")
for name, m, s in zip(model.metrics_names, init_test_metrics_mean, init_test_metrics_std):
    print("\t{}: {:0.4f}±{:0.4f}".format(name, m, s))
```
### Train the ensemble model
We are going to use **bootstrap samples** of the training dataset to train each model in the ensemble. For this purpose, we need to pass `generator`, `edge_ids_train`, and `edge_labels_train` to the `fit_generator` method.
Note that training time will vary based on computer speed. Set `verbose=1` for reporting of training progress.
```
history = model.fit_generator(
    generator=generator,
    train_data=edge_ids_train,
    train_targets=edge_labels_train,
    epochs=epochs,
    validation_data=val_gen,
    verbose=0,
    use_early_stopping=True,  # Enable early stopping
    early_stopping_monitor="val_weighted_acc",
)
```
Plot the training history:
```
plot_history(history)
```
Evaluate the trained model on test citation links. After training the model, performance should be better than before training (shown above):
```
train_metrics_mean, train_metrics_std = model.evaluate_generator(train_gen)
test_metrics_mean, test_metrics_std = model.evaluate_generator(test_gen)

print("\nTrain Set Metrics of the trained model:")
for name, m, s in zip(model.metrics_names, train_metrics_mean, train_metrics_std):
    print("\t{}: {:0.4f}±{:0.4f}".format(name, m, s))

print("\nTest Set Metrics of the trained model:")
for name, m, s in zip(model.metrics_names, test_metrics_mean, test_metrics_std):
    print("\t{}: {:0.4f}±{:0.4f}".format(name, m, s))
```
### Make predictions with the model
Now let's get the predictions for all the edges in the test set.
```
test_predictions = model.predict_generator(generator=test_gen)
```
These predictions will be the output of the last layer in the model with `sigmoid` activation.
The array `test_predictions` has dimensionality $M \times K \times N \times F$ where $M$ is the number of estimators in the ensemble (`n_estimators`); $K$ is the number of predictions per query point per estimator (`n_predictions`); $N$ is the number of query points (`len(edge_ids_test)`); and $F$ is the output dimensionality of the specified layer (in this case it is equal to 1 since we are performing binary classification).
```
type(test_predictions), test_predictions.shape
```
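Given that four-dimensional shape, a point estimate and an uncertainty estimate per query point can be obtained by aggregating over the first two axes. A sketch on synthetic values (the variable names here are illustrative, not part of the `stellargraph` API):

```python
import numpy as np

# Illustrative sizes: 5 estimators, 10 predictions each, 8 query points, output dim 1
M, K, N, F = 5, 10, 8, 1
preds = np.random.default_rng(0).uniform(size=(M, K, N, F))

point_estimate = preds.mean(axis=(0, 1))  # (N, F): mean over members and repeats
uncertainty = preds.std(axis=(0, 1))      # (N, F): spread across the ensemble
print(point_estimate.shape, uncertainty.shape)
```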
For demonstration, we are going to select one of the edges in the test set, and plot the ensemble's predictions for that edge.
Change the value of `selected_query_point` (any integer from `0` to `len(edge_ids_test) - 1`, or a negative index) to visualise the results for another test point.
```
selected_query_point = -10

# Select the predictions for the point specified by selected_query_point
qp_predictions = test_predictions[:, :, selected_query_point, :]

# The shape should be n_estimators x n_predictions x size_output_layer
qp_predictions.shape
```
Next, to facilitate plotting the predictions using either a density plot or a box plot, we are going to reshape `qp_predictions` to $R\times F$ where $R$ is equal to $M\times K$ as above and $F$ is the output dimensionality of the output layer.
```
qp_predictions = qp_predictions.reshape(np.prod(qp_predictions.shape[0:-1]),
                                        qp_predictions.shape[-1])
qp_predictions.shape
```
The model returns the probability of an edge, the positive class. The probability of no edge is just its complement. Let's calculate it so that we can plot the distribution of predictions for both outcomes.
```
qp_predictions = np.hstack((qp_predictions, 1. - qp_predictions))
```
We'd like to assess the ensemble's confidence in its predictions in order to decide if we can trust them or not. Utilising a box plot, we can visually inspect the ensemble's distribution of prediction probabilities for a point in the test set.
If the spread of values for the predicted point class is well separated from those of the other class with little overlap then we can be confident that the prediction is correct.
```
correct_label = "Edge"
if edge_labels_test[selected_query_point] == 0:
    correct_label = "No Edge"

fig, ax = plt.subplots(figsize=(12, 6))
ax.boxplot(x=qp_predictions)
ax.set_xticklabels(["Edge", "No Edge"])
ax.tick_params(axis='x', rotation=45)
plt.title("Correct label is " + correct_label)
plt.ylabel("Predicted Probability")
plt.xlabel("Class")
```
For the selected pair of nodes (query point), the ensemble is not certain as to whether an edge between these two nodes should exist. This can be inferred by the large spread of values as indicated in the above figure.
(Note that due to the stochastic nature of training neural network algorithms, the above conclusion may not be valid if you re-run the notebook; however, the general conclusion that the use of ensemble learning can be used to quantify the model's uncertainty about its prediction still holds.)
The image below shows an example of the classifier making a correct prediction with higher confidence than the above example. The result is for the setting `selected_query_point=0`.

```
%matplotlib inline
import numpy as np
import scipy.spatial
import pandas as pd
import sklearn.decomposition
import matplotlib.pyplot as plt
import seaborn as sb
import linear_cca
import multimodal_data
```
# Useful References
* https://arxiv.org/pdf/1711.02391.pdf
* http://users.stat.umn.edu/~helwig/notes/cancor-Notes.pdf
* https://www.statisticssolutions.com/canonical-correlation/
# Load data
```
# Load L1000 profiles (the replicate-level load overwrites the treatment-level
# one, so only the replicate-level profiles are used below)
l1k = multimodal_data.load_l1000("treatment_level_all_alleles.csv")
l1k = multimodal_data.load_l1000("replicate_level_all_alleles.csv")
cp = multimodal_data.load_cell_painting(
    "/data1/luad/others/morphology.csv",
    "resnet18-validation-well_profiles.csv",
    aggregate_replicates=False
)
l1k, cp = multimodal_data.align_profiles(l1k, cp, sample=4)
common_alleles = set(cp["Allele"].unique()).intersection(l1k["Allele"].unique())
genes = list(common_alleles)
genes = [x for x in genes if x not in ["EGFP", "BFP", "HCRED"]]
l1k = l1k[l1k.Allele.isin(genes)]
cp = cp[cp.Allele.isin(genes)]
```
# Compute CCA
```
# Preprocessing to the data:
# 1. Standardize features (z-scoring)
# 2. Reduce dimensionality (PCA down to 100 features)
# This is necessary because we only have 175 data points,
# while L1000 has 978 features and Cell Painting has 256.
# So PCA is useful as a regularizer somehow.
def cca_analysis(GE_train, MF_train, GE_test, MF_test):
# Prepare Gene Expression matrix
sc_l1k = sklearn.preprocessing.StandardScaler()
sc_l1k.fit(GE_train)
GE = sc_l1k.transform(GE_train)
pca_l1k = sklearn.decomposition.PCA(n_components=150, svd_solver="full")
pca_l1k.fit(GE)
GE = pca_l1k.transform(GE)
# Prepare Cell Painting matrix
sc_cp = sklearn.preprocessing.StandardScaler()
sc_cp.fit(MF_train)
MF = sc_cp.transform(MF_train)
pca_cp = sklearn.decomposition.PCA(n_components=100, svd_solver="full")
pca_cp.fit(MF)
MF = pca_cp.transform(MF)
# Compute CCA
A, B, D, ma, mb = linear_cca.linear_cca(MF, GE, 10)
X = pca_cp.transform(sc_cp.transform(MF_test))
Y = pca_l1k.transform(sc_l1k.transform(GE_test))
X = np.dot(X, A)
Y = np.dot(Y, B)
return X, Y, D
GE = np.asarray(l1k)[:,1:]
MF = np.asarray(cp)[:,1:]
MF_v, GE_v, D = cca_analysis(GE, MF, GE, MF)
# In linear CCA, the canonical correlations are the square roots of the eigenvalues:
plt.plot(np.sqrt(D))
print("First canonical correlation: ", np.sqrt(D[0]))
dist = scipy.spatial.distance_matrix(MF_v[:,0:2], GE_v[:,0:2])
NN = np.argsort(dist, axis=1)  # Nearest gene-expression point to each morphology point
plt.figure(figsize=(10,10))
plt.scatter(MF_v[:,0], MF_v[:,1], c="blue", s=50, edgecolor='gray', linewidths=1)
plt.scatter(GE_v[:,0]+0, GE_v[:,1]+0, c="lime", edgecolor='gray', linewidths=1)
connected = 0
for i in range(MF_v.shape[0]):
    for j in range(7):  # only the 7 nearest neighbors, instead of GE_v.shape[0]
        if cp.iloc[i].Allele == l1k.iloc[NN[i,j]].Allele:
            plt.plot([GE_v[NN[i,j],0], MF_v[i,0]], [GE_v[NN[i,j],1], MF_v[i,1]], 'k-', color="red")
            # if np.random.random() > 0.9:
            #     plt.text(GE_v[i,0], GE_v[i,1], l1k.iloc[i].Allele, horizontalalignment='left', size='medium', color='black')
            connected += 1
            # break
print(connected)
# plt.xlim(-2,2)
# plt.ylim(-2,2)
df = pd.DataFrame(data={"cca1": np.concatenate((GE_v[:,0], MF_v[:,0])),
"cca2": np.concatenate((GE_v[:,1],MF_v[:,1])),
"source": ["L1K" for x in range(GE_v.shape[0])]+["CP" for x in range(MF_v.shape[0])],
"allele": list(l1k["Allele"]) + list(cp["Allele"])}
)
df["color"] = df["allele"].str.find("EGFR") != -1
sb.lmplot(data=df, x="cca1", y="cca2", hue="color", fit_reg=False, col="source")
plt.figure(figsize=(10,10))
plt.scatter(MF_v[:,0], MF_v[:,1], c="blue", s=100, edgecolor='gray', linewidths=1)
plt.figure(figsize=(10,10))
plt.scatter(GE_v[:,0]+0, GE_v[:,1]+0, c="lime", s=100, edgecolor='gray', linewidths=1)
```
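`linear_cca` is a local module of this project, so its exact interface above is an assumption. A minimal NumPy sketch of linear CCA (whiten each view, then take the SVD of the whitened cross-covariance) illustrates why the canonical correlations are the square roots of the returned eigenvalues:

```
import numpy as np

def linear_cca_sketch(X, Y, k, reg=1e-4):
    """Project two views onto k maximally correlated directions."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    ex, Ux = np.linalg.eigh(Sxx)                   # whitening transforms
    ey, Uy = np.linalg.eigh(Syy)
    Wx = Ux @ np.diag(ex ** -0.5) @ Ux.T
    Wy = Uy @ np.diag(ey ** -0.5) @ Uy.T
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)        # singular values = canonical correlations
    A = Wx @ U[:, :k]                              # projection matrix for X
    B = Wy @ Vt[:k].T                              # projection matrix for Y
    return A, B, s[:k] ** 2                        # D holds the squared correlations

# Two noisy linear views of the same latent signal
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
Y = X @ rng.normal(size=(5, 4)) + 0.01 * rng.normal(size=(300, 4))
A, B, D = linear_cca_sketch(X, Y, 3)
print(np.sqrt(D))  # first canonical correlation should be close to 1
```

Since the two views here are almost exact linear maps of each other, the leading canonical correlation sits near 1, mirroring the `np.sqrt(D)` plot above.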
# Annotate visualization
```
def visualize_annotations(l1k, cp, GE_v, MF_v, display_items=[]):
    ge_data = pd.DataFrame(data=l1k["Allele"].reset_index())
    ge_data["x"] = GE_v[:,0]
    ge_data["y"] = GE_v[:,1]
    ge_data.columns = ["idx", "Allele", "x", "y"]
    ge_data["type"] = "GeneExpression"
    mf_data = pd.DataFrame(data=cp["Allele"].reset_index())
    mf_data["x"] = MF_v[:,0]
    mf_data["y"] = MF_v[:,1]
    mf_data.columns = ["idx", "Allele", "x", "y"]
    mf_data["type"] = "Morphology"
    data = pd.concat([ge_data, mf_data])
    plt.figure(figsize=(12,12))
    p1 = sb.regplot(data=ge_data, x="x", y="y", fit_reg=False, color="red", scatter_kws={'s':50})
    p2 = sb.regplot(data=mf_data, x="x", y="y", fit_reg=False, color="blue", scatter_kws={'s':50})
    for point in range(ge_data.shape[0]):
        # if ge_data.Allele[point] in display_items:
        p1.text(ge_data.x[point], ge_data.y[point], ge_data.Allele[point], horizontalalignment='left', size='medium', color='black')
    for point in range(mf_data.shape[0]):
        # if mf_data.Allele[point] in display_items:
        p2.text(mf_data.x[point], mf_data.y[point], mf_data.Allele[point], horizontalalignment='left', size='medium', color='black')
visualize_annotations(l1k, cp, GE_v, MF_v, display_items=["NFE2L2_p.T80K","EGFP"])
```
# Visualization in the test set
```
common_alleles = set(cp["Allele"].unique()).intersection( l1k["Allele"].unique() )
genes = list(common_alleles)
np.random.shuffle(genes)
train = genes[0:9*int(len(genes)/10)]
test = genes[9*int(len(genes)/10):]
l1k_test = l1k[l1k["Allele"].isin(test)]
cp_test = cp[cp["Allele"].isin(test)]
GE_train = np.asarray(l1k[l1k["Allele"].isin(train)])[:,1:]
MF_train = np.asarray(cp[cp["Allele"].isin(train)])[:,1:]
GE_test = np.asarray(l1k_test)[:,1:]
MF_test = np.asarray(cp_test)[:,1:]
MF_v, GE_v, D = cca_analysis(GE_train, MF_train, GE_test, MF_test)
visualize_annotations(l1k_test, cp_test, GE_v, MF_v)
dist = scipy.spatial.distance_matrix(MF_v[:,0:2], GE_v[:,0:2])
NN = np.argsort(dist, axis=1)  # Nearest gene-expression point to each morphology point
plt.figure(figsize=(10,10))
plt.scatter(MF_v[:,0], MF_v[:,1], c="blue", s=50, edgecolor='gray', linewidths=1)
plt.scatter(GE_v[:,0], GE_v[:,1], c="red", edgecolor='gray', linewidths=1)
connected = 0
for i in range(MF_v.shape[0]):
    for j in range(7):
        # Index the test-set frames, not the full frames, so alleles line up
        if cp_test.iloc[i].Allele == l1k_test.iloc[NN[i,j]].Allele:
            plt.plot([GE_v[NN[i,j],0], MF_v[i,0]], [GE_v[NN[i,j],1], MF_v[i,1]], 'k-', color="lime")
            connected += 1
# In linear CCA, the canonical correlations are the square roots of the eigenvalues:
plt.plot(np.sqrt(D))
print("First canonical correlation: ", np.sqrt(D[0]))
```
# Visualize data matrices
```
GE = np.asarray(GE, dtype=np.float32)  # ensure numeric dtype before scaling
X = (GE - np.min(GE)) / (np.max(GE) - np.min(GE))
plt.imshow(X)
MF = np.asarray(MF, dtype=np.float32)
X = (MF - np.min(MF)) / (np.max(MF) - np.min(MF))
plt.imshow(X)
```
```
#!pip install "tensorflow<2"  # this notebook uses the TF 1.x graph API (placeholders, sessions)
import tensorflow as tf
import numpy as np
corpus_raw = 'He is the king . The king is royal . She is the royal queen '
# convert to lower case
corpus_raw = corpus_raw.lower()
words = []
for word in corpus_raw.split():
    if word != '.':  # because we don't want to treat . as a word
        words.append(word)
words = set(words)  # so that all duplicate words are removed
word2int = {}
int2word = {}
vocab_size = len(words) # gives the total number of unique words
for i, word in enumerate(words):
    word2int[word] = i
    int2word[i] = word
# raw sentences is a list of sentences.
raw_sentences = corpus_raw.split('.')
sentences = []
for sentence in raw_sentences:
    sentences.append(sentence.split())
WINDOW_SIZE = 2
data = []
for sentence in sentences:
    for word_index, word in enumerate(sentence):
        for nb_word in sentence[max(word_index - WINDOW_SIZE, 0): min(word_index + WINDOW_SIZE, len(sentence)) + 1]:
            if nb_word != word:
                data.append([word, nb_word])
# function to convert numbers to one hot vectors
def to_one_hot(data_point_index, vocab_size):
    temp = np.zeros(vocab_size)
    temp[data_point_index] = 1
    return temp
x_train = []  # input word
y_train = []  # output word
for data_word in data:
    x_train.append(to_one_hot(word2int[data_word[0]], vocab_size))
    y_train.append(to_one_hot(word2int[data_word[1]], vocab_size))
# convert them to numpy arrays
x_train = np.asarray(x_train)
y_train = np.asarray(y_train)
# making placeholders for x_train and y_train
x = tf.placeholder(tf.float32, shape=(None, vocab_size))
y_label = tf.placeholder(tf.float32, shape=(None, vocab_size))
EMBEDDING_DIM = 5 # you can choose your own number
W1 = tf.Variable(tf.random_normal([vocab_size, EMBEDDING_DIM]))
b1 = tf.Variable(tf.random_normal([EMBEDDING_DIM])) #bias
hidden_representation = tf.add(tf.matmul(x,W1), b1)
W2 = tf.Variable(tf.random_normal([EMBEDDING_DIM, vocab_size]))
b2 = tf.Variable(tf.random_normal([vocab_size]))
prediction = tf.nn.softmax(tf.add( tf.matmul(hidden_representation, W2), b2))
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init) #make sure you do this!
# define the loss function:
cross_entropy_loss = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(prediction), reduction_indices=[1]))
# define the training step:
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy_loss)
n_iters = 10000
# train for n_iters iterations
for _ in range(n_iters):
    sess.run(train_step, feed_dict={x: x_train, y_label: y_train})
    print('loss is : ', sess.run(cross_entropy_loss, feed_dict={x: x_train, y_label: y_train}))
vectors = sess.run(W1 + b1)
def euclidean_dist(vec1, vec2):
    return np.sqrt(np.sum((vec1 - vec2) ** 2))

def find_closest(word_index, vectors):
    min_dist = 10000  # to act like positive infinity
    min_index = -1
    query_vector = vectors[word_index]
    for index, vector in enumerate(vectors):
        if euclidean_dist(vector, query_vector) < min_dist and not np.array_equal(vector, query_vector):
            min_dist = euclidean_dist(vector, query_vector)
            min_index = index
    return min_index
find_closest(0, vectors)
for w, i in word2int.items():
    closest = find_closest(i, vectors)  # use the word's own index, not 0
    print(w, int2word[closest])
from sklearn.manifold import TSNE
model = TSNE(n_components=2, random_state=0)
np.set_printoptions(suppress=True)
vectors = model.fit_transform(vectors)
from sklearn import preprocessing
normalizer = preprocessing.Normalizer(norm='l2')
nvectors = normalizer.fit_transform(vectors)
print(nvectors)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
print(words)
for word in words:
    print(word, vectors[word2int[word]][1])
    ax.annotate(word, (nvectors[word2int[word]][0], nvectors[word2int[word]][1]))
plt.show()
import gensim
# Word2Vec is the model class; it expects tokenized sentences
wv = gensim.models.Word2Vec(sentences, min_count=1)
```
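The skip-gram pair generation in the cell above can be checked in isolation; this small sketch mirrors the window loop on a single sentence:

```
WINDOW_SIZE = 2
sentence = "he is the king".split()

# Pair each word with its neighbors inside the context window
pairs = []
for i, word in enumerate(sentence):
    window = sentence[max(i - WINDOW_SIZE, 0): min(i + WINDOW_SIZE, len(sentence)) + 1]
    for nb_word in window:
        if nb_word != word:
            pairs.append((word, nb_word))

print(pairs[:4])  # [('he', 'is'), ('he', 'the'), ('is', 'he'), ('is', 'the')]
```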
```
import re
import os
import copy
from math import log, pow
import subprocess
import matplotlib.pyplot as plt
```
#### Fault Classes
| Fault | Binary | Decimal |
| --- | --- | --- |
| A | 00 | 0 |
| B | 01 | 1 |
| C | 10 | 2 |
| D | 11 | 3 |
#### Fault Code = (variable << 2) + fault_class
| Fault | Binary | Decimal |
| --- | --- | --- |
| A1 | 100 | 4 |
| B1 | 101 | 5 |
| C1 | 110 | 6 |
| D1 | 111 | 7 |
### Sample (C1)
```
fault = 2
variable = 1
code = (variable << 2) + fault
code
```
#### Reverse
```
fault = code & 3
fault
variable = code >> 2
variable
```
#### Current Fault Codes
| Fault | Decimal |
| --- | --- |
| A1 | 4 |
| B1 | 5 |
| C1 | 6 |
| D1 | 7 |
| A1B1 | 4-5 |
| A1C1 | 4-6 |
| A1D1 | 4-7 |
| B1C1 | 5-6 |
| C1D1 | 6-7 |
#### Flow Chart
```
from graphviz import Digraph
dot = Digraph(node_attr={'shape': 'box'}, format='png', filename='sensorfusion')
dot.edge_attr.update(arrowhead='vee', arrowsize='1')
dot.node('0', 'Fault Generator')
dot.node('1', 'Other Faults')
dot.node('2', 'Supervisor (Polling)')
dot.node('3', 'All faults\n processed\n?' ,shape='diamond')
dot.node('4', 'Send trigger signal \n &\n increment frequency')
dot.node('5', 'Append fault')
dot.node('6', 'Does fault\n exist?', shape='diamond')
dot.node('7', 'Remove and delay fault\nwith lower\npriority')
dot.node('8', 'Time\nOut?', shape='diamond')
dot.node('9', 'end', shape='oval')
dot.node('10', 'start', shape='oval')
dot.node('11', 'Delayed Faults')
dot.edge('2', '1')
dot.edge('0', '2', ' Possibility\nof fault')
dot.edge('1', '5', ' Possibility\nof fault')
dot.edge('2', '3')
dot.edge('3', '6', 'Yes')
dot.edge('3', '5', 'No')
dot.edge('4', '8')
dot.edge('5', '6')
dot.edge('6', '4', 'Yes')
dot.edge('6', '7', 'No')
dot.edge('7', '3')
dot.edge('8', '2', 'No')
dot.edge('8', '9', 'Yes')
dot.edge('10', '0')
dot.edge('11', '1')
dot
#dot.save()
#dot.render(view=True)
```
### Run
```
command = "D:/code/C++/RT-Cadmium-FDD-New-Code/top_model/main.exe"
completed_process = subprocess.run(command, shell=False, capture_output=True, text=True)
#print(completed_process.stdout)
```
### Read from file
```
fileName = "SensorFusion.txt"
fault_codes = {}
with open(fileName, "r") as f:
lines = f.readlines()
with open(fileName, "r") as f:
output = f.read()
for line in lines:
    if re.search("supervisor", line) is not None:
        res = re.findall(r"\{\d+[, ]*\d*[, ]*\d*\}", line)
        if len(res) > 0:
            str_interest = res[0].replace('}', '').replace('{', '')
            faults = str_interest.split(', ')
            key = '-' + '-'.join(faults) + '-'
            fault_codes[key] = fault_codes.get(key, 0) + 1
generators = {'A': 0, 'B': 0, 'C': 0, 'D': 0}
for key in generators.keys():
    generators[key] = len(re.findall("faultGen" + key, output))
fault_codes
```
### ANALYSIS / VERIFICATION
#### Definitions
**Pure Fault**: Faults from a single generator.
**Compound Faults**: Faults formed from the combination of pure faults.
### Premise
Fault $A1$: Should have no discarded entry, because it has the highest priority
Fault $B1$: Should have some discarded value, for the case $BD$, which is not available
Fault $C1$: Higher percentage of discarded cases than $C$, because of its lower priority
Fault $D1$: Highest percentage of discarded cases, because it has the lowest priority
Generator $output_{A1} = n({A1}) + n({A1} \cap {B1}) + n({A1} \cap {C1}) + n({A1} \cap {D1}) + discarded_{A1}$
Generator $output_{B1} = n({B1}) + n({A1} \cap {B1}) + n({B1} \cap {C1}) + discarded_{B1}$
Generator $output_{C1} = n({C1}) + n({A1} \cap {C1}) + n({B1} \cap {C1}) + n({C1} \cap {D1}) + discarded_{C1}$
Generator $output_{D1} = n({D1}) + n({A1} \cap {D1}) + n({C1} \cap {D1}) + discarded_{D1}$
Where $discarded_{A1} \equiv 0$, because A has the highest priority, and $discarded_{B1} = 0$ because B1 has a fault code combination with the others in the right order, using the priority system.
```
def sumFromSupervisor(code):
    '''
    Returns the number of times faults associated with a particular
    pure fault (the parameter) were output by the supervisor.
    @param code: int
    @return int
    '''
    total = 0
    for key, value in fault_codes.items():
        if '-' + str(code) + '-' in key:
            total += value
    return total
a_discarded = generators['A'] - sumFromSupervisor(4)
a_discarded
b_discarded = generators['B'] - sumFromSupervisor(5)
b_discarded
c_discarded = generators['C'] - sumFromSupervisor(6)
c_discarded
d_discarded = generators['D'] - sumFromSupervisor(7)
d_discarded
total_discarded = a_discarded + b_discarded + c_discarded + d_discarded
total_discarded
total_generated = generators['A'] + generators['B'] + generators['C'] + generators['D']
total_generated
discarded = {'A': a_discarded, 'B': b_discarded, 'C': c_discarded, 'D': d_discarded}
discarded_percentage = {'A': a_discarded * 100 / total_generated, 'B': b_discarded * 100 / total_generated, 'C': c_discarded * 100 / total_generated, 'D': d_discarded * 100 / total_generated}
discarded_percentage
fault_codes
a_increment = generators['A'] - fault_codes['-4-5-'] - fault_codes['-4-6-'] - fault_codes['-4-7-'] - a_discarded
a_increment
b_increment = generators['B'] - fault_codes['-4-5-'] - fault_codes['-5-6-'] - b_discarded
b_increment
c_increment = generators['C'] - fault_codes['-4-6-'] - fault_codes['-5-6-'] - fault_codes['-6-7-'] - c_discarded
c_increment
d_increment = generators['D'] - fault_codes['-4-7-'] - fault_codes['-6-7-'] - d_discarded
d_increment
```
### Discard Charts
```
#plt.title('Discarded Bar')
plt.bar(discarded.keys(), discarded.values())
#plt.show()
plt.savefig('discarded bar.png', format='png')
keys, values = list(discarded.keys()), list(discarded.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
    legend_keys[i] = str(legend_keys[i]) + " = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Discarded Pie")
#plt.show()
plt.savefig('discard pie.png', format='png')
```
### Discard Percentage Charts
```
#plt.title('Discard Percentage')
plt.bar(discarded_percentage.keys(), discarded_percentage.values())
#plt.show()
plt.savefig('sensorfusion.png', format='png')
keys, values = list(discarded_percentage.keys()), list(discarded_percentage.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
    legend_keys[i] = str(legend_keys[i]) + " (%) = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Discard Percentage")
plt.show()
plt.savefig('discard percentage pie.png')
```
### Toggle Time vs Frequency of Generators
```
toggle_times = {'A': 620, 'B': 180, 'C': 490, 'D': 270}
```
#### Premise
$faults\,generated \propto \frac{1}{toggle\,time}$
$\therefore B > D > C > A$
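Under this premise, the expected ordering can be derived directly from the toggle times defined in the cell above:

```
# Toggle times copied from the cell above
toggle_times = {'A': 620, 'B': 180, 'C': 490, 'D': 270}

# faults generated ∝ 1 / toggle time, so rank by ascending toggle time
expected_order = sorted(toggle_times, key=toggle_times.get)
print(" > ".join(expected_order))  # B > D > C > A
```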
### Generator Output Charts (Possibilities of Faults)
```
generators['A']
#plt.title('Generator Output (Possibilities of Faults)')
plt.bar(generators.keys(), generators.values())
#plt.show()
plt.savefig('generator output bar.png')
keys, values = list(generators.keys()), list(generators.values())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
    legend_keys[i] = "n (" + str(legend_keys[i]) + ") = " + str(values[i])
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Generator Output Charts (Possibilities of Fault)")
#plt.show()
plt.savefig('generator output pie.png')
```
### Single-Run Fault Charts
```
chart_data = copy.copy(fault_codes)
values = list(chart_data.values())
keys = list(chart_data.keys())
#plt.title('Single-Run')
plt.bar(keys, values)
#plt.show()
plt.savefig('single-run bar.png')
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values,
textprops=dict(color="w"),
wedgeprops=dict(width=0.5))
legend_keys = copy.copy(keys)
for i in range(len(keys)):
    legend_keys[i] = str(legend_keys[i]) + " " + str(values[i]) + " " + "times"
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Single-Run")
#plt.show()
plt.savefig('single-run pie.png')
```
### Cumulative Faults Chart
```
fileName = "D:/code/C++/RT-Cadmium-FDD-New-Code/knowledge_base/_fault_codes_dir/_fault_codes_list.txt"
with open(fileName, "r") as f:
lines = f.readlines()
total = {}
for line in lines:
    res = re.findall(r"\d+[-]*\d*", line)
    if len(res) > 0:
        total[res[0]] = int(res[1])  # store counts as integers so plt.bar scales correctly
values = list(total.values())
keys = list(total.keys())
#plt.title('Cumulative')
plt.bar(keys, values)
#plt.show()
plt.savefig('cumulative bar.png')
values = list(total.values())
keys = list(total.keys())
legend_keys = copy.copy(keys)
for i in range(len(keys)):
    legend_keys[i] = str(legend_keys[i]) + " " + str(values[i]) + " " + 'times'
# Remove wedgeprops to make pie
wedges, texts = plt.pie(values, textprops=dict(color="w"), wedgeprops=dict(width=0.5))
plt.legend(wedges, legend_keys,
title="Fault Codes",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
#plt.title("Cumulative")
#plt.show()
plt.savefig('cumulative pie.png')
```
# Six Degrees Of Wikipedia
**A Project by Robin Graham-Hayes**
```
# Run this code to import the necessary files and libraries
import helpers
import pathways
import wikipedia as wiki
```
## Introduction
In the Wikipedia game the goal is to get from one Wikipedia page to another in the fewest clicks. You can only use the links in the main article; Talk: pages and other meta Wikipedia pages are uninteresting and outside the heart of the game.
My project is a variation of this game. Its goal is to find all the valid pathways between two pages. It can be really interesting to find unintuitive pathways between pages. A pathway such as `| Toothpaste ---> Apricot ---> Alexander the Great |` demonstrates how pathways can be unintuitive without the specific context that the articles give. Without this program, a lot of these short, odd pathways would be hard to find.
## Methodology
#### The Wikipedia API
The first problem I had to solve was how to interface with Wikipedia pages. I could have used BeautifulSoup, but I was able to find a nice Wikipedia API wrapper that made interfacing with Wikipedia pretty simple. I used the bash command `pip install wikipedia` to install it and ran some basic tests:
```
mars = wiki.page("Mars (planet)")
mars_links = mars.links
print("Here is a sample of the links:", mars_links[20:25])
```
Now I have a basic way to get all the links in a page. In addition, the Wikipedia API will throw errors if the page does not exist or if it is a disambiguation page, which is useful for handling broken links and the alternative format of a disambiguation page.
#### Depth First Searches
The next problem I had to tackle was how to actually search through Wikipedia pages. I knew I needed to specify a maximum depth to search; otherwise the program could end up thousands of pages deep in one pathway it needs to check. I also knew that I can't know in advance how many times the search will have to iterate. Given that, a recursive depth-first search made the most sense for my purpose.
*The following graphic depicts the basic pattern of a depth-first search:*
```
       A
     / | \
    B  E  G
   / \ |   \
  C  D F    H
```
The program starts by searching the first page. It then searches through the first link, and then through all the following links until it reaches its maximum depth in the search. After that, it starts going back through the previous branches that were not included in the first deep search. In the above graphic, the search pattern is depicted in alphabetical order.
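The traversal order in the graphic can be reproduced with a tiny recursive walk (the `tree` dict below is just the graphic encoded as an adjacency list):

```
# The example tree from the graphic above
tree = {"A": ["B", "E", "G"], "B": ["C", "D"], "E": ["F"], "G": ["H"]}

def dfs(node, order):
    order.append(node)                 # visit the node first
    for child in tree.get(node, []):   # then recurse into each branch in turn
        dfs(child, order)
    return order

print(dfs("A", []))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
```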
#### The Algorithm
To create the search helper `helpers.get_pathway()`, there are some checks that need to happen first. First we need to check whether the page has the end page as a link in the article.
```
print(helpers.has_end(mars.title, wiki.page("The Moon")))
print(helpers.has_end(mars.title, wiki.page("Salt")))
```
Next we need to check if the max depth has been reached; since we are checking the links beforehand, this check happens one step earlier than the given number.
After those checks, we start checking each link in the page recursively to see if the end page is present in that next page. During this step we also check whether the link we are about to search is in the existing trail, and if it is we skip it to avoid entering an infinite loop.
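A minimal sketch of this recursion over a toy link graph — the real `helpers.get_pathway()` fetches links from Wikipedia, so the `links` dict here is a hypothetical stand-in:

```
def find_pathways(links, start, end, max_steps, trail=None):
    """Collect every pathway from start to end within max_steps clicks."""
    trail = (trail or []) + [start]
    if end in links.get(start, []):      # the end page is linked directly
        return [trail + [end]]
    if len(trail) >= max_steps:          # depth check is one less than max_steps,
        return []                        # since links are inspected beforehand
    paths = []
    for link in links.get(start, []):
        if link in trail:                # skip pages already on the trail
            continue                     # to avoid an infinite loop
        paths += find_pathways(links, link, end, max_steps, trail)
    return paths

toy_links = {
    "Toothpaste": ["Apricot", "Fluoride"],
    "Apricot": ["Alexander the Great", "Fruit"],
    "Fluoride": ["Chemistry"],
}
print(find_pathways(toy_links, "Toothpaste", "Alexander the Great", 2))
# [['Toothpaste', 'Apricot', 'Alexander the Great']]
```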
#### Slow downs
Once I got the algorithm running, I ran into the problem that accessing thousands and thousands of web pages ends up taking a lot of time. To reduce the time, I had to limit the maximum depth of my tests to four chains deep. I also realized that the Wikipedia API was pulling all of the links, including the hundreds of links in the navigation boxes at the bottom of the page. These are typically not included in the Wikipedia game; those paths are both uninteresting to look at and significantly slow down the searches.
#### Getting the links
I quickly learned that even with a more robust Wikipedia API there was no way to distinguish links in the navigation boxes from the main links in the article. I slowly developed a way to parse the HTML retrieved by the Wikipedia API with BeautifulSoup.
Doing this took a few steps, and is mostly contained in the function `helpers.parse_links()`. I first had to separate the navigation box from the main part of the article. I was able to do this using string indexing: I found the first index of the navigation box and cut off all the lines after that index. The next step was to pull only the Wikipedia links from the page. This was fairly simple with BeautifulSoup, as the search criteria were fairly limited: all the wiki links on the page had a title and no class, except disambiguation pages.
BeautifulSoup allows me to use the following functions to find regular links:
`content_soup.find_all(name="a", attrs={"class": None, "title": re.compile(".")})`
As well as finding all the disambiguation links:
`content_soup.find_all(name="a", attrs={"class": "mw-disambig", "title": re.compile(".")})`
We then only have to sort out all the meta pages. The function `helpers.get_titles()` takes the ResultSet object from `helpers.parse_links()` and returns only the titles of articles that are regular Wikipedia pages.
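Putting those pieces together on a tiny HTML snippet — the snippet and the colon-based meta-page filter are illustrative assumptions, not the exact logic of `helpers.get_titles()`:

```
import re
from bs4 import BeautifulSoup

html = """
<a class="external" href="#">not a wiki link</a>
<a href="/wiki/Apricot" title="Apricot">Apricot</a>
<a href="/wiki/Talk:Apricot" title="Talk:Apricot">a meta page</a>
<a class="mw-disambig" href="/wiki/Mercury" title="Mercury">Mercury</a>
"""
content_soup = BeautifulSoup(html, "html.parser")

# Regular article links: a title attribute and no class at all
regular = content_soup.find_all(name="a", attrs={"class": None, "title": re.compile(".")})
# Disambiguation links carry the mw-disambig class
disambig = content_soup.find_all(name="a", attrs={"class": "mw-disambig", "title": re.compile(".")})

# Keep only titles of regular articles (meta pages like Talk: contain a colon)
titles = [a["title"] for a in regular + disambig if ":" not in a["title"]]
print(titles)  # ['Apricot', 'Mercury']
```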
All of this was then simplified into the function `helpers.get_links()` which parses the links of any page title given to it.
```
print(f"The page {mars.title} has {len(mars_links)} links")
print(f"The page {mars.title} has {len(helpers.get_links(mars.title))} links")
```
As you can see above, this significantly reduces the number of links the program has to sort through.
#### Saving to files
To further save time, I decided to save information locally. I created the function `helpers.save_links()` to save all the links to a file, with the title of the article and each link of the page listed on separate lines of the file. The function `helpers.read_links()` was created to read the links back from the file. Both of these functions were added to `helpers.get_links()` and significantly sped up the runtime of the search.
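A sketch of this local cache — the on-disk format (title on the first line, one link per line) and the cache directory name are assumptions about `helpers.save_links()` / `helpers.read_links()`:

```
import os

CACHE_DIR = "link_cache"  # hypothetical cache location

def save_links(title, links):
    """Write the article title followed by one link per line."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(os.path.join(CACHE_DIR, title + ".txt"), "w") as f:
        f.write("\n".join([title] + links))

def read_links(title):
    """Return cached links, or None on a cache miss."""
    path = os.path.join(CACHE_DIR, title + ".txt")
    if not os.path.exists(path):
        return None  # caller falls back to fetching and parsing the page
    with open(path) as f:
        return f.read().splitlines()[1:]  # skip the title line

save_links("Mars", ["Phobos", "Deimos"])
print(read_links("Mars"))  # ['Phobos', 'Deimos']
```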
#### Making an Interactive Script
The result of all of these functions was `pathways.find_paths()` however I wanted to provide a way for any user to interact with the program and not need to know the ins and outs of how `pathways.find_paths()` works. I created the python script `Six_Degrees_of_Wiki.py` to do just that.
By running `python Six_Degrees_of_Wiki.py` in a bash terminal, you will be prompted for the starting article and whether you would like to use the default arguments for the max steps and ending page. If you don't, it will prompt you for those as well.
## Results
The results collected from the main function call are filtered through `helpers.plot_all_paths()` to format them in a more visual way, and are then saved to a file.
For example, the function call below produces a file of formatted pathways.
We can call `pathways.find_paths` to see the full output of the function call:
```
# The following code will produce ~125 lines of text
# to show you the process the search follows
pathways.find_paths("Toothpaste", "Alexander the Great", 2)
```
If you use `python Six_Degrees_of_Wiki.py` to run the same search as the function call above, you get the equivalent output in the terminal.
## Conclusion
I learned a lot through this project. I got a lot more used to using an API to request data. I also learned a lot about BeautifulSoup, though I still have plenty left to learn about it. I learned about the limits of programs and how a large program will actually take time to run, as well as how to optimize a program to improve runtime efficiency. If you can avoid a try-except call without risking an error being thrown, that can be extremely useful, as they tend to add a lot of time when called repeatedly.
I think there are some really interesting paths between Wikipedia pages that are for the most part unintuitive. Who knew you could get from Ketchup to the Bible with the path `| Ketchup --> Salt --> Bible |`, or from Toothpaste to Alexander the Great through Apricot: `| Toothpaste ---> Apricot ---> Alexander the Great |`?
I think this project highlights how important context is. If you are just told something or given some information without context, how would you know what to do with it? If I were playing the Wikipedia game, I would never click on Apricot to get to Alexander the Great. Things are not meant to exist in this world without any context. A student is not defined by their SAT scores; you need to consider the bigger picture.