# About This Notebook
* The following notebooks utilizes the [generated outputs](https://www.kaggle.com/usaiprashanth/gptmodel-outputs) and performs some Exploratory Data Analysis
```
#loading the outputs
import joblib
withoutshuffle = joblib.load('../input/gptmodel-outputs/results (4)/withoutshuffle.pkl')
withshuffle = joblib.load('../input/gptmodel-outputs/results (3)/withshuffle.pkl')
data29 = joblib.load('../input/gptmodel-outputs/results (5)/data29.pkl')
```
* The arrays `withshuffle`, `withoutshuffle`, and `data29` are nested arrays with the following structure:
> `array[0]` — index of the document within The Pile dataset
> `array[1]` — length of the document
> `array[2]` — score of the document (number of correctly predicted labels)
* The following two graphs compare the score of the model with and without shuffling the evaluation data
* More information about shuffling can be found [here](https://www.kaggle.com/usaiprashanth/gpt-1-3b-model?scriptVersionId=72760342) and [here](https://www.kaggle.com/usaiprashanth/gpt-1-3b-model?scriptVersionId=72761073)
```
import matplotlib.pyplot as plt
plt.plot(withshuffle[0],withshuffle[2],'r+')
plt.plot(withoutshuffle[0],withoutshuffle[2],'r+')
```
* My original interpretation of this idea (since shown to be wrong) was that the order in which the data is evaluated would affect the evaluation loss of the model. In fact it does not: each document is scored independently, so shuffling only reorders the results without changing them.
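This order-invariance can be illustrated with a toy example (the scores below are made-up numbers, not model outputs): shuffling the evaluation order permutes the score array without changing its values or aggregate statistics.

```python
import random

# hypothetical per-document scores; evaluation is deterministic per document,
# so shuffling the evaluation order only permutes the array
scores = [0.91, 0.42, 0.77, 0.65, 0.88, 0.13]

shuffled = scores[:]
random.shuffle(shuffled)

# the multiset of scores and the mean are unchanged
assert sorted(shuffled) == sorted(scores)
assert abs(sum(shuffled) / len(shuffled) - sum(scores) / len(scores)) < 1e-12
```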
```
plt.plot(data29[0],data29[2],'r+')
```
* Dividing the score arrays of the 0th and 29th shards into 1,000 buckets of 10 samples each and plotting the average score per bucket
```
buckets = []
plt.rcParams["figure.figsize"] = (25,3)
import numpy as np
for i in range(0, 10000, 10):
    buckets.append(np.nanmean(withoutshuffle[2][i:i+10]))
plt.plot(buckets)
```
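The same bucketing can be done without a Python loop by reshaping the first 10,000 scores into a 1,000 × 10 matrix and averaging each row; a sketch with a stand-in array (the real data would be `withoutshuffle[2]`):

```python
import numpy as np

scores = np.arange(10000, dtype=float)  # stand-in for withoutshuffle[2]

# 1,000 buckets of 10 consecutive scores, averaged (NaN-safe)
buckets = np.nanmean(scores[:10000].reshape(1000, 10), axis=1)

assert buckets.shape == (1000,)
assert buckets[0] == 4.5  # mean of 0..9
```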
* At least for the first 10,000 samples, there doesn't seem to be any difference in the memorization of data with respect to its position in the dataset.
* However, it is worth noting that 10,000 samples is a very small sample for a dataset as big as [The Pile](https://pile.eleuther.ai/), and the results can differ significantly when evaluated on another shard of the dataset.
* This can be generalized by plotting the bucketed version of data29 (outputs of 29th shard of THE PILE)
```
buckets = []
plt.rcParams["figure.figsize"] = (25,3)
import numpy as np
for i in range(0, 10000, 10):
    buckets.append(np.nanmean(data29[2][i:i+10]))
plt.plot(buckets)
#Finding means and variances
print(np.nanmean(withoutshuffle[2]),np.nanmean(data29[2]))
print(np.nanvar(withoutshuffle[2]),np.nanvar(data29[2]))
```
* At least for the gpt-neo-1.3B model, there doesn't seem to be any correlation between how data is memorized and its position within the training dataset
---
```
## Import Libraries
import sys
import os
sys.path.append(os.path.abspath(os.path.join('..')))
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from pandas.api.types import is_string_dtype, is_numeric_dtype
%matplotlib inline

CSV_PATH = "../data/impression_log.csv"

# take a csv file path and read it into a dataframe
def read_processed_data(csv_path):
    try:
        df = pd.read_csv(csv_path)
        print("file read as csv")
        return df
    except FileNotFoundError:
        print("file not found")

## get the number of rows, columns and column information
def get_data_info(Ilog_df: pd.DataFrame):
    row_count, col_count = Ilog_df.shape
    print(f"Number of rows: {row_count}")
    print(f"Number of columns: {col_count}")
    return Ilog_df.info()

## basic statistics of each column, to see the data at a glance
def get_statistics_info(Ilog_df: pd.DataFrame):
    return Ilog_df.describe(include='all')

# read the extracted impression_log data and get its information
Ilog_df = read_processed_data(CSV_PATH)
get_data_info(Ilog_df)
get_statistics_info(Ilog_df)
Ilog_df.head()
```
## Missing Values
```
def percent_missing(df):
    totalCells = np.prod(df.shape)  # np.product is deprecated in favor of np.prod
    missingCount = df.isnull().sum()
    totalMissing = missingCount.sum()
    return round((totalMissing / totalCells) * 100, 2)

print("The impression log dataset contains", percent_missing(Ilog_df), "%", "missing values.")
```
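A quick sanity check of `percent_missing` on a toy frame (hypothetical data, not the impression log), to confirm it returns the percentage of missing cells over all cells:

```python
import numpy as np
import pandas as pd

def percent_missing(df):
    total_cells = np.prod(df.shape)
    total_missing = df.isnull().sum().sum()
    return round((total_missing / total_cells) * 100, 2)

# 3 missing cells out of 8 total -> 37.5%
toy = pd.DataFrame({"a": [1, None, 3, 4], "b": [None, None, 7, 8]})
assert percent_missing(toy) == 37.5
```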
## Handling Missing Values
```
def percent_missing_for_col(df, col_name: str):
    total_count = len(df[col_name])
    if total_count <= 0:
        return 0.0
    missing_count = df[col_name].isnull().sum()
    return round((missing_count / total_count) * 100, 2)

null_percent_df = pd.DataFrame(columns=['column', 'null_percent'])
columns = Ilog_df.columns.values.tolist()
null_percent_df['column'] = columns
null_percent_df['null_percent'] = null_percent_df['column'].map(lambda x: percent_missing_for_col(Ilog_df, x))
null_percent_df.sort_values(by=['null_percent'], ascending=False)
```
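pandas can produce the same per-column percentages directly, since `isnull().mean()` is the fraction of nulls per column; a sketch on toy data (not the impression log):

```python
import pandas as pd

toy = pd.DataFrame({"a": [1, None, 3, 4], "b": [None, None, None, 8]})

# percentage of nulls per column, highest first
null_percent = (toy.isnull().mean() * 100).round(2).sort_values(ascending=False)

assert null_percent["b"] == 75.0
assert null_percent["a"] == 25.0
```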
### I used the forward fill method to fill the missing values
```
Ilog_df['AudienceID'] = Ilog_df['AudienceID'].ffill()
Ilog_df['DeviceMake'] = Ilog_df['DeviceMake'].ffill()
Ilog_df['Browser'] = Ilog_df['Browser'].ffill()
Ilog_df['OS'] = Ilog_df['OS'].ffill()
Ilog_df['OSFamily'] = Ilog_df['OSFamily'].ffill()
Ilog_df['Region'] = Ilog_df['Region'].ffill()
Ilog_df['City'] = Ilog_df['City'].ffill()

# check again after handling the missing values
def percent_missing(df):
    totalCells = np.prod(df.shape)
    missingCount = df.isnull().sum()
    totalMissing = missingCount.sum()
    return round((totalMissing / totalCells) * 100, 2)

print("The impression log dataset contains", percent_missing(Ilog_df), "%", "missing values.")
```
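One caveat with forward fill: it cannot fill a leading NaN, since there is no earlier value to carry forward. A common pattern is to follow `ffill` with `bfill` as a fallback; a toy example:

```python
import pandas as pd

s = pd.Series([None, "Chrome", None, "Firefox", None])

filled = s.ffill()          # the leading NaN survives
assert filled.isna().sum() == 1

filled = s.ffill().bfill()  # back-fill catches the leading gap
assert filled.tolist() == ["Chrome", "Chrome", "Chrome", "Firefox", "Firefox"]
```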
Remove duplicate rows
```
Ilog_df.drop_duplicates(inplace=True)
Ilog_df.info()
Ilog_df.to_csv("../data/processed.csv",index=False)
```
---
# Denoising Autoencoder
Sticking with the MNIST dataset, let's add noise to our data and see if we can define and train an autoencoder to _de_-noise the images.
<img src='notebook_ims/autoencoder_denoise.png' width=70%/>
Let's get started by importing our libraries and getting the dataset.
```
import torch
import numpy as np
from torchvision import datasets
from torchvision import transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
train_on_gpu = torch.cuda.is_available()
```
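Conceptually, the `DataLoader` above just walks the dataset in order and yields groups of `batch_size` samples; a minimal pure-Python sketch of that chunking (without the collation and worker machinery the real class provides):

```python
def batches(data, batch_size):
    """Yield consecutive chunks of `data` of length batch_size (the last may be shorter)."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# 100 samples with batch_size=20 -> 5 batches
chunks = list(batches(list(range(100)), 20))
assert len(chunks) == 5
assert chunks[0] == list(range(20))
```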
### Visualize the Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # iterator.next() does not exist in Python 3
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
```
---
# Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1.
>**We'll use noisy images as input and the original, clean images as targets.**
Below is an example of some of the noisy images I generated and the associated, denoised images.
<img src='notebook_ims/denoising.png' />
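The noising step described above, sketched in pure Python on a handful of made-up pixel intensities: add zero-mean Gaussian noise scaled by a noise factor, then clip each value back into [0, 1] (`noise_factor=0.5` matches the training code later in this notebook).

```python
import random

random.seed(0)
noise_factor = 0.5

pixels = [0.0, 0.2, 0.5, 0.9, 1.0]  # made-up pixel intensities
noisy = [min(1.0, max(0.0, p + noise_factor * random.gauss(0, 1))) for p in pixels]

# clipping guarantees valid pixel intensities
assert all(0.0 <= p <= 1.0 for p in noisy)
```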
Since this is a harder problem for the network, we'll want to use _deeper_ convolutional layers here; layers with more feature maps. You might also consider adding additional layers. I suggest starting with a depth of 32 for the convolutional layers in the encoder, and the same depths going backward through the decoder.
#### TODO: Build the network for the denoising autoencoder. Add deeper and/or additional layers compared to the model above.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvDenoiser(nn.Module):
    def __init__(self):
        super(ConvDenoiser, self).__init__()
        ## encoder layers ##
        self.conv1 = nn.Conv2d(1, 4, 3, padding=1)
        self.conv2 = nn.Conv2d(4, 16, 3, padding=1)
        self.conv3 = nn.Conv2d(16, 32, 3, padding=1)
        self.maxpool = nn.MaxPool2d(2)
        ## decoder layers ##
        ## transpose convolutions with a stride of 2 roughly double the spatial dims
        self.t_conv1 = nn.ConvTranspose2d(32, 16, 3, stride=2)
        self.t_conv2 = nn.ConvTranspose2d(16, 4, 2, stride=2)
        self.t_conv3 = nn.ConvTranspose2d(4, 1, 2, stride=2)

    def forward(self, x):
        ## encode ##
        x = F.relu(self.conv1(x))
        x = self.maxpool(x)
        x = F.relu(self.conv2(x))
        x = self.maxpool(x)
        x = F.relu(self.conv3(x))
        x = self.maxpool(x)
        ## decode ##
        ## apply ReLU to all hidden layers *except* the output layer,
        ## which gets a sigmoid to keep pixel values in [0, 1]
        x = F.relu(self.t_conv1(x))
        x = F.relu(self.t_conv2(x))
        x = torch.sigmoid(self.t_conv3(x))
        return x

# initialize the NN
model = ConvDenoiser()
if train_on_gpu:
    model.cuda()
print(model)
```
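A hand check of the spatial dimensions flowing through the `ConvDenoiser` above for a 28×28 MNIST input, using the standard conv / pool / transpose-conv output-size formulas:

```python
def conv2d_out(n, kernel, stride=1, padding=0):
    # floor((n + 2p - k) / s) + 1
    return (n + 2 * padding - kernel) // stride + 1

def tconv2d_out(n, kernel, stride=1):
    # (n - 1) * s + k, with no padding
    return (n - 1) * stride + kernel

n = 28
for _ in range(3):                 # three conv(k=3, p=1) + maxpool(2) stages
    n = conv2d_out(n, 3, padding=1) // 2
assert n == 3                      # 28 -> 14 -> 7 -> 3

n = tconv2d_out(n, 3, stride=2)    # t_conv1: 3 -> 7
n = tconv2d_out(n, 2, stride=2)    # t_conv2: 7 -> 14
n = tconv2d_out(n, 2, stride=2)    # t_conv3: 14 -> 28
assert n == 28                     # output matches the input resolution
```

This is why `t_conv1` needs a kernel of 3 rather than 2: after three pooling stages the feature map is 3×3, and (3−1)·2+3 = 7 recovers the pre-pool size exactly.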
---
## Training
We are only concerned with the training images, which we can get from the `train_loader`.
>In this case, we are actually **adding some noise** to these images and we'll feed these `noisy_imgs` to our model. The model will produce reconstructed images based on the noisy input. But, we want it to produce _normal_ un-noisy images, and so, when we calculate the loss, we will still compare the reconstructed outputs to the original images!
Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use `MSELoss`, and compare output images and input images as follows:
```
loss = criterion(outputs, images)
```
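`MSELoss` averages the squared per-element differences over all elements; a minimal pure-Python equivalent on toy values, to make the quantity concrete:

```python
def mse(outputs, targets):
    """Mean of squared element-wise differences, as nn.MSELoss computes by default."""
    assert len(outputs) == len(targets)
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

assert mse([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]) == 0.0   # identical images -> zero loss
assert abs(mse([1.0, 0.0], [0.0, 0.0]) - 0.5) < 1e-12  # (1 + 0) / 2
```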
```
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 20
# for adding noise to images
noise_factor=0.5
for epoch in range(1, n_epochs+1):
    # monitor training loss
    train_loss = 0.0
    ###################
    # train the model #
    ###################
    for images, _ in train_loader:
        ## add random noise to the input images
        noisy_imgs = images + noise_factor * torch.randn(*images.shape)
        # clip the images to be between 0 and 1
        noisy_imgs = torch.clamp(noisy_imgs, 0., 1.)
        if train_on_gpu:
            images, noisy_imgs = images.cuda(), noisy_imgs.cuda()
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        ## forward pass: compute predicted outputs by passing *noisy* images to the model
        outputs = model(noisy_imgs)
        # calculate the loss
        # the "target" is still the original, not-noisy images
        loss = criterion(outputs, images)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*images.size(0)
    # print avg training statistics
    # divide by the number of samples, since the running loss is weighted by batch size
    train_loss = train_loss/len(train_loader.dataset)
    print(f"Epoch: {epoch} \tTraining Loss: {train_loss:.6f}")
```
## Checking out the results
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, _ = next(dataiter)
# add noise to the test images
noisy_imgs = images + noise_factor * torch.randn(*images.shape)
noisy_imgs = torch.clamp(noisy_imgs, 0, 1)
if train_on_gpu:
images, noisy_imgs = images.cuda(), noisy_imgs.cuda()
# get sample outputs
output = model(noisy_imgs)
# prep images for display
noisy_imgs = noisy_imgs.cpu().numpy() if train_on_gpu else noisy_imgs.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().cpu().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for imgs, row in zip([noisy_imgs, output], axes):
    for img, ax in zip(imgs, row):
        ax.imshow(np.squeeze(img), cmap='gray')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
```
---
# Sparkify Project Workspace
This workspace contains a tiny subset (128MB) of the full dataset available (12GB). Feel free to use this workspace to build your project, or to explore a smaller subset with Spark before deploying your cluster on the cloud. Instructions for setting up your Spark cluster are included in the last lesson of the Extracurricular Spark Course content.
You can follow the steps below to guide your data analysis and model building portion of this project.
```
# import libraries
from pyspark.sql import SparkSession
import pandas as pd
from pyspark.sql.functions import isnan, when, count, col, countDistinct, to_timestamp
from pyspark.sql import functions as F
import matplotlib.pyplot as plt
import seaborn as sns
from pyspark.ml.feature import MinMaxScaler, VectorAssembler
from pyspark.sql.types import IntegerType
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression, RandomForestClassifier, LinearSVC, GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
# create a Spark session
spark = SparkSession \
.builder \
.appName("Python Spark SQL") \
.getOrCreate()
```
# Load and Clean Dataset
In this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids.
```
df = spark.read.json('mini_sparkify_event_data.json')
df.show(5)
print((df.count(), len(df.columns)))
df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()
df.select(col('location')).groupBy('location').count().count()
for column in df.columns:
    if df.select(col(column)).groupBy(column).count().count() < 30:
        print('\033[1m' + column + '\033[0m')
        df.select(col(column)).groupBy(column).count().show(30, False)
df.where(col("firstName").isNull()).select(col('auth')).groupBy('auth').count().show()
df.where(col("firstName").isNull()).select(col('level')).groupBy('level').count().show()
df.where(col("firstName").isNull()).select(col('page')).groupBy('page').count().show()
df.where(col("artist").isNotNull()).select(col('page')).groupBy('page').count().show()
df.where(col("artist").isNull()).select(col('page')).groupBy('page').count().show()
```
We have two different types of missing values:
1. Missing user data for 8,346 entries. From the analysis above, it seems that the rows with null user data belong to users who have not yet logged in to the app. Since these events cannot be correlated with a userId, we cannot use them, so we will drop them.
2. Missing song data for 58,392 entries. From the analysis above, these missing values are expected: song data is populated only when the page is NextSong, so we will keep all of these entries for now.
```
df = df.na.drop(subset=["firstName"])
df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()
```
# Exploratory Data Analysis
When you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore.
### Define Churn
Once you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.
### Explore Data
Once you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played.
```
df.createOrReplaceTempView("DATA")
spark.sql("""
SELECT count(distinct userId) FROM DATA
""").show(10, False)
spark.sql("""
SELECT distinct userId,page
FROM DATA
where page in ('Cancellation Confirmation','Downgrade')
order by userId,page
""").show(10, False)
spark.sql("""
SELECT page,to_timestamp(ts/1000) as ts,level
FROM DATA
where userId='100001'
order by ts
""").show(500, False)
spark.sql("""
SELECT page,to_timestamp(ts/1000) as ts,level
FROM DATA
where userId='100002'
order by ts
""").show(500, False)
```
We can see that even though the user visited the Downgrade page, he remained on the paid tier. I assume a Submit Downgrade event is required for the downgrade to take effect.
```
spark.sql("""
SELECT distinct userId,page
FROM DATA
where page in ('Cancellation Confirmation','Submit Downgrade')
order by userId,page
""").show(10, False)
spark.sql("""
SELECT page,to_timestamp(ts/1000) as ts,level
FROM DATA
where userId='100009'
order by ts
""").show(500, False)
```
This user became a paid user after Submit Upgrade and became a free user again after Submit Downgrade.
```
spark.sql("""
SELECT distinct userId,page
FROM DATA
where page in ('Cancellation Confirmation','Submit Downgrade')
order by userId,page
""").count()
df = df.withColumn("churn", when((col("page")=='Cancellation Confirmation') | (col("page")=='Submit Downgrade'),1).otherwise(0))
df.show(5)
df.createOrReplaceTempView("DATA")
spark.sql("""
SELECT distinct userId,page,churn
FROM DATA
where page in ('Cancellation Confirmation','Submit Downgrade')
order by userId,page
""").show(5, False)
```
# Feature Engineering
Once you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps.
- Write a script to extract the necessary features from the smaller subset of data
- Ensure that your script is scalable, using the best practices discussed in Lesson 3
- Try your script on the full data set, debugging your script if necessary
If you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. Be sure to transfer over this work to the larger dataset when you work on your Spark cluster.
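The core of the feature query below is a per-user aggregation normalized by distinct active days. The same idea in a pure-Python sketch over hypothetical event records (made-up users and pages, not the Sparkify data):

```python
from collections import defaultdict

# hypothetical event log: (userId, day, page)
events = [
    ("u1", "d1", "NextSong"), ("u1", "d1", "Thumbs Up"),
    ("u1", "d2", "NextSong"), ("u2", "d1", "Roll Advert"),
]

days = defaultdict(set)     # distinct active days per user
counts = defaultdict(int)   # total events per user
for user, day, page in events:
    days[user].add(day)
    counts[user] += 1

# events per distinct active day, per user
avg_events_per_day = {u: counts[u] / len(days[u]) for u in counts}
assert avg_events_per_day["u1"] == 1.5  # 3 events over 2 days
assert avg_events_per_day["u2"] == 1.0
```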
```
spark.sql("""
SELECT max(to_timestamp(ts/1000)) as max_ts,min(to_timestamp(ts/1000)) as min_ts
FROM DATA
""").show(5, False)
df_dataset = spark.sql("""
SELECT DATA.userId,
case when gender='M' then 1 else 0 end as is_male_flag,
max(churn) as churn,
count(distinct ts_day) as days_in_app,
count(distinct song)/sum(case when song is not null then 1 else 0 end) as avg_songs,
count(distinct artist)/sum(case when song is not null then 1 else 0 end) as avg_artists,
round(sum(length/60)/sum(case when song is not null then 1 else 0 end),2) as avg_song_length,
count(1) as events_cnt,
count(1)/count(distinct ts_day) as avg_sessions_per_day,
sum(case when DATA.page='NextSong' then 1 else 0 end)/count(distinct ts_day) as avg_pg_song_cnt,
sum(case when DATA.page='Roll Advert' then 1 else 0 end)/count(distinct ts_day) as avg_pg_advert_cnt,
sum(case when DATA.page='Logout' then 1 else 0 end)/count(distinct ts_day) as avg_pg_logout_cnt,
sum(case when DATA.page='Thumbs Down' then 1 else 0 end)/count(distinct ts_day) as avg_pg_down_cnt,
sum(case when DATA.page='Thumbs Up' then 1 else 0 end)/count(distinct ts_day) as avg_pg_up_cnt,
sum(case when DATA.page='Add Friend' then 1 else 0 end)/count(distinct ts_day) as avg_pg_friend_cnt,
sum(case when DATA.page='Add to Playlist' then 1 else 0 end)/count(distinct ts_day) as avg_pg_playlist_cnt,
sum(case when DATA.page='Help' then 1 else 0 end)/count(distinct ts_day) as avg_pg_help_cnt,
sum(case when DATA.page='Home' then 1 else 0 end)/count(distinct ts_day) as avg_pg_home_cnt,
sum(case when DATA.page='Save Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_save_settings_cnt,
sum(case when DATA.page='About' then 1 else 0 end)/count(distinct ts_day) as avg_pg_about_cnt,
sum(case when DATA.page='Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_settings_cnt,
sum(case when DATA.page='Login' then 1 else 0 end)/count(distinct ts_day) as avg_pg_login_cnt,
sum(case when DATA.page='Submit Registration' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_reg_cnt,
sum(case when DATA.page='Register' then 1 else 0 end)/count(distinct ts_day) as avg_pg_reg_cnt,
sum(case when DATA.page='Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_upg_cnt,
sum(case when DATA.page='Submit Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_upg_cnt,
sum(case when DATA.page='Error' then 1 else 0 end)/count(distinct ts_day) as avg_pg_error_cnt
FROM DATA
LEFT JOIN
(
SELECT distinct DATE_TRUNC('day', to_timestamp(ts/1000)) as ts_day, userId FROM DATA
) day_ts
ON day_ts.userId=DATA.userId
GROUP BY DATA.userId,gender
""")
churn_cnt = df_dataset.select(col('churn'),col('userId')).groupby('churn').count().toPandas()
sns.barplot(x='churn', y='count', data=churn_cnt)
plt.title('Churn Distribution')
plt.xticks(rotation=90)
is_male_flag_dstr = df_dataset.select(col('is_male_flag'),col('churn')).groupby('is_male_flag','churn').agg(count("churn").alias("churn_cnt")).toPandas()
sns.barplot(x='churn', y='churn_cnt', hue='is_male_flag', data=is_male_flag_dstr)
plt.title('Churn Distribution Per Gender')
plt.xticks(rotation=90)
is_male_flag_dstr = df_dataset.select(col('is_male_flag'),col('churn')).groupby('is_male_flag').agg(F.mean("churn").alias("avg_churn_cnt")).toPandas()
sns.barplot(x='is_male_flag', y='avg_churn_cnt', data=is_male_flag_dstr)
plt.title('Average Churn Distribution Per Gender')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('days_in_app'),col('churn')).groupby('churn').agg(F.mean("days_in_app").alias("avg_days_in_app")).toPandas()
sns.barplot(x='churn', y='avg_days_in_app', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Days in App')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_sessions_per_day'),col('churn')).groupby('churn').agg(F.mean("avg_sessions_per_day").alias("avg_sessions_per_day")).toPandas()
sns.barplot(x='churn', y='avg_sessions_per_day', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Sessions Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_down_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_down_cnt").alias("avg_pg_thumbs_down")).toPandas()
sns.barplot(x='churn', y='avg_pg_thumbs_down', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Thumbs Down Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_up_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_up_cnt").alias("avg_pg_thumbs_up")).toPandas()
sns.barplot(x='churn', y='avg_pg_thumbs_up', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Thumbs Up Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_friend_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_friend_cnt").alias("avg_pg_friend_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_friend_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Add Friends Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_playlist_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_playlist_cnt").alias("avg_pg_playlist_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_playlist_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Add to Playlist Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_advert_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_advert_cnt").alias("avg_pg_advert_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_advert_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Average Advert Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_error_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_error_cnt").alias("avg_pg_error_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_error_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Error Per Day')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('events_cnt'),col('churn')).groupby('churn').agg(F.mean("events_cnt").alias("avg_events_cnt")).toPandas()
sns.barplot(x='churn', y='avg_events_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Events Average')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_song_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_song_cnt").alias("avg_pg_song_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_song_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Songs Average')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_logout_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_logout_cnt").alias("avg_pg_logout_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_logout_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per LogOut Average')
plt.xticks(rotation=90)
days_in_app_dstr = df_dataset.select(col('avg_pg_sub_upg_cnt'),col('churn')).groupby('churn').agg(F.mean("avg_pg_sub_upg_cnt").alias("avg_pg_sub_upg_cnt")).toPandas()
sns.barplot(x='churn', y='avg_pg_sub_upg_cnt', data=days_in_app_dstr)
plt.title('Churn Distribution Per Upgrade Average')
plt.xticks(rotation=90)
```
# Modeling
Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize.
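Since churners are a small minority class, accuracy alone is misleading, which is why F1 is suggested: it balances precision and recall on the positive class. A small sketch of the computation, using hypothetical confusion-matrix counts:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for the positive (churn) class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 churners caught, 4 false alarms, 2 churners missed
assert abs(f1_score(8, 4, 2) - 8 / 11) < 1e-12  # precision 2/3, recall 4/5
```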
```
df_dataset = spark.sql("""
SELECT DATA.userId,
case when gender='M' then 1 else 0 end as is_male_flag,
max(churn) as churn,
count(distinct ts_day) as days_in_app,
count(distinct song)/sum(case when song is not null then 1 else 0 end) as avg_songs,
count(distinct artist)/sum(case when song is not null then 1 else 0 end) as avg_artists,
round(sum(length/60)/sum(case when song is not null then 1 else 0 end),2) as avg_song_length,
count(1) as events_cnt,
count(1)/count(distinct ts_day) as avg_sessions_per_day,
sum(case when DATA.page='NextSong' then 1 else 0 end)/count(distinct ts_day) as avg_pg_song_cnt,
sum(case when DATA.page='Roll Advert' then 1 else 0 end)/count(distinct ts_day) as avg_pg_advert_cnt,
sum(case when DATA.page='Logout' then 1 else 0 end)/count(distinct ts_day) as avg_pg_logout_cnt,
sum(case when DATA.page='Thumbs Down' then 1 else 0 end)/count(distinct ts_day) as avg_pg_down_cnt,
sum(case when DATA.page='Thumbs Up' then 1 else 0 end)/count(distinct ts_day) as avg_pg_up_cnt,
sum(case when DATA.page='Add Friend' then 1 else 0 end)/count(distinct ts_day) as avg_pg_friend_cnt,
sum(case when DATA.page='Add to Playlist' then 1 else 0 end)/count(distinct ts_day) as avg_pg_playlist_cnt,
sum(case when DATA.page='Help' then 1 else 0 end)/count(distinct ts_day) as avg_pg_help_cnt,
sum(case when DATA.page='Home' then 1 else 0 end)/count(distinct ts_day) as avg_pg_home_cnt,
sum(case when DATA.page='Save Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_save_settings_cnt,
sum(case when DATA.page='About' then 1 else 0 end)/count(distinct ts_day) as avg_pg_about_cnt,
sum(case when DATA.page='Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_settings_cnt,
sum(case when DATA.page='Login' then 1 else 0 end)/count(distinct ts_day) as avg_pg_login_cnt,
sum(case when DATA.page='Submit Registration' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_reg_cnt,
sum(case when DATA.page='Register' then 1 else 0 end)/count(distinct ts_day) as avg_pg_reg_cnt,
sum(case when DATA.page='Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_upg_cnt,
sum(case when DATA.page='Submit Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_upg_cnt,
sum(case when DATA.page='Error' then 1 else 0 end)/count(distinct ts_day) as avg_pg_error_cnt
FROM DATA
LEFT JOIN
(
SELECT distinct DATE_TRUNC('day', to_timestamp(ts/1000)) as ts_day, userId FROM DATA
) day_ts
ON day_ts.userId=DATA.userId
GROUP BY DATA.userId,gender
""")
#for column in ['days_in_app','events_cnt','avg_sessions_per_day','avg_pg_song_cnt','avg_pg_advert_cnt',
# 'avg_pg_friend_cnt','avg_pg_playlist_cnt','avg_songs','avg_artists','avg_song_length',
# 'avg_pg_logout_cnt','avg_pg_sub_upg_cnt','avg_pg_upg_cnt','avg_pg_down_cnt','avg_pg_up_cnt',
# 'avg_pg_error_cnt'
# ]:
for column in [ 'days_in_app',
                'avg_songs',
                'avg_artists',
                'avg_song_length',
                'events_cnt',
                'avg_sessions_per_day',
                'avg_pg_song_cnt',
                'avg_pg_advert_cnt',
                'avg_pg_logout_cnt',
                'avg_pg_down_cnt',
                'avg_pg_up_cnt',
                'avg_pg_friend_cnt',
                'avg_pg_playlist_cnt'
              ]:
    # VectorAssembler transformation - converting the column to vector type
    vector_assembler = VectorAssembler(inputCols=[column], outputCol=column+"_vect")
    # MinMaxScaler transformation
    scaler = MinMaxScaler(inputCol=column+"_vect", outputCol=column+"_scaled")
    # pipeline of VectorAssembler and MinMaxScaler
    pipeline = Pipeline(stages=[vector_assembler, scaler])
    # fit the pipeline on the dataframe and drop the intermediate vector column
    df_dataset = pipeline.fit(df_dataset).transform(df_dataset).drop(column+"_vect")
#features_vector_assempler = VectorAssembler(inputCols=['days_in_app_scaled','events_cnt_scaled',
# 'avg_sessions_per_day_scaled','avg_pg_song_cnt_scaled','avg_pg_advert_cnt_scaled',
# 'avg_pg_friend_cnt_scaled','avg_pg_playlist_cnt_scaled','avg_songs_scaled','avg_artists_scaled',
# 'avg_song_length_scaled','avg_pg_logout_cnt_scaled','avg_pg_sub_upg_cnt_scaled',
# 'avg_pg_upg_cnt_scaled','avg_pg_down_cnt_scaled','avg_pg_up_cnt_scaled',
# 'avg_pg_error_cnt_scaled'
# ],outputCol="features")
features_vector_assembler = VectorAssembler(inputCols=['is_male_flag',
                                                       'days_in_app_scaled',
                                                       'avg_songs_scaled',
                                                       'avg_artists_scaled',
                                                       'avg_song_length_scaled',
                                                       'events_cnt_scaled',
                                                       'avg_sessions_per_day_scaled',
                                                       'avg_pg_song_cnt_scaled',
                                                       'avg_pg_advert_cnt_scaled',
                                                       'avg_pg_logout_cnt_scaled',
                                                       'avg_pg_down_cnt_scaled',
                                                       'avg_pg_up_cnt_scaled',
                                                       'avg_pg_friend_cnt_scaled',
                                                       'avg_pg_playlist_cnt_scaled'], outputCol="features")
df_dataset_model = features_vector_assembler.transform(df_dataset)
df_dataset_model = df_dataset_model.select(col("churn").alias("label"),col("features"))
#Test 1
train, test = df_dataset_model.randomSplit([0.8, 0.2], seed=7)
#sub_test, validation = test.randomSplit([0.5, 0.5], seed = 7)
print("Training Dataset Count: " + str(train.count()))
print("Test Dataset Count: " + str(test.count()))
gbt = GBTClassifier(featuresCol = 'features', labelCol = "label", maxIter = 10, maxDepth = 10, seed = 7)
gbt_fitted_model = gbt.fit(train)
predictions = gbt_fitted_model.transform(test)
f1 = MulticlassClassificationEvaluator(metricName = 'f1')
acc = MulticlassClassificationEvaluator(metricName = 'accuracy')
prec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')
rec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')
gbt_f1_score = f1.evaluate(predictions)
gbt_acc_score = acc.evaluate(predictions)
gbt_prec_score = prec.evaluate(predictions)
gbt_rec_score = rec.evaluate(predictions)
print('GBT Accuracy: {}, GBT Precision: {}, GBT Recall: {}, GBT F1-Score: {}'.format(round(gbt_acc_score*100,2),round(gbt_prec_score*100,2),round(gbt_rec_score*100,2),round(gbt_f1_score*100,2)))
rf = RandomForestClassifier()
rf_fitted_model = rf.fit(train)
predictions = rf_fitted_model.transform(test)
f1 = MulticlassClassificationEvaluator(metricName = 'f1')
acc = MulticlassClassificationEvaluator(metricName = 'accuracy')
prec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')
rec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')
rf_f1_score = f1.evaluate(predictions)
rf_acc_score = acc.evaluate(predictions)
rf_prec_score = prec.evaluate(predictions)
rf_rec_score = rec.evaluate(predictions)
print('Random Forest Accuracy: {}, Random Forest Precision: {}, Random Forest Recall: {}, Random Forest F1-Score: {}'.format(round(rf_acc_score*100,2),round(rf_prec_score*100,2),round(rf_rec_score*100,2),round(rf_f1_score*100,2)))
lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=10, regParam=0.01)
lr_fitted_model = lr.fit(train)
predictions = lr_fitted_model.transform(test)
f1 = MulticlassClassificationEvaluator(metricName = 'f1')
acc = MulticlassClassificationEvaluator(metricName = 'accuracy')
prec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')
rec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')
lr_f1_score = f1.evaluate(predictions)
lr_acc_score = acc.evaluate(predictions)
lr_prec_score = prec.evaluate(predictions)
lr_rec_score = rec.evaluate(predictions)
print('Logistic Regression Accuracy: {}, Logistic Regression Precision: {}, Logistic Regression Recall: {}, Logistic Regression F1-Score: {}'.format(round(lr_acc_score*100,2),round(lr_prec_score*100,2),round(lr_rec_score*100,2),round(lr_f1_score*100,2)))
svm = LinearSVC(featuresCol="features", labelCol="label", maxIter=10, regParam=0.1)
svm_fitted_model = svm.fit(train)
predictions = svm_fitted_model.transform(test)
f1 = MulticlassClassificationEvaluator(metricName = 'f1')
acc = MulticlassClassificationEvaluator(metricName = 'accuracy')
prec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')
rec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')
svm_f1_score = f1.evaluate(predictions)
svm_acc_score = acc.evaluate(predictions)
svm_prec_score = prec.evaluate(predictions)
svm_rec_score = rec.evaluate(predictions)
print('SVM Accuracy: {}, SVM Precision: {}, SVM Recall: {}, SVM F1-Score: {}'.format(round(svm_acc_score*100,2),round(svm_prec_score*100,2),round(svm_rec_score*100,2),round(svm_f1_score*100,2)))
```
From the executions and evaluations above, we choose GBT as the best-performing algorithm.
This is the algorithm we will use to calculate the churn score with these KPIs.
Of course, the next step is to evaluate and validate the results by running the code on the full dataset.
If we are happy with the results, we can deploy the churn-calculation algorithm in production
# Final Steps
Clean up your code, adding comments and renaming variables to make the code easier to read and maintain. Refer to the Spark Project Overview page and Data Scientist Capstone Project Rubric to make sure you are including all components of the capstone project and meet all expectations. Remember, this includes thorough documentation in a README file in a Github repository, as well as a web app or blog post.
```
import requests
# Get token from Hoopla
username = 'HOOPLA_LOGIN'
# for test, fake
password = 'HOOPLA_PWD'
hoopla_headers = {'accept':'application/json, text/plain, */*', 'accept-encoding': 'gzip, deflate, br',
'content-type':'application/x-www-form-urlencoded', 'device-version': 'Chrome', 'referer':'https://www.hoopladigital.com/'}
data = {'username':username, 'password':password}
resp = requests.post('https://hoopla-ws.hoopladigital.com/tokens',
headers=hoopla_headers, data=data)
# Extract the token from the response.
hoopla_token = None
import json
if resp.status_code == 200:
print("Raw Content: {}".format(resp.content))
content = resp.content.decode('utf-8')
json_content = json.loads(content)
if json_content['tokenStatus'] == "SUCCESS":
hoopla_token = json_content['token']
else:
print("Invalid credentials, could not obtain token")
else:
print("Error getting token!")
print(hoopla_token)
# Search the raw API
search_param = 'tolkien'
# Try a search against the 'raw' search. Requires an "OPTIONS" query first?
hoopla_headers = {'accept':'application/json, text/plain, */*', 'accept-encoding': 'gzip, deflate, br',
'content-type':'application/x-www-form-urlencoded', 'device-version': 'Chrome', 'referer':'https://hoopla-ws.hoopladigital.com',
'ws-api': '2.1',
'authorization': "Bearer {}".format(hoopla_token)}
raw_search_url = 'https://hoopla-ws.hoopladigital.com/categories/search?q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)
resp = requests.get(raw_search_url, headers=hoopla_headers)
print(resp.status_code)
print("Attempted a search for {}, result={}".format(search_param, resp.content))
# Search against the Audiobooks endpoint
audiobooks_search_url = 'https://hoopla-ws.hoopladigital.com/v2/search/AUDIOBOOKS?limit=50&offset=0&q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)
ab_resp = requests.get(audiobooks_search_url, headers=hoopla_headers)
print(ab_resp.status_code)
ebooks_search_url = 'https://hoopla-ws.hoopladigital.com/v2/search/EBOOKS?limit=50&offset=0&q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)
ebooks_resp = requests.get(ebooks_search_url, headers=hoopla_headers)
print(ebooks_resp.status_code)
people_search_url = 'https://hoopla-ws.hoopladigital.com/v2/search/PEOPLE?limit=50&offset=0&q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)
people_resp = requests.get(people_search_url, headers=hoopla_headers)
print(people_resp.status_code)
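# The three category searches above share one URL template; a small
# helper (hypothetical, not part of the Hoopla API) builds it in one
# place and avoids accidentally reusing the wrong URL variable:
def category_search_url(category, query, limit=50, offset=0):
    base = 'https://hoopla-ws.hoopladigital.com/v2/search'
    return '{}/{}?limit={}&offset={}&q={}&sort=TOP&wwwVersion=4.20.3'.format(
        base, category, limit, offset, query)
# e.g. category_search_url('EBOOKS', search_param) reproduces ebooks_search_url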
# Now try to search against the 'artist-suggestions' ('unauthorized' if called directly)
#search_artist_sugg_url = 'https://search-api.hoopladigital.com/prod/artist-suggestions?q={}&suggester=name&size=5'.format(search_param)
##resp = requests.get(search_artist_sugg_url,headers=hoopla_headers)
#print(resp.status_code)
#print(resp.content)
#search_title_sugg_url = 'https://search-api.hoopladigital.com/prod/title-suggestions?q=elizabeth+bear&suggester=series&size=5'
ab_results = json.loads(ab_resp.content.decode('utf-8'))
ebook_results = json.loads(ebooks_resp.content.decode('utf-8'))
people_results = json.loads(people_resp.content.decode('utf-8'))
print("Audiobooks:")
print(ab_results)
print("Ebooks")
print(ebook_results)
print("People")
print(people_results)
### RBDIGITAL!
username = 'RBDIGITAL_LOGIN'
password = 'RBDIGITAL_PASSWORD'
home_library_url = 'mycitymystate.rbdigital.com'
rbdigital_headers = {'accept': '*/*', 'accept-encoding': 'gzip, deflate',
'Access-Control-Request-Headers': 'authorization,content-type',
'Access-Control-Request-Method': 'POST',
'Origin': '{}'.format(home_library_url),
'content-type':'application/x-www-form-urlencoded', 'device-version': 'Chrome', 'referer':'https://www.hoopladigital.com/'}
options_resp = requests.options('http://auth.rbdigital.com/v1/authenticate',
headers=rbdigital_headers, data=data)
print(options_resp.status_code)
rbdigital_headers = {'accept': '*/*', 'accept-encoding': 'gzip, deflate',
'Content-Type': 'application/json',
'Accept-Language': 'en-US',
'Authorization': 'bearer 5ab487ad749bbe02e0aef7c8',
'Content-Length': '122',
'Host': 'auth.rbdigital.com',
'Referer': 'http://auth.rbdigital.com',
'Origin': '{}'.format(home_library_url)}
patron_data = {'PatronIdentifier':username, 'PatronSecret':password, 'Source': 'oneclick',
'auth_state': 'auth_internal', 'libraryId': 75} # XXX libraryID from where?
auth_resp = requests.post('http://auth.rbdigital.com/v1/authenticate',
headers=rbdigital_headers,
data=patron_data)
print(auth_resp.status_code)
print(auth_resp.content)
# http://developer.rbdigital.com/documents/patron-login
# Having trouble getting the bearer token. Not sure where it comes from...
```
[](https://mybinder.org/v2/gh/tueda/mympltools/HEAD?labpath=examples/Examples.ipynb)
[](https://colab.research.google.com/github/tueda/mympltools/blob/HEAD/examples/Examples.ipynb)
```
# Install mympltools 22.5.1 only when running on Binder/Colab.
! [ -n "$BINDER_SERVICE_HOST$COLAB_GPU" ] && pip install "git+https://github.com/tueda/mympltools.git@22.5.1#egg=mympltools[fitting]"
```
## Basic style
```
import matplotlib.pyplot as plt
import numpy as np
import mympltools as mt
# Use the style.
mt.use("21.10")
x = np.linspace(-5, 5)
fig, ax = plt.subplots()
ax.plot(x, x**2)
# Show grid lines.
mt.grid(ax)
plt.show()
```
## Annotation text for lines
```
import matplotlib.pyplot as plt
import numpy as np
import mympltools as mt
mt.use("21.10")
x = np.linspace(-5, 5)
fig, ax = plt.subplots()
# Plot a curve with an annotation.
l1 = ax.plot(x, np.exp(x))
mt.line_annotate("awesome function", l1)
# Plot another curve with an annotation.
l2 = ax.plot(x, np.exp(-0.2 * x**2))
mt.line_annotate("another function", l2, x=3.5)
ax.set_yscale("log")
mt.grid(ax)
plt.show()
```
To fine-tune the text position, use the `xytext` (default: `(0, 5)`) and `rotation` options.
## Handling uncertainties
```
import matplotlib.pyplot as plt
import numpy as np
import mympltools as mt
mt.use("21.10")
# Bounded(x, dx) represents a curve with a symmetric error bound.
x = np.linspace(0, 10)
y1 = mt.Bounded(x, 0.1)
y2 = mt.Bounded(1.5 + np.sin(x), 0.2)
y3 = y1 / y2 # The error bound of this result is not symmetric.
fig, ax = plt.subplots()
# Plot a curve with an error band.
a1 = mt.errorband(ax, x, y3.x, y3.err, label="f")
ax.legend([a1], ax.get_legend_handles_labels()[1])
mt.grid(ax)
plt.show()
```
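The claim that the error bound of `y3 = y1 / y2` is not symmetric can be verified with plain interval arithmetic, independently of how `mympltools.Bounded` propagates errors internally (which may differ from this sketch):

```python
import numpy as np

x = np.linspace(0, 10)
y1, dy1 = x, 0.1
y2, dy2 = 1.5 + np.sin(x), 0.2  # stays >= 0.3, so division is safe

# Evaluate all four corner combinations of the two intervals.
corners = np.stack([(y1 + s1 * dy1) / (y2 + s2 * dy2)
                    for s1 in (-1, 1) for s2 in (-1, 1)])
lo, hi = corners.min(axis=0), corners.max(axis=0)
mid = y1 / y2

# The band around the central value is asymmetric in general.
print(np.allclose(hi - mid, mid - lo))  # -> False
```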
## Curve fitting (SciPy wrapper)
```
import matplotlib.pyplot as plt
import numpy as np
import mympltools as mt
mt.use("21.10")
np.random.seed(1)
# Fitting function.
def f(x, a, b, c):
return a * np.exp(-b * x) + c
# Data to be fitted.
n = 10
x = np.linspace(0, 4, 40)
y = []
e = []
for xi in x:
di = np.zeros(n)
di += f(xi, 2.5, 1.3, 0.5)
di += np.random.randn(n) * 0.5 # noise
yi = np.average(di)
ei = np.std(di) / np.sqrt(n)
y += [yi]
e += [ei]
# Perform fitting.
model = mt.fit(f, x, y, e)
print(model)
# Plot with the fitted curve.
fig, ax = plt.subplots()
ax.errorbar(x, y, e, fmt="o")
ax.plot(x, model(x))
mt.grid(ax)
plt.show()
# Example taken from https://root.cern.ch/doc/v626/FittingDemo_8C.html
import matplotlib.pyplot as plt
import numpy as np
import mympltools as mt
mt.use("21.10")
# Data.
xdata = np.linspace(0, 3, 61)
xdata = (xdata[1:] + xdata[:-1]) / 2
ydata_str = """
6, 1, 10, 12, 6, 13, 23, 22, 15, 21,
23, 26, 36, 25, 27, 35, 40, 44, 66, 81,
75, 57, 48, 45, 46, 41, 35, 36, 53, 32,
40, 37, 38, 31, 36, 44, 42, 37, 32, 32,
43, 44, 35, 33, 33, 39, 29, 41, 32, 44,
26, 39, 29, 35, 32, 21, 21, 15, 25, 15
"""
ydata = [float(s) for s in ydata_str.split(",")]
edata = np.sqrt(ydata)
# Fitting function.
def background(x, c0, c1, c2):
return c0 + c1 * x + c2 * x**2
def signal(x, a0, a1, a2):
return 0.5 * a0 * a1 / np.pi / ((x - a2) ** 2 + 0.25 * a1**2)
def fit_f(x, c0, c1, c2, a0, a1, a2):
return background(x, c0, c1, c2) + signal(x, a0, a1, a2)
# Perform fitting.
model = mt.fit(fit_f, xdata, ydata, edata, p0=(1, 1, 1, 1, 0.2, 1))
print(model)
# Plot with the fitted curve.
fig, ax = plt.subplots()
ax.errorbar(xdata, ydata, edata, (xdata[1] - xdata[0]) / 2, ".", c="k", label="Data")
x = np.linspace(0, 3, 200)
ax.plot(x, background(x, *model.popt[0:3]), c="r", label="Background fit")
ax.plot(x, signal(x, *model.popt[3:6]), c="b", label="Signal fit")
ax.plot(x, model(x), c="m", label="Global fit")
ax.legend()
mt.grid(ax)
plt.show()
```
# Mathematical and Computational Geophysics
## Exam
### November 23, 2021
Before submitting this *notebook*, make sure it executes as expected.
1. Restart the kernel.
- To do so, select Kernel$\rightarrow$Restart from the main menu.
2. Fill in all the cells that say:
- `YOUR CODE HERE` or
- "YOUR ANSWER HERE"
3. Put your name in the next cell (and the names of your collaborators, if any).
4. Once the exercise is finished, click the Validate button and make sure there are no execution errors.
```
NAME = ""
COLLABORATORS = ""
```
---
# Steady-state heat convection-diffusion
Consider the following problem:
$$
\begin{eqnarray*}
c_p \rho \frac{\partial}{\partial x} \left( u T \right) -
\frac{\partial }{\partial x} \left( \kappa \frac{\partial T}{\partial x}\right) & = &
S \\
T(0) & = & 1 \\
T(L) & = & 0
\end{eqnarray*}
$$
<img src="conv03.png" width="300" align="middle">
The analytic solution is:
$$
\displaystyle
T(x) = \frac{\exp\left(\frac{\rho u x}{\kappa}\right) - 1 }{\exp\left(\frac{\rho u L}{\kappa}\right) - 1} (T_L - T_0) + T_0
$$
Implement the numerical solution with finite differences in Python.
Use the following data:
- $L = 1.0$ [m],
- $c_p = 1.0$ [J / kg $^\text{o}$K],
- $\rho = 1.0$ [kg/m$^3$],
- $\kappa = 0.1$ [kg/m s],
- $S = 0$
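As a quick sanity check, the analytic solution above can be evaluated directly; the sketch below is independent of the exam scaffolding (the parameter names follow the problem statement, and the value of $u$ is chosen arbitrarily):

```python
import numpy as np

def analytic_T(x, u, L=1.0, rho=1.0, kappa=0.1, T0=1.0, TL=0.0):
    """Analytic solution of the steady convection-diffusion problem."""
    num = np.exp(rho * u * x / kappa) - 1.0
    den = np.exp(rho * u * L / kappa) - 1.0
    return num / den * (TL - T0) + T0

x = np.linspace(0.0, 1.0, 6)
print(analytic_T(x, u=0.1))  # endpoints recover T(0) = 1 and T(L) = 0
```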
## Centered Differences
1. Implement using the **centered-difference scheme for the advective term** and run the following tests:
 1. $u = 0.1$ [m/s], with $6$ nodes.<br>
 2. $u = 2.5$ [m/s], with $6$ nodes.<br>
 3. $u = 2.5$ [m/s], with $N$ such that the error is below $0.005$.<br>
In every case, compare the numerical solution with the analytic one, computing the error with the formula $E = ||T_a - T_n||_\infty$. Generate figures similar to the following:
<table>
<tr>
<td><img src="caso1c.png" width="300"></td>
<td><img src="caso2c.png" width="300"></td>
</tr>
</table>
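The error measure $E = ||T_a - T_n||_\infty$ requested above is simply the largest pointwise deviation between the two solutions; in NumPy it is a one-liner (sketch, with made-up data):

```python
import numpy as np

def error_inf(Ta, Tn):
    """Infinity-norm error between analytic and numerical solutions."""
    return np.max(np.abs(np.asarray(Ta) - np.asarray(Tn)))

print(error_inf([1.0, 0.5, 0.0], [1.0, 0.45, 0.02]))
```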
```
import numpy as np
import matplotlib.pyplot as plt
# Plotting style parameters
plt.style.use('seaborn-paper')
params = {'figure.figsize' : (10,7),
# 'text.usetex' : True,
'xtick.labelsize': 20,
'ytick.labelsize': 20,
'axes.labelsize' : 24,
'axes.titlesize' : 24,
'legend.fontsize': 24,
'lines.linewidth': 3,
'lines.markersize': 10,
'grid.color' : 'darkgray',
'grid.linewidth' : 0.5,
'grid.linestyle' : '--',
'font.family': 'DejaVu Serif',
}
plt.rcParams.update(params)
def mesh(L,N):
"""
Esta función calcula el h y las coordenadas de la malla
Parameters
----------
L : float
Longitud del dominio.
N : int
Número de incógnitas (sin las fronteras)
Returns
-------
h, x: el tamaño h de la malla y las coordenadas en la dirección x
"""
# YOUR CODE HERE
raise NotImplementedError()
def Laplaciano1D(par):
"""
Esta función calcula los coeficientes de la matriz de
diferencias finitas.
Paremeters
----------
par: dict
Diccionario que contiene todos los datos del problema.
Returns
----------
A : la matriz de la discretización.
"""
N = par['N']
h = par['h']
alpha = par['alpha']
cp = par['cp']
rho = par['rho']
u = par['u']
# YOUR CODE HERE
raise NotImplementedError()
a = b + c
A = np.zeros((N,N))
A[0,0] = a
A[0,1] = -b
for i in range(1,N-1):
A[i,i] = a
A[i,i+1] = -b
A[i,i-1] = -c
A[N-1,N-2] = -c
A[N-1,N-1] = a
return A
def RHS(par):
"""
Esta función calcula el lado derecho del sistema lineal.
Paremeters
----------
par: dict
Diccionario que contiene todos los datos del problema.
Returns
----------
f : el vector del lado derecho del sistema.
"""
N = par['N']
h = par['h']
alpha = par['alpha']
cp = par['cp']
rho = par['rho']
u = par['u']
T0 = par['BC'][0]
TL = par['BC'][1]
f = np.zeros(N)
# YOUR CODE HERE
raise NotImplementedError()
f[0] = c * T0
f[N-1] = b * TL
return f
def plotSol(par, x, T, E):
"""
Función de graficación de la solución analítica y la numérica
"""
titulo = 'u = {}, N = {}'.format(par['u'], par['N'])
error = '$||E||_\infty$ = {:10.8f}'.format(E)
plt.figure(figsize=(10,5))
plt.title(titulo + ', ' + error)
plt.scatter(x,T, zorder=5, s=100, fc='C1', ec='k', alpha=0.75, label='Numerical')
plt.plot(x,T, 'C1--', lw=1.0)
xa, Ta = analyticSol(par)
plt.plot(xa,Ta,'k-', label='Analytic')
plt.xlim(-0.1,1.1)
plt.ylim(-0.1,1.3)
plt.xlabel('x [m]')
plt.ylabel('T[$^o$C]')
plt.grid()
plt.legend(loc='lower left')
plt.show()
def analyticSol(par, NP = 100):
"""
Calcula la solución analítica
Paremeters
----------
par: dict
Diccionario que contiene todos los datos del problema.
NP: int
Número de puntos para calcular la solución analítica. Si no se da
ningún valor usa 100 puntos para hacer el cálculo.
Returns
----------
xa, Ta : un arreglo (xa) con las coordenadas donde se calcula la
solución analítica y otro arreglo (Ta) con la solución analítica.
"""
L = par['L']
rho = par['rho']
u = par['u']
alpha = par['alpha']
T0 = par['BC'][0]
TL = par['BC'][1]
# YOUR CODE HERE
raise NotImplementedError()
def numSol(par):
"""
Función que calcula la matriz del sistema (A), el lado derecho (f)
y con esta información resuelve el sistema lineal para obtener la
solución.
Paremeters
----------
par: dict
Diccionario que contiene todos los datos del problema.
Returns
----------
T : un arreglo (T) con la solución analítica.
"""
# YOUR CODE HERE
raise NotImplementedError()
def error(Ta, Tn):
"""
Función que calcula el error de la solución numérica.
Paremeters
----------
Ta: array
Arreglo con la solución analítica.
T: array
Arreglo con la solución numérica.
Returns
----------
E : float
Error de la solución numérica con respecto a la analítica.
"""
# YOUR CODE HERE
raise NotImplementedError()
def casos(u, N):
"""
Función para resolver cada caso que usa las funciones anteriores.
Paremeters
----------
u: float
Velocidad.
N: int
Número de incógnitas.
"""
# Dictionary that stores the problem data
par = {}
par['L'] = 1.0 # m
par['cp'] = 1.0 # [J / Kg K]
par['rho'] = 1.0 # kg/m^3
par['u'] = u # m/s
par['alpha'] = 0.1 # kg / m.s
par['BC'] = (1.0, 0.0) # Boundary conditions
par['N'] = N # Number of unknowns
h, x = mesh(par['L'], par['N'])
par['h'] = h
# Array where the numerical solution will be stored
N = par['N']
T = np.zeros(N+2)
T[0] = par['BC'][0] # Boundary condition at x = 0
T[N+1] = par['BC'][1] # Boundary condition at x = L
# Run the function to obtain the solution
T[1:N+1] = numSol(par)
# Compute the analytic solution
_, Ta = analyticSol(par, N+2)
# Compute the error
Error = error(Ta, T)
# Plot the solution
plotSol(par, x, T, Error)
```
### Case 1.A.
- u = 0.1
- N = 6
```
# YOUR CODE HERE
raise NotImplementedError()
```
### Case 1.B.
- u = 2.5
- N = 6
```
# YOUR CODE HERE
raise NotImplementedError()
```
### Case 1.C.
- u = 2.5
- N = ?
```
# YOUR CODE HERE
raise NotImplementedError()
```
## Upwind
2. Implement using the **upwind scheme for the advective term** and run the following tests:
 1. $u = 0.1$ [m/s], with $6$ nodes.<br>
 2. $u = 2.5$ [m/s], with $6$ nodes.<br>
 3. $u = 2.5$ [m/s], with $N$ such that the error is below $0.1$.<br>
In every case, compare the numerical solution with the analytic one, computing the error with the formula $E = ||T_a - T_n||_\infty$. Generate figures similar to the following:
<table>
<tr>
<td><img src="caso1u.png" width="300"></td>
<td><img src="caso2u.png" width="300"></td>
</tr>
</table>
```
def Laplaciano1D(par):
"""
Esta función calcula los coeficientes de la matriz de
diferencias finitas.
Paremeters
----------
par: dict
Diccionario que contiene todos los datos del problema.
Returns
----------
A : la matriz de la discretización.
"""
N = par['N']
h = par['h']
alpha = par['alpha']
cp = par['cp']
rho = par['rho']
u = par['u']
# YOUR CODE HERE
raise NotImplementedError()
a = b + c
A = np.zeros((N,N))
A[0,0] = a
A[0,1] = -b
for i in range(1,N-1):
A[i,i] = a
A[i,i+1] = -b
A[i,i-1] = -c
A[N-1,N-2] = -c
A[N-1,N-1] = a
return A
def RHS(par):
"""
Esta función calcula el lado derecho del sistema lineal.
Paremeters
----------
par: dict
Diccionario que contiene todos los datos del problema.
Returns
----------
f : el vector del lado derecho del sistema.
"""
N = par['N']
h = par['h']
alpha = par['alpha']
cp = par['cp']
rho = par['rho']
u = par['u']
T0 = par['BC'][0]
TL = par['BC'][1]
f = np.zeros(N)
# YOUR CODE HERE
raise NotImplementedError()
f[0] = c * T0
f[N-1] = b * TL
return f
```
### Case 2.A.
- u = 0.1
- N = 6
```
# YOUR CODE HERE
raise NotImplementedError()
```
### Case 2.B.
- u = 2.5
- N = 6
```
# YOUR CODE HERE
raise NotImplementedError()
```
### Case 2.C.
- u = 2.5
- N = ?
```
# YOUR CODE HERE
raise NotImplementedError()
```
<a href="https://colab.research.google.com/github/DiGyt/snippets/blob/master/NeuropynamicsToolboxFirstDraft.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
BSD 3-Clause License
Copyright (c) 27.07.2020, Dirk Gütlin
All rights reserved.
# *Simulate biological networks of neurons*
---
This is a first and very crude idea to build a simple Python Toolbox that should be able to simulate:
1. Models of Biological Neurons
2. Models of Neuron Connections (Dendrites/Axons)
3. Models of Biological Neural Networks, including Neurons and Neuron Connections
The toolbox should be modular, easy to use, easily scalable, and provide meaningful visualizations on all levels.
This notebook gives a first example for a possible general workflow of these models.
##### Imports
For now, we only rely on numpy.
```
import numpy as np
```
All other imports are only required for plotting.
```
!pip install mne
import mne
import networkx as nx
import matplotlib.pyplot as plt
```
### Define Neuron, Dendrite, and Network models
```
class IzhikevichNeuron():
"""Implementation of an Izhikevich Neuron."""
def __init__(self, dt=0.5, Vmax=35, V0=-70, u0=-14):
# Initialize starting parameters for our neuron
self.dt = dt
self.Vmax = Vmax
self.V = V0
self.u = u0
self.I = 0
def __call__(self, I, a=0.02, b=0.2, c=-65, d=8):
"""Simulate one timestep of our Izhikevich Model."""
if self.V < self.Vmax: # build up spiking potential
# calculate the membrane potential
dv = (0.04 * self.V + 5) * self.V + 140 - self.u
V = self.V + (dv + self.I) * self.dt
# calculate the recovery variable
du = a * (b * self.V - self.u)
u = self.u + self.dt * du
else: # spiking potential is reached
V = c
u = self.u + d
# limit the spikes at Vmax
V = self.Vmax if V > self.Vmax else V
# assign the t-1 states of the model
self.V = V
self.u = u
self.I = I
return V
class Dendrite():
"""A dendrite-axon model capable of storing multiple action potentials over a
course of time steps."""
def __init__(self, weight=1, temp_delay=1):
self.weight = weight
self.temp_delay = temp_delay
self.action_potentials = []
def __call__(self, ap_input):
"""Simulate one time step for this dendrite."""
# simulate the next timestep in the dendrite
new_ap_state = []
ap_output = 0
for ap, t in self.action_potentials:
# if the AP has travelled through the dendrite, return output
if t == 0:
ap_output += ap * self.weight
# else countdown the timesteps for remaining APs in the dendrite
else:
new_ap_state.append((ap, t - 1))
self.action_potentials = new_ap_state
# enter a new AP into the dendrite
if ap_input != 0:
self.action_potentials.append((ap_input, self.temp_delay))
return ap_output
class BNN():
"""A biological neural network connecting multiple neuron models."""
def __init__(self, neurons, connections):
self.neurons = neurons
self.connections = connections
self.neuron_states = np.zeros(len(neurons))
def __call__(self, inputs=[0]):
"""Simulates one timestep in our BNN, while allowing additional external
input being passed as a list of max length = len(BNN.neurons), where
one inputs[i] corresponds to an action potential entered into BNN.neurons[i]
at this timestep."""
# add the external inputs to the propagated neuron inputs
padded_inputs = np.pad(inputs, (0, len(self.neurons) - len(inputs)), 'constant')
neuron_inputs = self.neuron_states + padded_inputs
# process all the neurons
#TODO: neuron outputs are atm represented as the deviation from their respective V0 value
neuron_outputs = [neuron(i) + 70 for neuron, i in zip(self.neurons, neuron_inputs)]
# update the future neuron inputs by propagating them through the connections
neuron_states = np.zeros(len(self.neurons))
for (afferent, efferent, connection) in self.connections:
neuron_states[efferent] += connection(neuron_outputs[afferent])
# we need to round in order to prevent rounding errors
neuron_states = np.round(neuron_states, 9)
self.neuron_states = neuron_states
return neuron_outputs
# TODO: The plotting function is really ugly and should be redone.
def plot(self, **kwargs):
"""A crude way of plotting the network, by transforming it to a networkX graph."""
graph = nx.MultiDiGraph()
graph.add_nodes_from(range(len(self.neurons)))
graph.add_edges_from([(eff, aff, connection.temp_delay) for aff, eff, connection in self.connections])
pos = nx.circular_layout(graph)
nx.draw_networkx_nodes(graph, pos, **kwargs)
ax = plt.gca()
for e in graph.edges:
ax.annotate("",
xy=pos[e[0]], xycoords='data',
xytext=pos[e[1]], textcoords='data',
arrowprops=dict(arrowstyle="->", color="0.5",
shrinkA=10, shrinkB=10,
patchA=None, patchB=None,
connectionstyle="arc3,rad=rrr".replace('rrr',str(0.05 + 0.1*e[2])),),)
plt.axis('off')
plt.show()
```
### Simulate Neurons and Dendrites
Start with simulating an Izhikevich Neuron
```
# define the neuron
delta_time = 0.5 # step size in ms
neuron = IzhikevichNeuron(dt=0.5)
# define the simulation length (in timesteps of delta_time)
sim_steps = 1000
times = [t*delta_time for t in range(sim_steps)]
# plot regular spiking
plt.plot(times, [neuron(I=10) for t in times])
plt.title("Regular Izhikevich Neuron")
plt.xlabel("Time in ms")
plt.ylabel("Voltage in mV")
# plot chattering neuron
plt.figure()
plt.plot(times, [neuron(I=10, c=-50, d=2) for t in times])
plt.title("Chattering Izhikevich Neuron")
plt.xlabel("Time in ms")
plt.ylabel("Voltage in mV")
# create a single impulse response
inputs = np.zeros(len(times))
inputs[200:210] = 10
plt.figure()
plt.plot(times, [neuron(I=i) for i in inputs])
plt.title("Regular Izhikevich Neuron under single pulse")
plt.xlabel("Time in ms")
plt.ylabel("Voltage in mV")
```
Get an intuition for how APs travel along the Dendrite
```
dendrite = Dendrite(weight=1, temp_delay=5)
# print out the APs travelling through our dendrite
for ap in [0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]:
output = dendrite(ap)
print(output, dendrite.action_potentials)
```
### Simulate a full network
First, we generate a Network Model and visualize it.
```
# define a network model, created from 5 connected Izhikevich Neurons
bnn = BNN(neurons=[IzhikevichNeuron() for i in range(5)],
connections=[(0, 1, Dendrite()),
(0, 2, Dendrite(weight=0.5)),
(0, 3, Dendrite(temp_delay=3)),
(0, 4, Dendrite(temp_delay=5)),
(1, 2, Dendrite(weight=0.8, temp_delay=4)),
(2, 3, Dendrite(temp_delay=1)),
(2, 3, Dendrite(temp_delay=4)),
(3, 2, Dendrite(weight=0.3, temp_delay=3))])
bnn.plot()
```
Run the network without any inputs for 1000 timesteps.
Ideally, the neurons should produce no output.
```
timesteps=1000
state_log = np.empty([len(bnn.neurons), timesteps])
for i in range(timesteps):
neuron_states = bnn()
state_log[:, i] = neuron_states
```
Plot the outputs of each neuron, using the MNE Toolbox for neurophysiological data analysis.
```
info = mne.create_info(ch_names=["Neuron0", "Neuron1",
"Neuron2", "Neuron3",
"Neuron4"], sfreq=1000/0.5)
data = mne.io.RawArray(state_log, info)
d = data.plot(scalings=dict(misc=1e2))
```
Simulate the data again, but this time inject a short 10 mV voltage burst into the first neuron after 100 ms
```
# give input to the first neuron
n_times = 2000
inputs = np.zeros([n_times, 2])
inputs[200:220, 0] = 10
state_log = np.empty([len(bnn.neurons), n_times])
for ind, current in enumerate(inputs):
neuron_states = bnn(current)
state_log[:, ind] = neuron_states
# plot it
info = mne.create_info(ch_names=["Neuron0", "Neuron1",
"Neuron2", "Neuron3",
"Neuron4"], sfreq=1000/0.5)
data = mne.io.RawArray(state_log, info)
d = data.plot(scalings=dict(misc=1e2))
```
# Add a new language to SoS
It is relatively easy to define a new language module to allow SoS to exchange variables with a kernel. To make the extension available to other users, you will need to create a package with proper entry points. Please check documentation on [`Extending SoS`](Extending_SoS.html) for details.
SoS needs to know a few things before it can support a language properly,
1. The Jupyter kernel this language uses to work with Jupyter, which is the `ir` kernel for the `R` language.
2. How to translate a Python object to a **similar** object in this language
3. How to translate an object in this language to a **similar** object in Python.
4. The color of the prompt of cells executed by this language.
5. (Optional but recommended) Information about a running session.
6. Optional options for interacting with the language on the frontend.
It is important to understand that **SoS does not transfer any variables among kernels; it creates independent homonymous variables of similar types that are native to the destination language**. For example, for the following two variables
```
a = 1
b = c(1, 2)
```
in R, SoS executes the following statements to create variables `a` and `b` in Python
```
a = 1
b = [1, 2]
```
Note that `a` and `b` are of different types in Python although they are of the same type `numeric` in `R`.
## Define a new language Module
To support a new language, you will need to write a Python package that defines a class, say `mylanguage`, that should provide the following class attributes:
1. `supported_kernels`: a dictionary of language and names of the kernels that the language supports, such as `{'R': ['ir']}`. If multiple kernels are supported, SoS will look for a kernel with matched name in the order that is specified (e.g. `{'JavaScript': ['ijavascript', 'inodejs']}`). Multiple languages can be specified if a language module supports multiple languages (e.g. `Matlab` and `Octave`).
2. `background_color`: a name or `#XXXXXX` value for a color that will be used in the prompt area of cells that are executed by the subkernel. An empty string can be used for using default notebook color. If the language module defines multiple languages, a dictionary `{language: color}` can be used to specify different colors for supported languages.
3. `cd_command`: A command to change current working directory, specified with `{dir}` intepolated with option of magic `%cd`. For example, the command for R is `'setwd({dir!r})'` where `!r` quotes the provided `dir`.
4. `options`: A Python dictionary with options that will be passed to the frontend. Currently two options `variable_pattern` and `assignment_pattern` are supported. Both options should be regular expressions in JS style.
* Option `variable_pattern` is used to identify if a statement is a simple variable (nothing else). If this option is defined and the input text (if executed at the side panel) matches the pattern, SoS will prepend `%preview` to the code. This option is useful only when `%preview var` displays more information than `var`.
* Option `assignment_pattern` is used to identify if a statement is an assignment operation. If this option is defined and the input text matches the pattern, SoS will prepend `%preview var` to the code where `var` should be the first matched portion of the pattern (use `( )`). This mechanism allows SoS to automatically display result of an assignment when you step through the code.
An instance of the class would be initialized with the sos kernel and the name of the subkernel, which does not have to be one of the `supported_kernels` (could be self-defined) and should provide the following attributes and functions:
1. `init_statements`: a statement that will be executed by the sub-kernel when the kernel starts. This statement usually defines functions that convert objects to Python.
2. `get_vars`: a Python function that transfers Python variables to the subkernel.
3. `put_vars`: a Python function that put one or more variables in the subkernel to SoS or another subkernel.
4. `sessioninfo`: a Python function that returns information of the running kernel, usually including version of the language, the kernel, and currently used packages and their versions. For `R`, this means a call to `sessionInfo()` function. The return value of this function can be a string, a list of strings or `(key, value)` pairs, or a dictinary. The function will be called by the `%sessioninfo` magic of SoS.
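Putting the pieces together, a minimal language module might look like the sketch below. Only the attribute and method names follow the interface documented above; the class name `sos_ruby`, the kernel name, the color, and the `cd` template are illustrative assumptions, not part of SoS.

```python
# A minimal sketch of a SoS language module (hypothetical Ruby support).
# The attribute/method names follow the documented interface; the bodies
# are illustrative placeholders only.
class sos_ruby:
    supported_kernels = {'Ruby': ['iruby']}   # language -> kernel names, in preference order
    background_color = '#EA5745'              # prompt-area color for subkernel cells
    cd_command = "Dir.chdir({dir!r})"         # template used by the %cd magic
    options = {}                              # optional frontend options

    def __init__(self, sos_kernel, kernel_name='iruby'):
        self.sos_kernel = sos_kernel          # access to the SoS kernel
        self.kernel_name = kernel_name
        # executed in the subkernel at startup, e.g. helper definitions
        self.init_statements = ''

    def get_vars(self, var_names):
        # create equivalents of the named SoS variables in the subkernel
        pass

    def put_vars(self, var_names, to_kernel=None):
        # return a dict (merged into SoS) or a string to evaluate in the
        # destination kernel
        return {}

    def sessioninfo(self):
        return 'ruby (illustrative placeholder)'
```

The `get_vars` and `put_vars` stubs are filled in by the sections that follow.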
## Obtain variable from SoS
The `get_vars` function should be defined as
```
def get_vars(self, var_names)
```
where
* `self` is the language instance with access to the SoS kernel, and
* `var_names` are names in the sos dictionary.
This function is responsible for probing the type of each Python variable and creating a similar object in the subkernel.
For example, to create a Python object `b = [1, 2]` in `R` (magic `%get`), this function could
1. Obtain an R expression to create this variable (e.g. `b <- c(1, 2)`)
2. Execute the expression in the subkernel to create variable `b` in it.
Note that the function `get_vars` can change the variable name because a valid variable name in Python might not be a valid variable name in another language. The function should give a warning if this happens.
## Send variables to other kernels
The `put_vars` function should be defined as
```
def put_vars(self, var_names, to_kernel=None)
```
where
1. `self` is the language instance with access to the SoS kernel
2. `var_names` is a list of variables that should exist in the subkernel. Because a subkernel is responsible for sharing variables with names starting with `sos` with SoS automatically, this function is called to pass these variables even when `var_names` is empty.
3. `to_kernel` is the destination kernel to which the variables should be passed.
Depending on the destination kernel, this function can do one of the following:
* If the destination kernel is `sos`, the function should return a dictionary of variables that will be merged to the SoS dictionary.
* If direct variable transfer is not supported by the language, the function can return a Python dictionary, in which case the language transfers the variables to SoS and let SoS pass along to the destination kernel.
* If direct variable transfer is supported, the function should return a string. SoS will evaluate the string in the destination kernel to pass variables directly to the destination kernel.
So basically, a language can start with an implementation of `put_vars(to_kernel='sos')` and let SoS handle the rest. If the need arises, it can
* Implement variable exchange between instances of the same language. This can be useful because there are usually lossless and more efficient methods in this case.
* Put variables to another language where direct variable transfer is much more efficient than transferring through SoS.
For example, to send a `R` object `b <- c(1, 2)` from subkernel `R` to `SoS` (magic `%put`), this function can
1. Execute a statement in the subkernel to get the value(s) of the variable(s) in some format, for example, a string `"{'b': [1, 2]}"`.
2. Post-process these variables to return a dictionary to SoS.
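A sketch of the SoS-side post-processing, assuming the subkernel was asked to print its variables as a Python literal (this exchange format is an assumption here; real modules often serialize via JSON or packages such as R's `jsonlite` instead):

```python
import ast

def parse_subkernel_output(text):
    # The subkernel serialized its variables as a Python literal such as
    # "{'b': [1, 2]}"; literal_eval safely turns that string back into a
    # dictionary that SoS can merge into its own dictionary.
    result = ast.literal_eval(text)
    if not isinstance(result, dict):
        raise ValueError('expected a dictionary of variables')
    return result
```

For example, `parse_subkernel_output("{'b': [1, 2]}")` yields `{'b': [1, 2]}`.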
The [`R` sos extension](https://github.com/vatlab/SOS/blob/master/src/sos/R/kernel.py) provides a good example to get you started.
**NOTE**: Unlike other language extension mechanisms in which the Python module can get hold of the "engine" of the interpreter (e.g. `saspy` and Matlab's Python extension start the interpreter for direct communication) or has access to a lower-level API of the language (e.g. `rpy2`), SoS only has access to the interface of the language and performs all conversions by executing commands in the subkernels and intercepting their responses. Consequently,
1. Data exchange can be slower than with other methods.
2. Data exchange is less dependent on the version of the interpreter.
3. Data exchange can happen between a local and a remote kernel.
Also, although it can be more efficient to save large datasets to disk files and load in another kernel, this method does not work for kernels that do not share the same filesystem. We currently ignore this issue and assume all kernels have access to the same file system.
## Registering the new language module
To register additional language modules with SoS, you will need to add your modules to section `sos-language` and other relevant sections of entry points of `setup.py`. For example, you can create a package with the following entry_points to provide support for ruby.
```
entry_points='''
[sos-language]
ruby = sos_ruby.kernel:sos_ruby
[sos-targets]
Ruby_Library = sos_ruby.target:Ruby_Library
'''
```
With the installation of this package, `sos` would be able to obtain a class `sos_ruby` from module `sos_ruby.kernel`, and use it to work with the `ruby` language.
| github_jupyter |
```
from datascience import *
from datascience.predicates import are
path_data = '../../../data/'
import numpy as np
import matplotlib
matplotlib.use('Agg')  # the 'warn' keyword was removed in newer Matplotlib
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
from urllib.request import urlopen
import re
def read_url(url):
return re.sub('\\s+', ' ', urlopen(url).read().decode())
```
# Plotting the classics
In this example, we will explore statistics for two classic novels: *The Adventures of Huckleberry Finn* by Mark Twain, and *Little Women* by Louisa May Alcott. The text of any book can be read by a computer at great speed. Books published before 1923 are currently in the *public domain*, meaning that everyone has the right to copy or use the text in any way. [Project Gutenberg](http://www.gutenberg.org/) is a website that publishes public domain books online. Using Python, we can load the text of these books directly from the web.
This example is meant to illustrate some of the broad themes of this text. Don't worry if the details of the program don't yet make sense. Instead, focus on interpreting the images generated below. Later sections of the text will describe most of the features of the Python programming language used below.
First, we read the text of both books into lists of chapters, called `huck_finn_chapters` and `little_women_chapters`. In Python, a name cannot contain any spaces, and so we will often use an underscore `_` to stand in for a space. The `=` in the lines below gives a name on the left to the result of some computation described on the right. A *uniform resource locator* or *URL* is an address on the Internet for some content; in this case, the text of a book. The `#` symbol starts a comment, which is ignored by the computer but helpful for people reading the code.
```
# Read two books, fast!
huck_finn_url = 'https://www.inferentialthinking.com/chapters/01/3/huck_finn.txt'
huck_finn_text = read_url(huck_finn_url)
huck_finn_chapters = huck_finn_text.split('CHAPTER ')[44:]
little_women_url = 'https://www.inferentialthinking.com/chapters/01/3/little_women.txt'
little_women_text = read_url(little_women_url)
little_women_chapters = little_women_text.split('CHAPTER ')[1:]
```
While a computer cannot understand the text of a book, it can provide us with some insight into the structure of the text. The name `huck_finn_chapters` is currently bound to a list of all the chapters in the book. We can place them into a table to see how each chapter begins.
```
# Display the chapters of Huckleberry Finn in a table.
Table().with_column('Chapters', huck_finn_chapters)
```
Each chapter begins with a chapter number in Roman numerals, followed by the first sentence of the chapter. Project Gutenberg has printed the first word of each chapter in upper case.
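Since each chapter string starts with its Roman numeral followed by a period, the chapter numbers can be pulled out with ordinary string operations. A small illustrative sketch, shown on made-up chapter openings rather than the actual downloaded text:

```python
# Illustrative only: extract the leading Roman numeral from each chapter string.
sample_chapters = [
    "I. YOU don't know about me without you have read a book ...",
    "II. WE went tiptoeing along a path amongst the trees ...",
]
numerals = [chapter.split('.')[0] for chapter in sample_chapters]
print(numerals)  # ['I', 'II']
```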
| github_jupyter |
<h2> ====================================================</h2>
<h1>MA477 - Theory and Applications of Data Science</h1>
<h1>Lesson 1: General Overview</h1>
<h4>Dr. Valmir Bucaj</h4>
<br>
United States Military Academy, West Point, AY20-2
<h2>=====================================================</h2>
<h2> Lecture Outline</h2>
<html>
<ol>
<li><b>Notation</b></li>
<br>
<li> <b>What is Machine/Statistical Learning?</b></li>
<br>
<li><b> Why's and How's of Estimating the Relationship Between <font color='red'> Predictors </font> and <font color='red'> Response</font></b></li>
<br>
<li> <b> Prediction Accuracy and Model Interpretability Trade-Off</b></li>
<br>
<li><b> Supervised vs. Unsupervised Models</b></li>
<br>
<li><b> Regression vs. Classification Models </b></li>
<br>
<li><b> Assessing Model Accuracy</b>
<ol>
<li> Mean Squared Error (MSE)</li>
<li> Confusion Matrix</li>
<li> ROC Curve </li>
<li> Cross-Validation</li>
</ol>
</li>
<br>
<li><b> Bias-Variance Trade-Off</b></li>
</ol>
<hr>
<hr>
<h2>Notation</h2>
<ul>
<li> $X$: predictors, features, independent variables</li>
<li> $Y$: response, target, dependent variable</li>
<li> $p$: number of predictors</li>
<li> $n$: number of samples </li>
</ul>
<br>
<h2>What is Machine/Statistical Learning?</h2>
<br>
We will use the terms <i> Statistical Learning</i> and <i> Machine Learning</i> interchangeably.
<ul>
<li>Roughly speaking, Machine Learning refers to a set of methods for estimating the systematic information that the <i> predictors/features</i>, denoted by $X$, provide about the <i> response</i>, denoted by $Y$.</li>
<li> Equivalently: it is a set of approaches for estimating the relationship between the <i> predictor variables </i> and the <i> response variable</i></li>
<br>
Specifically, suppose that we observe some quantitative response $Y$ and collect $p$ different features $X_1,\dots, X_p$ that we believe to be related to the response $Y$. Letting $X=(X_1,\dots, X_p)$ then we have
$$Y=f(X)+\epsilon$$
where $f$ represents the relationship or systematic information that the predictors $X$ provide about the response $Y$, and $\epsilon$ represents some random error term <b>independent</b> of $X-$ this stems from the fact that $Y$ may depend on other factors that are not among the $p$ features $X$.
<br>
So, roughly speaking, Machine Learning refers to all the different methods of estimating this $f$.
<br>
We will illustrate this with some examples below.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
%matplotlib inline
```
<h3> Mock Example 1</h3>
Let
<ul>
<li>$Y$: be yearly income</li>
<li> $X_1$: years of education post fifth grade</li>
<li>$X_2$: height of the person</li>
</ul>
Suppose we are interested in
<ul>
<li>Predicting $Y$ based on $X=(X_1,X_2)$ and
<li> Understanding how each of $X_1$ and $X_2$ is related to and affects $Y$.
</ul>
<b>Remark:</b> Don't worry about the code for now.
```
# generate the synthetic income data used in the plots below
x=np.linspace(4,17,40)
y=100/(1+np.exp(-x+10))+40
y_noise=np.random.normal(0,15,40)
x2=np.linspace(4,6.7,40)
np.random.shuffle(x2)
y_out=y+y_noise
income=pd.DataFrame({"X1":x.round(3), 'X2':x2.round(3), 'Y':y_out})
#This is what the information about the first 10 people look like
income.head()
plt.figure(figsize=(10,6))
plt.scatter(income['X1'],income['Y'],edgecolor='red',c='red',s=15)
plt.plot(x,y, label='True f')
plt.xlabel("Years of Education",fontsize=14)
plt.ylabel("Yearly Income",fontsize=14)
plt.legend(loc=2)
plt.show()
```
Since, in this mock case, the <b>Income</b> is a simulated data set, we know precisely how years of education is related to the yearly income. In other words, we know exactly what the function $f$ is (the blue curve above). However,
in practice $f$ is not known, and our goal will be to find a good estimate $\widehat f$ of $f$.
<br>
Another important question that one often wants to answer in practice is which features are most strongly related to the response and would make good predictors. Are there any features that seem not to carry any information about $Y$ at all? If so, which ones?
For example, in our mock case, <b>height</b> seems not to carry any information regarding a person's yearly income (it would be weird if it did!). See the plot below.
```
plt.figure(figsize=(10,6))
plt.scatter(income['X2'],y_out, c='red', s=15)
plt.xlabel('Height',fontsize=14)
plt.ylabel("Yearly Income",fontsize=14)
plt.show()
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection="3d")
def z_function(x1, x2):
return 10*x1+8*x2-100
x1=np.linspace(5,17,40)
x2=np.linspace(4,6,40)
np.random.shuffle(x2)
X, Y = np.meshgrid(x1, x2)
Z = z_function(X, Y)
Y_target=100/(1+np.exp(-x1+10))+30+np.random.normal(0,20,40)+np.sqrt(x2)
ax.plot_wireframe(X,Y,Z)
ax.plot_wireframe(X, Y, Z, color='green')
ax.set_xlabel('Years of Education',fontsize=14)
ax.set_ylabel('Height',fontsize=14)
ax.set_zlabel('Yearly Income',fontsize=14)
for i in range(len(x1)):
ax.plot([x1[i]],[x2[i]],[Y_target[i]],marker='o',color='r')
ax.set_xlabel('Years of Education',fontsize=14)
ax.set_ylabel('Height',fontsize=14)
ax.set_zlabel('Yearly Income',fontsize=14)
ax.view_init(10,245)
plt.show()
```
<h2> Why do we want to estimate $f$ ?</h2>
Typically there are two main reasons why it is of interest to estimate $f$: <b> prediction</b> and <b>inference</b>.
<ul>
<li><h3>Prediction</h3></li>
In many situations we can get a hold of the features for a particular target, but obtaining the value of the target variable is difficult and often impossible.
For example, imagine you want to know whether a patient will have a severe adverse reaction to a particular drug. One, albeit very undesirable, way to figure that out is by administering the drug and observing the effect. However, if the patient has an adverse reaction that causes damages, the hospital is liable to a lawsuit. So, you want to figure out a way to determine whether the patient will have an adverse reaction to the drug based, say, on some blood characteristics $X_1,\dots, X_p$. These blood markers may be readily obtained in the lab!
So, in this case we may predict $Y$ by using $$\widehat Y=\widehat f(X)$$
where $\widehat f$ is some estimate of $f$, and $\widehat Y$ is the resulting prediction for $Y$.
How accurate our estimate $\widehat Y$ is depends on two factors:
<ul>
<li><b> Reducible Error</b></li>
This error stems from the fact that our estimate $\widehat f$ of $f$ may not be perfect. However,
since we may potentially get a better estimate of $f$ via another method, this error is called <b> reducible</b>, as it may be further reduced.
<br>
<li><b>Irreducible Error</b></li>
This error stems from the fact that there may be other features, outside of $X=(X_1,\dots, X_p)$ that we have not measured, but that may play an important role in predicting $Y$. In other words, even if we could find a perfect estimate $\widehat f$ of $f$, that is $\widehat Y=f(X)$, there will still be some inherent error in the model for the simple fact that the features $X_1,\dots, X_p$, that we have measured, are just not sufficient for a perfect prediction of $Y$.
</ul>
Typically, when one is exclusively interested in prediction, then the specific form of $f$ is of little to no importance, and it is taken as a <b>black box</b>.
<li><h3> Inference</h3></li>
In practice we are often not so much interested in building the best prediction model, but rather in understanding specifically how $Y$ is affected as the features $X_1,\dots, X_p$ change.
In inference problems, the estimated $\widehat f$ may no longer be taken as a black box, but rather needs to be understood well.
Some questions of interest that we would want to answer are as follows:
<ul>
<li>Which predictors are associated with the response?</li>
<li>What is the relationship between the response and each predictor? Positive, negative, more complex?</li>
<li>Is the relationship between predictors and response linear or more complex?</li>
</ul>
<br>
Discuss the <b> Income</b> and the <b> Drug Adverse Reaction</b> cases from this perspective.
</ul>
<h2> How is $f$ Estimated?</h2>
<br>
To estimate $f$ you need data...often a lot of data...that will train or teach our method how to estimate $f$. The data used to train our method is referred to as <b> training data</b>.
For example, if $x_i=(x_{i1},x_{i2},\dots, x_{ip})$ for $i=1,2,\dots,n$ is the $i^{th}$ observation and $y_i$ the response associated with it, then the <b>training data </b> consists of
$$\big\{(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)\big\}.$$
There is an ocean of linear and non-linear methods of estimating $f$. Overall, they can be split into two categories: <br><b> parametric</b> and <b> non-parametric</b> methods.
<ul>
<li><h3> Parametric Methods</h3></li>
<br>
<b> Step 1:</b> Assume the form of $f$<br>
For example, one of the simplest assumptions we can make is that $f$ is linear, that is:
$$f(X)=\beta_0+\sum_{j=1}^p\beta_jX_j$$
<b> Step 2:</b> Estimate the coefficients $\{\beta_j\}_{j=0}^p$
Next, you need to use the training data and select a procedure to estimate the coefficients $\beta_0,\beta_1,\dots, \beta_p$; that is, find $\widehat \beta_0,\dots,\widehat \beta_p$ such that $$\widehat Y=\widehat \beta_0+\sum_{j=1}^p\widehat \beta_j X_j$$
One of the main <b>advantages</b> of parametric methods is that the problem is transformed from estimating an arbitrary and unknown $f$ to estimating a set of parameters, which in general is much easier to do and requires less data!
One of the main <b>disadvantages</b> of parametric methods is that the assumption you make about the form of $f$ may not closely match the true form of $f$, which may lead to poor estimates.
<b> Examples of parametric methods include:</b>
<ul>
<li> Simple Linear Regression (Least Squares)</li>
<li> Lasso & Ridge Regression</li>
<li> Logistic Regression </li>
<li> Neural Nets etc.</li>
</ul>
<li><h3> Non-Parametric Methods</h3></li>
<br>
Non-parametric methods do not make any assumptions about the form of $f$, but rather try to estimate it by approximating the training data as closely and as smoothly as possible.
One of the main <b>advantages</b> of non-parametric approaches is that, because they do not make any assumptions about the form of $f$, they can accommodate a wide range of possibilities, stand a better chance of approaching the true form of $f$, and as a result may have better predictive power.
One of the main <b>disadvantages</b> is that they typically require far more training data than parametric methods to successfully and correctly estimate $f$, and they may be prone to overfitting.
<b> Examples of non-parametric methods include:</b>
<ul>
<li> Decision Trees </li>
<li> K-Nearest Neighbor </li>
<li> Support Vector Machines etc.</li>
</ul>
</ul>
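The contrast can be made concrete with a tiny pure-Python sketch on made-up data: the parametric method commits to a line and estimates two coefficients by least squares, while the non-parametric method (here, k-nearest neighbors) makes no assumption about $f$ and just averages nearby training responses.

```python
# Toy training data (made up): y is roughly 2*x + 1 plus a little noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

# Parametric: assume f is linear and estimate beta0, beta1 by least squares.
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
beta1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
beta0 = y_bar - beta1 * x_bar

def predict_linear(x):
    return beta0 + beta1 * x

# Non-parametric: k-nearest neighbors averages the responses of the k
# training points closest to x.
def predict_knn(x, k=2):
    nearest = sorted(xs, key=lambda xi: abs(xi - x))[:k]
    return sum(ys[xs.index(xi)] for xi in nearest) / k

print(predict_linear(2.5), predict_knn(2.5))
```

The two estimated coefficients summarize the whole parametric fit, whereas the k-NN prediction has no such compact form and must consult the training data each time.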
<hr>
<hr>
<h2> Prediction Accuracy vs. Model Interpretability</h2>
As a rule of thumb, the less flexible a model is the more interpretable it may be, and vice versa, the more flexible a model is the less interpretable it may be!
For a pictorial view of where some of the models fall see Fig 1. below.
<br>
<hr>
<b> Remark:</b> You may ignore this part for now, but if interested, below is the code I used to generate the graph above:
```
models={'Lasso':(0.05,0.9), 'Ridge':(0.1,0.8), 'Least Squares':(0.2,0.7),'GANs':(0.45,0.55),
'Decision Trees':(0.5,0.5),'SVM':(0.8,0.2),'Boosting':(0.9,0.25), 'ANN':(0.9,0.1)}
plt.figure(figsize=(10,6))
for item,val in zip(models.keys(),models.values()):
plt.text(val[0],val[1],item, fontsize=14)
plt.xticks([0.1,0.5,1], ('Low','Medium','High'))
plt.yticks([0.1,0.5,1], ('Low','Medium','High'))
plt.xlim(0,1.1)
plt.ylim(0,1.1)
plt.xlabel("Flexibility",fontsize=14)
plt.ylabel('Interpretability',fontsize=14)
plt.text(0.1,-0.2," Fig 1. Trade-off between flexibility and interpretability ",fontsize=16)
plt.show()
```
<h2> Supervised vs. Unsupervised Learning Methods</h2>
<b>Supervised learning</b> describes the situations where for each observation of the predictor measurements $x_i,\, i=1,\dots,n$ there is a corresponding response measurement $y_i$.
Models that fall under the <b> supervised learning</b> category try to relate the response to the predictors in an attempt to accurately predict the response for future, previously unseen, observations or better understand the relationship of the response to predictors.
<br>
<b>Unsupervised learning</b> describes the situations where we observe predictor measurements $x_i,\, i=1,\dots,n$ but there is <b>no</b> associated response $y_i.$
Since it's not possible to make predictions without having an associated response variable, what sorts of analysis are possible in this scenario?
We can investigate the relationship between the <b>observations</b> or the <b>features</b> themselves!
<h3>Mock Example</h3>
Suppose we suspect there are a few <i>unknown</i> subtypes of skin cancer, and we have tasked a team of Data Scientists to try and confirm our suspicion.
We have collected the following measurements for each tissue sample from 150 different subjects: <b> mean radius, texture</b>, and <b> concavity</b>. There is no response/target variable here to supervise our analysis, so it is not possible to do any prediction analysis. So, what can we do?
A sample of the data is given in the <b> cancer</b> dataset below.
<font color='red' size='4'>Group Exercise</font>
Discuss the graphs below. Focus on what some of the important information we can extract from them and on their shortcomings.
<b> Remark</b> For the sake of this exercise you may completely ignore the code below which I used to generate the synthetic dataset and the graphs.
```
from sklearn.datasets import make_blobs
# generate the synthetic cancer data used in the plots below
def create_dataframe(feat_names,n_feat,n_samp,centers,std):
    X, y = make_blobs(n_samples=n_samp, centers=centers,cluster_std=std, n_features=n_feat,
                      random_state=0,center_box=(0,10))
    cancer=pd.DataFrame()
    for name, i in zip(feat_names,range(n_feat)):
        cancer[name]=X[:,i]
    return cancer,y
feat_names=['texture','mean_radius','concavity']
cancer,y=create_dataframe(feat_names,n_feat=3,n_samp=150,std=0.65,centers=3)
cancer.head(10)
plt.figure(figsize=(10,6))
plt.scatter(cancer['mean_radius'],cancer['concavity'], c=y)
#plt.scatter(cancer['mean_radius'],cancer['concavity'],c=y)
plt.xlabel('mean_radius',fontsize=13)
plt.ylabel("concavity",fontsize=14)
plt.show()
plt.figure(figsize=(10,6))
plt.scatter(cancer['mean_radius'],cancer['texture'], c=y)
plt.xlabel('mean_radius',fontsize=13)
plt.ylabel("texture",fontsize=14)
plt.show()
plt.figure(figsize=(10,6))
plt.scatter(cancer['texture'],cancer['concavity'], c=y)
plt.xlabel('texture',fontsize=13)
plt.ylabel("concavity",fontsize=14)
plt.show()
```
<h2> Regression vs. Classification Problems</h2>
Both regression and classification problems fall under the supervised learning realm.
Generally, problems with a <i> quantitative</i> response variable are referred to as <b> regression problems</b> and those with a <i>qualitative</i> response variable are referred to as <b> classification problems</b>.
Examples of:
<ul>
<li> <b>quantitative variables:</b> age, height, weight, average GPA, yearly income, blood pressure, etc.</li>
<li><b>qualitative variables:</b> gender, race, spam email (yes/no), credit card fraud (yes/no), etc.</li>
</ul>
<h2> Assessing Model Accuracy</h2>
Because there is no one best model that works for all data sets, in practice, it is very important and often very challenging to select the best model for a given dataset.
In order to be able to select one model over another, it is crucial to have a quantitative way of measuring a model's performance and quality of fit. In other words, for a given observation, we need a way to quantify how close the predicted response is to the true response.
<h3>Regression Setting</h3>
One widely used measure in the regression setting is the <i> mean squared error</i> (MSE):
$$MSE=\frac{1}{n}\sum_{i=1}^n\left(y_i-\widehat y_i\right)^2$$
where $y_i$ is the true response and $\widehat y_i$ is the prediction that $\widehat f$ gives for the the observation $x_i$;
that is $\widehat y_i=\widehat f(x_i)$.
A small MSE indicates that the true and predicted responses are close to each other; on the other hand, if some of the predicted responses are far away from the true ones, the MSE will tend to be large.
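The formula translates directly into code; here is a small sketch on made-up responses and predictions:

```python
# MSE = average of squared differences between true and predicted responses.
def mse(y_true, y_pred):
    n = len(y_true)
    return sum((y - y_hat) ** 2 for y, y_hat in zip(y_true, y_pred)) / n

y_true = [3.0, 5.0, 7.0]
close = [2.9, 5.2, 6.8]   # predictions near the truth -> small MSE
far = [1.0, 9.0, 3.0]     # far-off predictions -> large MSE
print(mse(y_true, close), mse(y_true, far))
```

The first call gives an MSE of 0.03, the second 12, illustrating how heavily the squaring penalizes predictions that miss by a lot.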
<h4> Training vs. Test MSE</h4>
Training MSE is measured using the training dataset, whereas test MSE is measured using observations which have not previously been seen by the model.
In practice, we care about the performance of our model on previously unseen data, hence <b> test MSE</b> is what should be used.
Selecting a model based on <b> training MSE</b> can lead to extremely poor performance, as training MSE and test MSE may behave very differently. Specifically, a model with a very low <b> training MSE</b> may have a very large <b>test MSE</b> (which is what we really care about).
Let's discuss the graphs below which illustrate this phenomenon:

<br>
<h3>Classification Setting</h3>
<ul>
<li> <b>Confusion Matrix</b></li>
A confusion matrix is a simple and neat way of portraying how well our algorithm is doing at classifying the data.
<b> Example:</b> Suppose we want to classify credit card transactions as <i>fraudulent</i> or <i>normal (non-fraudulent)</i> based on a certain number of features we have used to train our algorithm on.
We will designate <i>normal</i> as our positive class and <i>fraudulent</i> as the negative class.
<b> Notation:</b>
<ul>
<li> <b>TP</b>= True Positive</li>
<li><b>TN</b>=True Negative</li>
<li><b>FP</b>=False Positive</li>
<li><b>FN</b>=False Negative</li>
</ul>

Depending on the situation and the type of problem, we may be interested in some or all of the following measures:
<br>
<ul>
<li> <b> Accuracy Rate:</b> $$\frac{TP+TN}{Total} \text{ where }\, Total=TP+TN+FP+FN$$</li>
<li><b> Error Rate:</b> $$ 1-\frac{TP+TN}{Total}$$</li>
<li><b> True Positive Rate or Recall:</b> $$\frac{TP}{TP+FN}$$</li>
<li><b>False Positive Rate:</b> $$\frac{FP}{FP+TN}$$</li>
<br>
<li><b> Precision (if the algorithm predicts Normal, how often is it correct?):</b>
$$\frac{TP}{TP+FP}$$</li>
</ul>
<hr>
<br>
<font color='red' size='4'>Group Discussion</font>: Considering the credit-card fraud example, which of these measures do you think would be of most interest, and why? Specifically, would <b> accuracy</b> be a good choice for measuring the model's performance?
<hr>
</ul>
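All of the measures above are straightforward to compute from the four cell counts of the confusion matrix; the counts below are made up purely for illustration:

```python
# Made-up confusion-matrix counts for a fraud classifier
# (positive class = normal, negative class = fraudulent).
TP, TN, FP, FN = 950, 30, 15, 5
total = TP + TN + FP + FN

accuracy = (TP + TN) / total
error_rate = 1 - accuracy
recall = TP / (TP + FN)               # true positive rate
false_positive_rate = FP / (FP + TN)
precision = TP / (TP + FP)            # when we predict Normal, how often are we right?

print(accuracy, recall, precision)
```

Note that with such imbalanced classes the accuracy is high even though a third of the fraudulent transactions (FP = 15 out of 45 negatives) slip through, which previews the group discussion above.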
<h2> Bias-Variance Trade-Off</h2>
In every machine learning model there are always two competing properties that are at war with each other, namely <b> bias</b> and <b>variance</b> of the model.
Suppose there is a relationship between the predictor $X$ and the response $Y$, $$y=f(x)+\epsilon$$ and that using a machine learning model and some training set we estimate this relationship, that is $$\widehat y=\widehat f(x)$$
Now, given a new observation $x_0$, the error we observe in our model for this observation is:$$\widehat f(x_0)-f(x_0)-\epsilon$$
It is very important to notice that this error, among other things, also depends on the training set that was used to estimate $f$. A good and robust model would give good predictions regardless of which training set was used to compute $\widehat f$.
Hence, we can talk about the <i> average </i> error incurred due to possible different estimates of $f$ using different training sets.
That is, the average <i> test MSE</i> for a given new observation $x_0$ can always be decomposed into the following three components
$$E\left[\left(\widehat f(x_0)-f(x_0)-\epsilon\right)^2\right]=\left(Bias \widehat f(x_0)\right)^2+Var\left(\widehat f(x_0)\right)+Var(\epsilon)$$
So, a great model is one that has <b> low bias</b> and <b>low variance</b>.
What do we exactly mean by <b> variance</b> and <b>bias</b> of a ML model?
<ul>
<li><b> Bias of a ML model </b> refers to the error that is introduced by approximating a complex real-life problem by a simpler model. For example, approximating a real-life situation by a linear regression introduces bias, as it assumes that there is a linear relationship between predictors and the response, which may not necessarily be the case</li>
<li><b> Variance of a ML model </b> refers to the amount by which $\widehat f$ would change if we estimated it using different training sets. </li>
</ul>
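A small simulation makes these definitions concrete: repeatedly draw training sets from the same true relationship, fit a very rigid model (predict the training mean everywhere) and a very flexible one (1-nearest neighbor), and look at the systematic error and the spread of their predictions at a fixed point. The setup below (true $f(x)=x^2$, the noise level, and the counts) is made up purely for illustration.

```python
import random

random.seed(0)
f = lambda x: x * x            # the true (normally unknown) relationship
x0, trials, n = 0.9, 200, 30   # evaluation point, number of training sets, samples each

mean_preds, knn_preds = [], []
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    ys = [f(x) + random.gauss(0, 0.1) for x in xs]
    # rigid model: predict the training mean everywhere (high bias, low variance)
    mean_preds.append(sum(ys) / n)
    # flexible model: 1-nearest neighbor (low bias, high variance)
    knn_preds.append(min(zip(xs, ys), key=lambda p: abs(p[0] - x0))[1])

def bias_sq(preds):
    avg = sum(preds) / len(preds)
    return (avg - f(x0)) ** 2

def variance(preds):
    avg = sum(preds) / len(preds)
    return sum((p - avg) ** 2 for p in preds) / len(preds)

print('rigid   : bias^2=%.4f var=%.4f' % (bias_sq(mean_preds), variance(mean_preds)))
print('flexible: bias^2=%.4f var=%.4f' % (bias_sq(knn_preds), variance(knn_preds)))
```

Running this, the rigid model shows a much larger squared bias while the flexible model shows a much larger variance, exactly the trade-off described above.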
The following picture gives a good pictorial representation of this dynamic.

As a rule of thumb, the more complex and flexible a model is the higher the variance and the lower the bias and vice versa, the less flexible and simple a model the higher the bias and the lower the variance. This dynamic of bias-variance is the reason why <i> test MSE</i> is always U-shaped, as is illustrated in the graph below.

| github_jupyter |
# House Price Prediction
<p><b>Status: <span style=color:orange;>In progress</span></b></p>
##### LOAD THE FEATURE DATA
```
import pandas as pd
import numpy as np
X = pd.read_csv('../../../data/preprocessed_data/X.csv', sep=',')
print ('Feature data, shape:\nX: {}'.format(X.shape))
X.head()
y = pd.read_csv('../../../data/preprocessed_data/y.csv', sep=',', header=None)
print ('Target data, shape:\ny: {}'.format(y.shape))
y.head()
```
##### SPLIT THE DATA
```
from sklearn.model_selection import train_test_split
# set the seed for reproducibility
np.random.seed(127)
# split the dataset into 2 training and 2 testing sets
X_train, X_test, y_train, y_test = train_test_split(X.values, y.values, test_size=0.2, random_state=13)
print('Data shapes:\n')
print('X_train : {}\ny_train : {}\n\nX_test : {}\ny_test : {}'.format(X_train.shape,
y_train.shape,
X_test.shape,
y_test.shape))
```
##### DEFINE NETWORK PARAMETERS
```
# define number of attributes
n_features = X_train.shape[1]
n_target = 1 # quantitative data
# count number of samples in each set of data
n_train = X_train.shape[0]
n_test = X_test.shape[0]
# define amount of neurons
n_layer_in = n_features # one neuron per input feature
n_layer_h1 = 5 # first hidden layer
n_layer_h2 = 5 # second hidden layer
n_layer_out = n_target # 1 neuron in output layer
sigma_init = 0.01 # For randomized initialization
```
##### RESET TENSORFLOW GRAPH IF THERE IS ANY
```
import tensorflow as tf
# this will set up a specific seed in order to control the output
# and get more homogeneous results though every model variation
def reset_graph(seed=127):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
reset_graph()
```
##### MODEL ARCHITECTURE
```
# create symbolic variables
X = tf.placeholder(tf.float32, [None, n_layer_in], name="input")
Y = tf.placeholder(tf.float32, [None, n_layer_out], name="output")
# deploy the variables that will store the weights
W = {
'W1': tf.Variable(tf.random_normal([n_layer_in, n_layer_h1], stddev = sigma_init), name='W1'),
'W2': tf.Variable(tf.random_normal([n_layer_h1, n_layer_h2], stddev = sigma_init), name='W2'),
'W3': tf.Variable(tf.random_normal([n_layer_h2, n_layer_out], stddev = sigma_init), name='W3')
}
# deploy the variables that will store the bias
b = {
'b1': tf.Variable(tf.random_normal([n_layer_h1]), name='b1'),
'b2': tf.Variable(tf.random_normal([n_layer_h2]), name='b2'),
'b3': tf.Variable(tf.random_normal([n_layer_out]), name='b3')
}
# this will create the model architecture and output the result
def model_MLP(_X, _W, _b):
with tf.name_scope('hidden_1'):
layer_h1 = tf.nn.selu(tf.add(tf.matmul(_X,_W['W1']), _b['b1']))
with tf.name_scope('hidden_2'):
layer_h2 = tf.nn.selu(tf.add(tf.matmul(layer_h1,_W['W2']), _b['b2']))
with tf.name_scope('layer_output'):
layer_out = tf.add(tf.matmul(layer_h2,_W['W3']), _b['b3'])
return layer_out # these are the predictions
with tf.name_scope("MLP"):
y_pred = model_MLP(X, W, b)
```
##### DEFINE LEARNING RATE
```
learning_rate = 0.4
# CHOOSE A DECAYING METHOD IN HERE
model_decay = 'none' # [exponential | inverse_time | natural_exponential | polynomial | none]
global_step = tf.Variable(0, trainable=False)
decay_rate = 0.90
decay_step = 10000
if model_decay == 'exponential':
learning_rate = tf.train.exponential_decay(learning_rate, global_step, decay_step, decay_rate)
elif model_decay == 'inverse_time':
learning_rate = tf.train.inverse_time_decay(learning_rate, global_step, decay_step, decay_rate)
elif model_decay == 'natural_exponential':
learning_rate = tf.train.natural_exp_decay(learning_rate, global_step, decay_step, decay_rate)
elif model_decay == 'polynomial':
end_learning_rate = 0.001
learning_rate = tf.train.polynomial_decay(learning_rate, global_step, decay_step, end_learning_rate, power=0.5)
else:
decay_rate = 1.0
learning_rate = tf.train.exponential_decay(learning_rate, global_step, decay_step, decay_rate)
print('Decaying Learning Rate : ', model_decay)
```
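For reference, per the TF1 documentation, `tf.train.exponential_decay` (with the default `staircase=False`) computes `learning_rate * decay_rate ** (global_step / decay_steps)`. A plain-Python sketch of that schedule, using this notebook's settings:

```python
def exponential_decay(base_lr, global_step, decay_steps, decay_rate):
    """Decayed learning rate as computed by tf.train.exponential_decay
    with staircase=False."""
    return base_lr * decay_rate ** (global_step / decay_steps)

# With lr=0.4, decay_rate=0.90, decay_step=10000 as above:
for step in (0, 10000, 20000):
    print(step, exponential_decay(0.4, step, 10000, 0.90))
```

Note that the `'none'` branch above achieves a constant rate by setting `decay_rate = 1.0`, so the same formula yields `base_lr` at every step.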
##### DEFINE MODEL TRAINING AND MEASURE PERFORMANCE
```
with tf.name_scope("loss"):
loss = tf.square(Y - y_pred) # squared error
#loss = tf.nn.softmax(logits=y_pred) # softmax
#loss = tf.nn.log_softmax(logits=y_pred) # log-softmax
#loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=y_pred, dim=-1) # cross-entropy
#loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=y_pred) # sigmoid-cross-entropy
#loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y, logits=y_pred) # sparse-softmax-cross-entropy
loss = tf.reduce_mean(loss, name='MSE')
with tf.name_scope("train"):
#optimizer = tf.train.GradientDescentOptimizer(learning_rate) # SGD
#optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,momentum=0.9) # MOMENTUM
#optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate) # ADAGRAD
optimizer = tf.train.AdadeltaOptimizer(learning_rate=learning_rate) # ADADELTA
#optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate, decay=1) # RMS
training_op = optimizer.minimize(loss, global_step=global_step)
# Create summaries
tf.summary.scalar("loss", loss)
tf.summary.scalar("learn_rate", learning_rate)
# Merge all summaries into a single op to generate the summary data
merged_summary_op = tf.summary.merge_all()
```
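The active loss above reduces the elementwise squared error to a scalar mean. A minimal NumPy sketch of the same quantity:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, matching tf.reduce_mean(tf.square(Y - y_pred))."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    return float(np.mean(np.square(y_true - y_pred)))

print(mse([1.0, 2.0, 3.0], [1.0, 4.0, 2.0]))
```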
##### DEFINE DIRECTORIES FOR RESULTS
```
import sys
import shutil
from datetime import datetime
# set up the directory to store the results for tensorboard
now = datetime.utcnow().strftime('%Y%m%d%H%M%S')
root_ckpoint = 'tf_checkpoints'
root_logdir = 'tf_logs'
logdir = '{}/run-{}/'.format(root_logdir, now)
## Try to remove tree; if failed show an error using try...except on screen
try:
shutil.rmtree(root_ckpoint)
except OSError as e:
print("Error: %s - %s." % (e.filename, e.strerror))
```
##### EXECUTE THE MODEL
```
from datetime import datetime
# define some parameters
n_epochs = 40
display_epoch = 2 # checkpoint will also be created based on this
batch_size = 10
n_batches = int(n_train/batch_size)
# this will help to restore the model to a specific epoch
saver = tf.train.Saver(tf.global_variables())
# store the results through every epoch iteration
mse_train_list = []
mse_test_list = []
learning_list = []
prediction_results = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# write logs for tensorboard
summary_writer = tf.summary.FileWriter(logdir, graph=tf.get_default_graph())
for epoch in range(n_epochs):
for i in range(0, n_train, batch_size):
# create batches
X_batch = X_train[i:i+batch_size]
y_batch = y_train[i:i+batch_size]
# improve the model
_, _summary = sess.run([training_op, merged_summary_op], feed_dict={X:X_batch, Y:y_batch})
# Write logs at every iteration
summary_writer.add_summary(_summary)
# measure performance and display the results
if (epoch+1) % display_epoch == 0:
_mse_train = sess.run(loss, feed_dict={X: X_train, Y: y_train})
_mse_test = sess.run(loss, feed_dict={X: X_test, Y: y_test})
mse_train_list.append(_mse_train); mse_test_list.append(_mse_test)
learning_list.append(sess.run(learning_rate))
# Save model weights to disk for reproducibility
saver = tf.train.Saver(max_to_keep=15)
saver.save(sess, "{}/epoch{:04}.ckpt".format(root_ckpoint, (epoch+1)))
print("Epoch: {:04}\tTrainMSE: {:06.5f}\tTestMSE: {:06.5f}, Learning: {:06.7f}".format((epoch+1),
_mse_train,
_mse_test,
learning_list[-1]))
# store the predictive values
prediction_results = sess.run(y_pred, feed_dict={X: X_test, Y: y_test})
predictions = prediction_results
# output comparative table
dataframe = pd.DataFrame(predictions, columns=['Prediction'])
dataframe['Target'] = y_test
dataframe['Difference'] = dataframe.Target - dataframe.Prediction
print('\nPrinting results :\n\n', dataframe)
```
##### VISUALIZE THE MODEL'S IMPROVEMENTS
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
# set up legend
blue_patch = mpatches.Patch(color='blue', label='Train MSE')
red_patch = mpatches.Patch(color='red', label='Test MSE')
plt.legend(handles=[blue_patch,red_patch])
plt.grid()
# plot the data
plt.plot(mse_train_list, color='blue')
plt.plot(mse_test_list, color='red')
plt.xlabel('epochs (x{})'.format(display_epoch))
plt.ylabel('MSE [minimize]');
```
##### LEARNING RATE EVOLUTION
```
or_patch = mpatches.Patch(color='orange', label='Learning rate')
plt.legend(handles=[or_patch])
plt.plot(learning_list, color='orange');
plt.xlabel('epochs (x{})'.format(display_epoch))
plt.ylabel('learning rate');
```
##### VISUALIZE THE RESULTS
```
plt.figure(figsize=(15,10))
# define legend
blue_patch = mpatches.Patch(color='blue', label='Prediction')
red_patch = mpatches.Patch(color='red', label='Expected Value')
green_patch = mpatches.Patch(color='green', label='Abs Error')
plt.legend(handles=[blue_patch,red_patch, green_patch])
# plot data
x_array = np.arange(len(prediction_results))
plt.scatter(x_array, prediction_results, color='blue')
plt.scatter(x_array, y_test, color='red')
abs_error = abs(y_test-prediction_results)
plt.plot(x_array, abs_error, color='green')
plt.grid()
# define legends
plt.xlabel('index')
plt.ylabel('MEDV');
```
##### VISUALIZE TENSORBOARD
```
from IPython.display import clear_output, Image, display, HTML
# CHECK IT ON TENSORBOARD TYPING THESE LINES IN THE COMMAND PROMPT:
# tensorboard --logdir=/tmp/tf_logs
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '&quot;'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
```
## ----- PREPARE THE MODEL FOR FUTURE RESTORES -----
##### SAVED VARIABLE LIST
This is the list of variables that were saved at every checkpoint after training. Each checkpoint consists of three files:
* `.data`: contains the variable values
* `.meta`: contains the graph structure
* `.index`: identifies the checkpoints
```
for i, var in enumerate(saver._var_list):
print('Var {}: {}'.format(i, var))
```
##### RESTORE TO CHECKPOINT
```
# select the epoch to be restored
epoch = 38
# Running a new session
print('Restoring model to Epoch {}\n'.format(epoch))
with tf.Session() as sess:
# Restore variables from disk
saver.restore(sess, '{}/epoch{:04}.ckpt'.format(root_ckpoint, epoch))
print('\nPrint expected values :')
print(y_test)
print('\nPrint predicted values :')
predictions = sess.run(y_pred, feed_dict={X: X_test})
print(predictions)
```
### Previous: <a href = "keras_10.ipynb">1.10 Activation function </a>
# <center> Keras </center>
## <center>1.11 Units</center>
# Explanation
# Units
units: Positive integer, dimensionality of the output space.
The number of "neurons", or "cells", that the layer contains.
The "units" of each layer will define the output shape (the shape of the tensor that is produced by the layer and that will be the input of the next layer).
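To make this concrete, here is a small sketch (assuming a standard fully connected `Dense` layer with one bias per unit) of how `units` determines the weight shape and parameter count:

```python
def dense_params(input_dim, units):
    """Parameter count of a fully connected layer:
    an (input_dim x units) weight matrix plus one bias per unit."""
    return input_dim * units + units

# The three layers of the example model: 784 -> 500 -> 500 -> 10
print(dense_params(784, 500))
print(dense_params(500, 500))
print(dense_params(500, 10))
```

Increasing `units` therefore grows both the layer's output shape and, multiplicatively, the parameter count of this layer and the next one.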
```
#previously done
from keras.models import Sequential
from keras.layers.core import Dense, Dropout
from keras.optimizers import SGD, Adam, Adamax
from keras.utils import np_utils
from keras.utils.vis_utils import model_to_dot
from keras.datasets import mnist
%matplotlib inline
import math
import random
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import SVG
#Load MNIST
(X_train, y_train), (X_test, y_test) = mnist.load_data()
#Reshape
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
#Split
X_train = X_train[0:10000]
X_test = X_test[0:1000]
Y_train = Y_train[0:10000]
Y_test = Y_test[0:1000]
def plot_training_history(history):
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
#loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
# Example
```
model = Sequential()
model.add(Dense(input_dim=28*28, units=500, activation='sigmoid'))
model.add(Dense(units=500, activation='sigmoid'))
model.add(Dense(units=10, activation='softmax'))
BATCH_SIZE=100
NP_EPOCHS = 3
model.compile(loss='mse',
optimizer=Adam(),
metrics=['accuracy'])
history = model.fit(X_train, Y_train,
batch_size=BATCH_SIZE, epochs=NP_EPOCHS,
verbose=1, validation_data=(X_test, Y_test))
plot_training_history(history)
```
# Task
Play around with the unit-size. What do you notice?
### Next: <a href = "keras_12.ipynb">1.12 Dropout </a>
# CI/CD for a Kubeflow pipeline on Vertex AI
**Learning Objectives:**
1. Learn how to create a custom Cloud Build builder to pilot Vertex AI Pipelines
1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP
1. Learn how to set up a Cloud Build GitHub trigger that starts a new run of the Kubeflow pipeline
In this lab, you will walk through authoring a **Cloud Build** CI/CD workflow that automatically builds, deploys, and runs a Kubeflow pipeline on Vertex AI. You will also integrate your workflow with **GitHub** by setting up a trigger that starts the workflow when a new tag is applied to the **GitHub** repo hosting the pipeline's code.
## Configuring environment settings
```
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
REGION = "us-central1"
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
```
Let us make sure that the artifact store exists:
```
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
```
## Creating the KFP CLI builder for Vertex AI
### Exercise
In the cell below, write a docker file that
* Uses `gcr.io/deeplearning-platform-release/base-cpu` as base image
* Installs the Python packages `kfp` version `1.6.6` and `google-cloud-aiplatform` version `1.3.0`
* Starts `/bin/bash` as entrypoint
```
%%writefile kfp-cli/Dockerfile
# TODO
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install kfp==1.6.6 google-cloud-aiplatform==1.3.0
ENTRYPOINT ["/bin/bash"]
```
### Build the image and push it to your project's **Container Registry**.
```
KFP_CLI_IMAGE_NAME = "kfp-cli-vertex"
KFP_CLI_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest"
KFP_CLI_IMAGE_URI
```
### Exercise
In the cell below, use `gcloud builds` to build the `kfp-cli-vertex` Docker image and push it to the project gcr.io registry.
```
# COMPLETE THE COMMAND
# https://cloud.google.com/sdk/gcloud/reference/builds/submit
!gcloud builds submit --async --timeout 15m --tag {KFP_CLI_IMAGE_URI} kfp-cli
```
## Understanding the **Cloud Build** workflow.
### Exercise
In the cell below, you'll complete the `cloudbuild_vertex.yaml` file describing the CI/CD workflow and prescribing how environment specific settings are abstracted using **Cloud Build** variables.
The CI/CD workflow automates the steps you walked through manually during `lab-02_vertex`:
1. Builds the trainer image
1. Compiles the pipeline
1. Uploads and runs the pipeline in the Vertex AI Pipelines environment
1. Pushes the trainer to your project's **Container Registry**
The **Cloud Build** workflow configuration uses both standard and custom [Cloud Build builders](https://cloud.google.com/cloud-build/docs/cloud-builders). The custom builder encapsulates **KFP CLI**.
```
%%writefile cloudbuild_vertex.yaml
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
# file except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
steps:
# Build the trainer image
# TODO
- name: 'gcr.io/cloud-builders/docker'
id: 'Build the trainer image'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest', '.']
dir: $_PIPELINE_FOLDER/trainer_image_vertex
# Push the trainer image, to make it available in the compile step
- name: 'gcr.io/cloud-builders/docker'
id: 'Push the trainer image'
args: ['push', 'gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest']
dir: $_PIPELINE_FOLDER/trainer_image_vertex
# Compile the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
id: 'Compile the pipeline'
args:
- '-c'
- |
dsl-compile-v2 # TODO
env:
- 'PIPELINE_ROOT=gs://$PROJECT_ID-kfp-artifact-store/pipeline'
- 'PROJECT_ID=$PROJECT_ID'
- 'REGION=$_REGION'
- 'SERVING_CONTAINER_IMAGE_URI=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest'
- 'TRAINING_CONTAINER_IMAGE_URI=gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest'
- 'TRAINING_FILE_PATH=gs://$PROJECT_ID-kfp-artifact-store/data/training/dataset.csv'
- 'VALIDATION_FILE_PATH=gs://$PROJECT_ID-kfp-artifact-store/data/validation/dataset.csv'
dir: pipeline_vertex
# Run the pipeline
- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'
args:
- '-c'
- |
python kfp-cli_vertex/run_pipeline.py # TODO
# Push the images to Container Registry
# TODO: List the images to be pushed to the project Docker registry
# TODO
images: ['gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest']
# This is required since the pipeline run overflows the default timeout
timeout: 10800s
```
## Manually triggering CI/CD runs
You can manually trigger **Cloud Build** runs using the [gcloud builds submit command]( https://cloud.google.com/sdk/gcloud/reference/builds/submit).
```
SUBSTITUTIONS = f"_REGION={REGION},_PIPELINE_FOLDER=./"
SUBSTITUTIONS
!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS} --async
```
**Note:** If you experience issues with Cloud Build accessing Vertex AI, you may need to run the following commands in **Cloud Shell**:
```
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects list --filter="name=$PROJECT_ID" --format="value(PROJECT_NUMBER)")
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
--role="roles/aiplatform.user"
gcloud iam service-accounts add-iam-policy-binding \
$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
--member="serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
```
## Setting up GitHub integration
## Exercise
In this exercise you integrate your CI/CD workflow with **GitHub**, using [Cloud Build GitHub App](https://github.com/marketplace/google-cloud-build).
You will set up a trigger that starts the CI/CD workflow when a new tag is applied to the **GitHub** repo managing the pipeline source code. You will use a fork of this repo as your source GitHub repository.
### Step 1: Create a fork of this repo
[Follow the GitHub documentation](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) to fork [this repo](https://github.com/GoogleCloudPlatform/asl-ml-immersion)
### Step 2: Create a **Cloud Build** trigger
Connect the fork you created in the previous step to your Google Cloud project and create a trigger following the steps in the [Creating GitHub app trigger](https://cloud.google.com/cloud-build/docs/create-github-app-triggers) article. Use the following values on the **Edit trigger** form:
|Field|Value|
|-----|-----|
|Name|[YOUR TRIGGER NAME]|
|Description|[YOUR TRIGGER DESCRIPTION]|
|Event| Tag|
|Source| [YOUR FORK]|
|Tag (regex)|.\*|
|Build Configuration|Cloud Build configuration file (yaml or json)|
|Cloud Build configuration file location| ./notebooks/kubeflow_pipelines/cicd/solutions/cloudbuild_vertex.yaml|
Use the following values for the substitution variables:
|Variable|Value|
|--------|-----|
|_REGION|us-central1|
|_PIPELINE_FOLDER|notebooks/kubeflow_pipelines/cicd/solutions|
### Step 3: Trigger the build
To start an automated build [create a new release of the repo in GitHub](https://help.github.com/en/github/administering-a-repository/creating-releases). Alternatively, you can start the build by applying a tag using `git`.
```
git tag [TAG NAME]
git push origin --tags
```
After running the command above, a build should have been automatically triggered, which you should be able to inspect [here](https://console.cloud.google.com/cloud-build/builds).
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Word2Vec
**Learning Objectives**
1. Compile all steps into one function
2. Prepare training data for Word2Vec
3. Model and Training
4. Embedding lookup and analysis
## Introduction
Word2Vec is not a single algorithm; rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and
[Distributed
Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/word2vec.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`.
Consider the following sentence of 8 words.
> The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered a `context word`. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of *n* implies *n* words on each side, with a total window span of 2n+1 words across a word.

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>*, the objective can be written as the average log probability

where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

where *v* and *v<sup>'<sup>* are target and context vector representations of words and *W* is vocabulary size.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which is often large (10<sup>5</sup>-10<sup>7</sup> terms).
The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples.
A negative sample is defined as a `(target_word, context_word)` pair such that the `context_word` does not appear in the `window_size` neighborhood of the `target_word`. For the example sentence, these are a few potential negative samples (when `window_size` is 2).
```
(hot, shimmered)
(wide, hot)
(wide, sun)
```
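A negative sample is simply the target paired with a word from outside its window. Here is a minimal sketch in plain Python, drawing uniformly at random for simplicity (the notebook later uses TensorFlow's log-uniform candidate sampler instead):

```python
import random

def negative_samples(tokens, target_index, window_size, num_ns, seed=42):
    """Draw num_ns words that do NOT appear within window_size
    of the target word (uniform over the rest of the vocabulary)."""
    rng = random.Random(seed)
    window = set(tokens[max(0, target_index - window_size):
                        target_index + window_size + 1])
    candidates = sorted(set(tokens) - window)
    return rng.sample(candidates, min(num_ns, len(candidates)))

sentence = "the wide road shimmered in the hot sun".split()
# Negative samples for the target word "road" (index 2), window size 2:
print(negative_samples(sentence, 2, 2, num_ns=2))
```

With a window of 2 around "road", only "hot" and "sun" fall outside the neighborhood in this short sentence, so both become negative candidates.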
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
## Setup
```
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
```
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
```
### Vectorize an example sentence
Consider the following sentence:
`The wide road shimmered in the hot sun.`
Tokenize the sentence:
```
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
```
Create a vocabulary to save mappings from tokens to integer indices.
```
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
if token not in vocab:
vocab[token] = index
index += 1
vocab_size = len(vocab)
print(vocab)
```
Create an inverse vocabulary to save mappings from integer indices to tokens.
```
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
```
Vectorize your sentence.
```
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
```
### Generate skip-grams from one sentence
The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.
Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
```
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0)
print(len(positive_skip_grams))
```
Take a look at few positive skip-grams.
```
for target, context in positive_skip_grams[:5]:
print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
```
### Negative sampling for one skip-gram
The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets.
```
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size, # pick index of the samples from [0, vocab_size)
seed=SEED, # seed for reproducibility
name="negative_sampling" # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
```
### Construct one training example
For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.
```
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
```
Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
```
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
```
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`.
```
print(f"target :", target)
print(f"context :", context )
print(f"label :", label )
```
### Summary
This picture summarizes the procedure of generating training example from a sentence.

## Lab Task 1: Compile all steps into one function
### Skip-gram Sampling table
A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use the `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency rank based probabilistic sampling table and pass it to `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.
```
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
```
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.
Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Sampling weighted by this distribution also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
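For intuition, the subsampling heuristic from Mikolov et al. keeps a word of corpus frequency f with probability roughly (sqrt(f/t) + 1) * t/f for a threshold t (commonly 1e-3). This is a sketch of that rule only; Keras' `make_sampling_table` instead derives its table from an assumed Zipf distribution over frequency ranks rather than from observed frequencies:

```python
import math

def keep_probability(word_frequency, t=1e-3):
    """Subsampling keep-probability from Mikolov et al. (2013):
    frequent words are kept with lower probability."""
    f = word_frequency
    return (math.sqrt(f / t) + 1.0) * (t / f)

# A very frequent word (e.g. ~5% of all tokens) vs. a rarer one (0.01%):
print(keep_probability(0.05))    # heavily downsampled
print(keep_probability(0.0001))  # value > 1, i.e. effectively always kept
```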
### Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
```
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
    # Elements of each training example are appended to these lists.
    targets, contexts, labels = [], [], []

    # Build the sampling table for vocab_size tokens.
    # TODO 1a
    sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)

    # Iterate over all sequences (sentences) in the dataset.
    for sequence in tqdm.tqdm(sequences):

        # Generate positive skip-gram pairs for a sequence (sentence).
        positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
            sequence,
            vocabulary_size=vocab_size,
            sampling_table=sampling_table,
            window_size=window_size,
            negative_samples=0)

        # Iterate over each positive skip-gram pair to produce training examples
        # with a positive context word and negative samples.
        # TODO 1b
        for target_word, context_word in positive_skip_grams:
            context_class = tf.expand_dims(
                tf.constant([context_word], dtype="int64"), 1)
            negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
                true_classes=context_class,
                num_true=1,
                num_sampled=num_ns,
                unique=True,
                range_max=vocab_size,
                seed=seed,  # use the `seed` argument rather than a global
                name="negative_sampling")

            # Build context and label vectors (for one target word).
            negative_sampling_candidates = tf.expand_dims(
                negative_sampling_candidates, 1)
            context = tf.concat([context_class, negative_sampling_candidates], 0)
            label = tf.constant([1] + [0]*num_ns, dtype="int64")

            # Append each element from the training example to the global lists.
            targets.append(target_word)
            contexts.append(context)
            labels.append(label)

    return targets, contexts, labels
```
## Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
### Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
Read text from the file and take a look at the first few lines.
```
with open(path_to_file) as f:
    lines = f.read().splitlines()
for line in lines[:20]:
    print(line)
```
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps.
```
# TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
```
### Vectorize sentences from the corpus
You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the `TextVectorization` layer.
```
# We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    return tf.strings.regex_replace(lowercase,
                                    '[%s]' % re.escape(string.punctuation), '')

# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10

# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length to pad all samples to the same length.
vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=vocab_size,
    output_mode='int',
    output_sequence_length=sequence_length)
```
Call `adapt` on the text dataset to create vocabulary.
```
vectorize_layer.adapt(text_ds.batch(1024))
```
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
```
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
```
The `vectorize_layer` can now be used to generate vectors for each element in the `text_ds`.
```
def vectorize_text(text):
    text = tf.expand_dims(text, -1)
    return tf.squeeze(vectorize_layer(text))

# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
```
### Obtain sequences from the dataset
You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.
```
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
```
Take a look at a few examples from `sequences`.
```
for seq in sequences[:5]:
    print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
```
### Generate training examples from sequences
`sequences` is now a list of int-encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of the targets, contexts, and labels should be the same, representing the total number of training examples.
```
targets, contexts, labels = generate_training_data(
    sequences=sequences,
    window_size=2,
    num_ns=4,
    vocab_size=vocab_size,
    seed=SEED)
print(len(targets), len(contexts), len(labels))
```
### Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!
```
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
```
Add `cache()` and `prefetch()` to improve performance.
```
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
```
## Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
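The scoring step described above amounts to one dot product per candidate context word; here is a minimal NumPy sketch with toy sizes and random stand-in vectors (not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_dim, num_ns = 4, 4  # toy embedding size; num_ns as in the tutorial

# One target embedding and (1 positive + num_ns negative) context embeddings.
target = rng.normal(size=(embedding_dim,))
contexts = rng.normal(size=(num_ns + 1, embedding_dim))

# The model's logits are the dot products of the target with each candidate.
logits = contexts @ target
print(logits.shape)  # (5,) -- one logit per candidate context word
```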
### Subclassed Word2Vec Model
Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:
* `target_embedding`: A `tf.keras.layers.Embedding` layer, which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer is `(vocab_size * embedding_dim)`.
* `context_embedding`: Another `tf.keras.layers.Embedding` layer, which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer is the same as in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.
* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.
* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.
With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs, which are then passed into their corresponding embedding layers. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.
Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
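For example, the concatenation option from the key point above could look like this sketch, where the weight matrices are random stand-ins for the trained layers:

```python
import numpy as np

vocab_size, embedding_dim = 4096, 128
rng = np.random.default_rng(0)

# Stand-ins for the trained target and context embedding matrices.
target_weights = rng.normal(size=(vocab_size, embedding_dim))
context_weights = rng.normal(size=(vocab_size, embedding_dim))

# Option 1: use the target embedding alone (what this tutorial exports later).
final_single = target_weights

# Option 2: concatenate both views into one 256-dim embedding per word.
final_concat = np.concatenate([target_weights, context_weights], axis=1)
print(final_concat.shape)  # (4096, 256)
```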
```
class Word2Vec(Model):
    def __init__(self, vocab_size, embedding_dim):
        super(Word2Vec, self).__init__()
        self.target_embedding = Embedding(vocab_size,
                                          embedding_dim,
                                          input_length=1,
                                          name="w2v_embedding")
        self.context_embedding = Embedding(vocab_size,
                                           embedding_dim,
                                           input_length=num_ns+1)
        self.dots = Dot(axes=(3, 2))
        self.flatten = Flatten()

    def call(self, pair):
        target, context = pair
        we = self.target_embedding(target)
        ce = self.context_embedding(context)
        dots = self.dots([ce, we])
        return self.flatten(dots)
```
### Define loss function and compile model
For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can do so as follows:
``` python
def custom_loss(x_logit, y_true):
    return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
```
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer.
```
# TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
                 loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                 metrics=['accuracy'])
```
Also define a callback to log training statistics for TensorBoard.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Train the model with `dataset` prepared above for some number of epochs.
```
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
```
TensorBoard now shows the Word2Vec model's accuracy and loss.
```
!tensorboard --bind_all --port=8081 --logdir logs
```
Run the following command in **Cloud Shell:**
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace `<instance-zone>`, `<notebook-instance-name>` and `<project-id>`.
In Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.

**To quit the TensorBoard, click Kernel > Interrupt kernel**.
## Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
```
# TODO 4a
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```
Create and save the vectors and metadata file.
```
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
    if index == 0: continue  # skip 0, it's padding
    vec = weights[index]
    out_v.write('\t'.join([str(x) for x in vec]) + "\n")
    out_m.write(word + "\n")
out_v.close()
out_m.close()
```
Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/).
```
try:
    from google.colab import files
    files.download('vectors.tsv')
    files.download('metadata.tsv')
except Exception:
    pass
```
## Next steps
This tutorial has shown you how to implement a skip-gram Word2Vec model with negative sampling from scratch and visualize the obtained word embeddings.
* To learn more about word vectors and their mathematical representations, refer to these [notes](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf).
* To learn more about advanced text processing, read the [Transformer model for language understanding](https://www.tensorflow.org/tutorials/text/transformer) tutorial.
* If you’re interested in pre-trained embedding models, you may also be interested in [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras), or the [Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder)
* You may also like to train the model on a new dataset (there are many available in [TensorFlow Datasets](https://www.tensorflow.org/datasets)).
```
#default_exp tabular.core
#export
from fastai2.torch_basics import *
from fastai2.data.all import *
from nbdev.showdoc import *
#export
pd.set_option('mode.chained_assignment','raise')
```
# Tabular core
> Basic function to preprocess tabular data before assembling it in a `DataLoaders`.
## Initial preprocessing
```
#export
def make_date(df, date_field):
    "Make sure `df[date_field]` is of the right date type."
    field_dtype = df[date_field].dtype
    if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
        field_dtype = np.datetime64
    if not np.issubdtype(field_dtype, np.datetime64):
        df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True)

df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']})
make_date(df, 'date')
test_eq(df['date'].dtype, np.dtype('datetime64[ns]'))
#export
def add_datepart(df, field_name, prefix=None, drop=True, time=False):
    "Helper function that adds columns relevant to a date in the column `field_name` of `df`."
    make_date(df, field_name)
    field = df[field_name]
    prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name))
    attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
            'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
    if time: attr = attr + ['Hour', 'Minute', 'Second']
    for n in attr: df[prefix + n] = getattr(field.dt, n.lower())
    df[prefix + 'Elapsed'] = field.astype(np.int64) // 10 ** 9
    if drop: df.drop(field_name, axis=1, inplace=True)
    return df
df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']})
df = add_datepart(df, 'date')
test_eq(df.columns, ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start', 'Elapsed'])
df.head()
#export
def _get_elapsed(df, field_names, date_field, base_field, prefix):
    for f in field_names:
        day1 = np.timedelta64(1, 'D')
        last_date,last_base,res = np.datetime64(),None,[]
        for b,v,d in zip(df[base_field].values, df[f].values, df[date_field].values):
            if last_base is None or b != last_base:
                last_date,last_base = np.datetime64(),b
            if v: last_date = d
            res.append(((d-last_date).astype('timedelta64[D]') / day1))
        df[prefix + f] = res
    return df
#export
def add_elapsed_times(df, field_names, date_field, base_field):
    "Add in `df` for each event in `field_names` the elapsed time according to `date_field` grouped by `base_field`"
    field_names = list(L(field_names))
    # Make sure date_field is a date and base_field a bool
    df[field_names] = df[field_names].astype('bool')
    make_date(df, date_field)
    work_df = df[field_names + [date_field, base_field]]
    work_df = work_df.sort_values([base_field, date_field])
    work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'After')
    work_df = work_df.sort_values([base_field, date_field], ascending=[True, False])
    work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'Before')
    for a in ['After' + f for f in field_names] + ['Before' + f for f in field_names]:
        work_df[a] = work_df[a].fillna(0).astype(int)
    for a,s in zip([True, False], ['_bw', '_fw']):
        work_df = work_df.set_index(date_field)
        tmp = (work_df[[base_field] + field_names].sort_index(ascending=a)
               .groupby(base_field).rolling(7, min_periods=1).sum())
        tmp.drop(base_field, 1, inplace=True)
        tmp.reset_index(inplace=True)
        work_df.reset_index(inplace=True)
        work_df = work_df.merge(tmp, 'left', [date_field, base_field], suffixes=['', s])
    work_df.drop(field_names, 1, inplace=True)
    return df.merge(work_df, 'left', [date_field, base_field])
df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24'], 'event': [False, True, False, True], 'base': [1,1,2,2]})
df = add_elapsed_times(df, ['event'], 'date', 'base')
df
#export
def cont_cat_split(df, max_card=20, dep_var=None):
    "Helper function that returns column names of cont and cat variables from given `df`."
    cont_names, cat_names = [], []
    for label in df:
        if label == dep_var: continue
        if (df[label].dtype == int and df[label].unique().shape[0] > max_card) or df[label].dtype == float:
            cont_names.append(label)
        else: cat_names.append(label)
    return cont_names, cat_names
```
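As a quick standalone check of the `max_card` rule in `cont_cat_split` above (re-implemented here so it runs outside the notebook; the column names and data are made up):

```python
import pandas as pd

def cont_cat_split(df, max_card=20, dep_var=None):
    # Same rule as above: high-cardinality int columns and all float columns
    # are treated as continuous; everything else is categorical.
    cont_names, cat_names = [], []
    for label in df:
        if label == dep_var: continue
        if (df[label].dtype == int and df[label].unique().shape[0] > max_card) or df[label].dtype == float:
            cont_names.append(label)
        else: cat_names.append(label)
    return cont_names, cat_names

df = pd.DataFrame({'age': [23.5, 41.0, 33.2],  # float -> continuous
                   'grade': [1, 2, 1],         # low-cardinality int -> categorical
                   'label': ['b', 'a', 'b']})
print(cont_cat_split(df, dep_var='label'))  # (['age'], ['grade'])
```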
## Tabular -
```
#export
class _TabIloc:
    "Get/set rows by iloc and cols by name"
    def __init__(self, to): self.to = to
    def __getitem__(self, idxs):
        df = self.to.items
        if isinstance(idxs, tuple):
            rows,cols = idxs
            cols = df.columns.isin(cols) if is_listy(cols) else df.columns.get_loc(cols)
        else: rows,cols = idxs, slice(None)
        return self.to.new(df.iloc[rows, cols])
#export
class Tabular(CollBase, GetAttr, FilteredBase):
    "A `DataFrame` wrapper that knows which cols are cont/cat/y, and returns rows in `__getitem__`"
    _default,with_cont = 'procs',True
    def __init__(self, df, procs=None, cat_names=None, cont_names=None, y_names=None, block_y=CategoryBlock, splits=None,
                 do_setup=True, device=None):
        if splits is None: splits = [range_of(df)]
        df = df.iloc[sum(splits, [])].copy()
        self.dataloaders = delegates(self._dl_type.__init__)(self.dataloaders)
        super().__init__(df)
        self.y_names,self.device = L(y_names),device
        if block_y is not None:
            if callable(block_y): block_y = block_y()
            procs = L(procs) + block_y.type_tfms
        self.cat_names,self.cont_names,self.procs = L(cat_names),L(cont_names),Pipeline(procs, as_item=True)
        self.split = len(splits[0])
        if do_setup: self.setup()

    def subset(self, i): return self.new(self.items[slice(0,self.split) if i==0 else slice(self.split,len(self))])
    def copy(self): self.items = self.items.copy(); return self
    def new(self, df): return type(self)(df, do_setup=False, block_y=None, **attrdict(self, 'procs','cat_names','cont_names','y_names','device'))
    def show(self, max_n=10, **kwargs): display_df(self.all_cols[:max_n])
    def setup(self): self.procs.setup(self)
    def process(self): self.procs(self)
    def loc(self): return self.items.loc
    def iloc(self): return _TabIloc(self)
    def targ(self): return self.items[self.y_names]
    def all_col_names(self): return self.cat_names + self.cont_names + self.y_names
    def n_subsets(self): return 2
    def new_empty(self): return self.new(pd.DataFrame({}, columns=self.items.columns))
    def to_device(self, d=None):
        self.device = d
        return self
properties(Tabular,'loc','iloc','targ','all_col_names','n_subsets')
#export
class TabularPandas(Tabular):
    def transform(self, cols, f): self[cols] = self[cols].transform(f)
#export
def _add_prop(cls, nm):
    @property
    def f(o): return o[list(getattr(o, nm+'_names'))]
    @f.setter
    def fset(o, v): o[getattr(o, nm+'_names')] = v
    setattr(cls, nm+'s', f)
    setattr(cls, nm+'s', fset)
_add_prop(Tabular, 'cat')
_add_prop(Tabular, 'cont')
_add_prop(Tabular, 'y')
_add_prop(Tabular, 'all_col')
df = pd.DataFrame({'a':[0,1,2,0,2], 'b':[0,0,0,0,1]})
to = TabularPandas(df, cat_names='a')
t = pickle.loads(pickle.dumps(to))
test_eq(t.items,to.items)
test_eq(to.all_cols,to[['a']])
to.show() # only shows 'a' since that's the only col in `TabularPandas`
#export
class TabularProc(InplaceTransform):
    "Base class to write a non-lazy tabular processor for dataframes"
    def setup(self, items=None, train_setup=False): # TODO: properly deal with train_setup
        super().setup(getattr(items,'train',items), train_setup=False)
        # Procs are called as soon as data is available
        return self(items.items if isinstance(items, Datasets) else items)
#export
def _apply_cats(voc, add, c):
    if not is_categorical_dtype(c):
        return pd.Categorical(c, categories=voc[c.name][add:]).codes+add
    return c.cat.codes+add  # if is_categorical_dtype(c) else c.map(voc[c.name].o2i)

def _decode_cats(voc, c): return c.map(dict(enumerate(voc[c.name].items)))
#export
class Categorify(TabularProc):
    "Transform the categorical variables to that type."
    order = 1
    def setups(self, to):
        self.classes = {n:CategoryMap(to.iloc[:,n].items, add_na=(n in to.cat_names)) for n in to.cat_names}
    def encodes(self, to): to.transform(to.cat_names, partial(_apply_cats, self.classes, 1))
    def decodes(self, to): to.transform(to.cat_names, partial(_decode_cats, self.classes))
    def __getitem__(self, k): return self.classes[k]
#export
@Categorize
def setups(self, to:Tabular):
    if len(to.y_names) > 0:
        self.vocab = CategoryMap(getattr(to, 'train', to).iloc[:,to.y_names[0]].items)
        self.c = len(self.vocab)
    return self(to)

@Categorize
def encodes(self, to:Tabular):
    to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0))
    return to

@Categorize
def decodes(self, to:Tabular):
    to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}))
    return to
show_doc(Categorify, title_level=3)
df = pd.DataFrame({'a':[0,1,2,0,2]})
to = TabularPandas(df, Categorify, 'a')
cat = to.procs.categorify
test_eq(cat['a'], ['#na#',0,1,2])
test_eq(to['a'], [1,2,3,1,3])
df1 = pd.DataFrame({'a':[1,0,3,-1,2]})
to1 = to.new(df1)
to1.process()
#Values that weren't in the training df are sent to 0 (na)
test_eq(to1['a'], [2,1,0,0,3])
to2 = cat.decode(to1)
test_eq(to2['a'], [1,0,'#na#','#na#',2])
#test with splits
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2]})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]])
test_eq(cat['a'], ['#na#',0,1,2])
test_eq(to['a'], [1,2,3,0,3])
df = pd.DataFrame({'a':pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True)})
to = TabularPandas(df, Categorify, 'a')
cat = to.procs.categorify
test_eq(cat['a'], ['#na#','H','M','L'])
test_eq(to.items.a, [2,1,3,2])
to2 = cat.decode(to)
test_eq(to2['a'], ['M','H','L','M'])
#test with targets
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'b', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_eq(to.vocab, ['a', 'b'])
test_eq(to['b'], [0,1,0,1,1])
to2 = to.procs.decode(to)
test_eq(to2['b'], ['a', 'b', 'a', 'b', 'b'])
#test with targets and train
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'c', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_eq(to.vocab, ['a', 'b'])
#export
class NormalizeTab(TabularProc):
    "Normalize the continuous variables."
    order = 2
    def setups(self, dsets): self.means,self.stds = dsets.conts.mean(),dsets.conts.std(ddof=0)+1e-7
    def encodes(self, to): to.conts = (to.conts-self.means) / self.stds
    def decodes(self, to): to.conts = (to.conts*self.stds) + self.means
#export
@Normalize
def setups(self, to:Tabular):
    self.means,self.stds = getattr(to, 'train', to).conts.mean(),getattr(to, 'train', to).conts.std(ddof=0)+1e-7
    return self(to)

@Normalize
def encodes(self, to:Tabular):
    to.conts = (to.conts-self.means) / self.stds
    return to

@Normalize
def decodes(self, to:Tabular):
    to.conts = (to.conts*self.stds) + self.means
    return to
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a')
x = np.array([0,1,2,3,4])
m,s = x.mean(),x.std()
test_eq(norm.means['a'], m)
test_close(norm.stds['a'], s)
test_close(to['a'].values, (x-m)/s)
df1 = pd.DataFrame({'a':[5,6,7]})
to1 = to.new(df1)
to1.process()
test_close(to1['a'].values, (np.array([5,6,7])-m)/s)
to2 = norm.decode(to1)
test_close(to2['a'].values, [5,6,7])
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a', splits=[[0,1,2],[3,4]])
x = np.array([0,1,2])
m,s = x.mean(),x.std()
test_eq(norm.means['a'], m)
test_close(norm.stds['a'], s)
test_close(to['a'].values, (np.array([0,1,2,3,4])-m)/s)
#export
class FillStrategy:
    "Namespace containing the various filling strategies."
    def median  (c, fill): return c.median()
    def constant(c, fill): return fill
    def mode    (c, fill): return c.dropna().value_counts().idxmax()
#export
class FillMissing(TabularProc):
    "Fill the missing values in continuous columns."
    def __init__(self, fill_strategy=FillStrategy.median, add_col=True, fill_vals=None):
        if fill_vals is None: fill_vals = defaultdict(int)
        store_attr(self, 'fill_strategy,add_col,fill_vals')
    def setups(self, dsets):
        self.na_dict = {n:self.fill_strategy(dsets[n], self.fill_vals[n])
                        for n in pd.isnull(dsets.conts).any().keys()}
    def encodes(self, to):
        missing = pd.isnull(to.conts)
        for n in missing.any().keys():
            assert n in self.na_dict, f"nan values in `{n}` but not in setup training set"
            to[n].fillna(self.na_dict[n], inplace=True)
            if self.add_col:
                to.loc[:,n+'_na'] = missing[n]
                if n+'_na' not in to.cat_names: to.cat_names.append(n+'_na')
show_doc(FillMissing, title_level=3)
fill1,fill2,fill3 = (FillMissing(fill_strategy=s)
                     for s in [FillStrategy.median, FillStrategy.constant, FillStrategy.mode])
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4]})
df1 = df.copy(); df2 = df.copy()
tos = TabularPandas(df, fill1, cont_names='a'),TabularPandas(df1, fill2, cont_names='a'),TabularPandas(df2, fill3, cont_names='a')
test_eq(fill1.na_dict, {'a': 1.5})
test_eq(fill2.na_dict, {'a': 0})
test_eq(fill3.na_dict, {'a': 1.0})
for t in tos: test_eq(t.cat_names, ['a_na'])
for to_,v in zip(tos, [1.5, 0., 1.]):
    test_eq(to_['a'].values, np.array([0, 1, v, 1, 2, 3, 4]))
    test_eq(to_['a_na'].values, np.array([0, 0, 1, 0, 0, 0, 0]))
dfa = pd.DataFrame({'a':[np.nan,0,np.nan]})
tos = [t.new(o) for t,o in zip(tos,(dfa,dfa.copy(),dfa.copy()))]
for t in tos: t.process()
for to_,v in zip(tos, [1.5, 0., 1.]):
    test_eq(to_['a'].values, np.array([v, 0, v]))
    test_eq(to_['a_na'].values, np.array([1, 0, 1]))
```
## TabularPandas Pipelines -
```
procs = [Normalize, Categorify, FillMissing, noop]
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4]})
to = TabularPandas(df, procs, cat_names='a', cont_names='b')
#Test setup and apply on df_main
test_eq(to.cat_names, ['a', 'b_na'])
test_eq(to['a'], [1,2,3,2,2,3,1])
test_eq(to['b_na'], [1,1,2,1,1,1,1])
x = np.array([0,1,1.5,1,2,3,4])
m,s = x.mean(),x.std()
test_close(to['b'].values, (x-m)/s)
test_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})
#Test apply on y_names
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, 'a', 'b', y_names='c')
test_eq(to.cat_names, ['a', 'b_na'])
test_eq(to['a'], [1,2,3,2,2,3,1])
test_eq(to['b_na'], [1,1,2,1,1,1,1])
test_eq(to['c'], [1,0,1,0,0,1,0])
x = np.array([0,1,1.5,1,2,3,4])
m,s = x.mean(),x.std()
test_close(to['b'].values, (x-m)/s)
test_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})
test_eq(to.vocab, ['a','b'])
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, 'a', 'b', y_names='c')
test_eq(to.cat_names, ['a', 'b_na'])
test_eq(to['a'], [1,2,3,2,2,3,1])
test_eq(df.a.dtype,int)
test_eq(to['b_na'], [1,1,2,1,1,1,1])
test_eq(to['c'], [1,0,1,0,0,1,0])
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,np.nan,1,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, cat_names='a', cont_names='b', y_names='c', splits=[[0,1,4,6], [2,3,5]])
test_eq(to.cat_names, ['a', 'b_na'])
test_eq(to['a'], [1,2,2,1,0,2,0])
test_eq(df.a.dtype,int)
test_eq(to['b_na'], [1,2,1,1,1,1,1])
test_eq(to['c'], [1,0,0,0,1,0,1])
#export
def _maybe_expand(o): return o[:,None] if o.ndim==1 else o
#export
class ReadTabBatch(ItemTransform):
    def __init__(self, to): self.to = to
    def encodes(self, to):
        if not to.with_cont: res = tensor(to.cats).long(), tensor(to.targ)
        else: res = (tensor(to.cats).long(), tensor(to.conts).float(), tensor(to.targ))
        if to.device is not None: res = to_device(res, to.device)
        return res
    def decodes(self, o):
        o = [_maybe_expand(o_) for o_ in to_np(o) if o_.size != 0]
        vals = np.concatenate(o, axis=1)
        df = pd.DataFrame(vals, columns=self.to.all_col_names)
        to = self.to.new(df)
        to = self.to.procs.decode(to)
        return to
#export
@typedispatch
def show_batch(x:Tabular, y, its, max_n=10, ctxs=None):
    x.show()
from torch.utils.data.dataloader import _MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter,_DatasetKind
_loaders = (_MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter)
#export
@delegates()
class TabDataLoader(TfmdDL):
    do_item = noops
    def __init__(self, dataset, bs=16, shuffle=False, after_batch=None, num_workers=0, **kwargs):
        if after_batch is None: after_batch = L(TransformBlock().batch_tfms)+ReadTabBatch(dataset)
        super().__init__(dataset, bs=bs, shuffle=shuffle, after_batch=after_batch, num_workers=num_workers, **kwargs)
    def create_batch(self, b): return self.dataset.iloc[b]
TabularPandas._dl_type = TabDataLoader
```
## Integration example
```
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main.head()
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names="salary", splits=splits)
dls = to.dataloaders()
dls.valid.show_batch()
to_tst = to.new(df_test)
to_tst.process()
to_tst.all_cols.head()
```
## Other target types
### Multi-label categories
#### one-hot encoded label
```
def _mock_multi_label(df):
    sal,sex,white = [],[],[]
    for row in df.itertuples():
        sal.append(row.salary == '>=50k')
        sex.append(row.sex == ' Male')
        white.append(row.race == ' White')
    df['salary'] = np.array(sal)
    df['male'] = np.array(sex)
    df['white'] = np.array(white)
    return df
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
df_main.head()
#export
@EncodedMultiCategorize
def encodes(self, to:Tabular): return to

@EncodedMultiCategorize
def decodes(self, to:Tabular):
    to.transform(to.y_names, lambda c: c==1)
    return to
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
y_names=["salary", "male", "white"]
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names=y_names, block_y=MultiCategoryBlock(encoded=True, vocab=y_names), splits=splits)
dls = to.dataloaders()
dls.valid.show_batch()
```
#### Not one-hot encoded
```
def _mock_multi_label(df):
    targ = []
    for row in df.itertuples():
        labels = []
        if row.salary == '>=50k': labels.append('>50k')
        if row.sex == ' Male': labels.append('male')
        if row.race == ' White': labels.append('white')
        targ.append(' '.join(labels))
    df['target'] = np.array(targ)
    return df
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
df_main.head()
@MultiCategorize
def encodes(self, to:Tabular):
    #to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0))
    return to

@MultiCategorize
def decodes(self, to:Tabular):
    #to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}))
    return to
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names="target", block_y=MultiCategoryBlock(), splits=splits)
to.procs[2].vocab
```
### Regression
```
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names='age', block_y=TransformBlock(), splits=splits)
to.procs[-1].means
dls = to.dataloaders()
dls.valid.show_batch()
```
## Not being used now - for multi-modal
```
class TensorTabular(Tuple):
def get_ctxs(self, max_n=10, **kwargs):
n_samples = min(self[0].shape[0], max_n)
df = pd.DataFrame(index = range(n_samples))
return [df.iloc[i] for i in range(n_samples)]
def display(self, ctxs): display_df(pd.DataFrame(ctxs))
class TabularLine(pd.Series):
"A line of a dataframe that knows how to show itself"
def show(self, ctx=None, **kwargs): return self if ctx is None else ctx.append(self)
class ReadTabLine(ItemTransform):
def __init__(self, proc): self.proc = proc
def encodes(self, row):
cats,conts = (o.map(row.__getitem__) for o in (self.proc.cat_names,self.proc.cont_names))
return TensorTabular(tensor(cats).long(),tensor(conts).float())
def decodes(self, o):
to = TabularPandas(o, self.proc.cat_names, self.proc.cont_names, self.proc.y_names)
to = self.proc.decode(to)
return TabularLine(pd.Series({c: v for v,c in zip(to.items[0]+to.items[1], self.proc.cat_names+self.proc.cont_names)}))
class ReadTabTarget(ItemTransform):
def __init__(self, proc): self.proc = proc
def encodes(self, row): return row[self.proc.y_names].astype(np.int64)
def decodes(self, o): return Category(self.proc.classes[self.proc.y_names][o])
# tds = TfmdDS(to.items, tfms=[[ReadTabLine(proc)], ReadTabTarget(proc)])
# enc = tds[1]
# test_eq(enc[0][0], tensor([2,1]))
# test_close(enc[0][1], tensor([-0.628828]))
# test_eq(enc[1], 1)
# dec = tds.decode(enc)
# assert isinstance(dec[0], TabularLine)
# test_close(dec[0], pd.Series({'a': 1, 'b_na': False, 'b': 1}))
# test_eq(dec[1], 'a')
# test_stdout(lambda: print(show_at(tds, 1)), """a 1
# b_na False
# b 1
# category a
# dtype: object""")
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# This notebook is dedicated to the visualization of the Yield Curve.
## What is the yield curve?
The yield curve shows the different yields, or interest rates, across different contract lengths at a snapshot in time (typically daily). The curves below are all based on data obtained from [FRED](https://research.stlouisfed.org/) - Federal Reserve Economic Data.
## Why is the yield curve important?
The yield curve is a popular indicator among economists (as listeners of Planet Money's The Indicator will know) because it has correctly forecast every recession in the United States since about 1970, without any false positives. This predictive power is triggered when the yield curve becomes "inverted" - meaning that short term interest rates exceed long term interest rates.
*NOTE*: "Inverted" is slightly vague; what is specifically meant here is that the ten year yield falls below the three month yield.
## Longer explanation of the yield curve
The current view of the yield curve is a relatively recent phenomenon, all things considered. It is only since the Great Depression that positively sloping yield curves have been considered 'normal.' Prior to this, for most of the 19th and early 20th centuries, the United States typically had a negatively sloping yield curve. This is because the growth the U.S. experienced was **deflationary**, as opposed to the inflationary growth the Fed targets currently: in deflationary growth, current cash flows are less valuable than future cash flows.
For a more detailed explanation, see the [Wikipedia page](https://en.wikipedia.org/wiki/Yield_curve).
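For intuition, the inversion test described above can be sketched directly from a spread series. The values below are hypothetical; the actual notebook reads the `T10Y3M` column from the FRED CSV.

```python
import pandas as pd

# Toy 10-year-minus-3-month spread series (percentage points); the values
# here are made up for illustration. A negative spread means the curve is
# inverted on that day.
spread = pd.Series(
    [1.2, 0.8, 0.3, -0.1, -0.4, 0.2, 0.9],
    index=pd.date_range("2006-12-01", periods=7, freq="D"),
    name="T10Y3M",
)

# Boolean indexing picks out the inverted days.
inverted_days = spread[spread < 0]
print(f"{len(inverted_days)} of {len(spread)} days inverted")
```

This is the same test the red zero-line in the long-term plot below makes visual: any stretch of the dashed curve below the red line is an inversion period.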
```
import pandas as pd
import numpy as np
from scipy.interpolate import CubicSpline
import matplotlib.pyplot as plt
def read_in_data(filename, index_col="DATE"):
dataframe = pd.read_csv(filename, index_col=index_col)
dataframe.drop("T10Y3M", axis=1, inplace=True)
return dataframe
def get_data_for_date(date_, df):
try:
datum = df.loc[date_]
except KeyError:
print("Sorry, no data is available for that day.")
return None, None
x_arr = np.array([1.0/12, 3.0/12, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 20.0, 30.0])
y_arr = np.array(list(filter(lambda x: x is not None, [None if x == "." else float(x) for x in datum])))
mask = [idx for idx, x in enumerate(y_arr) if x < np.inf]
if not mask:
print("Sorry, no data is available for that day.")
return None, None
x = x_arr[mask]
y = y_arr[mask]
if len(x) != len(x_arr):
print("WARNING: some data is missing for this date.")
if y_arr[-1] != y[-1]:
print("WARNING: 30 year rate is not available for this date.")
return x, y
def plot_yield_curve(date_, dataframe):
x, y = get_data_for_date(date_, dataframe)
if x is None and y is None:
return
cs = CubicSpline(x, y)
xx = np.linspace(0, 30, num=200)
plt.plot(x, y, 'k.')
plt.plot(xx, cs(xx), 'b--')
plt.title(f"Yield Curve for {date_}")
plt.xlabel("Borrowing Period (years)")
plt.ylabel("Interest Rates (percent)")
plt.show()
def plot_long_term_difference_measure(filename, index_col="DATE"):
df = pd.read_csv(filename, index_col=index_col)
series = df["T10Y3M"].copy()
y_arr = np.array(list(filter(lambda x: x is not None, [None if x == "." else float(x) for x in series.tolist()])))
mask = [idx for idx, x in enumerate(y_arr) if x < np.inf]
y_arr = y_arr[mask]
dates = np.array(series.index.tolist())
dates_used = dates[mask]
x_arr = np.arange(len(dates))[mask]
plt.plot(x_arr, y_arr, 'k--')
plt.plot(x_arr, [0]*len(x_arr), 'r')
plt.xticks(x_arr[::365], dates_used[::365], rotation=90)
plt.tight_layout()
plt.show()
return
dataframe = read_in_data("data/yield_curve_data_ordered.csv")
```
#### Normal Yield Curve
```
plot_yield_curve("2019-01-07", dataframe)
```
#### Inverted Yield Curve
```
plot_yield_curve("2006-12-05", dataframe)
```
#### Trend Over Time
As indicated in the notes above, the main consideration we're looking at is 10 year vs 3 month yield. So instead of a snapshot, we can plot this over time. The red line is added to visualize periods of time in which the yield curve is inverted.
```
plot_long_term_difference_measure("data/yield_curve_data_ordered.csv")
```
```
from scipy.sparse import diags
import random
import numpy as np
import scipy as sc
import pandas as pd
import csv
import scipy.linalg as spl
import matplotlib.pyplot as plt
from matplotlib import rc
rc('text', usetex=True)
import time
import sys
sys.path.insert(0, '../../python/')
from opt_utils import *
from grad_utils import *
from ks_utils import *
from simulation_utils import *
from cv_utils import *
%matplotlib inline
```
# Generate synthetic data
```
N = 10 # number of teams
T = 10 # number of seasons/rounds/years
tn = [1] * int(T * N * (N - 1)/2) # number of games between each pair of teams
```
### Gaussian Process
```
random.seed(0)
np.random.seed(0)
P_list = make_prob_matrix(T,N,r = 1,alpha = 1,mu = [0,0.2])
game_matrix_list = get_game_matrix_list_from_P(tn,P_list)
data = game_matrix_list # shape: T*N*N
```
## Oracle estimator
```
# vanilla BT
random.seed(0)
np.random.seed(0)
_, beta_oracle = gd_bt(data = P_list)
latent = beta_oracle
for i in range(N):
plt.plot(latent[:,i], label="team %d"%i)
plt.xlabel("season number")
plt.ylabel("latent parameter")
# plt.legend(loc='upper left', bbox_to_anchor=(1, 1.03, 1, 0))
```
## Kernel method
## $h = T^{-3/4}$
```
T**(-3/4)
T, N = data.shape[0:2]
ks_data = kernel_smooth(data,1/6 * T**(-1/5))
ks_data[1,:,:]
objective_pgd, beta_pgd = gd_bt(data = ks_data,verbose=True)
T, N = data.shape[0:2]
beta = beta_pgd.reshape((T,N))
f = plt.figure(1, figsize = (9,5))
ax = plt.subplot(111)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],marker = '.',label = 'Team' + str(i),linewidth=1)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a legend to the right of the current axis
# ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
# f.savefig("l2_sq_solution.pdf", bbox_inches='tight')
```
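For intuition, the kernel smoothing step above can be sketched as a Gaussian-weighted average across seasons. This is an assumption about the idea behind `kernel_smooth` from `ks_utils`, which may differ in its exact normalization.

```python
import numpy as np

def gaussian_kernel_smooth(data, h):
    """Sketch of kernel smoothing a (T, N, N) array across time.

    Each season t receives a Gaussian-weighted average of all seasons,
    with bandwidth h on the rescaled time grid t/T.
    """
    T = data.shape[0]
    t_grid = np.arange(T) / T
    smoothed = np.empty_like(data, dtype=float)
    for t in range(T):
        w = np.exp(-((t_grid - t_grid[t]) ** 2) / (2 * h ** 2))
        w /= w.sum()  # normalize weights to sum to 1
        smoothed[t] = np.tensordot(w, data, axes=1)  # weighted average over time
    return smoothed

# Sanity check: a huge bandwidth averages every season toward the global mean.
data = np.random.rand(10, 3, 3)
flat = gaussian_kernel_smooth(data, h=100.0)
print(np.abs(flat - data.mean(axis=0)).max() < 1e-3)  # → True
```

Smaller bandwidths borrow strength only from nearby seasons, which is why the bandwidth `h` is the tuning knob that the LOOCV section below selects.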
## LOOCV
```
start_time = time.time()
random.seed(0)
np.random.seed(0)
h_list = np.linspace(0.3, 0.01, 10)
# h_cv, nll_cv, beta_cv, prob_cv = cv_utils.loocv_ks(data, h_list, gd_bt, num_loocv = 200, return_prob = True, out = "notebook")
h_cv, nll_cv, beta_cv, prob_cv = loocv_ks(data, h_list, gd_bt, num_loocv = 200, return_prob = True, out = "notebook")
loo_nll_DBT, loo_prob_DBT = max(nll_cv), prob_cv[np.argmax(nll_cv)]
print("--- %s seconds ---" % (time.time() - start_time))
h_cv
f = plt.figure(1, figsize = (7,5))
size_ylabel = 20
size_xlabel = 30
size_tick = 20
nll_cv = nll_cv
plt.plot(h_list[::-1], nll_cv)
plt.xlabel(r'$h$',fontsize = size_xlabel); plt.ylabel(r"Averaged nll",fontsize = size_ylabel)
plt.tick_params(axis='both', which='major', labelsize=size_tick)
# f.savefig("cv_curve.pdf", bbox_inches='tight')
import time
start_time = time.time()
random.seed(0)
np.random.seed(0)
h = h_cv
nll_DBT, beta_DBT, prob_DBT = loo_DBT(data, h, gd_bt, num_loo = 200, return_prob = True, out = "notebook")
print("--- %s seconds ---" % (time.time() - start_time))
def get_winrate(data):
T, N = data.shape[:2]
winrate = np.sum(data, 2) / (np.sum(data,2) + np.sum(data,1))
return winrate
def loo_winrate(data,num_loo = 200):
indices = np.array(np.where(np.full(data.shape, True))).T
cum_match = np.cumsum(data.flatten())
loglikes_loo = 0
prob_loo = 0
for i in range(num_loo):
data_loo = data.copy()
rand_match = np.random.randint(np.sum(data))
rand_index = indices[np.min(np.where(cum_match >= rand_match)[0])]
data_loo[tuple(rand_index)] -= 1
winrate_loo = get_winrate(data = data_loo)
prob_loo += 1 - winrate_loo[rand_index[0],rand_index[1]]
return (-loglikes_loo/num_loo, prob_loo/num_loo)
# winrate
random.seed(0)
np.random.seed(0)
winrate = get_winrate(data)
loo_nll_wr, loo_prob_wr = loo_winrate(data)
loo_prob_wr
# vanilla BT
import time
start_time = time.time()
random.seed(0)
np.random.seed(0)
objective_vanilla_bt, beta_vanilla_bt = gd_bt(data = data,verbose = True)
loo_nll_vBT, loo_prob_vBT = loo_vBT(data,num_loo = 200)
print("--- %s seconds ---" % (time.time() - start_time))
loo_nll_vBT
loo_prob_vBT
rank_dif_estimator = [0] * 3
beta_all = [winrate,beta_vanilla_bt,beta_cv]
for i in range(len(rank_dif_estimator)):
betai = beta_all[i]
rank_dif_estimator[i] = np.mean(av_dif_rank(beta_oracle,betai))
rank_dif_estimator
df = pd.DataFrame({'estimator':['winrate','vanilla BT','DBT'],'average rank difference':rank_dif_estimator,
'LOO Prob':[loo_prob_wr,loo_prob_vBT,loo_prob_DBT],
'LOO nll':[loo_nll_wr,loo_nll_vBT,loo_nll_DBT]})
print(df.to_latex(index_names=True, escape=False, index=False,
column_format='c|c|c|c|', float_format="{:0.2f}".format,
header=True, bold_rows=True))
T, N = data.shape[0:2]
f = plt.figure(1, figsize = (10,8))
size_ylabel = 20
size_xlabel = 15
size_title = 15
size_tick = 13
size_legend = 15.4
font_title = "Times New Roman Bold"
random.seed(0)
np.random.seed(0)
color_matrix = np.random.rand(N,3)
beta = beta_oracle.reshape((T,N))
ax = plt.subplot(221)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"True $\beta^*$",fontsize = size_title)
plt.xlabel(r"$t$",fontsize = size_xlabel); plt.ylabel(r"${\beta}^*$",fontsize = size_ylabel,rotation = "horizontal")
bottom, top = plt.ylim()
beta = beta_cv.reshape((T,N))
ax = plt.subplot(222)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Dynamic Bradley-Terry, Gaussian Kernel",fontsize = size_title)
plt.xlabel(r"$t$",fontsize = size_xlabel); plt.ylabel(r"$\hat{\beta}$",fontsize = size_ylabel,rotation = "horizontal")
# plt.ylim((bottom, top))
beta = winrate.reshape((T,N))
ax = plt.subplot(223)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Win Rate",fontsize = size_title)
plt.xlabel("t",fontsize = size_xlabel); plt.ylabel(r"Win Rate",fontsize = 10,rotation = "vertical")
ax.legend(loc='lower left', fontsize = size_legend,labelspacing = 0.75,bbox_to_anchor=(-0.03,-0.6),ncol = 5)
beta = beta_vanilla_bt.reshape((T,N))
ax = plt.subplot(224)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Original Bradley-Terry",fontsize = size_title)
plt.xlabel(r"$t$",fontsize = size_xlabel); plt.ylabel(r"$\hat{\beta}$",fontsize = size_ylabel,rotation = "horizontal")
plt.subplots_adjust(hspace = 0.3)
plt.show()
# f.savefig("compare.pdf", bbox_inches='tight')
```
## repeated experiment
```
import time
start_time = time.time()
random.seed(0)
np.random.seed(0)
B = 20
loo_ks = 200
loo = 200
h_cv_list = []
rank_diff_DBT_list, loo_nll_DBT_list, loo_prob_DBT_list = [], [], []
rank_diff_wr_list, loo_nll_wr_list, loo_prob_wr_list = [], [], []
rank_diff_vBT_list, loo_nll_vBT_list, loo_prob_vBT_list = [], [], []
for b in range(B):
N = 10 # number of teams
T = 10 # number of seasons/rounds/years
tn = [1] * int(T * N * (N - 1)/2) # number of games between each pair of teams
[alpha,r] = [1,1]
##### get beta here #####
P_list = make_prob_matrix(T,N,r = 1,alpha = 1,mu = [0,0.2])
P_winrate = P_list.sum(axis=2)
game_matrix_list = get_game_matrix_list_from_P(tn,P_list)
data = game_matrix_list # shape: T*N*N
# true beta
_, beta_oracle = gd_bt(data = P_list)
# ks cv
h_list = np.linspace(0.15, 0.01, 10)
h_cv, nll_cv, beta_cv, prob_cv = loocv_ks(data, h_list, gd_bt, num_loocv = loo_ks, verbose = False,
return_prob = True, out = "notebook")
h_cv_list.append(h_cv)
loo_nll_DBT_list.append(max(nll_cv))
loo_prob_DBT_list.append(prob_cv[np.argmax(nll_cv)])
rank_diff_DBT_list.append(np.mean(av_dif_rank(beta_oracle,beta_cv)))
# # fixed h
# h_cv = 1/6 * T**(-1/5)
# nll_cv, beta_cv, prob_cv = loo_DBT(data, h_cv, gd_bt, num_loo = 200, return_prob = True, out = "notebook")
# h_cv_list.append(h_cv)
# loo_nll_DBT_list.append(nll_cv)
# loo_prob_DBT_list.append(prob_cv)
# rank_diff_DBT_list.append(np.mean(av_dif_rank(beta_oracle,beta_cv)))
winrate = get_winrate(data)
loo_nll_wr, loo_prob_wr = loo_winrate(data,num_loo = loo)
loo_nll_wr_list.append(loo_nll_wr)
loo_prob_wr_list.append(loo_prob_wr)
rank_diff_wr_list.append(np.mean(av_dif_rank(beta_oracle,winrate)))
objective_vanilla_bt, beta_vBT = gd_bt(data = data)
loo_nll_vBT, loo_prob_vBT = loo_vBT(data,num_loo = loo)
loo_nll_vBT_list.append(loo_nll_vBT)
loo_prob_vBT_list.append(loo_prob_vBT)
rank_diff_vBT_list.append(np.mean(av_dif_rank(beta_oracle,beta_vBT)))
print(str(b) + '-th repeat finished.')
print("--- %s seconds ---" % (time.time() - start_time))
rank_dif_estimator = [np.mean(rank_diff_wr_list),
np.mean(rank_diff_vBT_list),
np.mean(rank_diff_DBT_list)]
loo_prob_wr = np.mean(loo_prob_wr_list)
loo_prob_DBT = np.mean(loo_prob_DBT_list)
loo_prob_vBT = np.mean(loo_prob_vBT_list)
loo_nll_wr = np.mean(loo_nll_wr_list)
loo_nll_DBT = np.mean(loo_nll_DBT_list)
loo_nll_vBT = np.mean(loo_nll_vBT_list)
df = pd.DataFrame({'estimator':['winrate','vanilla BT','DBT'],'average rank difference':rank_dif_estimator,
'LOO Prob':[loo_prob_wr,loo_prob_vBT,loo_prob_DBT],
'LOO nll':[loo_nll_wr,loo_nll_vBT,loo_nll_DBT]})
print("--- %s seconds ---" % (time.time() - start_time))
print(df.to_latex(index_names=True, escape=False, index=False,
column_format='c|c|c|c|', float_format="{:0.2f}".format,
header=True, bold_rows=True))
T, N = data.shape[0:2]
f = plt.figure(1, figsize = (10,8))
size_ylabel = 20
size_xlabel = 15
size_title = 15
size_tick = 13
size_legend = 15.4
font_title = "Times New Roman Bold"
random.seed(0)
np.random.seed(0)
color_matrix = np.random.rand(N,3)
beta = beta_oracle.reshape((T,N))
ax = plt.subplot(221)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Oracle $\beta^o$",fontsize = size_title)
plt.xlabel(r"$T$",fontsize = size_xlabel); plt.ylabel(r"${\beta}^o$",fontsize = size_ylabel,rotation = "horizontal")
# bottom, top = plt.ylim()
beta = beta_cv.reshape((T,N))
ax = plt.subplot(222)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Dynamic Bradley-Terry, Gaussian Kernel",fontsize = size_title)
plt.xlabel(r"$T$",fontsize = size_xlabel); plt.ylabel(r"$\hat{\beta}$",fontsize = size_ylabel,rotation = "horizontal")
# plt.ylim((bottom, top))
beta = winrate.reshape((T,N))
ax = plt.subplot(223)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Win Rate",fontsize = size_title)
plt.xlabel(r"$T$",fontsize = size_xlabel); plt.ylabel(r"Win Rate",fontsize = 10,rotation = "vertical")
# ax.legend(loc='lower left', fontsize = size_legend,labelspacing = 0.75,bbox_to_anchor=(-0.03,-0.6),ncol = 5)
beta = beta_vBT.reshape((T,N))
ax = plt.subplot(224)
for i in range(N):
ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"Vanilla Bradley-Terry",fontsize = size_title)
plt.xlabel(r"$T$",fontsize = size_xlabel); plt.ylabel(r"$\hat{\beta}$",fontsize = size_ylabel,rotation = "horizontal")
plt.subplots_adjust(hspace = 0.3)
plt.show()
f.savefig("compare_beta_NT10_n1_ag.pdf", bbox_inches='tight')
loo_prob_DBT_list
f = plt.figure(1, figsize = (16,8))
size_ylabel = 20
size_xlabel = 15
size_title = 15
size_tick = 13
size_legend = 15.4
font_title = "Times New Roman Bold"
random.seed(0)
np.random.seed(0)
color_list = ['red','blue','green']
x_range = [i for i in range(B)]
ax = plt.subplot(311)
ax.plot(x_range,rank_diff_wr_list,c=color_list[0],marker = '.',label = 'win rate',linewidth=1)
ax.plot(x_range,rank_diff_vBT_list,c=color_list[1],marker = '.',label = 'vBT',linewidth=1)
ax.plot(x_range,rank_diff_DBT_list,c=color_list[2],marker = '.',label = 'DBT',linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"average rank difference over 20 repeats (agnostic.N,T=10,n=1)",fontsize = size_title)
plt.xlabel(r"Repeat",fontsize = size_xlabel); plt.ylabel(r"ave. rank diff.",fontsize = size_ylabel,rotation = "vertical")
ax.legend(loc='upper left', fontsize = size_legend,labelspacing = 0.75,ncol = 1)
ax = plt.subplot(312)
ax.plot(x_range,loo_nll_vBT_list,c=color_list[1],marker = '.',label = 'vBT',linewidth=1)
ax.plot(x_range,loo_nll_DBT_list,c=color_list[2],marker = '.',label = 'DBT',linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"LOO nll over 20 repeats (agnostic.N,T=10,n=1)",fontsize = size_title)
plt.xlabel(r"Repeat",fontsize = size_xlabel); plt.ylabel(r"LOO nll",fontsize = size_ylabel,rotation = "vertical")
ax.legend(loc='upper left', fontsize = size_legend,labelspacing = 0.75,ncol = 1)
ax = plt.subplot(313)
ax.plot(x_range,loo_prob_wr_list,c=color_list[0],marker = '.',label = 'win rate',linewidth=1)
ax.plot(x_range,loo_prob_vBT_list,c=color_list[1],marker = '.',label = 'vBT',linewidth=1)
ax.plot(x_range,loo_prob_DBT_list,c=color_list[2],marker = '.',label = 'DBT',linewidth=1)
ax.tick_params(axis='both', which='major', labelsize=size_tick)
plt.title(r"LOO prob over 20 repeats (agnostic.N,T=10,n=1)",fontsize = size_title)
plt.xlabel(r"Repeat",fontsize = size_xlabel); plt.ylabel(r"LOO prob",fontsize = size_ylabel,rotation = "vertical")
ax.legend(loc='upper left', fontsize = size_legend,labelspacing = 0.75,ncol = 1)
plt.subplots_adjust(hspace = 0.6)
plt.show()
f.savefig("perform_NT10_n1_ag.pdf", bbox_inches='tight')
```
# Neural Networks
<a id='Table_of_Content'></a>
**[1. Neural Networks](#1.Neural_Networks)**
* [1.1. Perceptron](#1.1.Perceptron)
* [1.2. Sigmoid](#1.2.Sigmoid)
**[2. Neural Networks Architecture](#2.Neural_Networks_Architecture)**
**[3. Training Neural Network](#3.Training_Neural_Network)**
* [3.1. Forward Propagation](#3.1.Forward_Propagation)
* [3.2. Compute Error](#3.2.Compute_Error)
* [3.3. Back Propagation](#3.3.Back_Propagation)
* [3.4. Gradient Descent](#3.4.Gradient_Descent)
* [3.5. Computational Graph](#3.5.Computational_Graph)
* [3.6. Gradient_Checking](#3.6.Gradient_Checking)
* [3.7. Parameter Update](#3.7.Parameter_Update)
* [3.8. Learning Rate](#3.8.Learning_Rate)
<a id='1.Neural_Networks'></a>
# 1. Neural Networks
Neural networks (NN) are a broad family of algorithms that have formed the basis for the recent resurgence in the computational field called deep learning. Early work on neural networks actually began in the 1950s and 60s, and recently neural networks have experienced a resurgence of interest as deep learning has achieved impressive state-of-the-art results.
A neural network is basically a mathematical model built from simple functions with tunable parameters. Just like a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to send signals out to other neurons, an artificial neuron has a number of input channels, a processing stage, and one output that can branch out to multiple other artificial neurons. Neurons are interconnected and pass messages to each other.
To understand neural networks, Let's get started with **Perceptron**.
<center><img src="images/neurons.png" alt="neuron" width="500px"/></center>
<a id='1.1.Perceptron'></a>
## 1.1. Perceptron
A perceptron takes several binary inputs, $x_1,x_2,…,x_n$, and produces a single binary output.
<center><img src="images/perceptron.png" alt="neuron" width="300px"/></center>
The example above shows a perceptron taking three inputs $x_1, x_2, x_3$. Each input is given a $weight$ $W \in \mathbb{R}$ and it serves to express the importance of its corresponding input in the computation of output for that perceptron. The perceptron output, 0 or 1, is determined by the weighted sum $\sum_i w_ix_i$ with respect to a $threshold$ value as follows:
\begin{equation}
output = \left\{
\begin{array}{rl}
0 & \text{if } \sum_iw_ix_i \leq \text{threshold}\\
1 & \text{if } \sum_iw_ix_i > \text{threshold}
\end{array} \right.
\end{equation}
The weighted sum can be categorically defined as a dot product between $w$ and $x$ as follows:
$$\sum_i w_ix_i \equiv w \cdot x$$
where $w$ and $x$ are vectors corresponding to weights and inputs respectively. Introducing a bias term $b \equiv -threshold$ results in
\begin{equation}
output = \left\{
\begin{array}{rl}
0 & \text{if } w \cdot x + b \leq 0\\
1 & \text{if } w \cdot x + b > 0
\end{array} \right.
\end{equation}
You can think of the $bias$ as a measure of how easy it is to get the perceptron to output 1. For a perceptron with a large positive $bias$, it is extremely easy for the perceptron to output 1. In contrast, if the $bias$ is a large negative value, it is difficult for the perceptron to output 1.
<center><img src="images/perceptron2.png" alt="neuron" width="300px"/></center>
A way to think about the perceptron is that it is a device that makes **decisions** by weighing up evidence.
```
import numpy as np
X = np.array([0, 1, 1])
W = np.array([5, 1, -3])
b=5
def perceptron_neuron(X, W, b):
return int(X.dot(W)+b > 0)
perceptron_neuron(X,W,b)
```
<a id='1.2.Sigmoid'></a>
## 1.2. Sigmoid
Small changes to $weights$ and $bias$ of any perceptron in a network can cause the output to flip from 0 to 1 or 1 to 0. This flip can cause the behaviour of the rest of the network to change in a complicated way.
```
x = np.array([100])
b = np.array([9])
w1 = np.array([-0.08])
w2 = np.array([-0.09])
print(perceptron_neuron(x,w1,b))
print(perceptron_neuron(x,w2,b))
```
The problem above can be overcome by using a Sigmoid neuron. It functions similarly to a Perceptron but is modified so that small changes in $weights$ and $bias$ cause only a small change in the output.
As with a Perceptron, a Sigmoid neuron also computes $w \cdot x + b $, but now with the Sigmoid function being incorporated as follows:
\begin{equation}
z = w \cdot x + b \\
\sigma(z) = \frac{1}{1+e^{-z}}
\end{equation}
<center><img src="images/sigmoid_neuron.png" alt="neuron" width="500px"/></center>
A Sigmoid function produces output between 0 and 1, and the figure below shows the function. If $z$ is large and positive, the output of a sigmoid neuron approximates to 1, just as it would for a perceptron. Conversely, if $z$ is very negative, the output approximates to 0.
<center><img src="images/sigmoid_shape.png" alt="neuron" width="400px"/></center>
```
def sigmoid(x):
return 1/(1 + np.exp(-x))
def sigmoid_neuron(X, W, b):
z = X.dot(W)+b
return sigmoid(z)
print(sigmoid_neuron(x,w1,b))
print(sigmoid_neuron(x,w2,b))
```
Click here to go back [Table of Content](#Table_of_Content).
<a id='2.Neural_Networks_Architecture'></a>
# 2. Neural Networks Architecture
A neural network can take many forms. A typical architecture consists of an input layer (leftmost), an output layer (rightmost), and a middle layer (hidden layer). Each layer can have multiple neurons while the number of neurons in the output layer is dependent on the number of classes.
<center><img src="images/neuralnetworks.png" alt="neuron" width="600px"/></center>
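As a quick aside (not from the original text), the layer sizes fully determine the model size: each mapping from layer $j$ to layer $j+1$ contributes fan-in × fan-out weights plus one bias per output neuron.

```python
def count_params(layer_sizes):
    """Trainable parameters of a fully connected network.

    layer_sizes such as [2, 4, 1] means 2 inputs, one hidden layer of
    4 neurons, and 1 output neuron; each layer-to-layer mapping
    contributes n_in * n_out weights plus n_out biases.
    """
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# The 2-4-1 network trained later in this notebook:
print(count_params([2, 4, 1]))  # → 17  (2*4 + 4  +  4*1 + 1)
```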
Click here to go back [Table of Content](#Table_of_Content).
```
# Create dataset
from sklearn.datasets import make_moons, make_circles
import matplotlib.pyplot as plt
import pandas as pd
seed = 123
np.random.seed(seed)
X, y = make_circles(n_samples=1000, factor=.5, noise=.1, random_state=seed)
colors = {0:'red', 1:'blue'}
df = pd.DataFrame(dict(x=X[:,0], y=X[:,1], label=y))
fig, ax = plt.subplots()
grouped = df.groupby('label')
for key, group in grouped:
group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
plt.show()
```
<a id='3.Training_Neural_Network'></a>
## 3. Training Neural Network
Previously we have learnt that $weights$ express the importance of variables, and $bias$ is a threshold to control the behaviour of neurons. So, how can we determine these $weights$ and $bias$?
Consider these steps:
1. Since we do not know the ideal $weights$ and $bias$, we initialize them using random numbers **(parameter initialization).**
2. Let the data flow through the network with these initialized $weights$ and $bias$ to get a predicted output. This process is known as **forward propagation**.
3. Compare the predicted output with the actual output. An error is computed if there is a difference between them. A high error thus indicates that current $weights$ and $bias$ do not give an accurate prediction. **(compute error)**
4. To fix these $weights$ and $bias$, a backward computation is carried out by finding the partial derivative of error with respect to each $weight$ and $bias$ and then updating their values accordingly. This process is known as **backpropagation**.
5. Repeat steps (2) to (4) until the error is below a pre-defined threshold to obtain the optimized $weights$ and $bias$.
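The five steps above can be seen end-to-end in a miniature example before we build the full network: a single sigmoid neuron learning the OR function. This is a toy sketch; the learning rate and epoch count are arbitrary choices.

```python
import numpy as np

np.random.seed(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)  # OR truth table

W = np.random.randn(2, 1)  # step 1: random parameter initialization
b = np.random.randn(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for epoch in range(10000):                            # step 5: repeat
    yhat = sigmoid(X @ W + b)                         # step 2: forward propagation
    cost = np.mean((yhat - y) ** 2) / 2               # step 3: compute error
    grad_z = (yhat - y) * yhat * (1 - yhat) / len(X)  # step 4: backpropagation
    W -= 2.0 * (X.T @ grad_z)                         # step 4: update weights...
    b -= 2.0 * grad_z.sum()                           # ...and bias
print(cost < 0.05)  # → True: the error has shrunk toward zero
```

The rest of this section builds exactly these pieces, one by one, for the two-layer network.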
<a id='3.1.Forward_Propagation'></a>
### 3.1. Forward Propagation
<center><img src="images/network2.png" alt="neuron" width="400px"/></center>
The model above has two neurons in the input layer, four neurons (also known as **activation units**) in the hidden layer, and one neuron in the output layer.
$X = [x_1, x_2]$ is the input matrix
$W^{j+1} =$ matrix of $weights$ controlling function mapping from layer $j$ to layer $j + 1$
\begin{align}
W^{j+1} \equiv
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} & w_{22} \\
w_{31} & w_{32} \\
w_{41} & w_{42} \\
\end{bmatrix}^{j+1}
\end{align}
$W^{j+1}_{kl}$, $k$ is node in layer $j+1$, $l$ is node in layer $j$
$B^{j+1} = $ matrix of $bias$ controlling function mapping from layer $j$ to layer $j + 1$
\begin{align}
B^{j+1} \equiv
\begin{bmatrix}
b_{1} & b_{2} & b_{3} & b_{4}
\end{bmatrix}^{j+1}
\end{align}
$B^{j+1}_k$, $k$ is node in layer $j + 1$
the activation units can be labeled as $a_i^j =$ "activation" of unit $i$ in layer $j$.
If $j=0$, $a_i^j$ is equivalent to the input layer.
Finally, the activation functions in layer 1 can be denoted as
\begin{align}
a_1^1 = \sigma(W_{11}^1x_1+W_{12}^1x_2+B^1_{1}) \\
a_2^1 = \sigma(W_{21}^1x_1+W_{22}^1x_2+B^1_{2}) \\
a_3^1 = \sigma(W_{31}^1x_1+W_{32}^1x_2+B^1_{3}) \\
a_4^1 = \sigma(W_{41}^1x_1+W_{42}^1x_2+B^1_{4})
\end{align}
Simplified using vectorization,
\begin{align}
a^1 = \sigma(X \cdot W^{1T}+B^1) \\
\end{align}
\begin{align}
output = \sigma( a^1 \cdot W^{2T}+B^2) \\
\end{align}
Implement forward propagation and feed in the generated data
Tips:
- Use the numpy function *np.random.randn()* to draw samples from a Gaussian distribution with mean 0 and variance 1.
```
#step 1: parameters initialization
def initialize_params():
params = {
'W1': np.random.randn(4,2),
'B1': np.random.randn(1,4),
'W2': np.random.randn(1,4),
'B2': np.random.randn(1,1),
}
return params
#step 2: forward propagation
x_ = np.array([X[0]])
y_ = np.array([y[0]])
np.random.seed(0)
params = initialize_params()
def forward(X, params):
a1 = sigmoid(X.dot(params['W1'].T)+params['B1'])
output = sigmoid(a1.dot(params['W2'].T)+params['B2'])
cache={'a1':a1, 'params': params}
return output, cache
output, cache = forward(x_, params)
print('Actual output: ', y_)
print('Predicted output: ',output)
```
<a id='3.2.Compute_Error'></a>
### 3.2. Compute Error
Error is also known as Loss or Cost.
**Loss function** is usually a function defined on a data point, prediction and label, and measures the penalty.
**Cost function** is usually more general. It might be a sum of loss functions over your training set plus some model complexity penalty (regularization).
To compute the error, we should first define a $cost$ $function$. For simplicity, we will use **One Half Mean Squared Error** as our cost function. The equation is listed below:
\begin{equation}
MSE = \frac{1}{2n} \sum (\hat y - y)^2
\end{equation}
where $n$ is the number of training samples, $\hat y$ is the predicted output, and $y$ is the actual output. A low cost is returned if the predicted output is close to the actual output, which indicates an accurate prediction.
```
#step 3: cost function
def mse(yhat, y):
n = yhat.shape[0]
return (1/(2*n)) * np.sum(np.square(yhat-y))
mse(output, y_)
```
<a id='3.3.Back_Propagation'></a>
### 3.3. Back Propagation
Now we know that to get a good prediction, the cost should be as small as possible. To minimize the cost, we have to tune the $weights$ and $bias$, but how can we do that? Do we go with random trial and error, or is there a better way? Fortunately, there is a better way and it is called **Gradient Descent**.
<a id='3.4.Gradient_Descent'></a>
### 3.4. Gradient Descent
Gradient descent is an optimization algorithm that iteratively adjusts the $weights$ and $bias$ so that the cost gets smaller and eventually approaches a minimum.
In this iterative process, we compute the gradient of the cost function with respect to the $weights$ and $bias$, i.e. how much the cost changes when the $weights$ and $bias$ are changed. This tells us in which direction to update the $weights$ and $bias$ so that the cost is minimized.
Let's recall the forward propagation equation:
\begin{align}
a^1 = \sigma(X \cdot W^{1T}+B^1) \\
output = \sigma( a^1 \cdot W^{2T}+B^2) \\
cost = \frac{1}{2n} \sum (output - y)^2
\end{align}
Arrange them into a single equation, and $cost$, $L$ can be defined as follows:
\begin{align}
L = \frac{1}{2n} \sum (\sigma( \sigma(X \cdot W^{1T}+B^1) \cdot W^{2T}+B^2) - y)^2
\end{align}
From the equation, we want to find the gradient or derivative of $L$ with respect to $W^1, W^2, B^1, B^2$.
\begin{align}
\frac{\partial L}{\partial W^1}, \frac{\partial L}{\partial W^2}, \frac{\partial L}{\partial B^1}, \frac{\partial L}{\partial B^2}
\end{align}
Computing the partial derivatives of $L$ with respect to the $weights$ and $bias$ can become very complex as the number of layers grows. To keep it manageable, we can break the equation into smaller components and use the **chain rule** to derive the partial derivatives.
<a id='3.5.Computational_Graph'></a>
### 3.5. Computational Graph
To organize these computations, we can think of the forward propagation equation as a computational graph.
<center><img src="images/comp_graph.png" alt="neuron" width="500"/></center>
<center><img src="images/comp_graph2.png" alt="neuron" width="500px"/></center>
<center><img src="images/comp_graph3.png" alt="neuron" width="500px"/></center>
#### Scalar Example
\begin{align}
a_1^1 = \sigma(W_{11}^1x_1+W_{12}^1x_2+B^1_{1}) \quad \equiv \quad \frac{1}{1+\exp^{-(W_1x_1+W_2x_2+B_1)}}
\end{align}
<center><img src="images/simple_comp_graph.png" alt="neuron" width="600"/></center>
#### Note:
\begin{align}
L \quad &\rightarrow \quad \frac{\partial L}{\partial L} = 1 \\
L = \frac{1}{2n} \sum (output - y)^2 \quad &\rightarrow \quad \frac{\partial L}{\partial output} = \frac{1}{n} (output - y) \\
f(x) = e^x \quad &\rightarrow \quad \frac{\partial f}{\partial x} = e^x \\
f(x) = xy \quad &\rightarrow \quad \frac{\partial f}{\partial x} = y, \quad \frac{\partial f}{\partial y} = x \\
f(x) = 1/x \quad &\rightarrow \quad \frac{\partial f}{\partial x} = -1/x^2 \\
f(x) = x+c \quad &\rightarrow \quad \frac{\partial f}{\partial x} = 1 \\
\sigma(x) = \frac{1}{1+e^{-x}} \quad &\rightarrow \quad \frac{\partial \sigma}{\partial x} = \sigma(1-\sigma)
\end{align}
```
print('X: ', X[0])
print('W11: ', params['W1'][0])
print('B11: ', params['B1'][0][0])
# what does the graph look like in our case?
# calculate forward and backward flows
```
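As a sketch of those forward and backward flows for the scalar graph, using the same values that appear in the gradient-checking example later in this notebook:

```
import math

# scalar example: a = sigmoid(w1*x1 + w2*x2 + b1)
w1, x1, w2, x2, b1 = 3.0, 1.0, 2.0, -2.0, 2.0

# forward flow through the graph
linear = w1*x1 + w2*x2 + b1          # 3 - 4 + 2 = 1
a = 1.0 / (1.0 + math.exp(-linear))  # sigmoid(1)

# backward flow via the chain rule
dlinear = a * (1.0 - a)              # sigmoid'(linear) = sigma*(1-sigma)
dx1 = dlinear * w1                   # da/dx1
dw1 = dlinear * x1                   # da/dw1
print(round(a, 4), round(dx1, 4), round(dw1, 4))  # 0.7311 0.5898 0.1966
```

Note how each local derivative is just the derivative of one node in the graph, multiplied along the path back to the input.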
#### A Vectorized Example
\begin{align}
a^1 = \sigma(X \cdot W^{1T}+B^1) \\
\end{align}
<center><img src="images/vec_comp_graph.png" alt="neuron" width="700"/></center>
#### Note:
\begin{align}
q = X\cdot(W^T) \quad &\rightarrow \quad \frac{\partial f}{\partial X} = \frac{\partial f}{\partial q} \cdot W , \quad \frac{\partial f}{\partial W} = X^T \cdot \frac{\partial f}{\partial q} \\
l = q+B \quad &\rightarrow \quad \frac{\partial f}{\partial B} = \begin{bmatrix}1 & 1 \end{bmatrix} \cdot \frac{\partial f}{\partial l} , \quad \frac{\partial f}{\partial q} = \frac{\partial f}{\partial l}
\end{align}
```
def dmse(output, y):
    return (output - y)/output.shape[0]

def backward(X, output, y, cache):
    grads = {}
    a1 = cache['a1']
    params = cache['params']
    dloss = dmse(output, y)
    doutput = output*(1-output)*dloss
    # compute gradients of B2 and W2
    dW2 = a1.T.dot(doutput)
    dB2 = np.sum(doutput, axis=0, keepdims=True)
    dX2 = doutput.dot(params['W2'])
    da1 = a1*(1-a1)*dX2
    # compute gradients of B1 and W1
    dW1 = X.T.dot(da1)
    dB1 = np.sum(da1, axis=0, keepdims=True)
    grads['W1'] = dW1.T
    grads['W2'] = dW2.T
    grads['B1'] = dB1
    grads['B2'] = dB2
    return grads

X_ = X[:3]
Y_ = y[:3].reshape(-1,1)

def step(X, y, params):
    output, cache = forward(X, params)
    cost = mse(output, y)
    grads = backward(X, output, y, cache)
    return (cost, grads)
np.random.seed(0)
params = initialize_params()
cost, grads = step(X_, Y_, params)
print(cost)
print(grads)
```
<a id='3.6.Gradient_Checking'></a>
### 3.6. Gradient Checking
We can use a **numerical gradient** to verify the analytical gradient that we have derived.
<center><img src="images/num_grad.png" alt="neuron" width="400"/></center>
Consider the image above, where the red line is our function, the blue line is the gradient derived from the point $x$, the green line is the approximated gradient from the point of $x$, and $h$ is the step size. It can then be shown that:
$$ \frac{\partial f}{\partial x} \approx \frac{Y_C-Y_B}{X_C-X_B} \quad = \quad \frac{f(x+h) - f(x-h)}{(x+h)-(x-h)} \quad = \quad \frac{f(x+h) - f(x-h)}{2h} $$
##### EXAMPLE
```
w1 = 3; x1 = 1; w2 = 2; x2 = -2; b1 = 2
h = 1e-4
def f(w1, x1, w2, x2, b1):
    linear = (w1*x1)+(w2*x2)+b1
    return 1/(1+np.exp(-linear))
num_grad_x1 = (f(w1, x1+h, w2, x2, b1) - f(w1, x1-h, w2, x2, b1))/(2*h)
print(num_grad_x1)
# vectorized gradient checking
def gradient_check(f, x, h=0.00001):
    grad = np.zeros_like(x)
    # iterate over all indexes in x
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        # evaluate function at x+h and x-h
        ix = it.multi_index
        oldval = x[ix]
        x[ix] = oldval + h  # increment by h
        fxph = f(x)  # evaluate f(x + h)
        x[ix] = oldval - h
        fxnh = f(x)  # evaluate f(x - h)
        x[ix] = oldval  # restore
        # compute the partial derivative with the centered formula
        grad[ix] = ((fxph - fxnh) / (2 * h)).sum()  # the slope
        it.iternext()  # step to next dimension
    return grad

def rel_error(x, y):
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

for param_name in grads:
    f = lambda W: step(X_, Y_, params)[0]
    param_grad_num = gradient_check(f, params[param_name])
    print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
```
<a id='3.7.Parameter_Update'></a>
### 3.7. Parameter Update
After computing the gradients, the parameters are updated according to them: if a gradient is positive, the corresponding parameter decreases in value, and if it is negative, the parameter increases. Either way, the goal is to step toward the minimum of the cost.
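As a quick numeric illustration (the values are made up), one update step moves a parameter opposite to the sign of its gradient:

```
# one gradient-descent step on a single parameter (illustrative values)
w, learning_rate = 2.0, 0.1

w_after_pos = w - learning_rate * 3.0     # positive gradient -> parameter decreases
w_after_neg = w - learning_rate * (-3.0)  # negative gradient -> parameter increases
print(round(w_after_pos, 2), round(w_after_neg, 2))  # 1.7 2.3
```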
<a id='3.8.Learning_Rate'></a>
### 3.8. Learning Rate
The size of each update is controlled by the learning rate, $\alpha$. The gradient descent update equations are as follows:
\begin{align}
W^1 &= W^1 - \alpha * \frac{\partial L}{\partial W^1} \\
B^1 &= B^1 - \alpha * \frac{\partial L}{\partial B^1} \\
W^2 &= W^2 - \alpha * \frac{\partial L}{\partial W^2} \\
B^2 &= B^2 - \alpha * \frac{\partial L}{\partial B^2} \\
\end{align}
```
def update_parameter(params, grads, learning_rate):
    params['W1'] += -learning_rate * grads['W1']
    params['B1'] += -learning_rate * grads['B1']
    params['W2'] += -learning_rate * grads['W2']
    params['B2'] += -learning_rate * grads['B2']
params = initialize_params()
def train(X, y, learning_rate=0.1, num_iters=30000, batch_size=256):
    num_train = X.shape[0]
    costs = []
    for it in range(num_iters):
        random_indices = np.random.choice(num_train, batch_size)
        X_batch = X[random_indices]
        y_batch = y[random_indices]
        cost, grads = step(X_batch, y_batch, params)
        costs.append(cost)
        # update parameters
        update_parameter(params, grads, learning_rate)
    return costs
costs = train(X,y.reshape(-1,1))
plt.plot(costs)
def predict(X, params):
    W1 = params['W1']
    B1 = params['B1']
    W2 = params['W2']
    B2 = params['B2']
    output, _ = forward(X, params)
    return output

# test on training samples
y_pred = []
for i in range(len(X)):
    pred = np.squeeze(predict(X[i], params)).round()
    y_pred.append(pred)
plt.scatter(X[:,0], X[:,1], c=y_pred, linewidths=0, s=20);
# test on new samples
X_new, _ = make_circles(n_samples=1000, factor=.5, noise=.1)
y_pred = []
for i in range(len(X_new)):
    pred = np.squeeze(predict(X_new[i], params)).round()
    y_pred.append(pred)
plt.scatter(X_new[:,0], X_new[:,1], c=y_pred, linewidths=0, s=20);
```
# References:
- http://neuralnetworksanddeeplearning.com/
- http://cs231n.github.io/optimization-2/
- http://kineticmaths.com/index.php?title=Numerical_Differentiation
- https://google-developers.appspot.com/machine-learning/crash-course/backprop-scroll/
Click here to go back [Table of Content](#Table_of_Content).
| github_jupyter |
# <div class="girk">With this notebook you can search all available databases separately. You can edit the search string for each database and download the results for each. If desired, you can join all the results and download a combined Excel file at the bottom of the page</div>
# How-to:
- Start Institutional VPN
- Change variables: Location (where do you want to download to?), query and run the cell
- Search the header for your desired database and run the cells below
- For each database, the query defaults to the one declared at the top. If needed, it can be changed per database to whatever you want. The count limit is also adjustable to your needs
- Callable via API: Scopus, Science Direct and Arxiv
- The rest are scraped via chromedriver, which can take a few minutes depending on how many search results are found
```
import scraper_tobiashilt as scrape
from datetime import datetime
#Location where the results should be downloaded to
Location = '/Users/Tobias/Desktop/Python/'
Date = datetime.today().strftime('%Y-%m-%d')
# Search string
query = '("DLT" OR "Distributed Ledger") AND ("Circular Economy" OR "Sustainable supply chain")'
```
# Scopus
```
# Api-Key --> https://dev.elsevier.com
# max value for count: 6000 (Api-limit)
key = '...'
query_scopus = query
count = 50
df_Scopus = scrape.scrape_scopus(key, query_scopus, count)
df_Scopus
```
### Download
```
Excel_Scopus = Location + Date + "_Scopus.xlsx"
df_Scopus.to_excel(Excel_Scopus)
```
<font color=red>---------------------------------------------------------------------------------------------- </font>
# Science Direct
```
# Get your Api-Key --> https://dev.elsevier.com
# Count can be set to a maximum of 6000 (API limit)
key = '...'
insttoken = '...'
query_sd = query
count = 50
df_sciencedirect = scrape.scrape_sd(key,insttoken, query_sd, count)
df_sciencedirect
```
### Download
```
Excel_ScienceDirect = Location + Date + "_ScienceDirect.xlsx"
df_sciencedirect.to_excel(Excel_ScienceDirect)
```
<font color=red>---------------------------------------------------------------------------------------------- </font>
# Arxiv
```
# The search string has to be adjusted here, since it returns no results. Adjust count as needed
#query_arxix = query
query_arxiv = "test"
count = 10
df_arxiv = scrape.scrape_arxiv(query_arxiv, count)
df_arxiv
```
### Download
```
Excel_Arxiv = Location + Date + "_Arxiv.xlsx"
df_arxiv.to_excel(Excel_Arxiv)
```
<font color=red>---------------------------------------------------------------------------------------------- </font>
# Web of Science
```
# Uses the webdriver, since no API is available and we want to avoid an IP block (automated requests are not allowed per robots.txt)
# --> Go grab a coffee and let it run
# Count can be arbitrarily large
count = 5
#query_wos = "test"
query_wos = query
df_wos = scrape.scrape_wos(query_wos, count)
df_wos
```
### Download
```
Excel_WoS = Location + Date + "_WoS.xlsx"
df_wos.to_excel(Excel_WoS)
```
<font color=red>---------------------------------------------------------------------------------------------- </font>
# ACM digital
```
# Uses the webdriver, since no API is available and we want to avoid an IP block (automated requests are not allowed per robots.txt)
# --> Go grab a coffee and let it run
# Count can be arbitrarily large
count = 51
#query_wos = "test"
query_acm = query
df_acm = scrape.scrape_acm(query_acm, count)
df_acm
```
## Download
```
Excel_acm = Location + Date + "_ACM.xlsx"
df_acm.to_excel(Excel_acm)
```
<font color=red>---------------------------------------------------------------------------------------------- </font>
# IEEE
```
# Uses the webdriver, since no API is available and we want to avoid an IP block (automated requests are not allowed per robots.txt)
# --> Go grab a coffee and let it run
# Count can be arbitrarily large
count = 51
#query_wos = "test"
query_ieee = query
df_ieee = scrape.scrape_ieee(query_ieee, count)
df_ieee
Excel_ieee = Location + Date + "_IEEE.xlsx"
df_ieee.to_excel(Excel_ieee)
```
<font color=red>---------------------------------------------------------------------------------------------- </font>
# Emerald Insight
```
count = 51
query_emerald = query
df_emerald = scrape.scrape_emerald(query_emerald, count)
df_emerald
Excel_emerald = Location + Date + "_Emerald.xlsx"
df_emerald.to_excel(Excel_emerald)
```
# <div class="burk"> Combine and download all</div><i class="fa fa-lightbulb-o "></i>
### Combine your desired databases by including them in "frames"
```
import pandas as pd
# default
#frames = [df_Scopus, df_sciencedirect, df_arxiv, df_wos, df_acm, df_ieee, df_emerald]
frames = [df_Scopus, df_sciencedirect, df_arxiv, df_wos]
result = pd.concat(frames, ignore_index=True)
pre = len(result)
result = result[result['DOI'].isnull() | ~result[result['DOI'].notnull()].duplicated(subset='DOI',keep='first')]
result['Title'] = result['Title'].str.lower()
result.drop_duplicates(subset ='Title', keep = 'first', inplace = True)
after = len(result)
print('Removed', pre-after, 'duplicates! (based on DOI or title)')
result.reset_index(inplace = True, drop = True)
result['Cited_by'] = pd.to_numeric(result['Cited_by'])
result
detailed_combined = Location + Date + "_detailed_combined.xlsx"
result.to_excel(detailed_combined)
```
<a href="https://colab.research.google.com/github/aniketsharma00411/employee_future_prediction/blob/main/xgboost.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Dataset link: https://www.kaggle.com/tejashvi14/employee-future-prediction
# Uploading dataset
```
from google.colab import files
uploaded = files.upload()
```
# Initialization
```
import pandas as pd
import numpy as np
from itertools import product
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from xgboost import XGBClassifier
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('Employee.csv')
X = df.drop(['LeaveOrNot'], axis=1)
y = df['LeaveOrNot']
```
# Preparing data
```
X_full_train, X_test, y_full_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_full_train, y_full_train, test_size=0.25, random_state=42)
numerical = ['Age']
categorical = ['Education', 'JoiningYear', 'City', 'PaymentTier', 'Gender', 'EverBenched', 'ExperienceInCurrentDomain']
```
# Creating a Pipeline
```
def create_new_pipeline(params):
    numerical_transformer = SimpleImputer(strategy='median')
    categorical_transformer = Pipeline(steps=[
        ('imputer', SimpleImputer(strategy='most_frequent')),
        ('encoding', OneHotEncoder(drop='first'))
    ])
    preprocessor = ColumnTransformer(
        transformers=[
            ('numerical', numerical_transformer, numerical),
            ('categorical', categorical_transformer, categorical)
        ])
    model = XGBClassifier(
        n_jobs=-1,
        random_state=42,
        **params
    )
    pipeline = Pipeline(
        steps=[
            ('preprocessing', preprocessor),
            ('model', model)
        ]
    )
    return pipeline
```
# Hyperparameter Tuning
```
search_space = {
'n_estimators': np.linspace(10, 700, num=7).astype('int'),
'max_depth': np.linspace(1, 10, num=5).astype('int'),
'learning_rate': np.logspace(-3, 1, num=9),
'reg_alpha': np.logspace(-1, 1, num=5),
'reg_lambda': np.logspace(-1, 1, num=5)
}
max_score = 0
best_params = {}
for val in product(*search_space.values()):
    params = {}
    for i, param in enumerate(search_space.keys()):
        params[param] = val[i]
    print(params)
    pipeline = create_new_pipeline(params)
    pipeline.fit(X_train, y_train)
    score = pipeline.score(X_val, y_val)
    if score > max_score:
        max_score = score
        best_params = params
    print(f'Score: {score}\tBest score: {max_score}')
best_params
max_score
```
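One caveat worth checking before launching the exhaustive loop above: it fits one pipeline per combination, so the grid size determines the runtime. A minimal sketch that mirrors the search space above and just counts the combinations:

```
import numpy as np
from itertools import product

# same grid as above; only the lengths matter for counting
search_space = {
    'n_estimators': np.linspace(10, 700, num=7).astype('int'),
    'max_depth': np.linspace(1, 10, num=5).astype('int'),
    'learning_rate': np.logspace(-3, 1, num=9),
    'reg_alpha': np.logspace(-1, 1, num=5),
    'reg_lambda': np.logspace(-1, 1, num=5)
}
n_combinations = len(list(product(*search_space.values())))
print(n_combinations)  # 7 * 5 * 9 * 5 * 5 = 7875 pipeline fits
```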
# Training
```
pipeline = create_new_pipeline(best_params)
pipeline.fit(X_full_train, y_full_train)
```
# Validation
```
pipeline.score(X_full_train, y_full_train)
```
```
"""
@author: Jay Mehta
Based on the work of Maziar Raissi
"""
import sys
# Include the path that contains a number of files that have txt files containing solutions to the Burger's problem.
sys.path.insert(0,'../../Utilities/')
# Import required modules
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from scipy.interpolate import griddata
from pyDOE import lhs
from mpl_toolkits.mplot3d import Axes3D
import time
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
np.random.seed(1234)
torch.manual_seed(1234)
class PhysicsInformedNN:
    # Initialize the class
    """
    This class defines the Physics Informed Neural Network. The class is first initialized by the __init__ function. Additional functions related to the class are also defined subsequently.
    """
    def __init__(self, X_u, u, X_f, layers, lb, ub, nu, epochs):
        # Defining the lower and upper bound of the domain.
        self.lb = lb
        self.ub = ub
        # Define the initial conditions for X and t
        self.x_u = X_u[:,0:1]
        self.t_u = X_u[:,1:2]
        self.x_u_tf = self.x_u
        self.t_u_tf = self.t_u
        # Define the final conditions for X and t
        self.x_f = X_f[:,0:1]
        self.t_f = X_f[:,1:2]
        self.x_f_tf = self.x_f
        self.t_f_tf = self.t_f
        # Declaring the field for the variable to be solved for
        self.u = u
        self.u_tf = u
        # Declaring the number of layers in the Neural Network
        self.layers = layers
        # Defining the diffusion constant in the problem
        self.nu = nu
        # Create the structure of the neural network here, or build a function below to build the architecture and send the model here.
        self.model = self.neural_net(layers)
        # Apply the initialize_NN function to obtain the initial weights and biases for the network.
        self.model.apply(self.initialize_NN)
        # Select the optimization method for the network. Currently, it is just a placeholder.
        self.optimizer = torch.optim.SGD(self.model.parameters(), lr = 0.01)
        for epoch in range(0, epochs):
            u_pred = self.net_u(torch.from_numpy(self.x_u_tf), torch.from_numpy(self.t_u_tf))
            f_pred = self.net_f(self.x_f_tf, self.t_f_tf)
            loss = self.calc_loss(u_pred, self.u_tf, f_pred)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
        # train(model, epochs, self.x_u_tf, self.t_u_tf, self.x_f_tf, self.t_f_tf, self.u_tf)
    def neural_net(self, layers):
        """
        A function to build the neural network of the required size using the weights and biases provided. Instead of doing this, can we use a simple constructor method and initialize them after construction? That would be sensible and faster.
        """
        model = nn.Sequential()
        for l in range(0, len(layers) - 1):
            model.add_module("layer_"+str(l), nn.Linear(layers[l], layers[l+1], bias=True))
            model.add_module("tanh_"+str(l), nn.Tanh())
        return model

    def initialize_NN(self, m):
        """
        Initialize the neural network with the required layers, the weights and the biases. The input "layers" is an array that contains the number of nodes (neurons) in each layer.
        """
        if type(m) == nn.Linear:
            nn.init.xavier_uniform_(m.weight)
            # print(m.weight)
    def net_u(self, x, t):
        """
        Forward pass through the network to obtain the U field.
        """
        u = self.model(torch.cat((x, t), 1).float())
        return u

    def net_f(self, x, t):
        u = self.net_u(x, t)
        u_x = torch.autograd.grad(u, x, grad_outputs = torch.tensor([[1.0],[1.0]]), create_graph = True)
        u_xx = torch.autograd.grad(u_x, x, grad_outputs = torch.tensor([[1.0],[1.0]]), create_graph = True)
        u_t = torch.autograd.grad(u, t, grad_outputs = torch.tensor([[1.0],[1.0]]), create_graph = True)
        # f is the residual of the viscous Burgers equation u_t + u*u_x - nu*u_xx = 0,
        # which a perfect solution drives to zero everywhere in the domain.
        f = u_t[0] + u * u_x[0] - self.nu * u_xx[0]
        return f

    def calc_loss(self, u_pred, u_tf, f_pred):
        losses = torch.mean(torch.mul(u_pred - u_tf, u_pred - u_tf)) + torch.mean(torch.mul(f_pred, f_pred))
        return losses
    def train(self, model, epochs, x_u_tf, t_u_tf, x_f_tf, t_f_tf, u_tf):
        for epoch in range(0, epochs):
            # Perform a forward pass through the network to predict u and f at the sampled locations x and times t, using net_u and net_f.
            # It is crucial to provide x and t as columns, not rows; the concatenation in the prediction step will fail otherwise.
            u_pred = self.net_u(x_u_tf, t_u_tf)
            f_pred = self.net_f(x_f_tf, t_f_tf)
            # The loss has two components: one for mispredicting u, the other for violating the governing equation, whose residual f must equal 0 at all times and all locations (strong form).
            loss = self.calc_loss(u_pred, u_tf, f_pred)
            # Calculate the gradients using the backward() method.
            self.optimizer.zero_grad()
            loss.backward()
            # Update the parameters through the optimization step and the learning rate.
            self.optimizer.step()
        # Repeat the prediction, loss calculation, and optimization steps to train the network.
# layers = [2, 20, 20, 20, 20, 20, 20, 20, 20, 1]
# model = neural_net(layers)
# model.apply(initialize_NN)
# model(torch.tensor([1.1,0.5])) # This is how you feed the network forward. Use model(x) where x has two inputs for the location and time.
# x = torch.tensor([[1.1],[1.2]],requires_grad = True)
# t = torch.tensor([[0.5],[0.5]],requires_grad = True)
# u = model(torch.cat((x,t),1))
# # print(torch.cat((x,t),1))
# u.backward(torch.tensor([[1.0],[1.0]]))
# # print(x.grad.data)
# u_x = torch.autograd.grad(u,t,grad_outputs = torch.tensor([[1.0],[1.0]]),create_graph = True)
# y = torch.tensor([1.],requires_grad = True)
# x = torch.tensor([10.],requires_grad = True)
# y2 = torch.cat((x,y))
# print(y2)
# A = torch.tensor([[2.,3.],[4.,5.]],requires_grad = True)
# loss = (torch.mul(y2,y2)).sum()
# print(torch.autograd.grad(loss,x))
# print(torch.autograd.grad(loss,t))
# u = net_u(model, x, t)
# print(u)
# u_x = torch.autograd.grad(u, x, create_graph = True)
# u_xx = torch.autograd.grad(u, x, create_graph = True)
# u = net_u(model, x, t)
# u_t = torch.autograd.grad(u,t)
nu = 0.01/np.pi
noise = 0.0
N_u = 100
N_f = 10000
# Layer Map
layers = [2, 20, 20, 20, 20, 20, 20, 20, 20, 1]
data = scipy.io.loadmat('../../appendix/Data/burgers_shock.mat')
t = data['t'].flatten()[:,None]
x = data['x'].flatten()[:,None]
Exact = np.real(data['usol']).T
X, T = np.meshgrid(x,t)
X_star = np.hstack((X.flatten()[:,None],T.flatten()[:,None]))
u_star = Exact.flatten()[:,None]
# Domain bounds
lb = X_star.min(0)
ub = X_star.max(0)
xx1 = np.hstack((X[0:1,:].T, T[0:1,:].T))
uu1 = Exact[0:1,:].T
xx2 = np.hstack((X[:,0:1], T[:,0:1]))
uu2 = Exact[:,0:1]
xx3 = np.hstack((X[:,-1:], T[:,-1:]))
uu3 = Exact[:,-1:]
X_u_train = np.vstack([xx1, xx2, xx3])
X_f_train = lb + (ub-lb)*lhs(2, N_f)
X_f_train = np.vstack((X_f_train, X_u_train))
u_train = np.vstack([uu1, uu2, uu3])
idx = np.random.choice(X_u_train.shape[0], N_u, replace=False)
X_u_train = X_u_train[idx, :]
u_train = u_train[idx,:]
# model = PhysicsInformedNN(X_u_train,u_train,X_f_train,layers,lb,ub,nu,5)
X_u_train = torch.from_numpy(X_u_train)
X_u_train.requires_grad = True
u_train = torch.from_numpy(u_train)
u_train.requires_grad = True
x_u = X_u_train[:,0:1]
t_u = X_u_train[:,1:2]
model = nn.Sequential()
for l in range(0, len(layers) - 1):
    model.add_module("layer_"+str(l), nn.Linear(layers[l], layers[l+1], bias=True))
    model.add_module("tanh_"+str(l), nn.Tanh())
optimizer = torch.optim.LBFGS(model.parameters(), lr = 0.01)
losses = []
optimizer = torch.optim.SGD(model.parameters(), lr = 0.001)
for epoch in range(0, 1000):
    u_pred = model(torch.cat((x_u, t_u), 1).float())
    u_x = torch.autograd.grad(u_pred, x_u, grad_outputs = torch.ones([len(x_u),1], dtype = torch.float), create_graph=True)
    u_xx = torch.autograd.grad(u_x, x_u, grad_outputs = torch.ones([len(x_u),1], dtype = torch.float), create_graph=True)
    u_t = torch.autograd.grad(u_pred, t_u, grad_outputs = torch.ones([len(t_u),1], dtype = torch.float), create_graph=True)
    f = u_t[0] + u_pred * u_x[0] - nu * u_xx[0]
    loss = torch.mean(torch.mul(u_pred - u_train, u_pred - u_train)) + torch.mean(torch.mul(f, f))
    losses.append(loss.detach().numpy())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
epo = np.array([i for i in range(0,1000)])
plt.plot(epo,losses)
np.linalg.norm(u_star - model(torch.tensor(X_star).float()).detach().numpy(),2)/np.linalg.norm(u_star,2)
type(optimizer) == torch.optim.lbfgs.LBFGS
```
# Beautiful Technical Documentation with nbdoc and Docusaurus
> [nbdoc](https://github.com/outerbounds/nbdoc) is a lightweight version of [nbdev](https://github.com/fastai/nbdev) that allows you to create rich, testable content with notebooks. [Docusaurus](https://docusaurus.io/) is a beautiful static site generator for code documentation and blogging. This project brings all of these together to give you a powerful documentation system.
## Background: Literate Documentation
Writing technical documentation can be an arduous process. Many projects have no documentation at all, and when they do, it is often stale or out of date. Our goal is to make writing documentation as easy as possible by providing the following:
- [x] **Authoring experience that encourages the creation** of quality documentation: (1) write & run code in situ to avoid copy & pasting code (2) a WYSIWYG or low-latency hot-reload experience so you can get immediate feedback on how your docs will look.
- [x] **Testing**: Automated testing of code snippets
- [x] **Unified Search**: Unified search across API docs, tutorials, how-to guides, user guides with a great user interface that helps users understand the source of each
- [x] Allow you to **highlight specific lines of code.**
- [x] Allow authors to **hide** cell inputs, outputs or both
- [ ] **Entity Linking**: Detect entities like modules and class names in backticks and automatically link those to the appropriate documentation.
- [ ] **Inline and side-by-side explanations** (pop-ups, two-column view, etc)
- [ ] Allow reader to **collapse/show** code and output
The unchecked items are a work in progress. There are some tools that offer some of these features such as [Jupyter Book](https://jupyterbook.org/intro.html) and [Quarto](https://quarto.org/), but we wanted more flexibility with regards to the static site generator and desired additional features not available on those platforms.
## Setup
1. First, create an isolated python environment using your favorite tool such as `conda`, `pipenv`, `poetry` etc. Then, from the root of this repo run this command in the terminal:
```sh
make install
```
2. Then you need to open 3 different terminal windows (I recommend using split panes), and run the following commands in three separate windows:
_Note: we tried to use docker-compose but had trouble getting some things to run on Apple Silicon, so this will have to do for now._
Start the docs server:
```shell
make docs
```
Watch for changes to notebooks:
```sh
make watch
```
Start Jupyter Lab:
```sh
make nb
```
3. Open a browser window for the docs [http://localhost:3000/](http://localhost:3000/). In my experience, you may have to hard-refresh the first time you make a change, but hot-reloading generally works.
## Authoring In Notebooks
**For this tutorial to make the most sense, you should view this notebook and the rendered doc side-by-side. This page is called "Authoring Guide" and is the default page at the root when you start the site.** This tutorial assumes you have some familiarity with static site generators; if you do not, please visit the [Docusaurus docs](https://docusaurus.io/docs).
### Create Pages With Notebooks
You can create a notebook in any directory. When you do this, an associated markdown file is automatically generated with the same name in the same location. For example, `intro.ipynb` generates `intro.md`. Pages that are created with a notebook should always be edited in the notebook. The generated markdown can be useful for debugging, but it should not be edited directly; a warning message to that effect is present in auto-generated markdown files.
However, using notebooks in the first place is optional. You can create Markdown files as you normally would to create pages. We recommend using notebooks whenever possible, as you can embed arbitrary Markdown in notebooks, and also use `raw cells` for things like front matter or MDX components.
### Front Matter & MDX
The first cell of your notebook should be a `raw` cell with the appropriate front-matter. For example, this notebook has the following front matter:
```
---
slug: /
title: Authoring Guide
---
```
### Python Scripts In Docs
If you use the `%%writefile` magic, the magic command will get stripped from the cell, and the cell will be annotated with the appropriate filename as a title to denote that the cell block is referencing a script. Furthermore, any outputs are removed when you use this magic command.
```
%%writefile myflow.py
from metaflow import FlowSpec, step
class MyFlow(FlowSpec):

    @step
    def start(self):
        self.some_data = ['some', 'data']
        self.next(self.middle)

    @step
    def middle(self):
        self.next(self.end)

    @step
    def end(self):
        pass

if __name__ == '__main__':
    MyFlow()
```
### Running shell commands
You can use the `!` magic to run shell commands. When you do this, the cell is marked with the appropriate language automatically. For Metaflow output, the preamble of the logs are automatically removed.
```
!python myflow.py run
```
You may wish to only show logs from particular steps when executing a Flow. You can accomplish this by using the `#cell_meta:show_steps=<step_name>` comment:
```
#cell_meta:show_steps=start
!python myflow.py run
```
You can show multiple steps by separating step names with commas:
```
#cell_meta:show_steps=start,end
!python myflow.py run
```
### Writing Interactive Code & Toggling Visibility
It can be useful to write interactive code in notebooks as well. If you want to interact with a Flow, we recommend using the `--run-id-file <filename>` flag.
Note we are hiding both the input and output of the below cell (because it is a bit repetitive in this case) with the `#cell_meta:tag=remove_cell` comment:
```
#cell_meta:tag=remove_cell
!python myflow.py run --run-id-file run_id.txt
```
Now, you can write and run your code as normal and this will show up in the docs:
```
run_id = !cat run_id.txt
from metaflow import Run
run = Run(f'MyFlow/{run_id[0]}')
run.data.some_data
```
It is often smart to run tests in your docs. To do this, simply add assert statements. These will get tested automatically when we run the test suite.
```
assert run.data.some_data == ['some', 'data']
assert run.successful
```
But what if you only want to show the cell input, but not the output? Perhaps the output is too long and not necessary. You can do this with the `#cell_meta:tag=remove_output` comment.
```
#cell_meta:tag=remove_output
print(''.join(['This output would be really annoying if shown in the docs\n'] * 10))
```
You may want to just show the output and not the input. You can do that with the `#cell_meta:tag=remove_input` comment:
```
#cell_meta:tag=remove_input
print(''.join(['You can only see the output, but not the code that created me\n'] * 3))
```
## Running Tests
To test the notebooks, run `make test`. This will execute all notebooks in parallel and report any errors that are found.
### Skipping tests in cells
If you want to skip certain cells from running in tests because they take a really long time, you can place the comment `#notest` at the top of the cell.
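For example, a minimal sketch of such a cell (the `sleep` is an illustrative stand-in for a long-running computation):

```
#notest
# Simulate an expensive cell that the test suite should skip
import time

start = time.time()
time.sleep(0.01)  # stand-in for a long-running training job
elapsed = time.time() - start
print(f'done in {elapsed:.2f}s')
```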
## Docusaurus
All Docusaurus features will work because notebook markdown just becomes regular markdown. Furthermore, special things such as MDX, JSX, or Front Matter can be created with a raw cell. For more information, visit [the Docusaurus docs](https://docusaurus.io/docs).
### Docusaurus Installation
```
$ yarn
```
### Docusaurus Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Docusaurus Build
```
$ yarn build
```
This command generates static content into the `build` directory and can be served using any static contents hosting service.
### Docusaurus Deployment
Using SSH:
```
$ USE_SSH=true yarn deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
```
import os
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3,4,5,6,7'
%run -p ../beta_tinfocnf.py --softcond False --sup False --cond False --latent_dim 4 --noise_std 0.3 --a_range "1.0, .08" --b_range "0.25, .03" --noise_std_test 0.3 --a_range_test "1.0, .08" --b_range_test "0.25, .03" --adjoint False --visualize True --niters 20000 --nsample 200 --lr 0.001 --beta 250.0 --save vis_sup --savedir ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_250_unsup --gpu 1
#
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 0. --b_test 0.4 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_0_bt_0_4 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 0. --b_test 0.5 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_0_bt_0_5 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 1. --b_test 0.3 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_1_bt_0_3 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 1. --b_test 0.4 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_1_bt_0_4 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 1. --b_test 0.5 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_1_bt_0_5 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 2. --b_test 0.3 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_2_bt_0_3 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 2. --b_test 0.4 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_2_bt_0_4 --gpu 0
# #
# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 2. --b_test 0.5 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_2_bt_0_5 --gpu 0
# #
```
| github_jupyter |
# 1-4.1 Intro Python
## Conditionals
- **`if`, `else`, `pass`**
- **Conditionals using Boolean String Methods**
- Comparison operators
- String comparisons
-----
><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- **control code flow with `if`... `else` conditional logic**
- **using Boolean string methods (`.isupper(), .isalpha(), .startswith()...`)**
- using comparison (`>, <, >=, <=, ==, !=`)
- using Strings in comparisons
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
## Conditionals use `True` or `False`
- **`if`**
- **`else`**
- **`pass`**
#
<font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
```
if True:
print("True means do something")
else:
print("Not True means do something else")
hot_tea = True
if hot_tea:
print("enjoy some hot tea!")
else:
print("enjoy some tea, and perhaps try hot tea next time.")
someone_i_know = False
if someone_i_know:
print("how have you been?")
else:
# use pass if there is no need to execute code
pass
# changed the value of someone_i_know
someone_i_know = True
if someone_i_know:
print("how have you been?")
else:
pass
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
## Conditionals
### Using Boolean with `if, else`
- **Give a weather report using `if, else`**
```
sunny_today = True
# [ ] test if it is sunny_today and give proper responses using if and else
if sunny_today:
print("Put on sunscreen!")
else:
print("Take an umbrella")
sunny_today = False
# [ ] use code you created above and test sunny_today = False
if sunny_today:
print("Put on sunscreen!")
else:
print("Take an umbrella")
```
#
<font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
## Conditionals: Boolean String test methods with `if`
```python
if student_name.isalpha():
```
- **`.isalnum()`**
- **`.istitle()`**
- **`.isdigit()`**
- **`.islower()`**
- **`.startswith()`**
###
<font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
```
# review code and run cell
favorite_book = input("Enter the title of a favorite book: ")
if favorite_book.istitle():
print(favorite_book, "- nice capitalization in that title!")
else:
print(favorite_book, "- consider capitalization throughout for book titles.")
# review code and run cell
a_number = input("enter a positive integer number: ")
if a_number.isdigit():
print(a_number, "is a positive integer")
else:
print(a_number, "is not a positive integer")
# another if
if a_number.isalpha():
print(a_number, "is more like a word")
else:
pass
# review code and run cell
vehicle_type = input('Enter a type of vehicle that starts with "P": ')
if vehicle_type.upper().startswith("P"):
print(vehicle_type, 'starts with "P"')
else:
print(vehicle_type, 'does not start with "P"')
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 2: multi-part</B></font>
## Evaluating Boolean Conditionals
### create evaluations for `.islower()`
- print output describing **if** each of the 2 strings is all lower or not
```
test_string_1 = "welcome"
test_string_2 = "I have $3"
# [ ] use if, else to test for islower() for the 2 strings
if test_string_1.islower():
print(test_string_1, "is lower case.")
else:
print(test_string_1,"is not lower case.")
if test_string_2.islower():
print(test_string_2, "is lower case.")
else:
print(test_string_2,"is not lower case.")
```
<font size="3" color="#B24C00" face="verdana"> <B>Task 2 continued.. </B></font>
### create a function using `startswith('w')`
- `w_start_test()` tests whether a string starts with "w"
**The function should have a parameter for `test_string` and print the test result**
```
test_string_1 = "welcome"
test_string_2 = "I have $3"
test_string_3 = "With a function it's efficient to repeat code"
# [ ] create a function w_start_test() use if & else to test with startswith('w')
def w_start_test(test_string):
if test_string.startswith("w"):
print(test_string,"starts with w")
else:
print(test_string,"does not start with w")
return
# [ ] Test the 3 string variables provided by calling w_start_test()
w_start_test(test_string_1)
w_start_test(test_string_2)
w_start_test(test_string_3)
```
[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
| github_jupyter |
```
import glob
import os
import librosa
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def gen_mfcc_fn(fn, mfcc_window_size, mfcc_stride_size):
X, sample_rate = librosa.load(fn, sr=None, mono=True)
if sample_rate != 44100:
return
mfcc = librosa.feature.mfcc(X, sample_rate,
n_fft=int(mfcc_window_size * sample_rate),
hop_length=int(mfcc_stride_size * sample_rate))
return mfcc.T
def generate_mfccs_for_gmm(parent_dir,
sub_dirs,
file_ext='*.wav',
mfcc_window_size=0.02, mfcc_stride_size=0.01):
mfccs = np.empty((0, 20))
for label, sub_dir in enumerate(sub_dirs):
for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):
mfcc = gen_mfcc_fn(fn, mfcc_window_size, mfcc_stride_size)
if mfcc is None:
continue
mfccs = np.vstack([mfccs, mfcc])
return mfccs
parent_dir = './UrbanSound8K/audio/'
tr_sub_dirs = ['fold%d'% d for d in range(1, 2)]
mfccs_for_gmm = generate_mfccs_for_gmm(parent_dir, tr_sub_dirs)
print(mfccs_for_gmm.shape)
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=64, verbose=10)
gmm.fit(mfccs_for_gmm)
print(mfccs_for_gmm[0].shape)
y = gmm.predict_proba(mfccs_for_gmm[:1])
print(y[0].shape)
import pickle
pickle.dump(gmm, open('gaussian_mixture_model.pkl', 'wb'))
gmm_bak = pickle.load(open('gaussian_mixture_model.pkl', 'rb'))
gmm_bak
sound_class_table = {
'air_conditioner' : 0,
'car_horn' : 1,
'children_playing' : 2,
'dog_bark' : 3,
'drilling' : 4,
'engine_idling' : 5,
'gun_shot' : 6,
'jackhammer' : 7,
'siren' : 8,
'street_music' : 9
}
def segment_window(audio_len, segment_len, segment_stride):
start = 0
while start < audio_len:
yield start, start + segment_len
start += segment_stride
def generate_labels(fn, target_class):
return 1 if int(fn.split('-')[-3]) == sound_class_table[target_class] \
else -1
def generate_F_features(parent_dir,
sub_dirs,
num_segment_needed,
target_class,
file_ext='*.wav',
mfcc_window_size=0.02,
mfcc_stride_size=0.01):
F_features, labels = np.empty((0, 64)), np.array([])
for label, sub_dir in enumerate(sub_dirs):
for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):
X, sample_rate = librosa.load(fn, sr=None, mono=True)
if sample_rate != 44100:
continue
segment_len = int(sample_rate * 0.1)
segment_stride = int(sample_rate * 0.05)
# file_F_features = np.empty((0, 64))
for start, end in segment_window(X.size, segment_len, segment_stride):
segment_mfccs = librosa.feature.mfcc(X[start:end], sample_rate,
n_fft=int(mfcc_window_size * sample_rate),
hop_length=int(mfcc_stride_size * sample_rate))
segment_F_features = np.sum(gmm.predict_proba(segment_mfccs.T), axis=0) \
/ (segment_mfccs.shape[1])
F_features = np.vstack([F_features, segment_F_features])
labels = np.append(labels, generate_labels(fn, target_class))
if labels.shape[0] >= num_segment_needed:
return np.array(F_features), np.array(labels, dtype=np.int)
# F_features.append(file_F_features)
print("Finished!")
return np.array(F_features), np.array(labels, dtype=np.int)
def extract_test_fn_labels(fn, duration, target_class):
label_file_path = fn.replace('wav', 'txt')
with open(label_file_path) as fd:
lines = fd.readlines()
time_sections_with_label = list(map(lambda x: (float(x[0]), float(x[1]), x[2]), map(lambda x : x.split(), lines)))
time_intervals = np.arange(0.0, duration, 0.05)
labels = np.zeros((time_intervals.shape[0]), dtype=np.int)
for idx, t in enumerate(time_intervals):
labels[idx] = -1
for time_section in time_sections_with_label:
if t < time_section[0] or t > time_section[1]:
continue
if time_section[2] == target_class:
labels[idx] = 1
break
return labels
def gen_test_fn_features(fn):
X, sample_rate = librosa.load(fn, sr=None, mono=True)
if sample_rate != 44100:
return X, sample_rate, None
segment_len = int(sample_rate * 0.1)
segment_stride = int(sample_rate * 0.05)
print(fn)
file_F_features = np.empty((0, 64))
for start, end in segment_window(X.size, segment_len, segment_stride):
segment_mfccs = librosa.feature.mfcc(X[start:end], sample_rate,
n_fft=int(0.02 * sample_rate),
hop_length=int(0.01 * sample_rate))
segment_F_features = np.sum(gmm.predict_proba(segment_mfccs.T), axis=0) \
/ (segment_mfccs.shape[1])
file_F_features = np.vstack([file_F_features, segment_F_features])
return X, sample_rate, file_F_features
def gen_testing_data_for_svm(target_class, parent_dir = '.',
sub_dirs = ['soundscapes_5_events_sub'],
file_ext='*.wav'):
F_features, labels = [], []
for label, sub_dir in enumerate(sub_dirs):
for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):
X, sample_rate, file_F_features = gen_test_fn_features(fn)
if file_F_features is None:
continue
fn_labels = extract_test_fn_labels(fn, X.size/sample_rate, target_class)
labels.append(fn_labels)
F_features.append(file_F_features)
print("Finished!")
return F_features, labels
# def gen_testing_data_for_svm(target_class, parent_dir = '.',
# sub_dirs = ['soundscapes_5_events_sub'],
# file_ext='*.wav'):
# F_features, labels = [], []
# fns = []
# for label, sub_dir in enumerate(sub_dirs):
# for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):
# X, sample_rate, file_F_features = gen_test_fn_features(fn)
# if file_F_features is None:
# continue
# fns.append(fn)
# print(fn)
# # fn_labels = extract_test_fn_labels(fn, X.size/sample_rate, target_class)
# # labels.append(fn_labels)
# F_features.append(file_F_features)
# print("Finished!")
# return F_features, fns
def gen_training_data_for_svm(num_target_class_segment, target_class):
parent_dir = './UrbanSound8K/ByClass'
F_features_target_class, labels_target_class = generate_F_features(parent_dir,
[target_class],
num_target_class_segment,
target_class)
F_features_non_target_class = np.empty((0, 64))
labels_non_target_class = np.array([])
for k, _ in sound_class_table.items():
if k == target_class:
continue
tmp_F_features, tmp_labels = generate_F_features(parent_dir,
[k],
int(num_target_class_segment/9),
target_class)
F_features_non_target_class = np.vstack([F_features_non_target_class, tmp_F_features])
labels_non_target_class = np.append(labels_non_target_class, tmp_labels)
return np.vstack([F_features_non_target_class, F_features_target_class]), \
np.append(labels_non_target_class, labels_target_class)
X_all, y_all = gen_training_data_for_svm(1800, target_class='air_conditioner')
print(X_all.shape)
print(y_all.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X_all, y_all, stratify=y_all, train_size=0.85)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
from sklearn.svm import SVC
clf = SVC(kernel='rbf', C=100, gamma=10, probability=True)
clf.fit(X_train, y_train)
print("Training set score: {:.3f}".format(clf.score(X_train, y_train)))
print("Test set score: {:.3f}".format(clf.score(X_test, y_test)))
from sklearn.metrics import confusion_matrix
print(clf.classes_)
confusion_matrix(y_test, clf.predict(X_test))
import pickle
pickle.dump(clf, open('./sound_detectors/air_conditioner_detector.pkl', 'wb'))
F_features_test, labels_test = gen_testing_data_for_svm(target_class='air_conditioner',
parent_dir='./soundscapes',
sub_dirs=['air_conditioner'])
np.savetxt("./sound_detector_test_data/siren_test_features.csv", np.array(F_features_test), delimiter=",")
np.savetxt("./sound_detector_test_data/siren_test_labels.csv", np.array(labels_test), delimiter=",")
print(np.array(F_features_test).shape)
print(np.array(labels_test).shape)
from sklearn.metrics import recall_score, precision_score, f1_score, accuracy_score
recall_scores = []
precision_scores = []
f1_scores = []
accuracy_scores = []
for x, y in zip(F_features_test, labels_test):
preds = clf.predict(x)
recall_scores.append(recall_score(y, preds))
precision_scores.append(precision_score(y, preds))
f1_scores.append(f1_score(y, preds))
accuracy_scores.append(accuracy_score(y, preds))
plt.plot(recall_scores)
plt.show()
plt.plot(precision_scores)
plt.show()
plt.plot(f1_scores)
plt.show()
plt.plot(accuracy_scores)
plt.show()
# 0.696400072903
# 0.697101319222
# 0.669004671073
# 0.8357
print(np.mean(recall_scores))
print(np.mean(precision_scores))
print(np.mean(f1_scores))
print(np.mean(accuracy_scores))
print(len(F_features_test))
# NOTE: `fns` is returned by the commented-out variant of
# gen_testing_data_for_svm() above; re-enable it to run this cell
print(len(fns))
preds = list(map(lambda d: clf.predict(d), F_features_test))
precisions = list(map(lambda d: d.tolist().count(1)/len(d), preds))
print(len(precisions))
plt.plot(precisions)
fs = list(filter(lambda p: p[0]>=.9, zip(precisions, fns)))
print(len(fs))
print(fs)
import os
for f in fs:
    # os.system('cp %s ./hownoisy/data/ByClass/air_conditioner' % (f[1]))
    pass  # uncomment the line above to copy the selected files
```
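The `segment_window` generator above yields overlapping `(start, end)` index pairs; a standalone sketch of its behavior (toy numbers, not real audio):

```python
def segment_window(audio_len, segment_len, segment_stride):
    # yield overlapping (start, end) index pairs covering the signal
    start = 0
    while start < audio_len:
        yield start, start + segment_len
        start += segment_stride

# a 10-sample signal with windows of length 4 and stride 2;
# slicing X[start:end] simply truncates the final window
windows = list(segment_window(10, 4, 2))
print(windows)  # [(0, 4), (2, 6), (4, 8), (6, 10), (8, 12)]
```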
| github_jupyter |
# Handling Missing Data
The difference between data found in many tutorials and data in the real world is that real-world data is rarely clean and homogeneous.
In particular, many interesting datasets will have some amount of data missing.
To make matters even more complicated, different data sources may indicate missing data in different ways.
In this section, we will discuss some general considerations for missing data, discuss how Pandas chooses to represent it, and demonstrate some built-in Pandas tools for handling missing data in Python.
We'll refer to missing data in general as the following values:
* *null*
* *NaN*
* *NA*
## Trade-Offs in Missing Data Conventions
There are a number of schemes that have been developed to indicate the presence of missing data in a table or DataFrame.
Generally, they revolve around one of two strategies: using a *mask* that globally indicates missing values, or choosing a *sentinel value* that indicates a missing entry.
In the masking approach, the mask might be an entirely separate Boolean array, or it may involve appropriation of one bit in the data representation to locally indicate the null status of a value.
In the sentinel approach, the sentinel value could be some data-specific convention, such as indicating a missing integer value with -9999 or some rare bit pattern, or it could be a more global convention, such as indicating a missing floating-point value with NaN (Not a Number), a special value which is part of the IEEE floating-point specification.
None of these approaches is without trade-offs: use of a separate mask array requires allocation of an additional Boolean array, which adds overhead in both storage and computation. A sentinel value reduces the range of valid values that can be represented, and may require extra (often non-optimized) logic in CPU and GPU arithmetic. Common special values like NaN are not available for all data types.
As in most cases where no universally optimal choice exists, different languages and systems use different conventions.
For example, the R language uses reserved bit patterns within each data type as sentinel values indicating missing data, while the SciDB system uses an extra byte attached to every cell which indicates a NA state.
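The two strategies can be sketched in NumPy (using a hypothetical sentinel value of -9999):

```python
import numpy as np
import numpy.ma as ma

data = np.array([1.0, 2.0, -9999.0, 4.0])

# Masking approach: a separate Boolean array marks the missing entry,
# at the cost of storing and maintaining a second array
masked = ma.masked_array(data, mask=[False, False, True, False])
print(masked.mean())  # 2.333..., the masked entry is excluded

# Sentinel approach: a reserved value marks the missing entry, at the
# cost of extra logic and a reduced range of representable values
print(data[data != -9999].mean())  # same result
```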
## Missing Data in Pandas
The way in which Pandas handles missing values is constrained by its reliance on the NumPy package, which does not have a built-in notion of NA values for non-floating-point data types.
Pandas could have followed R's lead in specifying bit patterns for each individual data type to indicate nullness, but this approach turns out to be rather unwieldy.
While R contains four basic data types, NumPy supports *far* more than this: for example, while R has a single integer type, NumPy supports *fourteen* basic integer types once you account for available precisions, signedness, and endianness of the encoding.
Reserving a specific bit pattern in all available NumPy types would lead to an unwieldy amount of overhead in special-casing various operations for various types, likely even requiring a new fork of the NumPy package. Further, for the smaller data types (such as 8-bit integers), sacrificing a bit to use as a mask will significantly reduce the range of values it can represent.
NumPy does have support for masked arrays – that is, arrays that have a separate Boolean mask array attached for marking data as "good" or "bad."
Pandas could have derived from this, but the overhead in storage, computation, and code maintenance makes that an unattractive choice.
With these constraints in mind, Pandas chose to use sentinels for missing data, and further chose to use two already-existing Python null values: the special floating-point ``NaN`` value, and the Python ``None`` object.
This choice has some side effects, as we will see, but in practice ends up being a good compromise in most cases of interest.
### ``None``: Pythonic missing data
The first sentinel value used by Pandas is ``None``, a Python singleton object that is often used for missing data in Python code.
Because it is a Python object, ``None`` cannot be used in any arbitrary NumPy/Pandas array, but only in arrays with data type ``'object'`` (i.e., arrays of Python objects):
```
import numpy as np
import pandas as pd
vals1 = np.array([1, None, 3, 4])
vals1
```
This ``dtype=object`` means that the best common type representation NumPy could infer for the contents of the array is that they are Python objects.
While this kind of object array is useful for some purposes, any operations on the data will be done at the Python level, with much more overhead than the typically fast operations seen for arrays with native types:
```
for dtype in ['object', 'int']:
print("dtype =", dtype)
%timeit np.arange(1E6, dtype=dtype).sum()
print()
```
The use of Python objects in an array also means that if you perform aggregations like ``sum()`` or ``min()`` across an array with a ``None`` value, you will generally get an error:
```
vals1.sum()
```
This reflects the fact that addition between an integer and ``None`` is undefined.
### ``NaN``: Missing numerical data
The other missing data representation, ``NaN`` (acronym for *Not a Number*), is different; it is a special floating-point value recognized by all systems that use the standard IEEE floating-point representation:
```
vals2 = np.array([1, np.nan, 3, 4])
vals2.dtype
```
Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array from before, this array supports fast operations pushed into compiled code.
You should be aware that ``NaN`` is a bit like a data virus–it infects any other object it touches.
Regardless of the operation, the result of arithmetic with ``NaN`` will be another ``NaN``:
```
1 + np.nan
0 * np.nan
```
Note that this means that aggregates over the values are well defined (i.e., they don't result in an error) but not always useful:
```
vals2.sum(), vals2.min(), vals2.max()
```
NumPy does provide some special aggregations that will ignore these missing values:
```
np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2)
```
Keep in mind that ``NaN`` is specifically a floating-point value; there is no equivalent NaN value for integers, strings, or other types.
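A quick demonstration: assigning ``NaN`` into an integer array fails, because no integer bit pattern is reserved for it, while a float array accepts it without complaint:

```python
import numpy as np

int_arr = np.array([1, 2, 3])
try:
    int_arr[0] = np.nan  # no integer representation of NaN exists
except ValueError as err:
    print("cannot store NaN in an integer array:", err)

# a float array accepts NaN
float_arr = int_arr.astype(float)
float_arr[0] = np.nan
print(float_arr)  # [nan  2.  3.]
```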
### NaN and None in Pandas
``NaN`` and ``None`` both have their place, and Pandas is built to handle the two of them nearly interchangeably, converting between them where appropriate:
```
pd.Series([1, np.nan, 2, None])
```
For types that don't have an available sentinel value, Pandas automatically type-casts when NA values are present.
For example, if we set a value in an integer array to ``np.nan``, it will automatically be upcast to a floating-point type to accommodate the NA:
```
x = pd.Series(range(2), dtype=int)
x
x[0] = None
x
```
Notice that in addition to casting the integer array to floating point, Pandas automatically converts the ``None`` to a ``NaN`` value.
(Be aware that there is a proposal to add a native integer NA to Pandas in the future; as of this writing, it has not been included).
While this type of magic may feel a bit hackish compared to the more unified approach to NA values in domain-specific languages like R, the Pandas sentinel/casting approach works quite well in practice and in my experience only rarely causes issues.
The following table lists the upcasting conventions in Pandas when NA values are introduced:
|Typeclass | Conversion When Storing NAs | NA Sentinel Value |
|--------------|-----------------------------|------------------------|
| ``floating`` | No change | ``np.nan`` |
| ``object`` | No change | ``None`` or ``np.nan`` |
| ``integer`` | Cast to ``float64`` | ``np.nan`` |
| ``boolean`` | Cast to ``object`` | ``None`` or ``np.nan`` |
Keep in mind that in Pandas, string data is always stored with an ``object`` dtype.
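For example, a ``Series`` of strings gets the ``object`` dtype on the pandas versions this text targets, and a missing entry is reported by ``isnull()``:

```python
import pandas as pd

s = pd.Series(['apple', 'banana', None])
print(s.dtype)              # object (on classic pandas versions)
print(s.isnull().tolist())  # [False, False, True]
```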
## Operating on Null Values
As we have seen, Pandas treats ``None`` and ``NaN`` as essentially interchangeable for indicating missing or null values.
To facilitate this convention, there are several useful methods for detecting, removing, and replacing null values in Pandas data structures.
They are:
- ``isnull()``: Generate a boolean mask indicating missing values
- ``notnull()``: Opposite of ``isnull()``
- ``dropna()``: Return a filtered version of the data
- ``fillna()``: Return a copy of the data with missing values filled or imputed
We will conclude this section with a brief exploration and demonstration of these routines.
### Detecting null values
Pandas data structures have two useful methods for detecting null data: ``isnull()`` and ``notnull()``.
Either one will return a Boolean mask over the data. For example:
```
data = pd.Series([1, np.nan, 'hello', None])
data.isnull()
```
Boolean masks can be used directly as a ``Series`` or ``DataFrame`` index:
```
data[data.notnull()]
```
The ``isnull()`` and ``notnull()`` methods produce similar Boolean results for ``DataFrame``s.
### Dropping null values
In addition to the masking used before, there are the convenience methods, ``dropna()``
(which removes NA values) and ``fillna()`` (which fills in NA values). For a ``Series``,
the result is straightforward:
```
data.dropna()
```
Notice that the default behavior of `dropna()` is to leave the original DataFrame/Series untouched and return a new object. To drop NAs from the original object itself, pass the argument `inplace=True`; use it with caution, since the dropped values cannot be recovered afterwards.
```
data
data.dropna(inplace=True)
data
```
For a ``DataFrame``, there are more options.
Consider the following ``DataFrame``:
```
df = pd.DataFrame([[1, np.nan, 2],
[2, 3, 5],
[np.nan, 4, 6]])
df
```
We cannot drop single values from a ``DataFrame``; we can only drop full rows or full columns.
Depending on the application, you might want one or the other, so ``dropna()`` gives a number of options for a ``DataFrame``.
By default, ``dropna()`` will drop all rows in which *any* null value is present:
```
df.dropna()
```
Alternatively, you can drop NA values along a different axis; ``axis=1`` drops all columns containing a null value:
```
df.dropna(axis='columns')
```
But this drops some good data as well; you might rather be interested in dropping rows or columns with *all* NA values, or a majority of NA values.
This can be specified through the ``how`` or ``thresh`` parameters, which allow fine control of the number of nulls to allow through.
The default is ``how='any'``, such that any row or column (depending on the ``axis`` keyword) containing a null value will be dropped.
You can also specify ``how='all'``, which will only drop rows/columns that are *all* null values:
```
df[3] = np.nan
df
df.dropna(axis='columns', how='all')
```
For finer-grained control, the ``thresh`` parameter lets you specify a minimum number of non-null values for the row/column to be kept:
```
df.dropna(axis='rows', thresh=3)
```
Here the first and last row have been dropped, because they contain only two non-null values.
### Filling null values
Sometimes rather than dropping NA values, you'd rather replace them with a valid value.
This value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values.
You could do this in-place using the ``isnull()`` method as a mask, but because it is such a common operation Pandas provides the ``fillna()`` method, which returns a copy of the array with the null values replaced.
Consider the following ``Series``:
```
data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
data
```
We can fill NA entries with a single value, such as zero:
```
data.fillna(0)
```
We can specify a forward-fill to propagate the previous value forward:
```
# forward-fill
data.fillna(method='ffill')
```
Or we can specify a back-fill to propagate the next values backward:
```
# back-fill
data.fillna(method='bfill')
```
For ``DataFrame``s, the options are similar, but we can also specify an ``axis`` along which the fills take place:
```
df
df.fillna(method='ffill', axis=1)
```
Notice that if a previous value is not available during a forward fill, the NA value remains.
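One common pattern (a sketch, not from the text above) is to chain a forward fill with a back fill so that leading NA values are also handled; ``Series.ffill()`` and ``Series.bfill()`` are shorthand equivalents of ``fillna`` with ``method='ffill'``/``'bfill'``:

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1, np.nan, 3])
# ffill alone leaves the leading NaN; a subsequent bfill removes it
filled = s.ffill().bfill()
print(filled.tolist())  # [1.0, 1.0, 1.0, 3.0]
```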
| github_jupyter |
## Dependencies
```
import os, random, warnings
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from transformers import TFDistilBertModel
from tokenizers import BertWordPieceTokenizer
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, Concatenate
def seed_everything(seed=0):
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
# Auxiliary functions
def plot_metrics(history, metric_list):
fig, axes = plt.subplots(len(metric_list), 1, sharex='col', figsize=(20, len(metric_list) * 5))
axes = axes.flatten()
for index, metric in enumerate(metric_list):
axes[index].plot(history[metric], label='Train %s' % metric)
axes[index].plot(history['val_%s' % metric], label='Validation %s' % metric)
axes[index].legend(loc='best', fontsize=16)
axes[index].set_title(metric)
plt.xlabel('Epochs', fontsize=16)
sns.despine()
plt.show()
def jaccard(str1, str2):
a = set(str1.lower().split())
b = set(str2.lower().split())
c = a.intersection(b)
return float(len(c)) / (len(a) + len(b) - len(c))
def evaluate_model(train_set, validation_set):
train_set['jaccard'] = train_set.apply(lambda x: jaccard(x['selected_text'], x['prediction']), axis=1)
validation_set['jaccard'] = validation_set.apply(lambda x: jaccard(x['selected_text'], x['prediction']), axis=1)
print('Train set Jaccard: %.3f' % train_set['jaccard'].mean())
print('Validation set Jaccard: %.3f' % validation_set['jaccard'].mean())
print('\nMetric by sentiment')
for sentiment in train_df['sentiment'].unique():
print('\nSentiment == %s' % sentiment)
print('Train set Jaccard: %.3f' % train_set[train_set['sentiment'] == sentiment]['jaccard'].mean())
print('Validation set Jaccard: %.3f' % validation_set[validation_set['sentiment'] == sentiment]['jaccard'].mean())
# Transformer inputs
def get_start_end(text, selected_text, offsets, max_seq_len):
# find the intersection between text and selected text
idx_start, idx_end = None, None
for index in (i for i, c in enumerate(text) if c == selected_text[0]):
if text[index:index + len(selected_text)] == selected_text:
idx_start = index
idx_end = index + len(selected_text)
break
intersection = [0] * len(text)
if idx_start != None and idx_end != None:
for char_idx in range(idx_start, idx_end):
intersection[char_idx] = 1
targets = np.zeros(len(offsets))
for i, (o1, o2) in enumerate(offsets):
if sum(intersection[o1:o2]) > 0:
targets[i] = 1
# OHE targets
target_start = np.zeros(len(offsets))
target_end = np.zeros(len(offsets))
targets_nonzero = np.nonzero(targets)[0]
if len(targets_nonzero) > 0:
target_start[targets_nonzero[0]] = 1
target_end[targets_nonzero[-1]] = 1
return target_start, target_end
def preprocess(text, selected_text, context, tokenizer, max_seq_len):
context_encoded = tokenizer.encode(context)
context_encoded = context_encoded.ids[1:-1]
encoded = tokenizer.encode(text)
encoded.pad(max_seq_len)
encoded.truncate(max_seq_len)
input_ids = encoded.ids
offsets = encoded.offsets
attention_mask = encoded.attention_mask
token_type_ids = ([0] * 3) + ([1] * (max_seq_len - 3))
input_ids = [101] + context_encoded + [102] + input_ids
# update input ids and attentions masks size
input_ids = input_ids[:-3]
attention_mask = [1] * 3 + attention_mask[:-3]
target_start, target_end = get_start_end(text, selected_text, offsets, max_seq_len)
x = [np.asarray(input_ids, dtype=np.int32),
np.asarray(attention_mask, dtype=np.int32),
np.asarray(token_type_ids, dtype=np.int32)]
y = [np.asarray(target_start, dtype=np.int32),
np.asarray(target_end, dtype=np.int32)]
return (x, y)
def get_data(df, tokenizer, MAX_LEN):
x_input_ids = []
x_attention_masks = []
x_token_type_ids = []
y_start = []
y_end = []
for row in df.itertuples():
x, y = preprocess(getattr(row, "text"), getattr(row, "selected_text"), getattr(row, "sentiment"), tokenizer, MAX_LEN)
x_input_ids.append(x[0])
x_attention_masks.append(x[1])
x_token_type_ids.append(x[2])
y_start.append(y[0])
y_end.append(y[1])
x_train = [np.asarray(x_input_ids), np.asarray(x_attention_masks), np.asarray(x_token_type_ids)]
y_train = [np.asarray(y_start), np.asarray(y_end)]
return x_train, y_train
def decode(pred_start, pred_end, text, tokenizer):
offset = tokenizer.encode(text).offsets
if pred_end >= len(offset):
pred_end = len(offset)-1
decoded_text = ""
for i in range(pred_start, pred_end+1):
decoded_text += text[offset[i][0]:offset[i][1]]
if (i+1) < len(offset) and offset[i][1] < offset[i+1][0]:
decoded_text += " "
return decoded_text
```
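As a sanity check of the span-labeling logic in `get_start_end`, here is a toy walk-through. The offsets below are hypothetical word-level `(start, end)` character spans of the kind a tokenizer would produce, and `span_targets` is a compact re-implementation of the same char-intersection logic:

```python
import numpy as np

# Compact re-implementation of the get_start_end logic, applied to a toy
# example with hand-written word-level (start, end) char offsets.
def span_targets(text, selected_text, offsets):
    idx_start = text.find(selected_text)
    intersection = [0] * len(text)
    if idx_start >= 0:
        for i in range(idx_start, idx_start + len(selected_text)):
            intersection[i] = 1
    targets = [1 if sum(intersection[o1:o2]) > 0 else 0 for o1, o2 in offsets]
    nz = np.nonzero(targets)[0]
    start = np.zeros(len(offsets))
    end = np.zeros(len(offsets))
    if len(nz) > 0:
        start[nz[0]] = 1
        end[nz[-1]] = 1
    return start, end

text = "i love this movie"
offsets = [(0, 1), (2, 6), (7, 11), (12, 17)]  # i / love / this / movie
start, end = span_targets(text, "love this", offsets)
print(start.argmax(), end.argmax())  # 1 2: the span covers tokens 1..2
```

The one-hot `start`/`end` vectors are exactly what the two softmax heads of the model are trained against.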
# Load data
```
train_df = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/train.csv')
print('Train samples: %s' % len(train_df))
display(train_df.head())
```
# Preprocess
```
train_df['text'].fillna('', inplace=True)
train_df['selected_text'].fillna('', inplace=True)
train_df["text"] = train_df["text"].apply(lambda x: x.lower())
train_df["selected_text"] = train_df["selected_text"].apply(lambda x: x.lower())
train_df['text'] = train_df['text'].astype(str)
train_df['selected_text'] = train_df['selected_text'].astype(str)
```
# Model parameters
```
MAX_LEN = 128
BATCH_SIZE = 64
EPOCHS = 10
LEARNING_RATE = 1e-5
ES_PATIENCE = 2
base_path = '/kaggle/input/qa-transformers/distilbert/'
base_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5'
config_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json'
vocab_path = base_path + 'bert-large-uncased-vocab.txt'
model_path = 'model.h5'
```
# Tokenizer
```
tokenizer = BertWordPieceTokenizer(vocab_path, lowercase=True)
tokenizer.save('./')
```
# Train/validation split
```
train, validation = train_test_split(train_df, test_size=0.2, random_state=SEED)
x_train, y_train = get_data(train, tokenizer, MAX_LEN)
x_valid, y_valid = get_data(validation, tokenizer, MAX_LEN)
print('Train set size: %s' % len(x_train[0]))
print('Validation set size: %s' % len(x_valid[0]))
```
# Model
```
def model_fn():
input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')
base_model = TFDistilBertModel.from_pretrained(base_model_path, config=config_path, name="base_model")
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})
last_state = sequence_output[0]
x = GlobalAveragePooling1D()(last_state)
y_start = Dense(MAX_LEN, activation='softmax', name='y_start')(x)
y_end = Dense(MAX_LEN, activation='softmax', name='y_end')(x)
model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])
model.compile(optimizers.Adam(lr=LEARNING_RATE),
loss=losses.CategoricalCrossentropy(),
metrics=[metrics.CategoricalAccuracy()])
return model
model = model_fn()
model.summary()
```
# Train
```
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE,
restore_best_weights=True, verbose=1)
history = model.fit(x_train, y_train,
validation_data=(x_valid, y_valid),
callbacks=[es],
epochs=EPOCHS,
verbose=1).history
model.save_weights(model_path)
```
# Model loss graph
```
sns.set(style="whitegrid")
plot_metrics(history, metric_list=['loss', 'y_start_loss', 'y_end_loss', 'y_start_categorical_accuracy', 'y_end_categorical_accuracy'])
```
# Model evaluation
```
train_preds = model.predict(x_train)
valid_preds = model.predict(x_valid)
train['start'] = train_preds[0].argmax(axis=-1)
train['end'] = train_preds[1].argmax(axis=-1)
train['prediction'] = train.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)
train["prediction"] = train["prediction"].apply(lambda x: '.' if x.strip() == '' else x)
validation['start'] = valid_preds[0].argmax(axis=-1)
validation['end'] = valid_preds[1].argmax(axis=-1)
validation['prediction'] = validation.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)
validation["prediction"] = validation["prediction"].apply(lambda x: '.' if x.strip() == '' else x)
evaluate_model(train, validation)
```
# Visualize predictions
```
print('Train set')
display(train.head(10))
print('Validation set')
display(validation.head(10))
```
<div class="alert alert-block alert-info">
Section of the book chapter: <b>5.3 Model Selection, Optimization and Evaluation</b>
</div>
# 5. Model Selection and Evaluation
**Table of Contents**
* [5.1 Hyperparameter Optimization](#5.1-Hyperparameter-Optimization)
* [5.2 Model Evaluation](#5.2-Model-Evaluation)
**Learnings:**
- how to optimize machine learning (ML) models with grid search, random search and Bayesian optimization,
- how to evaluate ML models.
### Packages
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# ignore warnings
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.ensemble import RandomForestRegressor
import utils
```
### Read in Data
**Dataset:** Felix M. Riese and Sina Keller, "Hyperspectral benchmark dataset on soil moisture", Dataset, Zenodo, 2018. [DOI:10.5281/zenodo.1227836](http://doi.org/10.5281/zenodo.1227836) and [GitHub](https://github.com/felixriese/hyperspectral-soilmoisture-dataset)
**Introducing paper:** Felix M. Riese and Sina Keller, “Introducing a Framework of Self-Organizing Maps for Regression of Soil Moisture with Hyperspectral Data,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018, pp. 6151-6154. [DOI:10.1109/IGARSS.2018.8517812](https://doi.org/10.1109/IGARSS.2018.8517812)
```
X_train, X_test, y_train, y_test = utils.get_xy_split()
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
```
### Fix Random State
```
np.random.seed(42)
```
***
## 5.1 Hyperparameter Optimization
Content:
- [5.1.1 Grid Search](#5.1.1-Grid-Search)
- [5.1.2 Randomized Search](#5.1.2-Randomized-Search)
- [5.1.3 Bayesian Optimization](#5.1.3-Bayesian-Optimization)
### 5.1.1 Grid Search
```
# NBVAL_IGNORE_OUTPUT
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
# example mode: support vector regressor
model = SVR(kernel="rbf")
# define parameter grid to be tested
params = {
"C": np.logspace(-4, 4, 9),
"gamma": np.logspace(-4, 4, 9)}
# set up grid search and run it on the data
gs = GridSearchCV(model, params)
%timeit gs.fit(X_train, y_train)
print("R2 score = {0:.2f} %".format(gs.score(X_test, y_test)*100))
```
### 5.1.2 Randomized Search
```
# NBVAL_IGNORE_OUTPUT
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
# example mode: support vector regressor
model = SVR(kernel="rbf")
# define parameter grid to be tested
params = {
"C": np.logspace(-4, 4, 9),
"gamma": np.logspace(-4, 4, 9)}
# set up grid search and run it on the data
gsr = RandomizedSearchCV(model, params, n_iter=15, refit=True)
%timeit gsr.fit(X_train, y_train)
print("R2 score = {0:.2f} %".format(gsr.score(X_test, y_test)*100))
```
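A quick back-of-the-envelope comparison of the two searches above (assuming scikit-learn's default 5-fold cross-validation): grid search fits every C-gamma combination, while the randomized search caps the number of sampled settings at `n_iter`:

```python
# Number of model fits, assuming the default cv=5 of scikit-learn.
n_C = n_gamma = 9          # np.logspace(-4, 4, 9) yields 9 values each
cv_folds = 5
grid_fits = n_C * n_gamma * cv_folds    # exhaustive: 81 settings
random_fits = 15 * cv_folds             # n_iter=15 sampled settings
print(grid_fits, random_fits)           # 405 75
```

This is why the `%timeit` cells above show such different runtimes for a similar final score.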
### 5.1.3 Bayesian Optimization
Implementation: [github.com/fmfn/BayesianOptimization](https://github.com/fmfn/BayesianOptimization)
```
# NBVAL_IGNORE_OUTPUT
from sklearn.svm import SVR
from bayes_opt import BayesianOptimization
# define function to be optimized
def opt_func(C, gamma):
model = SVR(C=C, gamma=gamma)
return model.fit(X_train, y_train).score(X_test, y_test)
# set bounded region of parameter space
pbounds = {'C': (1e-5, 1e4), 'gamma': (1e-5, 1e4)}
# define optimizer
optimizer = BayesianOptimization(
f=opt_func,
pbounds=pbounds,
random_state=1)
# optimize
%time optimizer.maximize(init_points=2, n_iter=15)
print("R2 score = {0:.2f} %".format(optimizer.max["target"]*100))
```
***
## 5.2 Model Evaluation
Content:
- [5.2.1 Generate Exemplary Data](#5.2.1-Generate-Exemplary-Data)
- [5.2.2 Plot the Data](#5.2.2-Plot-the-Data)
- [5.2.3 Evaluation Metrics](#5.2.3-Evaluation-Metrics)
```
import sklearn.metrics as me
```
### 5.2.1 Generate Exemplary Data
```
### generate example data
np.random.seed(1)
# define x grid
x_grid = np.linspace(0, 10, 11)
y_model = x_grid*0.5
# define first dataset without outlier
y1 = np.array([y + np.random.normal(scale=0.2) for y in y_model])
# define second dataset with outlier
y2 = np.copy(y1)
y2[9] = 0.5
# define third dataset with higher variance
y3 = np.array([y + np.random.normal(scale=1.0) for y in y_model])
```
### 5.2.2 Plot the Data
```
# plot example data
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(12,4))
fontsize = 18
titleweight = "bold"
titlepad = 10
scatter_label = "Data"
scatter_alpha = 0.7
scatter_s = 100
ax1.scatter(x_grid, y1, label=scatter_label, alpha=scatter_alpha, s=scatter_s)
ax1.set_title("(a) Low var.", fontsize=fontsize, fontweight=titleweight, pad=titlepad)
ax2.scatter(x_grid, y2, label=scatter_label, alpha=scatter_alpha, s=scatter_s)
ax2.set_title("(b) Low var. + outlier", fontsize=fontsize, fontweight=titleweight, pad=titlepad)
ax3.scatter(x_grid, y3, label=scatter_label, alpha=scatter_alpha, s=scatter_s)
ax3.set_title("(c) Higher var.", fontsize=fontsize, fontweight=titleweight, pad=titlepad)
for i, ax in enumerate([ax1, ax2, ax3]):
i += 1
# red line
ax.plot(x_grid, y_model, label="Model", c="tab:red", linestyle="dashed", linewidth=4, alpha=scatter_alpha)
# x-axis cosmetics
ax.set_xlabel("x in a.u.", fontsize=fontsize)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(fontsize)
# y-axis cosmetics
if i != 1:
ax.set_yticklabels([])
else:
ax.set_ylabel("y in a.u.", fontsize=fontsize, rotation=90)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(fontsize)
ax.set_xlim(-0.5, 10.5)
ax.set_ylim(-0.5, 6.5)
# ax.set_title("Example "+str(i), fontsize=fontsize)
if i == 2:
ax.legend(loc=2, fontsize=fontsize*1.0, frameon=True)
plt.tight_layout()
plt.savefig("plots/metrics_plot.pdf", bbox_inches="tight")
```
### 5.2.3 Evaluation Metrics
```
# calculating the metrics
for i, y in enumerate([y1, y2, y3]):
print("Example", i+1)
print("- MAE = {:.2f}".format(me.mean_absolute_error(y_model, y)))
print("- MSE = {:.2f}".format(me.mean_squared_error(y_model, y)))
print("- RMSE = {:.2f}".format(np.sqrt(me.mean_squared_error(y_model, y))))
print("- R2 = {:.2f}%".format(me.r2_score(y_model, y)*100))
print("-"*20)
```
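To make the metric definitions concrete, the same quantities can be computed by hand on a tiny example; the values agree with `sklearn.metrics` on identical inputs:

```python
import numpy as np

# Hand computation of MAE, MSE, RMSE and R2 on a tiny example.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
err = y_pred - y_true
mae = np.abs(err).mean()                       # mean absolute error
mse = (err ** 2).mean()                        # mean squared error
rmse = np.sqrt(mse)
ss_res = (err ** 2).sum()
ss_tot = ((y_true - y_true.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(mae, round(mse, 4), round(rmse, 4), r2)  # 0.5 0.4167 0.6455 0.375
```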
```
import numpy as np
import matplotlib.pyplot as plt
from sympy import Symbol, integrate
%matplotlib notebook
```
### Smooth local paths
We will use cubic spirals to generate smooth local paths. Without loss of generality, let the normalized path parameter $x$ run from 0 to 1 and impose a condition on the curvature as follows
$\kappa = f'(x) = K(x(1-x))^n $
This ensures the curvature vanishes at the beginning and end of the path. Integrating, the yaw changes as
$\theta(x) = \int_0^x f'(u)\,du$
With $n = 1$ we get a cubic spiral, $n=2$ we get a quintic spiral and so on. Let us use the sympy package to find the family of spirals
1. Declare $x$ a Symbol
2. You want to find Integral of $f'(x)$
3. You can choose $K$ so that all coefficients are integers
Verify if $\theta(0) = 0$ and $\theta(1) = 1$
```
K = 30  # choose K so all coefficients are integers
n = 2   # n = 1 for cubic, n = 2 for quintic
x = Symbol('x')  # declare x as a Symbol
print(integrate(K*(x*(1-x))**n, x))
#write function to compute a cubic spiral
#input/ output can be any theta
def cubic_spiral(theta_i, theta_f, n=10):
x = np.linspace(0, 1, num=n)
theta = (-2*x**3 + 3*x**2) * (theta_f-theta_i) + theta_i
return theta
def quintic_spiral(theta_i, theta_f, n=10):
x = np.linspace(0, 1, num=n)
theta = (6*x**5 - 15*x**4 + 10*x**3)* (theta_f-theta_i) + theta_i
return theta
def circular_spiral(theta_i, theta_f, n=10):
x = np.linspace(0, 1, num=n)
theta = x* (theta_f-theta_i) + theta_i
return theta
```
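The endpoint conditions can be verified directly from the closed forms above; both polynomials satisfy $\theta(0)=0$ and $\theta(1)=1$ (for the cubic, $K=6$ makes the coefficients integers, just as $K=30$ does for the quintic):

```python
# Endpoint check for the closed-form spirals.
def cubic_theta(s):    # K = 6, n = 1: integral of 6*s*(1-s)
    return 3*s**2 - 2*s**3

def quintic_theta(s):  # K = 30, n = 2: integral of 30*(s*(1-s))**2
    return 10*s**3 - 15*s**4 + 6*s**5

print(cubic_theta(0), cubic_theta(1))      # 0 1
print(quintic_theta(0), quintic_theta(1))  # 0 1
```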
### Plotting
Plot the cubic and quintic spirals along with how $\theta$ changes when moving in a circular arc. Remember, a circular arc is when $\omega$ is constant
```
theta_i = 1.57
theta_f = 0
n = 10
x = np.linspace(0, 1, num=n)
plt.figure()
plt.plot(x,circular_spiral(theta_i, theta_f, n),label='Circular')
plt.plot(x,cubic_spiral(theta_i, theta_f, n), label='Cubic')
plt.plot(x,quintic_spiral(theta_i, theta_f, n), label='Quintic')
plt.grid()
plt.legend()
```
## Trajectory
Using the spirals, convert them to trajectories $\{(x_i,y_i,\theta_i)\}$. Remember the unicycle model
$dx = v\cos \theta dt$
$dy = v\sin \theta dt$
$\theta$ is given by the spiral functions you just wrote. Use cumsum() in numpy to calculate $\{(x_i, y_i)\}$
What happens when you change $v$?
```
v = 1
dt = 0.1
theta_i = 1.57
theta_f = 0
n = 100
theta_cubic = cubic_spiral(theta_i, theta_f, n)
theta_quintic = quintic_spiral(theta_i, theta_f, int(n+(23/1000)*n))
theta_circular = circular_spiral(theta_i, theta_f, int(n-(48/1000)*n))
# print(theta)
def trajectory(v,dt,theta):
dx = v*np.cos(theta) *dt
dy = v*np.sin(theta) *dt
# print(dx)
x = np.cumsum(dx)
y = np.cumsum(dy)
return x,y
# plot trajectories for circular/ cubic/ quintic
plt.figure()
plt.plot(*trajectory(v,dt,theta_circular), label='Circular')
plt.plot(*trajectory(v,dt,theta_cubic), label='Cubic')
plt.plot(*trajectory(v,dt,theta_quintic), label='Quintic')
plt.grid()
plt.legend()
```
## Symmetric poses
We have been doing only examples with $|\theta_i - \theta_f| = \pi/2$.
What about other orientation changes? Given below is an array of terminal angles (they are in degrees!). Start from 0 deg and plot the family of trajectories
```
dt = 0.1
thetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians
plt.figure()
for tf in thetas:
t = cubic_spiral(0, np.deg2rad(tf),50)
x = np.cumsum(np.cos(t)*dt)
y = np.cumsum(np.sin(t)*dt)
plt.plot(x, y, label=f'0 to {tf} degree')
plt.grid()
plt.legend()
# On the same plot, move from 180 to 180 - theta
#thetas =
plt.figure()
for tf in thetas:
t = cubic_spiral(np.pi, np.pi-np.deg2rad(tf),50)
x = np.cumsum(np.cos(t)*dt)
y = np.cumsum(np.sin(t)*dt)
plt.plot(x, y, label=f'180 to {180-tf} degree')
plt.grid()
plt.legend()
```
Modify your code to print the following for the positive terminal angles $\{\theta_f\}$
1. Final x, y position in corresponding trajectory: $x_f, y_f$
2. $\frac{y_f}{x_f}$ and $\tan \frac{\theta_f}{2}$
What do you notice?
What happens when $v$ is doubled?
```
dt = 0.1
thetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians
# plt.figure()
for tf in thetas:
t = cubic_spiral(0, np.deg2rad(tf),50)
x = np.cumsum(np.cos(t)*dt)
y = np.cumsum(np.sin(t)*dt)
print(f'tf: {tf} x_f : {x[-1]} y_f: {y[-1]} y_f/x_f : {y[-1]/x[-1]} tan (theta_f/2) : {np.tan(np.deg2rad(tf)/2)}')
```
These are called *symmetric poses*. With this spiral-fitting approach, only symmetric poses can be reached.
In order to move between any 2 arbitrary poses, you will have to find an intermediate pose that is pair-wise symmetric to the start and the end pose.
What should be the intermediate pose? There are infinite possibilities. We would have to formulate it as an optimization problem. As they say, that has to be left for another time!
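The symmetry property can also be checked numerically. Because the cubic spiral satisfies $\theta(1-s) = \theta_f - \theta(s)$ (for $\theta_i = 0$), the chord to the endpoint makes angle $\theta_f/2$ with the x-axis, so $y_f/x_f = \tan(\theta_f/2)$ holds even for the discrete cumsum trajectory (the sample grid is symmetric about $s=1/2$):

```python
import numpy as np

def cubic_spiral(theta_i, theta_f, n=50):
    s = np.linspace(0, 1, num=n)
    return (-2*s**3 + 3*s**2) * (theta_f - theta_i) + theta_i

dt = 0.1
for tf_deg in (30, 60, 90, 120):
    t = cubic_spiral(0, np.deg2rad(tf_deg))
    xf = np.cumsum(np.cos(t) * dt)[-1]
    yf = np.cumsum(np.sin(t) * dt)[-1]
    # the chord-angle identity holds for every symmetric pose
    assert np.isclose(yf / xf, np.tan(np.deg2rad(tf_deg) / 2))
print("all symmetric-pose checks passed")
```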
```
```
## Add cancer analysis
Analysis of results from `run_add_cancer_classification.py`.
We hypothesized that adding cancers in a principled way (e.g. by similarity to the target cancer) would lead to improved performance relative to both a single-cancer model (using only the target cancer type), and a pan-cancer model using all cancer types without regard for similarity to the target cancer.
Script parameters:
* RESULTS_DIR: directory to read experiment results from
* IDENTIFIER: {gene}\_{cancer_type} target identifier to plot results for
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pancancer_evaluation.config as cfg
import pancancer_evaluation.utilities.analysis_utilities as au
RESULTS_DIR = os.path.join(cfg.repo_root, 'add_cancer_results', 'add_cancer')
```
### Load data
```
add_cancer_df = au.load_add_cancer_results(RESULTS_DIR, load_cancer_types=True)
print(add_cancer_df.shape)
add_cancer_df.sort_values(by=['gene', 'holdout_cancer_type']).head()
# load data from previous single-cancer and pan-cancer experiments
# this is to put the add cancer results in the context of our previous results
pancancer_dir = os.path.join(cfg.results_dir, 'pancancer')
pancancer_dir2 = os.path.join(cfg.results_dir, 'vogelstein_s1_results', 'pancancer')
single_cancer_dir = os.path.join(cfg.results_dir, 'single_cancer')
single_cancer_dir2 = os.path.join(cfg.results_dir, 'vogelstein_s1_results', 'single_cancer')
single_cancer_df1 = au.load_prediction_results(single_cancer_dir, 'single_cancer')
single_cancer_df2 = au.load_prediction_results(single_cancer_dir2, 'single_cancer')
single_cancer_df = pd.concat((single_cancer_df1, single_cancer_df2))
print(single_cancer_df.shape)
single_cancer_df.head()
pancancer_df1 = au.load_prediction_results(pancancer_dir, 'pancancer')
pancancer_df2 = au.load_prediction_results(pancancer_dir2, 'pancancer')
pancancer_df = pd.concat((pancancer_df1, pancancer_df2))
print(pancancer_df.shape)
pancancer_df.head()
single_cancer_comparison_df = au.compare_results(single_cancer_df,
identifier='identifier',
metric='aupr',
correction=True,
correction_alpha=0.001,
verbose=False)
pancancer_comparison_df = au.compare_results(pancancer_df,
identifier='identifier',
metric='aupr',
correction=True,
correction_alpha=0.001,
verbose=False)
experiment_comparison_df = au.compare_results(single_cancer_df,
pancancer_df=pancancer_df,
identifier='identifier',
metric='aupr',
correction=True,
correction_alpha=0.05,
verbose=False)
experiment_comparison_df.sort_values(by='corr_pval').head(n=10)
```
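The `correction=True` flag above presumably applies a multiple-testing correction to the per-identifier p-values before `reject_null` is computed. As an illustration (an assumption on my part; the actual method lives inside `analysis_utilities`), here is a dependency-free sketch of the Benjamini-Hochberg procedure:

```python
import numpy as np

def bh_adjust(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: adjusted p-values and reject flags."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # adjusted p-value i is the minimum of the ranked values from i onward
    adj_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(m)
    adj[order] = np.minimum(adj_sorted, 1.0)
    return adj, adj < alpha

adj, reject = bh_adjust([0.001, 0.01, 0.02, 0.8])
print(np.round(adj, 4), reject.tolist())
```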
### Plot change in performance as cancers are added
```
IDENTIFIER = 'BRAF_COAD'
# IDENTIFIER = 'EGFR_ESCA'
# IDENTIFIER = 'EGFR_LGG'
# IDENTIFIER = 'KRAS_CESC'
# IDENTIFIER = 'PIK3CA_ESCA'
# IDENTIFIER = 'PIK3CA_STAD'
# IDENTIFIER = 'PTEN_COAD'
# IDENTIFIER = 'PTEN_BLCA'
# IDENTIFIER = 'TP53_OV'
# IDENTIFIER = 'NF1_GBM'
GENE = IDENTIFIER.split('_')[0]
gene_df = add_cancer_df[(add_cancer_df.gene == GENE) &
(add_cancer_df.data_type == 'test') &
(add_cancer_df.signal == 'signal')].copy()
# make seaborn treat x axis as categorical
gene_df.num_train_cancer_types = gene_df.num_train_cancer_types.astype(str)
gene_df.loc[(gene_df.num_train_cancer_types == '-1'), 'num_train_cancer_types'] = 'all'
sns.set({'figure.figsize': (14, 6)})
sns.pointplot(data=gene_df, x='num_train_cancer_types', y='aupr', hue='identifier',
order=['0', '1', '2', '4', 'all'])
plt.legend(bbox_to_anchor=(1.15, 0.5), loc='center right', borderaxespad=0., title='Cancer type')
plt.title('Adding cancer types by confusion matrix similarity, {} mutation prediction'.format(GENE), size=13)
plt.xlabel('Number of added cancer types', size=13)
plt.ylabel('AUPR', size=13)
id_df = add_cancer_df[(add_cancer_df.identifier == IDENTIFIER) &
(add_cancer_df.data_type == 'test') &
(add_cancer_df.signal == 'signal')].copy()
# make seaborn treat x axis as categorical
id_df.num_train_cancer_types = id_df.num_train_cancer_types.astype(str)
id_df.loc[(id_df.num_train_cancer_types == '-1'), 'num_train_cancer_types'] = 'all'
sns.set({'figure.figsize': (14, 6)})
cat_order = ['0', '1', '2', '4', 'all']
sns.pointplot(data=id_df, x='num_train_cancer_types', y='aupr', hue='identifier',
order=cat_order)
plt.legend([],[], frameon=False)
plt.title('Adding cancer types by confusion matrix similarity, {} mutation prediction'.format(IDENTIFIER),
size=13)
plt.xlabel('Number of added cancer types', size=13)
plt.ylabel('AUPR', size=13)
# annotate points with cancer types
def label_points(x, y, cancer_types, gene, ax):
a = pd.DataFrame({'x': x, 'y': y, 'cancer_types': cancer_types})
for i, point in a.iterrows():
if gene in ['TP53', 'PIK3CA'] and point['x'] == 4:
ax.text(point['x']+0.05,
point['y']+0.005,
str(point['cancer_types'].replace(' ', '\n')),
bbox=dict(facecolor='none', edgecolor='black', boxstyle='round'),
ha='left', va='center')
else:
ax.text(point['x']+0.05,
point['y']+0.005,
str(point['cancer_types'].replace(' ', '\n')),
bbox=dict(facecolor='none', edgecolor='black', boxstyle='round'))
cat_to_loc = {c: i for i, c in enumerate(cat_order)}
group_id_df = (
id_df.groupby(['num_train_cancer_types', 'train_cancer_types'])
.mean()
.reset_index()
)
label_points([cat_to_loc[c] for c in group_id_df.num_train_cancer_types],
group_id_df.aupr,
group_id_df.train_cancer_types,
GENE,
plt.gca())
```
### Plot gene/cancer type "best model" performance vs. single/pan-cancer models
```
id_df = add_cancer_df[(add_cancer_df.identifier == IDENTIFIER) &
(add_cancer_df.data_type == 'test')].copy()
best_num = (
id_df[id_df.signal == 'signal']
.groupby('num_train_cancer_types')
.mean()
.reset_index()
.sort_values(by='aupr', ascending=False)
.iloc[0, 0]
)
print(best_num)
best_id_df = (
id_df.loc[id_df.num_train_cancer_types == best_num, :]
.drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types'])
)
best_id_df['train_set'] = 'best_add'
sc_id_df = (
id_df.loc[id_df.num_train_cancer_types == 1, :]
.drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types'])
)
sc_id_df['train_set'] = 'single_cancer'
pc_id_df = (
id_df.loc[id_df.num_train_cancer_types == -1, :]
.drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types'])
)
pc_id_df['train_set'] = 'pancancer'
all_id_df = pd.concat((sc_id_df, best_id_df, pc_id_df), sort=False)
all_id_df.head()
sns.set()
sns.boxplot(data=all_id_df, x='train_set', y='aupr', hue='signal', hue_order=['signal', 'shuffled'])
plt.title('{}, single/best/pancancer predictors'.format(IDENTIFIER))
plt.xlabel('Training data')
plt.ylabel('AUPR')
plt.legend(title='Signal')
print('Single cancer significance: {}'.format(
single_cancer_comparison_df.loc[single_cancer_comparison_df.identifier == IDENTIFIER, 'reject_null'].values[0]
))
print('Pan-cancer significance: {}'.format(
pancancer_comparison_df.loc[pancancer_comparison_df.identifier == IDENTIFIER, 'reject_null'].values[0]
))
# Q2: where is this example in the single vs. pan-cancer volcano plot?
# see pancancer only experiments for an example of this sort of thing
experiment_comparison_df['nlog10_p'] = -np.log10(experiment_comparison_df.corr_pval)
sns.set({'figure.figsize': (8, 6)})
sns.scatterplot(data=experiment_comparison_df, x='delta_mean', y='nlog10_p',
hue='reject_null', alpha=0.3)
plt.xlabel('AUPRC(pancancer) - AUPRC(single cancer)')
plt.ylabel(r'$-\log_{10}($adjusted p-value$)$')
plt.title('Highlight {} in pancancer vs. single-cancer comparison'.format(IDENTIFIER))
def highlight_id(x, y, val, ax, id_to_plot):
a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)
for i, point in a.iterrows():
if point['val'] == id_to_plot:
ax.scatter(point['x'], point['y'], color='red', marker='+', s=100)
highlight_id(experiment_comparison_df.delta_mean,
experiment_comparison_df.nlog10_p,
experiment_comparison_df.identifier,
plt.gca(),
IDENTIFIER)
```
Overall, these results weren't quite as convincing as we were expecting. Although there are a few gene/cancer type combinations where there is a clear improvement when one or two relevant cancer types are added, overall there isn't much change in many cases (see first line plots of multiple cancer types).
Biologically speaking, this isn't too surprising for a few reasons:
* Some genes aren’t drivers in certain cancer types
* Some genes have very cancer-specific effects
* Some genes (e.g. TP53) have very well-preserved effects across all cancers
We think there could be room for improvement as far as cancer type selection (some of the cancers chosen don't make a ton of sense), but overall we're a bit skeptical that this approach will lead to models that generalize better than a single-cancer model in most cases.
```
import environmentv6 as e
import mdptoolbox
import matplotlib.pyplot as plt
import numpy as np
import progressbar as pb
import scipy.sparse as ss
import seaborn as sns
import warnings
warnings.filterwarnings('ignore', category=ss.SparseEfficiencyWarning)
# params
alpha = 0.4
gamma = 0.5
T = 8
epsilon = 10e-5
# game
action_count = 4
adopt = 0; override = 1; mine = 2; match = 3
# fork params
fork_count = 3
irrelevant = 0; relevant = 1; active = 2;
state_count = (T+1) * (T+1) * 3
# mapping utils
state_mapping = {}
states = []
count = 0
for a in range(T+1):
for h in range(T+1):
for fork in range(fork_count):
state_mapping[(a, h, fork)] = count
states.append((a, h, fork))
count += 1
# initialize matrices
transitions = []; rewards = []
for _ in range(action_count):
transitions.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count))))
rewards.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count))))
mining_cost = 0.8
# populate matrices
for state_index in range(state_count):
a, h, fork = states[state_index]
# adopt
transitions[adopt][state_index, state_mapping[0, 0, irrelevant]] = 1
# override
if a > h:
transitions[override][state_index, state_mapping[a-h-1, 0, irrelevant]] = 1
rewards[override][state_index, state_mapping[a-h-1, 0, irrelevant]] = h + 1
else:
transitions[override][state_index, 0] = 1
rewards[override][state_index, 0] = -10000
# mine
if (fork != active) and (a < T) and (h < T):
transitions[mine][state_index, state_mapping[a+1, h, irrelevant]] = alpha
transitions[mine][state_index, state_mapping[a, h+1, relevant]] = (1 - alpha)
rewards[mine][state_index, state_mapping[a+1, h, irrelevant]] = -1 * alpha * mining_cost
rewards[mine][state_index, state_mapping[a, h+1, relevant]] = -1 * alpha * mining_cost
elif (fork == active) and (a > h) and (h > 0) and (a < T) and (h < T):
transitions[mine][state_index, state_mapping[a+1, h, active]] = alpha
transitions[mine][state_index, state_mapping[a-h, 1, relevant]] = (1 - alpha) * gamma
transitions[mine][state_index, state_mapping[a, h+1, relevant]] = (1 - alpha) * (1 - gamma)
rewards[mine][state_index, state_mapping[a+1, h, active]] = -1 * alpha * mining_cost
rewards[mine][state_index, state_mapping[a-h, 1, relevant]] = h - alpha * mining_cost
rewards[mine][state_index, state_mapping[a, h+1, relevant]] = -1 * alpha * mining_cost
else:
transitions[mine][state_index, 0] = 1
rewards[mine][state_index, 0] = -10000
# match
if (fork == relevant) and (a >= h) and (h > 0) and (a < T) and (h < T):
transitions[match][state_index, state_mapping[a+1, h, active]] = alpha
transitions[match][state_index, state_mapping[a-h, 1, relevant]] = (1 - alpha) * gamma
transitions[match][state_index, state_mapping[a, h+1, relevant]] = (1 - alpha) * (1 - gamma)
rewards[match][state_index, state_mapping[a+1, h, active]] = -1 * alpha * mining_cost
rewards[match][state_index, state_mapping[a-h, 1, relevant]] = h - alpha * mining_cost
rewards[match][state_index, state_mapping[a, h+1, relevant]] = -1 * alpha * mining_cost
else:
transitions[match][state_index, 0] = 1
rewards[match][state_index, 0] = -10000
rvi = mdptoolbox.mdp.RelativeValueIteration(transitions, rewards, epsilon/8)
rvi.run()
policy = rvi.policy
processPolicy(policy)
np.reshape(policy, (9,9,3))
def processPolicy(policy):
results = ''
for a in range(9):
for h in range(9):
for fork in range(3):
state_index = state_mapping[(a, h, fork)]
action = policy[state_index]
if action == 0:
results += 'a'
elif action == 1:
results += 'o'
elif action == 2:
results += 'w'
elif action == 3:
results += 'm'
else:
print('here')
results += ' & '
results += '\\\\ \n'
print(results)
sm1_policy = np.asarray([
[2, 0, 9, 9, 9, 9, 9, 9, 9],
[2, 0, 9, 9, 9, 9, 9, 9, 9],
[2, 1, 0, 9, 9, 9, 9, 9, 9],
[2, 2, 1, 0, 9, 9, 9, 9, 9],
[2, 2, 2, 1, 0, 9, 9, 9, 9],
[2, 2, 2, 2, 1, 0, 9, 9, 9],
[2, 2, 2, 2, 2, 1, 0, 9, 9],
[2, 2, 2, 2, 2, 2, 1, 0, 9],
[1, 1, 1, 1, 1, 1, 1, 1, 0]
])
honest_policy = np.asarray([
[2, 0, 9, 9, 9, 9, 9, 9, 9],
[1, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9],
[9, 9, 9, 9, 9, 9, 9, 9, 9]
])
opt_policy = np.reshape(policy, (9,9))
def get_opt_policy(alpha, T, mining_cost):
for state_index in range(state_count):
a, h = states[state_index]
# adopt transitions
transitions[adopt][state_index, state_mapping[0, 0]] = 1
# override
if a > h:
transitions[override][state_index, state_mapping[a-h-1, 0]] = 1
rewards[override][state_index, state_mapping[a-h-1, 0]] = h + 1
else:
transitions[override][state_index, 0] = 1
rewards[override][state_index, 0] = -10000
# mine transitions
if (a < T) and (h < T):
transitions[mine][state_index, state_mapping[a+1, h]] = alpha
transitions[mine][state_index, state_mapping[a, h+1]] = (1 - alpha)
rewards[mine][state_index, state_mapping[a+1, h]] = -1 * alpha * mining_cost
rewards[mine][state_index, state_mapping[a, h+1]] = -1 * alpha * mining_cost
else:
transitions[mine][state_index, 0] = 1
rewards[mine][state_index, 0] = -10000
rvi = mdptoolbox.mdp.RelativeValueIteration(transitions, rewards, epsilon/8)
rvi.run()
return np.reshape(rvi.policy, (T+1, T+1))
get_opt_policy(alpha=0.4, T=8, mining_cost=0.5)
# simulation
length = int(1e6)
alpha = 0.4
T = 8
mining_cost = 0.5
env = e.Environment(alpha, T, mining_cost)
# simulation
bar = pb.ProgressBar()
_ = env.reset()
current_reward = 0
for _ in bar(range(length)):
a, h = env.current_state
action = opt_policy[(a,h)]
_, reward = env.takeAction(action)
current_reward += reward
# opt
print(current_reward, current_reward / length)
# sm1
print(current_reward, current_reward / length)
# honest
print(current_reward, current_reward / length)
```
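The solver used above is `mdptoolbox`'s relative value iteration, which optimizes the long-run average reward. A dependency-free sketch of the same algorithm on a toy two-state, two-action MDP (the numbers are made up and unrelated to the mining model):

```python
import numpy as np

# P[a, s, s'] transition probabilities, R[a, s] expected rewards.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # action 0
              [[0.5, 0.5], [0.5, 0.5]]])  # action 1
R = np.array([[1.0, 0.0],
              [0.5, 0.5]])

h = np.zeros(2)                # relative (differential) values
g = 0.0                        # average-reward (gain) estimate
for _ in range(1000):
    Q = R + P @ h              # Q[a, s] backup
    v = Q.max(axis=0)
    g = v[0]                   # state 0 serves as the reference state
    h_new = v - g
    if np.max(np.abs(h_new - h)) < 1e-12:
        break
    h = h_new
policy = Q.argmax(axis=0)
print(policy, round(g, 4))     # [0 1] 0.9167  (optimal gain is 11/12)
```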
```
import json
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
pd.options.mode.chained_assignment = None
SRC_TRAIN = "../../../data/src/train.csv"
TRAIN = "../../../data/train.csv"
TEST = "../../../data/test.csv"
CONFIG = "../../../data/conf.json"
```
# Reading the Data
```
df = pd.read_csv(SRC_TRAIN)
df.head()
```
Drop the columns that "seem" useless for the chosen approach
```
df = df.drop("PassengerId", axis=1)
df = df.drop("Name", axis=1)
df = df.drop("Ticket", axis=1)
df = df.drop("Cabin", axis=1)
df.head()
```
Randomly split the source data into train/test datasets in a 0.7 / 0.3 ratio
```
train, test = train_test_split(df, test_size=0.3)
```
Save the resulting data for later use when training with NeoML
```
train.to_csv(TRAIN, sep="|")
test.to_csv(TEST, sep="|")
train.count()
```
The unequal value counts in some columns show that the data needs to be cleaned up
```
datasets = [train, test]
```
# Data Preparation
## Age
To fill the missing ages, we draw a random value from the interval (mean - sigma; mean + sigma)
```
age_mean = train["Age"].mean()
age_std = train["Age"].std()
print("age-mean='{0}' age-std='{1}'".format(age_mean, age_std))
for ds in datasets:
age_null_count = ds["Age"].isnull().sum()
rand_age = np.random.randint(age_mean - age_std, age_mean + age_std, size=age_null_count)
age_slice = ds["Age"].copy()
age_slice[ np.isnan(age_slice) ] = rand_age
ds["Age"] = age_slice
ds["Age"] = ds["Age"].astype(int)
```
## Embarked
Treat a missing value as S (Southampton)
```
for ds in datasets:
ds["Embarked"] = ds["Embarked"].fillna("S")
train.count()
```
## Encoding
The string-valued sex and embarkation-port columns are encoded with a LabelEncoder
```
sex_encoder = LabelEncoder()
sex_encoder.fit(train["Sex"])
train["Sex"] = sex_encoder.transform(train["Sex"])
test["Sex"] = sex_encoder.transform(test["Sex"])
embarked_encode = LabelEncoder()
embarked_encode.fit(train["Embarked"])
train["Embarked"] = embarked_encode.transform(train["Embarked"])
test["Embarked"] = embarked_encode.transform(test["Embarked"])
```
## X/Y
From the source data, prepare the samples for training the classifier
```
X_train = train.drop("Survived", axis=1)
Y_train = train["Survived"]
X_test = test.drop("Survived", axis=1)
Y_test = test["Survived"]
```
## Scale
Feature vectors need to be scaled so that some features don't overpower others with their magnitude
```
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
```
# Training
## Train
We choose an SVM-based model for the classifier
```
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, Y_train)
```
## Test
```
Y_train_predict = classifier.predict(X_train)
acc = accuracy_score(Y_train, Y_train_predict)
print("Train Acc = {:.2f}".format(acc * 100))
Y_test_predict = classifier.predict(X_test)
acc = accuracy_score(Y_test, Y_test_predict)
print("Test Acc = {:.2f}".format(acc * 100))
```
Remember these accuracy values to compare against the NeoML solution
# Saving the config
To reproduce the data-preparation functionality, save its parameters to a JSON config
```
# Number of fields + 1 (the index column)
fields_count = len(train.columns) + 1
config = {
'expected-fields': fields_count,
'min-age': int(age_mean - age_std),
'max-age': int(age_mean + age_std),
'sex-labels': list(sex_encoder.classes_),
'embarked-labels': list(embarked_encode.classes_),
'scaler-mean': list(scaler.mean_),
'scaler-std': list(scaler.scale_)
}
with open(CONFIG, 'w') as fp:
json.dump(config, fp)
```
# NeoML incoming
So, after this exploration we have:
- a data-preparation procedure plus a config describing its parameters;
- a model that solves the task.
Now the solution needs to be reproduced using NeoML.
This will make it possible to build an SDK that can be integrated into any application.
Well then, let's get started...
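As a rough sketch of how the saved config might drive the same preprocessing outside this notebook (`preprocess_row` is a hypothetical helper, not part of NeoML; the feature order assumes the column order produced by the drops above):

```python
# Hypothetical re-implementation of the preprocessing using the saved config.
# The config keys match those written above; e.g. config = json.load(open(path)).
def preprocess_row(row, config):
    """Turn one raw passenger record into a scaled feature vector.

    `row` holds the columns left after the drops above:
    Pclass, Sex, Age, SibSp, Parch, Fare, Embarked.
    """
    # LabelEncoder assigns codes in sorted-class order, which is exactly
    # the order stored in the *-labels lists of the config
    sex = config["sex-labels"].index(row["Sex"])
    embarked = config["embarked-labels"].index(row["Embarked"])
    features = [row["Pclass"], sex, row["Age"], row["SibSp"],
                row["Parch"], row["Fare"], embarked]
    # StandardScaler: (x - mean) / std, using the stored statistics
    return [(x - m) / s for x, m, s in
            zip(features, config["scaler-mean"], config["scaler-std"])]
```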
| github_jupyter |
Reference: https://qiita.com/sasayabaku/items/b7872a3b8acc7d6261bf
Try training the LSTM with twice the window length
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
import numpy as np
import matplotlib.pyplot as plt
def sin(x, T=100):
return np.sin(2.0 * np.pi * x / T)
# Add noise to the sine wave
def toy_problem(T=100, ampl=0.05):
x = np.arange(0, 2 * T + 1)
noise = ampl * np.random.uniform(low=-1.0, high=1.0, size=len(x))
return sin(x) + noise
f = toy_problem()
def make_dataset(low_data, maxlen=25):
data, target = [], []
for i in range(len(low_data)-maxlen):
data.append(low_data[i:i + maxlen])
target.append(low_data[i + maxlen])
re_data = np.array(data).reshape(len(data), maxlen, 1)
re_target = np.array(target).reshape(len(data), 1)
return re_data, re_target
# Extend the data to period T=300 (601 samples)
f = toy_problem(T=300)
# Split into windows of 50 samples each
g, h = make_dataset(f, maxlen=50)
print(g.shape)
g
print(h.shape)
h
#h.reshape(h.shape[0])
#[i for i in range(h.shape[0])]
plt.scatter([i for i in range(h.shape[0])], h.reshape(h.shape[0]))
plt.scatter([i for i in range(len(f))], f)
```
### Building the model
```
# Number of time steps in one training sample (50 here)
length_of_sequence = g.shape[1]
in_out_neurons = 1
n_hidden = 300
model = Sequential()
model.add(LSTM(n_hidden, batch_input_shape=(None, length_of_sequence, in_out_neurons), return_sequences=False))
model.add(Dense(in_out_neurons))
model.add(Activation("linear"))
optimizer = Adam(learning_rate=0.001)
model.compile(loss="mean_squared_error", optimizer=optimizer)
early_stopping = EarlyStopping(monitor='val_loss', mode='auto', patience=20)
hist = model.fit(g, h,
batch_size=300,
epochs=100,
validation_split=0.1,
callbacks=[early_stopping])
import pandas as pd
results = pd.DataFrame(hist.history)
results[['loss', 'val_loss']].plot()
```
### Prediction
```
predicted = model.predict(g)
plt.figure()
plt.plot(range(50, len(predicted)+50), predicted, color="r", label="predict_data")
plt.plot(range(0, len(f)), f, color="b", label="raw_data")
plt.legend()
plt.show()
```
### Predicting the future
```
future_test = g[-1].T
print(future_test.shape)
future_test
# Time length of one training sample -> 50
time_length = future_test.shape[1]
# Array that accumulates the future predictions
future_result = np.empty((0,))
# Predict the future step by step
for step2 in range(400):
test_data = np.reshape(future_test, (1, time_length, 1))
batch_predict = model.predict(test_data)
future_test = np.delete(future_test, 0)
future_test = np.append(future_test, batch_predict)
future_result = np.append(future_result, batch_predict)
#future_result
# Plot the sine wave together with in-sample and future predictions
plt.figure()
plt.plot(range(50, len(predicted)+50), predicted, color="r", label="predict_data")
plt.plot(range(0, len(f)), f, color="b", label="raw_data")
plt.plot(range(0+len(f), len(future_result)+len(f)), future_result, color="g", label="future_predict")
plt.legend()
plt.show()
```
| github_jupyter |
# Exploring Texas Execution Data
# Setup
```
# Python 2 & 3 Compatibility
from __future__ import print_function, division
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import patsy
import pyLDAvis
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
%matplotlib inline
```
# Import
```
df = pd.read_csv("data/Death_Row_Data.csv", encoding = "latin1")
print(len(df))
df.sample(5)
```
# Look at year vs count
```
years = []
for i in range(len(df.Date)):
a = df.Date[i][-4:]
years.append(a)
df["year"] = years
df_count = df.groupby(['year']).count()
df_count.head()
df_count.reset_index(inplace=True)
df_counts = df_count[['year','Last Statement']]
df_counts.rename(columns={'Last Statement': 'count'}, inplace=True)
df_counts.to_csv('count_by_year.csv')
df_counts[['year']] = df_counts[['year']].astype(int)
lr1 = LinearRegression()
X = df_counts[['year']]
y = df_counts['count']
lr1.fit(X,y)
lr1.score(X,y)
degree = 3
est = make_pipeline(PolynomialFeatures(degree), LinearRegression())
est.fit(X, y)
fig,ax = plt.subplots(1,1);
ax.scatter(X, y,label='ground truth')
ax.plot(X, est.predict(X), color='red',label='degree=%d' % degree)
ax.set_ylabel('y')
ax.set_xlabel('x')
ax.legend(loc='upper right',frameon=True)
```
# Sentiment of last words
```
df.head()
df = df.rename(columns={"Last Statement": "last_statement"})
df.head()
from textblob import TextBlob
polarity = []
subjectivity = []
for i in range(len(df.last_statement)):
state = str(df.last_statement[i])
a = list(TextBlob(state).sentiment)
polarity.append(a[0])
subjectivity.append(a[1])
len(subjectivity)
df['polarity'] = polarity
df['subjectivity'] = subjectivity
fig,ax = plt.subplots(1,1);
ax.scatter(subjectivity,polarity)
ax.set_xlim(-1,1)
ax.set_ylim(-1,1)
```
# Race
```
# Normalize the Race column: strip stray spaces and fix the "Histpanic" typo
df["Race"] = df["Race"].str.strip().replace({"Histpanic": "Hispanic"})
races = df["Race"].unique()
print (races)
sns.catplot(x='Race', data=df, kind='count')
plt.title("Number of offenders by race")
plt.show()
```
| github_jupyter |
```
"""
Author: Shaimaa K. El-Baklish
This file is under MIT License.
Link: https://github.com/shaimaa-elbaklish/funcMinimization/blob/main/LICENSE.md
"""
import numpy as np
import plotly.graph_objects as go
```
## Benchmark Multimodal Functions Available
| Function | Dimension | Bounds | Optimal Function Value |
| -------- | --------- | ------ | ---------------------- |
| $$ f_{1} = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4 x_2^4 $$ | 2 | [-5, 5] | -1.0316 |
| $$ f_{2} = (x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 -6)^2 +10(1 - \frac{1}{8\pi})\cos{x_1} + 10 $$ | 2 | [-5, 5] | 0.398 |
| $$ f_{3} = -\sum_{i=1}^{4} c_i \exp(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2) $$ | 3 | [1, 3] | -3.86 |
| $$ f_{4} = -\sum_{i=1}^{4} c_i \exp(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2) $$ | 6 | [0, 1] | -3.32 |
| $$ f_{5} = -\sum_{i=1}^{7} [(X - a_i)(X - a_i)^T + c_i]^{-1} $$ | 4 | [0, 10] | -10.4028 |
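As a quick sanity check on the table (a standalone sketch, independent of the class below), $f_1$ — the six-hump camel-back function — can be evaluated at its two known global minimizers $(\pm 0.0898, \mp 0.7126)$ to confirm the tabulated optimum:

```python
def f1(x1, x2):
    # Six-hump camel-back function, as in the table row for f1
    return 4*x1**2 - 2.1*x1**4 + (x1**6)/3 + x1*x2 - 4*x2**2 + 4*x2**4

# Both known global minimizers evaluate to approximately -1.0316
for x1, x2 in [(0.0898, -0.7126), (-0.0898, 0.7126)]:
    print(round(f1(x1, x2), 4))
```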
```
class Function:
def __init__(self, x = None, n = 2, lb = np.array([-5, -5]), ub = np.array([5, 5])):
self.n_x = n
self.benchmark_selected = None
self.x = x
self.fvalue = None
if x is not None:
assert(x.shape[0] == self.n_x)
self.fvalue = self.getFValue(x)
assert(lb.shape[0] == self.n_x)
self.lb = lb
assert(ub.shape[0] == self.n_x)
self.ub = ub
self.benchmark_selected = None
def setBenchmarkFunction(self, f_name = "f1"):
benchmarks = {
"f1": [2, np.array([-5, -5]), np.array([5, 5])],
"f2": [2, np.array([-5, -5]), np.array([5, 5])],
"f3": [3, 1*np.ones(shape=(3,)), 3*np.ones(shape=(3,))],
"f4": [6, np.zeros(shape=(6,)), 1*np.ones(shape=(6,))],
"f5": [4, np.zeros(shape=(4,)), 10*np.ones(shape=(4,))]
}
self.benchmark_selected = f_name
[self.n_x, self.lb, self.ub] = benchmarks.get(f_name, benchmarks.get("f1"))
def isFeasible(self, x):
return np.all(x >= self.lb) and np.all(x <= self.ub)
def getFValue(self, x):
if self.benchmark_selected is None:
func_value = 4*x[0]**2 - 2.1*x[0]**4 + (x[0]**6)/3 + x[0]*x[1] - 4*x[1]**2 + 4*x[1]**4
return func_value
benchmarks_coeffs = {
"f3": {"a": np.array([[3, 10, 30], [0.1, 10, 35], [3, 10, 30], [0.1, 10, 35]]),
"c": np.array([1, 1.2, 3, 3.2]),
"p": np.array([[0.3689, 0.117, 0.2673], [0.4699, 0.4387, 0.747], [0.1091, 0.8732, 0.5547], [0.03815, 0.5743, 0.8828]])},
"f4": {"a": np.array([[10, 3, 17, 3.5, 1.7, 8], [0.05, 10, 17, 0.1, 8, 14], [3, 3.5, 1.7, 10, 17, 8], [17, 8, 0.05, 10, 0.1, 14]]),
"c": np.array([1, 1.2, 3, 3.2]),
"p": np.array([[0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886], [0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991], [0.2348, 0.1415, 0.3522, 0.2883, 0.3047, 0.6650], [0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381]])},
"f5": {"a": np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6], [3, 7, 3, 7], [2, 9, 2, 9], [5, 5, 3, 3], [8, 1, 8, 1], [6, 2, 6, 2], [7, 3.6, 7, 3.6]]),
"c": np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])}
}
benchmarks = {
"f1": lambda z: 4*z[0]**2 - 2.1*z[0]**4 + (z[0]**6)/3 + z[0]*z[1] - 4*z[1]**2 + 4*z[1]**4,
"f2": lambda z: (z[1] - (5.1/(4*np.pi**2))*z[0]**2 + (5/np.pi)*z[0] -6)**2 + 10*(1 - (1/(8*np.pi)))*np.cos(z[0]) + 10,
"f3": lambda z: -np.sum(benchmarks_coeffs["f3"]["c"] * np.exp(-np.sum(list(map(lambda ai, pi: ai*(z - pi)**2, benchmarks_coeffs["f3"]["a"], benchmarks_coeffs["f3"]["p"])), axis=1))),
"f4": lambda z: -np.sum(benchmarks_coeffs["f4"]["c"] * np.exp(-np.sum(list(map(lambda ai, pi: ai*(z - pi)**2, benchmarks_coeffs["f4"]["a"], benchmarks_coeffs["f4"]["p"])), axis=1))),
"f5": lambda z: -np.sum(list(map(lambda ai, ci: 1/((z - ai) @ (z - ai).T + ci), benchmarks_coeffs["f5"]["a"], benchmarks_coeffs["f5"]["c"])))
}
func_value = benchmarks.get(self.benchmark_selected)(x)
return func_value
def initRandomSoln(self):
self.x = np.random.rand(self.n_x) * (self.ub - self.lb) + self.lb
assert(self.isFeasible(self.x))
self.fvalue = self.getFValue(self.x)
def getNeighbourSoln(self):
r = np.random.rand(self.n_x)
x_new = self.x + r * (self.ub - self.x) + (1 - r) * (self.lb - self.x)
assert(self.isFeasible(x_new))
return x_new
class GeneticAlgorithm:
def __init__(self, problem, n_pop = 50, max_iter = 100, p_elite = 0.1, p_crossover = 0.8, p_mutation = 0.1,
parents_selection = "Random", tournament_size = 5, mutation_selection = "Worst", survivors_selection = "Fitness"):
self.problem = problem
self.n_pop = n_pop
self.max_iter = max_iter
self.p_elite = p_elite
self.p_crossover = p_crossover
self.p_mutation = p_mutation
self.parents_selection = parents_selection
self.tournament_size = tournament_size if tournament_size < n_pop else n_pop
self.mutation_selection = mutation_selection
self.survivors_selection = survivors_selection
self.gen_sols = None
self.gen_fvalues = None
self.gen_ages = None
self.best_sols = None
self.best_fvalues = None
def initRandomPopulation(self):
self.gen_sols = []
self.gen_fvalues = []
self.gen_ages = []
self.best_sols = []
self.best_fvalues = []
for _ in range(self.n_pop):
self.problem.initRandomSoln()
new_sol = self.problem.x
new_fvalue = self.problem.fvalue
self.gen_sols.append(new_sol)
self.gen_fvalues.append(new_fvalue)
self.gen_ages.append(0)
if len(self.best_sols) == 0:
self.best_sols.append(new_sol)
self.best_fvalues.append(new_fvalue)
elif (new_fvalue < self.best_fvalues[0]):
self.best_sols[0], self.best_fvalues[0] = new_sol, new_fvalue
def selectParents(self, numParents, criteria):
gen_probs = 1 / (1 + np.square(self.gen_fvalues))
gen_probs = gen_probs / sum(gen_probs)
lambda_rank = 1.5 # (between 1 and 2) offspring created by best individual
gen_ranks = list(map(lambda i: np.argwhere(np.argsort(self.gen_fvalues) == i)[0,0], np.arange(self.n_pop)))
gen_ranks = ((2-lambda_rank) + np.divide(gen_ranks, self.n_pop-1)*(2*lambda_rank-2)) / self.n_pop
selection_criteria = {
"Random": lambda n: np.random.choice(self.n_pop, size=(n,), replace=False),
"RouletteWheel": lambda n: np.random.choice(self.n_pop, size=(n,), replace=True, p=gen_probs),
"SUS": lambda n: np.random.choice(self.n_pop, size=(n,), replace=False, p=gen_probs),
"Rank": lambda n: np.random.choice(self.n_pop, size=(n,), replace=False, p=gen_ranks),
"Tournament": lambda n: np.array([np.amin(list(map(lambda i: [self.gen_fvalues[i], i],
np.random.choice(self.n_pop, size=(self.tournament_size,), replace=False))),
axis=0)[1] for _ in range(n)], dtype=int),
"Worst": lambda n: np.argsort(self.gen_fvalues)[self.n_pop-n:]
}
parents_idx = selection_criteria.get(criteria, selection_criteria["Random"])(numParents)
return parents_idx
def crossover(self, p1_idx, p2_idx):
# Whole Arithmetic Combination
alpha = np.random.rand() * (0.9 - 0.7) + 0.7
child1 = alpha * self.gen_sols[p1_idx] + (1 - alpha) * self.gen_sols[p2_idx]
child2 = (1 - alpha) * self.gen_sols[p1_idx] + alpha * self.gen_sols[p2_idx]
return child1, child2
def mutation(self, p_idx):
# Random noise
r = np.random.rand(self.problem.n_x)
child = self.gen_sols[p_idx] + r * (self.problem.ub - self.gen_sols[p_idx]) + (1 - r) * (self.problem.lb - self.gen_sols[p_idx])
return child
def selectSurvivors(self, numSurvivors, criteria):
selection_criteria = {
"Age": lambda n: np.argsort(self.gen_ages)[:n],
"Fitness": lambda n: np.argsort(self.gen_fvalues)[:n]
}
survivors_idx = selection_criteria.get(criteria, selection_criteria["Fitness"])(numSurvivors)
return survivors_idx
def perform_algorithm(self):
self.initRandomPopulation()
print("Best Initial Solution ", self.best_fvalues[0])
n_crossovers = int(np.ceil(self.p_crossover * self.n_pop / 2))
n_mutations = int(self.p_mutation * self.n_pop)
n_elite = int(self.p_elite * self.n_pop)
n_survivors = self.n_pop - int(self.p_crossover*self.n_pop) - n_mutations - n_elite
for _ in range(self.max_iter):
# Crossover and Parents Selection
parents_idx = self.selectParents(numParents=n_crossovers*2, criteria=self.parents_selection)
new_gen_sols = []
new_gen_fvalues = []
new_gen_ages = []
for i in range(0, n_crossovers*2, 2):
[ch1, ch2] = self.crossover(parents_idx[i], parents_idx[i+1])
new_gen_sols.append(ch1)
new_gen_fvalues.append(self.problem.getFValue(ch1))
new_gen_ages.append(0)
if len(new_gen_sols) == int(self.p_crossover * self.n_pop):
break
new_gen_sols.append(ch2)
new_gen_fvalues.append(self.problem.getFValue(ch2))
new_gen_ages.append(0)
# Mutation and Parents Selection
parents_idx = self.selectParents(numParents=n_mutations, criteria=self.mutation_selection)
for i in range(n_mutations):
ch = self.mutation(parents_idx[i])
new_gen_sols.append(ch)
new_gen_fvalues.append(self.problem.getFValue(ch))
new_gen_ages.append(0)
# Elite Members
elite_idx = self.selectSurvivors(numSurvivors=n_elite, criteria="Fitness")
for i in range(n_elite):
new_gen_sols.append(self.gen_sols[elite_idx[i]])
new_gen_fvalues.append(self.gen_fvalues[elite_idx[i]])
new_gen_ages.append(self.gen_ages[elite_idx[i]]+1)
# Survivors (if any)
survivors_idx = self.selectSurvivors(numSurvivors=n_survivors, criteria=self.survivors_selection)
for i in range(n_survivors):
new_gen_sols.append(self.gen_sols[survivors_idx[i]])
new_gen_fvalues.append(self.gen_fvalues[survivors_idx[i]])
new_gen_ages.append(self.gen_ages[survivors_idx[i]]+1)
assert(len(new_gen_sols) == self.n_pop)
assert(len(new_gen_fvalues) == self.n_pop)
assert(len(new_gen_ages) == self.n_pop)
# New generation becomes current one
self.gen_sols = new_gen_sols
self.gen_fvalues = new_gen_fvalues
self.gen_ages = new_gen_ages
# update best solution reached so far
best_idx = np.argmin(self.gen_fvalues)
if self.gen_fvalues[best_idx] < self.best_fvalues[-1]:
self.best_sols.append(self.gen_sols[best_idx])
self.best_fvalues.append(self.gen_fvalues[best_idx])
else:
self.best_sols.append(self.best_sols[-1])
self.best_fvalues.append(self.best_fvalues[-1])
def visualize(self):
# convergence plot
fig1 = go.Figure(data=go.Scatter(x=np.arange(0, self.max_iter), y=self.best_fvalues, mode="lines"))
fig1.update_layout(
title="Convergence Plot",
xaxis_title="Iteration Number",
yaxis_title="Fitness Value of Best So Far"
)
fig1.show()
pass
problem = Function()
problem.setBenchmarkFunction(f_name="f2")
GA = GeneticAlgorithm(problem, n_pop = 50, max_iter=100, p_elite=0.1, p_crossover=0.7, p_mutation=0.1,
parents_selection="SUS", tournament_size = 20, mutation_selection = "Worst", survivors_selection = "Age")
GA.perform_algorithm()
print(GA.best_sols[-1])
print(GA.best_fvalues[-1])
GA.visualize()
```
| github_jupyter |
```
import os
import sys
module_path = "/gpfs/space/home/roosild/ut-mit-news-classify/NYT/"
if module_path not in sys.path:
sys.path.append(module_path)
import gc
import torch
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from utils import print_f
%load_ext autoreload
%autoreload 2
print_f('All imports seem good!')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print_f('Using device:', device)
from transformers import GPT2Model
from utils import GPTVectorizedDataset
MODEL = 'gpt2'
batch_size = 8
chunk_size = 200_000
if cutoff_end_chars:
ending = 'min500_cutoff_replace'
else:
ending = 'min500_complete'
os.makedirs('tokenized', exist_ok=True)
tokenized_train_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/tokenized/train_{size_str}_{ending}_chunk{NR}of{TOTAL_NR_OF_CHUNKS}.pt'
tokenized_test_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/tokenized/test_{size_str}_{ending}_chunk{NR}of{TOTAL_NR_OF_CHUNKS}.pt'
os.makedirs('vectorized', exist_ok=True)
vectorized_train_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/vectorized/train_150k_min500_complete.pt'
vectorized_test_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/vectorized/test_150k_min500_complete.pt'
print_f('Loading NYT dataset...')
train_dataset = torch.load(tokenized_train_path)
test_dataset = torch.load(tokenized_test_path)
# start actual vectorization with GPT2
runs = [(train_dataset, vectorized_train_path), (test_dataset, vectorized_test_path)]
print_f('Loading model...')
model = GPT2Model.from_pretrained(MODEL)
# resize model embedding to match new tokenizer
model.resize_token_embeddings(len(test_dataset.tokenizer))
# fix model padding token id
model.config.pad_token_id = model.config.eos_token_id
# Load model to defined device.
model.to(device)
for dataset, output_path in runs:
total_chunks = len(dataset) // chunk_size + 1
print_f('total chunks', total_chunks)
# skip already embedded articles
skip_n_articles = 0
chunk_paths = sorted([chunk_path for chunk_path in os.listdir('.') if f'{output_path}_chunk' in chunk_path])
print_f('chunks', chunk_paths)
if len(chunk_paths) > 0:
for i, chunk_path in enumerate(chunk_paths):
chunk = torch.load(chunk_path)
skip_n_articles += len(chunk)
print_f(f'Chunk at "{chunk_path}" has {len(chunk)} articles.')
del chunk
gc.collect()
print_f('skip:', skip_n_articles)
if skip_n_articles >= len(dataset):
print_f('Looks like the dataset is fully embedded already. Skipping this dataset...')
continue
print_f('dataset original', len(dataset))
dataset.input_ids = dataset.input_ids[skip_n_articles:]
dataset.attention_mask = dataset.attention_mask[skip_n_articles:]
dataset.labels = dataset.labels[skip_n_articles:]
print_f('dataset after skipping', len(dataset))
iterator = DataLoader(dataset, batch_size=batch_size)
print_f('Vectorizing dataset for ', output_path)
X_train = []
y_train = []
chunk_id = len(chunk_paths) + 1
print_f('Starting at chunk id', chunk_id)
for i, batch in enumerate(tqdm(iterator)):
inputs, attention_mask, labels = batch
real_batch_size = inputs.shape[0]
inputs = inputs.to(device)
attention_mask = attention_mask.to(device)
labels = torch.as_tensor(labels).to(device)
with torch.no_grad():
output = model(input_ids=inputs, attention_mask=attention_mask)
output = output[0]
# indices of last non-padded elements in each sequence
# adopted from https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L1290-L1302
last_non_padded_ids = torch.ne(inputs, test_dataset.tokenizer.pad_token_id).sum(-1) - 1
embeddings = output[range(real_batch_size), last_non_padded_ids, :]
X_train += embeddings.detach().cpu()
y_train += labels.detach().cpu()
if len(X_train) >= chunk_size:
print_f('Saving chunk:', output_path)
saved_dataset = GPTVectorizedDataset(torch.stack(X_train), torch.stack(y_train))
torch.save(saved_dataset, output_path, pickle_protocol=4)
X_train = []
y_train = []
chunk_id += 1
# take care of what's left after loop
if len(X_train) > 0:
print_f('Saving chunk:', output_path)
saved_dataset = GPTVectorizedDataset(torch.stack(X_train), torch.stack(y_train))
torch.save(saved_dataset, output_path, pickle_protocol=4)
print_f('All done!')
```
| github_jupyter |
# Face Recognition for the Happy House
Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf).
Face recognition problems commonly fall into two categories:
- **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
- **Face Recognition** - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
**In this assignment, you will:**
- Implement the triplet loss function
- Use a pretrained model to map face images into 128-dimensional encodings
- Use these encodings to perform face verification and face recognition
In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
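A channels-last batch can be brought to the channels-first layout this model expects with a single axis transpose (a minimal sketch on dummy data; only the axis order changes):

```python
import numpy as np

# Dummy batch of m=2 RGB 96x96 images in the familiar channels-last layout
batch_last = np.zeros((2, 96, 96, 3))           # (m, n_H, n_W, n_C)

# Move the channel axis forward: (m, n_C, n_H, n_W)
batch_first = np.transpose(batch_last, (0, 3, 1, 2))
print(batch_first.shape)                        # (2, 3, 96, 96)
```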
Let's load the required packages.
```
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
```
## 0 - Naive Face Verification
In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u></center></caption>
Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
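A minimal sketch of this naive baseline (the threshold of 100 is an arbitrary illustrative value, not one from the assignment):

```python
import numpy as np

def naive_verify(img1, img2, threshold=100.0):
    """Return True when the raw-pixel L2 distance is under the threshold."""
    dist = np.linalg.norm(img1.astype(float) - img2.astype(float))
    return dist < threshold

a = np.zeros((96, 96, 3))
print(naive_verify(a, a))        # identical images: distance 0
print(naive_verify(a, a + 1.0))  # uniform shift of 1: distance is about 166
```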
You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person.
## 1 - Encoding face images into a 128-dimensional vector
### 1.1 - Using a ConvNet to compute encodings
The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).
The key things you need to know are:
- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
- It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images.
```
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
```
** Expected Output **
<table>
<center>
Total Params: 3743280
</center>
</table>
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2**: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other
- The encodings of two images of different persons are very different
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;">
<br>
<caption><center> <u> <font color='purple'> **Figure 3**: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
### 1.2 - The Triplet Loss
For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
<!--
We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
!-->
Training will use triplets of images $(A, P, N)$:
- A is an "Anchor" image--a picture of a person.
- P is a "Positive" image--a picture of the same person as the Anchor image.
- N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:
$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
Here, we are using the notation "$[z]_+$" to denote $max(z,0)$.
Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it.
- $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.
Most implementations also normalize the encoding vectors to have norm equal one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that here.
**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$
Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.
For steps 1 and 2, you will need to sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$ while for step 4 you will need to sum over the training examples.
```
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**loss**
</td>
<td>
528.143
</td>
</tr>
</table>
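Since every piece of formula (3) is elementwise, the TensorFlow implementation above can be sanity-checked against a plain NumPy restatement. This is just a sketch for checking your intuition, not part of the graded assignment:

```python
import numpy as np

def triplet_loss_np(anchor, positive, negative, alpha=0.2):
    """Plain-NumPy restatement of the triplet loss in formula (3)."""
    pos_dist = np.sum(np.square(anchor - positive), axis=-1)  # ||f(A)-f(P)||_2^2 per example
    neg_dist = np.sum(np.square(anchor - negative), axis=-1)  # ||f(A)-f(N)||_2^2 per example
    basic_loss = pos_dist - neg_dist + alpha                  # step 3, per example
    return np.sum(np.maximum(basic_loss, 0.0))                # step 4: hinge, then sum

# Identical anchor/positive/negative triplets: both distances are zero,
# so the loss is just alpha summed over the m=3 examples (0.6 up to rounding).
A = np.ones((3, 128))
print(triplet_loss_np(A, A, A))
```

Because the hinge clips each term at zero, the loss can never be negative, which is a quick invariant to check on random encodings.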
## 2 - Loading the trained model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
```
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
```
Here're some examples of distances between the encodings of three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;">
<br>
<caption><center> <u> <font color='purple'> **Figure 4**:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Let's now use this model to perform face verification and face recognition!
## 3 - Applying the model
Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.
However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.
So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a **Face verification** system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face verification system then checks that they are who they claim to be.
### 3.1 - Face Verification
Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use `img_to_encoding(image_path, model)` which basically runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
```
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
```
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from image_path
2. Compute the distance between this encoding and the encoding of the identity image stored in the database
3. Open the door if the distance is less than 0.7, else do not open.
As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
```
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database.get(identity))
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
```
Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
```
verify("images/camera_0.jpg", "younes", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**It's younes, welcome home!**
</td>
<td>
(0.65939283, True)
</td>
</tr>
</table>
Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
```
verify("images/camera_2.jpg", "kian", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**It's not kian, please go away**
</td>
<td>
(0.86224014, False)
</td>
</tr>
</table>
### 3.2 - Face Recognition
Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!
To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.
**Exercise**: Implement `who_is_it()`. You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has smallest distance with the target encoding.
- Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`.
- Compute L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
```
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
```
Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_is_it() algorithm identifies Younes.
```
who_is_it("images/camera_0.jpg", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**it's younes, the distance is 0.659393**
</td>
<td>
(0.65939283, 'younes')
</td>
</tr>
</table>
You can change "`camera_0.jpg`" (picture of younes) to "`camera_1.jpg`" (picture of bertrand) and see the result.
Your Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore!
You've now seen how a state-of-the-art face recognition system works.
Although we won't implement it here, here're some ways to further improve the algorithm:
- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.
- Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
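The first improvement can be sketched with a small extension of `verify()`: store a *list* of encodings per person and accept if the query is close to any of them. This is a hypothetical extension, not part of the assignment — `verify_multi` is a made-up name, and random vectors stand in for real `img_to_encoding()` outputs:

```python
import numpy as np

# Hypothetical database: several encodings per person (different days/lighting).
rng = np.random.default_rng(1)
database = {"younes": [rng.normal(size=128) for _ in range(3)]}

def verify_multi(encoding, identity, database, threshold=0.7):
    """Accept when the smallest distance to any stored encoding beats the threshold."""
    dists = [np.linalg.norm(encoding - stored) for stored in database[identity]]
    return min(dists) < threshold

# A near-duplicate of a stored encoding should pass; a far-away vector should not.
query = database["younes"][0] + rng.normal(scale=0.001, size=128)
print(verify_multi(query, "younes", database))
```

Taking the minimum distance over several reference images makes the decision more robust to the pose and lighting of any single stored photo.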
<font color='blue'>
**What you should remember**:
- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
Congrats on finishing this assignment!
### References:
- Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
- Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf)
- The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
- Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
# Reshaping & Tidy Data
> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)
So, you've sat down to analyze a new dataset.
What do you do first?
In episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.
I'm with Hilary on this one, you should make sure your data is tidy.
Before you do any plots, filtering, transformations, summary statistics, regressions...
Without a tidy dataset, you'll be fighting your tools to get the result you need.
With a tidy dataset, it's relatively easy to do all of those.
Hadley Wickham kindly summarized tidiness as a dataset where
1. Each variable forms a column
2. Each observation forms a row
3. Each type of observational unit forms a table
And today we'll only concern ourselves with the first two.
As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer.
```
%matplotlib inline
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
if int(os.environ.get("MODERN_PANDAS_EPUB", 0)):
import prep # noqa
pd.options.display.max_rows = 10
sns.set(style='ticks', context='talk')
```
## NBA Data
[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.
The answer would have been difficult to compute with the raw data.
After transforming the dataset to be tidy, we're able to quickly get the answer.
We'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames.
```
fp = 'data/nba.csv'
if not os.path.exists(fp):
tables = pd.read_html("http://www.basketball-reference.com/leagues/NBA_2016_games.html")
games = tables[0]
games.to_csv(fp)
else:
games = pd.read_csv(fp, index_col=0)
games.head()
```
Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.
It provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.
I'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.
As you can see, we have a bit of general munging to do before tidying.
Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up.
```
column_names = {'Date': 'date', 'Start (ET)': 'start',
'Unamed: 2': 'box', 'Visitor/Neutral': 'away_team',
'PTS': 'away_points', 'Home/Neutral': 'home_team',
'PTS.1': 'home_points', 'Unamed: 7': 'n_ot'}
games = (games.rename(columns=column_names)
.dropna(thresh=4)
[['date', 'away_team', 'away_points', 'home_team', 'home_points']]
.assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y'))
.set_index('date', append=True)
.rename_axis(["game_id", "date"])
.sort_index())
games.head()
```
A quick aside on that last block.
- `dropna` has a `thresh` argument. If at least `thresh` items are missing, the row is dropped. We used it to remove the "Month headers" that slipped into the table.
- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.
- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.
- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels).
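The `thresh` and callable-`assign` behaviors are easy to check on a toy frame (the rows below are made up to mimic the scraped schedule — the middle row plays the role of a mostly-NaN "month header"):

```python
import numpy as np
import pandas as pd

# dropna(thresh=2) keeps only rows with at least 2 non-missing values,
# so the all-NaN "month header" row is removed.
raw = pd.DataFrame({"date": ["Tue, Oct 27, 2015", np.nan, "Wed, Oct 28, 2015"],
                    "team": ["CHI", np.nan, "GSW"]})
clean = (raw.dropna(thresh=2)
            # the callable receives the frame from the previous step of the chain
            .assign(date=lambda x: pd.to_datetime(x["date"], format="%a, %b %d, %Y")))
print(clean)
```

Note how the lambda lets us parse `date` on the already-filtered frame without naming an intermediate variable.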
The Question:
> **How many days of rest did each team get between each game?**
Whether or not your dataset is tidy depends on your question. Given our question, what is an observation?
In this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. We'll fix that with `pd.melt`.
`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations.
```
tidy = pd.melt(games.reset_index(),
id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'],
value_name='team')
tidy.head()
```
The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.
Now the translation from question ("How many days of rest between games") to operation ("date of today's game - date of previous game - 1") is direct:
```
# For each team... get number of days between games
tidy.groupby('team')['date'].diff().dt.days - 1
```
That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.
It's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. But hopefully less so).
Let's assign that back into our DataFrame
```
tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1
tidy.dropna().head()
```
To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`.
```
by_game = (pd.pivot_table(tidy, values='rest',
index=['game_id', 'date'],
columns='variable')
.rename(columns={'away_team': 'away_rest',
'home_team': 'home_rest'}))
df = pd.concat([games, by_game], axis=1)
df.dropna().head()
```
One somewhat subtle point: an "observation" depends on the question being asked.
So really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.
One potentially interesting question is "what was each team's average days of rest, at home and on the road?" With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post):
```
sns.set(style='ticks', context='paper')
g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', size=2)
g.map(sns.barplot, 'variable', 'rest');
```
An example of a game-level statistic is the distribution of rest differences in games:
```
df['home_win'] = df['home_points'] > df['away_points']
df['rest_spread'] = df['home_rest'] - df['away_rest']
df.dropna().head()
delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int)
ax = (delta.value_counts()
.reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0)
.sort_index()
.plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6))
)
sns.despine()
ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games');
```
Or the win percent by rest difference
```
fig, ax = plt.subplots(figsize=(12, 6))
sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'),
color='#4c72b0', ax=ax)
sns.despine()
```
## Stack / Unstack
Pandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`).
```
rest = (tidy.groupby(['date', 'variable'])
.rest.mean()
.dropna())
rest.head()
```
`rest` is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide.
```
rest.unstack().head()
```
`unstack` moves a level of a MultiIndex (innermost by default) up to the columns.
`stack` is the inverse.
```
rest.unstack().stack()
```
With `.unstack` you can move between those APIs that expect their data in long format and those APIs that work with wide-format data. For example, `DataFrame.plot()` works with wide-form data, one line per column.
```
with sns.color_palette() as pal:
b, g = pal.as_hex()[:2]
ax=(rest.unstack()
.query('away_team < 7')
.rolling(7)
.mean()
.plot(figsize=(12, 6), linewidth=3, legend=False))
ax.set(ylabel='Rest (7 day MA)')
ax.annotate("Home", (rest.index[-1][0], 1.02), color=g, size=14)
ax.annotate("Away", (rest.index[-1][0], 0.82), color=b, size=14)
sns.despine()
```
The most convenient form will depend on exactly what you're doing.
When interacting with databases you'll often deal with long form data.
Pandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expect long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two.
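The round trip is worth internalizing; a minimal self-contained example (with a made-up two-date index in the spirit of `rest`):

```python
import pandas as pd

# Long form: one column of values, metadata ('date', 'variable') in a MultiIndex.
long_form = pd.Series(
    [0.0, 1.0, 2.0, 3.0],
    index=pd.MultiIndex.from_product(
        [["2015-10-27", "2015-10-28"], ["away_team", "home_team"]],
        names=["date", "variable"]))
wide = long_form.unstack()               # innermost level ('variable') becomes the columns
print(wide)
print((wide.stack() == long_form).all())  # stack() undoes it
```

`unstack()` here produces a 2×2 frame indexed by `date` with `away_team`/`home_team` columns, and `stack()` melts it straight back into the original Series.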
## Mini Project: Home Court Advantage?
We've gone to all that work tidying our dataset, let's put it to use.
What's the effect (in terms of probability to win) of being
the home team?
### Step 1: Create an outcome variable
We need to create an indicator for whether the home team won.
Add it as a column called `home_win` in `games`.
```
df['home_win'] = df.home_points > df.away_points
```
### Step 2: Find the win percent for each team
In the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.
I suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.
We'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.
It'd be better to use some kind of independent measure of team strength, but this will do for now.
We'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created.
```
wins = (
pd.melt(df.reset_index(),
id_vars=['game_id', 'date', 'home_win'],
value_name='team', var_name='is_home',
value_vars=['home_team', 'away_team'])
.assign(win=lambda x: x.home_win == (x.is_home == 'home_team'))
.groupby(['team', 'is_home'])
.win
.agg(n_wins='sum', n_games='count', win_pct='mean')  # named aggregation; dict-renaming agg was removed in pandas 1.0
)
wins.head()
```
Pause for visualization, because why not
```
g = sns.FacetGrid(wins.reset_index(), hue='team', size=7, aspect=.5, palette=['k'])
g.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1));
```
(It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general).
```
g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, size=2)
g.map(sns.pointplot, 'is_home', 'win_pct')
```
Those two graphs show that most teams have a higher win-percent at home than away. So we can continue to investigate.
Let's aggregate over home / away to get an overall win percent per team.
```
win_percent = (
# Use sum(games) / sum(games) instead of mean
# since I don't know if teams play the same
# number of games at home as away
wins.groupby(level='team', as_index=True)
.apply(lambda x: x.n_wins.sum() / x.n_games.sum())
)
win_percent.head()
win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k')
plt.tight_layout()
sns.despine()
plt.xlabel("Win Percent")
```
Is there a relationship between overall team strength and their home-court advantage?
```
plt.figure(figsize=(8, 5))
(wins.win_pct
.unstack()
.assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team,
'Overall %': lambda x: (x.home_team + x.away_team) / 2})
.pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %')
)
sns.despine()
plt.tight_layout()
```
Let's get the team strength back into `df`.
You could use `pd.merge`, but I prefer `.map` when joining a `Series`.
```
df = df.assign(away_strength=df['away_team'].map(win_percent),
home_strength=df['home_team'].map(win_percent),
point_diff=df['home_points'] - df['away_points'],
rest_diff=df['home_rest'] - df['away_rest'])
df.head()
import statsmodels.formula.api as sm
df['home_win'] = df.home_win.astype(int) # for statsmodels
mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df)
res = mod.fit()
res.summary()
```
The strength variables both have large coefficients (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.
With `.assign` we can quickly explore variations in formula.
```
(sm.Logit.from_formula('home_win ~ strength_diff + rest_spread',
df.assign(strength_diff=df.home_strength - df.away_strength))
.fit().summary())
mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df)
res = mod.fit()
res.summary()
```
Overall not seeing too much support for rest mattering, but we got to see some more tidy data.
That's it for today.
Next time we'll look at data visualization.
# The Tractable Buffer Stock Model
<p style="text-align: center;"><small><small><small>Generator: BufferStockTheory-make/notebooks_byname</small></small></small></p>
The [TractableBufferStock](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/) model is a (relatively) simple framework that captures all of the qualitative, and many of the quantitative features of optimal consumption in the presence of labor income uncertainty.
```
# This cell has a bit of (uninteresting) initial setup.
import matplotlib.pyplot as plt
import numpy as np
import HARK
from time import time  # time.clock was removed in Python 3.8
from copy import deepcopy
mystr = lambda number : "{:.3f}".format(number)
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from HARK.utilities import plotFuncs
# Import the model from the toolkit
from HARK.ConsumptionSaving.TractableBufferStockModel import TractableConsumerType
```
The key assumption behind the model's tractability is that there is only a single, stark form of uncertainty: So long as an employed consumer remains employed, that consumer's labor income $P$ will rise at a constant rate $\Gamma$:
\begin{align}
P_{t+1} &= \Gamma P_{t}
\end{align}
But, between any period and the next, there is constant hazard $p$ that the consumer will transition to the "unemployed" state. Unemployment is irreversible, like retirement or disability. When unemployed, the consumer receives a fixed amount of income (for simplicity, zero). (See the [linked handout](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/) for details of the model).
Defining $G$ as the growth rate of aggregate wages/productivity, we assume that idiosyncratic wages grow by $\Gamma = G/(1-\mho)$ where $(1-\mho)^{-1}$ is the growth rate of idiosyncratic productivity ('on-the-job learning', say). (This assumption about the relation between idiosyncratic income growth and idiosyncratic risk means that an increase in $\mho$ is a mean-preserving spread in human wealth; again see [the lecture notes](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/)).
Under CRRA utility $u(C) = \frac{C^{1-\rho}}{1-\rho}$, the problem can be normalized by $P$. Using lower case for normalized variables (e.g., $c = C/P$), the normalized problem can be expressed by the Bellman equation:
\begin{eqnarray*}
v_t({m}_t) &=& \max_{{c}_t} ~ U({c}_t) + \beta \Gamma^{1-\rho} \overbrace{\mathbb{E}[v_{t+1}^{\bullet}]}^{=p v_{t+1}^{u}+(1-p)v_{t+1}^{e}} \\
& s.t. & \\
{m}_{t+1} &=& (m_{t}-c_{t})\mathcal{R} + \mathbb{1}_{t+1},
\end{eqnarray*}
where $\mathcal{R} = R/\Gamma$, and $\mathbb{1}_{t+1} = 1$ if the consumer is employed (and zero if unemployed).
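The pull of the budget constraint toward a target can be seen by iterating it directly. This is only a sketch: the linear consumption rule below is made up purely for illustration and is *not* the model's optimal `cFunc`:

```python
# Iterate m_{t+1} = (m_t - c_t) * calR + 1 for an always-employed consumer
# under an AD-HOC linear rule c = 0.05*m + 0.9 (illustration only).
R, G, mho = 1.01, 1.0025, 0.00625
calR = R / (G / (1 - mho))    # mathcal{R} = R / Gamma, with Gamma = G/(1-mho)
m = 5.0
for t in range(200):
    c = 0.05 * m + 0.9        # made-up consumption rule
    m = (m - c) * calR + 1.0  # the +1 is normalized labor income while employed
print(m)
```

Because the rule spends more when $m$ is high and less when it is low, the iteration settles at a fixed point — the same mechanism that produces the target $\check{m}$ under the model's true consumption function.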
Under plausible parameter values the model has a target level of $\check{m} = M/P$ (market resources to permanent income) with an analytical solution that exhibits plausible relationships among all of the parameters.
Defining $\gamma = \log \Gamma$ and $r = \log R$, the handout shows that an approximation of the target is given by the formula:
\begin{align}
\check{m} & = 1 + \left(\frac{1}{(\gamma-r)+(1+(\gamma/\mho)(1-(\gamma/\mho)(\rho-1)/2))}\right)
\end{align}
```
# Define a parameter dictionary and representation of the agents for the tractable buffer stock model
TBS_dictionary = {'UnempPrb' : .00625, # Prob of becoming unemployed; working life of 1/UnempProb = 160 qtrs
'DiscFac' : 0.975, # Intertemporal discount factor
'Rfree' : 1.01, # Risk-free interest factor on assets
'PermGroFac' : 1.0025, # Permanent income growth factor (uncompensated)
'CRRA' : 2.5} # Coefficient of relative risk aversion
MyTBStype = TractableConsumerType(**TBS_dictionary)
```
## Target Wealth
Whether the model exhibits a "target" or "stable" level of the wealth-to-permanent-income ratio for employed consumers depends on whether the 'Growth Impatience Condition' (the GIC) holds:
\begin{align}\label{eq:GIC}
\left(\frac{(R \beta (1-\mho))^{1/\rho}}{\Gamma}\right) & < 1
\\ \left(\frac{(R \beta (1-\mho))^{1/\rho}}{G (1-\mho)}\right) &< 1
\\ \left(\frac{(R \beta)^{1/\rho}}{G} (1-\mho)^{1/\rho - 1}\right) &< 1
\end{align}
and recall (from [PerfForesightCRRA](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA/)) that the perfect foresight 'Growth Impatience Factor' is
\begin{align}\label{eq:PFGIC}
\left(\frac{(R \beta)^{1/\rho}}{G}\right) &< 1
\end{align}
so since $\mho > 0$, uncertainty makes it harder to be 'impatient.' To understand this, think of someone who, in the perfect foresight model, was 'poised': Exactly on the knife edge between patience and impatience. Now add a precautionary saving motive; that person will now (to some degree) be pushed off the knife edge in the direction of 'patience.' So, in the presence of uncertainty, the conditions on parameters other than $\mho$ must be stronger in order to guarantee 'impatience' in the sense of wanting to spend enough for your wealth to decline _despite_ the extra precautionary motive.
```
# Define a function that plots the employed consumption function and sustainable consumption function
# for given parameter values
def makeTBSplot(DiscFac,CRRA,Rfree,PermGroFac,UnempPrb,mMax,mMin,cMin,cMax,plot_emp,plot_ret,plot_mSS,show_targ):
MyTBStype.DiscFac = DiscFac
MyTBStype.CRRA = CRRA
MyTBStype.Rfree = Rfree
MyTBStype.PermGroFac = PermGroFac
MyTBStype.UnempPrb = UnempPrb
try:
MyTBStype.solve()
except Exception:
print('Unable to solve; parameter values may be too close to their limiting values')
plt.xlabel('Market resources ${m}_t$')
plt.ylabel('Consumption ${c}_t$')
plt.ylim([cMin,cMax])
plt.xlim([mMin,mMax])
m = np.linspace(mMin,mMax,num=100,endpoint=True)
if plot_emp:
c = MyTBStype.solution[0].cFunc(m)
c[m==0.] = 0.
plt.plot(m,c,'-b')
if plot_mSS:
plt.plot([mMin,mMax],[(MyTBStype.PermGroFacCmp/MyTBStype.Rfree + mMin*(1.0-MyTBStype.PermGroFacCmp/MyTBStype.Rfree)),(MyTBStype.PermGroFacCmp/MyTBStype.Rfree + mMax*(1.0-MyTBStype.PermGroFacCmp/MyTBStype.Rfree))],'--k')
if plot_ret:
c = MyTBStype.solution[0].cFunc_U(m)
plt.plot(m,c,'-g')
if show_targ:
mTarg = MyTBStype.mTarg
cTarg = MyTBStype.cTarg
targ_label = r'$\left(\frac{1}{(\gamma-r)+(1+(\gamma/\mho)(1-(\gamma/\mho)(\rho-1)/2))}\right) $' #+ mystr(mTarg) + '\n$\check{c}^* = $ ' + mystr(cTarg)
plt.annotate(targ_label,xy=(0.0,0.0),xytext=(0.2,0.1),textcoords='axes fraction',fontsize=18)
plt.plot(mTarg,cTarg,'ro')
plt.annotate('↙️ m target',(mTarg,cTarg),xytext=(0.25,0.2),ha='left',textcoords='offset points')
plt.show()
return None
# Define widgets to control various aspects of the plot
# Define a slider for the discount factor
DiscFac_widget = widgets.FloatSlider(
min=0.9,
max=0.99,
step=0.0002,
value=TBS_dictionary['DiscFac'], # Default value
continuous_update=False,
readout_format='.4f',
description='$\\beta$')
# Define a slider for relative risk aversion
CRRA_widget = widgets.FloatSlider(
min=1.0,
max=5.0,
step=0.01,
value=TBS_dictionary['CRRA'], # Default value
continuous_update=False,
readout_format='.2f',
description='$\\rho$')
# Define a slider for the interest factor
Rfree_widget = widgets.FloatSlider(
min=1.01,
max=1.04,
step=0.0001,
value=TBS_dictionary['Rfree'], # Default value
continuous_update=False,
readout_format='.4f',
description='$R$')
# Define a slider for permanent income growth
PermGroFac_widget = widgets.FloatSlider(
min=1.00,
max=1.015,
step=0.0002,
value=TBS_dictionary['PermGroFac'], # Default value
continuous_update=False,
readout_format='.4f',
description='$G$')
# Define a slider for unemployment (or retirement) probability
UnempPrb_widget = widgets.FloatSlider(
min=0.000001,
max=TBS_dictionary['UnempPrb']*2, # Go up to twice the default value
step=0.00001,
value=TBS_dictionary['UnempPrb'],
continuous_update=False,
readout_format='.5f',
description='$\\mho$')
# Define a text box for the lower bound of {m}_t
mMin_widget = widgets.FloatText(
value=0.0,
step=0.1,
description='$m$ min',
disabled=False)
# Define a text box for the upper bound of {m}_t
mMax_widget = widgets.FloatText(
value=50.0,
step=0.1,
description='$m$ max',
disabled=False)
# Define a text box for the lower bound of {c}_t
cMin_widget = widgets.FloatText(
value=0.0,
step=0.1,
description='$c$ min',
disabled=False)
# Define a text box for the upper bound of {c}_t
cMax_widget = widgets.FloatText(
value=1.5,
step=0.1,
description='$c$ max',
disabled=False)
# Define a check box for whether to plot the employed consumption function
plot_emp_widget = widgets.Checkbox(
value=True,
description='Plot employed $c$ function',
disabled=False)
# Define a check box for whether to plot the retired consumption function
plot_ret_widget = widgets.Checkbox(
value=False,
description='Plot retired $c$ function',
disabled=False)
# Define a check box for whether to plot the sustainable consumption line
plot_mSS_widget = widgets.Checkbox(
value=True,
description='Plot sustainable $c$ line',
disabled=False)
# Define a check box for whether to show the target annotation
show_targ_widget = widgets.Checkbox(
value=True,
description = 'Show target $(m,c)$',
disabled = False)
# Make an interactive plot of the tractable buffer stock solution
# To make some of the widgets not appear, replace X_widget with fixed(desired_fixed_value) in the arguments below.
interact(makeTBSplot,
DiscFac = DiscFac_widget,
CRRA = CRRA_widget,
Rfree = Rfree_widget,
PermGroFac = PermGroFac_widget,
UnempPrb = UnempPrb_widget,
mMin = mMin_widget,
mMax = mMax_widget,
cMin = cMin_widget,
cMax = cMax_widget,
show_targ = show_targ_widget,
plot_emp = plot_emp_widget,
plot_ret = plot_ret_widget,
plot_mSS = plot_mSS_widget,
);
```
# PROBLEM
Your task is to make a simplified slider that involves only $\beta$.
First, create a variable `betaMax` equal to the value of $\beta$ at which the Growth Impatience Factor is exactly equal to 1 (that is, the consumer is exactly on the border between patience and impatience). (Hint: The formula for this is [here](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/#GIFMax)).
Next, create a slider/'widget' like the one above, but where all variables except $\beta$ are set to their default values, and the slider takes $\beta$ from 0.05 below its default value up to `betaMax - 0.01`. (The numerical solution algorithm becomes unstable when the GIC is too close to being violated, so you don't want to go all the way up to `betaMax`.)
Explain the logic of the result that you see.
(Hint: You do not need to copy and paste (then edit) the entire contents of the cell that creates the widgets above; you only need to modify the `DiscFac_widget`)
```
# Define a slider for the discount factor
my_rho = TBS_dictionary['CRRA']
my_R = TBS_dictionary['Rfree']
my_upsidedownOmega = TBS_dictionary['UnempPrb']  # didn't have time to figure out the right value
my_Gamma = TBS_dictionary['PermGroFac']/(1-my_upsidedownOmega)
betaMax = (my_Gamma**my_rho)/(my_R*(1-my_upsidedownOmega))
DiscFac_widget = widgets.FloatSlider(
min=TBS_dictionary['DiscFac']-0.05,
max=betaMax-0.01,
step=0.0002,
value=TBS_dictionary['DiscFac'], # Default value
continuous_update=False,
readout_format='.4f',
description='$\\beta$')
interact(makeTBSplot,
DiscFac = DiscFac_widget,
CRRA = fixed(TBS_dictionary['CRRA']),
Rfree = fixed(TBS_dictionary['Rfree']),
PermGroFac = fixed(TBS_dictionary['PermGroFac']),
UnempPrb = fixed(TBS_dictionary['UnempPrb']),
mMin = mMin_widget,
mMax = mMax_widget,
cMin = cMin_widget,
cMax = cMax_widget,
show_targ = show_targ_widget,
plot_emp = plot_emp_widget,
plot_ret = plot_ret_widget,
plot_mSS = plot_mSS_widget,
);
```
# The target level of market resources increases with patience, as does consumption, because patience is rewarded by the returns on saving.
# + 
# **First Notebook: Virtual machine test and assignment submission**
#### This notebook will test that the virtual machine (VM) is functioning properly and will show you how to submit an assignment to the autograder. To move through the notebook just run each of the cells. You will not need to solve any problems to complete this lab. You can run a cell by pressing "shift-enter", which will compute the current cell and advance to the next cell, or by clicking in a cell and pressing "control-enter", which will compute the current cell and remain in that cell. At the end of the notebook you will export / download the notebook and submit it to the autograder.
#### ** This notebook covers: **
#### *Part 1:* Test Spark functionality
#### *Part 2:* Check class testing library
#### *Part 3:* Check plotting
#### *Part 4:* Check MathJax formulas
#### *Part 5:* Export / download and submit
### ** Part 1: Test Spark functionality **
#### ** (1a) Parallelize, filter, and reduce **
```
# Check that Spark is working
largeRange = sc.parallelize(xrange(100000))
reduceTest = largeRange.reduce(lambda a, b: a + b)
filterReduceTest = largeRange.filter(lambda x: x % 7 == 0).sum()
print reduceTest
print filterReduceTest
# If the Spark jobs don't work properly these will raise an AssertionError
assert reduceTest == 4999950000
assert filterReduceTest == 714264285
```
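The expected constants in the asserts above can be sanity-checked without a cluster. A plain-Python sketch of the same reduce and filter-then-sum (written for Python 3, unlike the Python 2 lab code; it checks the arithmetic, not Spark itself):

```python
from functools import reduce

large_range = range(100000)
reduce_test = reduce(lambda a, b: a + b, large_range)
filter_reduce_test = sum(x for x in large_range if x % 7 == 0)

print(reduce_test)         # → 4999950000
print(filter_reduce_test)  # → 714264285
```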
#### ** (1b) Loading a text file **
```
# Check loading data with sc.textFile
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')
fileName = os.path.join(baseDir, inputPath)
rawData = sc.textFile(fileName)
shakespeareCount = rawData.count()
print shakespeareCount
# If the text file didn't load properly an AssertionError will be raised
assert shakespeareCount == 122395
```
### ** Part 2: Check class testing library **
#### ** (2a) Compare with hash **
```
# TEST Compare with hash (2a)
# Check our testing library/package
# This should print '1 test passed.' on two lines
from test_helper import Test
twelve = 12
Test.assertEquals(twelve, 12, 'twelve should equal 12')
Test.assertEqualsHashed(twelve, '7b52009b64fd0a2a49e6d8a939753077792b0554',
'twelve, once hashed, should equal the hashed value of 12')
```
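`test_helper` is course-specific, but a hashed comparison of this kind can be sketched with the standard library, assuming the hash is the SHA-1 of the value's string form (which matches the digest quoted above):

```python
import hashlib

def assert_equals_hashed(value, expected_sha1, message):
    # Compare the SHA-1 digest of str(value) against the expected digest.
    digest = hashlib.sha1(str(value).encode()).hexdigest()
    assert digest == expected_sha1, message

assert_equals_hashed(12, '7b52009b64fd0a2a49e6d8a939753077792b0554',
                     'twelve, once hashed, should equal the hashed value of 12')
print('1 test passed.')
```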
#### ** (2b) Compare lists **
```
# TEST Compare lists (2b)
# This should print '1 test passed.'
unsortedList = [(5, 'b'), (5, 'a'), (4, 'c'), (3, 'a')]
Test.assertEquals(sorted(unsortedList), [(3, 'a'), (4, 'c'), (5, 'a'), (5, 'b')],
'unsortedList does not sort properly')
```
### ** Part 3: Check plotting **
#### ** (3a) Our first plot **
#### After executing the code cell below, you should see a plot with 50 blue circles. The circles should start at the bottom left and end at the top right.
```
# Check matplotlib plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from math import log
# function for generating plot layout
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999', gridWidth=1.0):
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
x = range(1, 50)
y = [log(x1 ** 2) for x1 in x]
fig, ax = preparePlot(range(5, 60, 10), range(0, 12, 1))
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
ax.set_xlabel(r'$range(1, 50)$'), ax.set_ylabel(r'$\log_e(x^2)$')
pass
```
### ** Part 4: Check MathJax Formulas **
#### ** (4a) Gradient descent formula **
#### You should see a formula on the line below this one: $$ \scriptsize \mathbf{w}_{i+1} = \mathbf{w}_i - \alpha_i \sum_j (\mathbf{w}_i^\top\mathbf{x}_j - y_j) \mathbf{x}_j \,.$$
#### This formula is included inline with the text and is $ \scriptsize (\mathbf{w}^\top \mathbf{x} - y) \mathbf{x} $.
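A minimal numeric instance of the update above for scalar least squares; the data and step size are made up for illustration:

```python
# w_{i+1} = w_i - alpha * sum_j (w_i * x_j - y_j) * x_j, in one dimension.
xs = [1.0, 2.0]
ys = [2.0, 4.0]  # generated by w* = 2, so the iteration should recover 2
w, alpha = 0.0, 0.1
for _ in range(60):
    grad = sum((w * x - y) * x for x, y in zip(xs, ys))
    w -= alpha * grad
print(round(w, 6))  # → 2.0
```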
#### ** (4b) Log loss formula **
#### This formula shows log loss for single point. Log loss is defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$
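The piecewise definition translates directly to code; a short sketch:

```python
from math import log

def log_loss(p, y):
    # -log(p) if y == 1, -log(1 - p) if y == 0, as in the formula above.
    return -log(p) if y == 1 else -log(1.0 - p)

print(round(log_loss(0.5, 1), 4))           # → 0.6931, i.e. log(2)
print(log_loss(0.9, 1) < log_loss(0.9, 0))  # → True: confident mistakes cost more
```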
### ** Part 5: Export / download and submit **
#### ** (5a) Time to submit **
#### You have completed the lab. To submit the lab for grading you will need to download it from your IPython Notebook environment. You can do this by clicking on "File", then hovering your mouse over "Download as", and then clicking on "Python (.py)". This will export your IPython Notebook as a .py file to your computer.
#### To upload this file to the course autograder, go to the edX website and find the page for submitting this assignment. Click "Choose file", then navigate to and click on the downloaded .py file. Now click the "Open" button and then the "Check" button. Your submission will be graded shortly and will be available on the page where you submitted. Note that when submission volumes are high, it may take as long as an hour to receive results.
# Collaboration and Competition
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Tennis.app"`
- **Windows** (x86): `"path/to/Tennis_Windows_x86/Tennis.exe"`
- **Windows** (x86_64): `"path/to/Tennis_Windows_x86_64/Tennis.exe"`
- **Linux** (x86): `"path/to/Tennis_Linux/Tennis.x86"`
- **Linux** (x86_64): `"path/to/Tennis_Linux/Tennis.x86_64"`
- **Linux** (x86, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86"`
- **Linux** (x86_64, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86_64"`
For instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Tennis.app")
```
```
env = UnityEnvironment(file_name="Tennis_Linux/Tennis.x86_64")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment.
Once this cell is executed, you will watch the agents' performance, if they select actions at random with each time step. A window should pop up that allows you to observe the agents.
Of course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment!
```
for i in range(1, 6): # play game for 5 episodes
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
env_info = env.step(actions)[brain_name] # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))
```
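The per-agent score bookkeeping in the loop above can be checked in isolation with synthetic rewards (no Unity environment required); the reward sequence below is invented for illustration:

```python
import numpy as np

num_agents = 2
scores = np.zeros(num_agents)
# Invented per-step rewards mirroring the +0.1 (hit) / -0.01 (drop) scheme:
episode_rewards = [[0.1, 0.0], [0.0, 0.1], [0.1, -0.01]]
for rewards in episode_rewards:
    scores += rewards  # update the score (for each agent)
print(scores)          # per-agent totals
print(np.max(scores))  # 'max over agents', as printed in the loop above
```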
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
```
import h5py
import keras
import numpy as np
import os
import random
import sys
import tensorflow as tf
sys.path.append("../src")
import localmodule
# Define constants.
dataset_name = localmodule.get_dataset_name()
models_dir = localmodule.get_models_dir()
units = localmodule.get_units()
n_input_hops = 104
n_filters = [24, 48, 48]
kernel_size = [5, 5]
pool_size = [2, 4]
n_hidden_units = 64
# Define and compile Keras model.
# NB: the original implementation of Justin Salamon in ICASSP 2017 relies on
# glorot_uniform initialization for all layers, and the optimizer is a
# stochastic gradient descent (SGD) with a fixed learning rate of 0.1.
# Instead, we use a he_uniform initialization for the layers followed
# by rectified linear units (see He ICCV 2015), and replace the SGD by
# the Adam adaptive stochastic optimizer (see Kingma ICLR 2014).
model = keras.models.Sequential()
# Layer 1
bn = keras.layers.normalization.BatchNormalization(
input_shape=(128, n_input_hops, 1))
model.add(bn)
conv1 = keras.layers.Convolution2D(n_filters[0], kernel_size,
padding="same", kernel_initializer="he_normal", activation="relu")
model.add(conv1)
pool1 = keras.layers.MaxPooling2D(pool_size=pool_size)
model.add(pool1)
# Layer 2
conv2 = keras.layers.Convolution2D(n_filters[1], kernel_size,
padding="same", kernel_initializer="he_normal", activation="relu")
model.add(conv2)
pool2 = keras.layers.MaxPooling2D(pool_size=pool_size)
model.add(pool2)
# Layer 3
conv3 = keras.layers.Convolution2D(n_filters[2], kernel_size,
padding="same", kernel_initializer="he_normal", activation="relu")
model.add(conv3)
# Layer 4
drop1 = keras.layers.Dropout(0.5)
model.add(drop1)
flatten = keras.layers.Flatten()
model.add(flatten)
dense1 = keras.layers.Dense(n_hidden_units,
kernel_initializer="he_normal", activation="relu",
kernel_regularizer=keras.regularizers.l2(0.01))
model.add(dense1)
# Layer 5
# We put a single output instead of 43 in the original paper, because this
# is binary classification instead of multilabel classification.
drop2 = keras.layers.Dropout(0.5)
model.add(drop2)
dense2 = keras.layers.Dense(1,
kernel_initializer="normal", activation="sigmoid",
kernel_regularizer=keras.regularizers.l2(0.0002))
model.add(dense2)
# Compile model, print model summary.
metrics = ["accuracy"]
#model.compile(loss="binary_crossentropy", optimizer="sgd", metrics=metrics)
#model.compile(loss="mse", optimizer="adam", metrics=metrics)
model.compile(loss="mse", optimizer="sgd", metrics=metrics)
#model.summary()
# Train model.
fold_units = ["unit01"]
augs = ["original"]
aug_dict = localmodule.get_augmentations()
data_dir = localmodule.get_data_dir()
dataset_name = localmodule.get_dataset_name()
logmelspec_name = "_".join([dataset_name, "logmelspec"])
logmelspec_dir = os.path.join(data_dir, logmelspec_name)
original_dir = os.path.join(logmelspec_dir, "original")
n_hops = 104
Xs = []
ys = []
for unit_str in units[:2]:
unit_name = "_".join([dataset_name, "original", unit_str])
unit_path = os.path.join(original_dir, unit_name + ".hdf5")
lms_container = h5py.File(unit_path, "r")
lms_group = lms_container["logmelspec"]
keys = list(lms_group.keys())
for key in keys:
X = lms_group[key]
X_width = X.shape[1]
first_col = int((X_width-n_hops) / 2)
last_col = int((X_width+n_hops) / 2)
X = X[:, first_col:last_col]
X = np.array(X)[np.newaxis, :, :, np.newaxis]
Xs.append(X)
ys.append(np.float32(key.split("_")[3]))
X = np.concatenate(Xs, axis=0)
y = np.array(ys)
X.shape
# MSE, ADAM
model.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)
print(model.evaluate(X, y))
# MSE, SGD
model.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)
print(model.evaluate(X, y))
# MSE, SGD
model.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)
print(model.evaluate(X, y))
# BCE, SGD
model.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)
print(model.evaluate(X, y))
# BCE, ADAM
model.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)
print(model.evaluate(X, y))
m = keras.models.Sequential()
m.add(keras.layers.Dense(1, input_shape=(1,)))
X = np.array([[0.0], [1.0]])
y = np.array([0.0, 1.0])
m.compile(optimizer="sgd", loss="binary_crossentropy")
print(m.layers[0].get_weights())
m.fit(X, y, epochs=500, verbose=False)
print(m.predict(X))
print(m.layers[0].get_weights())
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
# Generate dummy data
neg_X = np.random.randn(500, 2) + np.array([-2.0, 1.0])
pos_X = np.random.randn(500, 2) + np.array([1.0, -2.0])
X = np.concatenate((neg_X, pos_X), axis=0)
neg_Y = np.zeros((500,))
pos_Y = np.ones((500,))
Y = np.concatenate((neg_Y, pos_Y), axis=0)
model = Sequential()
model.add(Dense(10, input_dim=2, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
print(model.layers[0].get_weights())
model.fit(X, Y, epochs=20, batch_size=100, verbose=False)
print(model.layers[0].get_weights())
from matplotlib import pyplot as plt
%matplotlib inline
plt.figure()
plt.plot(neg_X[:, 0], neg_X[:, 1], '+');
plt.plot(pos_X[:, 0], pos_X[:, 1], '+');
```
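The center-cropping step buried in the loop above (keeping `n_hops` columns from the middle of each log-mel spectrogram) can be isolated and checked on a dummy array:

```python
import numpy as np

def center_crop(X, n_hops):
    # Same first_col/last_col arithmetic as in the loop above.
    width = X.shape[1]
    first_col = int((width - n_hops) / 2)
    last_col = int((width + n_hops) / 2)
    return X[:, first_col:last_col]

X = np.zeros((128, 130), dtype=np.float32)
Xc = center_crop(X, 104)
print(Xc.shape)  # → (128, 104)
```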
# Data Row
## Analysis of Tweets of Trump, Clinton, and Sanders
#### Ref:
## https://www.dataquest.io/blog/matplotlib-tutorial/
--------------------------------------------
## Exploring tweets with Pandas
```
import pandas as pd
tweets = pd.read_csv("tweets.csv")
tweets.head()
```
------------------------------------------------
## Generating a candidates column
```
def get_candidate(row):
candidates = []
text = row["text"].lower()
if "clinton" in text or "hillary" in text:
candidates.append("clinton")
if "trump" in text or "donald" in text:
candidates.append("trump")
if "sanders" in text or "bernie" in text:
candidates.append("sanders")
return ",".join(candidates)
tweets["candidate"] = tweets.apply(get_candidate,axis=1)
```
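The labeling rule can be exercised on a few hand-made rows; redefining the function here so the sketch is self-contained:

```python
def get_candidate(row):
    candidates = []
    text = row["text"].lower()
    if "clinton" in text or "hillary" in text:
        candidates.append("clinton")
    if "trump" in text or "donald" in text:
        candidates.append("trump")
    if "sanders" in text or "bernie" in text:
        candidates.append("sanders")
    return ",".join(candidates)

print(get_candidate({"text": "Hillary and Donald debate tonight"}))  # → clinton,trump
print(get_candidate({"text": "Bernie rally draws a crowd"}))         # → sanders
print(get_candidate({"text": "No politics here"}))                   # → empty string
```

A tweet mentioning several candidates gets a comma-joined label, which is why multi-candidate categories show up in the bar plot below.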
# Importing matplotlib
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
### Making a bar plot
```
counts = tweets["candidate"].value_counts()
plt.bar(range(len(counts)), counts)
plt.show()
print(counts)
```
--------------------------------------------
## Customizing plots
```
from datetime import datetime
tweets["created"] = pd.to_datetime(tweets["created"])
tweets["user_created"] = pd.to_datetime(tweets["user_created"])
tweets["user_age"] = tweets["user_created"].apply(lambda x: (datetime.now() - x).total_seconds() / 3600 / 24 / 365)
plt.hist(tweets["user_age"])
plt.show()
```
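The seconds-to-years conversion above (3600 · 24 · 365) can be verified with a known timestamp using only the standard library:

```python
from datetime import datetime, timedelta

def account_age_years(created, now):
    # Same conversion as the lambda above: seconds / 3600 / 24 / 365.
    return (now - created).total_seconds() / 3600 / 24 / 365

now = datetime(2016, 6, 1)
print(account_age_years(now - timedelta(days=365), now))  # → 1.0
print(account_age_years(now - timedelta(days=730), now))  # → 2.0
```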
### Adding labels
```
plt.hist(tweets["user_age"])
plt.title("Tweets mentioning candidates")
plt.xlabel("Twitter account age in years")
plt.ylabel("# of tweets")
plt.show()
```
### Making a stacked histogram
```
cl_tweets = tweets["user_age"][tweets["candidate"] == "clinton"]
sa_tweets = tweets["user_age"][tweets["candidate"] == "sanders"]
tr_tweets = tweets["user_age"][tweets["candidate"] == "trump"]
plt.hist([
cl_tweets,
sa_tweets,
tr_tweets
],
stacked=True,
label=["clinton", "sanders", "trump"]
)
plt.legend()
plt.title("Tweets mentioning each candidate")
plt.xlabel("Twitter account age in years")
plt.ylabel("# of tweets")
plt.show()
```
### Annotating the histogram
```
plt.hist([
cl_tweets,
sa_tweets,
tr_tweets
],
stacked=True,
label=["clinton", "sanders", "trump"]
)
plt.legend()
plt.title("Tweets mentioning each candidate")
plt.xlabel("Twitter account age in years")
plt.ylabel("# of tweets")
plt.annotate('More Trump tweets', xy=(1, 35000), xytext=(2, 35000),
arrowprops=dict(facecolor='black'))
plt.show()
```
----------------------------------------------
# Extracting colors
```
import matplotlib.colors as colors
tweets["red"] = tweets["user_bg_color"].apply(lambda x: colors.hex2color('#{0}'.format(x))[0])
tweets["blue"] = tweets["user_bg_color"].apply(lambda x: colors.hex2color('#{0}'.format(x))[2])
```
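`colors.hex2color` maps a hex string to RGB floats in $[0, 1]$; the channel extraction above amounts to the following, shown here without matplotlib:

```python
def hex_channel(hex_color, channel):
    # channel 0/1/2 -> R/G/B; each channel is two hex digits scaled to [0, 1].
    start = channel * 2
    return int(hex_color[start:start + 2], 16) / 255.0

print(hex_channel("FF0000", 0))  # → 1.0 (pure red)
print(hex_channel("C0DEED", 2))  # blue component of Twitter's old default background
```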
### Creating the plot
```
fig, axes = plt.subplots(nrows=2, ncols=2)
ax0, ax1, ax2, ax3 = axes.flat
ax0.hist(tweets["red"])
ax0.set_title('Red in backgrounds')
ax1.hist(tweets["red"][tweets["candidate"] == "trump"].values)
ax1.set_title('Red in Trump tweeters')
ax2.hist(tweets["blue"])
ax2.set_title('Blue in backgrounds')
ax3.hist(tweets["blue"][tweets["candidate"] == "trump"].values)
ax3.set_title('Blue in Trump tweeters')
plt.tight_layout()
plt.show()
```
### Removing common background colors
```
tweets["user_bg_color"].value_counts()
tc = tweets[~tweets["user_bg_color"].isin(["C0DEED", "000000", "F5F8FA"])]
def create_plot(data):
fig, axes = plt.subplots(nrows=2, ncols=2)
ax0, ax1, ax2, ax3 = axes.flat
ax0.hist(data["red"])
ax0.set_title('Red in backgrounds')
ax1.hist(data["red"][data["candidate"] == "trump"].values)
ax1.set_title('Red in Trump tweets')
ax2.hist(data["blue"])
ax2.set_title('Blue in backgrounds')
ax3.hist(data["blue"][data["candidate"] == "trump"].values)
ax3.set_title('Blue in Trump tweeters')
plt.tight_layout()
plt.show()
create_plot(tc)
```
## Plotting sentiment
```
gr = tweets.groupby("candidate").agg([np.mean, np.std])
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))
ax0, ax1 = axes.flat
std = gr["polarity"]["std"].iloc[1:]
mean = gr["polarity"]["mean"].iloc[1:]
ax0.bar(range(len(std)), std)
ax0.set_xticklabels(std.index, rotation=45)
ax0.set_title('Standard deviation of tweet sentiment')
ax1.bar(range(len(mean)), mean)
ax1.set_xticklabels(mean.index, rotation=45)
ax1.set_title('Mean tweet sentiment')
plt.tight_layout()
plt.show()
```
---------------------------------------------------
# Generating a side by side bar plot
#### Generating tweet lengths
```
def tweet_lengths(text):
if len(text) < 100:
return "short"
elif 100 <= len(text) <= 135:
return "medium"
else:
return "long"
tweets["tweet_length"] = tweets["text"].apply(tweet_lengths)
tl = {}
for candidate in ["clinton", "sanders", "trump"]:
tl[candidate] = tweets["tweet_length"][tweets["candidate"] == candidate].value_counts()
```
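The bucketing boundaries above are worth checking explicitly, since both ends of the 'medium' range are inclusive:

```python
def tweet_lengths(text):
    # Same thresholds as above: < 100 short, 100-135 medium, > 135 long.
    if len(text) < 100:
        return "short"
    elif 100 <= len(text) <= 135:
        return "medium"
    else:
        return "long"

print(tweet_lengths("x" * 99))   # → short
print(tweet_lengths("x" * 100))  # → medium
print(tweet_lengths("x" * 135))  # → medium
print(tweet_lengths("x" * 136))  # → long
```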
#### Plotting
```
fig, ax = plt.subplots()
width = .5
x = np.array(range(0, 6, 2))
ax.bar(x, tl["clinton"], width, color='g')
ax.bar(x + width, tl["sanders"], width, color='b')
ax.bar(x + (width * 2), tl["trump"], width, color='r')
ax.set_ylabel('# of tweets')
ax.set_title('Number of Tweets per candidate by length')
ax.set_xticks(x + (width * 1.5))
ax.set_xticklabels(('long', 'medium', 'short'))
ax.set_xlabel('Tweet length')
plt.show()
```
## Next steps
For all numerical experiments, we will be using the Chambolle-Pock primal-dual algorithm - details can be found in:
1. [A First-order Primal-dual Algorithm for Convex Problems with Applications to Imaging](https://link.springer.com/article/10.1007/s10851-010-0251-1), A. Chambolle, T. Pock, Journal of Mathematical Imaging and Vision (2011). [PDF](https://hal.archives-ouvertes.fr/hal-00490826/document)
2. [Recovering Piecewise Smooth Multichannel Images by Minimization of Convex Functionals with Total Generalized Variation Penalty](https://link.springer.com/chapter/10.1007/978-3-642-54774-4_3), K. Bredies, Efficient algorithms for global optimization methods in computer vision (2014). [PDF](https://imsc.uni-graz.at/mobis/publications/SFB-Report-2012-006.pdf)
3. [Second Order Total Generalized Variation (TGV) for MRI](https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.22595), F. Knoll, K. Bredies, T. Pock, R. Stollberger (2010). [PDF](https://onlinelibrary.wiley.com/doi/epdf/10.1002/mrm.22595)
In order to compute the spatially dependent regularization weights we follow:
4. [Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization](https://arxiv.org/pdf/2002.05614.pdf), M. Hintermüller, K. Papafitsoros, C.N. Rautenberg, H. Sun, arXiv preprint, (2020)
# Huber Total Variation Denoising
We are solving the discretized version of the following minimization problem
\begin{equation}\label{L2-TV}
\min_{u} \int_{\Omega} (u-f)^{2}dx + \alpha \int_{\Omega} \varphi_{\gamma}(\nabla u)dx
\end{equation}
where $\varphi_{\gamma}:\mathbb{R}^{d}\to \mathbb{R}^{+}$ is the Huber function
\begin{equation}
\varphi_{\gamma}(v)=
\begin{cases}
|v|-\frac{1}{2}\gamma & \text{ if } |v|\ge \gamma,\\
\frac{1}{2\gamma}|v|^{2}& \text{ if } |v|< \gamma.\\
\end{cases}
\end{equation}
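A direct NumPy implementation of $\varphi_{\gamma}$; the check at $|v| = \gamma$ confirms that the two branches meet continuously (both equal $\gamma/2$ there):

```python
import numpy as np

def huber(v, gamma):
    # |v| - gamma/2 where |v| >= gamma, |v|**2 / (2*gamma) where |v| < gamma.
    a = np.abs(v)
    return np.where(a >= gamma, a - 0.5 * gamma, a ** 2 / (2.0 * gamma))

gamma = 0.01
v = np.array([0.0, 0.5 * gamma, gamma, 2.0 * gamma])
print(huber(v, gamma))  # quadratic near 0, linear beyond gamma; gamma/2 at the kink
```

Small $\gamma$ makes $\varphi_{\gamma}$ behave like $|\cdot|$ (TV-like), while large $\gamma$ makes it quadratic over most of its range (Tikhonov-like), which is what Task 1 below explores.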
## Import data...
```
import scipy.io as sio
import matplotlib.pyplot as plt
import numpy as np
mat_contents = sio.loadmat('tutorial1_classical_reg_methods/parrot')
clean=mat_contents['parrot']
f=mat_contents['parrot_noisy_01']
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(clean)
imgplot2.set_cmap('gray')
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(f)
imgplot2.set_cmap('gray')
from tutorial1_classical_reg_methods.Tutorial_Codes import psnr, reproject, dxm, dym, dxp, dyp, function_TGV_denoising_CP, P_a_Huber, function_HuberTV_denoising_CP
```
## Task 1
Choose different values for $\alpha$ and $\gamma$ and interpret your results:
- Fix $\gamma$ small, e.g. $\gamma=0.01$, and play with the values of $\alpha$. What do you observe for large $\alpha$? For small $\alpha$?
- Fix $\alpha$ and play with the values of $\gamma$. What do you observe for large $\gamma$? For small $\gamma$?
```
alpha=0.085
gamma=0.001
uTV = function_HuberTV_denoising_CP(f,clean, alpha, gamma,1000)
uTikhonov = function_HuberTV_denoising_CP(f,clean, 5, 2,1000)
```
# Total Generalized Variation Denoising
We are solving the discretized version of the following minimization problem
\begin{equation}\label{L2-TGV}
\min_{u} \int_{\Omega} (u-f)^{2}dx + TGV_{\alpha,\beta}(u)
\end{equation}
where
\begin{equation}
TGV_{\alpha,\beta}(u)=\min_{w} \alpha \int_{\Omega} |\nabla u-w|dx + \beta \int_{\Omega} |Ew|dx
\end{equation}
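On a 1-D grid, the qualitative difference between the two penalties can be sketched with forward differences (a toy illustration, not the solver used below). TV charges a step and a ramp with the same endpoints equally, whereas TGV's auxiliary variable $w$ can absorb a smooth gradient, making ramps cheaper:

```python
import numpy as np

def tv_1d(u):
    # Discrete total variation with forward differences: sum_i |u_{i+1} - u_i|.
    return np.sum(np.abs(np.diff(u)))

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0])    # a single jump of height 1
ramp = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # same endpoints, linear ramp
print(tv_1d(step))  # → 1.0
print(tv_1d(ramp))  # → 1.0: TV cannot tell the two apart
```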
## Task 2a
Choose different values for $\alpha$ and $\beta$ and solve the TGV denoising minimization problem.
- What happens for small $\alpha$ and large $\beta$?
- What happens for large $\alpha$ and small $\beta$?
- What happens where both parameters are small/large?
- Try to find the combination of parameters that gives the highest PSNR value.
```
#alpha=0.085
#beta=0.15
alpha=0.085
beta=0.15
uTGV = function_TGV_denoising_CP(f,clean, alpha, beta, 500)
```
## Task 2b
Import the following spatially dependent regularization weights, which are taken from this work:
- [Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization](https://arxiv.org/pdf/2002.05614.pdf), M. Hintermüller, K. Papafitsoros, C.N. Rautenberg, H. Sun, arXiv preprint, (2020)
```
weight_contents = sio.loadmat('tutorial1_classical_reg_methods/spatial_dependent_weights')
alpha_spatial=weight_contents['TGV_alpha_spatial']
beta_spatial=weight_contents['TGV_beta_spatial']
#plt.figure(figsize = (7,7))
#imgplot2 = plt.imshow(alpha_spatial)
#imgplot2.set_cmap('gray')
#plt.figure(figsize = (7,7))
#imgplot2 = plt.imshow(beta_spatial)
#imgplot2.set_cmap('gray')
from mpl_toolkits.mplot3d import Axes3D
(n,m)=alpha_spatial.shape
x=range(n)
y=range(m)
X, Y = np.meshgrid(x, y)
halpha = plt.figure(figsize = (7,7))
h_alpha = halpha.add_subplot(111, projection='3d')
h_alpha.plot_surface(X, Y, alpha_spatial)
hbeta = plt.figure(figsize = (7,7))
h_beta = hbeta.add_subplot(111, projection='3d')
h_beta.plot_surface(X, Y, beta_spatial)
hclean = plt.figure(figsize = (7,7))
h_clean = hclean.add_subplot(111, projection='3d')
h_clean.plot_surface(X, Y, clean)
```
And run again the algorithm with this weight:
```
uTGVspatial = function_TGV_denoising_CP(f,clean, alpha_spatial, beta_spatial, 500)
```
Now you can see all the reconstructions together:
```
plt.rcParams['figure.figsize'] = np.array([4, 3])*3
plt.rcParams['figure.dpi'] = 120
fig, axs = plt.subplots(ncols=3, nrows=2)
# remove ticks from plot
for ax in axs.flat:
ax.set(xticks=[], yticks=[])
axs[0,0].imshow(clean, cmap='gray')
axs[0,0].set(xlabel='Clean')
axs[0,1].imshow(f, cmap='gray')
axs[0,1].set(xlabel='Noisy, PSNR = ' + str(np.around(psnr(f, clean),decimals=2)))
axs[0,2].imshow(uTikhonov, cmap='gray')
axs[0,2].set(xlabel='Tikhonov, PSNR = ' + str(np.around(psnr(uTikhonov, clean),decimals=2)))
axs[1,0].imshow(uTV, cmap='gray')
axs[1,0].set(xlabel='TV, PSNR = ' + str(np.around(psnr(uTV, clean),decimals=2)))
axs[1,1].imshow(uTGV, cmap='gray')
axs[1,1].set(xlabel = 'TGV, PSNR = ' + str(np.around(psnr(uTGV, clean),decimals=2)))
axs[1,2].imshow(uTGVspatial, cmap='gray')
axs[1,2].set(xlabel = 'TGV spatial, PSNR = ' + str(np.around(psnr(uTGVspatial, clean),decimals=2)))
```
# TV and TGV MRI reconstruction
Here we will be solving the discretized version of the following minimization problem
\begin{equation}
\min_{u} \int_{\Omega} (S \circ F u-g)^{2}dx + \alpha TV(u)
\end{equation}
and
\begin{equation}
\min_{u} \int_{\Omega} (S \circ F u-g)^{2}dx + TGV_{\alpha,\beta}(u)
\end{equation}
The code for the examples below was kindly provided by Clemens Sirotenko.
## Import data
```
from tutorial1_classical_reg_methods.Tutorial_Codes import normalize, subsampling, subsampling_transposed, compute_differential_operators, function_TV_MRI_CP, function_TGV_MRI_CP
from scipy import sparse
import scipy.sparse.linalg
image=np.load('tutorial1_classical_reg_methods/img_example.npy')
image=np.abs(image[:,:,3])
image = normalize(image)
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(image)
imgplot2.set_cmap('gray')
```
## Simulate noisy data and subsampled data
Create noisy data $ S \circ F x + \varepsilon = y^{\delta}$, where $x$ is the clean image and $ \varepsilon \sim \mathcal{N}(0,\sigma^2)$ is normally distributed, centered complex noise
```
mask = np.ones(np.shape(image))
mask[:,1:-1:3] = 0
Fx = np.fft.fft2(image,norm='ortho') #ortho means that the fft2 is unitary
(M,N) = image.shape
rate = 0.039 ##noise rate
noise = np.random.randn(M,N) + (1j)*np.random.randn(M,N) #cmplx noise
distorted_full = Fx + rate*noise
distorted = subsampling(distorted_full, mask)
zero_filling = np.real(np.fft.ifft2(subsampling_transposed(distorted, mask), norm = 'ortho'))
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(mask)
imgplot2.set_cmap('gray')
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(zero_filling)
imgplot2.set_cmap('gray')
```
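The imported `subsampling` and `subsampling_transposed` helpers come from `Tutorial_Codes`; as a hedged stand-in (assuming $S$ simply keeps the sampled k-space entries and $S^{T}$ zero-fills the rest, so that $S^{T} S$ acts as element-wise multiplication by the 0/1 mask), they behave like:
```
import numpy as np

# Hypothetical stand-ins for the imported helpers -- the real functions may
# store the sampled entries differently, but the composition S^T S acts the same.
def subsampling_sketch(k, mask):
    return k * mask          # keep only the sampled k-space entries

def subsampling_transposed_sketch(k_sub, mask):
    return k_sub * mask      # zero-fill the unsampled entries

mask = np.array([[1.0, 0.0], [1.0, 1.0]])
k = np.array([[3.0, 5.0], [2.0, 7.0]])
zero_filled = subsampling_transposed_sketch(subsampling_sketch(k, mask), mask)
print(zero_filled)  # the unsampled entry (5.0) is replaced by 0
```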
## TV MRI reconstruction
```
x_0 = zero_filling
data = distorted
alpha = 0.025
tau = 1/np.sqrt(12)
sigma = tau
h = 1
max_it = 3000
tol = 1e-4 # algorithm stops if |x_k - x_{k+1}| < tol
x_TV = function_TV_MRI_CP(data,image,mask,x_0,tau,sigma,h,max_it,tol,alpha)
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(x_TV)
imgplot2.set_cmap('gray')
```
## TGV MRI reconstruction
```
alpha = 0.02
beta = 0.035
x_0 = zero_filling
data = distorted
tau = 1/np.sqrt(12)
sigma = tau
lambda_prox = 1
h = 1
tol = 1e-4
max_it = 2500
x_TGV = function_TGV_MRI_CP(data,image, mask,x_0,tau,sigma,lambda_prox,h,max_it,tol,beta,alpha)
plt.figure(figsize = (7,7))
imgplot2 = plt.imshow(x_TGV)
imgplot2.set_cmap('gray')
```
Now you can see all the reconstructions together:
```
plt.rcParams['figure.figsize'] = np.array([2, 2])*5
plt.rcParams['figure.dpi'] = 120
fig, axs = plt.subplots(ncols=2, nrows=2)
# remove ticks from plot
for ax in axs.flat:
ax.set(xticks=[], yticks=[])
axs[0,0].imshow(normalize(image), cmap='gray')
axs[0,0].set(xlabel='Clean Image')
axs[1,0].imshow(normalize(x_TV), cmap='gray')
axs[1,0].set(xlabel='TV Reconstruction, PSNR = ' + str(np.around(psnr(x_TV, image),decimals=2)))
axs[0,1].imshow(normalize(x_0), cmap='gray')
axs[0,1].set(xlabel = 'Zero Filling Solution , PSNR = ' + str(np.around(psnr(x_0, image),decimals=2)))
axs[1,1].imshow(normalize(x_TGV), cmap='gray')
axs[1,1].set(xlabel='TGV Reconstruction , PSNR = ' + str(np.around(psnr(x_TGV, image),decimals=2)))
```
# Figure. X Inactivation
```
import cPickle
import datetime
import glob
import os
import random
import re
import subprocess
import cdpybio as cpb
import matplotlib as mpl
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pybedtools as pbt
import scipy.stats as stats
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels as sms
import cardipspy as cpy
import ciepy
%matplotlib inline
%load_ext rpy2.ipython
import socket
if socket.gethostname() == 'fl-hn1' or socket.gethostname() == 'fl-hn2':
pbt.set_tempdir('/frazer01/home/cdeboever/tmp')
outdir = os.path.join(ciepy.root, 'output',
'figure_x_inactivation')
cpy.makedir(outdir)
private_outdir = os.path.join(ciepy.root, 'private_output',
'figure_x_inactivation')
cpy.makedir(private_outdir)
plt.rcParams['font.sans-serif'] = ['Arial']
plt.rcParams['font.size'] = 8
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rsem_tpm.tsv')
tpm = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'rnaseq_metadata.tsv')
rna_meta = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'subject_metadata.tsv')
subject_meta = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'wgs_metadata.tsv')
wgs_meta = pd.read_table(fn, index_col=0)
gene_info = pd.read_table(cpy.gencode_gene_info, index_col=0)
genes = pbt.BedTool(cpy.gencode_gene_bed)
fn = os.path.join(ciepy.root, 'output', 'input_data', 'cnvs.tsv')
cnvs = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'x_inactivation', 'x_ase_exp.tsv')
x_exp = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'x_inactivation', 'expression_densities.tsv')
pdfs = pd.read_table(fn, index_col=0)
pdfs.columns = ['No ASE', 'ASE']
fn = os.path.join(ciepy.root, 'output', 'input_data',
'mbased_major_allele_freq.tsv')
maj_af = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output', 'input_data',
'mbased_p_val_ase.tsv')
ase_pval = pd.read_table(fn, index_col=0)
locus_p = pd.Panel({'major_allele_freq':maj_af, 'p_val_ase':ase_pval})
locus_p = locus_p.swapaxes(0, 2)
snv_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'mbased_snv',
'*_snv.tsv'))
count_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'allele_counts',
'*mbased_input.tsv'))
snv_res = {}
for fn in snv_fns:
snv_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)
count_res = {}
for fn in count_fns:
count_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)
snv_p = pd.Panel(snv_res)
# We'll keep female subjects with no CNVs on the X chromosome.
sf = subject_meta[subject_meta.sex == 'F']
meta = sf.merge(rna_meta, left_index=True, right_on='subject_id')
s = set(meta.subject_id) & set(cnvs.ix[cnvs.chr == 'chrX', 'subject_id'])
meta = meta[meta.subject_id.apply(lambda x: x not in s)]
meta = meta.ix[[x for x in snv_p.items if x in meta.index]]
snv_p = snv_p.ix[meta.index]
locus_p = locus_p.ix[meta.index]
# Filter and take log.
tpm_f = tpm[meta[meta.sex == 'F'].index]
tpm_f = tpm_f[(tpm_f != 0).sum(axis=1) > 0]
log_tpm = np.log10(tpm_f + 1)
# Mean center.
log_tpm_c = (log_tpm.T - log_tpm.mean(axis=1)).T
# Variance normalize.
log_tpm_n = (log_tpm_c.T / log_tpm_c.std(axis=1)).T
single = locus_p.ix['071ca248-bcb1-484d-bff2-3aefc84f8688', :, :].dropna()
x_single = single[gene_info.ix[single.index, 'chrom'] == 'chrX']
notx_single = single[gene_info.ix[single.index, 'chrom'] != 'chrX']
t = locus_p.ix[:, :, 'major_allele_freq']
x_all = locus_p.ix[:, set(t.index) & set(gene_info[gene_info.chrom == 'chrX'].index), :]
notx_all = locus_p.ix[:, set(t.index) & set(gene_info[gene_info.chrom != 'chrX'].index), :]
genes_to_plot = ['XIST', 'TSIX']
t = pd.Series(gene_info.index, index=gene_info.gene_name)
exp = log_tpm_n.ix[t[genes_to_plot]].T
exp.columns = genes_to_plot
exp = exp.ix[x_all.items].sort_values(by='XIST', ascending=False)
sns.set_style('white')
```
## Paper
```
n = x_exp.shape[0]
print('Plotting mean expression for {} X chromosome genes.'.format(n))
n = sum(x_exp.mean_sig_exp < x_exp.mean_not_sig_exp)
print('{} of {} ({:.2f}%) genes had higher expression for samples without ASE.'.format(
n, x_exp.shape[0], n / float(x_exp.shape[0]) * 100))
fig = plt.figure(figsize=(6.85, 9), dpi=300)
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.text(0, 1, 'Figure 6',
size=16, va='top', )
ciepy.clean_axis(ax)
ax.set_xticks([])
ax.set_yticks([])
gs.tight_layout(fig, rect=[0, 0.90, 0.5, 1])
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.scatter(x_exp.mean_sig_exp, x_exp.mean_not_sig_exp, alpha=0.4, color='grey', s=10)
ax.set_ylabel('Mean expression,\nno ASE', fontsize=8)
ax.set_xlabel('Mean expression, ASE', fontsize=8)
xmin,xmax = ax.get_xlim()
ymin,ymax = ax.get_ylim()
plt.plot([min(xmin, ymin), max(xmax, ymax)], [min(xmin, ymin), max(xmax, ymax)], color='black', ls='--')
ax.set_xlim(-1, 1.75)
ax.set_ylim(-1, 1.75)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
gs.tight_layout(fig, rect=[0.02, 0.79, 0.32, 0.95])
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.hist(x_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')
ax.set_xlim(0.5, 1)
ax.set_ylabel('Number of genes', fontsize=8)
ax.set_xlabel('Allelic imbalance fraction', fontsize=8)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_yticks(np.arange(0, 20, 4))
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
gs.tight_layout(fig, rect=[0.33, 0.79, 0.66, 0.95])
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.hist(notx_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')
ax.set_xlim(0.5, 1)
ax.set_ylabel('Number of genes', fontsize=8)
ax.set_xlabel('Allelic imbalance fraction', fontsize=8)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.yaxis.set_major_formatter(ciepy.comma_format)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
gs.tight_layout(fig, rect=[0.66, 0.79, 1, 0.95])
gs = gridspec.GridSpec(1, 4, width_ratios=[0.5, 1.2, 3, 3])
ax = fig.add_subplot(gs[0, 0])
passage_im = ax.imshow(np.array([meta.ix[exp.index, 'passage'].values]).T,
aspect='auto', interpolation='nearest',
cmap=sns.palettes.cubehelix_palette(light=.95, as_cmap=True))
ciepy.clean_axis(ax)
ax.set_xlabel('Passage', fontsize=8)
ax = fig.add_subplot(gs[0, 1])
# Make norm.
vmin = np.floor(exp.min().min())
vmax = np.ceil(exp.max().max())
vmax = max([vmax, abs(vmin)])
vmin = vmax * -1
exp_norm = mpl.colors.Normalize(vmin, vmax)
exp_im = ax.imshow(exp, aspect='auto', interpolation='nearest',
norm=exp_norm, cmap=plt.get_cmap('RdBu_r'))
ciepy.clean_axis(ax)
ax.set_xticks([0, 1])
ax.set_xticklabels(exp.columns, fontsize=8)
for t in ax.get_xticklabels():
t.set_fontstyle('italic')
#t.set_rotation(30)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
percent_norm = mpl.colors.Normalize(0, 1)
ax = fig.add_subplot(gs[0, 2])
r = x_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False],
bins=np.arange(0.5, 1.05, 0.05)))
r = r.apply(lambda z: z.value_counts())
r = (r.T / r.max(axis=1)).T
x_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',
norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))
ciepy.clean_axis(ax)
xmin,xmax = ax.get_xlim()
ax.set_xticks(np.arange(xmin, xmax + 1, 2))
ax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=8)#, rotation=30)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.set_xlabel('Allelic imbalance fraction', fontsize=8)
ax = fig.add_subplot(gs[0, 3])
r = notx_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False],
bins=np.arange(0.5, 1.05, 0.05)))
r = r.apply(lambda z: z.value_counts())
r = (r.T / r.max(axis=1)).T
not_x_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',
norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))
ciepy.clean_axis(ax)
xmin,xmax = ax.get_xlim()
ax.set_xticks(np.arange(xmin, xmax + 1, 2))
ax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=8)#, rotation=30)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.set_xlabel('Allelic imbalance fraction', fontsize=8)
gs.tight_layout(fig, rect=[0, 0.45, 0.8, 0.8])
gs = gridspec.GridSpec(2, 2)
# Plot colormap for gene expression.
ax = fig.add_subplot(gs[0:2, 0])
cb = plt.colorbar(mappable=exp_im, cax=ax)
cb.solids.set_edgecolor("face")
cb.outline.set_linewidth(0)
for l in ax.get_yticklines():
l.set_markersize(0)
cb.set_label('$\log$ TPM $z$-score', fontsize=8)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
# Plot colormap for passage number.
ax = fig.add_subplot(gs[0, 1])
cb = plt.colorbar(mappable=passage_im, cax=ax)
cb.solids.set_edgecolor("face")
cb.outline.set_linewidth(0)
for l in ax.get_yticklines():
l.set_markersize(0)
cb.set_label('Passage number', fontsize=8)
cb.set_ticks(np.arange(12, 32, 4))
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
# Plot colormap for ASE.
ax = fig.add_subplot(gs[1, 1])
cb = plt.colorbar(mappable=x_ase_im, cax=ax)
cb.solids.set_edgecolor("face")
cb.outline.set_linewidth(0)
for l in ax.get_yticklines():
l.set_markersize(0)
cb.set_label('Fraction of genes', fontsize=8)
cb.set_ticks(np.arange(0, 1.2, 0.2))
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
gs.tight_layout(fig, rect=[0.8, 0.45, 1, 0.8])
t = fig.text(0.005, 0.93, 'A', weight='bold',
size=12)
t = fig.text(0.315, 0.93, 'B', weight='bold',
size=12)
t = fig.text(0.645, 0.93, 'C', weight='bold',
size=12)
t = fig.text(0.005, 0.79, 'D', weight='bold',
size=12)
t = fig.text(0.005, 0.44, 'E', weight='bold',
size=12)
t = fig.text(0.005, 0.22, 'F', weight='bold',
size=12)
plt.savefig(os.path.join(outdir, 'x_inactivation_skeleton.pdf'))
%%R
suppressPackageStartupMessages(library(Gviz))
t = x_all.ix[:, :, 'major_allele_freq']
r = gene_info.ix[t.index, ['start', 'end']]
%%R -i t,r
ideoTrack <- IdeogramTrack(genome = "hg19", chromosome = "chrX", fontsize=8, fontsize.legend=8,
fontcolor='black', cex=1, cex.id=1, cex.axis=1, cex.title=1)
mafTrack <- DataTrack(range=r, data=t, genome="hg19", type=c("p"), alpha=0.5, lwd=8,
span=0.05, chromosome="chrX", name="Allelic imbalance fraction", fontsize=8,
fontcolor.legend='black', col.axis='black', col.title='black',
background.title='transparent', cex=1, cex.id=1, cex.axis=1, cex.title=1,
fontface=1, fontface.title=1, alpha.title=1)
fn = os.path.join(outdir, 'p_maf.pdf')
%%R -i fn
pdf(fn, 6.85, 2)
plotTracks(c(ideoTrack, mafTrack), from=0, to=58100000, col.title='black')
dev.off()
fn = os.path.join(outdir, 'q_maf.pdf')
%%R -i fn
pdf(fn, 6.85, 2)
plotTracks(c(ideoTrack, mafTrack), from=63000000, to=155270560)
dev.off()
%%R -i fn
plotTracks(c(ideoTrack, mafTrack), from=63000000, to=155270560)
```
## Presentation
```
# Set fontsize
fs = 10
fig = plt.figure(figsize=(6.85, 5), dpi=300)
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.scatter(x_exp.mean_sig_exp, x_exp.mean_not_sig_exp, alpha=0.4, color='grey', s=10)
ax.set_ylabel('Mean expression,\nno ASE', fontsize=fs)
ax.set_xlabel('Mean expression, ASE', fontsize=fs)
xmin,xmax = ax.get_xlim()
ymin,ymax = ax.get_ylim()
plt.plot([min(xmin, ymin), max(xmax, ymax)], [min(xmin, ymin), max(xmax, ymax)], color='black', ls='--')
ax.set_xlim(-1, 1.75)
ax.set_ylim(-1, 1.75)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
gs.tight_layout(fig, rect=[0.02, 0.62, 0.32, 0.95])
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.hist(x_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')
ax.set_xlim(0.5, 1)
ax.set_ylabel('Number of genes', fontsize=fs)
ax.set_xlabel('Allelic imbalance fraction', fontsize=fs)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_yticks(np.arange(0, 20, 4))
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
gs.tight_layout(fig, rect=[0.33, 0.62, 0.66, 0.95])
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.hist(notx_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')
ax.set_xlim(0.5, 1)
ax.set_ylabel('Number of genes', fontsize=fs)
ax.set_xlabel('Allelic imbalance fraction', fontsize=fs)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.yaxis.set_major_formatter(ciepy.comma_format)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
gs.tight_layout(fig, rect=[0.66, 0.62, 1, 0.95])
#gs.tight_layout(fig, rect=[0, 0.62, 1, 1.0])
# t = fig.text(0.005, 0.88, 'A', weight='bold',
# size=12)
# t = fig.text(0.315, 0.88, 'B', weight='bold',
# size=12)
# t = fig.text(0.675, 0.88, 'C', weight='bold',
# size=12)
gs = gridspec.GridSpec(1, 4, width_ratios=[0.5, 1.2, 3, 3])
ax = fig.add_subplot(gs[0, 0])
passage_im = ax.imshow(np.array([meta.ix[exp.index, 'passage'].values]).T,
aspect='auto', interpolation='nearest',
cmap=sns.palettes.cubehelix_palette(light=.95, as_cmap=True))
ciepy.clean_axis(ax)
ax.set_xlabel('Passage')
ax = fig.add_subplot(gs[0, 1])
# Make norm.
vmin = np.floor(exp.min().min())
vmax = np.ceil(exp.max().max())
vmax = max([vmax, abs(vmin)])
vmin = vmax * -1
exp_norm = mpl.colors.Normalize(vmin, vmax)
exp_im = ax.imshow(exp, aspect='auto', interpolation='nearest',
norm=exp_norm, cmap=plt.get_cmap('RdBu_r'))
ciepy.clean_axis(ax)
ax.set_xticks([0, 1])
ax.set_xticklabels(exp.columns, fontsize=fs)
for t in ax.get_xticklabels():
t.set_fontstyle('italic')
t.set_rotation(30)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
percent_norm = mpl.colors.Normalize(0, 1)
ax = fig.add_subplot(gs[0, 2])
r = x_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False],
bins=np.arange(0.5, 1.05, 0.05)))
r = r.apply(lambda z: z.value_counts())
r = (r.T / r.max(axis=1)).T
x_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',
norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))
ciepy.clean_axis(ax)
xmin,xmax = ax.get_xlim()
ax.set_xticks(np.arange(xmin, xmax + 1, 2))
ax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=fs)#, rotation=30)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.set_xlabel('Allelic imbalance fraction', fontsize=fs)
ax.set_title('X Chromosome')
ax = fig.add_subplot(gs[0, 3])
r = notx_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False],
bins=np.arange(0.5, 1.05, 0.05)))
r = r.apply(lambda z: z.value_counts())
r = (r.T / r.max(axis=1)).T
not_x_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',
norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))
ciepy.clean_axis(ax)
xmin,xmax = ax.get_xlim()
ax.set_xticks(np.arange(xmin, xmax + 1, 2))
ax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=fs)#, rotation=30)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.set_xlabel('Allelic imbalance fraction', fontsize=fs)
ax.set_title('Autosomes')
# t = fig.text(0.005, 0.615, 'D', weight='bold',
# size=12)
gs.tight_layout(fig, rect=[0, 0, 0.75, 0.62])
gs = gridspec.GridSpec(2, 2)
# Plot colormap for gene expression.
ax = fig.add_subplot(gs[0:2, 0])
cb = plt.colorbar(mappable=exp_im, cax=ax)
cb.solids.set_edgecolor("face")
cb.outline.set_linewidth(0)
for l in ax.get_yticklines():
l.set_markersize(0)
cb.set_label('$\log$ TPM $z$-score', fontsize=fs)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
# Plot colormap for passage number.
ax = fig.add_subplot(gs[0, 1])
cb = plt.colorbar(mappable=passage_im, cax=ax)
cb.solids.set_edgecolor("face")
cb.outline.set_linewidth(0)
for l in ax.get_yticklines():
l.set_markersize(0)
cb.set_label('Passage number', fontsize=fs)
cb.set_ticks(np.arange(12, 32, 4))
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
# Plot colormap for ASE.
ax = fig.add_subplot(gs[1, 1])
cb = plt.colorbar(mappable=x_ase_im, cax=ax)
cb.solids.set_edgecolor("face")
cb.outline.set_linewidth(0)
for l in ax.get_yticklines():
l.set_markersize(0)
cb.set_label('Fraction of genes', fontsize=fs)
cb.set_ticks(np.arange(0, 1.2, 0.2))
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
gs.tight_layout(fig, rect=[0.75, 0, 1, 0.62])
plt.savefig(os.path.join(outdir, 'x_inactivation_hists_heatmaps_presentation.pdf'))
fig, axs = plt.subplots(1, 2, figsize=(6, 2.4), dpi=300)
ax = axs[1]
ax.hist(x_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')
ax.set_xlim(0.5, 1)
ax.set_ylabel('Number of genes', fontsize=fs)
ax.set_xlabel('Allelic imbalance fraction', fontsize=fs)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_yticks(np.arange(0, 20, 4))
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
ax.set_title('X Chromosome', fontsize=fs)
ax = axs[0]
ax.hist(notx_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')
ax.set_xlim(0.5, 1)
ax.set_ylabel('Number of genes', fontsize=fs)
ax.set_xlabel('Allelic imbalance fraction', fontsize=fs)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for l in ax.get_xticklines() + ax.get_yticklines():
l.set_markersize(0)
ax.yaxis.set_major_formatter(ciepy.comma_format)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(fs)
ax.set_title('Autosomes', fontsize=fs)
fig.tight_layout()
plt.savefig(os.path.join(outdir, 'mhf_hists_presentation.pdf'))
t = x_all.ix[:, :, 'major_allele_freq']
r = gene_info.ix[t.index, ['start', 'end']]
%%R -i t,r
ideoTrack <- IdeogramTrack(genome = "hg19", chromosome = "chrX", fontsize=16, fontsize.legend=16,
fontcolor='black', cex=1, cex.id=1, cex.axis=1, cex.title=1)
mafTrack <- DataTrack(range=r, data=t, genome="hg19", type=c("smooth", "p"), alpha=0.75, lwd=8,
span=0.05, chromosome="chrX", name="Allelic imbalance fraction", fontsize=12,
fontcolor.legend='black', col.axis='black', col.title='black',
background.title='transparent', cex=1, cex.id=1, cex.axis=1, cex.title=1,
fontface=1, fontface.title=1, alpha.title=1)
fn = os.path.join(outdir, 'p_maf_presentation.pdf')
%%R -i fn
pdf(fn, 10, 3)
plotTracks(c(ideoTrack, mafTrack), from=0, to=58100000, col.title='black')
dev.off()
fn = os.path.join(outdir, 'q_maf_presentation.pdf')
%%R -i fn
pdf(fn, 10, 3)
plotTracks(c(ideoTrack, mafTrack), from=63000000, to=155270560)
dev.off()
```
# Sensing Local Field Potentials with a Directional and Scalable Depth Array: the DISC electrode array
## Supp Figure 1
- Associated data: Supp fig 1 data
- Link: (nature excel link)
## Description:
#### This module does the following:
1. Reads .csv data from ANSYS
2. Calculates F/B ratios for each orientation
3. Plots the results
# Settings
```
import pandas as pd
import matplotlib.pyplot as plt
data=pd.read_csv('/home/jovyan/ansys_data/supp-fig-1-final.csv')
```
# Process data
```
gap = data.gap
f_b_ratio_monopole = data.v_front_monopole / data.v_back_monopole
f_b_ratio = data.v_front / data.v_back
f_b_ratio_orth = data.v_front_orth / data.v_back_orth
f_b_ratio_large = data.v_front_large / data.v_back_large
f_b_ratio_wire_monopole = data.v_front_monopole_wire / data.v_back_monopole_wire
f_b_ratio_wire = data.v_front_wire / data.v_back_wire
f_b_ratio_wire_orth = data.v_front_wire_orth / data.v_back_wire_orth
f_b_ratio_wire_large = data.v_front_wire_large / data.v_back_wire_large
```
# Plot Voltage Ratio
```
# First, clear old plot if one exists
plt.clf()
# Now, create figure & add plots to it
plt.figure(figsize=[10, 5], dpi=500)
plt.plot(gap,f_b_ratio_monopole, color='blue', linestyle='dashed')
plt.plot(gap, f_b_ratio, color='blue')
plt.plot(gap,f_b_ratio_orth, color='blue', linestyle='dotted')
plt.plot(gap, f_b_ratio_large, linestyle='dashdot', color='blue')
plt.plot(gap,f_b_ratio_wire_monopole, color='orange', linestyle='dashed')
plt.plot(gap, f_b_ratio_wire, color='orange')
plt.plot(gap,f_b_ratio_wire_orth, color='orange', linestyle='dotted')
plt.plot(gap, f_b_ratio_wire_large, linestyle='dashdot', color='orange')
plt.xscale("linear")
plt.xlabel("gap [mm]")
plt.yscale("log")
plt.ylabel("Voltage Ratio")
plt.legend(["DISC-monopole", "DISC-dipole", "DISC-orth", "DISC-lg", "MW-monopole", "MW-dipole", "MW-orth", "MW-lg"])
plt.savefig('/home/jovyan/ansys_data/images/supp-fig-2.eps', format='eps', dpi=500)
plt.show()
```
# Plot front electrode voltage
```
plt.figure(figsize=[10, 5], dpi=500)
plt.plot(gap, data.v_front_monopole, color='blue', linestyle='dashed')
plt.plot(gap, data.v_front, color='blue')
plt.plot(gap, data.v_front_orth, color='blue', linestyle='dotted')
plt.plot(gap, data.v_front_large, linestyle='dashdot', color='blue')
plt.plot(gap, data.v_front_monopole_wire, color='orange', linestyle='dashed')
plt.plot(gap, data.v_front_wire, color='orange')
plt.plot(gap, data.v_front_wire_orth, color='orange', linestyle='dotted')
plt.plot(gap, data.v_front_wire_large, linestyle='dashdot', color='orange')
plt.legend(["DISC-monopole", "DISC-dipole", "DISC-orth", "DISC-lg", "MW-monopole", "MW-dipole", "MW-orth", "MW-lg"])
plt.xscale("linear")
plt.xlabel("gap [mm]")
plt.yscale("log")
plt.ylabel("Voltage [uV]")
plt.savefig('/home/jovyan/ansys_data/images/supp-fig-2-voltage.eps', format='eps', dpi=500)
plt.show()
```
# Developing an AI application
Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories; you can see a few examples below.
<img src='assets/Flowers.png' width=500px>
The project is broken down into multiple steps:
* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content
We'll lead you through each part which you'll implement in Python.
When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
```
# Imports here
import json
import numpy as np
import matplotlib.pyplot as plt
from collections import OrderedDict
from PIL import Image
import torch
import torch.nn.functional as F
from torch.autograd import Variable
from torch import nn
from torch import optim
from torchvision import datasets, transforms, models
```
## Load the data
Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook; otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.
The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.
The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 with unit standard deviation.
```
# Define image paths
data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# Define your transforms for the training, validation, and testing sets.
# Random augmentation is applied only to the training set; validation and
# test images are deterministically resized and center-cropped.
train_transforms = transforms.Compose([transforms.RandomRotation(180),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
eval_transforms = transforms.Compose([transforms.Resize(256),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
# Load the datasets with ImageFolder
train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
valid_data = datasets.ImageFolder(valid_dir, transform=eval_transforms)
test_data = datasets.ImageFolder(test_dir, transform=eval_transforms)
# Using the image datasets and the transforms, define the dataloaders
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
validloader = torch.utils.data.DataLoader(valid_data, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=16, shuffle=True)
```
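As a quick numeric sanity check of what `transforms.Normalize` computes (per channel, `out = (in - mean) / std`), a pixel sitting exactly at the channel means maps to zero:
```
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

pixel = mean.copy()                  # a pixel exactly at the ImageNet channel means
normalized = (pixel - mean) / std
print(normalized)                    # [0. 0. 0.] -- centered at zero
```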
### Label mapping
You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
```
# Load class name mapping
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
```
# Building and training the classifier
Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.
We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours.
Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers with backpropagation, using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters
We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
```
# Load pre-trained network
model = models.densenet161(pretrained=True)

# Freeze parameters
for param in model.parameters():
    param.requires_grad = False

# Create feed forward network and set as classifier
classifier = nn.Sequential(nn.Linear(2208, 1024),
                           nn.ReLU(),
                           nn.Dropout(p=0.5),
                           nn.Linear(1024, len(cat_to_name)),
                           nn.LogSoftmax(dim=1))
model.classifier = classifier

# Function for the validation and test pass
def validation(model, testloader, criterion):
    # Use GPU for tests and validation, else CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    loss = 0
    accuracy = 0
    for images, labels in testloader:
        images = images.to(device)
        labels = labels.to(device)
        output = model.forward(images)
        loss += criterion(output, labels).item()
        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()
    return loss, accuracy

# Hyperparameters
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
epochs = 5

# Use GPU for tests and validation, else CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
print("Start Training...")
print("Device:", device)

# Go through epochs
print_every = 20
steps = 0
for e in range(epochs):
    model.train()
    running_loss = 0
    # Train network
    for images, labels in trainloader:
        steps += 1
        images = images.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        # Forward and backward passes
        outputs = model.forward(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # Print losses and accuracy after every interval
        if steps % print_every == 0:
            # Validation pass
            model.eval()
            with torch.no_grad():
                vloss, vaccuracy = validation(model, validloader, criterion)
            # Print progress
            print("Epoch: {}/{} |".format(e+1, epochs),
                  "Training Loss: {:.3f} |".format(running_loss/print_every),
                  "Validation Loss: {:.3f} |".format(vloss/len(validloader)),
                  "Validation Accuracy: {:.1f}%".format(100*vaccuracy/len(validloader)))
            running_loss = 0
            model.train()
```
## Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
```
# Do validation on test set
model.eval()
with torch.no_grad():
    _, accuracy = validation(model, testloader, criterion)
print("Accuracy of the network on the test set: {:.2f}%".format(100*accuracy/len(testloader)))
```
## Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
```model.class_to_idx = image_datasets['train'].class_to_idx```
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
```
# Attach mapping and hyper parameters to model
model.class_to_idx = train_data.class_to_idx
model.epochs = epochs
model.optimizer = optimizer

def save_checkpoint(model):
    """ Save model as a checkpoint along with associated parameters """
    checkpoint = {
        "feature_arch": "densenet161",
        "output_size": model.classifier[-2].out_features,
        "hidden_layers": [1024],
        "epochs": model.epochs,
        "optimizer": model.optimizer,
        "class_to_idx": model.class_to_idx,
        "state_dict": model.state_dict()
    }
    torch.save(checkpoint, "checkpoint.pth")

# Save checkpoint
save_checkpoint(model)
```
## Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
```
# Function that loads a checkpoint and rebuilds the model
def load_checkpoint(file_path):
    """ Rebuild model based on a checkpoint and return it """
    checkpoint = torch.load(file_path)
    # Load pretrained feature network
    model = models.__dict__[checkpoint["feature_arch"]](pretrained=True)
    # Freeze parameters
    for param in model.parameters():
        param.requires_grad = False
    # Add the first layer, input size depends on feature architecture
    classifier = nn.ModuleList([
        nn.Linear(model.classifier.in_features, checkpoint["hidden_layers"][0]),
        nn.ReLU(),
        nn.Dropout(),
    ])
    layer_sizes = zip(checkpoint["hidden_layers"][:-1],
                      checkpoint["hidden_layers"][1:])
    for h1, h2 in layer_sizes:
        classifier.extend([
            nn.Linear(h1, h2),
            nn.ReLU(),
            nn.Dropout(),
        ])
    # Add output layer
    classifier.extend([nn.Linear(checkpoint["hidden_layers"][-1],
                                 checkpoint["output_size"]),
                       nn.LogSoftmax(dim=1)])
    # Replace classifier
    model.classifier = nn.Sequential(*classifier)
    # Set state dict
    model.load_state_dict(checkpoint["state_dict"])
    # Append parameters
    model.epochs = checkpoint["epochs"]
    model.optimizer = checkpoint["optimizer"]
    model.class_to_idx = checkpoint["class_to_idx"]
    return model

# Load checkpoint
model = load_checkpoint('checkpoint.pth')
```
# Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network.
## Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training.
First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.resize) methods. Then you'll need to crop out the center 224x224 portion of the image.
Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so: `np_image = np.array(pil_image)`.
As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.
And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
```
def process_image(image):
    ''' Scales, crops, and normalizes a PIL image for a PyTorch model,
        returns a tensor
    '''
    # Resize image so the shortest side is 256 pixels
    if image.size[0] < image.size[1]:
        image.thumbnail((256, image.size[1]))
    else:
        image.thumbnail((image.size[0], 256))
    # Crop out the center 224x224 portion
    image = image.crop((
        image.size[0] / 2 - 112,
        image.size[1] / 2 - 112,
        image.size[0] / 2 + 112,
        image.size[1] / 2 + 112
    ))
    # Convert image to numpy array and scale channel values to [0, 1]
    np_image = np.array(image) / 255.
    # Normalize image array
    np_image[:,:,0] = (np_image[:,:,0] - 0.485) / 0.229
    np_image[:,:,1] = (np_image[:,:,1] - 0.456) / 0.224
    np_image[:,:,2] = (np_image[:,:,2] - 0.406) / 0.225
    # Transpose array so the color channel comes first
    np_image = np_image.transpose(2, 0, 1)
    # Return image array as a tensor
    return torch.from_numpy(np_image).float()
```
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
```
def imshow(image, ax=None, title=None):
    """Imshow for Tensor."""
    if ax is None:
        fig, ax = plt.subplots()
    # PyTorch tensors assume the color channel is the first dimension,
    # but matplotlib assumes it's the third dimension
    image = image.numpy().transpose((1, 2, 0))
    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean
    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)
    ax.imshow(image)
    return ax

# Test
image = Image.open("flowers/test/1/image_06743.jpg")
imshow(process_image(image))
```
## Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
```
def predict(image_path, model, topk=5):
    ''' Predict the class (or classes) of an image using a trained deep learning model.
    '''
    # Check for CUDA
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    # Load and process image
    image = process_image(Image.open(image_path))
    image.unsqueeze_(0)
    image = image.to(device)
    # Predict class
    model.eval()
    with torch.no_grad():
        output = model(image)
    # Get topk probabilities and classes
    prediction = F.softmax(output.data, dim=1).topk(topk)
    probs = prediction[0].data.cpu().numpy().squeeze()
    classes = prediction[1].data.cpu().numpy().squeeze()
    # Get actual class labels by inverting the class_to_idx mapping
    inverted_dict = {v: k for k, v in model.class_to_idx.items()}
    classes = [inverted_dict[k] for k in classes]
    return probs, classes

# Test
probs, classes = predict("flowers/test/1/image_06743.jpg", model)
print(probs)
print(classes)
```
## Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
<img src='assets/inference_example.png' width=300px>
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
```
def plot_prediction(image_path, true_class_id, topk=5):
    ''' Plot input image and the top k predictions '''
    # Get image and prediction
    image = Image.open(image_path)
    probs, classes = predict(image_path, model, topk)
    # Create plot grid
    fig, (ax1, ax2) = plt.subplots(figsize=(6, 9), ncols=1, nrows=2)
    # Prepare plot for top predicted flower
    ax1.set_title(cat_to_name[true_class_id])
    ax1.imshow(image)
    ax1.axis('off')
    # Prepare barchart of topk classes
    ax2.barh(range(topk), probs)
    ax2.set_yticks(range(topk))
    ax2.set_yticklabels([cat_to_name[x] for x in classes])
    ax2.invert_yaxis()

# Test
plot_prediction("flowers/test/1/image_06743.jpg", "1", 5)
```
This notebook makes use of the evaluation techniques developed previously to select the best algorithms for this problem.
```
import pandas as pd
import numpy as np
import tubesml as tml
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Lasso, Ridge, SGDRegressor
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
import xgboost as xgb
import lightgbm as lgb
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import sys
sys.path.append("..")
from source.clean import general_cleaner
from source.transf_category import recode_cat, make_ordinal
from source.transf_numeric import tr_numeric
import source.transf_univ as dfp
import source.utility as ut
import source.report as rp
import warnings
warnings.filterwarnings("ignore",
                        message="The dummies in this set do not match the ones in the train set, we corrected the issue.")

pd.set_option('display.max_columns', 500)
```
# Data preparation
Get the data ready to flow into the pipeline
```
df_train = pd.read_csv('../data/train.csv')
df_test = pd.read_csv('../data/test.csv')
df_train['Target'] = np.log1p(df_train.SalePrice)
df_train = df_train[df_train.GrLivArea < 4500].copy().reset_index()
del df_train['SalePrice']
train_set, test_set = ut.make_test(df_train,
                                   test_size=0.2, random_state=654,
                                   strat_feat='Neighborhood')
y = train_set['Target'].copy()
del train_set['Target']
y_test = test_set['Target']
del test_set['Target']
```
## Building the pipeline
This was introduced in another notebook and imported above
```
numeric_pipe = Pipeline([('fs', tml.DtypeSel(dtype='numeric')),
                         ('imputer', tml.DfImputer(strategy='median')),
                         ('transf', tr_numeric())])

cat_pipe = Pipeline([('fs', tml.DtypeSel(dtype='category')),
                     ('imputer', tml.DfImputer(strategy='most_frequent')),
                     ('ord', make_ordinal(['BsmtQual', 'KitchenQual',
                                           'ExterQual', 'HeatingQC'])),
                     ('recode', recode_cat()),
                     ('dummies', tml.Dummify(drop_first=True))])

processing_pipe = tml.FeatureUnionDf(transformer_list=[('cat_pipe', cat_pipe),
                                                       ('num_pipe', numeric_pipe)])
```
## Evaluation method
We have seen how it works in the previous notebook, we have thus imported the necessary functions above.
```
models = [('lasso', Lasso(alpha=0.01)), ('ridge', Ridge()), ('sgd', SGDRegressor()),
          ('forest', RandomForestRegressor(n_estimators=200)), ('xtree', ExtraTreesRegressor(n_estimators=200)),
          ('svr', SVR()),
          ('kneig', KNeighborsRegressor()),
          ('xgb', xgb.XGBRegressor(n_estimators=200, objective='reg:squarederror')),
          ('lgb', lgb.LGBMRegressor(n_estimators=200))]

mod_name = []
rmse_train = []
rmse_test = []
mae_train = []
mae_test = []

folds = KFold(5, shuffle=True, random_state=541)

for model in models:
    train = train_set.copy()
    test = test_set.copy()
    print(model[0])
    mod_name.append(model[0])
    pipe = [('gen_cl', general_cleaner()),
            ('processing', processing_pipe),
            ('scl', dfp.df_scaler())] + [model]
    model_pipe = Pipeline(pipe)
    inf_preds = tml.cv_score(data=train, target=y, cv=folds, estimator=model_pipe)
    model_pipe.fit(train, y)
    preds = model_pipe.predict(test)
    rp.plot_predictions(test, y_test, preds, savename=model[0]+'_preds.png')
    rp.plot_predictions(train, y, inf_preds, savename=model[0]+'_inf_preds.png')
    rmse_train.append(np.sqrt(mean_squared_error(y, inf_preds)))
    rmse_test.append(np.sqrt(mean_squared_error(y_test, preds)))
    mae_train.append(mean_absolute_error(np.expm1(y), np.expm1(inf_preds)))
    mae_test.append(mean_absolute_error(np.expm1(y_test), np.expm1(preds)))
    print(f'\tTrain set RMSE: {round(np.sqrt(mean_squared_error(y, inf_preds)), 4)}')
    print(f'\tTrain set MAE: {round(mean_absolute_error(np.expm1(y), np.expm1(inf_preds)), 2)}')
    print(f'\tTest set RMSE: {round(np.sqrt(mean_squared_error(y_test, preds)), 4)}')
    print(f'\tTest set MAE: {round(mean_absolute_error(np.expm1(y_test), np.expm1(preds)), 2)}')
    print('_'*40)
    print('\n')

results = pd.DataFrame({'model_name': mod_name,
                        'rmse_train': rmse_train, 'rmse_test': rmse_test,
                        'mae_train': mae_train, 'mae_test': mae_test})
results

results.sort_values(by='rmse_train').head(2)
results.sort_values(by='rmse_test').head(2)
results.sort_values(by='mae_train').head(2)
results.sort_values(by='mae_test').head(2)
```
```
%matplotlib inline
```
Training a Classifier
============================
So far, you have seen how to define neural networks, compute the loss, and make
updates to the weights of the network.
Now you might be thinking,
What about data?
------------------------
Generally, when you have to deal with image, text, audio or video data, you can use
standard Python packages to load the data into a NumPy array. Then you can convert
that array into a ``torch.*Tensor``.
- For images, packages such as Pillow and OpenCV are useful.
- For audio, packages such as SciPy and LibROSA come in handy.
- For text, either raw Python or Cython based loading works, and NLTK or SpaCy are
  also useful.
Specifically for vision, there is a package called ``torchvision`` that has data
loaders for common datasets such as ImageNet, CIFAR10 and MNIST
(``torchvision.datasets``), as well as data transformers for images
(``torch.utils.data.DataLoader``).
This provides a huge convenience and avoids writing the same boilerplate code
every time.
For this tutorial, we will use the CIFAR10 dataset. It has the classes:
'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse',
'ship', 'truck'. The images in CIFAR10 are of size 3x32x32, i.e. 32x32 pixel
images with 3 color channels.
.. figure:: /_static/img/cifar10.png
   :alt: cifar10

   cifar10
Training an image classifier
----------------------------
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using
   ``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
1. Load and normalize CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using ``torchvision``, it is extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```
The output of torchvision datasets are PILImage images of range [0, 1].
We transform them to Tensors normalized to the range [-1, 1].
<div class="alert alert-info"><h4>Note</h4><p>If running on Windows and you get a BrokenPipeError,
try setting the num_worker of torch.utils.data.DataLoader() to 0.</p></div>
```
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
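As a quick check of the arithmetic, ``transforms.Normalize`` computes ``(input - mean) / std`` per channel, so with a mean and std of 0.5 the [0, 1] range maps to [-1, 1]. A minimal sketch (the ``normalize`` helper below is illustrative, not part of torchvision):

```python
# Per-channel normalization as performed by transforms.Normalize:
# out = (in - mean) / std
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

print(normalize(0.0))  # -1.0
print(normalize(0.5))  # 0.0
print(normalize(1.0))  # 1.0
```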
Let us show some of the training images, for fun.
```
import matplotlib.pyplot as plt
import numpy as np

# Function to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# Get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# Show images
imshow(torchvision.utils.make_grid(images))
# Print labels
```
2. Define a Convolutional Neural Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Copy the neural network from the earlier Neural Networks section and modify it to
take 3-channel images (instead of the 1-channel images it was originally defined for).
```
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
```
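To see where the ``16 * 5 * 5`` input size of ``fc1`` comes from, you can trace the spatial dimensions by hand: each (unpadded) 5x5 convolution shrinks a side by 4, and each 2x2 max-pool halves it. A small sketch of that arithmetic:

```python
# Each valid 5x5 conv shrinks a side by 4; each 2x2 max-pool halves it
n = 32             # CIFAR10 images are 32x32
n = (n - 4) // 2   # after conv1 + pool: 14
n = (n - 4) // 2   # after conv2 + pool: 5
fc1_in = 16 * n * n  # conv2 outputs 16 channels
print(n, fc1_in)   # 5 400
```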
3. Define a Loss function and optimizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let's use a classification Cross-Entropy loss and SGD with momentum.
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
4. Train the network
^^^^^^^^^^^^^^^^^^^^
This is when things start to get interesting.
We simply have to loop over our data iterator, feed the inputs to the network,
and optimize.
```
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished Training')
```
Let's quickly save our trained model:
```
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
```
See `here <https://pytorch.org/docs/stable/notes/serialization.html>`_
for more details on saving PyTorch models.
5. Test the network on the test data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We have trained the network for 2 passes over the training dataset.
But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network
outputs, and checking it against the ground-truth. If the prediction is
correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
```
dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Next, let's load back in our saved model (note: saving and re-loading the model
wasn't necessary here, we only did it to illustrate how to do so):
```
net = Net()
net.load_state_dict(torch.load(PATH))
```
Okay, now let us see what the neural network thinks these examples above are:
```
outputs = net(images)
```
The outputs are energies for the 10 classes. The higher the energy for a
class, the more the network thinks that the image is of that particular class.
So, let's get the index of the highest energy:
```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
```
The results seem pretty good.
Let us look at how the network performs on the whole dataset.
```
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
```
That looks way better than chance, which is 10% accuracy (randomly picking
one class out of 10). Seems like the network learnt something.
Hmm, what are the classes that performed well, and the classes that did
not perform well:
```
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}

# again no gradients needed
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predictions = torch.max(outputs, 1)
        # collect the correct predictions for each class
        for label, prediction in zip(labels, predictions):
            if label == prediction:
                correct_pred[classes[label]] += 1
            total_pred[classes[label]] += 1

# print accuracy for each class
for classname, correct_count in correct_pred.items():
    accuracy = 100 * float(correct_count) / total_pred[classname]
    print("Accuracy for class {:5s} is: {:.1f} %".format(classname,
                                                         accuracy))
```
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
----------------
Just like how you transfer a Tensor onto the GPU, you transfer the neural
net onto the GPU. Let's first define our device as the first visible cuda
device if we have CUDA available:
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
```
The rest of this section assumes that ``device`` is a CUDA device.
Then these methods will recursively go over all modules and convert their
parameters and buffers to CUDA tensors:
.. code:: python

    net.to(device)

Remember that you will have to send the inputs and targets at every step
to the GPU too:
.. code:: python

    inputs, labels = data[0].to(device), data[1].to(device)

Why don't I notice a MASSIVE speedup compared to CPU? Because your network
is really small.
**Exercise:** Try increasing the width of your network (argument 2 of the
first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` -- they need
to be the same number), and see what kind of speedup you get.
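To get a feel for how much bigger the network becomes when you widen it, you can count the parameters in the two convolution layers. A rough sketch (the ``conv_params`` helper and the width of 32 are just for illustration):

```python
# Parameters in Conv2d(in_c, out_c, k): one k*k kernel per (in, out)
# channel pair, plus one bias per output channel
def conv_params(in_c, out_c, k):
    return out_c * in_c * k * k + out_c

# original: conv1 = Conv2d(3, 6, 5), conv2 = Conv2d(6, 16, 5)
orig = conv_params(3, 6, 5) + conv_params(6, 16, 5)
# widened: the shared width 6 -> 32 (argument 2 of conv1 must equal argument 1 of conv2)
wide = conv_params(3, 32, 5) + conv_params(32, 16, 5)
print(orig, wide)  # 2872 15248
```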
**Goals achieved**:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images
Training on multiple GPUs
-------------------------
If you want to see even more MASSIVE speedup using all of your GPUs,
please check out :doc:`data_parallel_tutorial`.
Where do I go next?
-----------------------
- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
- `Train a state-of-the-art ResNet network on imagenet`_
- `Train a face generator using Generative Adversarial Networks`_
- `Train a word-level language model using Recurrent LSTM networks`_
- `More examples`_
- `More tutorials`_
- `Discuss PyTorch on the Forums`_
- `Chat with other users on Slack`_
<a href="https://colab.research.google.com/github/NoerNikmat/machine_learning_models_for_absenteeism_at_work_dataset/blob/main/1_Data_Preparation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# DATA PREPARATION FOR MACHINE LEARNING MODELS
Using the *Absenteeism at work* UCI dataset
## Programming with Python
### Import Dataset from Kaggle
Install the Kaggle CLI to download the dataset into Google Colab
```
!pip install -q kaggle
```
Upload Kaggle API key
```
from google.colab import files
files.upload()
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
```
Download dataset from Kaggle
```
! kaggle datasets download -d loganalive/absenteeism-at-work-an-uci-dataset
!ls
!unzip -q absenteeism-at-work-an-uci-dataset.zip
!ls
```
### Import Library
```
import pandas as pd
import numpy as np
absent = pd.read_csv('Absenteeism_at_work.csv')
absent.head(20)
absent['Work load Average/day ']
```
Dimensions of data
```
shape = absent.shape
print (shape)
```
Data type for each attribute
```
types = absent.dtypes
print(types)
absent
# Null values in dataset
absent_data = pd.DataFrame(absent.isnull().sum())
absent_data = absent_data.rename(columns={0:"Absent_sum"})
absent_data["Absent Percent"] = (absent_data["Absent_sum"]/len(absent))*100
absent_data
absent.describe()
pd.set_option('display.width', 100)
pd.set_option('display.precision', 3)
description = absent.describe()
print(description)
class_counts = absent.groupby('Absenteeism time in hours').size()
print(class_counts)
correlations = absent.corr(method = 'pearson')
print(correlations)
```
The most common method for calculating correlation is Pearson's correlation coefficient, which assumes a normal distribution of the attributes involved. A correlation of -1 or 1 shows a full negative or positive correlation respectively, whereas a value of 0 shows no correlation at all.
The matrix lists all attributes across the top and down the side, giving the correlation between all pairs of attributes (twice, because the matrix is symmetrical). The diagonal, running from the top-left to the bottom-right corner of the matrix, shows the perfect correlation of each attribute with itself.
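As a toy illustration of these properties (the column names and values here are made up for the example):

```python
import pandas as pd

# Two perfectly linearly related columns and one anti-correlated column
df = pd.DataFrame({'a': [1, 2, 3, 4, 5],
                   'b': [2, 4, 6, 8, 10],
                   'c': [5, 4, 3, 2, 1]})
corr = df.corr(method='pearson')
print(corr)
# The diagonal is 1 (each attribute against itself), 'a' vs 'b' is +1,
# 'a' vs 'c' is -1, and the matrix is symmetrical.
```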
```
skew = absent.skew()
print(skew)
```
The skew result shows a positive (right) or negative (left) skew.
Values closer to zero show less skew.
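A toy example of how the sign of the skew reflects the shape of the distribution (the data here is made up):

```python
import pandas as pd

# A long tail of large values pulls the skew positive (right skew);
# perfectly symmetric data has zero skew
right_skewed = pd.Series([1, 1, 1, 2, 2, 3, 10])
symmetric = pd.Series([1, 2, 3, 4, 5, 6, 7])
print(right_skewed.skew())
print(symmetric.skew())
```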
```
def unique(list1):
    list_set = set(list1)
    unique_list = list(list_set)
    for x in unique_list:
        print(x)

abtag = absent['Absenteeism time in hours']
unique(abtag)
abtag2 = abtag[absent['Absenteeism time in hours']]
unique(abtag2)

# add categorical target column as per project requirement
absent['Absenteeism categories'] = np.where((absent['Absenteeism time in hours'] >= 0)&(absent['Absenteeism time in hours'] <= 20), "Group 0",
                                   np.where((absent['Absenteeism time in hours'] >= 21)&(absent['Absenteeism time in hours'] <= 40), "Group 1",
                                   np.where((absent['Absenteeism time in hours'] >= 41)&(absent['Absenteeism time in hours'] <= 60), "Group 2",
                                   np.where((absent['Absenteeism time in hours'] >= 61)&(absent['Absenteeism time in hours'] <= 80), "Group 3",
                                   np.where((absent['Absenteeism time in hours'] >= 81)&(absent['Absenteeism time in hours'] <= 100), "Group 4",
                                   np.where((absent['Absenteeism time in hours'] >= 101)&(absent['Absenteeism time in hours'] <= 120), "Group 5", 0))
                                   ))))

absent.head(20)
absent['Absenteeism categories'].tail()
```
Formatting to proper data type
```
absent['followUp_req'] = np.where(absent['Reason for absence']<= 21,1, 0)
absent['Reason for absence'] = absent['Reason for absence'].astype('category')
absent['Month of absence'] = absent['Month of absence'].astype('category')
absent['Day of the week'] = absent['Day of the week'].astype('category')
absent['Seasons'] = absent['Seasons'].astype('category')
absent['Disciplinary failure'] = absent['Disciplinary failure'].astype('category')
absent['Education'] = absent['Education'].astype('category')
absent['Social drinker'] = absent['Social drinker'].astype('category')
absent['Social smoker'] = absent['Social smoker'].astype('category')
absent['Pet'] = absent['Pet'].astype('category')
absent['followUp_req'] = absent['followUp_req'].astype('category')
absent['Absenteeism categories'] = absent['Absenteeism categories'].astype('category')
absent.info()
# store two datasets, one for the continuous and one for the categorical target
dataset_continuous = absent.drop('Absenteeism categories', axis=1)
dataset_categorical = absent.drop('Absenteeism time in hours',axis=1)
print(dataset_continuous.shape)
print(dataset_categorical.shape)
# write the training data to file
dataset_continuous.to_csv('cleanDataset_continuousTarget.csv',index=False)
dataset_categorical.to_csv('cleanDataset_categoricalTarget.csv',index=False)
# get the test dataset
test_path = 'Absenteeism_at_work.csv'
data_test = pd.read_csv(test_path, decimal=",")
# preprocess the test dataset
# adding new column named 'followUp_req' based on whether reason for absence required follow up or not
data_test['followUp_req'] = np.where(data_test['Reason for absence'] <= 21, 1, 0)
# add categorical target column as per project requirement
data_test['Absenteeism categories'] = np.where(data_test['Absenteeism time in hours'] == 0, "Group 0",
                                      np.where(data_test['Absenteeism time in hours'] == 1, "Group 1",
                                      np.where(data_test['Absenteeism time in hours'] == 2, "Group 2",
                                      np.where(data_test['Absenteeism time in hours'] == 3, "Group 3",
                                      np.where((data_test['Absenteeism time in hours'] >= 4)&(data_test['Absenteeism time in hours'] <= 7), "Group 4",
                                      np.where(data_test['Absenteeism time in hours'] == 8, "Group 5",
                                      np.where(data_test['Absenteeism time in hours'] >= 9, "Group 6", 0))
                                      )))))
data_test['Reason for absence'] = data_test['Reason for absence'].astype('category').cat.codes
data_test['Month of absence'] = data_test['Month of absence'].astype('category').cat.codes
data_test['Day of the week'] = data_test['Day of the week'].astype('category').cat.codes
data_test['Seasons'] = data_test['Seasons'].astype('category').cat.codes
data_test['Disciplinary failure'] = data_test['Disciplinary failure'].astype('category').cat.codes
data_test['Education'] = data_test['Education'].astype('category').cat.codes
data_test['Social drinker'] = data_test['Social drinker'].astype('category').cat.codes
data_test['Social smoker'] = data_test['Social smoker'].astype('category').cat.codes
data_test['Pet'] = data_test['Pet'].astype('category').cat.codes
data_test['followUp_req'] = data_test['followUp_req'].astype('category').cat.codes
data_test['Absenteeism categories'] = data_test['Absenteeism categories'].astype('category').cat.codes
dataset_test = data_test
print(dataset_test.shape)
data_test.head()
dataset_test.to_csv('cleanDataset_categoricalTarget_test.csv',index=False)
```
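As a side note, the deeply nested `np.where` binning above can be written more readably with `np.select`; a minimal sketch with the same group boundaries (the sample hours here are made up for illustration):

```
import numpy as np

# Hypothetical absenteeism hours, one value per bin
hours = np.array([0, 1, 2, 3, 5, 8, 12])

conditions = [
    hours == 0,
    hours == 1,
    hours == 2,
    hours == 3,
    (hours >= 4) & (hours <= 7),
    hours == 8,
    hours >= 9,
]
labels = ['Group 0', 'Group 1', 'Group 2', 'Group 3', 'Group 4', 'Group 5', 'Group 6']

# np.select pairs each condition with its label, replacing the nested np.where
groups = np.select(conditions, labels, default='Group 0')
print(groups)
```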
| github_jupyter |
# Bidirectional LSTM - IMDB sentiment classification
see **https://github.com/fchollet/keras/blob/master/examples/imdb_bidirectional_lstm.py**
```
KERAS_MODEL_FILEPATH = '../../demos/data/imdb_bidirectional_lstm/imdb_bidirectional_lstm.h5'
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Input, Bidirectional
from keras.datasets import imdb
from keras.callbacks import EarlyStopping, ModelCheckpoint
import json
max_features = 20000
maxlen = 200 # cut texts after this number of words (among top max_features most common words)
print('Loading data...')
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=max_features)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print("Pad sequences (samples x time)")
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 64, input_length=maxlen))
model.add(Bidirectional(LSTM(32)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
# Model saving callback
checkpointer = ModelCheckpoint(filepath=KERAS_MODEL_FILEPATH, monitor='val_acc', verbose=1, save_best_only=True)
# Early stopping
early_stopping = EarlyStopping(monitor='val_acc', verbose=1, patience=2)
# train
batch_size = 128
epochs = 10
model.fit(X_train, y_train,
validation_data=[X_test, y_test],
batch_size=batch_size, epochs=epochs, verbose=2,
callbacks=[checkpointer, early_stopping])
```
**sample data**
```
word_index = imdb.get_word_index()
word_dict = {idx: word for word, idx in word_index.items()}
sample = []
for idx in X_train[0]:
if idx >= 3:
sample.append(word_dict[idx-3])
elif idx == 2:
sample.append('-')
' '.join(sample)
with open('../../demos/data/imdb_bidirectional_lstm/imdb_dataset_word_index_top20000.json', 'w') as f:
f.write(json.dumps({word: idx for word, idx in word_index.items() if idx < max_features}))
with open('../../demos/data/imdb_bidirectional_lstm/imdb_dataset_word_dict_top20000.json', 'w') as f:
f.write(json.dumps({idx: word for word, idx in word_index.items() if idx < max_features}))
sample_test_data = []
for i in np.random.choice(range(X_test.shape[0]), size=1000, replace=False):
sample_test_data.append({'values': X_test[i].tolist(), 'label': y_test[i].tolist()})
with open('../../demos/data/imdb_bidirectional_lstm/imdb_dataset_test.json', 'w') as f:
f.write(json.dumps(sample_test_data))
```
| github_jupyter |
## TODO:
<ul>
<li>Use LibreOffice to find 2,000 misspelled words (80h)</li>
<li>Classify the words by type (80h)</li>
</ul>
## <b>Italian Pipeline</b>
```
# load hunspell
import urllib
import json
import numpy as np
import pandas as pd
import itertools
from matplotlib import pyplot as plt
import re
suggestions = pd.DataFrame(data)
suggestions
suggestions.to_csv('suggestions.auto.csv')
import hunspell
it_spellchecker = hunspell.HunSpell('/home/rgomes/dictionaries/dictionaries/it/index.dic', '/home/rgomes/dictionaries/dictionaries/it/index.aff')
with open('../auto.spellchecker.results.filtered.json', encoding='utf-8') as data_file:
data = json.loads(data_file.read())
data = list(filter(lambda x: x,data))
a = map(lambda x: x['word'], data)
b = map(lambda x : (x,it_spellchecker.spell(x)), a)
errors_hunspell = [x for x in b if not x[1]]
ac_errors = filter(lambda x: re.search(r'[À-ž\'\`]', x[0]) ,errors_hunspell)
# for item in list(ac_errors):
#print(item[0] + '\n')
corrected_ac_errors = []
with open('../italian_accented_erros.txt', encoding='utf-8') as data_file2:
lines = data_file2.readlines()
corrected_ac_errors = list(filter(lambda y: y != '',map(lambda x: x.rstrip('\n'), lines)))
corrected_words = []
for index,x in enumerate(ac_errors):
if x[0] != corrected_ac_errors[index]:
corrected_words.append((x[0], corrected_ac_errors[index]))
all_words = []
with open('../italian_words_all.txt', encoding='utf-8') as data_file_all:
lines = data_file_all.readlines()
all_words = list(map(lambda x: x.rstrip('\n').lower(), lines))
# str.replace would only remove the exact whole substring; strip each punctuation character instead
all_words = list(map(lambda x: re.sub(r'[!#$%&()*+,./:;<=>?@\[\]_{|}]', '', x), all_words))
def histogram(items):
    d = {}
    for i in items:
        if i in d:
            d[i] += 1
        else:
            d[i] = 1
    return d
def plotHistogram(data):
h = histogram(data)
h = sorted(h.items(), key=lambda x: x[1], reverse=True)
h = map(lambda x: x[1], h)
    # remove the words that appear only once
h = filter(lambda x: x > 1, h)
plt.plot(list(h))
plt.show()
suggestions_csv = pd.read_csv('/home/rgomes/Downloads/suggestions filtered - suggestions.auto.csv')
suggestions_csv = suggestions_csv.replace(np.nan, '', regex=True)
suggestions_csv = suggestions_csv.drop(['is_italian_word', 'suggestions', 'HELPFUL LINK', 'Already removed words'], axis=1)
suggestions_corrected = []
for _, row in suggestions_csv.iterrows():
if row['spelling_correction']:
suggestions_corrected.append((row['word'], row['spelling_correction']))
suggestions_corrected
print(len(suggestions_corrected))
h = histogram(all_words)
h = sorted(h.items(), key=lambda x: x[1], reverse=True)
#######
# keep only the corrected words that occur more than once
combined_corrections_map = list(set(corrected_words + suggestions_corrected))
print('Total corrections {}'.format(len(combined_corrections_map)))
combined_words_list = list(map(lambda x : x[0].lower(), combined_corrections_map))
#print(combined_words_list)
mapped_combined_words = filter(lambda x : x[0].lower() in combined_words_list, h)
total_words = list(mapped_combined_words)
print(total_words[0])
count = 0
for w in total_words:
count = count + w[1]
print(count)
combined_corrections_map
print(len(corrected_words), len(suggestions_corrected))
a_ordered = filter(lambda x: re.search(r'[À-ž\'\`]', x[0]),h)
b_ordered = filter(lambda x: not it_spellchecker.spell(x[0]),a_ordered)
c_ordered = filter(lambda x: not(x[0] in combined_words_list),b_ordered)
d = list(c_ordered)
count2 = 0
for w in d:
count2 = count2 + w[1]
print(count2)
with open('../ordered_last_errors.txt', 'w') as ordered_last_errors:
for item in d:
ordered_last_errors.write(item[0] + '\n')
last_corrections = []
with open('../ordered_last_errors_corrected.txt') as ordered_last_corrections:
lines = list(map(lambda x: x.rstrip('\n').lower(), ordered_last_corrections))
for index, item in enumerate(d):
if item[0] != lines[index]:
last_corrections.append((item[0],lines[index]))
print(len(last_corrections))
h = histogram(all_words)
h = sorted(h.items(), key=lambda x: x[1], reverse=True)
# keep only the corrected words that occur more than once
combined_corrections_map = list(set(corrected_words + suggestions_corrected + last_corrections))
#combined_corrections_map = list(map(lambda x : (x[0].replace('!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~', ''), combined_corrections_map)))
print('Total corrections {}'.format(len(combined_corrections_map)))
combined_words_list = list(map(lambda x : x[0].lower(), combined_corrections_map))
#print(combined_words_list)
mapped_combined_words = list(filter(lambda x : x[0].lower() in combined_words_list, h))
#remove rare cases and outliers
# todo: remove nonsense words verified by norton
total_words = list(filter(lambda x: x[1] > 1 and x[1] < 2200,mapped_combined_words))
print(total_words[0])
count = 0
for w in total_words:
count = count + w[1]
print(count)
all_count_dict = dict((a[0], a) for a in total_words)
all_corrections_dict = dict((a[0], a) for a in combined_corrections_map)
all_data = []
for item in all_count_dict:
if all_corrections_dict.get(item):
all_data.append((item, all_count_dict[item][1], all_corrections_dict[item][1]))
print(len(all_data))
df = pd.DataFrame(all_data)
df.to_csv('../final_corrections.csv')
```
| github_jupyter |
# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
```
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
```
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
    batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
```
I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window by the next `num_steps` characters. In this way we can feed batches to the network, and the cell states will carry over from one batch to the next.
```
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
    if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = tf.identity(state, name='final_state')
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
    # Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
```
## Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
```
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
```
## Write out the graph for TensorBoard
```
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
```
## Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.
```
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can feed that new character back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
```
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
| github_jupyter |
# Bayesian Estimation Supersedes the T-Test
```
%matplotlib inline
import numpy as np
import pymc3 as pm
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
```
This model replicates the example used in:
Kruschke, John. (2012) **Bayesian estimation supersedes the t-test**. *Journal of Experimental Psychology*: General.
### The Problem
Several statistical inference procedures involve the comparison of two groups. We may be interested in whether one group is larger than another, or simply different from the other. We require a statistical model for this because true differences are usually accompanied by measurement or stochastic noise that prevent us from drawing conclusions simply from differences calculated from the observed data.
The *de facto* standard for statistically comparing two (or more) samples is to use a statistical test. This involves expressing a null hypothesis, which typically claims that there is no difference between the groups, and using a chosen test statistic to determine whether the distribution of the observed data is plausible under that hypothesis. The null hypothesis is rejected when the calculated test statistic exceeds some pre-specified threshold value.
Unfortunately, it is not easy to conduct hypothesis tests correctly, and their results are very easy to misinterpret. Setting up a statistical test involves several subjective choices (*e.g.* statistical test to use, null hypothesis to test, significance level) by the user that are rarely justified based on the problem or decision at hand, but rather, are usually based on traditional choices that are entirely arbitrary (Johnson 1999). The evidence that it provides to the user is indirect, incomplete, and typically overstates the evidence against the null hypothesis (Goodman 1999).
A more informative and effective approach for comparing groups is one based on **estimation** rather than **testing**, and is driven by Bayesian probability rather than frequentist. That is, rather than testing whether two groups are different, we instead pursue an estimate of how different they are, which is fundamentally more informative. Moreover, we include an estimate of uncertainty associated with that difference which includes uncertainty due to our lack of knowledge of the model parameters (epistemic uncertainty) and uncertainty due to the inherent stochasticity of the system (aleatory uncertainty).
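For contrast, the frequentist default reduces the comparison to a single number; a minimal sketch of Welch's t statistic on made-up scores (illustrative only — note that the statistic alone says nothing about the size of the difference or our uncertainty in it):

```
import math
import statistics

def welch_t(a, b):
    # Welch's t statistic for two independent samples with unequal variances
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

print(welch_t([101, 103, 99, 104, 102], [100, 99, 101, 98, 100]))  # ≈ 2.2
```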
## Example: Drug trial evaluation
To illustrate how this Bayesian estimation approach works in practice, we will use a fictitious example from Kruschke (2012) concerning a clinical trial for drug evaluation. The trial aims to evaluate the efficacy of a "smart drug" that is supposed to increase intelligence by comparing IQ scores of individuals in a treatment arm (those receiving the drug) to those in a control arm (those receiving a placebo). There are 47 individuals and 42 individuals in the treatment and control arms, respectively.
```
drug = (101,100,102,104,102,97,105,105,98,101,100,123,105,103,100,95,102,106,
109,102,82,102,100,102,102,101,102,102,103,103,97,97,103,101,97,104,
96,103,124,101,101,100,101,101,104,100,101)
placebo = (99,101,100,101,102,100,97,101,104,101,102,102,100,105,88,101,100,
104,100,100,100,101,102,103,97,101,101,100,101,99,101,100,100,
101,100,99,101,100,102,99,100,99)
y1 = np.array(drug)
y2 = np.array(placebo)
y = pd.DataFrame(dict(value=np.r_[y1, y2], group=np.r_[['drug']*len(drug), ['placebo']*len(placebo)]))
y.hist('value', by='group');
```
The first step in a Bayesian approach to inference is to specify the full probability model that corresponds to the problem. For this example, Kruschke chooses a Student-t distribution to describe the distributions of the scores in each group. This choice adds robustness to the analysis, as a T distribution is less sensitive to outlier observations, relative to a normal distribution. The three-parameter Student-t distribution allows for the specification of a mean $\mu$, a precision (inverse-variance) $\lambda$ and a degrees-of-freedom parameter $\nu$:
$$f(x|\mu,\lambda,\nu) = \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})} \left(\frac{\lambda}{\pi\nu}\right)^{\frac{1}{2}} \left[1+\frac{\lambda(x-\mu)^2}{\nu}\right]^{-\frac{\nu+1}{2}}$$
the degrees-of-freedom parameter essentially specifies the "normality" of the data, since larger values of $\nu$ make the distribution converge to a normal distribution, while small values (close to zero) result in heavier tails.
Thus, the likelihood functions of our model are specified as follows:
$$y^{(treat)}_i \sim T(\nu, \mu_1, \sigma_1)$$
$$y^{(placebo)}_i \sim T(\nu, \mu_2, \sigma_2)$$
As a simplifying assumption, we will assume that the degree of normality $\nu$ is the same for both groups. We will, of course, have separate parameters for the means $\mu_k, k=1,2$ and standard deviations $\sigma_k$.
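As a quick numerical check, the density above can be coded directly; a small sketch (with $\mu=0$, $\lambda=1$, $\nu=1$ it reduces to the standard Cauchy, whose density at zero is $1/\pi \approx 0.318$):

```
import math

def student_t_pdf(x, mu, lam, nu):
    # Three-parameter Student-t density from the formula above;
    # lgamma is used instead of gamma so that large nu does not overflow
    log_coef = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2) + 0.5 * math.log(lam / (math.pi * nu))
    return math.exp(log_coef) * (1 + lam * (x - mu) ** 2 / nu) ** (-(nu + 1) / 2)

print(student_t_pdf(0.0, 0.0, 1.0, 1.0))  # ≈ 0.3183, i.e. 1/pi
```

For large $\nu$ the same function approaches the normal density, matching the "normality" interpretation of the degrees-of-freedom parameter.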
Since the means are real-valued, we will apply normal priors on them, and arbitrarily set the hyperparameters to the pooled empirical mean of the data and twice the pooled empirical standard deviation, which applies very diffuse information to these quantities (and importantly, does not favor one or the other *a priori*).
$$\mu_k \sim N(\bar{x}, 2s)$$
```
μ_m = y.value.mean()
μ_s = y.value.std() * 2
with pm.Model() as model:
group1_mean = pm.Normal('group1_mean', μ_m, sigma=μ_s)
group2_mean = pm.Normal('group2_mean', μ_m, sigma=μ_s)
```
The group standard deviations will be given a uniform prior over a plausible range of values for the variability of the outcome variable, IQ.
In Kruschke's original model, he uses a very wide uniform prior for the group standard deviations, from the pooled empirical standard deviation divided by 1000 to the pooled standard deviation multiplied by 1000. This is a poor choice of prior, because very basic prior knowledge about measures of human cognition dictates that the variation cannot ever be as high as this upper bound. IQ is a standardized measure, and hence this constrains how variable a given population's IQ values can be. When you place such a wide uniform prior on these values, you are essentially putting a lot of prior weight on inadmissible values. In this example, there is little practical difference, but in general it is best to apply as much prior information as you have available to the parameterization of prior distributions.
We will instead set the group standard deviations to have a $\text{Uniform}(1,10)$ prior:
```
σ_low = 1
σ_high = 10
with model:
group1_std = pm.Uniform('group1_std', lower=σ_low, upper=σ_high)
group2_std = pm.Uniform('group2_std', lower=σ_low, upper=σ_high)
```
We follow Kruschke by making the prior for $\nu$ exponentially distributed with a mean of 30; this allocates high prior probability over the regions of the parameter that describe the range from normal to heavy-tailed data under the Student-T distribution.
```
with model:
ν = pm.Exponential('ν_minus_one', 1/29.) + 1
pm.kdeplot(np.random.exponential(30, size=10000), shade=0.5);
```
Since PyMC3 parameterizes the Student-T in terms of precision, rather than standard deviation, we must transform the standard deviations before specifying our likelihoods.
```
with model:
λ1 = group1_std**-2
λ2 = group2_std**-2
group1 = pm.StudentT('drug', nu=ν, mu=group1_mean, lam=λ1, observed=y1)
group2 = pm.StudentT('placebo', nu=ν, mu=group2_mean, lam=λ2, observed=y2)
```
Having fully specified our probabilistic model, we can turn our attention to calculating the comparisons of interest in order to evaluate the effect of the drug. To this end, we can specify deterministic nodes in our model for the difference between the group means and the difference between the group standard deviations. Wrapping them in named `Deterministic` objects signals to PyMC that we wish to record the sampled values as part of the output.
As a joint measure of the groups, we will also estimate the "effect size", which is the difference in means scaled by the pooled estimates of standard deviation. This quantity can be harder to interpret, since it is no longer in the same units as our data, but the quantity is a function of all four estimated parameters.
```
with model:
diff_of_means = pm.Deterministic('difference of means', group1_mean - group2_mean)
diff_of_stds = pm.Deterministic('difference of stds', group1_std - group2_std)
effect_size = pm.Deterministic('effect size',
diff_of_means / np.sqrt((group1_std**2 + group2_std**2) / 2))
```
Now, we can fit the model and evaluate its output.
```
with model:
trace = pm.sample(2000, cores=2)
```
We can plot the stochastic parameters of the model. PyMC's `plot_posterior` function replicates the informative histograms portrayed in Kruschke (2012). These summarize the posterior distributions of the parameters, and present a 95% credible interval and the posterior mean. The plots below are constructed with the final 1000 samples from each of the 2 chains, pooled together.
```
pm.plot_posterior(trace, varnames=['group1_mean','group2_mean', 'group1_std', 'group2_std', 'ν_minus_one'],
color='#87ceeb');
```
Looking at the group differences, we can conclude that there are meaningful differences between the two groups for all three measures. For these comparisons, it is useful to use zero as a reference value (`ref_val`); providing this reference value yields cumulative probabilities for the posterior distribution on either side of the value. Thus, for the difference in means, 99.4% of the posterior probability is greater than zero, which suggests the group means are credibly different. The effect size and differences in standard deviation are similarly positive.
These estimates suggest that the "smart drug" increased not only the expected scores, but also the variability in scores across the sample. So, this does not rule out the possibility that some recipients may be adversely affected by the drug at the same time others benefit.
```
pm.plot_posterior(trace, varnames=['difference of means','difference of stds', 'effect size'],
ref_val=0,
color='#87ceeb');
```
When `forestplot` is called on a trace with more than one chain, it also plots the potential scale reduction parameter, which is used to reveal evidence for lack of convergence; values near one, as we have here, suggest that the model has converged.
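The potential scale reduction statistic is simple to compute by hand; a rough sketch of the Gelman–Rubin formula for an (m, n) array of m chains (an illustration, not PyMC3's internal implementation):

```
import numpy as np

def gelman_rubin(chains):
    # chains: array of shape (m, n) -- m chains of n samples each
    n = chains.shape[1]
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

# Two well-mixed chains from the same distribution: R-hat should be near 1
rng = np.random.default_rng(42)
print(gelman_rubin(rng.normal(size=(2, 1000))))
```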
```
pm.forestplot(trace, varnames=['group1_mean',
'group2_mean']);
pm.forestplot(trace, varnames=['group1_std',
'group2_std',
'ν_minus_one']);
pm.summary(trace,varnames=['difference of means', 'difference of stds', 'effect size'])
```
## References
1. Goodman SN. Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine. 1999;130(12):995-1004. doi:10.7326/0003-4819-130-12-199906150-00008.
2. Johnson D. The insignificance of statistical significance testing. Journal of Wildlife Management. 1999;63(3):763-772.
3. Kruschke JK. Bayesian estimation supersedes the t test. J Exp Psychol Gen. 2013;142(2):573-603. doi:10.1037/a0029146.
The original pymc2 implementation was written by Andrew Straw and can be found here: https://github.com/strawlab/best
Ported to PyMC3 by [Thomas Wiecki](https://twitter.com/twiecki) (c) 2015, updated by Chris Fonnesbeck.
| github_jupyter |
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import torch
torch.set_printoptions(edgeitems=2)
torch.manual_seed(123)
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
from torchvision import datasets, transforms
data_path = '../data-unversioned/p1ch7/'
cifar10 = datasets.CIFAR10(data_path, train=True, download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468),
(0.2470, 0.2435, 0.2616))
]))
cifar10_val = datasets.CIFAR10(data_path, train=False, download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468),
(0.2470, 0.2435, 0.2616))
]))
label_map = {0: 0, 2: 1}
class_names = ['airplane', 'bird']
cifar2 = [(img, label_map[label]) for img, label in cifar10 if label in [0, 2]]
cifar2_val = [(img, label_map[label]) for img, label in cifar10_val if label in [0, 2]]
import torch.nn as nn
n_out = 2
model = nn.Sequential(
nn.Linear(
3072, # <1>
512, # <2>
),
nn.Tanh(),
nn.Linear(
512, # <2>
n_out, # <3>
)
)
def softmax(x):
return torch.exp(x) / torch.exp(x).sum()
x = torch.tensor([1.0, 2.0, 3.0])
softmax(x)
softmax(x).sum()
softmax = nn.Softmax(dim=1)
x = torch.tensor([[1.0, 2.0, 3.0],
[1.0, 2.0, 3.0]])
softmax(x)
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.Softmax(dim=1))
img, _ = cifar2[0]
plt.imshow(img.permute(1, 2, 0))
plt.show()
img_batch = img.view(-1).unsqueeze(0)
out = model(img_batch)
out
_, index = torch.max(out, dim=1)
index
out = torch.tensor([
[0.6, 0.4],
[0.9, 0.1],
[0.3, 0.7],
[0.2, 0.8],
])
class_index = torch.tensor([0, 0, 1, 1]).unsqueeze(1)
truth = torch.zeros((4,2))
truth.scatter_(dim=1, index=class_index, value=1.0)
truth
def mse(out):
return ((out - truth) ** 2).sum(dim=1).mean()
mse(out)
out.gather(dim=1, index=class_index)
def likelihood(out):
prod = 1.0
for x in out.gather(dim=1, index=class_index):
prod *= x
return prod
likelihood(out)
def neg_log_likelihood(out):
return -likelihood(out).log()
neg_log_likelihood(out)
out0 = out.clone().detach()
out0[0] = torch.tensor([0.9, 0.1]) # more right
out2 = out.clone().detach()
out2[0] = torch.tensor([0.4, 0.6]) # slightly wrong
out3 = out.clone().detach()
out3[0] = torch.tensor([0.1, 0.9]) # very wrong
mse_comparison = torch.tensor([mse(o) for o in [out0, out, out2, out3]])
mse_comparison
((mse_comparison / mse_comparison[1]) - 1) * 100
nll_comparison = torch.tensor([neg_log_likelihood(o) for o in [out0, out, out2, out3]])
nll_comparison
((nll_comparison / nll_comparison[1]) - 1) * 100
softmax = nn.Softmax(dim=1)
log_softmax = nn.LogSoftmax(dim=1)
x = torch.tensor([[0.0, 104.0]])
softmax(x)
softmax = nn.Softmax(dim=1)
log_softmax = nn.LogSoftmax(dim=1)
x = torch.tensor([[0.0, 104.0]])
softmax(x)
torch.log(softmax(x))
log_softmax(x)
torch.exp(log_softmax(x))
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
loss = nn.NLLLoss()
img, label = cifar2[0]
out = model(img.view(-1).unsqueeze(0))
loss(out, torch.tensor([label]))
import torch
import torch.nn as nn
import torch.optim as optim
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.NLLLoss()
n_epochs = 100
for epoch in range(n_epochs):
for img, label in cifar2:
out = model(img.view(-1).unsqueeze(0))
loss = loss_fn(out, torch.tensor([label]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
import torch
import torch.nn as nn
import torch.optim as optim
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
model = nn.Sequential(
nn.Linear(3072, 128),
nn.Tanh(),
nn.Linear(128, 2),
nn.LogSoftmax(dim=1))
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.NLLLoss()
n_epochs = 100
for epoch in range(n_epochs):
for imgs, labels in train_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
import torch
import torch.nn as nn
import torch.optim as optim
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.NLLLoss()
n_epochs = 100
for epoch in range(n_epochs):
for imgs, labels in train_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in train_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy: %f" % (correct / total))
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in val_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy: %f" % (correct / total))
model = nn.Sequential(
nn.Linear(3072, 1024),
nn.Tanh(),
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 128),
nn.Tanh(),
nn.Linear(128, 2),
nn.LogSoftmax(dim=1))
model = nn.Sequential(
nn.Linear(3072, 1024),
nn.Tanh(),
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 128),
nn.Tanh(),
nn.Linear(128, 2))
loss_fn = nn.CrossEntropyLoss()
import torch
import torch.nn as nn
import torch.optim as optim
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
model = nn.Sequential(
nn.Linear(3072, 1024),
nn.Tanh(),
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 128),
nn.Tanh(),
nn.Linear(128, 2))
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.CrossEntropyLoss()
n_epochs = 100
for epoch in range(n_epochs):
for imgs, labels in train_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in train_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy: %f" % (correct / total))
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in val_loader:
outputs = model(imgs.view(imgs.shape[0], -1))
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy: %f" % (correct / total))
sum([p.numel() for p in model.parameters()])
sum([p.numel() for p in model.parameters() if p.requires_grad == True])
first_model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
sum([p.numel() for p in first_model.parameters()])
sum([p.numel() for p in nn.Linear(3072, 512).parameters()])
sum([p.numel() for p in nn.Linear(3072, 1024).parameters()])
linear = nn.Linear(3072, 1024)
linear.weight.shape, linear.bias.shape
conv = nn.Conv2d(3, 16, kernel_size=3)
conv.weight.shape
conv.bias.shape
img, _ = cifar2[0]
output = conv(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape
plt.imshow(img.permute(1, 2, 0), cmap='gray')
plt.show()
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.show()
output.shape
conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)
output = conv(img.unsqueeze(0))
output.shape
with torch.no_grad():
conv.bias.zero_()
with torch.no_grad():
conv.weight.fill_(1.0 / 9.0)
output = conv(img.unsqueeze(0))
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.show()
conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)
with torch.no_grad():
conv.weight[:] = torch.tensor([[-1.0, 0.0, 1.0],
[-1.0, 0.0, 1.0],
[-1.0, 0.0, 1.0]])
conv.bias.zero_()
output = conv(img.unsqueeze(0))
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.show()
pool = nn.MaxPool2d(2)
output = pool(img.unsqueeze(0))
output.shape
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(16, 8, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
...)
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(16, 8, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
# WARNING: something missing here
nn.Linear(512, 32),
nn.Tanh(),
nn.Linear(32, 2))
sum([p.numel() for p in model.parameters()])
model(img.unsqueeze(0))
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
self.act1 = nn.Tanh()
self.pool1 = nn.MaxPool2d(2)
self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
self.act2 = nn.Tanh()
self.pool2 = nn.MaxPool2d(2)
self.fc1 = nn.Linear(8 * 8 * 8, 32)
self.act4 = nn.Tanh()
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = self.pool1(self.act1(self.conv1(x)))
out = self.pool2(self.act2(self.conv2(out)))
out = out.view(-1, 8 * 8 * 8)
out = self.act4(self.fc1(out))
out = self.fc2(out)
return out
model = Net()
sum([p.numel() for p in model.parameters()])
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
self.fc1 = nn.Linear(8 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = out.view(-1, 8 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
model = Net()
model(img.unsqueeze(0))
import torch
import torch.nn as nn
import torch.nn.functional as F
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
self.fc1 = nn.Linear(8 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
out = F.max_pool2d(torch.relu(self.conv2(out)), 2)
out = out.view(-1, 8 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
model = Net()
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.CrossEntropyLoss()
n_epochs = 100
for epoch in range(n_epochs):
for imgs, labels in train_loader:
outputs = model(imgs)
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in train_loader:
outputs = model(imgs)
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy: %f" % (correct / total))
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in val_loader:
outputs = model(imgs)
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy: %f" % (correct / total))
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
self.fc1 = nn.Linear(8 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
out = F.max_pool2d(torch.relu(self.conv2(out)), 2)
out = out.view(-1, 8 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
model = Net()
sum([p.numel() for p in model.parameters()])
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(16, 8, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Linear(8*8*8, 32),
nn.Tanh(),
nn.Linear(32, 2))
model(img.unsqueeze(0))
```
| github_jupyter |
# TensorFlow in Practice: A Titanic Walkthrough
## Part 1: Data Loading and Preprocessing
### 1. Read the CSV file with pandas into a pandas.DataFrame object
```
import os
import numpy as np
import pandas as pd
import tensorflow as tf
# read data from file
data = pd.read_csv('data/train.csv')
print(data.info())
```
### 2. Preprocessing
1. Handle missing values (filled with 0 below)
2. Convert the 'Sex' field to an integer
3. Keep the numeric fields and drop the string fields
```
# fill nan values with 0
data = data.fillna(0)
# convert ['male', 'female'] values of Sex to [1, 0]
data['Sex'] = data['Sex'].apply(lambda s: 1 if s == 'male' else 0)
# 'Survived' is the label of one class,
# add 'Deceased' as the other class
data['Deceased'] = data['Survived'].apply(lambda s: 1 - s)
# select features and labels for training
dataset_X = data[['Sex', 'Age', 'Pclass', 'SibSp', 'Parch', 'Fare']]
dataset_Y = data[['Deceased', 'Survived']]
print(dataset_X)
print(dataset_Y)
```
### 3. Split the training data into a training set and a validation set
```
from sklearn.model_selection import train_test_split
# split training data and validation set data
# note: DataFrame.as_matrix() was removed in pandas 1.0; use .values instead
X_train, X_val, y_train, y_val = train_test_split(dataset_X.values, dataset_Y.values,
test_size=0.2,
random_state=42)
```
# Part 2: Building the Computation Graph
### Logistic regression
Logistic regression is one of the simplest and most interpretable classifiers. Mathematically, its prediction function can be written as:
*y = softmax(xW + b)*
where *x* is the input, a row vector of size *1×d* with *d* the number of features; *W* is a weight matrix of size *d×c*, with *c* the number of classes; and *b* is the bias, a *1×c* row vector (these shapes match the `tf.matmul(X, weights) + bias` call below). *softmax* is the normalized exponential function: it maps a *k*-dimensional vector *x* according to
$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_{j=1}^{k} e^{x_j}}$$
so that every element falls into the interval *(0, 1)*. Machine learning commonly uses it to turn confidence-like scores from a discriminant function (such as distances to a separating hyperplane) into probabilities. *softmax* is typically applied in the output layer to produce a single class prediction.
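As a quick numeric illustration of the softmax formula above (a standalone NumPy sketch, separate from the TensorFlow graph built below):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
# probs ≈ [0.659, 0.242, 0.099]: entries lie in (0, 1), sum to 1,
# and preserve the ordering of the raw scores
```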
### 1. Declare input placeholders
TensorFlow uses a data-feed mechanism: the program is not executed interactively; the declaration phase only builds the computation graph. No real data is touched at this stage — the placeholder op declares a slot for the input data, and the slot is filled with actual values only when the computation is run later.
A placeholder is declared with three arguments: the element type of the input data (dtype), its shape, and a name.
```
# declare placeholders for the input data
# the first element of shape is None, meaning any number of records can be fed at once
X = tf.placeholder(tf.float32, shape=[None, 6], name='input')
y = tf.placeholder(tf.float32, shape=[None, 2], name='label')
```
### 2. Declare parameter variables
Variables are declared by constructing tf.Variable() objects directly.
A variable object can be initialized in two ways: deserialized from a VariableDef protocol buffer, or given an initial value as a parameter. The simplest way, as in the code below, is to pass an initial value. The initial value must be a tensor, or a Python object convertible to a tensor via convert_to_tensor(). TensorFlow offers several ways to construct random tensors, e.g. all-zero tensors or tensors drawn from a normal distribution. A variable keeps the shape of its initial value.
```
# declare the trainable variables
weights = tf.Variable(tf.random_normal([6, 2]), name='weights')
bias = tf.Variable(tf.zeros([2]), name='bias')
```
### 3. Build the forward pass
Use ops to define how the labels are computed from the inputs.
While the graph is being built, TensorFlow infers the input and output shape of every node; if an operation is invalid, e.g. adding two matrices with mismatched dimensions, an error is raised immediately.
```
y_pred = tf.nn.softmax(tf.matmul(X, weights) + bias)
```
### 4. Declare the cost function
Use cross entropy as the cost function.
```
# use cross entropy as the cost function
cross_entropy = - tf.reduce_sum(y * tf.log(y_pred + 1e-10),
reduction_indices=1)
# the cost of a batch is the mean cross entropy over its samples
cost = tf.reduce_mean(cross_entropy)
```
#### NOTE
When computing the cross entropy, a tiny epsilon (1e-10 in the code above) is added to the model output y_pred. The reason is that when a component of y_pred gets very close to 0, log(y_pred) tends to negative infinity, the result becomes invalid, gradients can no longer be computed, and training crashes. There are three ways to deal with this:
1. Add a tiny epsilon directly in the computation so it stays valid. This avoids the invalid log, but it means y_pred + epsilon can effectively exceed 1. The sample code uses this approach;
2. Use a clip() function: clamp y_pred to a range such as [epsilon, 1], so that values near 0 are replaced by the tiny epsilon;
3. When the cross entropy evaluates to nan, explicitly set the cost to 0. This sidesteps the problem in the log computation and instead adds error handling at the level of the final cost.
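The clipping approach can be illustrated outside TensorFlow. The NumPy sketch below shows how an unclipped log blows up and how clamping `y_pred` keeps the cross entropy finite (in the TF1 API the analogous call would be `tf.clip_by_value`):

```python
import numpy as np

y_true = np.array([[0.0, 1.0]])
y_pred = np.array([[1.0, 0.0]])   # confidently wrong: the true class gets probability 0

with np.errstate(divide='ignore', invalid='ignore'):
    naive = -np.nansum(y_true * np.log(y_pred), axis=1)     # log(0) -> -inf, cost -> inf

eps = 1e-10
clipped = -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=1)
# clipped cost is finite: -log(1e-10) ≈ 23.03
```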
### 5. Add an optimizer
TensorFlow ships several classic optimization algorithms, such as stochastic gradient descent (SGD), Momentum, Adagrad, ADAM, and RMSProp. The optimizer automatically builds the gradient-computation and backpropagation parts of the graph.
For most optimizers the key hyperparameter is the learning rate, and setting it well is something of an art. Different algorithms can also converge at different speeds on different problems, so it is worth trying several when solving a real task.
```
# minimize the cost with a gradient descent optimizer; TensorFlow builds the backpropagation part of the graph automatically
train_op = tf.train.GradientDescentOptimizer(0.001).minimize(cost)
```
### 6. (Optional) Compute accuracy
```
# compute accuracy
correct_pred = tf.equal(tf.argmax(y, 1), tf.argmax(y_pred, 1))
acc_op = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```
# Part 3: Build and Run the Training Loop
### Start a Session and feed in the data; after training, evaluate the model on the validation set
```
with tf.Session() as sess:
# variables have to be initialized at the first place
tf.global_variables_initializer().run()
# training loop
for epoch in range(10):
total_loss = 0.
for i in range(len(X_train)):
# prepare feed data and run
feed_dict = {X: [X_train[i]], y: [y_train[i]]}
_, loss = sess.run([train_op, cost], feed_dict=feed_dict)
total_loss += loss
# display loss per epoch
print('Epoch: %04d, total loss=%.9f' % (epoch + 1, total_loss))
print('Training complete!')
# Accuracy calculated by TensorFlow
accuracy = sess.run(acc_op, feed_dict={X: X_val, y: y_val})
print("Accuracy on validation set: %.9f" % accuracy)
# Accuracy calculated by NumPy
pred = sess.run(y_pred, feed_dict={X: X_val})
correct = np.equal(np.argmax(pred, 1), np.argmax(y_val, 1))
numpy_accuracy = np.mean(correct.astype(np.float32))
print("Accuracy on validation set (numpy): %.9f" % numpy_accuracy)
```
# Part 4: Saving and Loading Model Parameters
Variables are saved and restored through the tf.train.Saver class. When a Saver object is constructed, it adds save/load ops for the variables to the graph, and its arguments control which variables are stored. The Saver's save() and restore() methods are the entry points that trigger those ops.
```
# global step counter
global_step = tf.Variable(0, name='global_step', trainable=False)
# checkpoint entry point
saver = tf.train.Saver()
# variables defined after the Saver is created will not be saved
# non_storable_variable = tf.Variable(777)
ckpt_dir = './ckpt_dir'
if not os.path.exists(ckpt_dir):
os.makedirs(ckpt_dir)
with tf.Session() as sess:
tf.global_variables_initializer().run()
# restore the model from a checkpoint if one exists
ckpt = tf.train.get_checkpoint_state(ckpt_dir)
if ckpt and ckpt.model_checkpoint_path:
print('Restoring from checkpoint: %s' % ckpt.model_checkpoint_path)
saver.restore(sess, ckpt.model_checkpoint_path)
start = global_step.eval()
for epoch in range(start, start + 10):
total_loss = 0.
for i in range(0, len(X_train)):
feed_dict = {
X: [X_train[i]],
y: [y_train[i]]
}
_, loss = sess.run([train_op, cost], feed_dict=feed_dict)
total_loss += loss
print('Epoch: %04d, loss=%.9f' % (epoch + 1, total_loss))
# save a checkpoint
global_step.assign(epoch).eval()
saver.save(sess, ckpt_dir + '/logistic.ckpt',
global_step=global_step)
print('Training complete!')
```
# TensorBoard
TensorBoard is TensorFlow's companion visualization tool, useful for understanding complex models and catching implementation errors.
TensorBoard works by launching a web service that reads summary data from the event files written while a TensorFlow program runs, and renders that data as charts in a web page. Summary data falls into the following categories:
1. Scalars, such as accuracy or the loss value, recorded with tf.summary.scalar;
2. Parameters, such as the weights matrix and bias, usually recorded with tf.summary.histogram;
3. Images, recorded with tf.summary.image;
4. Audio, recorded with tf.summary.audio;
5. The graph structure, recorded automatically when the tf.summary.FileWriter object is created.
A complete program whose run can be visualized with TensorBoard:
```
################################
# Constructing Dataflow Graph
################################
# arguments that can be set in command line
tf.app.flags.DEFINE_integer('epochs', 10, 'Training epochs')
tf.app.flags.DEFINE_integer('batch_size', 10, 'size of mini-batch')
FLAGS = tf.app.flags.FLAGS
with tf.name_scope('input'):
# create symbolic variables
X = tf.placeholder(tf.float32, shape=[None, 6])
y_true = tf.placeholder(tf.float32, shape=[None, 2])
with tf.name_scope('classifier'):
# weights and bias are the variables to be trained
weights = tf.Variable(tf.random_normal([6, 2]))
bias = tf.Variable(tf.zeros([2]))
y_pred = tf.nn.softmax(tf.matmul(X, weights) + bias)
# add histogram summaries for weights, view on tensorboard
tf.summary.histogram('weights', weights)
tf.summary.histogram('bias', bias)
# Minimise cost using cross entropy
# NOTE: add a epsilon(1e-10) when calculate log(y_pred),
# otherwise the result will be -inf
with tf.name_scope('cost'):
cross_entropy = - tf.reduce_sum(y_true * tf.log(y_pred + 1e-10),
reduction_indices=1)
cost = tf.reduce_mean(cross_entropy)
tf.summary.scalar('loss', cost)
# use gradient descent optimizer to minimize cost
train_op = tf.train.GradientDescentOptimizer(0.001).minimize(cost)
with tf.name_scope('accuracy'):
correct_pred = tf.equal(tf.argmax(y_true, 1), tf.argmax(y_pred, 1))
acc_op = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Add scalar summary for accuracy
tf.summary.scalar('accuracy', acc_op)
global_step = tf.Variable(0, name='global_step', trainable=False)
# use saver to save and restore model
saver = tf.train.Saver()
# this variable won't be stored, since it is declared after tf.train.Saver()
non_storable_variable = tf.Variable(777)
ckpt_dir = './ckpt_dir'
if not os.path.exists(ckpt_dir):
os.makedirs(ckpt_dir)
################################
# Training the model
################################
# use session to run the calculation
with tf.Session() as sess:
# create a log writer. run 'tensorboard --logdir=./logs'
writer = tf.summary.FileWriter('./logs', sess.graph)
merged = tf.summary.merge_all()
# variables have to be initialized at the first place
tf.global_variables_initializer().run()
# restore variables from checkpoint if exists
ckpt = tf.train.get_checkpoint_state(ckpt_dir)
if ckpt and ckpt.model_checkpoint_path:
print('Restoring from checkpoint: %s' % ckpt.model_checkpoint_path)
saver.restore(sess, ckpt.model_checkpoint_path)
start = global_step.eval()
# training loop
for epoch in range(start, start + FLAGS.epochs):
total_loss = 0.
for i in range(0, len(X_train), FLAGS.batch_size):
# train with mini-batch
feed_dict = {
X: X_train[i: i + FLAGS.batch_size],
y_true: y_train[i: i + FLAGS.batch_size]
}
_, loss = sess.run([train_op, cost], feed_dict=feed_dict)
total_loss += loss
# display loss per epoch
print('Epoch: %04d, loss=%.9f' % (epoch + 1, total_loss))
summary, accuracy = sess.run([merged, acc_op],
feed_dict={X: X_val, y_true: y_val})
writer.add_summary(summary, epoch) # Write summary
print('Accuracy on validation set: %.9f' % accuracy)
# set and update(eval) global_step with epoch
global_step.assign(epoch).eval()
saver.save(sess, ckpt_dir + '/logistic.ckpt',
global_step=global_step)
print('Training complete!')
```


| github_jupyter |
```
!pip install pandas==1.0.3
from sklearn.model_selection import StratifiedKFold, GroupKFold
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from tqdm import tqdm_notebook
from math import log
import lightgbm as lgb
import gc
import shap
from sklearn.metrics import f1_score
from tqdm.notebook import tqdm
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
import seaborn as sns
n_classes = 11
folds = 5
SEED_ = 987654321
path = '../input/lgbm-dataset/'
group = np.load(path + 'group.npy', allow_pickle = True)
train_target = pd.read_csv('../input/liverpool-ion-switching/train.csv', usecols = ['open_channels'])
time = pd.read_csv('../input/liverpool-ion-switching/train.csv', usecols = ['open_channels', 'time'])
train = pd.read_csv(path + 'train_clean.csv')
out_ = np.arange(3640000, 3824000)
idx = train.index
bool_mask = idx.isin(out_)
train = train[~idx.isin(out_)].reset_index(drop = True)
group = group[~idx.isin(out_)]
train_target = train_target[~idx.isin(out_)].reset_index(drop = True)
def evaluate_macroF1_lgb(predictions, truth):
# this follows the discussion in https://github.com/Microsoft/LightGBM/issues/1483
labels = truth.get_label()
pred_labels = predictions.reshape(n_classes,-1).argmax(axis=0)
f1 = f1_score(labels, pred_labels, average='macro')
return ('macroF1', f1, True)
params = {
"objective" : "multiclass",
"num_class" : n_classes,
'metric' : "None",
'boosting_type':'gbdt',
'learning_rate':0.05,
'colsample_bytree': 0.8,
'lambda_l1': 1,
'lambda_l2': 1,
'max_depth': -1,
'num_leaves': 2**8,
'subsample': .75,
'seed': SEED_,
'importance_type':'gain',
'n_jobs':-1,
}
gc.collect()
time = pd.read_csv('../input/liverpool-ion-switching/train.csv', usecols = ['open_channels', 'time']).loc[~idx.isin(out_)].reset_index(drop = True)
time['group'] = (time['time'].transform(lambda x: np.ceil(x*10000/500000)))
time['segment'] = train['segment']
strat =time['segment'].astype(str).copy() + time['open_channels'].astype(str).copy()
le = LabelEncoder()
strat = le.fit_transform(strat).astype(np.int16)
del time, le
gc.collect()
gc.collect()
kf = GroupKFold(n_splits = folds)
model_list = []
score = 0
pred_oof = np.zeros((train.shape[0], n_classes))
for fold_n, (train_index, valid_index) in enumerate(kf.split(train, strat, group)):
print(f'BEGIN FOLD: {fold_n} -------\n\n\n')
X_train, X_valid = train.iloc[train_index,:], train.iloc[valid_index,:]
y_train, y_valid = train_target.iloc[train_index,:], train_target.iloc[valid_index,:]
model = lgb.train(params,lgb.Dataset(X_train, label=y_train, categorical_feature = ['segment']),
5000, valid_sets = lgb.Dataset(X_valid, label=y_valid,categorical_feature = ['segment']), valid_names ='validation',
verbose_eval = 50, feval = evaluate_macroF1_lgb, early_stopping_rounds = 50)
gc.collect()
model_list += [model]
valid_ = model.predict(X_valid)
pred = valid_.argmax(axis = 1).reshape((-1))
pred_oof[valid_index, :] = valid_
score_temp = f1_score(y_valid, pred, average = 'macro')
score += score_temp/folds
del model, X_train, X_valid, y_train, y_valid
gc.collect()
print(f'\n\nF1_SCORE: {score_temp}\n\n\nENDED FOLD: {fold_n} -------\n\n\n')
np.save('pred_oof.npy', pred_oof, allow_pickle = True)
print(f'FINAL F1 SCORE: {score}')
feature_importances = pd.DataFrame()
feature_importances['feature'] = train.columns
for fold_, mod in tqdm(enumerate(model_list)):
feature_importances['fold_{}'.format(fold_ + 1)] = mod.feature_importance(importance_type='gain')
mod.save_model(f'model{fold_}')
scaler = MinMaxScaler(feature_range=(0, 100))
feature_importances['average'] = scaler.fit_transform(X=pd.DataFrame(feature_importances[['fold_{}'.format(fold + 1) for fold in range(kf.n_splits)]].mean(axis=1)))
fig = plt.figure(figsize=(20, 16))
sns.barplot(data=feature_importances.sort_values(by='average', ascending=False).head(50), x='average', y='feature');
plt.title('50 TOP feature importance over {} average'.format(fold_+1))
del train, train_target
gc.collect()
test = pd.read_csv(path + 'test_clean.csv')
for i, mod in enumerate(model_list):
if i == 0:
pred_test = mod.predict(test)/folds
else:
pred_test += mod.predict(test)/folds
np.save('pred_test.npy', pred_test, allow_pickle = True)
```
| github_jupyter |
```
#!/usr/bin/env python
# vim:fileencoding=utf-8
import sys
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf
import matplotlib
import pandas as pd
# dataset splitting
from sklearn.model_selection import train_test_split
# deep-learning library (Keras)
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import SimpleRNN
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
# audio files
Music_file = './Input/01_Radio/NHKRadio.wav'
Music_noise_file = './Input/01_Radio/NHKRadio_Tpdfnoise.wav'
MusicType = 'NHK Radio'
MusicFileName = 'NHKRadio'
NoiseType = 'Tpdfnoise'
# read the wav file
Music_wav, Music_fs = sf.read(Music_file)
# if the file is stereo (2ch), convert to mono by averaging the left and right channels
# note: soundfile returns a 1-D array for mono files, so check ndim rather than shape[1]
if(Music_wav.ndim == 1):
Music_wavdata = Music_wav
else:
Music_wavdata = (0.5 * Music_wav[:, 1]) + (0.5 * Music_wav[:, 0])
# read the wav file
Music_whitenoise_wav, Music_whitenoise_fs = sf.read(Music_noise_file)
# if the file is stereo (2ch), convert to mono by averaging the left and right channels
if(Music_whitenoise_wav.ndim == 1):
Music_whitenoise_wavdata = Music_whitenoise_wav
else:
Music_whitenoise_wavdata = (0.5 * Music_whitenoise_wav[:, 1]) + (0.5 * Music_whitenoise_wav[:, 0])
# time axis (length of the signal)
#x = range(300)
x = range(500)
# the y axis is the signal data
#y = Music_whitenoise_wavdata[:300]
y = Music_whitenoise_wavdata[:500]
# training window (frame) length
l = 150
# dataset builder:
# takes the signal data and the window length, returns (window, next-sample) pairs
def make_dataset(y, l):
data = []
target = []
for i in range(len(y)-l):
data.append(y[i:i+l])
target.append(y[i + l])
return(data, target)
# build the dataset by calling the function
(data, target) = make_dataset(y, l)
# data for one window
#data[0]
#len(data[0])
#len(data)
#target
#len(target)
# build the dataset for the RNN
# the RNN expects a 3-D input of shape (samples, timesteps, features)
data = np.array(data).reshape(-1, l, 1)
num_neurons = 1
n_hidden = 200
model = Sequential()
model.add(SimpleRNN(n_hidden, batch_input_shape=(None, l, num_neurons), return_sequences=False))
model.add(Dense(num_neurons))
model.add(Activation('linear'))
optimizer = Adam(lr = 0.001)
model.compile(loss="mean_squared_error", optimizer=optimizer)
#model.compile(loss="mean_squared_logarithmic_error", optimizer=optimizer)
#model.compile(loss="mean_absolute_percentage_error", optimizer=optimizer)
#model.compile(loss="cosine_similarity", optimizer=optimizer)
#model.compile(loss="mean_absolute_error", optimizer=optimizer)
#early_stopping = EarlyStopping(monitor='val_loss', mode='auto', patience=20)
early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=20)
#model.fit(data, target, batch_size=300, epochs=100, validation_split=0.1, callbacks=[early_stopping])
model.fit(data, target, batch_size=300, epochs=100, validation_split=0.1, callbacks=[early_stopping])
fig = plt.figure(1)
# note: validation should use the clean (noise-free) data
pred = model.predict(data)
# y-axis labels
Signal_Ylabel_str = MusicType + 'Signal' + NoiseType
RNN_Ylabel_str = 'RNN Pred' + 'Signal'
plt.figure(figsize=(15, 4))
plt.subplot(1, 2, 1)
plt.xlim(0, 500)
plt.plot(x, y, color='blue')
plt.xlabel('Original Signal ' + MusicType)
plt.ylabel(RNN_Ylabel_str)
plt.subplot(1, 2, 2)
plt.xlim(0, 500)
plt.plot(x[:l], y[:l], color='blue', label=Signal_Ylabel_str)
plt.plot(x[l:], pred, color='red', label=RNN_Ylabel_str, linestyle="dotted")
plt.xlabel('RNN Prediction of Acoustic Signal(' + MusicType + ')')
plt.legend(loc='lower left')
#plt.savefig
fig.set_tight_layout(True)
plt.savefig('./Output/RNN_Pred_Signal/RNN_' + NoiseType + '_' + MusicFileName + '.png')
plt.show()
# signal values
pd_y = pd.DataFrame(y[:l],columns=["Signal"])
pd_pred = pd.DataFrame(pred,columns=["Signal"])
pd_concat_y = pd.concat([pd_y,pd_pred], axis=0)
pd_concat_y = pd_concat_y.reset_index(drop=True)
# time axis
pd_pandas_x = pd.DataFrame(range(500),columns=["Time"])
# signal array
pd_concat_Signal = pd.concat([pd_pandas_x,pd_concat_y], axis=1)
# save to CSV
pd_concat_Signal.to_csv('./Output/RNN_Pred_Signal/RNN_' + NoiseType + '_' + MusicFileName + '.csv')
pd_concat_Signal
```
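The sliding-window dataset builder above (`make_dataset`) is easy to sanity-check on a toy signal; a standalone sketch:

```python
import numpy as np

def make_dataset(y, l):
    # each input is a window of l consecutive samples;
    # the target is the sample immediately following the window
    data, target = [], []
    for i in range(len(y) - l):
        data.append(y[i:i + l])
        target.append(y[i + l])
    return data, target

signal = list(range(10))                 # toy "signal": 0, 1, ..., 9
data, target = make_dataset(signal, 3)
X = np.array(data).reshape(-1, 3, 1)     # RNN input: (samples, timesteps, features)
# data[0] == [0, 1, 2], target[0] == 3, X.shape == (7, 3, 1)
```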
| github_jupyter |
```
from bs4 import BeautifulSoup
import requests
import pandas as pd
from pandas import Series, DataFrame
from ipywidgets import FloatProgress
from time import sleep
from IPython.display import display
import re
import pickle
url = 'http://www.imdb.com/chart/top?ref_=nv_mv_250_6'
result = requests.get(url)
c = result.content
soup = BeautifulSoup(c,"lxml")
soup
moviename = []
cast = []
description = []
rating = []
ratingoutof = []
year = []
genre = []
movielength = []
rot_audscore = []
rot_avgrating = []
rot_users = []
summary = soup.find('div',{'class':'article'})
rgx = re.compile('[%s]' % '()')
f = FloatProgress(min=0, max=250)
display(f)
for row,i in zip(summary.find('table').findAll('tr'),range(len(summary.find('table').findAll('tr')))):
for sitem in row.findAll('span',{'class':'secondaryInfo'}):
s = sitem.find(text=True)
year.append(rgx.sub("", s))
for ritem in row.findAll('td',{'class':'ratingColumn imdbRating'}):
for iget in ritem.findAll('strong',{'title':'9.2 based on 2,364,168 user ratings'}):
rat = iget.find(text=True)
rating.append(rat)
ratingoutof.append(iget.get('title').split(' ', 4)[3])
for item in row.findAll('td',{'class':'titleColumn'}):
for href in item.findAll('a',href=True):
moviename.append(href.find(text=True))
rurl = 'https://www.rottentomatoes.com/m/'+ href.find(text=True)
try:
rresult = requests.get(rurl)
except requests.exceptions.ConnectionError:
status_code = "Connection refused"
rc = rresult.content
rsoup = BeautifulSoup(rc)
try:
rot_audscore.append(rsoup.find('div',{'class':'meter-value'}).find('span',{'class':'superPageFontColor'}).text)
rot_avgrating.append(rsoup.find('div',{'class':'audience-info hidden-xs superPageFontColor'}).find('div').contents[2].strip())
rot_users.append(rsoup.find('div',{'class':'audience-info hidden-xs superPageFontColor'}).contents[3].contents[2].strip())
except AttributeError:
rot_audscore.append("")
rot_avgrating.append("")
rot_users.append("")
cast.append(href.get('title'))
imdb = "http://www.imdb.com" + href.get('href')
try:
iresult = requests.get(imdb)
ic = iresult.content
isoup = BeautifulSoup(ic)
for mov in isoup.findAll('time',text=True):
movielength.append(mov.find(text=True))
rating.append(isoup.find('span',{'itemprop':'ratingValue'}).find(text=True))
description.append(isoup.find('div',{'class':'summary_text'}).find(text=True).strip())
genre.append(isoup.find('span',{'class':'itemprop'}).find(text=True))
except requests.exceptions.ConnectionError:
description.append("")
genre.append("")
movielength.append("")
sleep(.1)
f.value = i
href.find(text=True)
moviename = Series(moviename)
cast = Series(cast)
description = Series(description)
rating = Series(rating)
ratingoutof = Series(ratingoutof)
year = Series(year)
genre = Series(genre)
movielength = Series(movielength)
rot_audscore = Series(rot_audscore)
rot_avgrating = Series(rot_avgrating)
rot_users = Series(rot_users)
imdb_df = pd.concat([moviename,year,description,genre,movielength,cast,rating,ratingoutof,rot_audscore,rot_avgrating,rot_users],axis=1)
imdb_df.columns = [ 'moviename','year','description','genre','movielength','cast','imdb_rating','imdb_ratingbasedon','tomatoes_audscore','tomatoes_rating','tomatoes_ratingbasedon']
imdb_df['rank'] = imdb_df.index + 1
imdb_df.head(10)
```
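The vote count above is pulled out of the rating cell's `title` attribute with `split(' ', 4)[3]`. A minimal, self-contained sketch of that token extraction, using a hypothetical attribute value in the format the code expects:

```python
# The rating cell's 'title' attribute has the form
# "<rating> based on <count> user ratings"; splitting on spaces
# at most 4 times leaves the vote count at index 3.
title_attr = "9.2 based on 2,364,168 user ratings"
tokens = title_attr.split(' ', 4)
rating = tokens[0]      # "9.2"
vote_count = tokens[3]  # "2,364,168"
print(rating, vote_count)
```

Note that because the split count is capped at 4, the trailing `"user ratings"` stays as a single token.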
# The Stanford Sentiment Treebank
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (positive/negative) class split, and use only sentence-level labels.
```
from IPython.display import display, Markdown
with open('../../doc/env_variables_setup.md', 'r') as fh:
content = fh.read()
display(Markdown(content))
```
## Import Packages
```
import tensorflow as tf
from transformers import (
BertConfig,
BertTokenizer,
XLMRobertaTokenizer,
TFBertModel,
TFXLMRobertaModel,
)
import os
from datetime import datetime
import tensorflow_datasets
from tensorboard import notebook
import math
#from google.cloud import storage
from googleapiclient import discovery
from googleapiclient import errors
import logging
import json
```
## Check configuration
```
print(tf.version.GIT_VERSION, tf.version.VERSION)
print(tf.keras.__version__)
gpus = tf.config.list_physical_devices('GPU')
if len(gpus)>0:
for gpu in gpus:
print('Name:', gpu.name, ' Type:', gpu.device_type)
else:
print('No GPU available !!!!')
```
## Define Paths
```
try:
data_dir=os.environ['PATH_DATASETS']
except KeyError:
print('missing PATH_DATASETS')
try:
tensorboard_dir=os.environ['PATH_TENSORBOARD']
except KeyError:
print('missing PATH_TENSORBOARD')
try:
savemodel_dir=os.environ['PATH_SAVE_MODEL']
except KeyError:
print('missing PATH_SAVE_MODEL')
```
# Import local packages
```
import utils.model_utils as mu
import importlib
importlib.reload(mu);
```
## Train the model on AI Platform Training (for production)
```
project_name = os.environ['PROJECT_ID']
project_id = 'projects/{}'.format(project_name)
ai_platform_training = discovery.build('ml', 'v1', cache_discovery=False)
# choose the model
model_name = 'tf_bert_classification'
#model_name = 'test_log_bert'
# variable used to build some variable's name
type_production = 'test' #'test', 'production'
hardware = 'cpu' #'cpu', 'gpu', 'tpu'
owner = os.environ['OWNER']
tier = 'basic' #'basic', 'custom'
python_version = '3.7'
runtime_version = '2.2'
hp_tuning= False
verbosity = 'INFO'
profiling = False
# use custom container
use_custom_container = False
tag='/test:v0.0.0'
# overwrite parameter for testing logging
test_logging = False
print(' modifying TensorFlow env variables')
# 0 = all messages are logged (default behavior)
# 1 = INFO messages are not printed
# 2 = INFO and WARNING messages are not printed
# 3 = INFO, WARNING, and ERROR messages are not printed
with open(os.environ['DIR_PROJ']+'/utils/env_variables.json', 'r') as outfile:
env_var = json.load(outfile)
if verbosity == 'DEBUG' or verbosity == 'VERBOSE' or verbosity == 'INFO':
env_var['TF_CPP_MIN_LOG_LEVEL'] = 0
env_var['TF_CPP_MIN_VLOG_LEVEL'] = 0
elif verbosity == 'WARNING':
env_var['TF_CPP_MIN_LOG_LEVEL'] = 1
env_var['TF_CPP_MIN_VLOG_LEVEL'] = 1
elif verbosity == 'ERROR':
env_var['TF_CPP_MIN_LOG_LEVEL'] = 2
env_var['TF_CPP_MIN_VLOG_LEVEL'] = 2
else:
env_var['TF_CPP_MIN_LOG_LEVEL'] = 3
env_var['TF_CPP_MIN_VLOG_LEVEL'] = 3
print("env_var['TF_CPP_MIN_LOG_LEVEL']=", env_var['TF_CPP_MIN_LOG_LEVEL'])
print("env_var['TF_CPP_MIN_VLOG_LEVEL']=", env_var['TF_CPP_MIN_VLOG_LEVEL'])
data={}
data['TF_CPP_MIN_LOG_LEVEL'] = env_var['TF_CPP_MIN_LOG_LEVEL']
data['TF_CPP_MIN_VLOG_LEVEL'] = env_var['TF_CPP_MIN_VLOG_LEVEL']
with open(os.environ['DIR_PROJ']+'/utils/env_variables.json', 'w') as outfile:
json.dump(data, outfile)
# define parameters for ai platform training
if not use_custom_container:
# delete old package version
for root, dirs, files in os.walk(os.environ['DIR_PROJ'] + '/dist/'):
for filename in files:
package_dist=os.environ['DIR_PROJ'] + '/dist/'+filename
if package_dist[-7:]=='.tar.gz':
print('removing package:', package_dist)
os.remove(package_dist)
package_gcs = mu.create_module_tar_archive(model_name)
else:
package_gcs = None
timestamp = datetime.now().strftime("%Y_%m_%d_%H%M%S")
if hp_tuning:
job_name = model_name+'_hp_tuning_'+hardware+'_'+timestamp
else:
job_name = model_name+'_'+hardware+'_'+timestamp
module_name = 'model.'+model_name+'.task'
if tier=='basic' and hardware=='cpu':
# CPU
region = 'europe-west1'
elif tier=='basic' and hardware=='gpu':
# GPU
region = 'europe-west1'
elif tier=='custom' and hardware=='gpu':
# Custom GPU
region = 'europe-west4'
elif tier=='basic' and hardware=='tpu':
# TPU
#region = 'us-central1'
region = 'europe-west4' # No zone in region europe-west4 has accelerators of all requested types
#region = 'europe-west6' # The request for 8 TPU_V2 accelerators exceeds the allowed maximum of 0 K80, 0 P100, 0 P4, 0 T4, 0 TPU_V2, 0 TPU_V2_POD, 0 TPU_V3, 0 TPU_V3_POD, 0 V100
#region = 'europe-west2' # No zone in region europe-west2 has accelerators of all requested types
elif tier=='custom' and hardware=='tpu':
# TPU
#region = 'us-central1'
region = 'europe-west4'
#region = 'europe-west6'
#region = 'europe-west2'
else:
# Default
region = 'europe-west1'
# define parameters for training of the model
if type_production=='production':
# reading metadata
_, info = tensorflow_datasets.load(name='glue/sst2',
data_dir=data_dir,
with_info=True)
# define parameters
epochs = 2
batch_size_train = 32
#batch_size_test = 32
batch_size_eval = 64
# Maximum length; be careful, BERT's max length is 512!
max_length = 128
# extract parameters
size_train_dataset=info.splits['train'].num_examples
#size_test_dataset=info.splits['test'].num_examples
size_valid_dataset=info.splits['validation'].num_examples
# compute parameters
steps_per_epoch_train = math.ceil(size_train_dataset/batch_size_train)
#steps_per_epoch_test = math.ceil(size_test_dataset/batch_size_test)
steps_per_epoch_eval = math.ceil(size_valid_dataset/batch_size_eval)
#print('Dataset size: {:6}/{:6}/{:6}'.format(size_train_dataset, size_test_dataset, size_valid_dataset))
#print('Batch size: {:6}/{:6}/{:6}'.format(batch_size_train, batch_size_test, batch_size_eval))
#print('Step per epoch: {:6}/{:6}/{:6}'.format(steps_per_epoch_train, steps_per_epoch_test, steps_per_epoch_eval))
#print('Total number of batch: {:6}/{:6}/{:6}'.format(steps_per_epoch_train*(epochs+1), steps_per_epoch_test*(epochs+1), steps_per_epoch_eval*1))
print('Number of epoch: {:6}'.format(epochs))
print('Batch size: {:6}/{:6}'.format(batch_size_train, batch_size_eval))
print('Step per epoch: {:6}/{:6}'.format(steps_per_epoch_train, steps_per_epoch_eval))
else:
if hardware=='tpu':
epochs = 1
steps_per_epoch_train = 6 #5
batch_size_train = 32
steps_per_epoch_eval = 1
batch_size_eval = 64
else:
epochs = 1
steps_per_epoch_train = 6 #5
batch_size_train = 32
steps_per_epoch_eval = 1
batch_size_eval = 64
steps=epochs*steps_per_epoch_train
if steps<=5:
n_steps_history=4
elif steps>=5 and steps<1000:
n_steps_history=10
print('be careful with profiling between steps 10-20')
else:
n_steps_history=int(steps/100)
print('be careful with profiling between steps 10-20')
print('will compute accuracy on the test set every {} steps, i.e. {} times'.format(n_steps_history, int(steps/n_steps_history)))
if profiling:
print(' profiling ...')
steps_per_epoch_train = 100
n_steps_history=25
input_eval_tfrecords = 'gs://'+os.environ['BUCKET_NAME']+'/tfrecord/sst2/bert-base-multilingual-uncased/valid' #'gs://public-test-data-gs/valid'
input_train_tfrecords = 'gs://'+os.environ['BUCKET_NAME']+'/tfrecord/sst2/bert-base-multilingual-uncased/train' #'gs://public-test-data-gs/train'
if hp_tuning:
output_dir = 'gs://'+os.environ['BUCKET_NAME']+'/training_model_gcp/'+model_name+'_hp_tuning_'+hardware+'_'+timestamp
else:
output_dir = 'gs://'+os.environ['BUCKET_NAME']+'/training_model_gcp/'+model_name+'_'+hardware+'_'+timestamp
pretrained_model_dir = 'gs://'+os.environ['BUCKET_NAME']+'/pretrained_model/bert-base-multilingual-uncased'
#epsilon = 1.7788921050163616e-06
#learning_rate= 0.0007763625134788308
epsilon = 1e-8
learning_rate= 5e-5
# building training_inputs
parameters = ['--epochs', str(epochs),
'--steps_per_epoch_train', str(steps_per_epoch_train),
'--batch_size_train', str(batch_size_train),
'--steps_per_epoch_eval', str(steps_per_epoch_eval),
'--n_steps_history', str(n_steps_history),
'--batch_size_eval', str(batch_size_eval),
'--input_eval_tfrecords', input_eval_tfrecords ,
'--input_train_tfrecords', input_train_tfrecords,
'--output_dir', output_dir,
'--pretrained_model_dir', pretrained_model_dir,
'--verbosity_level', verbosity,
'--epsilon', str(epsilon),
'--learning_rate', str(learning_rate)]
if hardware=='tpu':
parameters.append('--use_tpu')
parameters.append('True')
training_inputs = {
'args': parameters,
'region': region,
}
if not use_custom_container:
training_inputs['packageUris'] = [package_gcs]
training_inputs['pythonModule'] = module_name
training_inputs['runtimeVersion'] = runtime_version
training_inputs['pythonVersion'] = python_version
else:
image_uri = 'gcr.io/'+os.environ['PROJECT_ID']+tag
accelerator_master = {'imageUri': image_uri}
training_inputs['masterConfig'] = accelerator_master
if tier=='basic' and hardware=='cpu':
# CPU
training_inputs['scaleTier'] = 'BASIC'
#training_inputs['scaleTier'] = 'STANDARD_1'
elif tier=='custom' and hardware=='cpu':
# CPU
training_inputs['scaleTier'] = 'CUSTOM'
training_inputs['masterType'] = 'n1-standard-16'
elif tier=='basic' and hardware=='gpu':
# GPU
training_inputs['scaleTier'] = 'BASIC_GPU'
elif tier=='custom' and hardware=='gpu':
# Custom GPU
training_inputs['scaleTier'] = 'CUSTOM'
training_inputs['masterType'] = 'n1-standard-8'
accelerator_master = {'acceleratorConfig': {
'count': '1',
'type': 'NVIDIA_TESLA_V100'}
}
training_inputs['masterConfig'] = accelerator_master
elif tier=='basic' and hardware=='tpu':
# TPU
training_inputs['scaleTier'] = 'BASIC_TPU'
elif tier=='custom' and hardware=='tpu':
# Custom TPU
training_inputs['scaleTier'] = 'CUSTOM'
training_inputs['masterType'] = 'n1-highcpu-16'
training_inputs['workerType'] = 'cloud_tpu'
training_inputs['workerCount'] = '1'
accelerator_master = {'acceleratorConfig': {
'count': '8',
'type': 'TPU_V3'}
}
training_inputs['workerConfig'] = accelerator_master
else:
# Default
training_inputs['scaleTier'] = 'BASIC'
print('======')
# add hyperparameter tuning to the job config.
if hp_tuning:
hyperparams = {
'algorithm': 'ALGORITHM_UNSPECIFIED',
'goal': 'MAXIMIZE',
'maxTrials': 3,
'maxParallelTrials': 2,
'maxFailedTrials': 1,
'enableTrialEarlyStopping': True,
'hyperparameterMetricTag': 'metric_accuracy_train_epoch',
'params': []}
hyperparams['params'].append({
'parameterName':'learning_rate',
'type':'DOUBLE',
'minValue': 1.0e-8,
'maxValue': 1.0,
'scaleType': 'UNIT_LOG_SCALE'})
hyperparams['params'].append({
'parameterName':'epsilon',
'type':'DOUBLE',
'minValue': 1.0e-9,
'maxValue': 1.0,
'scaleType': 'UNIT_LOG_SCALE'})
# Add hyperparameter specification to the training inputs dictionary.
training_inputs['hyperparameters'] = hyperparams
# building job_spec
labels = {'accelerator': hardware,
'prod_type': type_production,
'owner': owner}
if use_custom_container:
labels['type'] = 'custom_container'
else:
labels['type'] = 'gcp_runtime'
job_spec = {'jobId': job_name,
'labels': labels,
'trainingInput': training_inputs}
if test_logging:
# test
# variable used to build some variable's name
owner = os.environ['OWNER']
tier = 'basic'
verbosity = 'INFO'
# define parameters for ai platform training
if not use_custom_container:
package_gcs = package_gcs
else:
image_uri='gcr.io/'+os.environ['PROJECT_ID']+tag
job_name = 'debug_test_'+datetime.now().strftime("%Y_%m_%d_%H%M%S")
module_name = 'model.test-log.task'
#module_name = 'model.test.task'
region = 'europe-west1'
# building training_inputs
parameters = ['--verbosity_level', verbosity]
training_inputs = {
'args': parameters,
'region': region,
}
if not use_custom_container:
training_inputs['packageUris'] = [package_gcs]
training_inputs['pythonModule'] = module_name
training_inputs['runtimeVersion'] = runtime_version
training_inputs['pythonVersion'] = python_version
else:
accelerator_master = {'imageUri': image_uri}
#training_inputs['pythonModule'] = module_name # not working to overwrite the entrypoint
training_inputs['masterConfig'] = accelerator_master
training_inputs['scaleTier'] = 'BASIC'
# building job_spec
labels = {'accelerator': 'cpu',
'prod_type': 'debug',
'owner': owner}
if use_custom_container:
labels['type'] = 'custom_container'
else:
labels['type'] = 'gcp_runtime'
job_spec = {'jobId': job_name,
'labels': labels,
'trainingInput': training_inputs}
training_inputs, job_name
# submit the training job
request = ai_platform_training.projects().jobs().create(body=job_spec,
parent=project_id)
try:
response = request.execute()
print('Job status for {}:'.format(response['jobId']))
print(' state : {}'.format(response['state']))
print(' createTime: {}'.format(response['createTime']))
except errors.HttpError as err:
# For this example, just send some text to the logs.
# You need to import logging for this to work.
logging.error('There was an error creating the training job.'
' Check the details:')
logging.error(err._get_reason())
# if you want to specify a specific job ID
#job_name = 'tf_bert_classification_2020_05_16_193551'
jobId = 'projects/{}/jobs/{}'.format(project_name, job_name)
request = ai_platform_training.projects().jobs().get(name=jobId)
response = None
try:
response = request.execute()
print('Job status for {}:'.format(response['jobId']))
print(' state : {}'.format(response['state']))
if 'trainingOutput' in response.keys():
if 'trials' in response['trainingOutput'].keys():
for sub_job in response['trainingOutput']['trials']:
print(' trials : {}'.format(sub_job))
if 'consumedMLUnits' in response['trainingOutput'].keys():
print(' consumedMLUnits : {}'.format(response['trainingOutput']['consumedMLUnits']))
if 'errorMessage' in response.keys():
print(' errorMessage : {}'.format(response['errorMessage']))
except errors.HttpError as err:
logging.error('There was an error getting the logs.'
' Check the details:')
logging.error(err._get_reason())
# how to stream logs
# --stream-logs
```
# TensorBoard for job running on GCP
```
# View open TensorBoard instance
#notebook.list()
# View pid
#!ps -ef|grep tensorboard
# Killed Tensorboard process by using pid
#!kill -9 pid
%load_ext tensorboard
#%reload_ext tensorboard
%tensorboard --logdir {output_dir+'/tensorboard'} \
#--host 0.0.0.0 \
#--port 6006 \
#--debugger_port 6006
%load_ext tensorboard
#%reload_ext tensorboard
%tensorboard --logdir {output_dir+'/hparams_tuning'} \
#--host 0.0.0.0 \
#--port 6006 \
#--debugger_port 6006
!tensorboard dev upload --logdir \
'gs://multilingual_text_classification/training_model_gcp/tf_bert_classification_cpu_2020_08_20_093837/tensorboard' --one_shot --yes
```
# Data Science 100 Knocks (Structured Data Processing) - Python
## Introduction
- Run the following cell first
- It imports the required libraries and reads the data from the database (PostgreSQL)
- Libraries expected to be used, such as pandas, are imported in the cell below
- Install any other libraries you want to use as needed (you can also install them with "!pip install <library name>")
- You may split the processing into multiple steps
- Names, addresses, etc. are dummy data and do not represent real entities
```
import os
import pandas as pd
import numpy as np
from datetime import datetime, date
from dateutil.relativedelta import relativedelta
import math
import psycopg2
from sqlalchemy import create_engine
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import RandomUnderSampler
pgconfig = {
'host': 'db',
'port': os.environ['PG_PORT'],
'database': os.environ['PG_DATABASE'],
'user': os.environ['PG_USER'],
'password': os.environ['PG_PASSWORD'],
}
# Connector for pd.read_sql
conn = psycopg2.connect(**pgconfig)
df_customer = pd.read_sql(sql='select * from customer', con=conn)
df_category = pd.read_sql(sql='select * from category', con=conn)
df_product = pd.read_sql(sql='select * from product', con=conn)
df_receipt = pd.read_sql(sql='select * from receipt', con=conn)
df_store = pd.read_sql(sql='select * from store', con=conn)
df_geocode = pd.read_sql(sql='select * from geocode', con=conn)
```
# Exercises
---
> P-001: From the receipt details data frame (df_receipt), display the first 10 rows of all columns and visually check what kind of data it holds.
```
df_receipt.head(10)
```
---
> P-002: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and display 10 rows.
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']].head(10)
```
---
> P-003: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and display 10 rows. However, rename sales_ymd to sales_date when extracting.
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']]. \
rename(columns={'sales_ymd': 'sales_date'}).head(10)
```
---
> P-004: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies the following condition.
> - Customer ID (customer_id) is "CS018205000001"
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']]. \
query('customer_id == "CS018205000001"')
```
---
> P-005: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies the following conditions.
> - Customer ID (customer_id) is "CS018205000001"
> - Sales amount (amount) is 1,000 or more
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \
.query('customer_id == "CS018205000001" & amount >= 1000')
```
---
> P-006: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), sales quantity (quantity), and sales amount (amount), in that order, and extract the data that satisfies the following conditions.
> - Customer ID (customer_id) is "CS018205000001"
> - Sales amount (amount) is 1,000 or more, or sales quantity (quantity) is 5 or more
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'quantity', 'amount']].\
query('customer_id == "CS018205000001" & (amount >= 1000 | quantity >=5)')
```
---
> P-007: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies the following conditions.
> - Customer ID (customer_id) is "CS018205000001"
> - Sales amount (amount) is between 1,000 and 2,000 inclusive
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \
.query('customer_id == "CS018205000001" & 1000 <= amount <= 2000')
```
---
> P-008: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies the following conditions.
> - Customer ID (customer_id) is "CS018205000001"
> - Product code (product_cd) is not "P071401019"
```
df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \
.query('customer_id == "CS018205000001" & product_cd != "P071401019"')
```
---
> P-009: Rewrite OR to AND in the following code without changing the result.
`df_store.query('not(prefecture_cd == "13" | floor_area > 900)')`
```
df_store.query('prefecture_cd != "13" & floor_area <= 900')
```
---
> P-010: From the store data frame (df_store), extract all columns of rows whose store code (store_cd) starts with "S14", and display 10 rows.
```
df_store.query("store_cd.str.startswith('S14')", engine='python').head(10)
```
---
> P-011: From the customer data frame (df_customer), extract all columns of rows whose customer ID (customer_id) ends with 1, and display 10 rows.
```
df_customer.query("customer_id.str.endswith('1')", engine='python').head(10)
```
---
> P-012: From the store data frame (df_store), display all columns for stores in Yokohama City.
```
df_store.query("address.str.contains('横浜市')", engine='python')
```
---
> P-013: From the customer data frame (df_customer), extract all columns of rows whose status code (status_cd) starts with a letter from A to F, and display 10 rows.
```
df_customer.query("status_cd.str.contains('^[A-F]', regex=True)",
engine='python').head(10)
```
---
> P-014: From the customer data frame (df_customer), extract all columns of rows whose status code (status_cd) ends with a digit from 1 to 9, and display 10 rows.
```
df_customer.query("status_cd.str.contains('[1-9]$', regex=True)", engine='python').head(10)
```
---
> P-015: From the customer data frame (df_customer), extract all columns of rows whose status code (status_cd) starts with a letter from A to F and ends with a digit from 1 to 9, and display 10 rows.
```
df_customer.query("status_cd.str.contains('^[A-F].*[1-9]$', regex=True)",
engine='python').head(10)
```
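The three filters above compose two regex anchors. A toy sketch of the same `str.contains` patterns on a made-up set of codes (the values here are hypothetical, not from df_customer):

```python
import pandas as pd

# Anchored regex filters, as used for status_cd above.
s = pd.Series(['A123', 'G901', 'B450', 'F789'])
starts_a_f = s.str.contains('^[A-F]', regex=True)        # starts with A-F
ends_1_9 = s.str.contains('[1-9]$', regex=True)          # ends with 1-9
both = s.str.contains('^[A-F].*[1-9]$', regex=True)      # both conditions
print(s[both].tolist())
```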
---
> P-016: From the store data frame (df_store), display all columns of rows whose phone number (tel_no) matches the 3-digit-3-digit-4-digit format.
```
df_store.query("tel_no.str.contains('^[0-9]{3}-[0-9]{3}-[0-9]{4}$',regex=True)",
engine='python')
```
---
> P-017: Sort the customer data frame (df_customer) by date of birth (birth_day) from oldest to youngest, and display all columns of the first 10 rows.
```
df_customer.sort_values('birth_day', ascending=True).head(10)
```
---
> P-018: Sort the customer data frame (df_customer) by date of birth (birth_day) from youngest to oldest, and display all columns of the first 10 rows.
```
df_customer.sort_values('birth_day', ascending=False).head(10)
```
---
> P-019: Rank the rows of the receipt details data frame (df_receipt) in descending order of per-item sales amount (amount), and extract the first 10 rows. Display customer ID (customer_id), sales amount (amount), and the assigned rank. Rows with equal sales amounts should receive the same rank.
```
df_tmp = pd.concat([df_receipt[['customer_id', 'amount']]
,df_receipt['amount'].rank(method='min',
ascending=False)], axis=1)
df_tmp.columns = ['customer_id', 'amount', 'ranking']
df_tmp.sort_values('ranking', ascending=True).head(10)
```
---
> P-020: Rank the rows of the receipt details data frame (df_receipt) in descending order of per-item sales amount (amount), and extract the first 10 rows. Display customer ID (customer_id), sales amount (amount), and the assigned rank. Rows with equal sales amounts should still receive distinct ranks.
```
df_tmp = pd.concat([df_receipt[['customer_id', 'amount']]
,df_receipt['amount'].rank(method='first',
ascending=False)], axis=1)
df_tmp.columns = ['customer_id', 'amount', 'ranking']
df_tmp.sort_values('ranking', ascending=True).head(10)
```
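The difference between the two exercises is entirely in the `method` argument of `rank`. A small demonstration on a hypothetical amount column:

```python
import pandas as pd

# How 'min' vs 'first' break ties when ranking in descending order.
amounts = pd.Series([300, 200, 300, 100])
rank_min = amounts.rank(method='min', ascending=False)      # ties share the smallest rank
rank_first = amounts.rank(method='first', ascending=False)  # ties broken by order of appearance
print(rank_min.tolist())
print(rank_first.tolist())
```

With `method='min'` both 300s get rank 1; with `method='first'` the earlier 300 gets 1 and the later one 2.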
---
> P-021: Count the number of rows in the receipt details data frame (df_receipt).
```
len(df_receipt)
```
---
> P-022: Count the number of unique customer IDs (customer_id) in the receipt details data frame (df_receipt).
```
len(df_receipt['customer_id'].unique())
```
---
> P-023: For the receipt details data frame (df_receipt), sum the sales amount (amount) and sales quantity (quantity) for each store code (store_cd).
```
df_receipt.groupby('store_cd').agg({'amount':'sum',
'quantity':'sum'}).reset_index()
```
---
> P-024: For the receipt details data frame (df_receipt), find the most recent sales date (sales_ymd) for each customer ID (customer_id), and display 10 rows.
```
df_receipt.groupby('customer_id').sales_ymd.max().reset_index().head(10)
```
---
> P-025: For the receipt details data frame (df_receipt), find the oldest sales date (sales_ymd) for each customer ID (customer_id), and display 10 rows.
```
df_receipt.groupby('customer_id').agg({'sales_ymd':'min'}).head(10)
```
---
> P-026: For the receipt details data frame (df_receipt), find both the most recent and the oldest sales date (sales_ymd) for each customer ID (customer_id), and display 10 rows where the two differ.
```
df_tmp = df_receipt.groupby('customer_id'). \
agg({'sales_ymd':['max','min']}).reset_index()
df_tmp.columns = ["_".join(pair) for pair in df_tmp.columns]
df_tmp.query('sales_ymd_max != sales_ymd_min').head(10)
```
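Aggregating with two functions produces a MultiIndex on the columns; the `"_".join(pair)` line flattens it so `query` can reference `sales_ymd_max`. A self-contained sketch on made-up data (the values are hypothetical):

```python
import pandas as pd

# agg with multiple functions yields columns like ('sales_ymd', 'max');
# joining each pair flattens them to plain strings.
df = pd.DataFrame({'customer_id': ['A', 'A', 'B'],
                   'sales_ymd': [20190101, 20190301, 20190201]})
tmp = df.groupby('customer_id').agg({'sales_ymd': ['max', 'min']}).reset_index()
tmp.columns = ["_".join(pair) for pair in tmp.columns]
print(tmp.columns.tolist())
print(tmp.query('sales_ymd_max != sales_ymd_min'))
```

Note the join leaves a trailing underscore on `customer_id_`, since its second level is the empty string; that quirk is harmless for the query but worth knowing.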
---
> P-027: For the receipt details data frame (df_receipt), compute the mean sales amount (amount) for each store code (store_cd), and display the top 5 in descending order.
```
df_receipt.groupby('store_cd').agg({'amount':'mean'}).reset_index(). \
sort_values('amount', ascending=False).head(5)
```
---
> P-028: For the receipt details data frame (df_receipt), compute the median sales amount (amount) for each store code (store_cd), and display the top 5 in descending order.
```
df_receipt.groupby('store_cd').agg({'amount':'median'}).reset_index(). \
sort_values('amount', ascending=False).head(5)
```
---
> P-029: For the receipt details data frame (df_receipt), find the mode of the product code (product_cd) for each store code (store_cd).
```
df_receipt.groupby('store_cd').product_cd. \
apply(lambda x: x.mode()).reset_index()
```
---
> P-030: For the receipt details data frame (df_receipt), compute the sample variance of the sales amount (amount) for each store code (store_cd), and display the top 5 in descending order.
```
df_receipt.groupby('store_cd').amount.var(ddof=0).reset_index(). \
sort_values('amount', ascending=False).head(5)
```
---
> P-031: For the receipt details data frame (df_receipt), compute the sample standard deviation of the sales amount (amount) for each store code (store_cd), and display the top 5 in descending order.
TIPS:
Note that pandas and NumPy have different default values for ddof
```
Pandas:
DataFrame.std(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs)
Numpy:
numpy.std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>)
```
```
df_receipt.groupby('store_cd').amount.std(ddof=0).reset_index(). \
sort_values('amount', ascending=False).head(5)
```
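The ddof tip above is easy to verify directly. A minimal sketch on a toy series showing that pandas defaults to the sample statistic (ddof=1) while NumPy defaults to the population statistic (ddof=0), and that passing ddof explicitly reconciles them:

```python
import numpy as np
import pandas as pd

x = pd.Series([1.0, 2.0, 3.0, 4.0])
print(x.std())            # pandas default: ddof=1 -> sqrt(5/3)
print(np.std(x.values))   # NumPy default:  ddof=0 -> sqrt(5/4)
print(x.std(ddof=0))      # matches NumPy once ddof agrees
```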
---
> P-032: For the sales amount (amount) in the receipt details data frame (df_receipt), compute percentile values in 25% increments.
```
# Example 1
np.percentile(df_receipt['amount'], q=[25, 50, 75,100])
# Example 2
df_receipt.amount.quantile(q=np.arange(5)/4)
```
---
> P-033: For the receipt details data frame (df_receipt), compute the mean sales amount (amount) for each store code (store_cd), and extract the stores whose mean is 330 or more.
```
df_receipt.groupby('store_cd').amount.mean(). \
reset_index().query('amount >= 330')
```
---
> P-034: For the receipt details data frame (df_receipt), sum the sales amount (amount) for each customer ID (customer_id) and compute the average over all customers. However, exclude customers whose ID starts with "Z", as these represent non-members.
```
# Without query
df_receipt[~df_receipt['customer_id'].str.startswith("Z")]. \
groupby('customer_id').amount.sum().mean()
# With query
df_receipt.query('not customer_id.str.startswith("Z")',
engine='python').groupby('customer_id').amount.sum().mean()
```
---
> P-035: For the receipt details data frame (df_receipt), sum the sales amount (amount) for each customer ID (customer_id), compute the average over all customers, and extract the customers who spend at least that average. However, exclude customers whose ID starts with "Z", as these represent non-members. Displaying 10 rows is sufficient.
```
df_receipt_tmp = df_receipt[~df_receipt['customer_id'].str.startswith("Z")]
amount_mean = df_receipt_tmp.groupby('customer_id').amount.sum().mean()
df_amount_sum = df_receipt_tmp.groupby('customer_id').amount.sum().reset_index()
df_amount_sum[df_amount_sum['amount'] >= amount_mean].head(10)
```
---
> P-036: Inner-join the receipt details data frame (df_receipt) and the store data frame (df_store), and display 10 rows containing all columns of the receipt details data frame plus the store name (store_name) from the store data frame.
```
pd.merge(df_receipt, df_store[['store_cd','store_name']],
how='inner', on='store_cd').head(10)
```
---
> P-037: Inner-join the product data frame (df_product) and the category data frame (df_category), and display 10 rows containing all columns of the product data frame plus the small category name (category_small_name) from the category data frame.
```
pd.merge(df_product
, df_category[['category_small_cd','category_small_name']]
, how='inner', on='category_small_cd').head(10)
```
---
> P-038: From the customer data frame (df_customer) and the receipt details data frame (df_receipt), compute the total sales amount for each customer. For customers with no sales record, show a sales amount of 0. Target only customers whose gender code (gender_cd) is female (1), and exclude non-members (customer IDs starting with "Z"). Displaying 10 rows is sufficient.
```
df_amount_sum = df_receipt.groupby('customer_id').amount.sum().reset_index()
df_tmp = df_customer. \
query('gender_cd == "1" and not customer_id.str.startswith("Z")',
engine='python')
pd.merge(df_tmp['customer_id'], df_amount_sum,
how='left', on='customer_id').fillna(0).head(10)
```
---
> P-039: From the receipt details data frame (df_receipt), extract the top 20 customers by number of days with sales and the top 20 customers by total sales amount, and perform a full outer join on the two. However, exclude non-members (customer IDs starting with "Z").
```
df_sum = df_receipt.groupby('customer_id').amount.sum().reset_index()
df_sum = df_sum.query('not customer_id.str.startswith("Z")', engine='python')
df_sum = df_sum.sort_values('amount', ascending=False).head(20)
df_cnt = df_receipt[~df_receipt.duplicated(subset=['customer_id', 'sales_ymd'])]
df_cnt = df_cnt.query('not customer_id.str.startswith("Z")', engine='python')
df_cnt = df_cnt.groupby('customer_id').sales_ymd.count().reset_index()
df_cnt = df_cnt.sort_values('sales_ymd', ascending=False).head(20)
pd.merge(df_sum, df_cnt, how='outer', on='customer_id')
```
---
> P-040: We want to find out how many rows result from combining every store with every product. Compute the size of the Cartesian product of the stores (df_store) and products (df_product).
```
df_store_tmp = df_store.copy()
df_product_tmp = df_product.copy()
df_store_tmp['key'] = 0
df_product_tmp['key'] = 0
len(pd.merge(df_store_tmp, df_product_tmp, how='outer', on='key'))
```
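The dummy-key merge above is the classic way to build a Cartesian product; since pandas 1.2 the same thing can be written directly with `how='cross'`. A sketch on two hypothetical frames:

```python
import pandas as pd

# Cartesian product without the constant-key trick (pandas >= 1.2).
left = pd.DataFrame({'store_cd': ['S1', 'S2']})
right = pd.DataFrame({'product_cd': ['P1', 'P2', 'P3']})
cross = pd.merge(left, right, how='cross')
print(len(cross))  # 2 stores x 3 products
```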
---
> P-041: Aggregate the sales amount (amount) of the receipt details data frame (df_receipt) by date (sales_ymd), and compute the increase or decrease in sales amount from the previous day. Displaying 10 rows of the result is sufficient.
```
df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']].\
groupby('sales_ymd').sum().reset_index()
df_sales_amount_by_date = pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift()], axis=1)
df_sales_amount_by_date.columns = ['sales_ymd','amount','lag_ymd','lag_amount']
df_sales_amount_by_date['diff_amount'] = \
df_sales_amount_by_date['amount'] - df_sales_amount_by_date['lag_amount']
df_sales_amount_by_date.head(10)
```
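The shift-and-subtract above keeps the lagged columns for inspection; when only the day-over-day change is needed, `Series.diff` computes the same difference in one call. A sketch on made-up daily totals:

```python
import pandas as pd

# diff() == value - value.shift(), per index position.
daily = pd.Series([100, 130, 110], index=[20170101, 20170102, 20170103])
print(daily.diff().tolist())  # first element is NaN (no previous day)
```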
---
> P-042: Aggregate the sales amount (amount) of the receipt details data frame (df_receipt) by date (sales_ymd), and join each date's data with the data from 1, 2, and 3 days before. Displaying 10 rows is sufficient.
```
# Example 1: long format
df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']]. \
groupby('sales_ymd').sum().reset_index()
for i in range(1, 4):
if i == 1:
df_lag = pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift(i)],axis=1)
else:
df_lag = pd.concat([df_lag,
pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift(i)],
axis=1)])
df_lag.columns = ['sales_ymd', 'amount', 'lag_ymd', 'lag_amount']
df_lag.dropna().sort_values(['sales_ymd','lag_ymd']).head(10)
# Example 2: wide format
df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']].\
groupby('sales_ymd').sum().reset_index()
for i in range(1, 4):
if i == 1:
df_lag = pd.concat([df_sales_amount_by_date,
df_sales_amount_by_date.shift(i)],axis=1)
else:
df_lag = pd.concat([df_lag, df_sales_amount_by_date.shift(i)],axis=1)
df_lag.columns = ['sales_ymd', 'amount', 'lag_ymd_1', 'lag_amount_1',
'lag_ymd_2', 'lag_amount_2', 'lag_ymd_3', 'lag_amount_3']
df_lag.dropna().sort_values(['sales_ymd']).head(10)
```
---
> P-043: Join the receipt details data frame (df_receipt) and the customer data frame (df_customer), and create a sales summary data frame (df_sales_summary) that totals the sales amount (amount) by gender (gender) and age group (computed from age). Gender is coded as 0 for male, 1 for female, and 9 for unknown.
>
> The result should consist of four columns: age group, sales amount for women, sales amount for men, and sales amount for unknown gender (a cross tabulation with age groups down the rows and gender across the columns). Age groups should be 10-year bins.
```
# Example 1
df_tmp = pd.merge(df_receipt, df_customer, how ='inner', on="customer_id")
df_tmp['era'] = df_tmp['age'].apply(lambda x: math.floor(x / 10) * 10)
df_sales_summary = pd.pivot_table(df_tmp, index='era', columns='gender_cd',
values='amount', aggfunc='sum').reset_index()
df_sales_summary.columns = ['era', 'male', 'female', 'unknown']
df_sales_summary
# Example 2
df_tmp = pd.merge(df_receipt, df_customer, how ='inner', on="customer_id")
df_tmp['era'] = np.floor(df_tmp['age'] / 10).astype(int) * 10
df_sales_summary = pd.pivot_table(df_tmp, index='era', columns='gender_cd',
values='amount', aggfunc='sum').reset_index()
df_sales_summary.columns = ['era', 'male', 'female', 'unknown']
df_sales_summary
```
---
> P-044: The sales summary data frame (df_sales_summary) created in the previous exercise held the sales by gender horizontally. Convert it so that gender is held vertically, with three columns: age group, gender code, and sales amount. Gender codes are "00" for male, "01" for female, and "99" for unknown.
```
df_sales_summary = df_sales_summary.set_index('era'). \
stack().reset_index().replace({'female':'01','male':'00','unknown':'99'}). \
rename(columns={'level_1':'gender_cd', 0: 'amount'})
df_sales_summary
```
---
> P-045: The date of birth (birth_day) in the customer data frame (df_customer) is held as a date type. Convert it to a string in YYYYMMDD format and extract it together with the customer ID (customer_id). Extracting 10 rows is sufficient.
```
pd.concat([df_customer['customer_id'],
pd.to_datetime(df_customer['birth_day']).dt.strftime('%Y%m%d')],
axis = 1).head(10)
```
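The date-to-string direction used here is a one-liner with the `dt.strftime` accessor. A self-contained sketch on a toy birth_day column (the dates are made up):

```python
import pandas as pd

# datetime -> 'YYYYMMDD' string, as in P-045.
birth_day = pd.Series(pd.to_datetime(['1981-04-29', '1952-11-01']))
as_str = birth_day.dt.strftime('%Y%m%d')
print(as_str.tolist())
```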
---
> P-046: The application date (application_date) in the customer data frame (df_customer) is held as a string in YYYYMMDD format. Convert it to a date type and extract it together with the customer ID (customer_id). Extracting 10 rows is sufficient.
```
pd.concat([df_customer['customer_id'],
pd.to_datetime(df_customer['application_date'])], axis=1).head(10)
```
---
> P-047: The sales date (sales_ymd) in the receipt details data frame (df_receipt) is held as a number in YYYYMMDD format. Convert it to a date type and extract it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows is sufficient.
```
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_ymd'].astype('str'))],
axis=1).head(10)
```
---
> P-048: The sales epoch seconds (sales_epoch) in the receipt details data frame (df_receipt) are held as numeric UNIX seconds. Convert them to a date type and extract them together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows is sufficient.
```
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s')],
axis=1).head(10)
```
---
> P-049: Convert the sales epoch seconds (sales_epoch) of the receipt details data frame (df_receipt) to a date type, extract only the year, and display it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). Extracting 10 rows is sufficient.
```
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s').dt.year],
axis=1).head(10)
```
---
> P-050: Convert the sales epoch seconds (sales_epoch) of the receipt details data frame (df_receipt) to a date type, extract only the month, and display it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). The month should be extracted as a zero-padded 2-digit value. Extracting 10 rows is sufficient.
```
# dt.month also returns the month, but strftime is used here to get a zero-padded 2-digit value
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s'). \
dt.strftime('%m')],axis=1).head(10)
```
---
> P-051: Convert the sales epoch seconds (sales_epoch) of the receipt details data frame (df_receipt) to a date type, extract only the day, and display it together with the receipt number (receipt_no) and receipt sub-number (receipt_sub_no). The day should be extracted as a zero-padded 2-digit value. Extracting 10 rows is sufficient.
```
# dt.day also returns the day, but strftime is used here to get a zero-padded 2-digit value
pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']],
pd.to_datetime(df_receipt['sales_epoch'], unit='s'). \
dt.strftime('%d')],axis=1).head(10)
```
---
> P-052: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer ID (customer_id), binarize the totals to 0 for 2,000 yen or less and 1 for amounts over 2,000 yen, and display 10 records together with customer ID and total sales amount. Exclude customer IDs starting with "Z", which represent non-members.
```
# Example 1
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python')
df_sales_amount = df_sales_amount[['customer_id', 'amount']]. \
groupby('customer_id').sum().reset_index()
df_sales_amount['sales_flg'] = df_sales_amount['amount']. \
apply(lambda x: 1 if x > 2000 else 0)
df_sales_amount.head(10)
# Example 2 (using np.where)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python')
df_sales_amount = df_sales_amount[['customer_id', 'amount']]. \
groupby('customer_id').sum().reset_index()
df_sales_amount['sales_flg'] = np.where(df_sales_amount['amount'] > 2000, 1, 0)
df_sales_amount.head(10)
```
---
> P-053: Binarize the postal codes (postal_cd) of the customer data frame (df_customer) to 1 for Tokyo (first 3 digits between 100 and 209) and 0 otherwise. Then join with the receipt detail data frame (df_receipt) and, for each of the two values, count the customers with sales over the whole period.
```
# Example 1
df_tmp = df_customer[['customer_id', 'postal_cd']].copy()
df_tmp['postal_flg'] = df_tmp['postal_cd']. \
apply(lambda x: 1 if 100 <= int(x[0:3]) <= 209 else 0)
pd.merge(df_tmp, df_receipt, how='inner', on='customer_id'). \
groupby('postal_flg').agg({'customer_id':'nunique'})
# Example 2 (using np.where and between)
df_tmp = df_customer[['customer_id', 'postal_cd']].copy()
df_tmp['postal_flg'] = np.where(df_tmp['postal_cd'].str[0:3].astype(int)
.between(100, 209), 1, 0)
pd.merge(df_tmp, df_receipt, how='inner', on='customer_id'). \
groupby('postal_flg').agg({'customer_id':'nunique'})
```
---
> P-054: The addresses (address) in the customer data frame (df_customer) are all in Saitama, Chiba, Tokyo, or Kanagawa prefecture. Create a code value for each prefecture and extract it together with the customer ID and address. Use 11 for Saitama, 12 for Chiba, 13 for Tokyo, and 14 for Kanagawa. Displaying 10 records is sufficient.
```
pd.concat([df_customer[['customer_id', 'address']],
df_customer['address'].str[0:3].map({'埼玉県': '11',
'千葉県':'12',
'東京都':'13',
'神奈川':'14'})],axis=1).head(10)
```
---
> P-055: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer ID (customer_id) and compute the quartiles of the totals. Then assign category values to each customer's total according to the criteria below, and display them together with customer ID and total sales amount. Category values are 1 to 4 from top to bottom. Displaying 10 records is sufficient.
>
> - From the minimum up to (not including) the first quartile
> - From the first quartile up to (not including) the second quartile
> - From the second quartile up to (not including) the third quartile
> - Third quartile and above
```
# Example 1
df_sales_amount = df_receipt[['customer_id', 'amount']]. \
groupby('customer_id').sum().reset_index()
pct25 = np.quantile(df_sales_amount['amount'], 0.25)
pct50 = np.quantile(df_sales_amount['amount'], 0.5)
pct75 = np.quantile(df_sales_amount['amount'], 0.75)
def pct_group(x):
if x < pct25:
return 1
elif pct25 <= x < pct50:
return 2
elif pct50 <= x < pct75:
return 3
elif pct75 <= x:
return 4
df_sales_amount['pct_group'] = df_sales_amount['amount'].apply(lambda x: pct_group(x))
df_sales_amount.head(10)
# For checking
print('pct25:', pct25)
print('pct50:', pct50)
print('pct75:', pct75)
# Example 2
df_temp = df_receipt.groupby('customer_id')[['amount']].sum()
df_temp['quantile'], bins = \
pd.qcut(df_receipt.groupby('customer_id')['amount'].sum(), 4, retbins=True)
display(df_temp.head())
print('quantiles:', bins)
```
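A variant of code example 2 above can assign the category values 1 to 4 directly via `pd.qcut`'s `labels` argument; below is a small sketch on toy data (the series is purely illustrative):

```python
import pandas as pd

# Toy totals; qcut with labels assigns the quartile groups 1..4 directly
amounts = pd.Series([100, 200, 300, 400, 500, 600, 700, 800])
groups = pd.qcut(amounts, 4, labels=[1, 2, 3, 4])
print(groups.tolist())  # [1, 1, 2, 2, 3, 3, 4, 4]
```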
---
> P-056: Based on the age (age) in the customer data frame (df_customer), compute the age decade in 10-year bins and extract it together with the customer ID (customer_id) and birth date (birth_day). Everyone aged 60 or over should be placed in a single 60s group. The category names for the decades may be arbitrary. Displaying the first 10 records is sufficient.
```
# Example 1
df_customer_era = pd.concat([df_customer[['customer_id', 'birth_day']],
df_customer['age']. \
apply(lambda x: min(math.floor(x / 10) * 10, 60))],
axis=1)
df_customer_era.head(10)
# Example 2
df_customer['age_group'] = pd.cut(df_customer['age'],
bins=[0, 10, 20, 30, 40, 50, 60, np.inf],
right=False)
df_customer[['customer_id', 'birth_day', 'age_group']].head(10)
```
---
> P-057: Combine the extraction result of the previous exercise with gender (gender) to create new category data representing gender × decade combinations. The values of the combined categories may be arbitrary. Displaying the first 10 records is sufficient.
```
df_customer_era['era_gender'] = \
df_customer['gender_cd'] + df_customer_era['age'].astype('str')
df_customer_era.head(10)
```
---
> P-058: Convert the gender code (gender_cd) of the customer data frame (df_customer) into dummy variables and extract them together with the customer ID (customer_id). Displaying 10 records is sufficient.
```
pd.get_dummies(df_customer[['customer_id', 'gender_cd']],
columns=['gender_cd']).head(10)
```
---
> P-059: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer ID (customer_id), standardize the totals to mean 0 and standard deviation 1, and display them together with customer ID and total sales amount. Either the unbiased or the population standard deviation may be used for standardization. Exclude customer IDs starting with "Z", which represent non-members. Displaying 10 records is sufficient.
TIPS:
- query()'s engine argument selects 'python' or 'numexpr'; the default is numexpr if it is installed, otherwise python. Also, string methods can only be used inside query() with engine='python'.
```
# Because sklearn's preprocessing.scale is used, the population standard deviation is applied
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_ss'] = preprocessing.scale(df_sales_amount['amount'])
df_sales_amount.head(10)
# Example 2 (by fitting, other data can be standardized with the same mean and standard deviation)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
scaler = preprocessing.StandardScaler()
scaler.fit(df_sales_amount[['amount']])
df_sales_amount['amount_ss'] = scaler.transform(df_sales_amount[['amount']])
df_sales_amount.head(10)
```
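The engine behavior described in the TIPS above can be verified on a toy frame: the string accessor inside `query()` only works with `engine='python'` (the data below is purely illustrative):

```python
import pandas as pd

df = pd.DataFrame({"customer_id": ["CS001", "ZZ000", "CS002"],
                   "amount": [100, 50, 200]})
# String methods inside query() require engine='python';
# the default numexpr engine cannot evaluate them.
members = df.query('not customer_id.str.startswith("Z")', engine='python')
print(members["customer_id"].tolist())  # ['CS001', 'CS002']
```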
---
> P-060: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer ID (customer_id), normalize the totals to minimum 0 and maximum 1, and display them together with customer ID and total sales amount. Exclude customer IDs starting with "Z", which represent non-members. Displaying 10 records is sufficient.
```
# Example 1
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_mm'] = \
preprocessing.minmax_scale(df_sales_amount['amount'])
df_sales_amount.head(10)
# Example 2 (by fitting, other data can be normalized with the same minimum and maximum)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
scaler = preprocessing.MinMaxScaler()
scaler.fit(df_sales_amount[['amount']])
df_sales_amount['amount_mm'] = scaler.transform(df_sales_amount[['amount']])
df_sales_amount.head(10)
```
---
> P-061: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer ID (customer_id), apply the common logarithm (base 10) to the totals, and display them together with customer ID and total sales amount. Exclude customer IDs starting with "Z", which represent non-members. Displaying 10 records is sufficient.
```
# Add 0.5 before taking the logarithm to avoid log(0)
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_log10'] = np.log10(df_sales_amount['amount'] + 0.5)
df_sales_amount.head(10)
```
---
> P-062: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer ID (customer_id), apply the natural logarithm (base e) to the totals, and display them together with customer ID and total sales amount. Exclude customer IDs starting with "Z", which represent non-members. Displaying 10 records is sufficient.
```
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_loge'] = np.log(df_sales_amount['amount'] + 0.5)
df_sales_amount.head(10)
```
---
> P-063: From the unit price (unit_price) and cost (unit_cost) of the product data frame (df_product), compute the profit amount of each product. Displaying 10 records is sufficient.
```
df_tmp = df_product.copy()
df_tmp['unit_profit'] = df_tmp['unit_price'] - df_tmp['unit_cost']
df_tmp.head(10)
```
---
> P-064: From the unit price (unit_price) and cost (unit_cost) of the product data frame (df_product), compute the overall average profit rate of the products.
Note that unit price and cost contain NULLs.
```
df_tmp = df_product.copy()
df_tmp['unit_profit_rate'] = \
(df_tmp['unit_price'] - df_tmp['unit_cost']) / df_tmp['unit_price']
df_tmp['unit_profit_rate'].mean(skipna=True)
```
---
> P-065: For each product in the product data frame (df_product), compute a new unit price that yields a 30% profit margin. Truncate fractions below 1 yen. Then display 10 records and confirm that the profit rates are approximately 30%. Note that the unit price (unit_price) and cost (unit_cost) contain NULLs.
```
df_tmp = df_product.copy()
df_tmp['new_price'] = np.floor(df_tmp['unit_cost'] / 0.7)
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
```
---
> P-066: For each product in the product data frame (df_product), compute a new unit price that yields a 30% profit margin. This time, round fractions below 1 yen (round-half-up or round-half-to-even is fine). Then display 10 records and confirm that the profit rates are approximately 30%. Note that the unit price (unit_price) and cost (unit_cost) contain NULLs.
```
df_tmp = df_product.copy()
df_tmp['new_price'] = np.round(df_tmp['unit_cost'] / 0.7)
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
```
---
> P-067: For each product in the product data frame (df_product), compute a new unit price that yields a 30% profit margin. This time, round up fractions below 1 yen. Then display 10 records and confirm that the profit rates are approximately 30%. Note that the unit price (unit_price) and cost (unit_cost) contain NULLs.
```
df_tmp = df_product.copy()
df_tmp['new_price'] = np.ceil(df_tmp['unit_cost'] / 0.7)
df_tmp['new_profit_rate'] = \
(df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price']
df_tmp.head(10)
```
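P-065 through P-067 differ only in the rounding function; the three variants can be compared on a single (hypothetical) cost value:

```python
import numpy as np

cost = 96.0
target = cost / 0.7  # price that yields exactly a 30% profit margin
for name, price in [("floor", np.floor(target)),
                    ("round", np.round(target)),
                    ("ceil", np.ceil(target))]:
    margin = (price - cost) / price
    print(name, int(price), round(margin, 4))
```

Flooring keeps the margin just under 30%, while rounding up pushes it just over.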
---
> P-068: For each product in the product data frame (df_product), compute the tax-included price at a 10% consumption tax rate. Truncate fractions below 1 yen. Displaying 10 records is sufficient. Note that the unit price (unit_price) contains NULLs.
```
df_tmp = df_product.copy()
df_tmp['price_tax'] = np.floor(df_tmp['unit_price'] * 1.1)
df_tmp.head(10)
```
---
> P-069: Join the receipt detail data frame (df_receipt) and the product data frame (df_product), compute each customer's total sales amount over all products and the total for major category code (category_major_cd) "07" (bottled/canned goods), and compute the ratio of the two. Restrict the extraction to customers with sales in category "07". Displaying 10 records is sufficient.
```
# Example 1
df_tmp_1 = pd.merge(df_receipt, df_product,
how='inner', on='product_cd').groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_tmp_2 = pd.merge(df_receipt, df_product.query('category_major_cd == "07"'),
how='inner', on='product_cd').groupby('customer_id').\
agg({'amount':'sum'}).reset_index()
df_tmp_3 = pd.merge(df_tmp_1, df_tmp_2, how='inner', on='customer_id')
df_tmp_3['rate_07'] = df_tmp_3['amount_y'] / df_tmp_3['amount_x']
df_tmp_3.head(10)
# Example 2
df_temp = df_receipt.merge(df_product, how='left', on='product_cd'). \
groupby(['customer_id', 'category_major_cd'])['amount'].sum().unstack()
df_temp = df_temp[df_temp['07'] > 0]
df_temp['sum'] = df_temp.sum(axis=1)
df_temp['07_rate'] = df_temp['07'] / df_temp['sum']
df_temp.head(10)
```
---
> P-070: For the sales date (sales_ymd) of the receipt detail data frame (df_receipt), compute the number of days elapsed since the membership application date (application_date) of the customer data frame (df_customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 records is sufficient (note that sales_ymd holds numbers and application_date holds strings).
```
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = df_tmp['sales_ymd'] - df_tmp['application_date']
df_tmp.head(10)
```
---
> P-071: For the sales date (sales_ymd) of the receipt detail data frame (df_receipt), compute the number of months elapsed since the membership application date (application_date) of the customer data frame (df_customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 records is sufficient (note that sales_ymd holds numbers and application_date holds strings). Truncate periods of less than one month.
```
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = df_tmp[['sales_ymd', 'application_date']]. \
apply(lambda x: relativedelta(x[0], x[1]).years * 12 + \
relativedelta(x[0], x[1]).months, axis=1)
df_tmp.sort_values('customer_id').head(10)
```
---
> P-072: For the sales date (sales_ymd) of the receipt detail data frame (df_receipt), compute the number of years elapsed since the membership application date (application_date) of the customer data frame (df_customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 records is sufficient (note that sales_ymd holds numbers and application_date holds strings). Truncate periods of less than one year.
```
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = df_tmp[['sales_ymd', 'application_date']]. \
apply(lambda x: relativedelta(x[0], x[1]).years, axis=1)
df_tmp.head(10)
```
---
> P-073: For the sales date (sales_ymd) of the receipt detail data frame (df_receipt), compute the elapsed time in epoch seconds since the membership application date (application_date) of the customer data frame (df_customer), and display it together with the customer ID (customer_id), sales date, and application date. Displaying 10 records is sufficient (note that sales_ymd holds numbers and application_date holds strings). Since no time-of-day information is held, each date is taken to represent 0:00:00.
```
df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']],
df_customer[['customer_id', 'application_date']],
how='inner', on='customer_id')
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date'])
df_tmp['elapsed_date'] = \
(df_tmp['sales_ymd'].view(np.int64) / 10**9) - (df_tmp['application_date'].\
view(np.int64) / 10**9)
df_tmp.head(10)
```
---
> P-074: For the sales date (sales_ymd) of the receipt detail data frame (df_receipt), compute the number of days elapsed since the Monday of that week, and display it together with the sales date and the date of that Monday. Displaying 10 records is sufficient (note that sales_ymd holds numeric data).
```
df_tmp = df_receipt[['customer_id', 'sales_ymd']]
df_tmp = df_tmp.drop_duplicates()
df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str'))
df_tmp['monday'] = df_tmp['sales_ymd']. \
apply(lambda x: x - relativedelta(days=x.weekday()))
df_tmp['elapsed_weekday'] = df_tmp['sales_ymd'] - df_tmp['monday']
df_tmp.head(10)
```
---
> P-075: Randomly sample 1% of the data from the customer data frame (df_customer) and display the first 10 records.
```
df_customer.sample(frac=0.01).head(10)
```
---
> P-076: From the customer data frame (df_customer), randomly sample 10% of the data stratified by the proportions of gender (gender_cd), and count the number of records per gender.
```
# Example using sklearn.model_selection.train_test_split
_, df_tmp = train_test_split(df_customer, test_size=0.1,
stratify=df_customer['gender_cd'])
df_tmp.groupby('gender_cd').agg({'customer_id' : 'count'})
df_tmp.head(10)
```
---
> P-077: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer and extract the outliers among the totals. Exclude customer IDs starting with "Z", which represent non-members. Here an outlier is defined as a value 3σ or more away from the mean. Displaying 10 records is sufficient.
```
# Because sklearn's preprocessing.scale is used, the population standard deviation is applied
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
df_sales_amount['amount_ss'] = preprocessing.scale(df_sales_amount['amount'])
df_sales_amount.query('abs(amount_ss) >= 3').head(10)
```
---
> P-078: Sum the sales amount (amount) of the receipt detail data frame (df_receipt) per customer and extract the outliers among the totals. Exclude customer IDs starting with "Z", which represent non-members. Here, using the IQR (the difference between the first and third quartiles), define outliers as values below "Q1 - 1.5 × IQR" or above "Q3 + 1.5 × IQR". Displaying 10 records is sufficient.
```
df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")',
engine='python'). \
groupby('customer_id'). \
agg({'amount':'sum'}).reset_index()
pct75 = np.percentile(df_sales_amount['amount'], q=75)
pct25 = np.percentile(df_sales_amount['amount'], q=25)
iqr = pct75 - pct25
amount_low = pct25 - (iqr * 1.5)
amount_high = pct75 + (iqr * 1.5)
df_sales_amount.query('amount < @amount_low or @amount_high < amount').head(10)
```
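The IQR rule used above can be illustrated on a tiny array (the values are purely illustrative):

```python
import numpy as np

amounts = np.array([10, 12, 14, 16, 100])
q1, q3 = np.percentile(amounts, [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
# Keep only the values outside the [low, high] fence
outliers = amounts[(amounts < low) | (amounts > high)]
print(outliers.tolist())  # [100]
```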
---
> P-079: Check the number of missing values in each column of the product data frame (df_product).
```
df_product.isnull().sum()
```
---
> P-080: Create a new df_product_1 by deleting every record of the product data frame (df_product) that has a missing value in any column. Display the record counts before and after the deletion, and confirm that the count decreased by exactly the number checked in the previous exercise.
```
df_product_1 = df_product.copy()
print('Before deletion:', len(df_product_1))
df_product_1.dropna(inplace=True)
print('After deletion:', len(df_product_1))
```
---
> P-081: Create a new df_product_2 in which the missing values of unit price (unit_price) and cost (unit_cost) are imputed with their respective means. Round the means to the nearest yen (round-half-up or round-half-to-even is fine). After imputation, confirm that no column has missing values.
```
df_product_2 = df_product.fillna({
'unit_price':np.round(np.nanmean(df_product['unit_price'])),
'unit_cost':np.round(np.nanmean(df_product['unit_cost']))})
df_product_2.isnull().sum()
```
---
> P-082: Create a new df_product_3 in which the missing values of unit price (unit_price) and cost (unit_cost) are imputed with their respective medians. Round the medians to the nearest yen (round-half-up or round-half-to-even is fine). After imputation, confirm that no column has missing values.
```
df_product_3 = df_product.fillna({
'unit_price':np.round(np.nanmedian(df_product['unit_price'])),
'unit_cost':np.round(np.nanmedian(df_product['unit_cost']))})
df_product_3.isnull().sum()
```
---
> P-083: Create a new df_product_4 in which the missing values of unit price (unit_price) and cost (unit_cost) are imputed with the median computed per product small category (category_small_cd). Round the medians to the nearest yen (round-half-up or round-half-to-even is fine). After imputation, confirm that no column has missing values.
```
# Example 1
df_tmp = df_product.groupby('category_small_cd'). \
agg({'unit_price':'median', 'unit_cost':'median'}).reset_index()
df_tmp.columns = ['category_small_cd', 'median_price', 'median_cost']
df_product_4 = pd.merge(df_product, df_tmp, how='inner', on='category_small_cd')
df_product_4['unit_price'] = df_product_4[['unit_price', 'median_price']]. \
apply(lambda x: np.round(x[1]) if np.isnan(x[0]) else x[0], axis=1)
df_product_4['unit_cost'] = df_product_4[['unit_cost', 'median_cost']]. \
apply(lambda x: np.round(x[1]) if np.isnan(x[0]) else x[0], axis=1)
df_product_4.isnull().sum()
# Example 2 (using mask)
df_tmp = (df_product
.groupby('category_small_cd')
.agg(median_price=('unit_price', 'median'),
median_cost=('unit_cost', 'median'))
.reset_index())
df_product_4 = df_product.merge(df_tmp,
how='inner',
on='category_small_cd')
df_product_4['unit_price'] = (df_product_4['unit_price']
.mask(df_product_4['unit_price'].isnull(),
df_product_4['median_price'].round()))
df_product_4['unit_cost'] = (df_product_4['unit_cost']
.mask(df_product_4['unit_cost'].isnull(),
df_product_4['median_cost'].round()))
df_product_4.isnull().sum()
# Example 3 (using fillna and transform)
df_product_4 = df_product.copy()
for x in ['unit_price', 'unit_cost']:
df_product_4[x] = (df_product_4[x]
.fillna(df_product_4.groupby('category_small_cd')[x]
.transform('median')
.round()))
df_product_4.isnull().sum()
```
---
> P-084: For all customers in the customer data frame (df_customer), compute the ratio of 2019 sales to all-period sales. Treat customers with no sales record as 0. Then extract the customers whose computed ratio exceeds 0. Displaying 10 records is sufficient. Also confirm that the created data contains no NA/NaN.
```
df_tmp_1 = df_receipt.query('20190101 <= sales_ymd <= 20191231')
df_tmp_1 = pd.merge(df_customer['customer_id'],
df_tmp_1[['customer_id', 'amount']],
how='left', on='customer_id'). \
groupby('customer_id').sum().reset_index(). \
rename(columns={'amount':'amount_2019'})
df_tmp_2 = pd.merge(df_customer['customer_id'],
df_receipt[['customer_id', 'amount']],
how='left', on='customer_id'). \
groupby('customer_id').sum().reset_index()
df_tmp = pd.merge(df_tmp_1, df_tmp_2, how='inner', on='customer_id')
df_tmp['amount_2019'] = df_tmp['amount_2019'].fillna(0)
df_tmp['amount'] = df_tmp['amount'].fillna(0)
df_tmp['amount_rate'] = df_tmp['amount_2019'] / df_tmp['amount']
df_tmp['amount_rate'] = df_tmp['amount_rate'].fillna(0)
df_tmp.query('amount_rate > 0').head(10)
df_tmp.isnull().sum()
```
---
> P-085: For all customers in the customer data frame (df_customer), link the geocoding data frame (df_geocode) via the postal code (postal_cd) to create a new df_customer_1. When multiple records match, average the longitude (longitude) and latitude (latitude).
```
df_customer_1 = pd.merge(df_customer[['customer_id', 'postal_cd']],
df_geocode[['postal_cd', 'longitude' ,'latitude']],
how='inner', on='postal_cd')
df_customer_1 = df_customer_1.groupby('customer_id'). \
agg({'longitude':'mean', 'latitude':'mean'}).reset_index(). \
rename(columns={'longitude':'m_longitude', 'latitude':'m_latitude'})
df_customer_1 = pd.merge(df_customer, df_customer_1,
how='inner', on='customer_id')
df_customer_1.head(3)
```
---
> P-086: Join the latitude/longitude-annotated customer data frame (df_customer_1) created in the previous exercise with the store data frame (df_store) on the application store code (application_store_cd). Using the application store's latitude (latitude) and longitude (longitude) and the customer's latitude and longitude, compute the distance in km and display it together with the customer ID (customer_id), customer address (address), and store address (address). The simplified formula below is fine, but a library implementing a more accurate method may also be used. Displaying 10 records is sufficient.
$$
\text{Latitude (radians)}: \phi \\
\text{Longitude (radians)}: \lambda \\
\text{Distance } L = 6371 \times \arccos(\sin \phi_1 \sin \phi_2
+ \cos \phi_1 \cos \phi_2 \cos(\lambda_1 - \lambda_2))
$$
```
# Example 1
def calc_distance(x1, y1, x2, y2):
distance = 6371 * math.acos(math.sin(math.radians(y1))
* math.sin(math.radians(y2))
+ math.cos(math.radians(y1))
* math.cos(math.radians(y2))
* math.cos(math.radians(x1) - math.radians(x2)))
return distance
df_tmp = pd.merge(df_customer_1, df_store, how='inner', left_on='application_store_cd', right_on='store_cd')
df_tmp['distance'] = df_tmp[['m_longitude', 'm_latitude','longitude', 'latitude']]. \
apply(lambda x: calc_distance(x[0], x[1], x[2], x[3]), axis=1)
df_tmp[['customer_id', 'address_x', 'address_y', 'distance']].head(10)
# Example 2
def calc_distance_numpy(x1, y1, x2, y2):
x1_r = np.radians(x1)
x2_r = np.radians(x2)
y1_r = np.radians(y1)
y2_r = np.radians(y2)
return 6371 * np.arccos(np.sin(y1_r) * np.sin(y2_r)
+ np.cos(y1_r) * np.cos(y2_r)
* np.cos(x1_r - x2_r))
df_tmp = df_customer_1.merge(df_store,
how='inner',
left_on='application_store_cd',
right_on='store_cd')
df_tmp['distance'] = calc_distance_numpy(df_tmp['m_longitude'],
df_tmp['m_latitude'],
df_tmp['longitude'],
df_tmp['latitude'])
df_tmp[['customer_id', 'address_x', 'address_y', 'distance']].head(10)
```
---
> P-087: In the customer data frame (df_customer), the same customer may be registered multiple times, e.g. due to applications at different stores. Treat customers with the same name (customer_name) and postal code (postal_cd) as identical, and create a deduplicated customer data frame (df_customer_u) with one record per customer. For duplicates, keep the record with the highest total sales amount; for customers with equal totals or no sales record, keep the one with the smallest customer ID (customer_id).
```
df_receipt_tmp = df_receipt.groupby('customer_id') \
.agg(sum_amount=('amount','sum')).reset_index()
df_customer_u = pd.merge(df_customer, df_receipt_tmp,
how='left',
on='customer_id')
df_customer_u['sum_amount'] = df_customer_u['sum_amount'].fillna(0)
df_customer_u = df_customer_u.sort_values(['sum_amount', 'customer_id'],
ascending=[False, True])
df_customer_u.drop_duplicates(subset=['customer_name', 'postal_cd'],
keep='first', inplace=True)
print('df_customer:', len(df_customer),
'df_customer_u:', len(df_customer_u),
'diff:', len(df_customer) - len(df_customer_u))
```
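The sort-then-keep-first trick in P-087 can be checked on a toy frame (the customers below are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"customer_name": ["A", "A", "B"],
                   "postal_cd": ["1", "1", "2"],
                   "customer_id": ["CS002", "CS001", "CS003"],
                   "sum_amount": [500, 500, 100]})
# Highest total first; ties broken by the smallest customer_id
df = df.sort_values(["sum_amount", "customer_id"], ascending=[False, True])
kept = df.drop_duplicates(subset=["customer_name", "postal_cd"], keep="first")
print(kept["customer_id"].tolist())  # ['CS001', 'CS003']
```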
---
> P-088: Based on the data created in the previous exercise, create a data frame (df_customer_n) that adds an integrated dedup ID to the customer data frame. Assign the ID with the following rule:
>
> - Non-duplicated customers: use the customer ID (customer_id)
> - Duplicated customers: use the customer ID of the record kept in the previous exercise
```
df_customer_n = pd.merge(df_customer,
df_customer_u[['customer_name',
'postal_cd', 'customer_id']],
how='inner', on =['customer_name', 'postal_cd'])
df_customer_n.rename(columns={'customer_id_x':'customer_id',
'customer_id_y':'integration_id'}, inplace=True)
print('Difference in ID counts:', len(df_customer_n['customer_id'].unique())
      - len(df_customer_n['integration_id'].unique()))
```
---
> P-Interlude: df_customer_1 and df_customer_n will not be used again, so delete them.
```
del df_customer_1
del df_customer_n
```
---
> P-089: For customers with sales records, we want to split the data into training and test sets for building a prediction model. Split the data randomly in an 8:2 ratio.
```
df_sales = df_receipt.groupby('customer_id').agg({'amount':'sum'}).reset_index()
df_tmp = pd.merge(df_customer, df_sales['customer_id'],
how='inner', on='customer_id')
df_train, df_test = train_test_split(df_tmp, test_size=0.2, random_state=71)
print('Training data ratio: ', len(df_train) / len(df_tmp))
print('Test data ratio: ', len(df_test) / len(df_tmp))
```
---
> P-090: The receipt detail data frame (df_receipt) holds data from January 1, 2017 to October 31, 2019. Aggregate the sales amount (amount) monthly, and create 3 model-building datasets, each with 12 months for training and 6 months for testing.
```
df_tmp = df_receipt[['sales_ymd', 'amount']].copy()
df_tmp['sales_ym'] = df_tmp['sales_ymd'].astype('str').str[0:6]
df_tmp = df_tmp.groupby('sales_ym').agg({'amount':'sum'}).reset_index()
# Writing a function lets many datasets over longer periods be produced in a loop
def split_data(df, train_size, test_size, slide_window, start_point):
train_start = start_point * slide_window
test_start = train_start + train_size
return df[train_start : test_start], df[test_start : test_start + test_size]
df_train_1, df_test_1 = split_data(df_tmp, train_size=12,
test_size=6, slide_window=6, start_point=0)
df_train_2, df_test_2 = split_data(df_tmp, train_size=12,
test_size=6, slide_window=6, start_point=1)
df_train_3, df_test_3 = split_data(df_tmp, train_size=12,
test_size=6, slide_window=6, start_point=2)
df_train_1
df_test_1
```
---
> P-091: From the customer data frame (df_customer), undersample so that the number of customers with sales records and the number without is 1:1.
```
# Example 1 (using imbalanced-learn's RandomUnderSampler)
df_tmp = df_receipt.groupby('customer_id').agg({'amount':'sum'}).reset_index()
df_tmp = pd.merge(df_customer, df_tmp, how='left', on='customer_id')
df_tmp['buy_flg'] = df_tmp['amount'].apply(lambda x: 0 if np.isnan(x) else 1)
print('Count of 0:', len(df_tmp.query('buy_flg == 0')))
print('Count of 1:', len(df_tmp.query('buy_flg == 1')))
positive_count = len(df_tmp.query('buy_flg == 1'))
rs = RandomUnderSampler(random_state=71)
df_sample, _ = rs.fit_resample(df_tmp, df_tmp.buy_flg)
print('Count of 0:', len(df_sample.query('buy_flg == 0')))
print('Count of 1:', len(df_sample.query('buy_flg == 1')))
# Example 2 (using imbalanced-learn's RandomUnderSampler)
df_tmp = df_customer.merge(df_receipt
.groupby('customer_id')['amount'].sum()
.reset_index(),
how='left',
on='customer_id')
df_tmp['buy_flg'] = np.where(df_tmp['amount'].isnull(), 0, 1)
print("buy_flg counts before sampling")
print(df_tmp['buy_flg'].value_counts(), "\n")
positive_count = (df_tmp['buy_flg'] == 1).sum()
rs = RandomUnderSampler(random_state=71)
df_sample, _ = rs.fit_resample(df_tmp, df_tmp.buy_flg)
print("buy_flg counts after sampling")
print(df_sample['buy_flg'].value_counts())
```
---
> P-092: In the customer data frame (df_customer), gender information is held in a denormalized state. Normalize it to third normal form.
```
df_gender = df_customer[['gender_cd', 'gender']].drop_duplicates()
df_customer_s = df_customer.drop(columns='gender')
```
---
> P-093: The product data frame (df_product) holds only the code values of each category, not the category names. Denormalize it in combination with the category data frame (df_category) to create a new product data frame that carries the category names.
```
df_product_full = pd.merge(df_product, df_category[['category_small_cd',
'category_major_name',
'category_medium_name',
'category_small_name']],
how = 'inner', on = 'category_small_cd')
```
---
> P-094: Output the product data with category names created earlier to a file with the following specification. The output path should be under data.
>
> - File format: CSV (comma-separated)
> - With header
> - Character encoding: UTF-8
```
# Example 1
df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.csv',
encoding='UTF-8', index=False)
# Example 2 (with BOM, to prevent garbled characters in Excel)
df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.csv',
encoding='utf_8_sig', index=False)
```
---
> P-095: Output the product data with category names created earlier to a file with the following specification. The output path should be under data.
>
> - File format: CSV (comma-separated)
> - With header
> - Character encoding: CP932
```
df_product_full.to_csv('../data/P_df_product_full_CP932_header.csv',
encoding='CP932', index=False)
```
---
> P-096: Output the product data with category names created earlier to a file with the following specification. The output path should be under data.
>
> - File format: CSV (comma-separated)
> - Without header
> - Character encoding: UTF-8
```
df_product_full.to_csv('../data/P_df_product_full_UTF-8_noh.csv',
header=False ,encoding='UTF-8', index=False)
```
---
> P-097: Load the file created earlier in the following format, create a data frame, and display the first 3 records to confirm it was loaded correctly.
>
> - File format: CSV (comma-separated)
> - With header
> - Character encoding: UTF-8
```
df_tmp = pd.read_csv('../data/P_df_product_full_UTF-8_header.csv',
dtype={'category_major_cd':str,
'category_medium_cd':str,
'category_small_cd':str},
encoding='UTF-8')
df_tmp.head(3)
```
---
> P-098: Load the file created earlier in the following format, create a data frame, and display the first 3 records to confirm it was loaded correctly.
>
> - File format: CSV (comma-separated)
> - Without header
> - Character encoding: UTF-8
```
df_tmp = pd.read_csv('../data/P_df_product_full_UTF-8_noh.csv',
dtype={1:str,
2:str,
3:str},
encoding='UTF-8', header=None)
df_tmp.head(3)
```
---
> P-099: Output the product data with category names created earlier to a file with the following specification. The output path should be under data.
>
> - File format: TSV (tab-separated)
> - With header
> - Character encoding: UTF-8
```
df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.tsv',
sep='\t', encoding='UTF-8', index=False)
```
---
> P-100: Load the file created earlier in the following format, create a data frame, and display the first 3 records to confirm it was loaded correctly.
>
> - File format: TSV (tab-separated)
> - With header
> - Character encoding: UTF-8
```
df_tmp = pd.read_table('../data/P_df_product_full_UTF-8_header.tsv',
dtype={'category_major_cd':str,
'category_medium_cd':str,
'category_small_cd':str},
encoding='UTF-8')
df_tmp.head(3)
```
# That's all 100 exercises. Well done!
# Setup the ABSA Demo
### Step 1 - Install additional pip packages on your compute instance
```
!pip install git+https://github.com/hnky/nlp-architect.git@absa
!pip install spacy==2.1.8
```
### Step 2 - Download Notebooks, Training Data, Training / Inference scripts
```
import azureml
from azureml.core import Workspace, Datastore, Experiment, Environment, Model
import urllib.request
from pathlib import Path
# This will open a device login prompt. Log in with credentials that have access to the workspace.
# Connect to the workspace
ws = Workspace.from_config()
print("Using workspace:",ws.name,"in region", ws.location)
# Connect to the default datastore
ds = ws.get_default_datastore()
print("Datastore:",ds.name)
# Create directories
Path("dataset").mkdir(parents=True, exist_ok=True)
Path("notebooks").mkdir(parents=True, exist_ok=True)
Path("scripts").mkdir(parents=True, exist_ok=True)
Path("temp").mkdir(parents=True, exist_ok=True)
```
The cell below will take some time to run, as it downloads a large dataset plus several code files. Please allow around 10-15 minutes.
```
# Download all files needed
base_link = "https://raw.githubusercontent.com/microsoft/ignite-learning-paths-training-aiml/main/aiml40/absa/"
# Download Data
if not Path("dataset/glove.840B.300d.zip").is_file():
urllib.request.urlretrieve('http://nlp.stanford.edu/data/glove.840B.300d.zip', 'dataset/glove.840B.300d.zip')
urllib.request.urlretrieve(base_link+'../dataset/clothing_absa_train.csv', 'dataset/clothing_absa_train.csv')
urllib.request.urlretrieve(base_link+'../dataset/clothing-absa-validation.json', 'dataset/clothing-absa-validation.json')
urllib.request.urlretrieve(base_link+'../dataset/clothing_absa_train_small.csv', 'dataset/clothing_absa_train_small.csv')
# Download Notebooks
urllib.request.urlretrieve(base_link+'notebooks/absa-hyperdrive.ipynb', 'notebooks/absa-hyperdrive.ipynb')
urllib.request.urlretrieve(base_link+'notebooks/absa.ipynb', 'notebooks/absa.ipynb')
# Download Scripts
urllib.request.urlretrieve(base_link+'scripts/score.py', 'scripts/score.py')
urllib.request.urlretrieve(base_link+'scripts/train.py', 'scripts/train.py')
# Upload data to the data store
ds.upload('dataset', target_path='clothing_data', overwrite=False, show_progress=True)
```
### Step 3 - Setup AMLS
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "absa-cluster"
try:
cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Using compute cluster:', cluster_name)
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D3_V2',
vm_priority='lowpriority',
min_nodes=0,
max_nodes=8)
cluster = ComputeTarget.create(ws, cluster_name, compute_config)
cluster.wait_for_completion(show_output=True)
```
```
# default_exp pds.utils
# default_cls_lvl 3
```
# PDS Utils
> Utilities used by the `pds` sub-package.
```
# hide
from nbverbose.showdoc import show_doc # noqa
# export
from typing import Union
from fastcore.utils import Path
import pandas as pd
import pvl
from planetarypy import utils
# export
class IndexLabel:
"Support working with label files of PDS Index tables."
def __init__(
self,
# Path to the labelfile for a PDS Indexfile.
# The actual table should reside in the same folder to be automatically parsed
# when calling the `read_index_data` method.
labelpath: Union[str, Path],
):
self.path = Path(labelpath)
# Search for the table name pointer and store its key and file path.
pointer = [i for i in self.pvl_lbl if i[0].startswith("^")][0]
self.tablename = pointer[0][1:]
self.index_name = pointer[1]
@property
def index_path(self):
p = self.path.parent / self.index_name
if not p.exists():
import warnings
warnings.warn(
"Fudging path name to lower case, opposing label value. (PDS data inconsistency)"
)
p = self.path.parent / self.index_name.lower()
if not p.exists():
warnings.warn("`index_path` still doesn't exist.")
return p
@property
def pvl_lbl(self):
return pvl.load(str(self.path))
@property
def table(self):
return self.pvl_lbl[self.tablename]
@property
def pvl_columns(self):
return self.table.getlist("COLUMN")
@property
def columns_dic(self):
return {col["NAME"]: col for col in self.pvl_columns}
@property
def colnames(self):
"""Read the columns in an PDS index label file.
The label file for the PDS indices describes the content
of the index files.
"""
colnames = []
for col in self.pvl_columns:
colnames.extend(PVLColumn(col).name_as_list)
return colnames
@property
def colspecs(self):
colspecs = []
columns = self.table.getlist("COLUMN")
for column in columns:
pvlcol = PVLColumn(column)
if pvlcol.items is None:
colspecs.append(pvlcol.colspecs)
else:
colspecs.extend(pvlcol.colspecs)
return colspecs
def read_index_data(self, convert_times=True):
return index_to_df(self.index_path, self, convert_times=convert_times)
# export
def index_to_df(
# Path to the index TAB file
indexpath: Union[str, Path],
# Label object that has both the column names and the columns widths as attributes
# 'colnames' and 'colspecs'
label: IndexLabel,
# Switch to control if to convert columns with "TIME" in name (unless COUNT is as well in name) to datetime
convert_times=True,
):
"""The main reader function for PDS Indexfiles.
In conjunction with an IndexLabel object that figures out the column widths,
this reader should work for all PDS TAB files.
"""
indexpath = Path(indexpath)
df = pd.read_fwf(
indexpath, header=None, names=label.colnames, colspecs=label.colspecs
)
if convert_times:
for column in [col for col in df.columns if "TIME" in col]:
if column in ["LOCAL_TIME", "DWELL_TIME"]:
continue
try:
df[column] = pd.to_datetime(df[column])
except ValueError:
df[column] = pd.to_datetime(
df[column], format=utils.nasa_dt_format_with_ms, errors="coerce"
)
except KeyError:
raise KeyError(f"{column} not in {df.columns}")
print("Done.")
return df
# export
class PVLColumn:
"Manages just one of the columns in a table that is described via PVL."
def __init__(self, pvlobj):
self.pvlobj = pvlobj
@property
def name(self):
return self.pvlobj["NAME"]
@property
def name_as_list(self):
"needs to return a list for consistency for cases when it's an array."
if self.items is None:
return [self.name]
else:
return [self.name + "_" + str(i + 1) for i in range(self.items)]
@property
def start(self):
"Decrease by one as Python is 0-indexed."
return self.pvlobj["START_BYTE"] - 1
@property
def stop(self):
return self.start + self.pvlobj["BYTES"]
@property
def items(self):
return self.pvlobj.get("ITEMS")
@property
def item_bytes(self):
return self.pvlobj.get("ITEM_BYTES")
@property
def item_offset(self):
return self.pvlobj.get("ITEM_OFFSET")
@property
def colspecs(self):
if self.items is None:
return (self.start, self.stop)
else:
i = 0
bucket = []
for _ in range(self.items):
off = self.start + self.item_offset * i
bucket.append((off, off + self.item_bytes))
i += 1
return bucket
def decode(self, linedata):
if self.items is None:
start, stop = self.colspecs
return linedata[start:stop]
else:
bucket = []
for (start, stop) in self.colspecs:
bucket.append(linedata[start:stop])
return bucket
def __repr__(self):
return self.pvlobj.__repr__()
# export
def decode_line(
linedata: str, # One line of a .tab data file
labelpath: Union[
str, Path
], # Path to the appropriate label that describes the data.
):
"Decode one line of tabbed data with the appropriate label file."
label = IndexLabel(labelpath)
for column in label.pvl_columns:
pvlcol = PVLColumn(column)
print(pvlcol.name, pvlcol.decode(linedata))
# export
def find_mixed_type_cols(
# Dataframe to be searched for mixed data-types
df: pd.DataFrame,
# Switch to control if NaN values in these problem columns should be replaced by the string 'UNKNOWN'
fix: bool = True,
) -> list: # List of column names that have data type changes within themselves.
"""For a given dataframe, find the columns that are of mixed type.
Tool to help with the performance warning when trying to save a pandas DataFrame as a HDF.
When a column changes datatype somewhere, pickling occurs, slowing down the reading process of the HDF file.
"""
result = []
for col in df.columns:
weird = (df[[col]].applymap(type) != df[[col]].iloc[0].apply(type)).any(axis=1)
if len(df[weird]) > 0:
print(col)
result.append(col)
if fix is True:
for col in result:
df[col].fillna("UNKNOWN", inplace=True)
return result
# export
def fix_hirise_edrcumindex(
infname: Union[str, Path], # Path to broken EDRCUMINDEX.TAB
outfname: Union[str, Path], # Path where to store the fixed TAB file
):
"""Fix HiRISE EDRCUMINDEX.
The HiRISE EDRCUMINDEX has some broken lines where the SCAN_EXPOSURE_DURATION is of format
F10.4 instead of the defined F9.4.
This function simply replaces those incidences with one less decimal fraction, so 20000.0000
becomes 20000.000.
"""
with open(str(infname)) as f:
        with open(str(outfname), "w") as newf:
for line in tqdm(f):
exp = line.split(",")[21]
if float(exp) > 9999.999:
# catching the return of write into dummy variable
_ = newf.write(line.replace(exp, exp[:9]))
else:
_ = newf.write(line)
```
| github_jupyter |
# **Built in Functions**
# **bool()**
Empty values and zeros are considered False; everything else is considered True. This is Python's "truth value testing": the rules that decide whether a value counts as True or False.
```
print(bool(0))
print(bool(""))
print(bool(None))
print(bool(1))
print(bool(-100))
print(bool(13.5))
print(bool("teste"))
print(bool(True))
```
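The same rule extends to containers: empty ones are falsy, non-empty ones are truthy, even when their elements are falsy.

```python
# Empty containers are falsy; non-empty containers are truthy,
# even when the elements themselves are falsy.
print(bool([]))    # False
print(bool({}))    # False
print(bool(()))    # False
print(bool([0]))   # True — the list itself is non-empty
```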
# **f'' / .format()**
```
# before Python 3.6
a = ('Hello World!')
print('----> {} <----'.format(a))
# or, from 3.6 on, an f-string
print(f'----> {a} <----')
nome = 'José'
idade = 23
salario = 987.30
print(f'O {nome} tem {idade} anos e ganha R${salario:.2f}.') #Python 3.6+
print(f'O {nome:-^20} tem {idade} anos e ganha R${salario:.2f}.') #Python 3.6+
print(f'O {nome:-<20} tem {idade} anos e ganha R${salario:.2f}.') #Python 3.6+
print(f'O {nome:->20} tem {idade} anos e ganha R${salario:.2f}.') #Python 3.6+
print('O {} tem {} anos e ganha R${:.2f}.'.format(nome, idade, salario)) #Python 3
print('O %s tem %d anos.' % (nome, idade)) #Python 2
# formatting with fill characters + decimal places
vlr = 120
vlr2 = 10.1
print(f'{vlr:->20.2f}')
print(f'{vlr2:-^20.2f}')
print(f'R${vlr:5.2f}')
print(f'R${vlr:6.2f}')
print(f'R${vlr:7.2f}')
print(f'R${vlr:8.2f}')
print(f'R${vlr:08.2f}')
print(f'R${vlr:010.4f}') # f for floats
print(f'R${vlr:010d}') # d for integers
print(f'R${vlr:04d}')
print(f'R${vlr:4d}')
n1, n2, n3, n4 = 100, 1, 00.758, 15.77
print(f'n1 = {n1:6}\nn1 = {n1:06}') # total width = 6, with or without zero padding
print(f'n2 = {n2:06}')
print(f'n2 = {n2: 6}')
print(f'n3 = {n3:06.3f}') # variable + ':' + total width + '.' + digits to the right of the decimal point
print(f'n4 = {n4:06.3f} ou {n4:.2f}')
# formatting with a tab \t
for c in range(0,5):
print(f'O {c}º valor recebido é \t R$1000,00')
print('Agora sem o tab')
print(f'O {c}º valor recebido é R$1000,00')
print('-' * 35)
```
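The specifiers used above follow Python's format-spec mini-language, roughly `[fill][align][width][.precision][type]`; one line can exercise all of the pieces at once:

```python
value = 3.14159
# fill='*', align='^' (center), width=12, precision=.2, type='f'
print(f'{value:*^12.2f}')  # ****3.14****
```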
# **.find() / .rfind()**
```
frase = ' Curso em Vídeo Python '
print(frase.find('Curso'))
print('A letra "o" aparece a ultima vez na posição {}.'.format(frase.lower().rfind('o')+1))
```
# **print()**
`value` is the value we want to print; the ellipsis indicates the function can receive more than one value, separated by commas.
`sep` is the separator placed between the values; by default it is a single space.
`end` is what is emitted at the end of the call; by default it is a line break (`\n`).
## Formatting print
```
nome = 'Livio Alvarenga'
print(f'Prazer em te conhecer\n{nome}!') # \n inserts a line break
print(f'Prazer em te conhecer {nome:20}!')
print(f'Prazer em te conhecer {nome:>20}!')
print(f'Prazer em te conhecer {nome:<20}!')
print(f'Prazer em te conhecer {nome:^20}!')
print(f'Prazer em te conhecer {nome:=^21}!')
print(f'{"FIM DO PROGRAMA":-^30}')
print(f'{"FIM DO PROGRAMA":^30}')
frase = ' Curso em Vídeo Python '
print(frase[3])
print(frase[:3])
print(frase[3:])
print(frase[0:10:2])
print("""imprimindo um texto longo!!! imprimindo um texto longo!!!
imprimindo um texto longo!!! imprimindo um texto longo!!!
imprimindo um texto longo!!! imprimindo um texto longo!!!""")
```
## print sep and end
```
# print with end
t1 = 't1'
t2 = 't2'
t3 = 't3'
print('{} --> {}'.format(t1, t2), end='')
print(f' --> {t3}', end='')
print(' --> FIM')
print("Brasil", "ganhou", 5, "titulos mundiais", sep="-")
```
## Printing with pprint()
```
from pprint import pprint
# Printing with pprint + width to control line wrapping
cliente = {'nome': 'Livio', 'Idade': 40, 'Cidade': 'Belo Horizonte'}
pprint(cliente, width=40)
```
# **round()**
```
# Returns the value rounded to the given number of decimal places
round(3.14151922,2)
```
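One detail worth knowing: Python 3's `round()` uses banker's rounding (round half to even), and binary floating point can make some results look surprising.

```python
print(round(2.5))       # 2 — half rounds to the nearest even integer
print(round(3.5))       # 4
print(round(2.675, 2))  # 2.67 — 2.675 is stored as 2.67499..., not a rounding bug
```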
# os.path.isdir
This method returns a boolean, True or False, indicating whether the directory exists.
```
from os.path import isdir
diretorio = "c:\\"
if isdir(diretorio):
print(f"O diretório {diretorio} existe!")
else:
print("O diretório não existe!")
diretorio = "xx:\\"
if isdir(diretorio):
print(f"O diretório {diretorio} existe!")
else:
print("O diretório não existe!")
```
# Validation of gf_eia923
This notebook runs sanity checks on the Generation Fuel data reported in EIA Form 923. These are the same tests that PyTest runs as part of the gf_eia923 data validations. The notebook and visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest-based data validations fail for some reason.
```
%load_ext autoreload
%autoreload 2
import sys
import pandas as pd
import sqlalchemy as sa
import pudl
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_settings
```
## Get the original EIA 923 data
First we pull the original (post-ETL) EIA 923 data out of the database. We will use the values in this dataset as a baseline for checking that latter aggregated data and derived values remain valid. We will also eyeball these values here to make sure they are within the expected range. This may take a minute or two depending on the speed of your machine.
```
pudl_out_orig = pudl.output.pudltabl.PudlTabl(pudl_engine, freq=None)
gf_eia923_orig = pudl_out_orig.gf_eia923()
```
# Validation Against Fixed Bounds
Some of the variables reported in this table have a fixed range of reasonable values, like the heat content per unit of a given fuel type. These variables can be tested for validity against external standards directly. In general we have two kinds of tests in this section:
* **Tails:** are the extreme values too extreme? Typically, this is at the 5% and 95% level, but depending on the distribution, sometimes other thresholds are used.
* **Middle:** Is the central value of the distribution where it should be?
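A minimal sketch of such a tail check, with hypothetical bounds and a made-up sample (the real validations live in `pudl.validate`):

```python
import numpy as np

def tails_within_bounds(values, low, high, q_low=0.05, q_high=0.95):
    """Check that the 5%/95% quantiles of a sample stay inside [low, high]."""
    lo, hi = np.quantile(values, [q_low, q_high])
    return bool(low <= lo and hi <= high)

sample = np.array([18.0, 19.3, 20.5, 22.1, 24.0])  # illustrative heat-content values
print(tails_within_bounds(sample, 6.5, 29.0))   # True
print(tails_within_bounds(sample, 6.5, 20.0))   # False — the upper tail escapes
```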
### Fields that need checking:
These are all contained in the `frc_eia923` table data validations, and those should just be re-used if possible. Ugh, names not all the same though. Annoying.
* `fuel_mmbtu_per_unit` (BIT, SUB, LIG, coal, DFO, oil, gas)
```
gf_eia923_orig.sample(10)
```
## Coal Heat Content
```
pudl.validate.plot_vs_bounds(gf_eia923_orig, pudl.validate.gf_eia923_coal_heat_content)
```
## Oil Heat Content
```
pudl.validate.plot_vs_bounds(gf_eia923_orig, pudl.validate.gf_eia923_oil_heat_content)
```
## Gas Heat Content
```
pudl.validate.plot_vs_bounds(gf_eia923_orig, pudl.validate.gf_eia923_gas_heat_content)
```
# Validate Monthly Aggregation
It's possible that the distribution will change as a function of aggregation, or we might make an error in the aggregation process. These tests check that a collection of quantiles for the original and the data aggregated by month have internally consistent values.
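The intuition can be sketched on synthetic data: central quantiles of a well-behaved series should not move much under monthly averaging (the column name and bound here are illustrative, not PUDL's actual thresholds):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
daily = pd.DataFrame(
    {"fuel_mmbtu_per_unit": rng.normal(20.0, 1.0, 365)},
    index=pd.date_range("2020-01-01", periods=365, freq="D"),
)
monthly = daily.resample("MS").mean()  # aggregate to month-start frequency

median_daily = daily["fuel_mmbtu_per_unit"].quantile(0.5)
median_monthly = monthly["fuel_mmbtu_per_unit"].quantile(0.5)
print(abs(median_daily - median_monthly) < 0.5)  # medians stay close under aggregation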
```
pudl_out_month = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="MS")
gf_eia923_month = pudl_out_month.gf_eia923()
pudl.validate.plot_vs_agg(gf_eia923_orig, gf_eia923_month, pudl.validate.gf_eia923_agg)
```
# Validate Annual Aggregation
It's possible that the distribution will change as a function of aggregation, or we might make an error in the aggregation process. These tests check that a collection of quantiles for the original and the data aggregated by year have internally consistent values.
```
pudl_out_year = pudl.output.pudltabl.PudlTabl(pudl_engine, freq="AS")
gf_eia923_year = pudl_out_year.gf_eia923()
pudl.validate.plot_vs_agg(gf_eia923_orig, gf_eia923_year, pudl.validate.gf_eia923_agg)
```

<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#(a)" data-toc-modified-id="(a)-0.1"><span class="toc-item-num">0.1 </span>(a)</a></span></li><li><span><a href="#(b)" data-toc-modified-id="(b)-0.2"><span class="toc-item-num">0.2 </span>(b)</a></span></li><li><span><a href="#(c)" data-toc-modified-id="(c)-0.3"><span class="toc-item-num">0.3 </span>(c)</a></span></li><li><span><a href="#(d)" data-toc-modified-id="(d)-0.4"><span class="toc-item-num">0.4 </span>(d)</a></span></li></ul></li></ul></div>
Use Newton’s method to find solutions accurate to within $10^{−4}$ for the following problems.
```
import numpy as np
from numpy import linalg
from abc import abstractmethod
import pandas as pd
import math
pd.options.display.float_format = '{:,.8f}'.format
np.set_printoptions(suppress=True, precision=8)
TOR = pow(10.0, -4)
MAX_ITR = 150
class NewtonMethod(object):
def __init__(self):
return
@abstractmethod
def f(self, x):
return NotImplementedError('Implement f()!')
@abstractmethod
def jacobian(self, x):
return NotImplementedError('Implement jacobian()!')
@abstractmethod
def run(self, x):
return NotImplementedError('Implement run()!')
```
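Each subclass below only fills in `f` and `jacobian`; the shared update is just $x_{k+1} = x_k - f(x_k)/f'(x_k)$. A stripped-down, framework-free sketch of that loop (the helper name `newton_root` is ours, not from the notebook):

```python
import math

def newton_root(f, fprime, x0, tol=1e-4, max_iter=150):
    """Plain Newton iteration: stop when the step size drops below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Problem (c): x - cos(x) = 0 on [0, pi/2]
root = newton_root(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), math.pi / 4)
print(round(root, 4))  # 0.7391
```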
## (a)
$$x^3 − 2x^2 − 5 = 0, [1, 4]$$
```
class Newton1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return pow(x, 3) - 2 * pow(x, 2) - 5
def jacobian(self, x):
return 3 * pow(x, 2) - 4 * x
def run(self, x0):
df = pd.DataFrame(columns=['f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = x - self.f(x) / self.jacobian(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
Newton1D().run(2.5).astype(np.float64)
```
## (b)
$$x^3 + 3x^2 − 1 = 0, [-3, -2]$$
```
class Newton1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return pow(x, 3) + 3 * pow(x, 2) - 1
def jacobian(self, x):
        return 3 * pow(x, 2) + 6 * x  # d/dx (x^3 + 3x^2 - 1) = 3x^2 + 6x
def run(self, x0):
df = pd.DataFrame(columns=['f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = x - self.f(x) / self.jacobian(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
Newton1D().run(-2.5).astype(np.float64)
```
## (c)
$$x−\cos x=0, [0, \frac{\pi}{2}]$$
```
class Newton1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return x - math.cos(x)
def jacobian(self, x):
return 1 + math.sin(x)
def run(self, x0):
df = pd.DataFrame(columns=['f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = x - self.f(x) / self.jacobian(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
Newton1D().run(math.pi / 4.0).astype(np.float64)
```
## (d)
$$x − 0.8 − 0.2 \sin x = 0, [0, \frac{\pi}{2}]$$
```
class Newton1D(NewtonMethod):
def __init__(self):
super(NewtonMethod, self).__init__()
def f(self, x):
return x - 0.8 - 0.2 * math.sin(x)
def jacobian(self, x):
return 1 - 0.2 * math.cos(x)
def run(self, x0):
df = pd.DataFrame(columns=['f(x)'])
row = len(df)
x = x0
df.loc[row] = [x]
for k in range(MAX_ITR):
try:
y = x - self.f(x) / self.jacobian(x)
except ValueError:
break
residual = math.fabs(x - y)
x = y
row = len(df)
df.loc[row] = [y]
if residual < TOR or x > 1e9:
break
return df
Newton1D().run(math.pi / 4.0).astype(np.float64)
```
```
import torch
from torch.autograd import Variable
from torch import nn
import matplotlib.pyplot as plt
%matplotlib inline
torch.manual_seed(3)
```
# make data
```
x_train = torch.Tensor([[1],[2],[3]])
y_train = torch.Tensor([[1],[2],[3]])
x, y = Variable(x_train), Variable(y_train)
plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()
```
# Naive Model
## Define Linear model
```
x, y
W = Variable(torch.rand(1,1))
W
x.mm(W)
```
## Define cost function
$$\text{loss}(x, y) = \frac{1}{n}\sum_{i} |x_i - y_i|^2$$
```
cost_func = nn.MSELoss()
cost_func
```
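`nn.MSELoss` computes the quantity in the formula above; a quick NumPy check of the same arithmetic:

```python
import numpy as np

prediction = np.array([1.2, 1.9, 3.1])
target = np.array([1.0, 2.0, 3.0])
mse = np.mean((prediction - target) ** 2)  # (0.04 + 0.01 + 0.01) / 3
print(round(float(mse), 6))  # 0.02
```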
## Training Linear Regression
```
plt.ion()
lr = 0.01
for step in range(300):
prediction = x.mm(W)
cost = cost_func(prediction, y)
gradient = (prediction-y).view(-1).dot(x.view(-1)) / len(x)
W -= lr * gradient
if step % 10 == 0:
plt.cla()
plt.scatter(x.data.numpy(), y.data.numpy())
plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-')
        plt.title('step %d, cost=%.4f, w=%.4f, grad=%.4f' % (step, cost.data, W.data[0], gradient.data))
plt.show()
# if step %10 == 0:
# print(step, "going cost")
# print(cost)
# print((prediction-y).view(-1))
# print((x.view(-1)))
# print(gradient)
# print(W)
plt.ioff()
x_test = Variable(torch.Tensor([[5]]))
y_test = x_test.mm(W)
y_test
```
# w/ nn Module
## Define Linear Model
```
model = nn.Linear(1, 1, bias=True)
print(model)
model.weight, model.bias
cost_func = nn.MSELoss()
for i in model.parameters():
print(i)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
## Training w/ nn module
```
model(x)
plt.ion()
for step in range(300):
prediction = model(x)
cost = cost_func(prediction, y)
optimizer.zero_grad()
cost.backward()
optimizer.step()
if step % 10 == 0:
plt.cla()
plt.scatter(x.data.numpy(), y.data.numpy())
plt.plot(x.data.numpy(), prediction.data.numpy(), 'b--')
plt.title('cost=%.4f, w=%.4f, b=%.4f' % (cost.data,model.weight.data[0][0],model.bias.data))
plt.show()
plt.ioff()
x_test = Variable(torch.Tensor([[7]]))
y_test = model(x_test)
print('input : %.4f, output:%.4f' % (x_test.data[0][0], y_test.data[0][0]))
for step in range(300):
prediction = model(x)
cost = cost_func(prediction, y)
optimizer.zero_grad()
cost.backward()
optimizer.step()
x_test = Variable(torch.Tensor([[7]]))
y_test = model(x_test)
print('input : %.4f, output:%.4f' % (x_test.data[0][0], y_test.data[0][0]))
model.weight, model.bias
```
### Has "nn.MSELoss()" Convex Cost Space?
```
W_val, cost_val = [], []
for i in range(-30, 51):
W = i * 0.1
model.weight.data.fill_(W)
cost = cost_func(model(x),y)
W_val.append(W)
cost_val.append(cost.data)
plt.plot(W_val, cost_val, 'ro')
plt.show()
```
# Multivariate Linear model
```
import numpy as np
```
## make Data
```
xy = np.loadtxt('data-01-test-score.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]
print('shape: ', x_data.shape, '\nlength:', len(x_data), '\n', x_data )
print('shape: ', y_data.shape, '\nlength:', len(y_data), '\n', y_data )
x, y = Variable(torch.from_numpy(x_data)), Variable(torch.from_numpy(y_data))
x, y
```
## make Model
```
mv_model = nn.Linear(3, 1, bias=True)
print(mv_model)
print('weigh : ', mv_model.weight)
print('bias : ', mv_model.bias)
cost_func = nn.MSELoss()
optimizer = torch.optim.SGD(mv_model.parameters(), lr=1e-5)
```
## Training Model
```
for step in range(2000):
optimizer.zero_grad()
prediction = mv_model(x)
cost = cost_func(prediction, y)
cost.backward()
optimizer.step()
if step % 50 == 0:
print(step, "Cost: ", cost.data.numpy(), "\nPrediction:\n", prediction.data.t().numpy())
mv_model.state_dict()
```
## test
```
print("Model score : ",mv_model(Variable(torch.Tensor([[73,80,75]]))).data.numpy())
print("Real score : 73,80,75,152")
accuracy_list = []
for i,real_y in enumerate(y):
accuracy = (mv_model((x[i])).data.numpy() - real_y.data.numpy())
accuracy_list.append(np.absolute(accuracy))
for accuracy in accuracy_list:
print(accuracy)
print("sum accuracy : ",sum(accuracy_list))
print("avg accuracy : ",sum(accuracy_list)/len(y))
```
```
import random
import pennylane as qml
from pennylane import numpy as np
import sys
sys.path.insert(0,'..')
from maskit.datasets import load_data
# Setting seeds for reproducible results
np.random.seed(1337)
random.seed(1337)
```
# Loading the data
The data of interest is the MNIST dataset. Since we want reproducible results, we
will first use the option `shuffle=False`. For the rest of the parameters,
we go with the default options. This gives us data for two classes, the
written numbers 6 and 9. We also only get a limited number of samples, that is
100 samples for training and 50 for testing. For further details see the
appropriate docstring.
```
data = load_data("mnist", shuffle=False, target_length=2)
```
# Setting up a Variational Quantum Circuit for training
There is an example on the [PennyLane website](https://pennylane.ai/qml/demos/tutorial_variational_classifier.html#iris-classification) for iris data showing a setup for a variational classifier. That is variational quantum circuits that can be trained from labelled (classical) data.
```
wires = 4
layers = 4
epochs = 5
parameters = np.random.uniform(low=-np.pi, high=np.pi, size=(layers, wires, 2))
def variational_circuit(params):
for layer in range(layers):
for wire in range(wires):
qml.RX(params[layer][wire][0], wires=wire)
qml.RY(params[layer][wire][1], wires=wire)
for wire in range(0, wires - 1, 2):
qml.CZ(wires=[wire, wire + 1])
for wire in range(1, wires - 1, 2):
qml.CZ(wires=[wire, wire + 1])
return qml.expval(qml.PauliZ(0))
def variational_training_circuit(params, data):
qml.templates.embeddings.AngleEmbedding(
features=data, wires=range(wires), rotation="X"
)
return variational_circuit(params)
dev = qml.device('default.qubit', wires=wires, shots=1000)
circuit = qml.QNode(func=variational_circuit, device=dev)
training_circuit = qml.QNode(func=variational_training_circuit, device=dev)
circuit(parameters)
training_circuit(parameters, data.train_data[0])
print(training_circuit.draw())
# some helpers
def correctly_classified(params, data, target):
prediction = training_circuit(params, data)
if prediction < 0 and target[0] > 0:
return True
elif prediction > 0 and target[1] > 0:
return True
return False
def overall_cost_and_correct(cost_fn, params, data, targets):
cost = correct_count = 0
for datum, target in zip(data, targets):
cost += cost_fn(params, datum, target)
correct_count += int(correctly_classified(params, datum, target))
return cost, correct_count
# Playing with different cost functions
def crossentropy_cost(params, data, target):
prediction = training_circuit(params, data)
scaled_prediction = prediction + 1 / 2
predictions = np.array([1 - scaled_prediction, scaled_prediction])
    return cross_entropy(predictions, target)  # NOTE: assumes a cross_entropy(predictions, target) helper is defined elsewhere
def distributed_cost(params, data, target):
"""Cost function distributes probabilities to both classes."""
prediction = training_circuit(params, data)
scaled_prediction = prediction + 1 / 2
predictions = np.array([1 - scaled_prediction, scaled_prediction])
return np.sum(np.abs(target - predictions))
def cost(params, data, target):
"""Cost function penalizes choosing wrong class."""
prediction = training_circuit(params, data)
predictions = np.array([0, prediction]) if prediction > 0 else np.array([prediction * -1, 0])
return np.sum(np.abs(target - predictions))
optimizer = qml.AdamOptimizer()
cost_fn = cost
start_cost, correct_count = overall_cost_and_correct(cost_fn, parameters, data.test_data, data.test_target)
print(f"start cost: {start_cost}, with {correct_count}/{len(data.test_target)} correct samples")
params = parameters.copy()
for _ in range(epochs):
for datum, target in zip(data.train_data, data.train_target):
params = optimizer.step(lambda weights: cost_fn(weights, datum, target), params)
cost, correct_count = overall_cost_and_correct(cost_fn, params, data.test_data, data.test_target)
print(f"epoch{_} cost: {cost}, with {correct_count}/{len(data.test_target)} correct samples")
final_cost, correct_count = overall_cost_and_correct(cost_fn, params, data.test_data, data.test_target)
print(f"final cost: {final_cost}, with {correct_count}/{len(data.test_target)} correct samples")
```
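The decision rule in `correctly_classified` is just the sign of the expectation value; isolated from the quantum device, it reads:

```python
def predicted_class(expval):
    """Map a PauliZ expectation in [-1, 1] to a class index: negative → class 0."""
    return 0 if expval < 0 else 1

print(predicted_class(-0.3))  # 0
print(predicted_class(0.7))   # 1
```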
# Import Dependencies
```
from config import api_key
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import datetime
import json
```
# Use API to get .json
```
endpoint = 'breweries'
page = 1
url = f"https://sandbox-api.brewerydb.com/v2/{endpoint}/?key={api_key}&p={page}&withLocations=Y&withSocialAccounts=Y"
brewery_data = requests.get(url).json()
#print(json.dumps(brewery_data, indent=4, sort_keys=True))
```
# Create DataFrame
- Initially, we pull just a few interesting columns for the dataframe, most importantly, the established dates and lat/lon coordinates for each brewery
- We will add distance columns later after doing some math
- Change the Established Date column to numeric in order to use in the scatter plot
```
brewery_dict = []
for result in range(0,19):
try:
brewery_info = {
'Brewery Name': brewery_data['data'][result]['name'],
'Brewery ID': brewery_data['data'][result]['id'],
'Established Date': brewery_data['data'][result]['established'],
'Is in business?': brewery_data['data'][result]['isInBusiness'],
'Website': brewery_data['data'][result]['website'],
'Country': brewery_data['data'][result]['locations'][0]['country']['isoCode'],
'City':brewery_data['data'][result]['locations'][0]['locality'],
'Latitude':brewery_data['data'][result]['locations'][0]['latitude'],
'Longitude':brewery_data['data'][result]['locations'][0]['longitude'],
'Primary Location':brewery_data['data'][result]['locations'][0]['isPrimary'],
'Distance from Chicago (km)':'',
'Distance from Pottsville (km)':''
}
    except (KeyError, IndexError):
        print('id not found')
    else:
        brewery_dict.append(brewery_info)
brewery_df = pd.DataFrame(brewery_dict)
brewery_df['Established Date']=pd.to_numeric(brewery_df['Established Date'])
#brewery_df
```
# Determine Distances from Chicago
- use geopy to determine distances via lat/long data
- Chicago is one of the hot-spots for early American breweries, made possible by the German immigrant community
- Pottsville (Becky's hometown) is home to the oldest brewery in America - Yuengling!
- update the dataframe, clean it and export as a csv
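geopy's `distance` uses a precise ellipsoid model; as a sanity check, a stdlib-only haversine approximation (helper name is ours) gives roughly the same Chicago–Pottsville figure:

```python
import math

def haversine_km(coord1, coord2):
    """Great-circle distance between (lat, lon) pairs, assuming a spherical Earth."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*coord1, *coord2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

chicago = (41.8781, -87.6298)
pottsville = (40.6856, -76.1955)
print(round(haversine_km(chicago, pottsville)))  # roughly 964 km
```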
```
#!pip install geopy
import geopy.distance
Chi_coords = (41.8781, -87.6298)
Pottsville_coords = (40.6856, -76.1955)
for x in range(0,19):
Brewery_coords = (brewery_df['Latitude'][x], brewery_df['Longitude'][x])
brewery_df['Distance from Chicago (km)'][x] = geopy.distance.distance(Chi_coords, Brewery_coords).km
brewery_df['Distance from Pottsville (km)'][x] = geopy.distance.distance(Pottsville_coords, Brewery_coords).km
brewery_df = brewery_df.drop_duplicates(subset=['Brewery ID'], keep='first')
brewery_df
brewery_df.to_csv("data/brewery_data.csv", encoding="utf-8", index=False)
```
# Figures
- I expect a greater number of older breweries closer to Chicago, given that some of the first instances of brewing in America occurred here.
- With so few breweries available for free (boo sandbox), the scatter plot looks a little sparse. However, the general trend gives us preliminary data suggesting that there may be a correlation! If I wanted to do more with this, this would be good enough to convince me to splurge the $20 for full access
- plot for Pottsville is just for fun
```
#Chicago
plt.scatter(brewery_df['Distance from Chicago (km)'], brewery_df['Established Date'],
alpha=0.5, edgecolor ='black', color="blue",s=100)
#Chart elements
plt.title(f"Distance from Chicago vs. Established Year")
plt.xlabel('Distance from Chicago (km)')
plt.ylabel('Established Year')
plt.grid(True)
#Save and print
plt.savefig("images/Distance from Chicago vs. Established Year.png")
plt.show()
#Pottsville
plt.scatter(brewery_df['Distance from Pottsville (km)'], brewery_df['Established Date'], alpha=0.5, edgecolor ='black', color="red",s=100)
#Chart elements
plt.title(f"Distance from Pottsville vs. Established Year")
plt.xlabel('Distance from Pottsville (km)')
plt.ylabel('Established Year')
plt.grid(True)
#Save and print
#plt.savefig("images/Distance from Pottsville vs. Established Year.png")
plt.show()
#Empty Plot
plt.scatter(brewery_df['Distance from Chicago (km)'], brewery_df['Established Date'], alpha=0.5, edgecolor ='none', color="none",s=100)
#Chart elements
plt.title(f"Distance from Chicago vs. Established Year")
plt.xlabel('Distance from Chicago (km)')
plt.ylabel('Established Year')
plt.grid(True)
#Save and print
plt.savefig("images/Empty plot.png")
plt.show()
```
# What's this TensorFlow business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you choose to work with that notebook).
#### What is it?
TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors, which are n-dimensional arrays analogous to the numpy ndarray.
#### Why?
* Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
## How will I learn TensorFlow?
TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).
Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
**NOTE: This notebook is meant to teach you the latest version of Tensorflow 2.0. Most examples on the web today are still in 1.x, so be careful not to confuse the two when looking up documentation**.
## Install Tensorflow 2.0
Tensorflow 2.0 is not yet a fully stable release, but it's already usable and more intuitive than TF 1.x. Please make sure you have it installed before moving on in this notebook! Here are some steps to get started:
1. Have the latest version of Anaconda installed on your machine.
2. Create a new conda environment starting from Python 3.7. In this setup example, we'll call it `tf_20_env`.
3. Run the command: `source activate tf_20_env`
4. Then pip install TF 2.0 as described here: https://www.tensorflow.org/install/pip
A guide on creating Anaconda environments: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/
This will give you a new environment to play in TF 2.0. Generally, if you plan to also use TensorFlow in your other projects, you might also want to keep a separate Conda environment or virtualenv in Python 3.7 that has TensorFlow 1.9, so you can switch back and forth at will.
# Table of Contents
This notebook has 5 parts. We will walk through TensorFlow at **three different levels of abstraction**, which should help you better understand it and prepare you for working on your project.
1. Part I, Preparation: load the CIFAR-10 dataset.
2. Part II, Barebone TensorFlow: **Abstraction Level 1**, we will work directly with low-level TensorFlow graphs.
3. Part III, Keras Model API: **Abstraction Level 2**, we will use `tf.keras.Model` to define arbitrary neural network architecture.
4. Part IV, Keras Sequential + Functional API: **Abstraction Level 3**, we will use `tf.keras.Sequential` to define a linear feed-forward network very conveniently, and then explore the functional libraries for building unique and uncommon models that require more flexibility.
5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features.
We will discuss Keras in more detail later in the notebook.
Here is a table of comparison:
| API | Flexibility | Convenience |
|---------------|-------------|-------------|
| Barebone | High | Low |
| `tf.keras.Model` | High | Medium |
| `tf.keras.Sequential` | Low | High |
# Part I: Preparation
First, we load the CIFAR-10 dataset. This might take a few minutes to download the first time you run it, but after that the files should be cached on disk and loading should be faster.
In previous parts of the assignment we used CS231N-specific code to download and read the CIFAR-10 dataset; however the `tf.keras.datasets` package in TensorFlow provides prebuilt utility functions for loading many common datasets.
For the purposes of this assignment we will still write our own code to preprocess the data and iterate through it in minibatches. The `tf.data` package in TensorFlow provides tools for automating this process, but working with this package adds extra complication and is beyond the scope of this notebook. However using `tf.data` can be much more efficient than the simple approach used in this notebook, so you should consider using it for your project.
```
import os
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
%matplotlib inline
print(tf.__version__) # need tf 2.0
def load_cifar10(num_training=49000, num_validation=1000, num_test=10000):
"""
Fetch the CIFAR-10 dataset from the web and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 dataset and use appropriate data types and shapes
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean pixel and divide by std
mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean_pixel) / std_pixel
X_val = (X_val - mean_pixel) / std_pixel
X_test = (X_test - mean_pixel) / std_pixel
return X_train, y_train, X_val, y_val, X_test, y_test
# If there are errors with SSL downloading involving self-signed certificates,
# it may be that your Python version was recently installed on the current machine.
# See: https://github.com/tensorflow/tensorflow/issues/10779
# To fix, run the command: /Applications/Python\ 3.7/Install\ Certificates.command
# ...replacing paths as necessary.
# Invoke the above function to get our data.
NHW = (0, 1, 2)
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape, y_train.dtype)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
class Dataset(object):
def __init__(self, X, y, batch_size, shuffle=False):
"""
Construct a Dataset object to iterate over data X and labels y
Inputs:
- X: Numpy array of data, of any shape
- y: Numpy array of labels, of any shape but with y.shape[0] == X.shape[0]
- batch_size: Integer giving number of elements per minibatch
- shuffle: (optional) Boolean, whether to shuffle the data on each epoch
"""
assert X.shape[0] == y.shape[0], 'Got different numbers of data and labels'
self.X, self.y = X, y
self.batch_size, self.shuffle = batch_size, shuffle
def __iter__(self):
N, B = self.X.shape[0], self.batch_size
idxs = np.arange(N)
if self.shuffle:
np.random.shuffle(idxs)
return iter((self.X[i:i+B], self.y[i:i+B]) for i in range(0, N, B))
train_dset = Dataset(X_train, y_train, batch_size=64, shuffle=True)
val_dset = Dataset(X_val, y_val, batch_size=64, shuffle=False)
test_dset = Dataset(X_test, y_test, batch_size=64)
# We can iterate through a dataset like this:
for t, (x, y) in enumerate(train_dset):
print(t, x.shape, y.shape)
if t > 5: break
```
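The per-channel normalization in `load_cifar10` can be sanity-checked with a small NumPy sketch (synthetic data standing in for CIFAR-10; the shapes and axis arguments are the same as above):

```python
import numpy as np

# Synthetic stand-in for an image batch with shape (N, H, W, C)
rng = np.random.RandomState(0)
X = rng.uniform(0, 255, size=(8, 4, 4, 3)).astype(np.float32)

# Same recipe as load_cifar10: per-channel mean/std over the N, H, W axes
mean_pixel = X.mean(axis=(0, 1, 2), keepdims=True)  # shape (1, 1, 1, 3)
std_pixel = X.std(axis=(0, 1, 2), keepdims=True)
X_norm = (X - mean_pixel) / std_pixel

# After normalization, each channel has (approximately) zero mean and unit std
print(np.allclose(X_norm.mean(axis=(0, 1, 2)), 0, atol=1e-4))
print(np.allclose(X_norm.std(axis=(0, 1, 2)), 1, atol=1e-4))
```

The `keepdims=True` is what lets the `(1, 1, 1, 3)` statistics broadcast against the full `(N, H, W, 3)` batch.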
You can optionally **use GPU by setting the flag to True below**. It's not necessary to use a GPU for this assignment; if you are working on Google Cloud then we recommend that you do not use a GPU, as it will be significantly more expensive.
```
# Set up some global variables
USE_GPU = True
if USE_GPU:
device = '/device:GPU:0'
else:
device = '/cpu:0'
# Constant to control how often we print when training models
print_every = 100
print('Using device: ', device)
```
# Part II: Barebones TensorFlow
TensorFlow ships with various high-level APIs which make it very convenient to define and train neural networks; we will cover some of these constructs in Part III and Part IV of this notebook. In this section we will start by building a model with basic TensorFlow constructs to help you better understand what's going on under the hood of the higher-level APIs.
**"Barebones Tensorflow" is important to understanding the building blocks of TensorFlow, but much of it involves concepts from TensorFlow 1.x.** We will be working with legacy modules such as `tf.Variable`.
Therefore, please read and understand the differences between legacy (1.x) TF and the new (2.0) TF.
### Historical background on TensorFlow 1.x
TensorFlow 1.x is primarily a framework for working with **static computational graphs**. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation.
Before Tensorflow 2.0, we had to configure the graph into two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:
1. **Build a computational graph that describes the computation that you want to perform**. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. This stage will typically define one or more `placeholder` objects that represent inputs to the computational graph.
2. **Run the computational graph many times.** Each time the graph is run (e.g. for one gradient descent step) you will specify which parts of the graph you want to compute, and pass a `feed_dict` dictionary that will give concrete values to any `placeholder`s in the graph.
### The new paradigm in Tensorflow 2.0
Now, with Tensorflow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct Numpy operation. It replaces the 2-step paradigm with computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.
The main difference between the TF 1.x and 2.0 approach is that the 2.0 approach doesn't make use of `tf.Session`, `tf.run`, `placeholder`, `feed_dict`. To get more details of what's different between the two versions and how to convert between the two, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guide
In the rest of this notebook we'll focus on this new, simpler approach.
### TensorFlow warmup: Flatten Function
We can see this in action by defining a simple `flatten` function that will reshape image data for use in a fully-connected network.
In TensorFlow, data for convolutional feature maps is typically stored in a Tensor of shape N x H x W x C where:
- N is the number of datapoints (minibatch size)
- H is the height of the feature map
- W is the width of the feature map
- C is the number of channels in the feature map
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `H x W x C` values per representation into a single long vector.
Notice the `tf.reshape` call has the target shape as `(N, -1)`, meaning it will reshape/keep the first dimension to be N, and then infer as necessary what the second dimension is in the output, so we can collapse the remaining dimensions from the input properly.
**NOTE**: TensorFlow and PyTorch differ on the default Tensor layout; TensorFlow uses N x H x W x C but PyTorch uses N x C x H x W.
```
def flatten(x):
"""
Input:
- TensorFlow Tensor of shape (N, D1, ..., DM)
Output:
- TensorFlow Tensor of shape (N, D1 * ... * DM)
"""
N = tf.shape(x)[0]
return tf.reshape(x, (N, -1))
def test_flatten():
# Construct concrete values of the input data x using numpy
x_np = np.arange(24).reshape((2, 3, 4))
print('x_np:\n', x_np, '\n')
# Compute a concrete output value.
x_flat_np = flatten(x_np)
print('x_flat_np:\n', x_flat_np, '\n')
test_flatten()
```
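The same `(N, -1)` reshape semantics can be checked in plain NumPy, independent of TensorFlow — `-1` tells `reshape` to infer the remaining dimension as the product of all collapsed dimensions:

```python
import numpy as np

x = np.arange(24).reshape((2, 3, 4))   # N=2, remaining dims 3*4 = 12
x_flat = x.reshape((x.shape[0], -1))   # -1 is inferred as 12

print(x_flat.shape)  # (2, 12)
# Row i of x_flat is datapoint i laid out as one long vector
print(np.array_equal(x_flat[0], np.arange(12)))
```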
### Barebones TensorFlow: Define a Two-Layer Network
We will now implement our first neural network with TensorFlow: a fully-connected ReLU network with two hidden layers and no biases on the CIFAR10 dataset. For now we will use only low-level TensorFlow operators to define the network; later we will see how to use the higher-level abstractions provided by `tf.keras` to simplify the process.
We will define the forward pass of the network in the function `two_layer_fc`; this will accept TensorFlow Tensors for the inputs and weights of the network, and return a TensorFlow Tensor for the scores.
After defining the network architecture in the `two_layer_fc` function, we will test the implementation by checking the shape of the output.
**It's important that you read and understand this implementation.**
```
def two_layer_fc(x, params):
"""
A fully-connected neural network; the architecture is:
fully-connected layer -> ReLU -> fully connected layer.
Note that we only need to define the forward pass here; TensorFlow will take
care of computing the gradients for us.
The input to the network will be a minibatch of data, of shape
(N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
and the output layer will produce scores for C classes.
Inputs:
- x: A TensorFlow Tensor of shape (N, d1, ..., dM) giving a minibatch of
input data.
- params: A list [w1, w2] of TensorFlow Tensors giving weights for the
network, where w1 has shape (D, H) and w2 has shape (H, C).
Returns:
- scores: A TensorFlow Tensor of shape (N, C) giving classification scores
for the input data x.
"""
w1, w2 = params # Unpack the parameters
x = flatten(x) # Flatten the input; now x has shape (N, D)
h = tf.nn.relu(tf.matmul(x, w1)) # Hidden layer: h has shape (N, H)
scores = tf.matmul(h, w2) # Compute scores of shape (N, C)
return scores
def two_layer_fc_test():
hidden_layer_size = 42
# Scoping our TF operations under a tf.device context manager
# lets us tell TensorFlow where we want these Tensors to be
# multiplied and/or operated on, e.g. on a CPU or a GPU.
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
w1 = tf.zeros((32 * 32 * 3, hidden_layer_size))
w2 = tf.zeros((hidden_layer_size, 10))
# Call our two_layer_fc function for the forward pass of the network.
scores = two_layer_fc(x, [w1, w2])
print(scores.shape)
two_layer_fc_test()
```
### Barebones TensorFlow: Three-Layer ConvNet
Here you will complete the implementation of the function `three_layer_convnet` which will perform the forward pass of a three-layer convolutional network. The network should have the following architecture:
1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for `C` classes.
**HINT**: For convolutions: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/conv2d; be careful with padding!
**HINT**: For biases: https://www.tensorflow.org/performance/xla/broadcasting
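A quick way to check the "zero-padding of two / one" choices above: with stride 1, a convolution preserves spatial size exactly when P = (K - 1) / 2. A small helper for the standard output-size formula (a checking aid, not part of the assignment code):

```python
def conv_out_size(size, kernel, pad, stride=1):
    """Spatial output size of a convolution:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# A 5x5 kernel with padding 2, and a 3x3 kernel with padding 1,
# both preserve a 32x32 input:
print(conv_out_size(32, 5, 2))  # 32
print(conv_out_size(32, 3, 1))  # 32
# Without padding, a 3x3 kernel shrinks each spatial dim by 2:
print(conv_out_size(32, 3, 0))  # 30
```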
```
def three_layer_convnet(x, params):
"""
A three-layer convolutional network with the architecture described above.
Inputs:
- x: A TensorFlow Tensor of shape (N, H, W, 3) giving a minibatch of images
- params: A list of TensorFlow Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: TensorFlow Tensor of shape (KH1, KW1, 3, channel_1) giving
weights for the first convolutional layer.
- conv_b1: TensorFlow Tensor of shape (channel_1,) giving biases for the
first convolutional layer.
- conv_w2: TensorFlow Tensor of shape (KH2, KW2, channel_1, channel_2)
giving weights for the second convolutional layer
- conv_b2: TensorFlow Tensor of shape (channel_2,) giving biases for the
second convolutional layer.
- fc_w: TensorFlow Tensor giving weights for the fully-connected layer.
Can you figure out what the shape should be?
- fc_b: TensorFlow Tensor giving biases for the fully-connected layer.
Can you figure out what the shape should be?
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
conv1 = tf.nn.conv2d(x, conv_w1, 1, [[0, 0], [2, 2], [2, 2], [0, 0]]) + conv_b1
relu1 = tf.nn.relu(conv1)
conv2 = tf.nn.conv2d(relu1, conv_w2, 1, [[0, 0], [1, 1], [1, 1], [0, 0]]) + conv_b2
relu2 = tf.nn.relu(conv2)
relu2_flat = flatten(relu2)
scores = tf.matmul(relu2_flat, fc_w) + fc_b
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return scores
```
After defining the forward pass of the three-layer ConvNet above, run the following cell to test your implementation. Like the two-layer network, we run the graph on a batch of zeros just to make sure the function doesn't crash, and produces outputs of the correct shape.
When you run this function, `scores_np` should have shape `(64, 10)`.
```
def three_layer_convnet_test():
with tf.device(device):
x = tf.zeros((64, 32, 32, 3))
conv_w1 = tf.zeros((5, 5, 3, 6))
conv_b1 = tf.zeros((6,))
conv_w2 = tf.zeros((3, 3, 6, 9))
conv_b2 = tf.zeros((9,))
fc_w = tf.zeros((32 * 32 * 9, 10))
fc_b = tf.zeros((10,))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
scores = three_layer_convnet(x, params)
# Inputs to convolutional layers are 4-dimensional arrays with shape
# [batch_size, height, width, channels]
print('scores_np has shape: ', scores.shape)
three_layer_convnet_test()
```
### Barebones TensorFlow: Training Step
We now define the `training_step` function, which performs a single training step. This will take three basic steps:
1. Compute the loss
2. Compute the gradient of the loss with respect to all network weights
3. Make a weight update step using (stochastic) gradient descent.
We need to use a few new TensorFlow functions to do all of this:
- For computing the cross-entropy loss we'll use `tf.nn.sparse_softmax_cross_entropy_with_logits`: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits
- For averaging the loss across a minibatch of data we'll use `tf.reduce_mean`:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/reduce_mean
- For computing gradients of the loss with respect to the weights we'll use `tf.GradientTape` (useful for Eager execution): https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape
- We'll mutate the weight values stored in a TensorFlow Variable using its `assign_sub()` method ("sub" is for subtraction): https://www.tensorflow.org/api_docs/python/tf/assign_sub
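What `sparse_softmax_cross_entropy_with_logits` followed by `tf.reduce_mean` computes can be sketched in NumPy — a numerically-stabilized reference, not the TF implementation (the function name here is ours):

```python
import numpy as np

def sparse_softmax_cross_entropy(labels, logits):
    # Subtracting the row max leaves softmax unchanged but avoids overflow
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Per-example loss: negative log-probability of the true class
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])
labels = np.array([0, 1])
losses = sparse_softmax_cross_entropy(labels, logits)
total_loss = losses.mean()   # what tf.reduce_mean does over the minibatch
print(total_loss)
```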
```
def training_step(model_fn, x, y, params, learning_rate):
with tf.GradientTape() as tape:
scores = model_fn(x, params) # Forward pass of the model
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=scores)
total_loss = tf.reduce_mean(loss)
grad_params = tape.gradient(total_loss, params)
# Make a vanilla gradient descent step on all of the model parameters
# Manually update the weights using assign_sub()
for w, grad_w in zip(params, grad_params):
w.assign_sub(learning_rate * grad_w)
return total_loss
def train_part2(model_fn, init_fn, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model
using TensorFlow; it should have the following signature:
scores = model_fn(x, params) where x is a TensorFlow Tensor giving a
minibatch of image data, params is a list of TensorFlow Tensors holding
the model weights, and scores is a TensorFlow Tensor of shape (N, C)
giving scores for all elements of x.
- init_fn: A Python function that initializes the parameters of the model.
It should have the signature params = init_fn() where params is a list
of TensorFlow Tensors holding the (randomly initialized) weights of the
model.
- learning_rate: Python float giving the learning rate to use for SGD.
"""
params = init_fn() # Initialize the model parameters
for t, (x_np, y_np) in enumerate(train_dset):
# Run the graph on a batch of training data.
loss = training_step(model_fn, x_np, y_np, params, learning_rate)
# Periodically print the loss and check accuracy on the val set.
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss))
check_accuracy(val_dset, x_np, model_fn, params)
def check_accuracy(dset, x, model_fn, params):
"""
Check accuracy on a classification model, e.g. for validation.
Inputs:
- dset: A Dataset object against which to check accuracy
    - x: The most recent minibatch of inputs (unused here; kept for signature compatibility)
- model_fn: the Model we will be calling to make predictions on x
- params: parameters for the model_fn to work with
Returns: Nothing, but prints the accuracy of the model
"""
num_correct, num_samples = 0, 0
for x_batch, y_batch in dset:
scores_np = model_fn(x_batch, params).numpy()
y_pred = scores_np.argmax(axis=1)
num_samples += x_batch.shape[0]
num_correct += (y_pred == y_batch).sum()
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
```
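The `assign_sub` update in `training_step` is plain gradient descent: stripped of TensorFlow, it is just `w -= learning_rate * grad_w`. A minimal sketch of the same update rule on a 1-D quadratic, with the gradient written out analytically:

```python
# Minimize f(w) = (w - 3)^2 with the same update rule as training_step
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad_w = 2 * (w - 3)          # analytic gradient of f at w
    w -= learning_rate * grad_w   # the assign_sub step
print(round(w, 4))  # converges toward the minimizer w = 3
```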
### Barebones TensorFlow: Initialization
We'll use the following utility method to initialize the weight matrices for our models using Kaiming's normalization method.
[1] He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
*, ICCV 2015, https://arxiv.org/abs/1502.01852
```
def create_matrix_with_kaiming_normal(shape):
if len(shape) == 2:
fan_in, fan_out = shape[0], shape[1]
elif len(shape) == 4:
fan_in, fan_out = np.prod(shape[:3]), shape[3]
return tf.keras.backend.random_normal(shape) * np.sqrt(2.0 / fan_in)
```
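We can sanity-check the Kaiming scaling in NumPy: samples drawn as `randn * sqrt(2 / fan_in)` should have standard deviation close to `sqrt(2 / fan_in)` (a statistical check with a fixed seed, independent of the TF helper above):

```python
import numpy as np

rng = np.random.RandomState(0)
fan_in = 3 * 32 * 32   # e.g. a flattened CIFAR-10 image
W = rng.randn(fan_in, 100) * np.sqrt(2.0 / fan_in)

print(W.std())               # close to the target scale
print(np.sqrt(2.0 / fan_in)) # the target: sqrt(2 / fan_in)
```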
### Barebones TensorFlow: Train a Two-Layer Network
We are finally ready to use all of the pieces defined above to train a two-layer fully-connected network on CIFAR-10.
We just need to define a function to initialize the weights of the model, and call `train_part2`.
Defining the weights of the network introduces another important piece of TensorFlow API: `tf.Variable`. A TensorFlow Variable is a Tensor whose value is stored in the graph and persists across runs of the computational graph; however unlike constants defined with `tf.zeros` or `tf.random_normal`, the values of a Variable can be mutated as the graph runs; these mutations will persist across graph runs. Learnable parameters of the network are usually stored in Variables.
You don't need to tune any hyperparameters, but you should achieve validation accuracies above 40% after one epoch of training.
```
def two_layer_fc_init():
"""
Initialize the weights of a two-layer network, for use with the
two_layer_network function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns: A list of:
- w1: TensorFlow tf.Variable giving the weights for the first layer
- w2: TensorFlow tf.Variable giving the weights for the second layer
"""
hidden_layer_size = 4000
    w1 = tf.Variable(create_matrix_with_kaiming_normal((3 * 32 * 32, hidden_layer_size)))
    w2 = tf.Variable(create_matrix_with_kaiming_normal((hidden_layer_size, 10)))
return [w1, w2]
learning_rate = 1e-2
train_part2(two_layer_fc, two_layer_fc_init, learning_rate)
```
### Barebones TensorFlow: Train a three-layer ConvNet
We will now use TensorFlow to train a three-layer ConvNet on CIFAR-10.
You need to implement the `three_layer_convnet_init` function. Recall that the architecture of the network is:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes
You don't need to do any hyperparameter tuning, but you should see validation accuracies above 43% after one epoch of training.
```
def three_layer_convnet_init():
"""
Initialize the weights of a Three-Layer ConvNet, for use with the
three_layer_convnet function defined above.
You can use the `create_matrix_with_kaiming_normal` helper!
Inputs: None
Returns a list containing:
- conv_w1: TensorFlow tf.Variable giving weights for the first conv layer
- conv_b1: TensorFlow tf.Variable giving biases for the first conv layer
- conv_w2: TensorFlow tf.Variable giving weights for the second conv layer
- conv_b2: TensorFlow tf.Variable giving biases for the second conv layer
- fc_w: TensorFlow tf.Variable giving weights for the fully-connected layer
- fc_b: TensorFlow tf.Variable giving biases for the fully-connected layer
"""
params = None
############################################################################
# TODO: Initialize the parameters of the three-layer network. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# a sample input is 32 x 32 x 3
conv_w1 = tf.Variable(create_matrix_with_kaiming_normal((5, 5, 3, 32)))
conv_b1 = tf.Variable(create_matrix_with_kaiming_normal((1, 32)))
conv_w2 = tf.Variable(create_matrix_with_kaiming_normal((3, 3, 32, 16)))
conv_b2 = tf.Variable(create_matrix_with_kaiming_normal((1, 16)))
fc_w = tf.Variable(create_matrix_with_kaiming_normal((32 * 32 * 16, 10))) # the input size after two convs is 32 x 32 x 16.
fc_b = tf.Variable(create_matrix_with_kaiming_normal((1, 10)))
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return params
learning_rate = 3e-3
train_part2(three_layer_convnet, three_layer_convnet_init, learning_rate)
```
# Part III: Keras Model Subclassing API
Implementing a neural network using the low-level TensorFlow API is a good way to understand how TensorFlow works, but it's a little inconvenient - we had to manually keep track of all Tensors holding learnable parameters. This was fine for a small network, but could quickly become unwieldy for a large complex model.
Fortunately TensorFlow 2.0 provides higher-level APIs such as `tf.keras` which make it easy to build models out of modular, object-oriented layers. Further, TensorFlow 2.0 uses eager execution that evaluates operations immediately, without explicitly constructing any computational graphs. This makes it easy to write and debug models, and reduces the boilerplate code.
In this part of the notebook we will define neural network models using the `tf.keras.Model` API. To implement your own model, you need to do the following:
1. Define a new class which subclasses `tf.keras.Model`. Give your class an intuitive name that describes it, like `TwoLayerFC` or `ThreeLayerConvNet`.
2. In the initializer `__init__()` for your new class, define all the layers you need as class attributes. The `tf.keras.layers` package provides many common neural-network layers, like `tf.keras.layers.Dense` for fully-connected layers and `tf.keras.layers.Conv2D` for convolutional layers. Under the hood, these layers will construct `Variable` Tensors for any learnable parameters. **Warning**: Don't forget to call `super(YourModelName, self).__init__()` as the first line in your initializer!
3. Implement the `call()` method for your class; this implements the forward pass of your model, and defines the *connectivity* of your network. Layers defined in `__init__()` implement `__call__()` so they can be used as function objects that transform input Tensors into output Tensors. Don't define any new layers in `call()`; any layers you want to use in the forward pass should be defined in `__init__()`.
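The three steps above can be sketched framework-free: layers become callables created in `__init__()` and composed in `call()`. These are plain-Python stand-ins, not the Keras API:

```python
class Layer:
    """Stand-in for a Keras layer: a callable that transforms its input."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)

class TwoStepModel:
    def __init__(self):
        # Step 2: define all layers as attributes in __init__
        self.double = Layer(lambda x: 2 * x)
        self.add_one = Layer(lambda x: x + 1)
    def call(self, x):
        # Step 3: connectivity lives in call(); no new layers created here
        return self.add_one(self.double(x))

model = TwoStepModel()
print(model.call(10))  # 21
```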
After you define your `tf.keras.Model` subclass, you can instantiate it and use it like the model functions from Part II.
### Keras Model Subclassing API: Two-Layer Network
Here is a concrete example of using the `tf.keras.Model` API to define a two-layer network. There are a few new bits of API to be aware of here:
We use an `Initializer` object to set up the initial values of the learnable parameters of the layers; in particular `tf.initializers.VarianceScaling` gives behavior similar to the Kaiming initialization method we used in Part II. You can read more about it here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/initializers/VarianceScaling
We construct `tf.keras.layers.Dense` objects to represent the two fully-connected layers of the model. In addition to multiplying their input by a weight matrix and adding a bias vector, these layers can also apply a nonlinearity for you. For the first layer we specify a ReLU activation function by passing `activation='relu'` to the constructor; the second layer uses a softmax activation function. Finally, we use `tf.keras.layers.Flatten` to flatten the output from the previous fully-connected layer.
```
class TwoLayerFC(tf.keras.Model):
def __init__(self, hidden_size, num_classes):
super().__init__() #super(TwoLayerFC, self).__init__()
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.flatten = tf.keras.layers.Flatten()
self.fc1 = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)
self.fc2 = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)
def call(self, x, training=False):
x = self.flatten(x)
x = self.fc1(x)
x = self.fc2(x)
return x
def test_TwoLayerFC():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
x = tf.zeros((64, input_size))
model = TwoLayerFC(hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_TwoLayerFC()
```
### Keras Model Subclassing API: Three-Layer ConvNet
Now it's your turn to implement a three-layer ConvNet using the `tf.keras.Model` API. Your model should have the same architecture used in Part II:
1. Convolutional layer with 5 x 5 kernels, with zero-padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 3 x 3 kernels, with zero-padding of 1
4. ReLU nonlinearity
5. Fully-connected layer to give class scores
6. Softmax nonlinearity
You should initialize the weights of your network using the same initialization method as was used in the two-layer network above.
**Hint**: Refer to the documentation for `tf.keras.layers.Conv2D` and `tf.keras.layers.Dense`:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Conv2D
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense
```
class ThreeLayerConvNet(tf.keras.Model):
def __init__(self, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Implement the __init__ method for a three-layer ConvNet. You #
# should instantiate layer objects to be used in the forward pass. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.conv1 = tf.keras.layers.Conv2D(channel_1, (5, 5), padding='same', activation='relu', kernel_initializer=initializer)
self.conv2 = tf.keras.layers.Conv2D(channel_2, (3, 3), padding='same', activation='relu', kernel_initializer=initializer)
self.flatten = tf.keras.layers.Flatten()
self.fc = tf.keras.layers.Dense(num_classes, activation='softmax')
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
def call(self, x, training=False):
scores = None
########################################################################
# TODO: Implement the forward pass for a three-layer ConvNet. You #
# should use the layer objects defined in the __init__ method. #
########################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x = self.conv1(x)
x = self.conv2(x)
x = self.flatten(x)
scores = self.fc(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
```
Once you complete the implementation of the `ThreeLayerConvNet` above you can run the following to ensure that your implementation does not crash and produces outputs of the expected shape.
```
def test_ThreeLayerConvNet():
channel_1, channel_2, num_classes = 12, 8, 10
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
with tf.device(device):
        x = tf.zeros((64, 32, 32, 3))  # TensorFlow uses N x H x W x C
scores = model(x)
print(scores.shape)
test_ThreeLayerConvNet()
```
### Keras Model Subclassing API: Eager Training
While Keras models have a built-in training loop (via `model.fit`), sometimes you need more customization. Here's an example of a training loop implemented with eager execution.
In particular, notice `tf.GradientTape`. Automatic differentiation is used in the backend for implementing backpropagation in frameworks like TensorFlow. During eager execution, `tf.GradientTape` is used to trace operations for computing gradients later. A particular `tf.GradientTape` can only compute one gradient; subsequent calls to tape will throw a runtime error.
TensorFlow 2.0 ships with easy-to-use built-in metrics under the `tf.keras.metrics` module. Each metric is an object, and we can use `update_state()` to add observations and `reset_states()` to clear all observations. We can get the current result of a metric by calling `result()` on the metric object.
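The metric-object pattern (`update_state()` / `result()` / `reset_states()`) can be sketched with a minimal running-mean class — a plain-Python analogue of `tf.keras.metrics.Mean`, not its actual implementation:

```python
class Mean:
    """Minimal analogue of tf.keras.metrics.Mean."""
    def __init__(self):
        self.reset_states()
    def update_state(self, value):
        # Accumulate observations; result() averages them on demand
        self.total += value
        self.count += 1
    def result(self):
        return self.total / self.count if self.count else 0.0
    def reset_states(self):
        # Clear all observations, e.g. at the start of each epoch
        self.total, self.count = 0.0, 0

m = Mean()
for loss in [2.0, 4.0, 6.0]:
    m.update_state(loss)
print(m.result())   # 4.0
m.reset_states()
print(m.result())   # 0.0
```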
```
def train_part34(model_init_fn, optimizer_init_fn, num_epochs=1, is_training=False):
"""
Simple training loop for use with models defined using tf.keras. It trains
a model for one epoch on the CIFAR-10 training set and periodically checks
accuracy on the CIFAR-10 validation set.
Inputs:
- model_init_fn: A function that takes no parameters; when called it
constructs the model we want to train: model = model_init_fn()
- optimizer_init_fn: A function which takes no parameters; when called it
constructs the Optimizer object we will use to optimize the model:
optimizer = optimizer_init_fn()
- num_epochs: The number of epochs to train for
Returns: Nothing, but prints progress during training
"""
with tf.device(device):
# Compute the loss like we did in Part II
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
model = model_init_fn()
optimizer = optimizer_init_fn()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
val_loss = tf.keras.metrics.Mean(name='val_loss')
val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val_accuracy')
t = 0
for epoch in range(num_epochs):
# Reset the metrics - https://www.tensorflow.org/alpha/guide/migration_guide#new-style_metrics
train_loss.reset_states()
train_accuracy.reset_states()
for x_np, y_np in train_dset:
with tf.GradientTape() as tape:
# Use the model function to build the forward pass.
scores = model(x_np, training=is_training)
loss = loss_fn(y_np, scores)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics
train_loss.update_state(loss)
train_accuracy.update_state(y_np, scores)
if t % print_every == 0:
val_loss.reset_states()
val_accuracy.reset_states()
for test_x, test_y in val_dset:
# During validation at end of epoch, training set to False
prediction = model(test_x, training=False)
t_loss = loss_fn(test_y, prediction)
val_loss.update_state(t_loss)
val_accuracy.update_state(test_y, prediction)
template = 'Iteration {}, Epoch {}, Loss: {}, Accuracy: {}, Val Loss: {}, Val Accuracy: {}'
print (template.format(t, epoch+1,
train_loss.result(),
train_accuracy.result()*100,
val_loss.result(),
val_accuracy.result()*100))
t += 1
```
### Keras Model Subclassing API: Train a Two-Layer Network
We can now use the tools defined above to train a two-layer network on CIFAR-10. We define the `model_init_fn` and `optimizer_init_fn` that construct the model and optimizer respectively when called. Here we want to train the model using stochastic gradient descent with no momentum, so we construct a `tf.keras.optimizers.SGD` function; you can [read about it here](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD).
You don't need to tune any hyperparameters here, but you should achieve validation accuracies above 40% after one epoch of training.
```
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return TwoLayerFC(hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=1)
```
### Keras Model Subclassing API: Train a Three-Layer ConvNet
Here you should use the tools we've defined above to train a three-layer ConvNet on CIFAR-10. Your ConvNet should use 32 filters in the first convolutional layer and 16 filters in the second layer.
To train the model you should use gradient descent with Nesterov momentum 0.9.
**HINT**: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/optimizers/SGD
You don't need to perform any hyperparameter tuning, but you should achieve validation accuracies above 50% after training for one epoch.
```
learning_rate = 3e-3
channel_1, channel_2, num_classes = 32, 16, 10
def model_init_fn():
model = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
model = ThreeLayerConvNet(channel_1, channel_2, num_classes)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, nesterov=True, momentum=0.9)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn, num_epochs=1)
```
# Part IV: Keras Sequential API
In Part III we introduced the `tf.keras.Model` API, which allows you to define models with any number of learnable layers and with arbitrary connectivity between layers.
However, for many models you don't need such flexibility - a lot of models can be expressed as a sequential stack of layers, with the output of each layer fed to the next layer as input. If your model fits this pattern, then there is an even easier way to define your model: using `tf.keras.Sequential`. You don't need to write any custom classes; you simply call the `tf.keras.Sequential` constructor with a list containing a sequence of layer objects.
One complication with `tf.keras.Sequential` is that you must define the shape of the input to the model by passing a value to the `input_shape` argument of the first layer in your model.
### Keras Sequential API: Two-Layer Network
In this subsection, we will rewrite the two-layer fully-connected network using `tf.keras.Sequential`, and train it using the training loop defined above.
You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
```
learning_rate = 1e-2
def model_init_fn():
input_shape = (32, 32, 3)
hidden_layer_size, num_classes = 4000, 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Flatten(input_shape=input_shape),
tf.keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer=initializer),
tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
return model
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
```
### Abstracting Away the Training Loop
In the previous examples, we used a customised training loop to train models (e.g. `train_part34`). Writing your own training loop is only required if you need more flexibility and control while training your model. Alternatively, you can use built-in APIs like `tf.keras.Model.fit()` and `tf.keras.Model.evaluate()` to train and evaluate a model. Also remember to configure your model for training by calling `tf.keras.Model.compile()`.
You don't need to perform any hyperparameter tuning here, but you should see validation and test accuracies above 42% after training for one epoch.
```
model = model_init_fn()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
```
### Keras Sequential API: Three-Layer ConvNet
Here you should use `tf.keras.Sequential` to reimplement the same three-layer ConvNet architecture used in Part II and Part III. As a reminder, your model should have the following architecture:
1. Convolutional layer with 32 5x5 kernels, using zero padding of 2
2. ReLU nonlinearity
3. Convolutional layer with 16 3x3 kernels, using zero padding of 1
4. ReLU nonlinearity
5. Fully-connected layer giving class scores
6. Softmax nonlinearity
You should initialize the weights of the model using a `tf.initializers.VarianceScaling` as above.
You should train the model using Nesterov momentum 0.9.
You don't need to perform any hyperparameter search, but you should achieve accuracy above 45% after training for one epoch.
```
def model_init_fn():
model = None
############################################################################
# TODO: Construct a three-layer ConvNet using tf.keras.Sequential. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
input_shape = (32, 32, 3)
num_classes = 10
initializer = tf.initializers.VarianceScaling(scale=2.0)
layers = [
tf.keras.layers.Conv2D(32, (5, 5), padding='same', activation='relu', kernel_initializer=initializer,
input_shape=input_shape),
tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu', kernel_initializer=initializer),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(num_classes, activation='softmax', kernel_initializer=initializer),
]
model = tf.keras.Sequential(layers)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return model
learning_rate = 5e-4
def optimizer_init_fn():
optimizer = None
############################################################################
# TODO: Complete the implementation of model_fn. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, nesterov=True, momentum=0.9)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return optimizer
train_part34(model_init_fn, optimizer_init_fn)
```
We will also train this model with the built-in training loop APIs provided by TensorFlow.
```
model = model_init_fn()
model.compile(optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
model.fit(X_train, y_train, batch_size=64, epochs=1, validation_data=(X_val, y_val))
model.evaluate(X_test, y_test)
```
## Part IV: Functional API
### Demonstration with a Two-Layer Network
In the previous section, we saw how we can use `tf.keras.Sequential` to stack layers to quickly build simple models. But this comes at the cost of losing flexibility.
Often we will have to write complex models that have non-sequential data flows: a layer can have **multiple inputs and/or outputs**, such as stacking the output of 2 previous layers together to feed as input to a third! (Some examples are residual connections and dense blocks.)
In such cases, we can use Keras functional API to write models with complex topologies such as:
1. Multi-input models
2. Multi-output models
3. Models with shared layers (the same layer called several times)
4. Models with non-sequential data flows (e.g. residual connections)
Writing a model with Functional API requires us to create a `tf.keras.Model` instance and explicitly write input tensors and output tensors for this model.
```
def two_layer_fc_functional(input_shape, hidden_size, num_classes):
initializer = tf.initializers.VarianceScaling(scale=2.0)
inputs = tf.keras.Input(shape=input_shape)
flattened_inputs = tf.keras.layers.Flatten()(inputs)
fc1_output = tf.keras.layers.Dense(hidden_size, activation='relu',
kernel_initializer=initializer)(flattened_inputs)
scores = tf.keras.layers.Dense(num_classes, activation='softmax',
kernel_initializer=initializer)(fc1_output)
# Instantiate the model given inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=scores)
return model
def test_two_layer_fc_functional():
""" A small unit test to exercise the TwoLayerFC model above. """
input_size, hidden_size, num_classes = 50, 42, 10
input_shape = (50,)
x = tf.zeros((64, input_size))
model = two_layer_fc_functional(input_shape, hidden_size, num_classes)
with tf.device(device):
scores = model(x)
print(scores.shape)
test_two_layer_fc_functional()
```
### Keras Functional API: Train a Two-Layer Network
You can now train this two-layer network constructed using the functional API.
You don't need to perform any hyperparameter tuning here, but you should see validation accuracies above 40% after training for one epoch.
```
input_shape = (32, 32, 3)
hidden_size, num_classes = 4000, 10
learning_rate = 1e-2
def model_init_fn():
return two_layer_fc_functional(input_shape, hidden_size, num_classes)
def optimizer_init_fn():
return tf.keras.optimizers.SGD(learning_rate=learning_rate)
train_part34(model_init_fn, optimizer_init_fn)
```
# Part V: CIFAR-10 open-ended challenge
In this section you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.
You should experiment with architectures, hyperparameters, loss functions, regularization, or anything else you can think of to train a model that achieves **at least 70%** accuracy on the **validation** set within 10 epochs. You can use the built-in train function, the `train_part34` function from above, or implement your own training loop.
Describe what you did at the end of the notebook.
### Some things you can try:
- **Filter size**: Above we used 5x5 and 3x3; is this optimal?
- **Number of filters**: Above we used 16 and 32 filters. Would more or fewer do better?
- **Pooling**: We didn't use any pooling above. Would this improve the model?
- **Normalization**: Would your model be improved with batch normalization, layer normalization, group normalization, or some other normalization strategy?
- **Network architecture**: The ConvNet above has only three layers of trainable parameters. Would a deeper model do better?
- **Global average pooling**: Instead of flattening after the final convolutional layer, would global average pooling do better? This strategy is used for example in Google's Inception network and in Residual Networks.
- **Regularization**: Would some kind of regularization improve performance? Maybe weight decay or dropout?
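To make the pooling and global-average-pooling bullets concrete, here is a small numpy sketch contrasting flattening with global average pooling; the activation shape is illustrative. Flattening feeds H*W*C numbers per image to the classifier, while global average pooling reduces each channel to its spatial mean, feeding only C numbers.

```python
import numpy as np

# Illustrative [N, H, W, C] activation from a final conv layer
acts = np.random.default_rng(0).normal(size=(64, 8, 8, 32))

flat = acts.reshape(acts.shape[0], -1)  # (64, 2048) -> large FC layer follows
gap = acts.mean(axis=(1, 2))            # (64, 32)   -> tiny FC layer follows
```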
### NOTE: Batch Normalization / Dropout
If you are using Batch Normalization and Dropout, remember to pass `is_training=True` if you use the `train_part34()` function. BatchNorm and Dropout layers have different behaviors at training and inference time. `training` is a specific keyword argument reserved for this purpose in any `tf.keras.Model`'s `call()` function. Read more about this here: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/BatchNormalization#methods and https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dropout#methods
### Tips for training
For each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple important things to keep in mind:
- If the parameters are working well, you should see improvement within a few hundred iterations
- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
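A minimal sketch of this coarse-to-fine search, with a hypothetical `val_accuracy` function standing in for "train briefly and measure validation accuracy" (its peak location is made up purely for illustration). Hyperparameters are sampled on a log scale since they vary over orders of magnitude.

```python
import math
import random

random.seed(0)

def val_accuracy(lr, reg):
    # Hypothetical stand-in for a short training run followed by validation;
    # peaked near lr=1e-3, reg=1e-4 purely for illustration.
    return math.exp(-(math.log10(lr) + 3) ** 2 - (math.log10(reg) + 4) ** 2)

def random_search(lr_range, reg_range, trials):
    best = (-1.0, None)
    for _ in range(trials):
        lr = 10 ** random.uniform(*lr_range)    # log-uniform sampling
        reg = 10 ** random.uniform(*reg_range)
        best = max(best, (val_accuracy(lr, reg), (lr, reg)))
    return best

# Coarse stage: wide log-ranges, few trials.
acc, (lr, reg) = random_search((-6, -1), (-6, -1), trials=20)
# Fine stage: narrow ranges centered on the coarse winner, more trials.
acc2, (lr2, reg2) = random_search((math.log10(lr) - 0.5, math.log10(lr) + 0.5),
                                  (math.log10(reg) - 0.5, math.log10(reg) + 0.5),
                                  trials=40)
```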
### Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!
- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.
- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
- Model ensembles
- Data augmentation
- New Architectures
- [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
- [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
### Have fun and happy training!
```
class CustomConvNet(tf.keras.Model):
def __init__(self):
super().__init__()
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# input is [N, 32, 32, 3]
initializer = tf.initializers.VarianceScaling(scale=2.0)
self.conv11 = tf.keras.layers.Conv2D(512, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 128
self.prelu11 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn11 = tf.keras.layers.BatchNormalization()
self.conv12 = tf.keras.layers.Conv2D(256, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 128
self.prelu12 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn12 = tf.keras.layers.BatchNormalization()
self.conv13 = tf.keras.layers.Conv2D(128, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 128
self.prelu13 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn13 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2D(64, (3, 3), padding='same', kernel_initializer=initializer) # 32, 32, 64
self.prelu2 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn2 = tf.keras.layers.BatchNormalization()
self.maxpool2 = tf.keras.layers.MaxPool2D((2, 2), padding='same') # 16, 16, 64
self.conv3 = tf.keras.layers.Conv2D(32, (3, 3), padding='same', kernel_initializer=initializer) # 16, 16, 32
self.prelu3 = tf.keras.layers.PReLU(alpha_initializer=initializer)
self.bn3 = tf.keras.layers.BatchNormalization()
self.maxpool3 = tf.keras.layers.MaxPool2D((2, 2), padding='same') # 8, 8, 32
self.flatten = tf.keras.layers.Flatten()
self.fc = tf.keras.layers.Dense(10, activation='softmax', kernel_initializer=initializer)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
def call(self, input_tensor, training=False):
############################################################################
# TODO: Construct a model that performs well on CIFAR-10 #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
x = input_tensor
x = self.conv11(x)
x = self.prelu11(x)
x = self.bn11(x, training)
x = self.conv12(x)
x = self.prelu12(x)
x = self.bn12(x, training)
x = self.conv13(x)
x = self.prelu13(x)
x = self.bn13(x, training)
x = self.conv2(x)
x = self.prelu2(x)
x = self.bn2(x, training)
x = self.maxpool2(x)
x = self.conv3(x)
x = self.prelu3(x)
x = self.bn3(x, training)
x = self.maxpool3(x)
x = self.flatten(x)
x = self.fc(x)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return x
device = '/device:GPU:0' # Change this to a CPU/GPU as you wish!
# device = '/cpu:0' # Change this to a CPU/GPU as you wish!
print_every = 300
num_epochs = 10
# model = CustomConvNet()
# model = CustomResNet()
def model_init_fn():
return CustomConvNet()
def optimizer_init_fn():
learning_rate = 1e-3
return tf.keras.optimizers.Adam(learning_rate)
train_part34(model_init_fn, optimizer_init_fn, num_epochs=num_epochs, is_training=True)
```
## Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and/or any graphs that you made in the process of training and evaluating your network.
Added more layers: deeper convolutional channels with 'same' padding to preserve spatial size, downsampling only via max pooling. Used PReLU instead of ReLU, with Kaiming initialization as in the original PReLU paper. Added batch normalization after each conv-activation pair.
```
import itertools
import numpy as np
import pyquil.api as api
from pyquil.gates import *
from pyquil.quil import Program
from gaussian_elimination import *
```
##### Problem Setup
The setup for Simon's problem consists of a given black-box operator that is a generalization from those given in the Deutsch and Deutsch-Jozsa problems, and maps $\mathbf{f}: \{0, 1\}^n \rightarrow \{0, 1\}^m$, such that<br>
<br>
$$ U_f : \left\vert \mathbf{x} \right\rangle \left\vert \mathbf{b} \right\rangle \rightarrow \left\vert \mathbf{x} \right\rangle \left\vert \mathbf{b} \oplus \mathbf{f}(\mathbf{x}) \right\rangle$$
<br>
where $\mathbf{f}(\mathbf{x}) \in \{0, 1\}^m \, \, \forall \mathbf{x} \in \{ 0, 1 \}^n$, $\mathbf{b} \in \{0, 1\}^m$, and the $\oplus$ sign represents mod 2 addition on each of the components separately. The problem consists of finding
$\mathbf{s} \in \{0, 1\}^n$ such that<br>
<br>
$$\mathbf{f} (\mathbf{x} \oplus \mathbf{s}) = \mathbf{f} (\mathbf{x})$$
so that the function $\mathbf{f}$ is periodic with period $\mathbf{s}$.
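As a classical sanity check, such a periodic map can be constructed by assigning the same value to each pair $\{\mathbf{x}, \mathbf{x} \oplus \mathbf{s}\}$ — a toy sketch with illustrative $n$, $m$, and $\mathbf{s}$ (all names here are made up; the notebook's own construction appears in `black_box_map` below):

```python
import itertools

n, s = 3, '101'  # illustrative period

def xor_bits(a, b):
    # componentwise mod-2 addition of two bit-strings
    return ''.join(str(int(x) ^ int(y)) for x, y in zip(a, b))

# Assign the same value to x and x xor s, pair by pair,
# giving a two-to-one map f: {0,1}^3 -> {0,1}^2 with period s.
f, values = {}, itertools.count()
for bits in itertools.product('01', repeat=n):
    x = ''.join(bits)
    if x not in f:
        f[x] = f[xor_bits(x, s)] = format(next(values), '02b')
```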
We solve the problem as follows: prepare the state $\left\vert\mathbf{x} \right\rangle \left\vert 0 \right\rangle$, apply the black-box to produce the state $\left\vert\mathbf{x}\right\rangle \left\vert \mathbf{f}(\mathbf{x})\right\rangle$, apply $H^{\otimes n}$ to the first register $\left\vert\mathbf{x}\right\rangle$, then measure it and record the value $\mathbf{w}_i$. These steps are repeated until $\{\mathbf{w}_i\}$ spans an $(n-1)$-dimensional space, at which point we solve the equation $\mathbf{W}\mathbf{s}^{T} = \mathbf{0}^{T}$ via Gaussian elimination to obtain $\mathbf{s}$ as the unique non-zero solution. To see _why_ this works, the reader is referred to "An Introduction to Quantum Computing" by P. Kaye et al.
##### Implementation Notes
We can generalize the black-box operator from the Deutsch-Jozsa problem to construct the one required here
$$U_f = \sum_{\mathbf{x}=0}^{2^{n} - 1} \left\vert \mathbf{x} \right\rangle \left\langle \mathbf{x} \right\vert \otimes \left[ I + f_{i} (\mathbf{x}) \left( X - I \right) \right]^{\otimes_{i=m-1}^{i=0}}$$
For example, if $m=2$, then
$$ \left[ I + f_{i} (\mathbf{x}) \left( X - I \right) \right]^{\otimes_{i=m-1}^{i=0}} = \left[ I + f_1(\mathbf{x}) \left( X - I \right) \right] \otimes \left[ I + f_0(\mathbf{x}) \left( X - I \right)\right]$$
<br>
and further if $n=3$, $\mathbf{x} = 010$, and $\mathbf{f}(\mathbf{x}) = 10$, then
$$ \left[ I + f_{i} (\mathbf{x}) \left( X - I \right) \right]^{\otimes_{i=m-1}^{i=0}} = \left[ I + f_1(010) \left( X - I\right)\right] \otimes \left[ I + f_0(010) \left( X - I\right)\right] \\
= \left[ I + (1)(X-I)\right] \otimes \left[ I + (0) (X-I)\right] \\
= X \otimes I$$
<br>
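This construction can be checked numerically for a tiny case. The sketch below builds $U_f$ for $n = m = 1$ with an illustrative $f$ and verifies that it is unitary and acts as $\left\vert x \right\rangle \left\vert b \right\rangle \rightarrow \left\vert x \right\rangle \left\vert b \oplus f(x) \right\rangle$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
f = {0: 1, 1: 0}  # illustrative map f: {0,1} -> {0,1}

def proj(x):
    # projection operator |x><x| on one qubit
    ket = np.zeros((2, 1)); ket[x, 0] = 1.0
    return ket @ ket.T

# U_f = sum_x |x><x| (tensor) [I + f(x)(X - I)]
U = sum(np.kron(proj(x), I2 + f[x] * (X - I2)) for x in (0, 1))

def basis(x, b):
    # basis vector |x>|b> in the kron(control, target) ordering
    e = np.zeros(4); e[2 * x + b] = 1.0
    return e
```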
The sampling of the $\mathbf{w}_{i}$ is done in such a way as to keep the collective $\mathbf{W}$ matrix in reduced row-echelon form (note that since we're working with mod 2 arithmetic, we automatically have reduced row-echelon, and not just row-echelon form). Back-substitution is modified to work with mod 2 arithmetic. The entire process is implemented in gaussian_elimination.py, and for an excellent discussion of the mathematical details involved, the reader is referred to Section 18.13 of "<c|Q|c> : A Course in Quantum Computing (for the Community College)", Vol. 1 by Michael Loceff.
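As a simplified stand-in for that mod-2 solve, the sketch below recovers $\mathbf{s}$ from $n-1$ independent samples by brute force over $\{0,1\}^n$ rather than back-substitution — fine for small $n$, and it makes the linear algebra explicit. The sample matrix here is illustrative.

```python
import itertools
import numpy as np

def solve_mod2(W, n):
    # Find the unique nonzero s with W s = 0 (mod 2) by exhaustive search;
    # the notebook's Gaussian elimination reaches the same s in O(n^2).
    for s in itertools.product((0, 1), repeat=n):
        if any(s) and not np.any(W @ np.array(s) % 2):
            return ''.join(map(str, s))

# Three independent samples, each orthogonal (mod 2) to s = '1011'
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])
```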
### Simon's Algorithm using (n+m) qubits
```
def qubit_strings(n):
qubit_strings = []
for q in itertools.product(['0', '1'], repeat=n):
qubit_strings.append(''.join(q))
return qubit_strings
def black_box_map(n, m, s):
"""
Black-box map f:{0,1}^n -> {0,1}^m, randomly taking values,
and periodic with period s
"""
# ensure s lives in {0,1}^n
if len(s) != n:
raise AssertionError("Length of period vector should equal n")
# control qubits
cont_qubs = qubit_strings(n)
# target qubits
targ_qubs = qubit_strings(m)
# initialize empty dictionary to store map values
d_blackbox = {}
# initialize counter over control qubits
i = 0
# randomly select values from {0,1}^m for the periodic function
while set(cont_qubs) - set(d_blackbox.keys()) != set():
# pick a random target
rand_targ = np.random.choice(targ_qubs)
# set the same value for x and x + s
d_blackbox[cont_qubs[i]] = rand_targ
d_blackbox[add_vec_mod2(cont_qubs[i], s)] = rand_targ
# avoid iterating over keys already assigned values
while cont_qubs[i] in d_blackbox.keys():
i = i + 1
if i >= len(cont_qubs):  # guard against running past the 2**n control strings
break
return d_blackbox
def qubit_ket(qub_string):
"""
Form a basis ket out of n-bit string specified by the input 'qub_string', e.g.
'001' -> |001>
"""
e0 = np.array([[1], [0]])
e1 = np.array([[0], [1]])
d_qubstring = {'0': e0, '1': e1}
# initialize ket
ket = d_qubstring[qub_string[0]]
for i in range(1, len(qub_string)):
ket = np.kron(ket, d_qubstring[qub_string[i]])
return ket
def projection_op(qub_string):
"""
Creates a projection operator out of the basis element specified by 'qub_string', e.g.
'101' -> |101> <101|
"""
ket = qubit_ket(qub_string)
bra = np.transpose(ket) # all entries real, so no complex conjugation necessary
proj = np.kron(ket, bra)
return proj
def black_box(n, m, s):
"""
Inputs:-
n: no. of control qubits
m: no. of target qubits
s: bit-string equal to the period of the black-box map
Output:-
Unitary representation of the black-box operator
"""
d_bb = black_box_map(n, m, s)
# initialize unitary matrix
N = 2**(n+m)
unitary_rep = np.zeros(shape=(N, N))
# populate unitary matrix
for k, v in d_bb.items():
# initialize target qubit operator
targ_op = np.eye(2) + int(v[0])*(-np.eye(2) + np.array([[0, 1], [1, 0]]))
# fill out the rest of the target qubit operator
for i in range(1, m):
cont_op = np.eye(2) + int(v[i])*(-np.eye(2) + np.array([[0, 1], [1, 0]]))
targ_op = np.kron(targ_op, cont_op)
# complete the unitary operator for current control qubit-register
unitary_rep += np.kron(projection_op(k), targ_op)
return unitary_rep
qvm = api.QVMConnection()
# pick number of control qubits to be used
n = 4
# pick number of target qubits to be used
m = 2
# specify the period as an n bit-string
s = '1011'
# make sure s has the correct length
if len(s) != n:
raise ValueError("s does not have correct bit-string length")
# make sure s is non-zero
if s == '0' * n:
raise ValueError("s should not be zero vector")
# create the unitary black_box operator
blackbox = black_box(n, m, s)
# initialize the augmented matrix to be solved via Gaussian elimination
W = []
# initialize counter
counter = 0
# run main loop
while rank(W) < n-1:
# initialize the program
p = Program()
# Define U_f
p.defgate("U_f", blackbox)
# Prepare the initial state (1/sqrt[2])*(|0> + |1>)^(\otimes n) \otimes |0>^(\otimes m)
for m_ in range(m):
p.inst(I(m_))
for n_ in range(m, n+m):
p.inst(H(n_))
# Apply U_f
p.inst(("U_f",) + tuple(range(n+m)[::-1]))
# Apply final H^(\otimes n)
for n_ in range(m, n+m):
p.inst(H(n_))
# Final measurement
classical_regs = list(range(n))
for i, n_ in enumerate(list(range(m, n+m))[::-1]):
p.measure(n_, classical_regs[i])
measure_n_qubits = qvm.run(p, classical_regs)
# flatten out list
z = [item for sublist in measure_n_qubits for item in sublist]
z.append(0)
# add (or not) the new sample z to W
W = new_sample(W, z)
# increment counter
counter = counter + 1
del p
print ("The period vector is found to be: ", solve_reduced_row_echelon_form(W))
```
# Image features exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
```
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
```
## Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
```
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
```
## Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The `hog_feature` and `color_histogram_hsv` functions both operate on a single
image and return a feature vector for that image. The `extract_features`
function takes a set of images and a list of feature functions; it evaluates
each feature function on each image, storing the results in a matrix with one
row per image, formed by concatenating that image's feature vectors.
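The same pattern can be sketched with toy feature functions; the two below are made-up stand-ins for the real `hog_feature` and `color_histogram_hsv` in cs231n/features.py, and the image shapes are illustrative.

```python
import numpy as np

def mean_rgb(img):
    # 3 numbers: mean R, G, B over the image
    return img.mean(axis=(0, 1))

def intensity_hist(img, nbin=5):
    # 5-bin histogram of grayscale intensity
    gray = img.mean(axis=2)
    return np.histogram(gray, bins=nbin, range=(0.0, 1.0))[0].astype(float)

def extract_features_sketch(imgs, feature_fns):
    # evaluate every feature function on every image and concatenate,
    # giving one feature row per image
    return np.array([np.concatenate([fn(img) for fn in feature_fns])
                     for img in imgs])

imgs = np.random.default_rng(0).random((10, 32, 32, 3))  # 10 toy images
feats = extract_features_sketch(imgs, [mean_rgb, intensity_hist])  # (10, 3 + 5)
```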
```
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
```
## Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
```
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for learning_rate in learning_rates:
for reg in regularization_strengths:
print('lr %e reg %e' % (learning_rate, reg,))
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=reg,
num_iters=1500, verbose=True)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
accuracy_train = np.mean(y_train == y_train_pred)
accuracy_val = np.mean(y_val == y_val_pred)
results[(learning_rate, reg)] = (accuracy_train, accuracy_val)
if best_val < accuracy_val:
best_val = accuracy_val
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
```
### Inline question 1:
Describe the misclassification results that you see. Do they make sense?
## Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
```
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 1024
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
# Train the network
_reg=0
_learning_rate=1e-4
_learning_rate_decay=0.95
_num_iters=1000
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=_num_iters, batch_size=200,
learning_rate=_learning_rate, learning_rate_decay=_learning_rate_decay,
reg=_reg, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
print('Validation accuracy: ', val_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
```
# Bonus: Design your own features!
You have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.
For bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.
# Bonus: Do something extra!
Use the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!
# Automate Retraining of Models using SageMaker Pipelines and Lambda
# Learning Objectives
1. Construct a [SageMaker Pipeline](https://aws.amazon.com/sagemaker/pipelines/) that consists of a data preprocessing step and a model training step.
2. Execute a SageMaker Pipeline manually
3. Build infrastructure, using [CloudFormation](https://aws.amazon.com/cloudformation/) and [AWS Lambda](https://aws.amazon.com/lambda/) to allow the Pipeline steps be executed in an event-driven manner when new data is dropped in S3.
## Introduction
This workshop shows how you can build and deploy SageMaker Pipelines for multistep processes. In this example, we will build a pipeline that:
1. Deduplicates the underlying data
2. Trains a built-in SageMaker algorithm (XGBoost)
A common workflow is that models need to be retrained when new data arrives. This notebook also shows how you can set up a Lambda function that will retrigger the retraining pipeline when new data comes in.
Please use the `Python 3 (Data Science)` kernel for this workshop.
```
import boto3
import json
import logging
import os
import pandas
import sagemaker
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.steps import ProcessingStep, TrainingStep
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.estimator import Estimator
from time import gmtime, strftime
# set logs if not done already
logger = logging.getLogger("log")
if not logger.handlers:
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
```
First, get permissions and other account information. We will also create a pipeline name.
```
session = sagemaker.Session()
default_bucket = session.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
s3_client = boto3.client("s3", region_name=region)
current_timestamp = strftime("%m-%d-%H-%M", gmtime())
pipeline_name = f"my-pipeline-{current_timestamp}"
prefix = f"pipeline-lab{current_timestamp}"
```
## Transfer Data into Your Account
```
copy_source = {
"Bucket": "aws-hcls-ml",
"Key": "workshop/immersion_day_workshop_data_DO_NOT_DELETE/data/ObesityDataSet_with_duplicates.csv",
}
s3_client.copy(
copy_source, default_bucket, f"{prefix}/ObesityDataSet_with_duplicates.csv"
)
copy_source = {
"Bucket": "aws-hcls-ml",
"Key": "workshop/immersion_day_workshop_data_DO_NOT_DELETE/kick_off_sagemaker_pipelines_lambda/other_material/lambda.zip",
}
s3_client.copy(copy_source, default_bucket, f"{prefix}/lambda.zip")
```
## Define the Pipeline
First we will create a preprocessing step. The preprocessing step simply removes duplicated rows from the dataset. The `preprocessing.py` script will be written locally, and then built as a SageMaker Pipelines step.
```
input_data = ParameterString(
name="InputData",
default_value=f"s3://{default_bucket}/{prefix}/ObesityDataSet_with_duplicates.csv",
)
%%writefile preprocessing.py
import pandas
import os
base_dir = "/opt/ml/processing/input"
the_files = os.listdir(base_dir)
the_file = [i for i in the_files if ".csv" in i][0]  # take the first CSV file
print(the_file)
df_1 = pandas.read_csv(f'{base_dir}/{the_file}', engine='python')
df_2 = df_1.drop_duplicates()
df_2.to_csv(f'/opt/ml/processing/output/deduped_{the_file}')  # the_file already ends in .csv
# Specify the container and framework options
sklearn_processor = SKLearnProcessor(
framework_version="0.23-1",
instance_type="ml.t3.medium",
instance_count=1,
base_job_name="sklearn-abalone-process",
role=role,
)
```
Now we will turn the preprocessing script into a SageMaker Processing Step with SageMaker Pipelines.
```
step_process = ProcessingStep(
name="deduplication-process",
processor=sklearn_processor,
inputs=[
ProcessingInput(source=input_data, destination="/opt/ml/processing/input"),
],
outputs=[
ProcessingOutput(output_name="deduplicated", source="/opt/ml/processing/output")
],
code="preprocessing.py",
)
```
## Define the Model
Now we will create a SageMaker model. We will use the SageMaker built-in XGBoost Algorithm.
```
# Define the model training parameters
model_path = f"s3://{default_bucket}/{prefix}/myPipelineTrain"
image_uri = sagemaker.image_uris.retrieve(
framework="xgboost",
region=region,
version="1.0-1",
py_version="py3",
instance_type="ml.m5.large",
)
xgb_train = Estimator(
image_uri=image_uri,
instance_type="ml.m5.large",
instance_count=1,
output_path=model_path,
role=role,
)
xgb_train.set_hyperparameters(
objective="reg:linear",
num_round=50,
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.7,
silent=0,
)
```
Turn the model training into a SageMaker Pipeline Training Step.
```
# Define the training steps
step_train = TrainingStep(
name="model-training",
estimator=xgb_train,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"deduplicated"
].S3Output.S3Uri,
content_type="text/csv",
),
"validation": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"deduplicated"
].S3Output.S3Uri,
content_type="text/csv",
),
},
)
```
## Create and Start the Pipeline
```
# Create a two-step data processing and model training pipeline
pipeline_name = "ObesityModelRetrainingPipeLine"
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
],
steps=[step_process, step_train],
)
pipeline.upsert(role_arn=role)
pipeline_execution = pipeline.start()
# Wait 15 minutes for the pipeline to finish running. In the meantime, you can monitor its progress in SageMaker Studio
pipeline_execution.wait()
```
## Deploy a CloudFormation Template to retrain the Pipeline
Now we will deploy a CloudFormation template that allows the pipeline to be called automatically when new files are dropped in an S3 bucket.
The architecture looks like this:

NOTE: In order to run the following steps, you must first attach an IAM policy granting the following actions to your SageMaker execution role:
- cloudformation:CreateStack
- cloudformation:DeleteStack
- cloudformation:DescribeStacks
- iam:CreateRole
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetRole
- iam:GetRolePolicy
- iam:PassRole
- iam:PutRolePolicy
- lambda:AddPermission
- lambda:CreateFunction
- lambda:GetFunction
- lambda:DeleteFunction
```
# Create a new CloudFormation stack to trigger retraining with new data
stack_name = "sagemaker-automated-retraining"
with open("cfn_sagemaker_pipelines.yaml") as f:
template_str = f.read()
cfn = boto3.client("cloudformation")
cfn.create_stack(
StackName=stack_name,
TemplateBody=template_str,
Capabilities=["CAPABILITY_IAM"],
Parameters=[
{"ParameterKey": "StaticCodeBucket", "ParameterValue": default_bucket},
{"ParameterKey": "StaticCodeKey", "ParameterValue": f"{prefix}/lambda.zip"},
],
)
# Wait until stack creation is complete
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName=stack_name)
# Identify the S3 bucket for triggering the training pipeline
input_bucket_name = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"][0]["OutputValue"]
# Copy the training data to the input bucket to start a new pipeline execution
copy_source = {
"Bucket": default_bucket,
"Key": f"{prefix}/ObesityDataSet_with_duplicates.csv",
}
s3_client.copy(copy_source, input_bucket_name, "ObesityDataSet_with_duplicates.csv")
```
### (Optional)
1. Inspect that the `InputBucket` has new data
2. Examine the `SageMaker Pipelines` execution from the SageMaker Studio console
```
#!aws s3 rm --recursive s3://{input_bucket_name}
```
## Closing
In this notebook we demonstrated how to create a SageMaker pipeline for data processing and model training, and how to trigger it automatically using an S3 event.
*This tutorial is part Level 2 in the [Learn Machine Learning](https://www.kaggle.com/learn/machine-learning) curriculum. This tutorial picks up where Level 1 finished, so you will get the most out of it if you've done the exercise from Level 1.*
*In this step, you will learn three approaches to dealing with missing values. You will then learn to compare the effectiveness of these approaches on any given dataset.*
# Introduction
There are many ways data can end up with missing values. For example:
- A 2 bedroom house wouldn't include an answer for _How large is the third bedroom_
- Someone being surveyed may choose not to share their income
Python libraries represent missing numbers as **nan** which is short for "not a number". You can detect which cells have missing values, and then count how many there are in each column with the command:
```
missing_val_count_by_column = (data.isnull().sum())
print(missing_val_count_by_column[missing_val_count_by_column > 0])
```
Most libraries (including scikit-learn) will give you an error if you try to build a model using data with missing values. So you'll need to choose one of the strategies below.
---
## Solutions
## 1) A Simple Option: Drop Columns with Missing Values
If your data is in a DataFrame called `original_data`, you can drop columns with missing values. One way to do that is
```
data_without_missing_values = original_data.dropna(axis=1)
```
In many cases, you'll have both a training dataset and a test dataset. You will want to drop the same columns in both DataFrames. In that case, you would write
```
cols_with_missing = [col for col in original_data.columns
if original_data[col].isnull().any()]
reduced_original_data = original_data.drop(cols_with_missing, axis=1)
reduced_test_data = test_data.drop(cols_with_missing, axis=1)
```
If those columns had useful information (in the places that were not missing), your model loses access to this information when the column is dropped. Also, if your test data has missing values in places where your training data did not, this will result in an error.
So it's usually not the best solution. However, it can be useful when most values in a column are missing.
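One way to apply this selectively is to drop only the columns whose fraction of missing values crosses a threshold; the 0.5 cutoff below is an arbitrary illustration, not a recommended value.

```python
import numpy as np
import pandas as pd

original_data = pd.DataFrame({
    'mostly_there': [1.0, 2.0, np.nan, 4.0],        # 25% missing
    'mostly_gone':  [np.nan, np.nan, np.nan, 4.0],  # 75% missing
})

# isnull().mean() gives the fraction of missing values per column.
missing_frac = original_data.isnull().mean()
cols_to_drop = missing_frac[missing_frac > 0.5].index
reduced_data = original_data.drop(cols_to_drop, axis=1)
print(reduced_data.columns.tolist())  # ['mostly_there']
```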
## 2) A Better Option: Imputation
Imputation fills in the missing value with some number. The imputed value won't be exactly right in most cases, but it usually gives more accurate models than dropping the column entirely.
This is done with
```
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer()
data_with_imputed_values = my_imputer.fit_transform(original_data)
```
The default behavior fills in the mean value for imputation. Statisticians have researched more complex strategies, but those complex strategies typically give no benefit once you plug the results into sophisticated machine learning models.
One (of many) nice things about Imputation is that it can be included in a scikit-learn Pipeline. Pipelines simplify model building, model validation and model deployment.
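As a minimal sketch of that idea (toy data, arbitrary values), the imputer and the model can be bundled so that the imputer learns its column means during `fit` and reapplies them automatically at prediction time:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan], [4.0, 5.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

# fit() learns the per-column means and trains the model in one call;
# predict() reuses the same learned means on new data.
model = make_pipeline(SimpleImputer(), RandomForestRegressor(n_estimators=10, random_state=0))
model.fit(X, y)
preds = model.predict(X)
print(preds.shape)  # (4,)
```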
## 3) An Extension To Imputation
Imputation is the standard approach, and it usually works well. However, imputed values may be systematically above or below their actual values (which weren't collected in the dataset). Or rows with missing values may be unique in some other way. In that case, your model would make better predictions by considering which values were originally missing. Here's how it might look:
```
# make copy to avoid changing original data (when Imputing)
new_data = original_data.copy()
# make new columns indicating what will be imputed
cols_with_missing = (col for col in new_data.columns
if new_data[col].isnull().any())
for col in cols_with_missing:
new_data[col + '_was_missing'] = new_data[col].isnull()
# Imputation
my_imputer = SimpleImputer()
new_data = pd.DataFrame(my_imputer.fit_transform(new_data))
new_data.columns = original_data.columns
```
In some cases this approach will meaningfully improve results. In other cases, it doesn't help at all.
---
# Example (Comparing All Solutions)
We will see an example predicting housing prices from the Melbourne Housing data. To master missing value handling, fork this notebook and repeat the same steps with the Iowa Housing data. Find information about both in the **Data** section of the header menu.
### Basic Problem Set-up
```
import pandas as pd
# Load data
melb_data = pd.read_csv('../data/train.csv')
print(melb_data.columns)
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
melb_target = melb_data.SalePrice
melb_predictors = melb_data.drop(['SalePrice'], axis=1)
# For the sake of keeping the example simple, we'll use only numeric predictors.
melb_numeric_predictors = melb_predictors.select_dtypes(exclude=['object'])
```
### Create Function to Measure Quality of An Approach
We divide our data into **training** and **test**. If the reason for this is unfamiliar, review [Welcome to Data Science](https://www.kaggle.com/dansbecker/welcome-to-data-science-1).
We've loaded a function `score_dataset(X_train, X_test, y_train, y_test)` to compare the quality of different approaches to missing values. This function reports the out-of-sample MAE score from a RandomForest.
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(melb_numeric_predictors,
melb_target,
train_size=0.7,
test_size=0.3,
random_state=0)
def score_dataset(X_train, X_test, y_train, y_test):
model = RandomForestRegressor()
model.fit(X_train, y_train)
preds = model.predict(X_test)
return mean_absolute_error(y_test, preds)
```
### Get Model Score from Dropping Columns with Missing Values
```
cols_with_missing = [col for col in X_train.columns
if X_train[col].isnull().any()]
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_test = X_test.drop(cols_with_missing, axis=1)
print("Mean Absolute Error from dropping columns with Missing Values:")
print(score_dataset(reduced_X_train, reduced_X_test, y_train, y_test))
```
### Get Model Score from Imputation
```
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer()
imputed_X_train = my_imputer.fit_transform(X_train)
imputed_X_test = my_imputer.transform(X_test)
print("Mean Absolute Error from Imputation:")
print(score_dataset(imputed_X_train, imputed_X_test, y_train, y_test))
```
### Get Score from Imputation with Extra Columns Showing What Was Imputed
```
imputed_X_train_plus = X_train.copy()
imputed_X_test_plus = X_test.copy()
cols_with_missing = (col for col in X_train.columns
if X_train[col].isnull().any())
for col in cols_with_missing:
imputed_X_train_plus[col + '_was_missing'] = imputed_X_train_plus[col].isnull()
imputed_X_test_plus[col + '_was_missing'] = imputed_X_test_plus[col].isnull()
# Imputation
my_imputer = SimpleImputer()
imputed_X_train_plus = my_imputer.fit_transform(imputed_X_train_plus)
imputed_X_test_plus = my_imputer.transform(imputed_X_test_plus)
print("Mean Absolute Error from Imputation while Tracking What Was Imputed:")
print(score_dataset(imputed_X_train_plus, imputed_X_test_plus, y_train, y_test))
```
# Conclusion
As is common, imputing missing values allowed us to improve our model compared to dropping those columns. We got an additional boost by tracking what values had been imputed.
# Your Turn
1) Find some columns with missing values in your dataset.
2) Use the `SimpleImputer` class so you can impute missing values
3) Add columns with missing values to your predictors.
If you find the right columns, you may see an improvement in model scores. That said, the Iowa data doesn't have a lot of columns with missing values. So, whether you see an improvement at this point depends on some other details of your model.
Once you've added the Imputer, keep using those columns for future steps. In the end, it will improve your model (and in most other datasets, it is a big improvement).
# Keep Going
Once you've added the Imputer and included columns with missing values, you are ready to [add categorical variables](https://www.kaggle.com/dansbecker/using-categorical-data-with-one-hot-encoding), which are non-numeric data representing categories (like the name of the neighborhood a house is in).
---
Part of the **[Learn Machine Learning](https://www.kaggle.com/learn/machine-learning)** track.
<a href="https://colab.research.google.com/github/mfernandes61/python-intro-gapminder/blob/binder/colab/04_built_in.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
---
title: "Built-in Functions and Help"
teaching: 15
exercises: 10
questions:
- "How can I use built-in functions?"
- "How can I find out what they do?"
- "What kind of errors can occur in programs?"
objectives:
- "Explain the purpose of functions."
- "Correctly call built-in Python functions."
- "Correctly nest calls to built-in functions."
- "Use help to display documentation for built-in functions."
- "Correctly describe situations in which SyntaxError and NameError occur."
keypoints:
- "Use comments to add documentation to programs."
- "A function may take zero or more arguments."
- "Commonly-used built-in functions include `max`, `min`, and `round`."
- "Functions may only work for certain (combinations of) arguments."
- "Functions may have default values for some arguments."
- "Use the built-in function `help` to get help for a function."
- "The Jupyter Notebook has two ways to get help."
- "Every function returns something."
- "Python reports a syntax error when it can't understand the source of a program."
- "Python reports a runtime error when something goes wrong while a program is executing."
- "Fix syntax errors by reading the source code, and runtime errors by tracing the program's execution."
---
## Use comments to add documentation to programs.
~~~
# This sentence isn't executed by Python.
adjustment = 0.5 # Neither is this - anything after '#' is ignored.
~~~
{: .language-python}
## A function may take zero or more arguments.
* We have seen some functions already --- now let's take a closer look.
* An *argument* is a value passed into a function.
* `len` takes exactly one.
* `int`, `str`, and `float` create a new value from an existing one.
* `print` takes zero or more.
* `print` with no arguments prints a blank line.
* Must always use parentheses, even if they're empty,
so that Python knows a function is being called.
~~~
print('before')
print()
print('after')
~~~
{: .language-python}
~~~
before
after
~~~
{: .output}
## Every function returns something.
* Every function call produces some result.
* If the function doesn't have a useful result to return,
it usually returns the special value `None`. `None` is a Python
object that stands in anytime there is no value.
~~~
result = print('example')
print('result of print is', result)
~~~
{: .language-python}
~~~
example
result of print is None
~~~
{: .output}
## Commonly-used built-in functions include `max`, `min`, and `round`.
* Use `max` to find the largest value of one or more values.
* Use `min` to find the smallest.
* Both work on character strings as well as numbers.
* "Larger" and "smaller" use (0-9, A-Z, a-z) to compare letters.
~~~
print(max(1, 2, 3))
print(min('a', 'A', '0'))
~~~
{: .language-python}
~~~
3
0
~~~
{: .output}
## Functions may only work for certain (combinations of) arguments.
* `max` and `min` must be given at least one argument.
* "Largest of the empty set" is a meaningless question.
* And they must be given things that can meaningfully be compared.
~~~
print(max(1, 'a'))
~~~
{: .language-python}
~~~
TypeError Traceback (most recent call last)
<ipython-input-52-3f049acf3762> in <module>
----> 1 print(max(1, 'a'))
TypeError: '>' not supported between instances of 'str' and 'int'
~~~
{: .error}
## Functions may have default values for some arguments.
* `round` will round off a floating-point number.
* By default, rounds to zero decimal places.
~~~
round(3.712)
~~~
{: .language-python}
~~~
4
~~~
{: .output}
* We can specify the number of decimal places we want.
~~~
round(3.712, 1)
~~~
{: .language-python}
~~~
3.7
~~~
{: .output}
## Functions attached to objects are called methods
* Functions take another form that will be common in the pandas episodes.
* Methods have parentheses like functions, but come after the variable.
* Some methods are used for internal Python operations, and are marked with double underlines.
~~~
my_string = 'Hello world!' # creation of a string object
print(len(my_string)) # the len function takes a string as an argument and returns the length of the string
print(my_string.swapcase()) # calling the swapcase method on the my_string object
print(my_string.__len__()) # calling the internal __len__ method on the my_string object, used by len(my_string)
~~~
{: .language-python}
~~~
12
hELLO WORLD!
12
~~~
{: .output}
* You might even see them chained together. They operate left to right.
~~~
print(my_string.isupper()) # Not all the letters are uppercase
print(my_string.upper()) # This capitalizes all the letters
print(my_string.upper().isupper()) # Now all the letters are uppercase
~~~
{: .language-python}
~~~
False
HELLO WORLD!
True
~~~
{: .output}
## Use the built-in function `help` to get help for a function.
* Every built-in function has online documentation.
~~~
help(round)
~~~
{: .language-python}
~~~
Help on built-in function round in module builtins:
round(number, ndigits=None)
Round a number to a given precision in decimal digits.
The return value is an integer if ndigits is omitted or None. Otherwise
the return value has the same type as the number. ndigits may be negative.
~~~
{: .output}
## The Jupyter Notebook has two ways to get help.
* Option 1: Place the cursor near where the function is invoked in a cell
(i.e., the function name or its parameters),
* Hold down <kbd>Shift</kbd>, and press <kbd>Tab</kbd>.
* Do this several times to expand the information returned.
* Option 2: Type the function name in a cell with a question mark after it. Then run the cell.
## Python reports a syntax error when it can't understand the source of a program.
* Won't even try to run the program if it can't be parsed.
~~~
# Forgot to close the quote marks around the string.
name = 'Feng
~~~
{: .language-python}
~~~
File "<ipython-input-56-f42768451d55>", line 2
name = 'Feng
^
SyntaxError: EOL while scanning string literal
~~~
{: .error}
~~~
# An extra '=' in the assignment.
age = = 52
~~~
{: .language-python}
~~~
File "<ipython-input-57-ccc3df3cf902>", line 2
age = = 52
^
SyntaxError: invalid syntax
~~~
{: .error}
* Look more closely at the error message:
~~~
print("hello world"
~~~
{: .language-python}
~~~
File "<ipython-input-6-d1cc229bf815>", line 1
print ("hello world"
^
SyntaxError: unexpected EOF while parsing
~~~
{: .error}
* The message indicates a problem on the first line of the input ("line 1").
* In this case the "ipython-input" section of the file name tells us that
we are working with input into IPython,
the Python interpreter used by the Jupyter Notebook.
* The `-6-` part of the filename indicates that
the error occurred in cell 6 of our Notebook.
* Next is the problematic line of code,
indicating the problem with a `^` pointer.
## <a name='runtime-error'></a> Python reports a runtime error when something goes wrong while a program is executing.
~~~
age = 53
remaining = 100 - aege # mis-spelled 'age'
~~~
{: .language-python}
~~~
NameError Traceback (most recent call last)
<ipython-input-59-1214fb6c55fc> in <module>
1 age = 53
----> 2 remaining = 100 - aege # mis-spelled 'age'
NameError: name 'aege' is not defined
~~~
{: .error}
* Fix syntax errors by reading the source and runtime errors by tracing execution.
> ## What Happens When
>
> 1. Explain in simple terms the order of operations in the following program:
> when does the addition happen, when does the subtraction happen,
> when is each function called, etc.
> 2. What is the final value of `radiance`?
>
> ~~~
> radiance = 1.0
> radiance = max(2.1, 2.0 + min(radiance, 1.1 * radiance - 0.5))
> ~~~
> {: .language-python}
> > ## Solution
> > 1. Order of operations:
> > 1. `1.1 * radiance = 1.1`
> > 2. `1.1 - 0.5 = 0.6`
> > 3. `min(radiance, 0.6) = 0.6`
> > 4. `2.0 + 0.6 = 2.6`
> > 5. `max(2.1, 2.6) = 2.6`
> > 2. At the end, `radiance = 2.6`
> {: .solution}
{: .challenge}
> ## Spot the Difference
>
> 1. Predict what each of the `print` statements in the program below will print.
> 2. Does `max(len(rich), poor)` run or produce an error message?
> If it runs, does its result make any sense?
>
> ~~~
> easy_string = "abc"
> print(max(easy_string))
> rich = "gold"
> poor = "tin"
> print(max(rich, poor))
> print(max(len(rich), len(poor)))
> ~~~
> {: .language-python}
> > ## Solution
> > ~~~
> > print(max(easy_string))
> > ~~~
> > {: .language-python}
> > ~~~
> > c
> > ~~~
> > {: .output}
> > ~~~
> > print(max(rich, poor))
> > ~~~
> > {: .language-python}
> > ~~~
> > tin
> > ~~~
> > {: .output}
> > ~~~
> > print(max(len(rich), len(poor)))
> > ~~~
> > {: .language-python}
> > ~~~
> > 4
> > ~~~
> > {: .output}
> > `max(len(rich), poor)` throws a TypeError. This turns into `max(4, 'tin')` and
> > as we discussed earlier a string and integer cannot meaningfully be compared.
> > ~~~
> > TypeError Traceback (most recent call last)
> > <ipython-input-65-bc82ad05177a> in <module>
> > ----> 1 max(len(rich), poor)
> >
> > TypeError: '>' not supported between instances of 'str' and 'int'
> > ~~~
> > {: .error }
> {: .solution}
{: .challenge}
> ## Why Not?
>
> Why is it that `max` and `min` do not return `None` when they are called with no arguments?
>
> > ## Solution
> > `max` and `min` raise a `TypeError` in this case because the correct number of parameters
> > was not supplied. If it just returned `None`, the error would be much harder to trace as it
> > would likely be stored into a variable and used later in the program, only to likely throw
> > a runtime error.
> {: .solution}
{: .challenge}
> ## Last Character of a String
>
> If Python starts counting from zero,
> and `len` returns the number of characters in a string,
> what index expression will get the last character in the string `name`?
> (Note: we will see a simpler way to do this in a later episode.)
>
> > ## Solution
> >
> > `name[len(name) - 1]`
> {: .solution}
{: .challenge}
> ## Explore the Python docs!
>
> The [official Python documentation](https://docs.python.org/3/) is arguably the most complete
> source of information about the language. It is available in different languages and contains a lot of useful
> resources. The [Built-in Functions page](https://docs.python.org/3/library/functions.html) contains a catalogue of
> all of these functions, including the ones that we've covered in this lesson. Some of these are more advanced and
> unnecessary at the moment, but others are very simple and useful.
>
{: .callout}
| github_jupyter |
## Tutorial : Automatically determining TF binding site locations
The code in this tutorial is released under the [MIT License](https://opensource.org/licenses/MIT). All the content in this notebook is under a [CC-by 4.0 License](https://creativecommons.org/licenses/by/4.0/).
Created by Bill Ireland, Suzy Beleer and Manu Flores.
```
#Import basic stuff
import matplotlib.pyplot as plt
import numpy as np
#import the custom analysis software
import scipy as sp
import seaborn as sns
import viz
# Set PBoC plotting style
viz.pboc_style_mpl()
# Activate a setting that causes all plots to be inside the notebook rather than in pop-ups.
%matplotlib inline
# Get svg graphics from the notebook
%config InlineBackend.figure_format = 'svg'
```
To determine the locations of binding sites automatically, we take the information footprints and expression shift plots (which we demonstrate how to generate in the information footprint tutorial) and determine which base pairs are truly part of binding sites. To do this we first determine which base pairs have a significant impact on gene expression. If there are 5 or more significant base pairs within a 15 base pair window, we tentatively classify that region as a binding site. These regions then need to be reviewed by hand.
To determine which base pairs have significant impacts on gene expression, we will use the MCMC sampling done when inferring the expression shift to determine the uncertainty in each measurement.
First we will load in the MCMC samples
```
#We will declare the path where all the data for this notebook is stored.
path = '../datasets/'
#we will look at the aphA gene in a low oxygen growth condition.
genelabel = 'aphA'
#We load in each sample in the MCMC run. These are also stored in the datasets/ folder. We store each
#MCMC run as a NumPy array file (.npy) or an SQLite database (.sql)
MCMC_samples = np.load(path + 'aphAheat_database.npy')
#remove burnin samples. At the start of any MCMC run there will be a 'burnin' period where the sampler
#will not be in a region of high likelihood. In our case we will be safely past it after 60000 iterations.
#We thin samples (only save 1 out of every 60 samples). So we will throw out the first 1000 saved samples
#to avoid the burnin period.
MCMC_burnin = MCMC_samples[1000:,:]
parameter_to_check = 0
```
We can then look at the distributions of the MCMC samples for a given parameter. We can then construct a confidence interval for the parameter.
```
fig,ax = plt.subplots(figsize=(10,3))
plt.hist(MCMC_burnin[:,parameter_to_check])
ax.set_ylabel('Number of MCMC Counts')
ax.set_xlabel('Parameter Value (A.U.)')
plt.show()
```
The confidence interval is then
```
#We take the mean of all MCMC samples to get the parameter value.
mean_value = np.mean(MCMC_burnin[:,parameter_to_check],axis=0)
#We then determine statistical significance: if the mean value is greater than zero, we will check
#whether five percent or more of the samples are less than zero.
if mean_value > 0:
    #We generate the confidence interval by checking where the 5th percentile of all values lies.
    CI = np.percentile(MCMC_burnin[:,parameter_to_check],[5,100])
else:
    #We generate the confidence interval by checking where the 95th percentile of all values lies.
    CI = np.percentile(MCMC_burnin[:,parameter_to_check],[0,95])
#we can display the confidence interval now.
print(CI)
```
We see that 0 is within the confidence interval, so this parameter does not have a significant effect on expression. We then compute the same information for each base pair.
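The same percentile test can be sketched in isolation. The samples below are synthetic (made up for illustration, not taken from the *aphA* data) and show how the rule separates a parameter centered near zero from one clearly above zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples for two parameters: one centered near zero
# (not significant) and one well above zero (significant).
null_samples = rng.normal(loc=0.05, scale=1.0, size=5000)
effect_samples = rng.normal(loc=3.0, scale=0.5, size=5000)

def is_significant(samples, alpha=5):
    """Return True if 0 lies outside the one-sided alpha-percent tail."""
    if np.mean(samples) > 0:
        # significant only if fewer than alpha% of samples fall below 0
        return np.percentile(samples, alpha) > 0
    else:
        # significant only if fewer than alpha% of samples fall above 0
        return np.percentile(samples, 100 - alpha) < 0

print(is_significant(null_samples))    # False: 0 is inside the interval
print(is_significant(effect_samples))  # True: 0 is outside the interval
```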
```
#initialize an array to store whether or not a given base pair is significant. If it is we will store 'True'.
#Otherwise we will store 'False'
all_significance = np.zeros((160))
#loop through the 160 base pair region.
for i in range(160):
    #determine confidence interval as in the above panel.
    mean_value = np.mean(MCMC_burnin[:,i],axis=0)
    if mean_value > 0:
        CI = np.percentile(MCMC_burnin[:,i],[5,100])
    else:
        CI = np.percentile(MCMC_burnin[:,i],[0,95])
    #we now check if 0 is in the confidence interval. If it is not, we label the significance of
    #the base pair location as 'True'.
    if CI[0] < 0 < CI[1]:
        all_significance[i] = False
    else:
        all_significance[i] = True
```
We will plot the results with significant base pair in red.
```
fig,ax = plt.subplots(figsize=(10,1))
plt.imshow(all_significance[np.newaxis,:],aspect='auto',cmap='coolwarm')
plt.yticks([])
ax.set_xlabel('Base Pair')
ax.set_xticks(np.arange(0,160,10))
ax.set_xticklabels(np.arange(-115,45,10))
plt.show()
```
We then check if there are 5 or more significant base pairs in a 15 base pair region. If so, we will declare it part of a binding site.
```
# we are looking at 15 base pair windows so we only need 145 entries.
TF_locations = np.zeros(145)
for i in range(145):
    #we get the total number of significant base pairs and see if that is 5 or more.
    if all_significance[i:i+15].sum() > 4:
        TF_locations[i] = True
    else:
        TF_locations[i] = False
```
Now we can plot the final results.
```
fig,ax = plt.subplots(figsize=(10,1))
plt.imshow(TF_locations[np.newaxis,:],aspect='auto',cmap='coolwarm')
plt.yticks([])
ax.set_xlabel('Base Pair')
ax.set_xticks(np.arange(0,145,10))
ax.set_xticklabels(np.arange(-108,38,10))
plt.show()
```
We see that there are multiple locations identified by this method. We can see regions from -82 to -70, -53 to -47, -41 to -37, -33 to 1, 3 to 8, and 25 to 34. The region from -82 to -70 corresponds to a confirmed DeoR binding site, and the -53 to -47 region corresponds to part of a known FNR binding site. All regulatory regions from -41 to 1 correspond to an RNAP binding site.
However, the downstream regions are unlikely to correspond to true TF binding sites. This automated method recovers the discovered binding sites of the *aphA* gene, but it also includes some secondary RNAP binding sites and some likely false positives, such as the region downstream of the TSS (2 to 10 bp). The results show that this method is useful, but its output still needs to be reviewed by hand.
| github_jupyter |
## 0.0. Problem Objective:
-- 1.0. Predict the first destination a new user will choose.
-- Why?
-- What kind of business model does Airbnb have?
- Marketplace (connects people offering accommodation with people looking for accommodation)
- Supply (people offering accommodation)
- Portfolio size.
- Portfolio diversity/density.
- Average price
- Demand (people looking for accommodation)
- Number of users
- LTV (Lifetime Value)
- CAC (Customer Acquisition Cost)
Gross Revenue = ( Fee * Number of clients ) - CAC
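As a toy illustration of the revenue equation above (all numbers below are hypothetical, not Airbnb figures):

```python
# Hypothetical figures for illustration only.
fee = 25.0            # average fee collected per booking (currency units)
n_clients = 10_000    # number of paying clients
cac = 50_000.0        # total client acquisition cost

gross_revenue = (fee * n_clients) - cac
print(gross_revenue)  # 200000.0
```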
## 0.1. Proposed Solution:
--- A model that predicts a new user's first destination.
- 1.0. Predictions saved to a database table.
- 2.0. API
--- Input: a user and their attributes
--- Output: the user and their attributes with the **predicted destination**
--- 16 cycles
# <font color ='red'> 1.0. Imports </font>
```
import pandas as pd
from sklearn import model_selection as ms
from sklearn import preprocessing as pp
from sklearn import metrics as m
from scikitplot import metrics as mt
from keras import models as ml
from keras import layers as l
import warnings
warnings.filterwarnings("ignore")
```
## 1.1. Helper Function
## 1.2. Loading Data
```
df_raw = pd.read_csv('~/repositorio/airbnb_predict/data/raw/train_users_2.csv', low_memory=True)
df_sessions = pd.read_csv('~/repositorio/airbnb_predict/data/raw/sessions.csv', low_memory=True)
```
# 2.0. Data Description
```
df2 = df_raw.copy()
print('Number of rows: {}'.format(df2.shape[0]))
print('Number of columns: {}'.format(df2.shape[1]))
```
## 2.1. Data Type
```
df2.dtypes
```
## 2.2. NA Check
```
df2.isna().sum()
# remove rows with missing values entirely
df2 = df2.dropna()
```
## 2.3. Change Data Type
```
# 'date_account_created'
df2['date_account_created'] = pd.to_datetime(df2['date_account_created'])
# 'timestamp_first_active'
df2['timestamp_first_active'] = pd.to_datetime(df2['timestamp_first_active'], format = '%Y%m%d%H%M%S')
# 'date_first_booking'
df2['date_first_booking'] = pd.to_datetime(df2['date_first_booking'])
# 'age'
df2['age'] = df2['age'].astype('int64')
```
## 2.4. Check Balanced Data
```
df2['country_destination'].value_counts(normalize=True)
```
# 3.0. Data Filtering
```
df3 = df2.copy()
```
## 3.1. Filtering Rows
## 3.2. Columns Selection
# 4.0. Data Preparation
```
df4 = df3.copy()
# dummy variable
df4_dummy = pd.get_dummies(df4.drop(['id','country_destination'], axis =1))
# join id and country destination
df4 = pd.concat([df4[['id','country_destination']],df4_dummy], axis =1)
```
# 5.0. Feature Selection
```
df5 = df4.copy()
cols_drop = ['date_account_created','timestamp_first_active','date_first_booking'] # original dates
df5 = df5.drop(cols_drop, axis =1)
X = df5.drop(['id','country_destination'], axis = 1)
Y = df5['country_destination'].copy()
```
# 6.0. Machine Learning Model - Neural Network MLP
```
# Split dataset into training and test
X_train, X_test , y_train, y_test = ms.train_test_split(X, Y, test_size = 0.2 , random_state=32)
ohe = pp.OneHotEncoder()
y_train_nn = ohe.fit_transform(y_train.values.reshape(-1,1)).toarray()
# model definition
model = ml.Sequential()
model.add(l.Dense(128, input_dim = X_train.shape[1], activation= 'relu'))
model.add(l.Dense(11, activation= 'softmax'))
# model compile
model.compile(loss = 'categorical_crossentropy' , optimizer='adam', metrics=['accuracy'])
# train model
model.fit(X_train, y_train_nn, epochs=100)
```
# 7.0. NN Performance
```
# prediction
pred_nn = model.predict(X_test)
# invert Predict
yhat_nn = ohe.inverse_transform(pred_nn)
# prediction prepare
y_test_nn = y_test.to_numpy()
yhat_nn = yhat_nn.reshape(1,-1)[0]
# accuracy
acc_nn = m.accuracy_score(y_test_nn, yhat_nn)
print('Accuracy: {}'.format(acc_nn))
# confusion matrix
mt.plot_confusion_matrix(y_test_nn , yhat_nn, normalize=False, figsize=(12,12))
```
| github_jupyter |
## Simple regression
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Import relevant modules
import pymc
import numpy as np
def generateData(size, true_intercept, true_slope, order, noiseSigma):
    x = np.linspace(0, 1, size)
    # y = a + b*x
    true_y = true_intercept + true_slope * (x ** order)
    # add noise
    y = true_y + np.random.normal(scale=noiseSigma, size=size)
    return x, y, true_y

def plotData(x, y, true_y):
    fig = plt.figure(figsize=(7, 7))
    ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
    ax.plot(x, y, 'x', label='sampled data')
    ax.plot(x, true_y, label='true regression line', lw=2.)
    plt.legend(loc=0)
```
### Fit linear model
```
(x, y, true_y) = generateData(size = 200, true_intercept = 1, true_slope = 20, order = 1, noiseSigma=1.0)
plotData(x, y, true_y)
#Fit linear model
sigma = pymc.HalfCauchy('sigma', 10, 1.)
intercept = pymc.Normal('Intercept', 0, 1)
x_coeff = pymc.Normal('x', 0, 1)
@pymc.deterministic
def m(intercept=intercept, x_coeff=x_coeff):
    return intercept + (x ** 1) * x_coeff
likelihood = pymc.Normal(name='y', mu=m, tau=1.0/sigma, value=y, observed=True)
# Plot the model dependencies
import pymc.graph
from IPython.display import display_png
model = pymc.Model([likelihood, sigma, intercept, x_coeff])
graph = pymc.graph.graph(model)
display_png(graph.create_png(), raw=True)
# Run inference
mcmc = pymc.MCMC([likelihood, sigma, intercept, x_coeff])
mcmc.sample(iter=10000, burn=500, thin=2)
pymc.Matplot.plot(mcmc)
```
### Exercise fit cubic model
```
# your code here
```
### Model selection
```
(x, y, true_y) = generateData(size = 200, true_intercept = 1, true_slope = 20, order = 3, noiseSigma=2.0)
plotData(x, y, true_y)
#Model selection
beta = pymc.Beta('beta', 1.0, 1.0)
ber = pymc.Bernoulli('ber', beta)
sigma = pymc.HalfCauchy('sigma', 10, 1.)
intercept = pymc.Normal('Intercept', 0, 1)
x_coeff = pymc.Normal('x', 0, 1)
@pymc.deterministic
def m(intercept=intercept, x_coeff=x_coeff, ber=ber):
    if ber:
        return intercept + (x ** 3) * x_coeff
    else:
        return intercept + (x ** 1) * x_coeff
likelihood = pymc.Normal(name='y', mu=m, tau=1.0/sigma, value=y, observed=True)
mcmc = pymc.MCMC([likelihood, sigma, intercept, x_coeff, beta, ber])
mcmc.sample(iter=10000, burn=500, thin=2)
pymc.Matplot.plot(mcmc)
plt.hist(np.array(mcmc.trace("ber")[:], dtype=int))
plt.xlim([0, 1.5])
```
### Exercise: find noise effect on the model linearity
```
# your code here
```
| github_jupyter |
# Module 5: Hierarchical Generators
This module covers writing layout/schematic generators that instantiate other generators. We will write a two-stage amplifier generator, which instantiates the common-source amplifier followed by the source-follower amplifier.
## AmpChain Layout Example
First, we will write a layout generator for the two-stage amplifier. The layout floorplan is drawn for you below:
<img src="bootcamp_pics/5_hierarchical_generator/hierachical_generator_1.PNG" alt="Drawing" style="width: 400px;"/>
In this floorplan the `AmpCS` instance abuts the `AmpSF` instance, the `VSS` ports are simply shorted together, and the top `VSS` port of `AmpSF` is ignored (they are connected together internally by dummy connections). The intermediate node of the two-stage amplifier is connected using a vertical routing track in the middle of the two amplifier blocks. The `VDD` ports are connected to the top-most M6 horizontal track, and the other ports are simply exported in place.
The layout generator is reproduced below, with some parts missing (which you will fill out later). We will walk through the important sections of the code.
```python
class AmpChain(TemplateBase):
    def __init__(self, temp_db, lib_name, params, used_names, **kwargs):
        TemplateBase.__init__(self, temp_db, lib_name, params, used_names, **kwargs)
        self._sch_params = None

    @property
    def sch_params(self):
        return self._sch_params

    @classmethod
    def get_params_info(cls):
        return dict(
            cs_params='common source amplifier parameters.',
            sf_params='source follower parameters.',
            show_pins='True to draw pin geometries.',
        )

    def draw_layout(self):
        """Draw the layout of a transistor for characterization.
        """
        # make copies of given dictionaries to avoid modifying external data.
        cs_params = self.params['cs_params'].copy()
        sf_params = self.params['sf_params'].copy()
        show_pins = self.params['show_pins']
        # disable pins in subcells
        cs_params['show_pins'] = False
        sf_params['show_pins'] = False
        # create layout masters for subcells we will add later
        cs_master = self.new_template(params=cs_params, temp_cls=AmpCS)
        # TODO: create sf_master. Use AmpSFSoln class
        sf_master = None
        if sf_master is None:
            return
        # add subcell instances
        cs_inst = self.add_instance(cs_master, 'XCS')
        # add source follower to the right of common source
        x0 = cs_inst.bound_box.right_unit
        sf_inst = self.add_instance(sf_master, 'XSF', loc=(x0, 0), unit_mode=True)
        # get VSS wires from AmpCS/AmpSF
        cs_vss_warr = cs_inst.get_all_port_pins('VSS')[0]
        sf_vss_warrs = sf_inst.get_all_port_pins('VSS')
        # only connect bottom VSS wire of source follower
        if sf_vss_warrs[0].track_id.base_index < sf_vss_warrs[1].track_id.base_index:
            sf_vss_warr = sf_vss_warrs[0]
        else:
            sf_vss_warr = sf_vss_warrs[1]
        # connect VSS of the two blocks together
        vss = self.connect_wires([cs_vss_warr, sf_vss_warr])[0]
        # get layer IDs from VSS wire
        hm_layer = vss.layer_id
        vm_layer = hm_layer + 1
        top_layer = vm_layer + 1
        # calculate template size
        tot_box = cs_inst.bound_box.merge(sf_inst.bound_box)
        self.set_size_from_bound_box(top_layer, tot_box, round_up=True)
        # get subcell ports as WireArrays so we can connect them
        vmid0 = cs_inst.get_all_port_pins('vout')[0]
        vmid1 = sf_inst.get_all_port_pins('vin')[0]
        vdd0 = cs_inst.get_all_port_pins('VDD')[0]
        vdd1 = sf_inst.get_all_port_pins('VDD')[0]
        # get vertical VDD TrackIDs
        vdd0_tid = TrackID(vm_layer, self.grid.coord_to_nearest_track(vm_layer, vdd0.middle))
        vdd1_tid = TrackID(vm_layer, self.grid.coord_to_nearest_track(vm_layer, vdd1.middle))
        # connect VDD of each block to vertical M5
        vdd0 = self.connect_to_tracks(vdd0, vdd0_tid)
        vdd1 = self.connect_to_tracks(vdd1, vdd1_tid)
        # connect M5 VDD to top M6 horizontal track
        vdd_tidx = self.grid.get_num_tracks(self.size, top_layer) - 1
        vdd_tid = TrackID(top_layer, vdd_tidx)
        vdd = self.connect_to_tracks([vdd0, vdd1], vdd_tid)
        # TODO: connect vmid0 and vmid1 to vertical track in the middle of two templates
        # hint: use x0
        vmid = None
        if vmid is None:
            return
        # add pins on wires
        self.add_pin('vmid', vmid, show=show_pins)
        self.add_pin('VDD', vdd, show=show_pins)
        self.add_pin('VSS', vss, show=show_pins)
        # re-export pins on subcells.
        self.reexport(cs_inst.get_port('vin'), show=show_pins)
        self.reexport(cs_inst.get_port('vbias'), net_name='vb1', show=show_pins)
        # TODO: reexport vout and vbias of source follower
        # TODO: vbias should be renamed to vb2
        # compute schematic parameters.
        self._sch_params = dict(
            cs_params=cs_master.sch_params,
            sf_params=sf_master.sch_params,
        )
```
## AmpChain Constructor
```python
class AmpChain(TemplateBase):
    def __init__(self, temp_db, lib_name, params, used_names, **kwargs):
        TemplateBase.__init__(self, temp_db, lib_name, params, used_names, **kwargs)
        self._sch_params = None

    @property
    def sch_params(self):
        return self._sch_params

    @classmethod
    def get_params_info(cls):
        return dict(
            cs_params='common source amplifier parameters.',
            sf_params='source follower parameters.',
            show_pins='True to draw pin geometries.',
        )
```
First, notice that instead of subclassing `AnalogBase`, the `AmpChain` class subclasses `TemplateBase`. This is because we are not trying to draw transistor rows inside this layout generator; we just want to place and route multiple layout instances together. `TemplateBase` is the base class for all layout generators, and it provides most of the placement and routing methods you need.
Next, notice that the parameters for `AmpChain` are simply parameter dictionaries for the two sub-generators. The ability to use complex data structures as generator parameters solves the parameter explosion problem when writing generators with many levels of hierarchy.
## Creating Layout Master
```python
# create layout masters for subcells we will add later
cs_master = self.new_template(params=cs_params, temp_cls=AmpCS)
# TODO: create sf_master. Use AmpSFSoln class
sf_master = None
```
Here, the `new_template()` function creates a new layout master, `cs_master`, which represents a generated layout cellview from the `AmpCS` layout generator. We can later add instances of this master to the current layout; these are references to the generated `AmpCS` layout cellview, perhaps shifted and rotated. The main takeaway is that `new_template()` does not add any layout geometries to the current layout, but rather creates a separate layout cellview that we may use later.
## Creating Layout Instance
```python
# add subcell instances
cs_inst = self.add_instance(cs_master, 'XCS')
# add source follower to the right of common source
x0 = cs_inst.bound_box.right_unit
sf_inst = self.add_instance(sf_master, 'XSF', loc=(x0, 0), unit_mode=True)
```
The `add_instance()` method adds an instance of the given layout master to the current cellview. By default, if no location or orientation is given, it places the instance at the origin with no rotation. The `bound_box` attribute can then be used on the instance to get its bounding box. Here, the bounding box is used to determine the X coordinate of the source follower.
## Get Instance Ports
```python
# get subcell ports as WireArrays so we can connect them
vmid0 = cs_inst.get_all_port_pins('vout')[0]
vmid1 = sf_inst.get_all_port_pins('vin')[0]
vdd0 = cs_inst.get_all_port_pins('VDD')[0]
vdd1 = sf_inst.get_all_port_pins('VDD')[0]
```
After adding an instance, the `get_all_port_pins()` function can be used to obtain a list of all pins with the given name as `WireArray` objects. In this case, we know that there is exactly one pin, so we use Python list indexing to obtain the first element of the list.
## Routing Grid Object
```python
# get vertical VDD TrackIDs
vdd0_tid = TrackID(vm_layer, self.grid.coord_to_nearest_track(vm_layer, vdd0.middle))
vdd1_tid = TrackID(vm_layer, self.grid.coord_to_nearest_track(vm_layer, vdd1.middle))
```
The `self.grid` attribute of `TemplateBase` is a `RoutingGrid` object, which provides many useful functions related to the routing grid. In this particular scenario, `coord_to_nearest_track()` is used to determine the vertical track index closest to the center of the `VDD` ports. These vertical tracks will be used later to connect the `VDD` ports together.
## Re-export Pins on Instances
```python
# re-export pins on subcells.
self.reexport(cs_inst.get_port('vin'), show=show_pins)
self.reexport(cs_inst.get_port('vbias'), net_name='vb1', show=show_pins)
# TODO: reexport vout and vbias of source follower
# TODO: vbias should be renamed to vb2
```
`TemplateBase` also provides a `reexport()` function, which is a convenience function to re-export an instance port in-place. The `net_name` optional parameter can be used to change the port name. In this example, the `vbias` port of common-source amplifier is renamed to `vb1`.
## Layout Exercises
Now you should know everything you need to finish the two-stage amplifier layout generator. Fill in the missing pieces to do the following:
1. Create layout master for `AmpSF` using the `AmpSFSoln` class.
2. Using `RoutingGrid`, determine the vertical track index in the middle of the two amplifier blocks, and connect `vmid` wires together using this track.
* Hint: variable `x0` is the X coordinate of the boundary between the two blocks.
3. Re-export `vout` and `vbias` of the source-follower. Rename `vbias` to `vb2`.
Once you're done, evaluate the cell below, which will generate the layout and run LVS. If everything is done correctly, a layout should be generated in the `DEMO_AMP_CHAIN` library, and LVS should pass.
```
from bag.layout.routing import TrackID
from bag.layout.template import TemplateBase
from xbase_demo.demo_layout.core import AmpCS, AmpSFSoln
class AmpChain(TemplateBase):
    def __init__(self, temp_db, lib_name, params, used_names, **kwargs):
        TemplateBase.__init__(self, temp_db, lib_name, params, used_names, **kwargs)
        self._sch_params = None

    @property
    def sch_params(self):
        return self._sch_params

    @classmethod
    def get_params_info(cls):
        return dict(
            cs_params='common source amplifier parameters.',
            sf_params='source follower parameters.',
            show_pins='True to draw pin geometries.',
        )

    def draw_layout(self):
        """Draw the layout of a transistor for characterization.
        """
        # make copies of given dictionaries to avoid modifying external data.
        cs_params = self.params['cs_params'].copy()
        sf_params = self.params['sf_params'].copy()
        show_pins = self.params['show_pins']
        # disable pins in subcells
        cs_params['show_pins'] = False
        sf_params['show_pins'] = False
        # create layout masters for subcells we will add later
        cs_master = self.new_template(params=cs_params, temp_cls=AmpCS)
        # TODO: create sf_master. Use AmpSFSoln class
        sf_master = None
        if sf_master is None:
            return
        # add subcell instances
        cs_inst = self.add_instance(cs_master, 'XCS')
        # add source follower to the right of common source
        x0 = cs_inst.bound_box.right_unit
        sf_inst = self.add_instance(sf_master, 'XSF', loc=(x0, 0), unit_mode=True)
        # get VSS wires from AmpCS/AmpSF
        cs_vss_warr = cs_inst.get_all_port_pins('VSS')[0]
        sf_vss_warrs = sf_inst.get_all_port_pins('VSS')
        # only connect bottom VSS wire of source follower
        if len(sf_vss_warrs) < 2 or sf_vss_warrs[0].track_id.base_index < sf_vss_warrs[1].track_id.base_index:
            sf_vss_warr = sf_vss_warrs[0]
        else:
            sf_vss_warr = sf_vss_warrs[1]
        # connect VSS of the two blocks together
        vss = self.connect_wires([cs_vss_warr, sf_vss_warr])[0]
        # get layer IDs from VSS wire
        hm_layer = vss.layer_id
        vm_layer = hm_layer + 1
        top_layer = vm_layer + 1
        # calculate template size
        tot_box = cs_inst.bound_box.merge(sf_inst.bound_box)
        self.set_size_from_bound_box(top_layer, tot_box, round_up=True)
        # get subcell ports as WireArrays so we can connect them
        vmid0 = cs_inst.get_all_port_pins('vout')[0]
        vmid1 = sf_inst.get_all_port_pins('vin')[0]
        vdd0 = cs_inst.get_all_port_pins('VDD')[0]
        vdd1 = sf_inst.get_all_port_pins('VDD')[0]
        # get vertical VDD TrackIDs
        vdd0_tid = TrackID(vm_layer, self.grid.coord_to_nearest_track(vm_layer, vdd0.middle))
        vdd1_tid = TrackID(vm_layer, self.grid.coord_to_nearest_track(vm_layer, vdd1.middle))
        # connect VDD of each block to vertical M5
        vdd0 = self.connect_to_tracks(vdd0, vdd0_tid)
        vdd1 = self.connect_to_tracks(vdd1, vdd1_tid)
        # connect M5 VDD to top M6 horizontal track
        vdd_tidx = self.grid.get_num_tracks(self.size, top_layer) - 1
        vdd_tid = TrackID(top_layer, vdd_tidx)
        vdd = self.connect_to_tracks([vdd0, vdd1], vdd_tid)
        # TODO: connect vmid0 and vmid1 to vertical track in the middle of two templates
        # hint: use x0
        vmid = None
        if vmid is None:
            return
        # add pins on wires
        self.add_pin('vmid', vmid, show=show_pins)
        self.add_pin('VDD', vdd, show=show_pins)
        self.add_pin('VSS', vss, show=show_pins)
        # re-export pins on subcells.
        self.reexport(cs_inst.get_port('vin'), show=show_pins)
        self.reexport(cs_inst.get_port('vbias'), net_name='vb1', show=show_pins)
        # TODO: reexport vout and vbias of source follower
        # TODO: vbias should be renamed to vb2
        # compute schematic parameters.
        self._sch_params = dict(
            cs_params=cs_master.sch_params,
            sf_params=sf_master.sch_params,
        )
import os
# import bag package
import bag
from bag.io import read_yaml
# import BAG demo Python modules
import xbase_demo.core as demo_core
# load circuit specifications from file
spec_fname = os.path.join(os.environ['BAG_WORK_DIR'], 'specs_demo/demo.yaml')
top_specs = read_yaml(spec_fname)
# obtain BagProject instance
local_dict = locals()
if 'bprj' in local_dict:
    print('using existing BagProject')
    bprj = local_dict['bprj']
else:
    print('creating BagProject')
    bprj = bag.BagProject()
demo_core.run_flow(bprj, top_specs, 'amp_chain_soln', AmpChain, run_lvs=True, lvs_only=True)
```
## AmpChain Schematic Template
Now let's move on to schematic generator. As before, we need to create the schematic template first. A half-complete schematic template is provided for you in library `demo_templates`, cell `amp_chain`, shown below:
<img src="bootcamp_pics/5_hierarchical_generator/hierachical_generator_2.PNG" alt="Drawing" style="width: 400px;"/>
The schematic template for a hierarchical generator is very simple; you simply need to instantiate the schematic templates of the sub-blocks (***not the generated schematics!***). For the exercise, instantiate the `amp_sf` schematic template from the `demo_templates` library, name it `XSF`, connect it, then evaluate the following cell to import the `amp_chain` netlist into Python.
```
import bag
# obtain BagProject instance
local_dict = locals()
if 'bprj' in local_dict:
    print('using existing BagProject')
    bprj = local_dict['bprj']
else:
    print('creating BagProject')
    bprj = bag.BagProject()
print('importing netlist from virtuoso')
bprj.import_design_library('demo_templates')
print('netlist import done')
```
## AmpChain Schematic Generator
With the schematic template done, you are ready to write the schematic generator. It is also very simple: you just need to call the `design()` method, which you implemented previously, on each instance in the schematic. Complete the following schematic generator, then evaluate the cell to push it through the design flow.
```
%matplotlib inline
import os
from bag.design import Module
# noinspection PyPep8Naming
class demo_templates__amp_chain(Module):
    """Module for library demo_templates cell amp_chain.

    Fill in high level description here.
    """

    # hard coded netlist file path to get the jupyter notebook working.
    yaml_file = os.path.join(os.environ['BAG_WORK_DIR'], 'BAG_XBase_demo',
                             'BagModules', 'demo_templates', 'netlist_info', 'amp_chain.yaml')

    def __init__(self, bag_config, parent=None, prj=None, **kwargs):
        Module.__init__(self, bag_config, self.yaml_file, parent=parent, prj=prj, **kwargs)

    @classmethod
    def get_params_info(cls):
        # type: () -> Dict[str, str]
        """Returns a dictionary from parameter names to descriptions.

        Returns
        -------
        param_info : Optional[Dict[str, str]]
            dictionary from parameter names to descriptions.
        """
        return dict(
            cs_params='common-source amplifier parameters dictionary.',
            sf_params='source-follower amplifier parameters dictionary.',
        )

    def design(self, cs_params=None, sf_params=None):
        self.instances['XCS'].design(**cs_params)
        # TODO: design XSF
import os
# import bag package
import bag
from bag.io import read_yaml
# import BAG demo Python modules
import xbase_demo.core as demo_core
from xbase_demo.demo_layout.core import AmpChainSoln
# load circuit specifications from file
spec_fname = os.path.join(os.environ['BAG_WORK_DIR'], 'specs_demo/demo.yaml')
top_specs = read_yaml(spec_fname)
# obtain BagProject instance
local_dict = locals()
if 'bprj' in local_dict:
    print('using existing BagProject')
    bprj = local_dict['bprj']
else:
    print('creating BagProject')
    bprj = bag.BagProject()
demo_core.run_flow(bprj, top_specs, 'amp_chain', AmpChainSoln, sch_cls=demo_templates__amp_chain, run_lvs=True)
```
| github_jupyter |
# MNIST Convolutional Neural Network - Ensemble Learning
Gaetano Bonofiglio, Veronica Iovinella
In this notebook we will verify if our single-column architecture can get any advantage from using **ensemble learning**, so a multi-column architecture.
We will train multiple networks identical to the best one defined in notebook 03, feeding them with pre-processed images shuffled and distorted using a different pseudo-random seed. This should give us a good ensemble of networks that we can average for each classification.
A prediction doesn't take more time compared to a single column, but training time scales by a factor of N, where N is the number of columns. The networks could be trained in parallel, but not on our current hardware, which is saturated by the training of a single one.
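The averaging step at prediction time can be sketched with plain NumPy. The softmax outputs below are made up for illustration; in the notebook they would come from each trained column's `model.predict()`:

```python
import numpy as np

# Hypothetical per-column softmax outputs for 2 samples over 3 classes.
# Each row sums to 1; in practice these come from model.predict().
col1 = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.5, 0.3]])
col2 = np.array([[0.4, 0.4, 0.2],
                 [0.1, 0.3, 0.6]])
col3 = np.array([[0.7, 0.2, 0.1],
                 [0.3, 0.3, 0.4]])

# Multi-column prediction: average the class probabilities, then take argmax.
ensemble_probs = np.mean([col1, col2, col3], axis=0)
ensemble_pred = np.argmax(ensemble_probs, axis=1)
print(ensemble_pred)  # [0 2]
```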
## Imports
```
import os.path
from IPython.display import Image
from util import Util
u = Util()
import numpy as np
# Explicit random seed for reproducibility
np.random.seed(1337)
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Merge
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.datasets import mnist
```
## Definitions
For this experiment we are using 5 networks, but usually a good number is in the range of 35 (with more dataset alterations than we use here).
```
batch_size = 1024
nb_classes = 10
nb_epoch = 650
# checkpoint path
checkpoints_dir = "checkpoints"
# number of networks for ensemble learning
number_of_models = 5
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters1 = 20
nb_filters2 = 40
# size of pooling area for max pooling
pool_size1 = (2, 2)
pool_size2 = (3, 3)
# convolution kernel size
kernel_size1 = (4, 4)
kernel_size2 = (5, 5)
# dense layer size
dense_layer_size1 = 200
# dropout rate
dropout = 0.15
# activation type
activation = 'relu'
```
## Data load
```
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
u.plot_images(X_train[0:9], y_train[0:9])
if K.image_dim_ordering() == 'th':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
```
## Image preprocessing
```
datagen = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=False)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)
```
## Model definition - Single column
This time we are going to define a helper function to initialize the model, since we're going to apply it to a list of models.
```
def initialize_network(model, dropout1=dropout, dropout2=dropout):
    model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],
                            border_mode='valid',
                            input_shape=input_shape, name='convolution_1_' + str(nb_filters1) + '_filters'))
    model.add(Activation(activation, name='activation_1_' + activation))
    model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))
    model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))
    model.add(Activation(activation, name='activation_2_' + activation))
    model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_2_' + str(pool_size2) + '_pool_size'))
    model.add(Dropout(dropout1))
    model.add(Flatten())
    model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))
    model.add(Activation(activation, name='activation_3_' + activation))
    model.add(Dropout(dropout2))
    model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))
    model.add(Activation('softmax', name='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy', 'precision', 'recall'])

# pseudo-random generation of seeds
seeds = np.random.randint(10000, size=number_of_models)
# initializing all the models
models = [None] * number_of_models
for i in range(number_of_models):
    models[i] = Sequential()
    initialize_network(models[i])
```
## Training and evaluation - Single column
Again we are going to define helper functions to train the models, since we're going to use them on a list.
```
def try_load_checkpoints(model, checkpoints_filepath, warn=False):
    # loading weights from checkpoints
    if os.path.exists(checkpoints_filepath):
        model.load_weights(checkpoints_filepath)
    elif warn:
        print('Warning: ' + checkpoints_filepath + ' could not be loaded')

def fit(model, checkpoints_name='test', seed=1337, initial_epoch=0,
        verbose=1, window_size=(-1), plot_history=False, evaluation=True):
    if window_size == (-1):
        window = 1 + np.random.randint(14)
    else:
        window = window_size
    if window >= nb_epoch:
        window = nb_epoch - 1
    print("Not pre-processing " + str(window) + " epoch(s)")
    checkpoints_filepath = os.path.join(checkpoints_dir, '04_MNIST_weights.best_' + checkpoints_name + '.hdf5')
    try_load_checkpoints(model, checkpoints_filepath, True)
    # checkpoint
    checkpoint = ModelCheckpoint(checkpoints_filepath, monitor='val_precision', verbose=verbose, save_best_only=True, mode='max')
    callbacks_list = [checkpoint]
    # fits the model on batches with real-time data augmentation, for (nb_epoch - window) epochs
    history = model.fit_generator(datagen.flow(X_train, Y_train,
                                               batch_size=batch_size,
                                               # save_to_dir='distorted_data',
                                               # save_format='png'
                                               seed=seed),  # use the per-model seed passed to fit()
                                  samples_per_epoch=len(X_train), nb_epoch=(nb_epoch - window), verbose=0,
                                  validation_data=(X_test, Y_test), callbacks=callbacks_list)
    # ensuring best val_precision reached during training
    try_load_checkpoints(model, checkpoints_filepath)
    # fits the model on the clean training set, for the remaining `window` epochs
    history_cont = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=window,
                             verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list)
    # ensuring best val_precision reached during training
    try_load_checkpoints(model, checkpoints_filepath)
    if plot_history:
        print("History: ")
        u.plot_history(history)
        u.plot_history(history, 'precision')
        print("Continuation of training with no pre-processing:")
        u.plot_history(history_cont)
        u.plot_history(history_cont, 'precision')
    if evaluation:
        print('Evaluating model ' + checkpoints_name)
        score = model.evaluate(X_test, Y_test, verbose=0)
        print('Test accuracy:', score[1]*100, '%')
        print('Test error:', (1-score[2])*100, '%')
    return history, history_cont
for index in range(number_of_models):
    print("Training model " + str(index) + " ...")
    if index == 0:
        window_size = 20
        plot_history = True
    else:
        window_size = (-1)
        plot_history = False
    history, history_cont = fit(models[index],
                                str(index),
                                seed=seeds[index],
                                initial_epoch=0,
                                verbose=0,
                                window_size=window_size,
                                plot_history=plot_history)
    print("Done.\n\n")
```
Merely by varying the seeds, the error ranges **from 0.5% to 0.42%** (our best single-column result so far). The training took 12 hours.
## Model definition - Multi column
The MCDNN is obtained by creating a new model with a single layer, `Merge`, that averages the outputs of the models in the given list. No training is required, since we are only computing the average.
```
merged_model = Sequential()
merged_model.add(Merge(models, mode='ave'))
merged_model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall'])
```
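The averaging that the `ave` merge mode performs can be sketched with plain NumPy. The softmax outputs below are made-up illustrative values, not outputs of the trained columns:

```python
import numpy as np

# Hypothetical softmax outputs of three column networks for one test sample
p1 = np.array([0.55, 0.30, 0.15])
p2 = np.array([0.40, 0.45, 0.15])  # this column alone would pick class 1
p3 = np.array([0.60, 0.25, 0.15])

# 'ave' merge mode: element-wise mean of the column outputs
ensemble = np.mean([p1, p2, p3], axis=0)
print(ensemble.argmax())  # class 0 wins, even though one column disagreed
```

A column that occasionally misclassifies is outvoted by the others, which is the intuition behind the small but consistent ensemble improvement.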
## Evaluation - Multi column
```
print('Evaluating ensemble')
score = merged_model.evaluate([np.asarray(X_test)] * number_of_models,
Y_test,
verbose=0)
print('Test accuracy:', score[1]*100, '%')
print('Test error:', (1-score[2])*100, '%')
```
The error improved from 0.42% (the best single network of the ensemble) to 0.4%, which is our best result so far.
```
# The predict_classes function outputs the highest probability class
# according to the trained classifier for each input example.
predicted_classes = merged_model.predict_classes([np.asarray(X_test)] * number_of_models)
# Check which items we got right / wrong
correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]
u.plot_images(X_test[correct_indices[:9]], y_test[correct_indices[:9]],
predicted_classes[correct_indices[:9]])
u.plot_images(X_test[incorrect_indices[:9]], y_test[incorrect_indices[:9]],
predicted_classes[incorrect_indices[:9]])
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes)
```
## Results
Training 5 networks took 12 hours, naturally about 5 times longer than training a single one. The error improved by 0.02% (from 0.42% to 0.4%), which is quite good considering this dataset (a human has roughly 0.2% test error on MNIST).
To further increase the precision we would need over 30 columns, trained on inputs normalized to different widths.
<img src="https://github.com/gantian127/pymt_nwis/blob/master/docs/_static/logo.png?raw=true" width='600' align='center'>
## Introduction
[nwis](https://github.com/gantian127/nwis) package provides a set of functions that allows downloading of the National Water Information System datasets for data visualization and analysis. nwis package also includes a Basic Model Interface ([BMI](https://bmi.readthedocs.io/en/latest/)).
[pymt_nwis](https://github.com/gantian127/pymt_nwis) package uses the BMI of nwis to convert it into a reusable, plug-and-play data component for [PyMT](https://pymt.readthedocs.io/en/latest/?badge=latest) modeling framework. This allows the National Water Information System datasets to be easily coupled with other datasets or models that expose a BMI.
**To install pymt_nwis, use the following command:**
```
! pip install pymt_nwis
```
## Coding Example
Import the Nwis class and instantiate it. A configuration file (yaml file) is required to provide the parameter settings for the data download. An example config_file.yaml is provided in the same folder as this Jupyter Notebook file. For more details on the parameters specified in the config.yaml file, please check the link [here](https://nwis.readthedocs.io/en/latest/?badge=latest#parameter-settings).
```
import matplotlib.pyplot as plt
import numpy as np
import cftime
from pymt.models import Nwis
# initiate a data component
data_comp = Nwis()
data_comp.initialize('config_file.yaml')
```
Use variable related methods to check the variable information of the dataset. There are multiple variables and we will check the detailed info of the "discharge" variable.
```
# get variable info
var_names = data_comp.output_var_names
print('All variable names: {}'.format(var_names))
var_name = 'discharge'
var_unit = data_comp.var_units(var_name)
var_location = data_comp.var_location(var_name)
var_type = data_comp.var_type(var_name)
var_grid = data_comp.var_grid(var_name)
print('variable_name: {} \nvar_unit: {} \nvar_location: {} \nvar_type: {} \nvar_grid: {}'.format(
var_name, var_unit, var_location, var_type, var_grid))
```
Use time related methods to check the time information of the dataset. Please note that the time values are stored in a format which follows [CF convention](http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.pdf).
```
# get time info
start_time = data_comp.start_time
end_time = data_comp.end_time
time_step = data_comp.time_step
time_units = data_comp.time_units
time_steps = int((end_time - start_time)/time_step) + 1
print('start_time: {} \nend_time: {} \ntime_step: {} \ntime_units: {} \ntime_steps: {}'.format(
start_time, end_time, time_step, time_units, time_steps))
```
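Under the hood these CF-convention values are just numbers counted from a reference epoch. A minimal stdlib sketch of the decoding that `cftime.num2date` performs, assuming a standard calendar and a simple units string of the form `"days since YYYY-MM-DD"`:

```python
from datetime import datetime, timedelta

def decode_cf_time(value, units):
    # units look like "days since 2017-01-01"
    unit, _, epoch = units.partition(" since ")
    origin = datetime.strptime(epoch.strip(), "%Y-%m-%d")
    seconds_per = {"days": 86400, "hours": 3600, "minutes": 60, "seconds": 1}[unit]
    return origin + timedelta(seconds=value * seconds_per)

print(decode_cf_time(31.0, "days since 2017-01-01"))  # 2017-02-01 00:00:00
```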
Loop through each time step to get the discharge and time values. `stream_array` stores the discharge values, `cftime_array` stores the numerical time values, and `time_array` stores the corresponding Python datetime objects. The `get_value()` method returns the discharge value at each time step, and the `update()` method advances the data component to the next time step.
```
# get variable data
stream_array = np.empty(time_steps)
cftime_array = np.empty(time_steps)
for i in range(0, time_steps):
    stream_array[i] = data_comp.get_value(var_name)
    cftime_array[i] = data_comp.time
    data_comp.update()
time_array = cftime.num2date(cftime_array, time_units, only_use_cftime_datetimes=False, only_use_python_datetimes=True)
```
Now let's make a plot of the discharge data.
```
# plot data
plt.figure(figsize=(9,5))
plt.plot(time_array, stream_array)
plt.xlabel('Year 2017')
plt.ylabel('{} ({})'.format(var_name, var_unit))
plt.title('Discharge Observation at USGS Gage 03339000')
```
Complete the example by finalizing the component.
```
data_comp.finalize()
```
# Supplementary information for damage mapper tool
Development of the damage mapper tool can be broken down into three parts:
1. A function `damage_zones` to calculate the coordinates of the surface zero location and the airblast damage radii
2. A function to plot the blast zones on a map
3. Functions to locate the postcodes (or postcode sectors) within the blast zones `get_postcodes_by_radius` and look up the population in these postcodes `get_population_of_postcodes`.
For the extension task you will need to develop additional functions.
## Airblast damage
The rapid deposition of energy in the atmosphere is analogous to an explosion and so the environmental consequences of the airburst can be estimated using empirical data from atmospheric explosion experiments [(Glasstone and Dolan, 1977)](https://www.dtra.mil/Portals/61/Documents/NTPR/4-Rad_Exp_Rpts/36_The_Effects_of_Nuclear_Weapons.pdf).
The main cause of damage close to the impact site is a strong (pressure) blastwave in the air, known as the **airblast**. Empirical data suggest that the pressure in this wave $p$ (in Pa) (above ambient, also known as overpressure), as a function of explosion energy $E_k$ (in kilotons of TNT equivalent), burst altitude $z_b$ (in m) and horizontal range $r$ (in m), is given by:
\begin{equation*}
p(r) = 3.14 \times 10^{11} \left(\frac{r^2 + z_b^2}{E_k^{2/3}}\right)^{-1.3} + 1.8 \times 10^{7} \left(\frac{r^2 + z_b^2}{E_k^{2/3}}\right)^{-0.565}
\end{equation*}
To solve this equation using gradient methods:
\begin{equation*}
\frac{dp(r)}{dr} = -1.3 \times 3.14 \times 10^{11} \left(\frac{r^2 + z_b^2}{E_k^{2/3}}\right)^{-2.3} \frac{2r}{E_k^{2/3}}- 0.565 \times 1.8 \times 10^{7} \left(\frac{r^2 + z_b^2}{E_k^{2/3}}\right)^{-1.565} \frac{2r}{E_k^{2/3}}
\end{equation*}
For airbursts, we will take the total kinetic energy lost by the asteroid at the burst altitude as the burst energy $E_k$. For low-altitude airbursts or cratering events, we will define $E_k$ as the **larger** of the total kinetic energy lost by the asteroid at the burst altitude or the residual kinetic energy of the asteroid when it hits the ground.
Note that the burst altitude $z_b$ is the vertical distance from the ground to the point of the airburst and the range $r$ is the (great circle) distance along the surface from the "surface zero point," which is the point on the surface that is closest to the point of the airburst (i.e., directly below).
The following threshold pressures can then be used to define different degrees of damage.
| Damage Level | Description | Pressure (kPa) |
|:-------------:|:---------------:|:--------------:|
| 1 | ~10% glass windows shatter | 1.0 |
| 2 | ~90% glass windows shatter | 3.5 |
| 3 | Wood frame buildings collapse | 27 |
| 4 | Multistory brick buildings collapse | 43 |
<p>
<div align="center">Table 1: Pressure thresholds (in kPa) for airblast damage</div>
According to the equations that we will use in this work, an asteroid of approximately 7 m radius is required to generate overpressures on the ground exceeding 1 kPa, and an asteroid of approximately 35 m radius is required to generate overpressures on the ground exceeding 43 kPa.
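A sketch of how the overpressure formula and the thresholds in Table 1 combine into damage radii. Since $p$ decreases monotonically with $r$, simple bisection suffices here in place of the gradient method; the burst altitude and energy below are illustrative assumptions, not values from the project data:

```python
def overpressure(r, z_b, E_k):
    """Airblast overpressure p(r) in Pa; r and z_b in m, E_k in kt of TNT."""
    x = (r**2 + z_b**2) / E_k**(2 / 3)
    return 3.14e11 * x**-1.3 + 1.8e7 * x**-0.565

def damage_radius(p_threshold, z_b, E_k, r_max=1e7):
    """Range at which p(r) falls to p_threshold, found by bisection."""
    lo, hi = 0.0, r_max  # p(lo) is above the threshold, p(hi) below it
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if overpressure(mid, z_b, E_k) > p_threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative airburst: 8 km burst altitude, 7000 kt burst energy (assumed)
radii = [damage_radius(p, z_b=8000.0, E_k=7000.0)
         for p in (1e3, 3.5e3, 27e3, 43e3)]
```

Each damage level's zone is then the disc of that radius around the surface zero point.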
## Notes on distance, bearing and position
To determine the surface zero location (the point on Earth's surface that is closest to the point of airburst) a useful set of spherical geometric formulae relate the bearing, $\beta$ (also known as forward azimuth) to take to get from one point to another along a great circle,
$$\tan \beta = \frac {\cos \varphi_2\sin (\lambda_2-\lambda_1)}{\cos\varphi_1\sin\varphi_2-\sin\varphi_1\cos\varphi_2\cos(\lambda_2-\lambda_1)},$$
as well as the related problem of the final destination given a surface distance and initial bearing:
$$\sin \varphi_2 = \sin \varphi_1\cos \left(\frac{r}{R_p}\right) +\cos \varphi_1\sin\left(\frac{r}{R_p}\right)\cos \beta,$$
$$ \tan(\lambda_2-\lambda_1) = \frac{\sin\beta\sin\left(\frac{r}{R_p}\right)\cos\varphi_1}{\cos\left(\frac{r}{R_p}\right)-\sin\varphi_1\sin\varphi_2}.$$
These formulae can all be derived from the spherical form of the [sine and cosine laws](https://en.wikipedia.org/wiki/Spherical_trigonometry#Cosine_rules_and_sine_rules) using relevant third points.
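A direct transcription of the destination formulas above, using the two-argument arctangent for the longitude difference (spherical Earth; the radius value is an assumption, and all angles are in radians):

```python
import math

Rp = 6.371e6  # mean Earth radius in metres (assumed value)

def destination(lat1, lon1, bearing, r):
    """Latitude/longitude reached after travelling surface distance r (m)
    from (lat1, lon1) along the initial bearing (clockwise from north)."""
    d = r / Rp  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(bearing))
    dlon = math.atan2(math.sin(bearing) * math.sin(d) * math.cos(lat1),
                      math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return lat2, lon1 + dlon
```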
## Postcode locations
For those of you unfamiliar with UK postcodes, this [link](https://www.getthedata.com/postcode) might be helpful. Each postcode comprises two strings of alphanumeric characters that identify the geographic division of the UK. The first one or two letters of the first part of the postcode (before the number) identify the postcode **area** (e.g., WC); the whole of the first part of the postcode identifies the postcode **district**; the first part of the postcode, plus the first number of the second part of the postcode, identifies the postcode **sector**. In this project, we will use the full postcode and the postcode sector.
<img src="images/postcode_map.png" width="640">
The geographic data supplied by running the `download_data.py` script consists of two files. The larger file is `full_postcodes.csv`, which contains a list of current UK postcodes, along with a government-assigned code designating the local administrative area and information on the average (mean) longitude and latitude of the addresses comprising the unit, using the international WGS 84 geodetic datum as supported by modern GPS.
```
import pandas as pd
postcodes = pd.read_csv('./armageddon/resources/full_postcodes.csv')
postcodes.head()
```
The smaller file is `population_by_postcode_sector.csv`, which contains 2011 census data arranged by postcode sector. The important columns for this project are the postcode sector ("geography") and the total population ("All usual residents"), although you are welcome to use other data in your tool if you wish.
```
census = pd.read_csv('./armageddon/resources/population_by_postcode_sector.csv')
census.head()
```
## Notes on longitude, latitude and distance
Given a pair of points by longitude and latitude, converting this into a distance between them can be a surprisingly involved calculation, involving a successively improving model of the shape of the Earth (the geoid). At the lowest reasonable level of approximation, in which the Earth is considered spherical, points at the same longitude satisfy a formula
$$|\varphi_1 -\varphi_2| = \frac{r}{R_p}$$
where the $\varphi$s are the latitudes (in radians), $r$ the surface distance between the points and $R_p$ the radius of the Earth. As long as $r$ and $R_p$ are in the same units the choice doesn't matter, but metres are usually preferred. For points at the same latitude, a similar formula applies,
$$|\lambda_1 -\lambda_2| = \frac{r}{R_p\cos\varphi},$$
where the $\lambda$s are the longitudes and the $\varphi$ is the common latitude. In the general case a number of different formulas exist. [Among the more popular](https://en.wikipedia.org/wiki/Great-circle_distance) are the Haversine formula
$$\frac{r}{R_p} = 2\arcsin\sqrt{\sin^2 \frac{|\varphi_1-\varphi_2|}{2}+\cos\varphi_1\cos\varphi_2\sin^2\frac{|\lambda_1-\lambda_2|}{2}},$$
the spherical Vincenty formula
$$\frac{r}{R_p}=\arctan\frac{\sqrt{(\cos\varphi_2\sin|\lambda_1-\lambda_2|)^2+(\cos\varphi_1\sin\varphi_2-\sin\varphi_1\cos\varphi_2\cos|\lambda_1-\lambda_2|)^2}}{\sin\varphi_1 \sin\varphi_2+\cos\varphi_1\cos\varphi_2\cos|\lambda_1-\lambda_2|},$$
and the law of spherical cosines,
$$\frac{r}{R_p}=\arccos\left(\sin\varphi_1\sin\varphi_2+\cos\varphi_1\cos\varphi_2\cos|\lambda_1-\lambda_2|\right).$$
At short distances linearizations such as Pythagoras can also be used.
Which formula to choose is a balance between the cost of calculation and the accuracy of the result, which also depends on the specifics of the implementation. For example, the two-argument inverse tangent function (also called `arctan2`) should be preferred when needed (and available). In general the cheaper formulas have fewer trigonometric function evaluations and square root calculations.
For this project, you should assume a spherical Earth and use one of the above approximations, but you may be interested to know that at the next level of approximation, the Earth is considered as an oblate spheroid (i.e. a flattened sphere) and the full, iterative version of [Vincenty's formulae](https://en.wikipedia.org/wiki/Vincenty%27s_formulae) can be used. Further improvement includes local effects and acknowledges the implications of land elevation, but that sits well outside the scope of this exercise.
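As a quick sanity check, the haversine formula and the law of spherical cosines can be implemented side by side; away from the near-antipodal and very-short-range regimes where their conditioning differs, they should agree to many digits (angles in radians; both return the central angle $r/R_p$):

```python
import math

def haversine(lat1, lon1, lat2, lon2):
    a = (math.sin(abs(lat1 - lat2) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(abs(lon1 - lon2) / 2) ** 2)
    return 2 * math.asin(math.sqrt(a))

def spherical_cosines(lat1, lon1, lat2, lon2):
    return math.acos(math.sin(lat1) * math.sin(lat2)
                     + math.cos(lat1) * math.cos(lat2) * math.cos(abs(lon1 - lon2)))
```

Multiply either result by $R_p$ to obtain the surface distance in metres.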
## Extended functionality
Additional credit will be given if your damage mapper function demonstrates the following extended capabilities:
* The ability to present the software output on a map. The graphics should be designed to be appropriate for use in emergency response and evacuation planning.
* The ability to perform a simple uncertainty analysis that takes as input a small uncertainty on each input parameter and calculates a risk for each affected UK postcode (sector).
### Plotting on a map
As one possible approach, we have provided a function to plot a circle on a map using the `folium` package. You can use `folium` and expand on this function or you may prefer to use a different package. Please check with us that the mapping package you wish to use is permissible before you start.
```
import folium
import armageddon
armageddon.plot_circle(53., 0, 2000.) #Plots a circle of radius 2000 m at the lat, lon: 53., 0.
```
### Uncertainty analysis
For this second extension exercise, a separate function `impact_risk` should be written that takes an additional set of inputs, describing the standard deviation of each input parameter, as well as the nominal input parameters. The uncertainty in each input parameter can be assumed to follow a Gaussian distribution centred on the nominal values. The standard deviations for the parameters can be taken as:
* Entry latitude 0.025$^\circ$
* Entry longitude: 0.025$^\circ$
* Entry bearing: 0.5$^\circ$
* Meteoroid radius: 1 m
* Meteoroid speed: 1000 m/s
* Meteoroid density: 500 kg/m$^3$
* Meteoroid strength: 50\%
* Meteoroid trajectory angle: 1$^\circ$
For the second extension task, risk will be defined as the probability that the postcode sector (or postcode) is within a specified damage zone times the affected population. This function should therefore take as an input the overpressure used in the risk calculation and a flag to indicate whether risk should be calculated at the postcode or postcode sector level. For scoring, we will use damage level 3 (wooden buildings collapse) and postcode sectors.
Your risk calculator should sample the model parameter space $n$ times, where $n$ is an input parameter, but the sampling method is up to you. The probability that a postcode (or sector) is within a specified damage level is defined as the number of times the postcode (sector) is within the specified damage level divided by $n$.
The risk calculator should output a Pandas dataframe with two columns: postcode (unit or sector) and risk.
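The sampling loop described above can be sketched as follows. `inside_damage_zone` is a placeholder predicate standing in for the full damage-zone test, and the parameter names are illustrative, not the tool's actual signature:

```python
import numpy as np

rng = np.random.default_rng(42)

def impact_risk_sketch(nominal, stdev, population, inside_damage_zone, n=1000):
    """risk = P(sector inside damage zone) * affected population."""
    hits = 0
    for _ in range(n):
        # draw every input parameter from a Gaussian around its nominal value
        sample = {k: rng.normal(nominal[k], stdev[k]) for k in nominal}
        if inside_damage_zone(sample):
            hits += 1
    return (hits / n) * population

# Toy check: a "zone" hit whenever the sampled radius exceeds its nominal value
risk = impact_risk_sketch({'radius': 10.0}, {'radius': 1.0}, population=5000,
                          inside_damage_zone=lambda s: s['radius'] > 10.0)
```

With a symmetric Gaussian the toy zone is hit about half the time, so the risk lands near half the population.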
<a href="https://www.matheharry.de/">
<img src="https://www.matheharry.de/wp-content/uploads/2020/12/cropped-MatheHarry-logos-banner.jpg" width="300" align="center"></a>
---
# Conditions and Control Flow in Python
**Welcome!** In this notebook you will learn about conditional statements in Python. By the end of this unit you will know how to use conditional statements in Python, including comparison operators and branching.
## Comparison operators
Comparison operations compare values or expressions with each other and produce, based on a condition, a Boolean value. When comparing two values you can use the following operators:
* equal: **==**
* not equal: **!=**
* greater than: **>**
* less than: **<**
* greater than or equal: **>=**
* less than or equal: **<=**
### Equal: ==
We assign `a` a value of 5. Use the equality operator, written with two equals signs **==**, to determine whether two values are equal.
The following case compares the variable `a` with 6.
```
# Equality condition
a = 5  # one equals sign: a value is assigned
a == 6  # two equals signs: two expressions/values are compared with each other
```
The result is **False**, because 5 is not equal to 6.
### Greater > or less <
Consider the following comparison: `i > 5`.
* If the value of the left operand, in this case the variable **i**, is greater than the value of the right operand, in this case 5, then the statement is **True**.
* Otherwise the statement is **False**.
* If **i** = 6, the statement would be **True**, because 6 is greater than 5.
```
# Greater-than sign
i = 6
i > 5
```
If `i = 2`, the following statement is false, because 2 is not greater than 5:
```
# Greater-than sign
i = 2
i > 5
```
We now want to display some values of `i` in the graphic, where the background is green for values greater than 5 and red for the others. The green region represents where the condition above is **True**, the red region where the statement is **False**.
If the value is `i = 2`, we get **False**, since 2 falls into the red region. Correspondingly, if the value is `i = 6`, we get **True**, since the condition falls into the green region.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsGreater.gif" width="650" />
### Not equal: !=
The inequality test uses an exclamation mark before the equals sign, and if the two operands are not equal the condition evaluates to **True**.
For example, the following condition evaluates to **True** as long as the value of `i` is not equal to 6:
```
# Inequality sign
i = 2
i != 6
```
If `i` is equal to 6, the inequality expression returns **False**.
```
# Inequality sign
i = 6
i != 6
```
Consider the number line below. When the condition is **True**, the corresponding numbers are marked in green, and when the condition is **False**, the corresponding number is marked in red.
* If we set `i` equal to 2, the result is **True**, since 2 lies in the green region.
* If we set `i` equal to 6, the result is **False**, since the condition falls into the red region.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsIneq.gif" width="650" />
### Comparing strings
We can apply the same methods to strings. For example, if you use an equality operator on two different strings, the result is **False**, since the strings are not equal.
```
# Use the equality sign to compare the strings
"ACDC" == "Michael Jackson"
```
If we use the inequality operator, the output will be **True**, since the strings are not equal.
```
# Use the inequality sign to compare the strings
"ACDC" != "Michael Jackson"
```
You can even compare letters by their order in the alphabet, since each character corresponds to a number for the computer (its *ASCII code*). For example, the code for **A** is 65 and the value for **B** is 66, therefore:
```
# Compare characters
'B' > 'A'
```
When there are several letters, the first letter takes precedence in the ordering:
```
# Compare characters
'BA' > 'AB'
```
**Note**: Uppercase letters have different ASCII codes than lowercase letters, which means that comparisons between letters in Python are case sensitive.
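You can look up these character codes yourself with the built-in `ord()` function:

```python
# ord() returns the numeric character code of a single character
print(ord('A'))  # 65
print(ord('B'))  # 66
print(ord('a'))  # 97 - lowercase letters have larger codes than uppercase ones
```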
## Branching with if-else
Branching allows us to execute different statements for different inputs.
### The if statement
It is helpful to think of an **if statement** as a locked room. If the condition is **True**, we can enter the room and the program will execute some statements defined there, but if the condition is **False**, the program will skip those statements.
For example, take a blue rectangle representing an ACDC concert. If a person is older than 18, they can attend the ACDC concert. If they are 18 or younger, they cannot attend the concert.
Use one of the conditions learned earlier and have an **if statement** check it.
This is done simply with a line that starts with the word `if`, followed by any condition and a colon at the end:
    if condition:
        statements
        ...
    program continues
The statements to be executed start below this condition on a new line with an indentation.
The lines of code after the colon and with an indentation are only executed if the **if statement** evaluates to **True**. The block ends as soon as a line of code is no longer indented.
Since the line `print("Du kannst eintreten")` is indented, it is only executed if the variable `alter` is greater than 18 and the condition is therefore `True`.
The line `print("weiter geht's")`, however, is not affected by the if statement and is executed in every case.
```
# Example of an if statement
alter = 19
if alter > 18:  # condition that can be true or false
    print("Du kannst eintreten")  # indented: the statement to execute if the condition is true
print("weiter geht's")  # statements after the if block run regardless of whether the condition is true or false
```
Try setting the variable `alter` to other values as well.
It is helpful to use the following diagram to illustrate the process.
On the left we see what happens when the condition is **True**: the person goes into the AC-DC concert, which corresponds to the indented code being executed, and afterwards the program continues normally.
On the right we see what happens when the condition is **False**: the person is not granted access and continues without the concert. In this case the indented code segment is not executed, but the rest of the statements certainly are.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsIf.gif" width="650" />
### The else statement
The `else` statement executes a block of code if none of the conditions before it are **True**.
Let's consider our AC-DC concert again. If the user is 17 years old, they cannot go to the AC-DC concert, but they are old enough to attend a Meat Loaf concert.
The syntax of the `else` statement is similar to the syntax of the `if` statement: simply `else:`.
    if condition:
        statements
        ...
    else:
        other statements
        ...
    program continues
Note that there is no condition for `else`, since these statements are always executed when the if condition is not `True`.
Try changing the values of `alter` to see what happens:
```
# Example of an else statement
alter = 18
# alter = 19
if alter > 18:
    print("Du kannst eintreten")
else:
    print("schau dir Meat Loaf an")
print("weiter geht's")
```
The flow is demonstrated below, with all the possibilities shown on either side of the image.
The left side shows the case of someone who is 17 years old: we set the variable `alter` to 17, which corresponds to a person attending the Meat Loaf concert.
The right side shows what happens when the person is over 18. In this case they are 19 years old, and they are allowed into the AC-DC concert.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsElse.gif" width="650" />
### The elif statement
The `elif` statement, short for **else if**, allows us to check additional conditions when the conditions before it are **False**.
If the condition for the `elif` statement is **True**, then its statements are executed.
Imagine a concert example where a person who is exactly 18 years old goes to the Pink Floyd concert instead of attending the AC-DC or Meat Loaf concert.
The 18-year-old enters the area, and since they are not older than 18 they cannot see AC-DC, but they go to Pink Floyd. After seeing Pink Floyd, they move on.
The syntax of the `elif` statement is nothing new; we merely replace the `if` of an `if` check with `elif`.
    if condition:
        statements
        ...
    elif condition:
        statements
        ...
    else:
        other statements
        ...
    program continues
Again, change the values of `alter` and see what happens.
```
# Example of an elif statement
alter = 18
if alter > 18:
    print("Du kannst eintreten")
elif alter == 18:
    print("schau dir Pink Floyd an")
else:
    print("schau dir Meat Loaf an")
print("weiter geht's")
```
The three possibilities are shown in the figure below. The leftmost region shows what happens when you are younger than 18. The middle region shows the flow when you are exactly 18 years old. The rightmost region shows the case of being over 18.
<img src ="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsElif.gif" width="650" />
## Explanation with another example
Look at the following code:
```
# Example of a conditional statement
album_year = 1983
#album_year = 1970
if album_year > 1980:
    print("The release year is greater than 1980")
print('do something ...')
```
Change the value of `album_year` to other values and you will see that the result changes!
Note that the code in the **indented** block above is only executed if the result is **True**.
As before, we can add an `else` block to the `if` block. The code in the `else` block is only executed if the result is **False**.
**Syntax:**
if (condition):
    # do something
else:
    # do something else
If the condition in the `if` statement is **False**, the code in the `else` block is executed. This is illustrated in the diagram:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsLogicMap.png" width="650" />
```
# Example of a conditional statement
#album_year = 1983
album_year = 1970
if album_year > 1980:
    print("The release year is greater than 1980")
else:
    print("less than 1980")
print('do something ...')
```
Change the value of `album_year` to other values and you will see that the result changes accordingly!
## Logical Operators
Sometimes you need to check more than one condition at once. For example, you might want to check whether one condition and another condition are both **True**. Logical operators allow you to combine or modify conditions.
* `and`
* `or`
* `not`
For two variables A and B, these operators are summarized by the following truth tables:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsTable.png" width="650" />
* The `and` statement is only **True** when both conditions are true.
* The `or` statement is **True** if at least one condition is **True**.
* The `not` statement returns the opposite truth value.
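These truth tables can be checked directly in Python, since `and`, `or`, and `not` are built-in operators:
```
# Print one row per combination: A, B, A and B, A or B, not A
for A in (True, False):
    for B in (True, False):
        print(A, B, A and B, A or B, not A)
```
`A and B` is True only in the first row (both True), while `A or B` is False only in the last row (both False).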
Let's look at how to determine whether an album was released after 1979 (1979 not included) and before 1990 (1990 not included). The years 1980 through 1989 satisfy this condition. This is illustrated in the following graphic. The green on lines <strong>a</strong> and <strong>b</strong> represents periods where the statement is **True**. The green on line <strong>c</strong> represents where both conditions are **True**; this corresponds to the overlap of the green areas above.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsEgOne.png" width="650" />
The code block to perform this check is:
```
# Example of a conditional statement
album_year = 1980
if (album_year > 1979) and (album_year < 1990):
    print("The release year is between 1980 and 1989")
print("")
print("do something ...")
```
To determine whether an album was released before 1980 (? - 1979) or after 1989 (1990 - ?), an **or** statement can be used. Periods before 1980 (? - 1979) or after 1989 (1990 - ?) satisfy this condition. This is shown in the following figure: the green in <strong>a</strong> and <strong>b</strong> represents periods where the statement is true, and the green in **c** represents where at least one of the conditions is true.
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/CondsEgTwo.png" width="650" />
The code block to perform this check is:
```
# Example of a conditional statement
album_year = 1990
if (album_year < 1980) or (album_year > 1989):
    print("The album was not released in the 1980s")
else:
    print("The album was released in the 1980s")
```
The `not` statement checks whether a statement is false:
```
# Example of a conditional statement
album_year = 1983
if not (album_year == 1984):
    print("The album did not come out in 1984")
```
<hr>
# Exercises on Conditions and Branching
## Exercise 1
Write an if statement to determine whether an album has a rating greater than 8, and test it with the rating of the album **"Back in Black"**, which has a rating of 8.5. If the statement is true, print "This album is amazing!".
```
# Type your code below and press Shift+Enter to run it.
```
Double-click __here__ for the solution. <!-- Answer:
bewertung = 8.5
if bewertung > 8:
    print("This album is amazing!")
-->
<hr>
## Exercise 2
Write an if-else statement that does the following:
* If the rating is greater than eight, print "This album is amazing!".
* If the rating is less than or equal to 8, print "This album is OK.".
```
# Type your code below and press Shift+Enter to run it.
```
Double-click __here__ for the solution.
<!-- Answer:
rating = 8.5
if rating > 8:
    print("This album is amazing!")
else:
    print("This album is OK.")
-->
<hr>
## Exercise 3
Write an if statement to determine whether an album came out before 1980 or in one of the years 1991 or 1993.
If the condition is true, print the year the album came out.
```
# Type your code below and press Shift+Enter to run it.
```
Double-click __here__ for the solution.
<!-- Answer:
album_year = 1979
if album_year < 1980 or album_year == 1991 or album_year == 1993:
    print(album_year)
-->
---
>> Back to [03-Turtle](03-Turtle.ipynb) --- Continue to [05-Schleifen](05-Schleifen.ipynb)
# PyCaret Fugue Integration
[Fugue](https://github.com/fugue-project/fugue) is a low-code unified interface for different computing frameworks such as Spark, Dask and Pandas. PyCaret is using Fugue to support distributed computing scenarios.
## Hello World
### Classification
Let's start with the most standard example. The code is exactly the same as the local version; there is no magic.
```
from pycaret.datasets import get_data
from pycaret.classification import *
setup(data=get_data("juice"), target = 'Purchase', n_jobs=1)
test_models = models().index.tolist()[:5]
```
`compare_models` is also exactly the same if you don't want to use a distributed system:
```
compare_models(include=test_models, n_select=2)
```
Now let's make it distributed, as a toy case, on Dask. The only thing that changes is the additional `parallel` parameter:
```
from pycaret.parallel import FugueBackend
compare_models(include=test_models, n_select=2, parallel=FugueBackend("dask"))
```
In order to use Spark as the execution engine, you must have access to a Spark cluster, and you must have a `SparkSession`. Let's initialize a local Spark session:
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
Now just construct the `FugueBackend` with this session object, and the comparison runs on Spark. Keep in mind this is a toy case: in a real situation, you need a SparkSession pointing to a real Spark cluster to enjoy the power of Spark.
```
compare_models(include=test_models, n_select=2, parallel=FugueBackend(spark))
```
Finally, you can call `pull` to get the metrics table:
```
pull()
```
### Regression
It follows the same pattern as classification.
```
from pycaret.datasets import get_data
from pycaret.regression import *
setup(data=get_data("insurance"), target = 'charges', n_jobs=1)
test_models = models().index.tolist()[:5]
```
`compare_models` is also exactly the same if you don't want to use a distributed system:
```
compare_models(include=test_models, n_select=2)
```
Now let's make it distributed, as a toy case, on Dask. The only thing that changes is the additional `parallel` parameter:
```
from pycaret.parallel import FugueBackend
compare_models(include=test_models, n_select=2, parallel=FugueBackend("dask"))
```
In order to use Spark as the execution engine, you must have access to a Spark cluster, and you must have a `SparkSession`. Let's initialize a local Spark session:
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
Now just construct the `FugueBackend` with this session object, and the comparison runs on Spark. Keep in mind this is a toy case: in a real situation, you need a SparkSession pointing to a real Spark cluster to enjoy the power of Spark.
```
compare_models(include=test_models, n_select=2, parallel=FugueBackend(spark))
```
Finally, you can call `pull` to get the metrics table:
```
pull()
```
As you can see, the results from the distributed versions can differ from your local versions. In the next section, we will show how to make them identical.
## A more practical case
The above examples are pure toys. To make things work well in a distributed system, you must be careful about a few things:
### Use a lambda instead of a dataframe in setup
If you provide a dataframe directly to `setup`, the dataset will need to be sent to all worker nodes. If the dataframe is 1 GB and you have 100 workers, your driver machine may need to send out up to 100 GB of data (depending on the specific framework's implementation), and this data transfer becomes a bottleneck itself. Instead, if you provide a lambda function, nothing changes in the local compute scenario, but the driver only sends the function reference to the workers, and each worker is responsible for loading the data itself, so there is no heavy traffic on the driver side.
### Be deterministic
You should always set `session_id` to make the distributed computation deterministic; otherwise, for exactly the same logic, you could get a drastically different selection on each run.
### Set n_jobs
It is important to be explicit about `n_jobs` when you run something in a distributed fashion, so that it does not overuse local or remote resources. This also avoids resource contention and makes the computation faster.
```
from pycaret.classification import *
setup(data=lambda: get_data("juice", verbose=False, profile=False), target = 'Purchase', session_id=0, n_jobs=1);
```
### Set the appropriate batch_size
The `batch_size` parameter trades off load balance against overhead. For each batch, `setup` is called only once. So:
| Choice |Load Balance|Overhead|Best Scenario|
|---|---|---|---|
|Smaller batch size|Better|Worse|`training time >> data loading time` or `models ~= workers`|
|Larger batch size|Worse|Better|`training time << data loading time` or `models >> workers`|
The default value is set to `1`, meaning we want the best load balance.
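To see how `batch_size` drives the overhead, note that the number of batches (and hence the number of `setup` calls) is the model count divided by the batch size, rounded up. The counts below are hypothetical:

```
import math

n_models = 10   # hypothetical number of models being compared
batch_size = 3  # each batch triggers exactly one setup call

n_batches = math.ceil(n_models / batch_size)
print(n_batches)  # 4
```

With `batch_size=1` (the default) this would be 10 batches, maximizing load balance at the cost of 10 `setup` calls.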
### Display progress
During development, you can enable a visual display with `display_remote=True`, but you must also enable a [Fugue Callback](https://fugue-tutorials.readthedocs.io/tutorials/advanced/rpc.html) so that the driver can monitor worker progress. It is recommended to turn the display off in production.
```
fconf = {
    "fugue.rpc.server": "fugue.rpc.flask.FlaskRPCServer",  # keep this value
    "fugue.rpc.flask_server.host": "0.0.0.0",  # the driver IP address workers can access
    "fugue.rpc.flask_server.port": "3333",  # the open port on the driver
    "fugue.rpc.flask_server.timeout": "2 sec",  # the timeout for workers to talk to the driver
}
be = FugueBackend("dask", fconf, display_remote=True, batch_size=3, top_only=False)
compare_models(n_select=2, parallel=be)
```
## Notes
### Spark settings
It is highly recommended to have only 1 worker on each Spark executor so that the worker can fully utilize all CPUs (set `spark.task.cpus`). When you do this, you should also explicitly set `n_jobs` in `setup` to the number of CPUs of each executor.
```python
executor_cores = 4
spark = SparkSession.builder.config("spark.task.cpus", executor_cores).config("spark.executor.cores", executor_cores).getOrCreate()
setup(data=get_data("juice", verbose=False, profile=False), target = 'Purchase', session_id=0, n_jobs=executor_cores)
compare_models(n_select=2, parallel=FugueBackend(spark))
```
### Databricks
On Databricks, `spark` is the magic variable representing a SparkSession, but there is no difference in usage. You do exactly the same thing as before:
```python
compare_models(parallel=FugueBackend(spark))
```
On Databricks, however, the visualization is difficult, so it may be a good idea to do two things:
* Set `verbose` to False in `setup`
* Set `display_remote` to False in `FugueBackend`
### Dask
Dask has pseudo-distributed modes such as the default (multi-thread) and multi-process modes. The default mode works fine (although it actually runs sequentially), and the multi-process mode does not currently work for PyCaret because it interferes with PyCaret's global variables. On the other hand, any Spark execution mode works fine.
### Local Parallelization
For practical use, where you try non-trivial data and models, local parallelization (the easiest way is to use local Dask as the backend, as shown above) normally has no performance advantage, because it is easy to overload the CPUs during training and increase resource contention. The value of local parallelization is to verify the code and give you confidence that the distributed environment will produce the expected result in a much shorter time.
### How to develop
Distributed systems are powerful but you must follow some good practices to use them:
1. **From small to large:** initially, you must start with a small set of data, for example in `compare_model` limit the models you want to try to a small number of cheap models, and when you verify they work, you can change to a larger model collection.
2. **From local to distributed:** follow this sequence: verify on small data locally, then on small data in a distributed setting, and then on large data in a distributed setting. The current design makes the transition seamless; you can do this sequentially: `parallel=None` -> `parallel=FugueBackend()` -> `parallel=FugueBackend(spark)`. In the second step, you can use a local SparkSession or local Dask.
We use embeddings to represent text in numerical form: either as a one-hot encoding, called a sparse vector, or as a fixed dense representation, called a dense vector.
Every word gets its meaning from the words it is surrounded by, so when we train our embeddings we want words with similar meanings, or words used in similar contexts, to be close together.
For example:
1. Words like aeroplane, chopper, helicopter, and drone should be very close to each other because they share the same feature: they are flying objects.
2. Words like man and woman should be exact opposites of each other.
3. In the sentences "Coders are boring people." and "Programmers are boring.", the words `coders` and `programmers` are used in similar contexts, so they should be close to each other.
Word embeddings are nothing but vectors in a vector space, and using some vector calculations we can easily:
1. Find synonyms or similar words
2. Find analogies
3. Use them as a spell checker (if trained on a large corpus)
4. Do pretty much anything you can do with vectors
```
import torchtext
import numpy as np
import torch
glove = torchtext.vocab.GloVe(name = '6B', dim = 100)
print(f'There are {len(glove.itos)} words in the vocabulary')
glove.itos[:10]
glove.stoi["cat"]
def get_embedding(word):
    return glove.vectors[glove.stoi[word]]

get_embedding("cat")
```
# Similar Context
To find words similar to an input word, we take the vector representations of all words, compute the Euclidean distance from the input word to every other word, and choose the n closest words by sorting the distances in ascending order.
```
def get_closest_word(word, n=10):
    input_vector = get_embedding(word).numpy() if isinstance(word, str) else word.numpy()
    distance = np.linalg.norm(input_vector - glove.vectors.numpy(), axis=1)
    sort_dis = np.argsort(distance)[:n]
    return list(zip(np.array(glove.itos)[sort_dis], distance[sort_dis]))

get_closest_word("sad", n=10)

def get_similarity_angle(word1, word2):
    word1 = get_embedding(word1).view(1, -1)
    word2 = get_embedding(word2).view(1, -1)
    simi = torch.nn.CosineSimilarity(dim=1)(word1, word2).numpy()
    return simi, np.rad2deg(np.arccos(simi))

get_similarity_angle("sad", "awful")
```
# Analogies
```
def analogy(word1, word2, word3, n=5):
    # get vectors for each word
    word1_vector = get_embedding(word1)
    word2_vector = get_embedding(word2)
    word3_vector = get_embedding(word3)
    # calculate analogy vector
    analogy_vector = word2_vector - word1_vector + word3_vector
    # find closest words to analogy vector
    candidate_words = get_closest_word(analogy_vector, n=n+3)
    # filter out words already in analogy
    candidate_words = [(word, dist) for (word, dist) in candidate_words
                       if word not in [word1, word2, word3]][:n]
    print(f'{word1} is to {word2} as {word3} is to...')
    return candidate_words
analogy('man', 'king', 'woman')
```
This is the canonical example showing off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?
If we think about it, the vector 'king' minus 'man' gives us a "royalty vector". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this "royalty vector" to 'woman', we should travel to her royal equivalent, which is a queen!
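The arithmetic can be illustrated with tiny made-up 2-dimensional vectors (the values below are invented so the analogy works by construction; real GloVe vectors here are 100-dimensional):

```
import numpy as np

# hypothetical 2-d "embeddings", chosen purely for illustration
vec = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.0, 1.0]),
    "king":  np.array([1.0, 2.0]),
    "queen": np.array([0.0, 3.0]),
}

royalty = vec["king"] - vec["man"]  # the "royalty vector": [0, 2]
target = vec["woman"] + royalty     # [0, 3]

# nearest word to the target vector by Euclidean distance
closest = min(vec, key=lambda w: np.linalg.norm(vec[w] - target))
print(closest)  # queen
```

With real embeddings, the nearest neighbor is searched over the whole vocabulary, which is what `get_closest_word` does above.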
```
analogy('india', 'delhi', 'australia')
get_closest_word("reliable")
```
# Case Studies
1. https://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411
2. Multilingual and cross-lingual analysis: If you work on works in translation, or on the influence of writers who write in one language on those who write in another language, word vectors provide valuable ways to study these kinds of cross-lingual relationships algorithmically.
[Case Study: Using word vectors to study endangered languages](https://raw.githubusercontent.com/YaleDHLab/lab-workshops/master/word-vectors/papers/coeckelbergs.pdf)
3. Studying Language Change over Time: If you want to study the way the meaning of a word has changed over time, word vectors provide an exceptional method for this kind of study.
[Case Study: Using word vectors to analyze the changing meaning of the word "gay" in the twentieth century.](https://nlp.stanford.edu/projects/histwords/)
4. Analyzing Historical Concept Formation: If you want to analyze the ways writers in a given historical period understood particular concepts like "honor" and "chivalry", then word vectors can provide excellent opportunities to uncover these hidden associations.
[Case Study: Using word vectors to study the ways eighteenth-century authors organized moral abstractions](https://raw.githubusercontent.com/YaleDHLab/lab-workshops/master/word-vectors/papers/heuser.pdf)
5. Uncovering Text Reuse: If you want to study text reuse or literary imitation (either within one language or across multiple languages), word vectors can provide excellent tools for identifying similar passages of text.
[Case Study: Using word vectors to uncover cross-lingual text reuse in eighteenth-century writing](https://douglasduhaime.com/posts/crosslingual-plagiarism-detection.html)
# Document Embedding with Amazon SageMaker Object2Vec
1. [Introduction](#Introduction)
2. [Background](#Background)
1. [Embedding documents using Object2Vec](#Embedding-documents-using-Object2Vec)
3. [Download and preprocess Wikipedia data](#Download-and-preprocess-Wikipedia-data)
1. [Install and load dependencies](#Install-and-load-dependencies)
2. [Build vocabulary and tokenize datasets](#Build-vocabulary-and-tokenize-datasets)
3. [Upload preprocessed data to S3](#Upload-preprocessed-data-to-S3)
4. [Define SageMaker session, Object2Vec image, S3 input and output paths](#Define-SageMaker-session,-Object2Vec-image,-S3-input-and-output-paths)
5. [Train and deploy doc2vec](#Train-and-deploy-doc2vec)
1. [Learning performance boost with new features](#Learning-performance-boost-with-new-features)
2. [Training speedup with sparse gradient update](#Training-speedup-with-sparse-gradient-update)
6. [Apply learned embeddings to document retrieval task](#Apply-learned-embeddings-to-document-retrieval-task)
1. [Comparison with the StarSpace algorithm](#Comparison-with-the-StarSpace-algorithm)
## Introduction
In this notebook, we introduce four new features to Object2Vec, a general-purpose neural embedding algorithm: negative sampling, sparse gradient update, weight-sharing, and comparator operator customization. The new features together broaden the applicability of Object2Vec, improve its training speed and accuracy, and provide users with greater flexibility. See [Introduction to the Amazon SageMaker Object2Vec](https://aws.amazon.com/blogs/machine-learning/introduction-to-amazon-sagemaker-object2vec/) if you aren’t already familiar with Object2Vec.
We demonstrate how these new features extend the applicability of Object2Vec to a new document embedding use case: a customer has a large collection of documents, and instead of storing them in raw format or as sparse bag-of-words vectors, she would like to embed all documents in a common low-dimensional space, so that the semantic distances between them are preserved and the various downstream tasks can be trained efficiently.
## Background
Object2Vec is a highly customizable multi-purpose algorithm that can learn embeddings of pairs of objects. The embeddings are learned such that it preserves their pairwise similarities in the original space.
- Similarity is user-defined: users need to provide the algorithm with pairs of objects that they define as similar (1) or dissimilar (0); alternatively, the users can define similarity in a continuous sense (provide a real-valued similarity score).
- The learned embeddings can be used to efficiently compute nearest neighbors of objects, as well as to visualize natural clusters of related objects in the embedding space. In addition, the embeddings can also be used as features of the corresponding objects in downstream supervised tasks such as classification or regression.
### Embedding documents using Object2Vec
We demonstrate how, with the new features, Object2Vec can be used to embed a large collection of documents into vectors in the same latent space.
Similar to the widely used Word2Vec algorithm for word embedding, a natural approach to document embedding is to preprocess documents as (sentence, context) pairs, where the sentence and its matching context come from the same document. The matching context is the entire document with the given sentence removed. The idea is to embed both sentence and context into a low dimensional space such that their mutual similarity is maximized, since they belong to the same document and therefore should be semantically related. The learned encoder for the context can then be used to encode new documents into the same embedding space. In order to train the encoders for sentences and documents, we also need negative (sentence, context) pairs so that the model can learn to discriminate between semantically similar and dissimilar pairs. It is easy to generate such negatives by pairing sentences with documents that they do not belong to. Since there are many more negative pairs than positives in naturally occurring data, we typically resort to random sampling techniques to achieve a balance between positive and negative pairs in the training data. The figure below shows pictorially how the positive pairs and negative pairs are generated from unlabeled data for the purpose of learning embeddings for documents (and sentences).
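The pair-generation scheme described above can be sketched in a few lines. The documents and the sampling scheme below are invented for illustration; the actual preprocessing used in this notebook appears in the next section:

```
import random

random.seed(0)

# two tiny "documents", each a list of sentences (token id lists in practice)
docs = [
    ["the cat sat on the mat", "cats drink milk"],
    ["stocks fell sharply", "markets were volatile today"],
]

pairs = []
for i, doc in enumerate(docs):
    for j, sentence in enumerate(doc):
        context = [s for k, s in enumerate(doc) if k != j]  # document minus the sentence
        pairs.append({"in0": sentence, "in1": context, "label": 1})  # positive pair
        other = random.choice([k for k in range(len(docs)) if k != i])
        pairs.append({"in0": sentence, "in1": docs[other], "label": 0})  # sampled negative

print(sum(p["label"] for p in pairs), len(pairs))  # 4 positives out of 8 pairs
```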
We show how Object2Vec with the new *negative sampling feature* can be applied to the document embedding use-case. In addition, we show how the other new features, namely, *weight-sharing*, *customization of comparator operator*, and *sparse gradient update*, together enhance the algorithm's performance and user-experience in and beyond this use-case. Sections [Learning performance boost with new features](#Learning-performance-boost-with-new-features) and [Training speedup with sparse gradient update](#Training-speedup-with-sparse-gradient-update) in this notebook provide a detailed introduction to the new features.
## Download and preprocess Wikipedia data
Please be aware of the following requirements about the acknowledgment, copyright and availability, cited from the [data source description page](https://github.com/facebookresearch/StarSpace/blob/master/LICENSE.md).
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
```
%%bash
DATANAME="wikipedia"
DATADIR="/tmp/wiki"
mkdir -p "${DATADIR}"
if [ ! -f "${DATADIR}/${DATANAME}_train250k.txt" ]
then
echo "Downloading wikipedia data"
wget --quiet -c "https://s3-ap-northeast-1.amazonaws.com/dev.tech-sketch.jp/chakki/public/ja.wikipedia_250k.zip" -O "${DATADIR}/${DATANAME}_train.zip"
unzip "${DATADIR}/${DATANAME}_train.zip" -d "${DATADIR}"
fi
datadir = '/tmp/wiki'
!ls /tmp/wiki
```
### Install and load dependencies
```
!pip install keras tensorflow
import json
import os
import random
from itertools import chain
from keras.preprocessing.text import Tokenizer
from sklearn.preprocessing import normalize
## sagemaker api
import sagemaker, boto3
from sagemaker.session import s3_input
from sagemaker.predictor import json_serializer, json_deserializer
```
### Build vocabulary and tokenize datasets
```
def load_articles(filepath):
    with open(filepath) as f:
        for line in f:
            yield map(str.split, line.strip().split('\t'))

def split_sents(article):
    return [sent.split(' ') for sent in article.split('\t')]

def build_vocab(sents):
    print('Build start...')
    tok = Tokenizer(oov_token='<UNK>', filters='')
    tok.fit_on_texts(sents)
    print('Build end...')
    return tok

def generate_positive_pairs_from_single_article(sents, tokenizer):
    sents = list(sents)
    idx = random.randrange(0, len(sents))
    center = sents.pop(idx)
    wrapper_tokens = tokenizer.texts_to_sequences(sents)
    sent_tokens = tokenizer.texts_to_sequences([center])
    wrapper_tokens = list(chain(*wrapper_tokens))
    sent_tokens = list(chain(*sent_tokens))
    yield {'in0': sent_tokens, 'in1': wrapper_tokens, 'label': 1}

def generate_positive_pairs_from_single_file(sents_per_article, tokenizer):
    iter_list = [generate_positive_pairs_from_single_article(sents, tokenizer)
                 for sents in sents_per_article]
    return chain.from_iterable(iter_list)
filepath = os.path.join(datadir, 'ja.wikipedia_250k.txt')
sents_per_article = load_articles(filepath)
sents = chain(*sents_per_article)
tokenizer = build_vocab(sents)
# save
datadir = '.'
train_prefix = 'train250k'
fname = "wikipedia_{}.txt".format(train_prefix)
outfname = os.path.join(datadir, '{}_tokenized.jsonl'.format(train_prefix))
with open(outfname, 'w') as f:
    sents_per_article = load_articles(filepath)
    for sample in generate_positive_pairs_from_single_file(sents_per_article, tokenizer):
        f.write('{}\n'.format(json.dumps(sample)))
# Shuffle training data
!shuf {outfname} > {train_prefix}_tokenized_shuf.jsonl
```
### Upload preprocessed data to S3
```
TRAIN_DATA="train250k_tokenized_shuf.jsonl"
# NOTE: define your s3 bucket and key here
S3_BUCKET = 'YOUR_BUCKET'
S3_KEY = 'object2vec-doc2vec'
%%bash -s "$TRAIN_DATA" "$S3_BUCKET" "$S3_KEY"
aws s3 cp "$1" s3://$2/$3/input/train/
```
## Define SageMaker session, Object2Vec image, S3 input and output paths
```
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
region = boto3.Session().region_name
print("Your notebook is running on region '{}'".format(region))
sess = sagemaker.Session()
role = get_execution_role()
print("Your IAM role: '{}'".format(role))
container = get_image_uri(region, 'object2vec')
print("The image uri used is '{}'".format(container))
print("Using s3 bucket: {} and key prefix: {}".format(S3_BUCKET, S3_KEY))
## define input channels
s3_input_path = os.path.join('s3://', S3_BUCKET, S3_KEY, 'input')
s3_train = s3_input(os.path.join(s3_input_path, 'train', TRAIN_DATA),
distribution='ShardedByS3Key', content_type='application/jsonlines')
## define output path
output_path = os.path.join('s3://', S3_BUCKET, S3_KEY, 'models')
```
## Train and deploy doc2vec
We combine four new features into our training of Object2Vec:
- Negative sampling: with the new `negative_sampling_rate` hyperparameter, users of Object2Vec only need to provide positively labeled data pairs; the algorithm automatically samples negative data internally during training.
- Weight-sharing of the embedding layer: the new `tied_token_embedding_weight` hyperparameter gives users the flexibility to share the embedding weights of both encoders, which improves the performance of the algorithm in this use case.
- Comparator operator customization: the new `comparator_list` hyperparameter gives users the flexibility to mix and match different operators so that they can tune the algorithm towards optimal performance for their applications.
- Sparse gradient update: setting the `token_embedding_storage_type` hyperparameter to `row_sparse` (as in the code below) speeds up training.
```
# Define training hyperparameters
hyperparameters = {
"_kvstore": "device",
"_num_gpus": 'auto',
"_num_kv_servers": "auto",
"bucket_width": 0,
"dropout": 0.4,
"early_stopping_patience": 2,
"early_stopping_tolerance": 0.01,
"enc0_layers": "auto",
"enc0_max_seq_len": 50,
"enc0_network": "pooled_embedding",
"enc0_pretrained_embedding_file": "",
"enc0_token_embedding_dim": 300,
"enc0_vocab_size": len(tokenizer.word_index) + 1,
"enc1_network": "enc0",
"enc_dim": 300,
"epochs": 20,
"learning_rate": 0.01,
"mini_batch_size": 512,
"mlp_activation": "relu",
"mlp_dim": 512,
"mlp_layers": 2,
"num_classes": 2,
"optimizer": "adam",
"output_layer": "softmax",
"weight_decay": 0
}
hyperparameters['negative_sampling_rate'] = 3
hyperparameters['tied_token_embedding_weight'] = "true"
hyperparameters['comparator_list'] = "hadamard"
hyperparameters['token_embedding_storage_type'] = 'row_sparse'
# get estimator
doc2vec = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
output_path=output_path,
sagemaker_session=sess)
# set hyperparameters
doc2vec.set_hyperparameters(**hyperparameters)
# fit estimator with data
doc2vec.fit({'train': s3_train})
#doc2vec.fit({'train': s3_train, 'validation':s3_valid, 'test':s3_test})
# deploy model
doc2vec_model = doc2vec.create_model(
serializer=json_serializer,
deserializer=json_deserializer,
content_type='application/json')
predictor = doc2vec_model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
sent = '今日 の 昼食 は うどん だっ た'
sent_tokens = tokenizer.texts_to_sequences([sent])
payload = {'instances': [{'in0': sent_tokens[0]}]}
result = predictor.predict(payload)
print(result)
predictor.delete_endpoint()
```
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0220EN-SkillsNetwork/labs/project/Images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Descriptive Statistics
Estimated time needed: **30** minutes
In this lab, you'll go over some hands-on exercises using Python.
## Objectives
* Import Libraries
* Read in Data
* Lab exercises and questions
***
## Import Libraries
All Libraries required for this lab are listed below. The libraries pre-installed on Skills Network Labs are commented. If you run this notebook in a different environment, e.g. your desktop, you may need to uncomment and install certain libraries.
```
#! mamba install pandas==1.3.3 -y
#! mamba install numpy=1.21.2 -y
#! mamba install matplotlib=3.4.3 -y
```
Import the libraries we need for the lab
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as pyplot
```
Read in the CSV file from the URL using `pandas.read_csv`
```
ratings_url = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ST0151EN-SkillsNetwork/labs/teachingratings.csv'
ratings_df=pd.read_csv(ratings_url)
```
## Data Description
| Variable | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| minority | Does the instructor belong to a minority (non-Caucasian) group? |
| age | The professor's age |
| gender | Indicating whether the instructor was male or female. |
| credits | Is the course a single-credit elective? |
| beauty | Rating of the instructor's physical appearance by a panel of six students averaged across the six panelists and standardized to have a mean of zero. |
| eval | Course overall teaching evaluation score, on a scale of 1 (very unsatisfactory) to 5 (excellent). |
| division | Is the course an upper or lower division course? |
| native | Is the instructor a native English speaker? |
| tenure | Is the instructor on a tenure track? |
| students | Number of students that participated in the evaluation. |
| allstudents | Number of students enrolled in the course. |
| prof | Indicating instructor identifier. |
## Display information about the dataset
1. Structure of the dataframe
2. Describe the dataset
3. Number of rows and columns
print out the first five rows of the data
```
ratings_df.head()
```
get information about each variable
```
ratings_df.info()
```
get the number of rows and columns - prints as (number of rows, number of columns)
```
ratings_df.shape
```
## Lab Exercises
### Can you identify whether the teachers' Rating data is a time series or cross-sectional?
Print out the first ten rows of the data
1. Does it have a date or time variable? - No - it is not a time series dataset
2. Does it observe more than one teacher being rated? - Yes - it is cross-sectional dataset
> The dataset is cross-sectional
```
ratings_df.head(10)
```
### Find the mean, median, minimum, and maximum values for students
Find Mean value for students
```
ratings_df['students'].mean()
```
Find the Median value for students
```
ratings_df['students'].median()
```
Find the Minimum value for students
```
ratings_df['students'].min()
```
Find the Maximum value for students
```
ratings_df['students'].max()
```
### Produce a descriptive statistics table
```
ratings_df.describe()
```
### Create a histogram of the beauty variable and briefly comment on the distribution of data
using the <code>matplotlib</code> library, create a histogram
```
pyplot.hist(ratings_df['beauty'])
```
Here are a few conclusions from the histogram:
* most of the beauty scores fall between -0.5 and 0
* the distribution is skewed to the right
* consistent with the variable description (the beauty scores were standardized), the mean is close to 0
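The direction of skew can also be checked numerically rather than by eye, using `pandas`' `skew()` method. A small sketch with synthetic values (not the real `beauty` column):

```
import pandas as pd

# Synthetic, right-skewed sample standing in for the beauty scores
sample = pd.Series([-1.0, -0.5, -0.5, 0.0, 0.0, 0.5, 2.0, 3.0])
print(sample.skew())  # positive => skewed to the right
```

A positive value confirms right (positive) skew; `ratings_df['beauty'].skew()` would give the same check on the real data.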
### Does average beauty score differ by gender? Produce the means and standard deviations for both male and female instructors.
Using a group by on gender to view the mean beauty scores, we can say that beauty scores differ by gender, as the mean beauty score for women is higher than for men
```
ratings_df.groupby('gender').agg({'beauty':['mean', 'std', 'var']}).reset_index()
```
### Calculate the percentage of males and females that are tenured professors. Would you say that tenure status differs by gender?
First, filter to tenured professors and group by gender to get the counts
```
tenure_count = ratings_df[ratings_df.tenure == 'yes'].groupby('gender').agg({'tenure': 'count'}).reset_index()
tenure_count
```
Then find each gender's share of all tenured professors
```
tenure_count['percentage'] = 100 * tenure_count.tenure/tenure_count.tenure.sum()
tenure_count
```
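The percentage above is each gender's share of all tenured professors. A related quantity, the tenure *rate within each gender*, can be sketched with `pd.crosstab` (a tiny hypothetical DataFrame stands in for `ratings_df` here):

```
import pandas as pd

# Hypothetical stand-in for ratings_df, for illustration only
demo = pd.DataFrame({
    'gender': ['male', 'male', 'male', 'female', 'female'],
    'tenure': ['yes', 'yes', 'no', 'yes', 'no'],
})

# Row-normalized crosstab: the fraction of each gender that is tenured
tenure_rate = pd.crosstab(demo['gender'], demo['tenure'], normalize='index') * 100
print(tenure_rate)
```

Running the same `pd.crosstab(ratings_df['gender'], ratings_df['tenure'], normalize='index')` on the real data answers the "does tenure status differ by gender" question directly.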
## Practice Questions
### Question 1: Calculate the percentage of visible minorities that are tenured professors. Would you say that tenure status differed if the teacher was a visible minority?
```
## insert code here
minorities_count = ratings_df.groupby('minority').agg({'tenure': 'count'}).reset_index()
minorities_count['percentage'] = 100 * minorities_count.tenure/minorities_count.tenure.sum()
minorities_count
```
Double-click **here** for the solution.
<!-- The answer is below:
### we can use a groupby function for this
## first groupby to get the total sum
tenure_count = ratings_df.groupby('minority').agg({'tenure': 'count'}).reset_index()
# Find the percentage
tenure_count['percentage'] = 100 * tenure_count.tenure/tenure_count.tenure.sum()
##print to see
tenure_count
-->
### Question 2: Does average age differ by tenure? Produce the means and standard deviations for both tenured and untenured professors.
```
## insert code here
ratings_df.groupby('tenure').agg({'age':['mean','std']}).reset_index()
```
Double-click **here** for the solution.
<!-- The answer is below:
## group by tenureship and find the mean and standard deviation for each group
ratings_df.groupby('tenure').agg({'age':['mean', 'std']}).reset_index()
-->
### Question 3: Create a histogram for the age variable.
```
## insert code here
pyplot.hist(ratings_df['age'])
```
Double-click **here** for the solution.
<!-- The answer is below:
pyplot.hist(ratings_df['age'])
-->
### Question 4: Create a bar plot for the gender variable.
```
## insert code here
pyplot.bar(ratings_df.gender.unique(),ratings_df.gender.value_counts(),color=['pink','blue'])
pyplot.xlabel('Gender')
pyplot.ylabel('Count')
pyplot.title('Gender distribution bar plot')
```
Double-click **here** for the solution.
<!-- The answer is below:
pyplot.bar(ratings_df.gender.unique(),ratings_df.gender.value_counts(),color=['pink','blue'])
pyplot.xlabel('Gender')
pyplot.ylabel('Count')
pyplot.title('Gender distribution bar plot')
-->
> Note: Bar plots can be rendered vertically or horizontally. Try replacing **pyplot.bar** with **pyplot.barh** in the cell above and see the difference.
### Question 5: What is the Median evaluation score for tenured Professors?
```
## insert code here
ratings_df[ratings_df['tenure'] == 'yes']['eval'].median()
```
Double-click **here** for the solution.
<!-- The answer is below:
## you can index just tenured professors and find their median evaluation scores
ratings_df[ratings_df['tenure'] == 'yes']['eval'].median()
-->
## Authors
[Aije Egwaikhide](https://www.linkedin.com/in/aije-egwaikhide/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkST0151ENSkillsNetwork20531532-2022-01-01) is a Data Scientist at IBM who holds a degree in Economics and Statistics from the University of Manitoba and a Post-grad in Business Analytics from St. Lawrence College, Kingston. She is a current employee of IBM where she started as a Junior Data Scientist at the Global Business Services (GBS) in 2018. Her main role was making meaning out of data for their Oil and Gas clients through basic statistics and advanced Machine Learning algorithms. The highlight of her time in GBS was creating a customized end-to-end Machine learning and Statistics solution on optimizing operations in the Oil and Gas wells. She moved to the Cognitive Systems Group as a Senior Data Scientist where she will be providing the team with actionable insights using Data Science techniques and further improve processes through building machine learning solutions. She recently joined the IBM Developer Skills Network group where she brings her real-world experience to the courses she creates.
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | --------------- | -------------------------------------- |
| 2020-08-14 | 0.1 | Aije Egwaikhide | Created the initial version of the lab |
| 2022-05-10 | 0.2 | Lakshmi Holla | Added exercise for Bar plot |
Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkST0151ENSkillsNetwork20531532-2022-01-01).
## Working with CSV files and CSV Module
[1. What is a CSV file?](#section1)
[2. CSV Sample File.](#section2)
[3. Python CSV Module](#section3)
[4. CSV Module Functions](#section4)
[5. Reading CSV Files](#section5)
[6. Reading as a Dictionary](#section6)
[7. Writing to CSV Files](#section7)
<a id="section1"></a>
**1. What is a CSV file?**
A CSV file is a type of plain text file that uses specific structuring to arrange tabular data. CSV is a common format for data interchange as it's compact, simple and general. Many online services allow their users to export tabular data from the website into a CSV file. CSV files open directly in Excel, and nearly all databases have a tool to import from CSV. The standard format is defined by rows and columns of data. Each row is terminated by a newline to begin the next row, and within a row, each column is separated by a comma.
<a id="section2"></a>
**2. CSV Sample File.**
Data in the form of tables is commonly stored as CSV (comma-separated values). This is a text format intended for the presentation of tabular data. Each line of the file is one row of the table. The values of individual columns are separated by a separator symbol: a comma (,), a semicolon (;) or another symbol. CSV can be easily read and processed by Python.
```
f = open("data.csv")
print(f.read())
f.close()
```
<a id="section3"></a>
**3. Python CSV Module**
Python provides a `csv` module to handle CSV files. To read or write data, you loop through the rows of the file; the module handles splitting each row into columns for you, so you don't need to call `split` yourself.
<a id="section4"></a>
**4. CSV Module Functions**
In CSV module documentation you can find following functions:
csv.field_size_limit – return maximum field size
csv.get_dialect – get the dialect which is associated with the name
csv.list_dialects – show all registered dialects
csv.reader – read data from a csv file
csv.register_dialect - associate dialect with name
csv.writer – write data to a csv file
csv.unregister_dialect - delete the dialect associated with the name from the dialect registry
csv.QUOTE_ALL - Quote everything, regardless of type.
csv.QUOTE_MINIMAL - Quote fields with special characters
csv.QUOTE_NONNUMERIC - Quote all fields that aren't numeric values
csv.QUOTE_NONE – Don't quote anything in output
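A minimal sketch of two of the items above, `csv.register_dialect` and the quoting constants, using an in-memory buffer so no file is needed (the dialect name `'semi'` is our own choice):

```
import csv
import io

# Register a custom dialect: semicolon-separated, quote all non-numeric fields
csv.register_dialect('semi', delimiter=';', quoting=csv.QUOTE_NONNUMERIC)

buf = io.StringIO()
writer = csv.writer(buf, dialect='semi')
writer.writerow(['name', 'score'])   # both strings -> both quoted
writer.writerow(['alice', 12.5])     # the float is written unquoted
output = buf.getvalue()
print(output)
```

With `QUOTE_NONNUMERIC`, string fields come out as `"alice"` while the number stays bare, which also lets a reader using the same dialect convert unquoted fields back to floats.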
<a id="section5"></a>
**5. How to Read a CSV File**
To read data from CSV files, you must use the reader function to generate a reader object.
The reader function takes each row of the file and makes a list of all columns. Then you choose the column you want the variable data for.
It sounds more intricate than it is. Let's take a look at this example, and we will find out that working with CSV files isn't so hard.
```
import csv
# f = open("data.csv")
with open("data.csv") as f:
data = csv.reader(f)
for row in data:
print(row)
```
<a id="section6"></a>
**6. How to Read a CSV as a Dictionary**
You can also use `DictReader` to read CSV files. Each row is interpreted as a dictionary where the header row supplies the keys and the row's fields supply the values.
```
import csv
file = csv.DictReader(open("data.csv"))
for row in file:
print(row)
```
<a id="section7"></a>
**7. How to write CSV File**
When you have a set of data that you would like to store in a CSV file, you use the writer() function. To write the data row by row (line by line), you use the writerow() function.
Consider the following example. We write data into a file "writeData.csv", using a comma as the delimiter and the double quote as the quote character.
```
#import necessary modules
import csv
with open('writeData.csv', mode='w', newline='') as file:  # newline='' avoids blank lines on Windows
writer = csv.writer(file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
#way to write to csv file
writer.writerow(['Programming language', 'Designed by', 'Appeared', 'Extension'])
writer.writerow(['Python', 'Guido van Rossum', '1991', '.py'])
writer.writerow(['Java', 'James Gosling', '1995', '.java'])
writer.writerow(['C++', 'Bjarne Stroustrup', '1985', '.cpp'])
with open('writeData.csv') as f:
    data = f.readlines()
for item in data:
print(item, end=" ")
```
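The dictionary-based counterpart of `DictReader` is `csv.DictWriter`, which writes rows from dicts with the field names supplied up front. A sketch, using an in-memory buffer for brevity:

```
import csv
import io

fieldnames = ['Programming language', 'Designed by', 'Appeared']
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()  # emits the header row from fieldnames
writer.writerow({'Programming language': 'Python',
                 'Designed by': 'Guido van Rossum',
                 'Appeared': '1991'})

# Read it back with DictReader to confirm the round trip
buf.seek(0)
rows = list(csv.DictReader(buf))
print(rows[0]['Designed by'])
```

The same pattern works with a real file opened via `open('writeData.csv', 'w', newline='')`.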
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS-109B Introduction to Data Science
## Lab 5: Convolutional Neural Networks
**Harvard University**<br>
**Spring 2020**<br>
**Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner<br>
**Lab Instructors:** Chris Tanner and Eleni Angelaki Kaxiras<br>
**Content:** Eleni Angelaki Kaxiras, Pavlos Protopapas, Patrick Ohiomoba, and David Sondak
---
```
# RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
```
## Learning Goals
In this lab we will look at Convolutional Neural Networks (CNNs), and their building blocks.
By the end of this lab, you should:
- have a good understanding of how images, a common type of data for a CNN, are represented in the computer and how to think of them as arrays of numbers.
- be familiar with preprocessing images with `tf.keras` and `scipy`.
- know how to put together the building blocks used in CNNs - such as convolutional layers and pooling layers - in `tensorflow.keras` with an example.
- run your first CNN.
```
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (5,5)
import numpy as np
from scipy.optimize import minimize
from sklearn.utils import shuffle
%matplotlib inline
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Input
from tensorflow.keras.layers import Conv2D, Conv1D, MaxPooling2D, MaxPooling1D,\
GlobalAveragePooling1D, GlobalMaxPooling1D
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.metrics import AUC, Precision, Recall, FalsePositives, FalseNegatives, \
TruePositives, TrueNegatives
from tensorflow.keras.regularizers import l2
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
tf.keras.backend.clear_session() # For easy reset of notebook state.
print(tf.__version__) # You should see a > 2.0.0 here!
```
## Part 0: Running on SEAS JupyterHub
**PLEASE READ**: [Instructions for Using SEAS JupyterHub](https://canvas.harvard.edu/courses/65462/pages/instructions-for-using-seas-jupyterhub?module_item_id=638544)
SEAS and FAS are providing you with a platform in AWS to use for the class (accessible from the 'Jupyter' menu link in Canvas). These are AWS p2 instances with a GPU, 10GB of disk space, and 61 GB of RAM, for faster training for your networks. Most of the libraries such as keras, tensorflow, pandas, etc. are pre-installed. If a library is missing you may install it via the Terminal.
**NOTE : The AWS platform is funded by SEAS and FAS for the purposes of the class. It is not running against your individual credit.**
**NOTE NOTE NOTE: You are not allowed to use it for purposes not related to this course.**
**Help us keep this service: Make sure you stop your instance as soon as you do not need it.**

*source:CS231n Stanford: Google Cloud Tutorial*
## Part 1: Parts of a Convolutional Neural Net
We can have
- 1D CNNs which are useful for time-series or 1-Dimensional data,
- 2D CNNs used for 2-Dimensional data such as images, and also
- 3D CNNs used for video.
### a. Convolutional Layers.
Convolutional layers are comprised of **filters** and **feature maps**. The filters are essentially the **neurons** of the layer. They have the weights and produce the input for the next layer. The feature map is the output of one filter applied to the previous layer.
Convolutions operate over 3D tensors, called feature maps, with two spatial axes (height and width) as well as a depth axis (also called the channels axis). For an RGB image, the dimension of the depth axis is 3, because the image has three color channels: red, green, and blue. For a black-and-white picture, like the MNIST digits, the depth is 1 (levels of gray). The convolution operation extracts patches from its input feature map and applies the same transformation to all of these patches, producing an output feature map. This output feature map is still a 3D tensor: it has a width and a height. Its depth can be arbitrary, because the output depth is a parameter of the layer, and the different channels in that depth axis no longer stand for specific colors as in RGB input; rather, they stand for filters. Filters encode specific aspects of the input data: at a high level, a single filter could encode the concept “presence of a face in the input,” for instance.
In the MNIST example that we will see, the first convolution layer takes a feature map of size (28, 28, 1) and outputs a feature map of size (26, 26, 32): it computes 32 filters over its input. Each of these 32 output channels contains a 26×26 grid of values, which is a response map of the filter over the input, indicating the response of that filter pattern at different locations in the input.
Convolutions are defined by two key parameters:
- Size of the patches extracted from the inputs. These are typically 3×3 or 5×5
- The number of filters computed by the convolution.
**Padding**: One of "valid", "causal" or "same" (case-insensitive). "valid" means no padding. "same" pads the input so that the output has the same length as the original input. "causal" results in causal (dilated) convolutions.
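The feature-map sizes quoted above (a 28×28 input and 3×3 patches with "valid" padding giving a 26×26 output) follow from the standard formula out = floor((n + 2p - k) / s) + 1. A quick sketch (the helper name is our own):

```
def conv_output_size(n, k, padding=0, stride=1):
    """Spatial size of a conv output: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - k) // stride + 1

# The MNIST example from the text: 28x28 input, 3x3 filter, 'valid' padding
print(conv_output_size(28, 3))             # 26
# 'same' padding with a 3x3 filter keeps the size: p = (k - 1) // 2 = 1
print(conv_output_size(28, 3, padding=1))  # 28
```

The same formula predicts the layer shapes printed by `model.summary()` later in the lab.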
#### 1D Convolutional Network
In `tf.keras` see [1D convolutional layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D)

*image source: Deep Learning with Python by François Chollet*
#### 2D Convolutional Network
In `tf.keras` see [2D convolutional layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D)

**keras.layers.Conv2D** (filters, kernel_size, strides=(1, 1), padding='valid', activation=None, use_bias=True,
kernel_initializer='glorot_uniform', data_format='channels_last',
bias_initializer='zeros')
### b. Pooling Layers.
Pooling layers are also comprised of filters and feature maps. Let's say the pooling layer has a 2x2 receptive field and a stride of 2. This stride results in feature maps that are one half the size of the input feature maps. We can use a max() operation for each receptive field.
In `tf.keras` see [2D pooling layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D)
**keras.layers.MaxPooling2D**(pool_size=(2, 2), strides=None, padding='valid', data_format=None)

### c. Dropout Layers.
Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.
In `tf.keras` see [Dropout layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout)
tf.keras.layers.Dropout(rate, seed=None)
rate: float between 0 and 1. Fraction of the input units to drop.<br>
seed: A Python integer to use as random seed.
References
[Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)
### d. Fully Connected Layers.
A fully connected layer flattens the square feature map into a vector. Then we can use a sigmoid or softmax activation function to output probabilities of classes.
In `tf.keras` see [Fully Connected layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)
**keras.layers.Dense**(units, activation=None, use_bias=True,
kernel_initializer='glorot_uniform', bias_initializer='zeros')
## Part 2: Preprocessing the data
```
img = plt.imread('../images/cat.1700.jpg')
height, width, channels = img.shape
print(f'PHOTO: height = {height}, width = {width}, number of channels = {channels}, \
image datatype = {img.dtype}')
img.shape
# let's look at the image
imgplot = plt.imshow(img)
```
#### Visualizing the different channels
```
colors = [plt.cm.Reds, plt.cm.Greens, plt.cm.Blues, plt.cm.Greys]
subplots = np.arange(221,224)
for i in range(3):
plt.subplot(subplots[i])
plt.imshow(img[:,:,i], cmap=colors[i])
plt.subplot(224)
plt.imshow(img)
plt.show()
```
If you want to learn more: [Image Processing with Python and Scipy](http://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy)
## Part 3: Putting the Parts together to make a small ConvNet Model
Let's put all the parts together to make a convnet for classifying our good old MNIST digits.
```
# Load data and preprocess
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data(
path='mnist.npz') # load MNIST data
train_images.shape
```
**Notice:** These photos do not have a third channel dimension because they are B&W.
```
train_images.max(), train_images.min()
```
**reshape data to 3 dimensions for keras**
**convert int to float**
```
train_images = train_images.reshape((60000, 28, 28, 1)) # Reshape to get third dimension
test_images = test_images.reshape((10000, 28, 28, 1))
train_images = train_images.astype('float32') / 255 # Normalize between 0 and 1
test_images = test_images.astype('float32') / 255
# Convert labels to categorical data
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
```
**`input_shape` shouldn't be hard coded. It is (height, width, number of channels).**
```
mnist_cnn_model = Sequential() # Create sequential model
# Add network layers
mnist_cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
mnist_cnn_model.add(MaxPooling2D((2, 2)))
mnist_cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
mnist_cnn_model.add(MaxPooling2D((2, 2)))
mnist_cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
```
The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely connected classifier network like those you’re already familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas the output of the last conv layer is a 3D tensor. First we have to flatten the 3D outputs to 1D, and then add a few Dense layers on top.
```
mnist_cnn_model.add(Flatten())
mnist_cnn_model.add(Dense(32, activation='relu')) # before we make decision, how much we squash the layers
mnist_cnn_model.add(Dense(10, activation='softmax')) # number of classes, shouldn't be hard coded as well
mnist_cnn_model.summary()
```
**(26, 26) after convolution, 32 filters**
<div class="Question"><b>Question</b> Why are we using cross-entropy here?</div>
```
loss = tf.keras.losses.categorical_crossentropy
optimizer = Adam(learning_rate=0.001)
#optimizer = RMSprop(learning_rate=1e-2)
# see https://www.tensorflow.org/api_docs/python/tf/keras/metrics
metrics = ['accuracy']
# Compile model
mnist_cnn_model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
```
<div class="discussion"><b>Discussion</b> How can we choose the batch size?</div>
```
%%time
# Fit the model
verbose, epochs, batch_size = 1, 10, 64 # try a different num epochs and batch size : 30, 16
history = mnist_cnn_model.fit(train_images, train_labels,
epochs=epochs,
batch_size=batch_size,
verbose=verbose,
validation_split=0.2,
# validation_data=(X_val, y_val) # IF you have val data
shuffle=True)
print(history.history.keys())
print(history.history['val_accuracy'][-1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
#plt.savefig('../images/batch8.png')
mnist_cnn_model.metrics_names
# Evaluate the model on the test data:
score = mnist_cnn_model.evaluate(test_images, test_labels,
batch_size=batch_size,
verbose=0, callbacks=None)
#print("%s: %.2f%%" % (mnist_cnn_model.metrics_names[1], score[1]*100))
test_acc = mnist_cnn_model.evaluate(test_images, test_labels)
test_acc
```
<div class="discussion"><b>Discussion</b> Compare validation accuracy and test accuracy? Comment on whether we have overfitting.</div>
### Data Preprocessing : Meet the `ImageDataGenerator` class in `keras`
[(keras ImageGenerator documentation)](https://keras.io/preprocessing/image/)
The MNIST and other pre-loaded dataset are formatted in a way that is almost ready for feeding into the model. What about plain images? They should be formatted into appropriately preprocessed floating-point tensors before being fed into the network.
The Dogs vs. Cats dataset that you’ll use isn’t packaged with Keras. It was made available by Kaggle as part of a computer-vision competition in late 2013, back when convnets weren’t mainstream. The data has been downloaded for you from https://www.kaggle.com/c/dogs-vs-cats/data The pictures are medium-resolution color JPEGs.
```
# TODO: set your base dir to your correct local location
base_dir = '../data/cats_and_dogs_small'
import os, shutil
# Set up directory information
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
test_cats_dir = os.path.join(test_dir, 'cats')
test_dogs_dir = os.path.join(test_dir, 'dogs')
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
```
So you do indeed have 2,000 training images, 1,000 validation images, and 1,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success.
<div class="discussion"><b>Discussion</b> Should you always do your own splitting of the data? How about shuffling? Does it always make sense?</div>
```
img_path = '../data/cats_and_dogs_small/train/cats/cat.70.jpg'
# We preprocess the image into a 4D tensor
from keras.preprocessing import image
import numpy as np
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
# Remember that the model was trained on inputs
# that were preprocessed in the following way:
img_tensor /= 255.
# Its shape is (1, 150, 150, 3)
print(img_tensor.shape)
plt.imshow(img_tensor[0])
plt.show()
```
Why do we need an extra dimension here?
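(A hint, sketched in NumPy: `keras` models expect the first axis to be the *batch* axis, so even a single image must be shaped `(1, H, W, C)`.)

```
import numpy as np

img = np.zeros((150, 150, 3))        # one preprocessed image: (H, W, C)
batch = np.expand_dims(img, axis=0)  # add the batch axis the model expects
print(batch.shape)                   # (1, 150, 150, 3)
```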
#### Building the network
```
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
For the compilation step, you’ll go with the RMSprop optimizer. Because you ended the network with a single sigmoid unit, you’ll use binary crossentropy as the loss.
```
loss = tf.keras.losses.binary_crossentropy
#optimizer = Adam(learning_rate=0.001)
optimizer = RMSprop(learning_rate=1e-2)
metrics = ['accuracy']
# Compile model
model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
```
The steps for getting it into the network are roughly as follows:
1. Read the picture files.
2. Convert the JPEG content to RGB grids of pixels.
3. Convert these into floating-point tensors.
4. Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
It may seem a bit daunting, but fortunately Keras has utilities to take care of these steps automatically with the class `ImageDataGenerator`, which lets you quickly set up Python generators that can automatically turn image files on disk into batches of preprocessed tensors. This is what you’ll use here.
```
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
```
Let’s look at the output of one of these generators: it yields batches of 150×150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). There are 20 samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it loops endlessly over the images in the target folder. For this reason, you need to break the iteration loop at some point:
```
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
```
Let’s fit the model to the data using the generator. You do so using the `.fit_generator` method, the equivalent of `.fit` for data generators like this one. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like this one does.
Because the data is being generated endlessly, the Keras model needs to know how many samples to draw from the generator before declaring an epoch over. This is the role of the `steps_per_epoch` argument: after having drawn steps_per_epoch batches from the generator—that is, after having run for steps_per_epoch gradient descent steps - the fitting process will go to the next epoch. In this case, batches are 20 samples, so it will take 100 batches until you see your target of 2,000 samples.
When using fit_generator, you can pass a validation_data argument, much as with the fit method. It’s important to note that this argument is allowed to be a data generator, but it could also be a tuple of Numpy arrays. If you pass a generator as validation_data, then this generator is expected to yield batches of validation data endlessly; thus you should also specify the validation_steps argument, which tells the process how many batches to draw from the validation generator for evaluation
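The arithmetic above (20-sample batches, 2,000 training samples, so 100 steps per epoch) generalizes to a one-liner:

```
import math

def steps_per_epoch(n_samples, batch_size):
    """Number of generator batches needed to cover the dataset once."""
    return math.ceil(n_samples / batch_size)

print(steps_per_epoch(2000, 20))  # 100 -- the steps_per_epoch used for training
print(steps_per_epoch(1000, 20))  # 50  -- the validation_steps for 1,000 validation images
```

Using `ceil` means a final partial batch still gets drawn when the dataset size isn't a multiple of the batch size.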
```
%%time
# Fit the model <--- always a good idea to time it
verbose, epochs, batch_size, steps_per_epoch = 1, 5, 64, 100
history = model.fit_generator(
train_generator,
steps_per_epoch=steps_per_epoch,
epochs=5, # TODO: should be 100
validation_data=validation_generator,
validation_steps=50)
# It’s good practice to always save your models after training.
model.save('cats_and_dogs_small_1.h5')
```
Let’s plot the loss and accuracy of the model over the training and validation data during training:
```
print(history.history.keys())
print(history.history['val_accuracy'][-1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.savefig('../images/batch8.png')  # save before show(); show() clears the current figure
plt.show()
```
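Training curves over so few epochs are noisy. One common trick (a sketch, not from the original notebook) is to plot an exponentially weighted moving average of each metric instead of the raw values:

```python
def smooth_curve(points, factor=0.9):
    """Exponential moving average of a metric curve; higher factor = smoother."""
    smoothed = []
    for point in points:
        if smoothed:
            # Blend the previous smoothed value with the new point
            smoothed.append(smoothed[-1] * factor + point * (1 - factor))
        else:
            smoothed.append(point)
    return smoothed

# e.g. plt.plot(smooth_curve(history.history['val_loss']))
print(smooth_curve([1.0, 2.0, 3.0], factor=0.5))  # [1.0, 1.5, 2.25]
```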
Let's try data augmentation
```
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
```
These are just a few of the options available (for more, see the Keras documentation).
Let’s quickly go over this code:
- rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures.
- width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
- shear_range is for randomly applying shearing transformations.
- zoom_range is for randomly zooming inside pictures.
- horizontal_flip is for randomly flipping half the images horizontally—relevant when there are no assumptions of horizontal asymmetry (for example, real-world pictures).
- fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let’s look at the augmented images
```
from keras.preprocessing import image
fnames = [os.path.join(train_dogs_dir, fname) for
fname in os.listdir(train_dogs_dir)]
img_path = fnames[3] # Chooses one image to augment
img = image.load_img(img_path, target_size=(150, 150))
# Reads the image and resizes it
x = image.img_to_array(img) # Converts it to a Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # Reshapes it to (1, 150, 150, 3)
i = 0
# datagen.flow() yields augmented batches indefinitely, so break after four
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
```
If you train a new network using this data-augmentation configuration, the network will never see the same input twice. But the inputs it sees are still heavily intercorrelated, because they come from a small number of original images—you can’t produce new information, you can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, you’ll also add a **Dropout** layer to your model right before the densely connected classifier.
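Dropout itself is simple to sketch in plain numpy (illustrative only; the Keras `Dropout` layer handles this, including the train/test distinction, for you):

```python
import numpy as np

def inverted_dropout(x, rate=0.5, rng=np.random.default_rng(0)):
    """Zero out a fraction `rate` of activations and rescale the survivors
    so the expected activation sum is unchanged (training time only)."""
    mask = rng.random(x.shape) >= rate   # True = keep this unit
    return x * mask / (1.0 - rate)

activations = np.ones((4, 4))
dropped = inverted_dropout(activations, rate=0.5)
# Surviving entries are rescaled to 2.0; dropped entries are exactly 0.
print(dropped)
```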
```
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
loss = tf.keras.losses.binary_crossentropy
optimizer = RMSprop(lr=1e-4)
metrics = ['accuracy']  # 'acc' is just an alias for 'accuracy'; listing both records the metric twice
# Compile model
model.compile(loss=loss,
optimizer=optimizer,
metrics=metrics)
# Let’s train the network using data augmentation and dropout.
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
test_datagen = ImageDataGenerator(rescale=1./255)
# Note that the validation data shouldn’t be augmented!
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=5, # TODO: should be 100
validation_data=validation_generator,
validation_steps=50)
# save model if needed
model.save('cats_and_dogs_small_2.h5')
```
And let’s plot the results again. Thanks to data augmentation and dropout, you’re no longer overfitting: the training curves closely track the validation curves. You now reach an accuracy of about 82%, a 15% relative improvement over the non-regularized model. (Note: these figures are for the full 100-epoch run.)
```
print(history.history.keys())
print(history.history['val_accuracy'][-1])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Accuracy with data augmentation')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss with data augmentation')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
#plt.savefig('../images/batch8.png')
```
By leaning further on regularization techniques and by tuning the network’s hyperparameters (such as the number of filters per convolution layer, or the number of layers in the network), you may be able to get an even better accuracy, likely up to 86% or 87%. But it would prove difficult to go any higher just by training your own convnet from scratch, because you have so little data to work with. As a next step to improve your accuracy on this problem, you’ll have to use a pretrained model.
# Working with HEALPix data
[HEALPix](https://healpix.jpl.nasa.gov/) (Hierarchical Equal Area isoLatitude Pixelisation) is an algorithm that is often used to store data from all-sky surveys.
There are several tools in the Astropy ecosystem for working with HEALPix data, depending on what you need to do:
* The [astropy-healpix](https://astropy-healpix.readthedocs.io/en/latest/index.html) coordinated package is a BSD-licensed implementation of HEALPix which focuses on being able to convert celestial coordinates to HEALPix indices and vice-versa, as well as providing a few other low-level functions.
* The [reproject](https://reproject.readthedocs.io/en/stable/) coordinated package (which we've already looked at) includes functions for converting from/to HEALPix maps.
* The [HiPS](https://hips.readthedocs.io/en/latest/) affiliated package implements support for the [HiPS](http://aladin.u-strasbg.fr/hips/) scheme for storing data that is based on HEALPix.
In this tutorial, we will take a look at the first two of these, but we encourage you to learn more about HiPS too!
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h2><span class="fa fa-certificate"></span> Objectives</h2>
</div>
<div class="panel-body">
<ul>
<li>Convert between celestial coordinates and HEALPix indices</li>
<li>Find the boundaries of HEALPix pixels</li>
<li>Find healpix pixels close to a position</li>
<li>Reproject a HEALPix map to a standard projection</li>
</ul>
</div>
</section>
## Documentation
This notebook only shows a subset of the functionality in astropy-healpix and reproject. For more information about the features presented below as well as other available features, you can read the
[astropy-healpix](https://astropy-healpix.readthedocs.io/en/latest/index.html) and the [reproject](https://reproject.readthedocs.io/en/stable/) documentation.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc('image', origin='lower')
plt.rc('figure', figsize=(10, 6))
```
## Data
For this tutorial, we will be using a downsampled version of the Planck HFI 857Ghz map which is stored as a HEALPix map ([data/HFI_SkyMap_857_2048_R1.10_nominal_ZodiCorrected_lowres.fits](data/HFI_SkyMap_857_2048_R1.10_nominal_ZodiCorrected_lowres.fits)).
## Using astropy-healpix
To start off, we can open the HEALPix file (which is a FITS file) with astropy.io.fits:
```
from astropy.io import fits
hdulist = fits.open('data/HFI_SkyMap_857_2048_R1.10_nominal_ZodiCorrected_lowres.fits')
hdulist.info()
```
The HEALPix map values are stored in HDU 1. This HDU also contains useful header information that helps us understand how to interpret the HEALPix values:
```
hdulist[1].header['NSIDE']
hdulist[1].header['ORDERING']
hdulist[1].header['COORDSYS']
```
With this information we can now construct a ``HEALPix`` object:
```
from astropy_healpix import HEALPix
from astropy.coordinates import Galactic
hp = HEALPix(nside=hdulist[1].header['NSIDE'],
order=hdulist[1].header['ORDERING'],
frame=Galactic())
```
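NSIDE alone fixes the pixelisation: a HEALPix map always has 12 × NSIDE² equal-area pixels, which gives a quick estimate of the angular resolution. This is pure arithmetic, independent of astropy-healpix:

```python
import math

def healpix_stats(nside):
    """Number of pixels and approximate pixel size (arcmin) for a given NSIDE."""
    npix = 12 * nside ** 2
    pixel_area_sr = 4 * math.pi / npix                      # equal-area pixels
    pixel_size_arcmin = math.degrees(math.sqrt(pixel_area_sr)) * 60
    return npix, pixel_size_arcmin

# For the full-resolution NSIDE=2048 Planck map:
npix, size = healpix_stats(2048)
print(npix, round(size, 2))   # 50331648 pixels, ~1.72 arcmin each
```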
We can then use this object to manipulate the HEALPix map. To start off, we can find out what the coordinates of specific pixels are:
```
hp.healpix_to_skycoord([13322, 2231, 66432])
```
and vice-versa:
```
from astropy.coordinates import SkyCoord
hp.skycoord_to_healpix(SkyCoord.from_name('M31'))
```
You can also find out what the boundaries of a pixel are:
```
edge = hp.boundaries_skycoord(649476, step=100)
edge
```
The ``step`` argument controls how many points to sample along the edge of the pixel. The result should be a polygon:
```
plt.plot(edge[0].l.deg, edge[0].b.deg)
```
You can find all HEALPix pixels within a certain radius of a known position:
```
from astropy import units as u
hp.cone_search_skycoord(SkyCoord.from_name('M31'), radius=1 * u.deg)
```
And finally you can interpolate the map at specific coordinates:
```
hp.interpolate_bilinear_skycoord(SkyCoord.from_name('M31'), hdulist[1].data['I_STOKES'])
```
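Bilinear interpolation blends the values of the four pixels surrounding the requested position. The same idea on a regular Cartesian grid looks like this (illustration only; on the sphere, astropy-healpix uses the four neighbouring HEALPix pixels instead of grid corners):

```python
def bilinear(f00, f10, f01, f11, tx, ty):
    """Bilinear blend of four corner values; tx, ty are fractions in [0, 1]."""
    top = f00 * (1 - tx) + f10 * tx        # interpolate along x at the top edge
    bottom = f01 * (1 - tx) + f11 * tx     # ... and at the bottom edge
    return top * (1 - ty) + bottom * ty    # then interpolate along y

print(bilinear(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))   # midpoint -> 1.5
```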
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Challenge</h2>
</div>
<div class="panel-body">
<ol>
<li>Find the mean value of I_STOKES within 2 degrees of M42</li>
<li>Use astropy.coordinates to check that all the pixels returned by the cone search are indeed within 2 degrees of M42 (if not, why not? Hint: check the documentation of <a href="https://astropy-healpix.readthedocs.io/en/latest/api/astropy_healpix.HEALPix.html#astropy_healpix.HEALPix.cone_search_skycoord">cone_search_skycoord()</a>)</li>
</ol>
</div>
</section>
```
#1
import numpy as np
M42 = SkyCoord.from_name('M42')
m42_pixels = hp.cone_search_skycoord(M42, radius=2 * u.deg)
print(np.mean(hdulist[1].data['I_STOKES'][m42_pixels]))
#2
m42_cone_search_coords = hp.healpix_to_skycoord(m42_pixels)
separation = m42_cone_search_coords.separation(M42).degree
_ = plt.hist(separation, bins=50)
```
## Using reproject for HEALPix data
The reproject package is useful for converting a HEALPix map to a regular projection, and vice versa. For example, let's define a simple all-sky plate carrée WCS:
```
from astropy.wcs import WCS
wcs = WCS(naxis=2)
wcs.wcs.ctype = 'GLON-CAR', 'GLAT-CAR'
wcs.wcs.crval = 0, 0
wcs.wcs.crpix = 180.5, 90.5
wcs.wcs.cdelt = -1, 1
```
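For a linear projection like this plate carrée header, the pixel-to-world mapping is simple enough to check by hand. A simplified sketch of the FITS convention, world = CRVAL + (pixel − CRPIX) × CDELT, ignoring rotation and longitude wrap-around:

```python
def pixel_to_world(px, py, crval=(0.0, 0.0), crpix=(180.5, 90.5), cdelt=(-1.0, 1.0)):
    """1-indexed pixel coordinates -> (lon, lat) in degrees for a linear CAR header."""
    lon = crval[0] + (px - crpix[0]) * cdelt[0]
    lat = crval[1] + (py - crpix[1]) * cdelt[1]
    return lon, lat

print(pixel_to_world(180.5, 90.5))   # reference pixel -> (0.0, 0.0)
print(pixel_to_world(0.5, 0.5))      # lower-left corner -> (180.0, -90.0)
```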
We can now use [reproject_from_healpix](https://reproject.readthedocs.io/en/stable/api/reproject.reproject_from_healpix.html#reproject.reproject_from_healpix) to convert the HEALPix map to this header:
```
from reproject import reproject_from_healpix
array, footprint = reproject_from_healpix('data/HFI_SkyMap_857_2048_R1.10_nominal_ZodiCorrected_lowres.fits',
wcs, shape_out=(180, 360))
plt.imshow(array, vmax=100)
```
You can also use [reproject_to_healpix](https://reproject.readthedocs.io/en/stable/api/reproject.reproject_to_healpix.html#reproject.reproject_to_healpix) to convert a regular map to a HEALPix array.
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Challenge</h2>
</div>
<div class="panel-body">
<ol>
<li>Reproject the HFI HEALPix map to the projection of the GAIA point source density map as well as the IRAS map that we used in previous tutorials.</li>
<li>Visualize the results using WCSAxes and optionally the image normalization options.</li>
</ol>
</div>
</section>
```
#1
header_gaia = fits.getheader('data/LMCDensFits1k.fits')
header_irsa = fits.getheader('data/ISSA_100_LMC.fits')
array_gaia, _ = reproject_from_healpix('data/HFI_SkyMap_857_2048_R1.10_nominal_ZodiCorrected_lowres.fits',
header_gaia)
array_irsa, _ = reproject_from_healpix('data/HFI_SkyMap_857_2048_R1.10_nominal_ZodiCorrected_lowres.fits',
header_irsa)
#2
from astropy.visualization import simple_norm
ax = plt.subplot(projection=WCS(header_gaia))
im = ax.imshow(array_gaia, cmap='plasma',
               norm=simple_norm(array_gaia, stretch='sqrt', percent=99.5))
plt.colorbar(im)
ax.grid()
ax.set_xlabel('Galactic Longitude')
ax.set_ylabel('Galactic Latitude')
```
<center><i>This notebook was written by <a href="https://aperiosoftware.com/">Aperio Software Ltd.</a> © 2019, and is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License (CC BY 4.0)</a></i></center>

# Datasets for the book
Here we provide links to the datasets used in the book.
Important Notes:
1. These datasets are provided on external servers by third parties.
2. Due to security restrictions on GitHub, you will have to cut and paste the FTP links (they are not provided as clickable URLs).
# Python and the Surrounding Software Ecology
### Interfacing with R via rpy2
* sequence.index
Please FTP from this URL(cut and paste)
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/historical_data/former_toplevel/sequence.index
# Next-generation Sequencing (NGS)
## Working with modern sequence formats
* SRR003265.filt.fastq.gz
Please FTP from this URL (cut and paste)
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265.filt.fastq.gz
## Working with BAM files
* NA18490_20_exome.bam
Please FTP from this URL (cut and paste)
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/exome_alignment/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam
* NA18490_20_exome.bam.bai
Please FTP from this URL (cut and paste)
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/exome_alignment/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam.bai
## Analyzing data in Variant Call Format (VCF)
* tabix link:
ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20130502/supporting/vcf_with_sample_level_annotation/ALL.chr22.phase3_shapeit2_mvncall_integrated_v5_extra_anno.20130502.genotypes.vcf.gz
# Genomics
### Working with high-quality reference genomes
* [falciparum.fasta](http://plasmodb.org/common/downloads/release-9.3/Pfalciparum3D7/fasta/data/PlasmoDB-9.3_Pfalciparum3D7_Genome.fasta)
### Dealing with low-quality genome references
* gambiae.fa.gz
Please FTP from this URL (cut and paste)
ftp://ftp.vectorbase.org/public_data/organism_data/agambiae/Genome/agambiae.CHROMOSOMES-PEST.AgamP3.fa.gz
* [atroparvus.fa.gz](https://www.vectorbase.org/download/anopheles-atroparvus-ebroscaffoldsaatre1fagz)
### Traversing genome annotations
* [gambiae.gff3.gz](http://www.vectorbase.org/download/anopheles-gambiae-pestbasefeaturesagamp42gff3gz)
# PopGen
### Managing datasets with PLINK
* [hapmap.map.bz2](http://hapmap.ncbi.nlm.nih.gov/downloads/genotypes/hapmap3/plink_format/draft_2/hapmap3_r2_b36_fwd.consensus.qc.poly.map.bz2)
* [hapmap.ped.bz2](http://hapmap.ncbi.nlm.nih.gov/downloads/genotypes/hapmap3/plink_format/draft_2/hapmap3_r2_b36_fwd.consensus.qc.poly.ped.bz2)
* [relationships.txt](http://hapmap.ncbi.nlm.nih.gov/downloads/genotypes/hapmap3/plink_format/draft_2/relationships_w_pops_121708.txt)
# PDB
### Parsing mmCIF files with Biopython
* [1TUP.cif](http://www.rcsb.org/pdb/download/downloadFile.do?fileFormat=cif&compression=NO&structureId=1TUP)
# Python for Big genomics datasets
### Setting the stage for high-performance computing
These are the exact same files as _Managing datasets with PLINK_ above
### Programming with laziness
* SRR003265_1.filt.fastq.gz Please ftp from this URL (cut and paste):
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265_1.filt.fastq.gz
* SRR003265_2.filt.fastq.gz Please ftp from this URL (cut and paste):
ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/sequence_read/SRR003265_2.filt.fastq.gz
```
import torch
import torch.nn as nn
from torch.autograd import Variable
def conv3x3(in_, out):
return nn.Conv2d(in_, out, 3, padding=1)
class ConvRelu(nn.Module):
def __init__(self, in_, out):
super().__init__()
self.conv = conv3x3(in_, out)
self.activation = nn.ReLU(inplace=True)
def forward(self, x):
x = self.conv(x)
x = self.activation(x)
return x
class NoOperation(nn.Module):
def forward(self, x):
return x
class DecoderBlock(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels):
super().__init__()
self.block = nn.Sequential(
ConvRelu(in_channels, middle_channels),
nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=3, stride=2, padding=1, output_padding=1),
nn.ReLU(inplace=True)
)
def forward(self, x):
return self.block(x)
class DecoderBlockV2(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels, is_deconv=True,
output_padding=0):
super(DecoderBlockV2, self).__init__()
self.in_channels = in_channels
if is_deconv:
"""
Parameters for the deconvolution were chosen to avoid checkerboard artifacts,
following https://distill.pub/2016/deconv-checkerboard/
"""
self.block = nn.Sequential(
ConvRelu(in_channels, middle_channels),
nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=4, stride=2,
padding=1, output_padding=output_padding),
nn.ReLU(inplace=True)
)
else:
self.block = nn.Sequential(
nn.Upsample(scale_factor=2, mode='bilinear'),
ConvRelu(in_channels, middle_channels),
ConvRelu(middle_channels, out_channels),
)
def forward(self, x):
return self.block(x)
class Interpolate(nn.Module):
def __init__(self, mode='nearest', scale_factor=2,
align_corners=False, output_padding=0):
super(Interpolate, self).__init__()
self.interp = nn.functional.interpolate
self.mode = mode
self.scale_factor = scale_factor
self.align_corners = align_corners
self.pad = output_padding
def forward(self, x):
if self.mode in ['linear','bilinear','trilinear']:
x = self.interp(x, mode=self.mode,
scale_factor=self.scale_factor,
align_corners=self.align_corners)
else:
x = self.interp(x, mode=self.mode,
scale_factor=self.scale_factor)
if self.pad > 0:
x = nn.ZeroPad2d((0, self.pad, 0, self.pad))(x)
return x
class DecoderBlockV3(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels,
is_deconv=True, output_padding=0):
super(DecoderBlockV3, self).__init__()
self.in_channels = in_channels
if is_deconv:
"""
Parameters for the deconvolution were chosen to avoid checkerboard artifacts,
following https://distill.pub/2016/deconv-checkerboard/
"""
self.block = nn.Sequential(
nn.ConvTranspose2d(in_channels, middle_channels, kernel_size=4, stride=2,
padding=1, output_padding=output_padding),
ConvRelu(middle_channels, out_channels),
)
else:
self.block = nn.Sequential(
Interpolate(mode='nearest', scale_factor=2,
output_padding=output_padding),
# nn.Upsample(scale_factor=2, mode='bilinear'),
ConvRelu(in_channels, middle_channels),
ConvRelu(middle_channels, out_channels),
)
def forward(self, x):
return self.block(x)
class AdaptiveConcatPool2d(nn.Module):
def __init__(self, sz=None):
super().__init__()
sz = sz or (1,1)
self.ap = nn.AdaptiveAvgPool2d(sz)
self.mp = nn.AdaptiveMaxPool2d(sz)
def forward(self, x): return torch.cat([self.mp(x), self.ap(x)], 1)
class Resnet(nn.Module):
def __init__(self, num_classes, num_filters=32,
pretrained=True, is_deconv=False):
super().__init__()
self.num_classes = num_classes
# self.conv4to3 = nn.Conv2d(4, 3, 1)
# self.encoder = pretrainedmodels.__dict__['se_resnext50_32x4d'](num_classes=1000,
# pretrained='imagenet')
# code removes final layer
# layers = resnet34()
layers = list(resnet34().children())[:-2]
# # replace first convolutional layer by 4->64 while keeping corresponding weights
# # and initializing new weights with zeros
# # https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-448-public-lb/notebook
# w = layers[0].weight
# layers[0] = nn.Conv2d(4,64,kernel_size=(7,7),stride=(2,2),padding=(3, 3),
# bias=False)
# layers[0].weight = torch.nn.Parameter(torch.cat((w,torch.zeros(64,1,7,7)),
# dim=1))
# layers += [AdaptiveConcatPool2d()]
self.encoder = nn.Sequential(*layers)
self.map_logits = nn.Conv2d(512, num_classes, kernel_size=(3,3),
stride=(1,1), padding=1)
# self.encoder = nn.Sequential(*list(self.encoder.children())[:-1])
# self.pool = nn.MaxPool2d(2, 2)
# self.convp = nn.Conv2d(1056, 512, 3)
# self.csize = 1024 * 1 * 1
# self.bn1 = nn.BatchNorm1d(1024)
# self.do1 = nn.Dropout(p=0.5)
# self.lin1 = nn.Linear(1024, 512)
# self.act1 = nn.ReLU()
# self.bn2 = nn.BatchNorm1d(512)
# self.do2 = nn.Dropout(0.5)
# self.lin2 = nn.Linear(512, num_classes)
def forward(self, x):
# set to True for debugging
print_sizes = False
if print_sizes:
print('')
print('x',x.shape)
# print layer dictionary
# print(self.encoder.features)
# x = self.conv4to3(x)
# m = self.encoder._modules
# layer_names = list(m.keys())
# mx = {}
# for i,f in enumerate(m):
# x = m[f](x)
# mx[layer_names[i]] = x
# if print_sizes:
# if isinstance(x,tuple):
# print(i,layer_names[i],x[0].size(),x[1].size())
# else:
# print(i,layer_names[i],x.size())
# if layer_names[i]=='avg_pool': break
x = self.encoder(x)
if print_sizes: print('encoder',x.shape)
x = self.map_logits(x)
if print_sizes: print('map_logits',x.shape)
# x = x.view(-1, self.csize)
# if print_sizes: print('view',x.size())
# x = self.bn1(x)
# x = self.do1(x)
# if print_sizes: print('do1',x.size())
# x = self.lin1(x)
# if print_sizes: print('lin1',x.size())
# x = self.act1(x)
# x = self.bn2(x)
# x = self.do2(x)
# x = self.lin2(x)
# if print_sizes: print('lin2',x.shape)
return x
```
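The `output_padding` plumbing in the decoder blocks above exists because transposed convolutions and nearest-neighbour upsampling can disagree by a pixel on odd input sizes. The standard `ConvTranspose2d` output-size formula is easy to check in plain Python (assuming square inputs and dilation 1):

```python
def conv_transpose_out(size, kernel=4, stride=2, padding=1, output_padding=0):
    """Spatial output size of nn.ConvTranspose2d for a square input (dilation=1)."""
    return (size - 1) * stride - 2 * padding + kernel + output_padding

# The kernel=4/stride=2/padding=1 choice in DecoderBlockV2/V3 doubles the size exactly...
print(conv_transpose_out(64))                               # 128
# ...while the kernel=3 variant in DecoderBlock needs output_padding=1 to do the same.
print(conv_transpose_out(64, kernel=3, output_padding=1))   # 128
```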
# Table of Contents
<p><div class="lev1 toc-item"><a href="#Python-Basics-with-Numpy-(optional-assignment)" data-toc-modified-id="Python-Basics-with-Numpy-(optional-assignment)-1"><span class="toc-item-num">1 </span>Python Basics with Numpy (optional assignment)</a></div><div class="lev2 toc-item"><a href="#About-iPython-Notebooks" data-toc-modified-id="About-iPython-Notebooks-11"><span class="toc-item-num">1.1 </span>About iPython Notebooks</a></div><div class="lev2 toc-item"><a href="#1---Building-basic-functions-with-numpy" data-toc-modified-id="1---Building-basic-functions-with-numpy-12"><span class="toc-item-num">1.2 </span>1 - Building basic functions with numpy</a></div><div class="lev3 toc-item"><a href="#1.1---sigmoid-function,-np.exp()" data-toc-modified-id="1.1---sigmoid-function,-np.exp()-121"><span class="toc-item-num">1.2.1 </span>1.1 - sigmoid function, np.exp()</a></div><div class="lev3 toc-item"><a href="#1.2---Sigmoid-gradient" data-toc-modified-id="1.2---Sigmoid-gradient-122"><span class="toc-item-num">1.2.2 </span>1.2 - Sigmoid gradient</a></div><div class="lev3 toc-item"><a href="#1.3---Reshaping-arrays" data-toc-modified-id="1.3---Reshaping-arrays-123"><span class="toc-item-num">1.2.3 </span>1.3 - Reshaping arrays</a></div><div class="lev3 toc-item"><a href="#1.4---Normalizing-rows" data-toc-modified-id="1.4---Normalizing-rows-124"><span class="toc-item-num">1.2.4 </span>1.4 - Normalizing rows</a></div><div class="lev3 toc-item"><a href="#1.5---Broadcasting-and-the-softmax-function" data-toc-modified-id="1.5---Broadcasting-and-the-softmax-function-125"><span class="toc-item-num">1.2.5 </span>1.5 - Broadcasting and the softmax function</a></div><div class="lev2 toc-item"><a href="#2)-Vectorization" data-toc-modified-id="2)-Vectorization-13"><span class="toc-item-num">1.3 </span>2) Vectorization</a></div><div class="lev3 toc-item"><a href="#2.1-Implement-the-L1-and-L2-loss-functions" data-toc-modified-id="2.1-Implement-the-L1-and-L2-loss-functions-131"><span 
class="toc-item-num">1.3.1 </span>2.1 Implement the L1 and L2 loss functions</a></div>
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
# basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1. / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a\*b, c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.size, 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
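An equivalent, dimension-agnostic way to write the body of `image2vector` (a sketch for comparison, not the graded answer) is to let numpy infer the leading dimension with `-1`; both helper names below are hypothetical:

```python
import numpy as np

def image2vector_explicit(image):
    # Dimensions looked up from image.shape, as the instructions suggest
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)

def image2vector_inferred(image):
    # -1 lets numpy infer length*height*depth automatically
    return image.reshape(-1, 1)

image = np.arange(18, dtype=float).reshape(3, 3, 2)
print(image2vector_explicit(image).shape)  # (18, 1)
print(np.array_equal(image2vector_explicit(image), image2vector_inferred(image)))  # True
```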
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[2, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
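To make that shape mismatch concrete, the small sketch below prints the shapes involved in the row normalization; the `(2, 1)` norm vector is stretched across the three columns of `x` by broadcasting:

```python
import numpy as np

x = np.array([[0.0, 3.0, 4.0],
              [2.0, 6.0, 4.0]])
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
print(x.shape, x_norm.shape)  # (2, 3) (2, 1)
normalized = x / x_norm       # broadcasting expands x_norm to (2, 3)
print(normalized)
```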
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$

```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp / x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
    [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
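One caveat worth knowing (beyond what the grader requires): `np.exp` overflows for large logits. A common remedy, sketched below, subtracts each row's maximum before exponentiating; the result is unchanged because softmax is shift-invariant within a row.

```python
import numpy as np

def softmax_stable(x):
    # Subtracting the row max changes nothing mathematically
    # (the e^{-max} factor cancels between numerator and denominator)
    # but keeps np.exp from overflowing.
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)

x = np.array([[9.0, 2.0, 5.0, 0.0, 0.0],
              [7.0, 5.0, 0.0, 0.0, 0.0]])
print(softmax_stable(x))                           # matches the expected output above
print(softmax_stable(np.array([[1000.0, 1000.0]])))  # [[0.5 0.5]] -- no overflow
```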
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.
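A minimal illustration of that distinction:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])
print(np.dot(a, b))       # matrix product:  [[19 22] [43 50]]
print(a * b)              # element-wise:    [[ 5 12] [21 32]]
print(np.multiply(a, b))  # identical to a * b
```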
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs((y - yhat)))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.square(yhat - y))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
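Using the `np.dot` identity from the reminder, the same loss can be written as a dot product of the residual with itself; the helper name below is hypothetical:

```python
import numpy as np

def L2_dot(yhat, y):
    diff = y - yhat
    return np.dot(diff, diff)  # sum of squared residuals

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print(L2_dot(yhat, y))  # 0.43, matching the expected output above
```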
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
```
import tensorflow as tf
from matplotlib import pylab
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
# Required for data download and preparation
import struct
import gzip
import os
from six.moves.urllib.request import urlretrieve
```
## Defining Hyperparameters
Here we define the set of hyperparameters we're going to use in our example. These hyperparameters include the `batch_size`, the train dataset size (`n_train`), and the different layers in our CNN (`cnn_layer_ids`). You can find a description of each hyperparameter in the comments.
```
batch_size = 100 # This is the typical batch size we've been using
image_size = 28 # This is the width/height of a single image
# Number of color channels in an image. These are black and white images
n_channels = 1
# Number of different digits we have images for (i.e. classes)
n_classes = 10
n_train = 55000 # Train dataset size
n_valid = 5000 # Validation dataset size
n_test = 10000 # Test dataset size
# Layers in the CNN in the order from input to output
cnn_layer_ids = ['conv1','pool1','conv2','pool2','fulcon1','softmax']
# Hyperparameters of each layer (e.g. filter size of each convolution layer)
layer_hyperparameters = {'conv1':{'weight_shape':[3,3,n_channels,16],'stride':[1,1,1,1],'padding':'SAME'},
'pool1':{'kernel_shape':[1,3,3,1],'stride':[1,2,2,1],'padding':'SAME'},
'conv2':{'weight_shape':[3,3,16,32],'stride':[1,1,1,1],'padding':'SAME'},
'pool2':{'kernel_shape':[1,3,3,1],'stride':[1,2,2,1],'padding':'SAME'},
'fulcon1':{'weight_shape':[7*7*32,128]},
'softmax':{'weight_shape':[128,n_classes]}
}
```
## Defining Inputs and Outputs
Here we define the input and output placeholders required to process a batch of data. We will use the same placeholders for the training, validation and testing data, as all of them are processed in same-sized batches.
```
# Inputs (Images) and Outputs (Labels) Placeholders
tf_inputs = tf.placeholder(shape=[batch_size, image_size, image_size, n_channels],dtype=tf.float32,name='tf_mnist_images')
tf_labels = tf.placeholder(shape=[batch_size, n_classes],dtype=tf.float32,name='tf_mnist_labels')
```
## Defining Model Parameters and Other Variables
Here we define various TensorFlow variables required for the following computations. These include a global step variable (to decay the learning rate) and the weights and biases of each layer of the CNN.
```
# Global step for decaying the learning rate
global_step = tf.Variable(0,trainable=False)
# Initializing the variables
layer_weights = {}
layer_biases = {}
for layer_id in cnn_layer_ids:
if 'pool' not in layer_id:
layer_weights[layer_id] = tf.Variable(initial_value=tf.random_normal(shape=layer_hyperparameters[layer_id]['weight_shape'],
stddev=0.02,dtype=tf.float32),name=layer_id+'_weights')
layer_biases[layer_id] = tf.Variable(initial_value=tf.random_normal(shape=[layer_hyperparameters[layer_id]['weight_shape'][-1]],
stddev=0.01,dtype=tf.float32),name=layer_id+'_bias')
print('Variables initialized')
```
## Defining Inference of the CNN
Here we define the computations starting from the input placeholder (`tf_inputs`), computing the hidden activations of each layer found in `cnn_layer_ids` (i.e. convolution, pooling and fully connected layers) using their respective hyperparameters (`layer_hyperparameters`). At the final layer (`softmax`) we do not apply an activation function as we do for the rest of the layers; instead we obtain the unnormalized logits directly.
```
# Calculating Logits
h = tf_inputs
for layer_id in cnn_layer_ids:
if 'conv' in layer_id:
# For each convolution layer, compute the output by using conv2d function
# This operation results in a [batch_size, output_height, output_width, out_channels]
# sized 4 dimensional tensor
h = tf.nn.conv2d(h,layer_weights[layer_id],layer_hyperparameters[layer_id]['stride'],
layer_hyperparameters[layer_id]['padding']) + layer_biases[layer_id]
h = tf.nn.relu(h)
elif 'pool' in layer_id:
# For each pooling layer, compute the output by max pooling
# This operation results in a [batch_size, output_height, output_width, out_channels]
# sized 4 dimensional tensor
h = tf.nn.max_pool(h, layer_hyperparameters[layer_id]['kernel_shape'],layer_hyperparameters[layer_id]['stride'],
layer_hyperparameters[layer_id]['padding'])
elif layer_id == 'fulcon1':
# At the first fulcon layer we need to reshape the 4 dimensional output to a
# 2 dimensional output to be processed by fully connected layers
        # Note this should only be done once, before
# computing the output of the first fulcon layer
h = tf.reshape(h,[batch_size,-1])
h = tf.matmul(h,layer_weights[layer_id]) + layer_biases[layer_id]
h = tf.nn.relu(h)
elif layer_id == 'softmax':
# Note that here we do not perform the same reshaping we did for fulcon1
# We only perform the matrix multiplication on previous output
h = tf.matmul(h,layer_weights[layer_id]) + layer_biases[layer_id]
print('Calculated logits')
tf_logits = h
```
## Defining Loss
We use softmax cross entropy loss to optimize the parameters of the model.
```
# Calculating the softmax cross entropy loss with the computed logits and true labels (one hot encoded)
tf_loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=tf_logits,labels=tf_labels)
print('Loss defined')
```
## Model Parameter Optimizer
We define an exponentially decaying learning rate and an optimizer to optimize the parameters.
```
# Optimization
# Here we define the function to decay the learning rate exponentially.
# Everytime the global step increases the learning rate decreases
tf_learning_rate = tf.train.exponential_decay(learning_rate=0.001,global_step=global_step,decay_rate=0.5,decay_steps=1,staircase=True)
tf_loss_minimize = tf.train.RMSPropOptimizer(learning_rate=tf_learning_rate, momentum=0.9).minimize(tf_loss)
print('Loss minimization defined')
```
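With `decay_rate=0.5`, `decay_steps=1` and `staircase=True`, the schedule above reduces to halving the rate each time `global_step` is incremented. A plain-Python sketch of the same formula (the helper name is hypothetical):

```python
def decayed_learning_rate(initial_lr, global_step, decay_rate=0.5, decay_steps=1):
    # staircase=True floors the exponent to an integer
    return initial_lr * decay_rate ** (global_step // decay_steps)

for step in range(4):
    print(step, decayed_learning_rate(0.001, step))
# 0 0.001
# 1 0.0005
# 2 0.00025
# 3 0.000125
```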
## Defining Predictions
We obtain the predictions by applying a softmax activation to the logits. Additionally, we define a global step increment operation, which will be run every time the validation accuracy plateaus.
```
tf_predictions = tf.nn.softmax(tf_logits)
print('Prediction defined')
tf_tic_toc = tf.assign(global_step, global_step + 1)
```
## Define Accuracy
A simple function to calculate accuracy for a given set of labels and predictions.
```
def accuracy(predictions,labels):
'''
Accuracy of a given set of predictions of size (N x n_classes) and
labels of size (N x n_classes)
'''
return np.sum(np.argmax(predictions,axis=1)==np.argmax(labels,axis=1))*100.0/labels.shape[0]
```
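A tiny standalone check of `accuracy` (numpy only, no TensorFlow needed), with made-up predictions and one-hot labels:

```python
import numpy as np

def accuracy(predictions, labels):
    '''Accuracy (%) for predictions of size (N x n_classes) and one-hot labels.'''
    return np.sum(np.argmax(predictions, axis=1) == np.argmax(labels, axis=1)) * 100.0 / labels.shape[0]

# Three of the four predictions pick the correct class
predictions = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labels      = np.array([[1, 0],     [1, 0],     [1, 0],     [0, 1]])
print(accuracy(predictions, labels))  # 75.0
```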
## Loading Data
Here we download (if needed) the MNIST dataset and perform reshaping and normalization. We also convert the labels to one-hot encoded vectors.
```
def maybe_download(url, filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
def read_mnist(fname_img, fname_lbl, one_hot=False):
print('\nReading files %s and %s'%(fname_img, fname_lbl))
# Processing images
with gzip.open(fname_img) as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
print(num,rows,cols)
img = (np.frombuffer(fimg.read(num*rows*cols), dtype=np.uint8).reshape(num, rows, cols,1)).astype(np.float32)
print('(Images) Returned a tensor of shape ',img.shape)
#img = (img - np.mean(img)) /np.std(img)
img *= 1.0 / 255.0
# Processing labels
with gzip.open(fname_lbl) as flbl:
# flbl.read(8) reads upto 8 bytes
magic, num = struct.unpack(">II", flbl.read(8))
lbl = np.frombuffer(flbl.read(num), dtype=np.int8)
if one_hot:
one_hot_lbl = np.zeros(shape=(num,10),dtype=np.float32)
one_hot_lbl[np.arange(num),lbl] = 1.0
print('(Labels) Returned a tensor of shape: %s'%lbl.shape)
print('Sample labels: ',lbl[:10])
if not one_hot:
return img, lbl
else:
return img, one_hot_lbl
# Download data if needed
url = 'http://yann.lecun.com/exdb/mnist/'
# training data
maybe_download(url,'train-images-idx3-ubyte.gz',9912422)
maybe_download(url,'train-labels-idx1-ubyte.gz',28881)
# testing data
maybe_download(url,'t10k-images-idx3-ubyte.gz',1648877)
maybe_download(url,'t10k-labels-idx1-ubyte.gz',4542)
# Read the training and testing data
train_inputs, train_labels = read_mnist('train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',True)
test_inputs, test_labels = read_mnist('t10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz',True)
valid_inputs, valid_labels = train_inputs[-n_valid:,:,:,:], train_labels[-n_valid:,:]
train_inputs, train_labels = train_inputs[:-n_valid,:,:,:], train_labels[:-n_valid,:]
print('\nTrain size: ', train_inputs.shape[0])
print('\nValid size: ', valid_inputs.shape[0])
print('\nTest size: ', test_inputs.shape[0])
```
## Data Generators for MNIST
Here we define the logic to iterate through the training, validation and testing datasets in strides of `batch_size`.
```
train_index, valid_index, test_index = 0,0,0
def get_train_batch(images, labels, batch_size):
global train_index
batch = images[train_index:train_index+batch_size,:,:,:], labels[train_index:train_index+batch_size,:]
train_index = (train_index + batch_size)%(images.shape[0] - batch_size)
return batch
def get_valid_batch(images, labels, batch_size):
global valid_index
batch = images[valid_index:valid_index+batch_size,:,:,:], labels[valid_index:valid_index+batch_size,:]
valid_index = (valid_index + batch_size)%(images.shape[0] - batch_size)
return batch
def get_test_batch(images, labels, batch_size):
global test_index
batch = images[test_index:test_index+batch_size,:,:,:], labels[test_index:test_index+batch_size,:]
test_index = (test_index + batch_size)%(images.shape[0] - batch_size)
return batch
```
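The modular index above cycles through the dataset in full-sized batches. A standalone sketch of the same indexing (toy sizes, hypothetical names) makes the wrap-around visible; note that because the index wraps modulo `n - batch_size`, with these sizes the final `batch_size` samples are never served as a batch start, which is the trade made to keep every batch full.

```python
import numpy as np

n, batch_size = 10, 2
images = np.arange(n)
index = 0

def next_batch():
    # Same indexing scheme as get_train_batch above, on a 1-D toy array
    global index
    batch = images[index:index + batch_size]
    index = (index + batch_size) % (n - batch_size)
    return batch

starts = [next_batch()[0] for _ in range(8)]
print(starts)  # [0, 2, 4, 6, 0, 2, 4, 6] -- wraps before samples 8 and 9
```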
## Visualizing MNIST Results
Here we define a function to collect correctly and incorrectly classified samples to visualize later. Visualizing such samples will help us to understand why the CNN incorrectly classified certain samples.
```
# Makes sure we only collect 10 samples for each
correct_fill_index, incorrect_fill_index = 0,0
# Visualization purposes
correctly_predicted = np.empty(shape=(10,28,28,1),dtype=np.float32)
correct_predictions = np.empty(shape=(10,n_classes),dtype=np.float32)
incorrectly_predicted = np.empty(shape=(10,28,28,1),dtype=np.float32)
incorrect_predictions = np.empty(shape=(10,n_classes),dtype=np.float32)
def collect_samples(test_batch_predictions,test_images, test_labels):
global correctly_predicted, correct_predictions
global incorrectly_predicted, incorrect_predictions
global correct_fill_index, incorrect_fill_index
correct_indices = np.where(np.argmax(test_batch_predictions,axis=1)==np.argmax(test_labels,axis=1))[0]
incorrect_indices = np.where(np.argmax(test_batch_predictions,axis=1)!=np.argmax(test_labels,axis=1))[0]
if correct_indices.size>0 and correct_fill_index<10:
print('\nCollecting Correctly Predicted Samples')
chosen_index = np.random.choice(correct_indices)
correctly_predicted[correct_fill_index,:,:,:]=test_images[chosen_index,:].reshape(1,image_size,image_size,n_channels)
correct_predictions[correct_fill_index,:]=test_batch_predictions[chosen_index,:]
correct_fill_index += 1
if incorrect_indices.size>0 and incorrect_fill_index<10:
        print('Collecting Incorrectly Predicted Samples')
chosen_index = np.random.choice(incorrect_indices)
incorrectly_predicted[incorrect_fill_index,:,:,:]=test_images[chosen_index,:].reshape(1,image_size,image_size,n_channels)
incorrect_predictions[incorrect_fill_index,:]=test_batch_predictions[chosen_index,:]
incorrect_fill_index += 1
```
## Running MNIST Classification
Here we train our CNN on MNIST data for `n_epochs` epochs. In each epoch we train the CNN with the full training dataset. We then calculate the validation accuracy, according to which we decay the learning rate. Finally, in each epoch we calculate the test accuracy on an independent test set. This code should run in under 10 minutes on a decent GPU and should reach a test accuracy of roughly 95%.
```
# Parameters related to learning rate decay
# counts how many times the validation accuracy has not increased consecutively for
v_acc_not_increased_for = 0
# if the above count is above this value, decrease the learning rate
v_acc_threshold = 3
# currently recorded best validation accuracy
max_v_acc = 0.0
config = tf.ConfigProto(allow_soft_placement=True)
# Good practice to use this to avoid any surprising errors thrown by TensorFlow
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.9 # Making sure Tensorflow doesn't overflow the GPU
n_epochs = 25 # Number of epochs the training runs for
session = tf.InteractiveSession(config=config)
# Initialize all variables
tf.global_variables_initializer().run()
# Run training loop
for epoch in range(n_epochs):
loss_per_epoch = []
# Training phase. We train with all training data
# processing one batch at a time
for i in range(n_train//batch_size):
# Get the next batch of MNIST dataset
batch = get_train_batch(train_inputs, train_labels, batch_size)
        # Run TensorFlow operations
l,_ = session.run([tf_loss,tf_loss_minimize],feed_dict={tf_inputs: batch[0].reshape(batch_size,image_size,image_size,n_channels),
tf_labels: batch[1]})
# Add the loss value to a list
loss_per_epoch.append(l)
print('Average loss in epoch %d: %.5f'%(epoch,np.mean(loss_per_epoch)))
# Validation phase. We compute validation accuracy
# processing one batch at a time
valid_accuracy_per_epoch = []
for i in range(n_valid//batch_size):
# Get the next validation data batch
vbatch_images,vbatch_labels = get_valid_batch(valid_inputs, valid_labels, batch_size)
# Compute validation predictions
valid_batch_predictions = session.run(
tf_predictions,feed_dict={tf_inputs: vbatch_images}
)
# Compute and add the validation accuracy to a python list
valid_accuracy_per_epoch.append(accuracy(valid_batch_predictions,vbatch_labels))
# Compute and print average validation accuracy
mean_v_acc = np.mean(valid_accuracy_per_epoch)
print('\tAverage Valid Accuracy in epoch %d: %.5f'%(epoch,np.mean(valid_accuracy_per_epoch)))
# Learning rate decay logic
if mean_v_acc > max_v_acc:
max_v_acc = mean_v_acc
else:
v_acc_not_increased_for += 1
# Time to decrease learning rate
if v_acc_not_increased_for >= v_acc_threshold:
print('\nDecreasing Learning rate\n')
session.run(tf_tic_toc) # Increase global_step
v_acc_not_increased_for = 0
# Testing phase. We compute test accuracy
# processing one batch at a time
accuracy_per_epoch = []
for i in range(n_test//batch_size):
btest_images, btest_labels = get_test_batch(test_inputs, test_labels, batch_size)
test_batch_predictions = session.run(tf_predictions,feed_dict={tf_inputs: btest_images})
accuracy_per_epoch.append(accuracy(test_batch_predictions,btest_labels))
# Collect samples for visualization only in the last epoch
if epoch==n_epochs-1:
collect_samples(test_batch_predictions, btest_images, btest_labels)
print('\tAverage Test Accuracy in epoch %d: %.5f\n'%(epoch,np.mean(accuracy_per_epoch)))
session.close()
```
## Visualizing Predictions
Let us see how our CNN did when it comes to predictions.
```
# Defining the plot related settings
pylab.figure(figsize=(25,20)) # in inches
width=0.5 # Width of a bar in the barchart
padding = 0.05 # Padding between two bars
labels = list(range(0,10)) # Class labels
# Defining X axis
x_axis = np.arange(0,10)
# We create 4 rows and 7 column set of subplots
# We choose these to put the titles in
# First row middle
pylab.subplot(4, 7, 4)
pylab.title('Correctly Classified Samples',fontsize=24)
# Second row middle
pylab.subplot(4, 7,11)
pylab.title('Softmax Predictions for Correctly Classified Samples',fontsize=24)
# For 7 steps
for sub_i in range(7):
# Draw the top row (digit images)
pylab.subplot(4, 7, sub_i + 1)
pylab.imshow(np.squeeze(correctly_predicted[sub_i]),cmap='gray')
pylab.axis('off')
# Draw the second row (prediction bar chart)
pylab.subplot(4, 7, 7 + sub_i + 1)
pylab.bar(x_axis + padding, correct_predictions[sub_i], width)
pylab.ylim([0.0,1.0])
pylab.xticks(x_axis, labels)
# Set titles for the third and fourth rows
pylab.subplot(4, 7, 18)
pylab.title('Incorrectly Classified Samples',fontsize=26)
pylab.subplot(4, 7,25)
pylab.title('Softmax Predictions for Incorrectly Classified Samples',fontsize=24)
# For 7 steps
for sub_i in range(7):
# Draw the third row (incorrectly classified digit images)
pylab.subplot(4, 7, 14 + sub_i + 1)
pylab.imshow(np.squeeze(incorrectly_predicted[sub_i]),cmap='gray')
pylab.axis('off')
# Draw the fourth row (incorrect predictions bar chart)
pylab.subplot(4, 7, 21 + sub_i + 1)
pylab.bar(x_axis + padding, incorrect_predictions[sub_i], width)
pylab.ylim([0.0,1.0])
pylab.xticks(x_axis, labels)
# Save the figure
pylab.savefig('mnist_results.png')
pylab.show()
```
# CCL feature demo
**SLAC 2018 DESC meeting**
In this demo, we use CCL to set up a cosmology and show how to get different quantities of interest.
```
import numpy as np
import matplotlib.pyplot as plt
import pyccl as ccl
```
We start by setting up a cosmology object. This holds the cosmological parameters and metadata. The cosmology object is needed as input for many other functions.
We set up three such objects to demonstrate some of the different options available.
```
# Basic cosmology with mostly default parameters and calculation settings.
cosmo = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2.1e-9, n_s=0.96,
Neff=3.046, Omega_k=0.)
# Cosmology which incorporates baryonic correction terms in the power.
cosmo_baryons = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2.1e-9, n_s=0.96,
Neff=3.046, Omega_k=0., baryons_power_spectrum='bcm',
bcm_log10Mc=14.079181246047625, bcm_etab=0.5, bcm_ks=55.0)
# Cosmology where the power spectrum will be computed with an emulator.
cosmo_emu = ccl.Cosmology(Omega_c=0.27, Omega_b=0.05, h=0.67, sigma8=0.83, n_s=0.96,
Neff=3.04, Omega_k=0., transfer_function='emulator',
matter_power_spectrum="emu")
```
## Background quantities
We can calculate a variety of background-type quantities. We set up a vector of scale factors at which to compute them.
```
z = np.linspace(0.0001, 5., 100)
a = 1. / (1.+z)
```
Compute **distances**:
```
chi_rad = ccl.comoving_radial_distance(cosmo, a)
chi_ang = ccl.comoving_angular_distance(cosmo,a)
lum_dist = ccl.luminosity_distance(cosmo, a)
dist_mod = ccl.distance_modulus(cosmo, a)
# Plot the comoving radial distance as a function of redshift, as an example.
plt.figure()
plt.plot(z, chi_rad, 'k', linewidth=2)
plt.xlabel('$z$', fontsize=20)
plt.ylabel('Comoving distance, Mpc', fontsize=15)
plt.tick_params(labelsize=13)
plt.show()
```
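As a rough cross-check (pure numpy, not using CCL; the helper name is hypothetical), the comoving radial distance for this flat ΛCDM cosmology can be reproduced by integrating $c\,dz'/H(z')$ directly. Radiation and neutrino contributions are neglected here, so agreement with CCL is only approximate.

```python
import numpy as np

# Parameters matching the `cosmo` object above (radiation neglected)
Omega_m = 0.27 + 0.045   # CDM + baryons
Omega_L = 1.0 - Omega_m  # flat universe
h = 0.67
H0 = 100.0 * h           # km/s/Mpc
c = 299792.458           # km/s

def comoving_distance(z, n_steps=10000):
    """Trapezoidal integration of c/H(z') from 0 to z, in Mpc."""
    zs = np.linspace(0.0, z, n_steps)
    integrand = c / (H0 * np.sqrt(Omega_m * (1 + zs) ** 3 + Omega_L))
    dz = zs[1] - zs[0]
    return np.sum((integrand[1:] + integrand[:-1]) / 2.0) * dz

print(comoving_distance(0.01))  # close to the low-z limit c*z/H0 ~ 44.7 Mpc
```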
Compute **growth quantities**:
```
D = ccl.growth_factor(cosmo, a)
f = ccl.growth_rate(cosmo, a)
plt.figure()
plt.plot(z, D, 'k', linewidth=2, label='Growth factor')
plt.plot(z, f, 'g', linewidth=2, label='Growth rate')
plt.xlabel('$z$', fontsize=20)
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
```
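CCL obtains the growth factor and growth rate by integrating the growth ODE. A useful sanity check is the well-known fitting formula $f(z) \approx \Omega_{\rm m}(z)^{0.55}$, accurate to roughly a percent for $\Lambda$CDM; a NumPy sketch:

```python
import numpy as np

Omega_m0 = 0.315   # Omega_c + Omega_b from the cosmology above
gamma = 0.55       # growth index; ~0.55 is the standard LCDM value

def Omega_m_of_z(z):
    E2 = Omega_m0 * (1 + z) ** 3 + (1 - Omega_m0)
    return Omega_m0 * (1 + z) ** 3 / E2

def growth_rate_approx(z):
    """f(z) ~ Omega_m(z)^gamma, an approximation to the ODE solution."""
    return Omega_m_of_z(z) ** gamma

print(growth_rate_approx(0.0))  # ~0.53 at z=0
```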
The ratio of the **Hubble parameter** at scale factor $a$ to $H_0$:
```
H_over_H0 = ccl.h_over_h0(cosmo, a)
plt.figure()
plt.plot(z, H_over_H0, 'k', linewidth=2)
plt.xlabel('$z$', fontsize=20)
plt.ylabel('$H / H_0$', fontsize=15)
plt.tick_params(labelsize=13)
plt.show()
```
For each component of the matter / energy budget, we can get $\Omega_{\rm x}(z)$, the **fractional energy density** at $z \ne 0$.
```
OmM_z = ccl.omega_x(cosmo, a, 'matter')
OmL_z = ccl.omega_x(cosmo, a, 'dark_energy')
OmR_z = ccl.omega_x(cosmo, a, 'radiation')
OmK_z = ccl.omega_x(cosmo, a, 'curvature')
OmNuRel_z = ccl.omega_x(cosmo, a, 'neutrinos_rel')
OmNuMass_z = ccl.omega_x(cosmo, a, 'neutrinos_massive')
plt.figure()
plt.plot(z, OmM_z, 'k', linewidth=2, label='$\Omega_{\\rm M}(z)$')
plt.plot(z, OmL_z, 'g', linewidth=2, label='$\Omega_{\Lambda}(z)$')
plt.plot(z, OmR_z, 'b', linewidth=2, label='$\Omega_{\\rm R}(z)$')
plt.plot(z, OmNuRel_z, 'm', linewidth=2, label='$\Omega_{\\nu}^{\\rm rel}(z)$')
plt.xlabel('$z$',fontsize=20)
plt.ylabel('$\Omega_{\\rm x}(z)$', fontsize= 20)
plt.tick_params(labelsize=13)
plt.legend(loc='upper right')
plt.show()
```
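A quick consistency check on these fractions: in a flat universe the $\Omega_{\rm x}(z)$ of all components sum to one at every redshift. A minimal sketch with only matter and dark energy (radiation and neutrinos neglected):

```python
import numpy as np

Omega_m0, Omega_L0 = 0.315, 0.685  # flat: Omega_k = 0

def omega_fractions(z):
    """Fractional densities Omega_m(z), Omega_L(z) for flat LCDM."""
    E2 = Omega_m0 * (1 + z) ** 3 + Omega_L0
    return Omega_m0 * (1 + z) ** 3 / E2, Omega_L0 / E2

for z in (0.0, 1.0, 5.0):
    om, ol = omega_fractions(z)
    print(z, om, ol, om + ol)  # the two fractions sum to 1 at every z
```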
## Matter power spectra and related quantities
To compute the matter power spectrum, we define a vector of k values and pick a redshift at which to evaluate it.
```
k = np.logspace(-3, 2, 100)
```
The first power spectrum call for a given cosmology will take a few seconds to run, because we are computing $P(k)$ with CLASS and initializing splines. Further calls will be much quicker because they just access the precomputed splined values.
```
z_Pk = 0.2
a_Pk = 1. / (1.+z_Pk)
Pk_lin = ccl.linear_matter_power(cosmo, k, a_Pk)
Pk_nonlin = ccl.nonlin_matter_power(cosmo, k, a_Pk)
Pk_baryon = ccl.nonlin_matter_power(cosmo_baryons, k, a_Pk)
Pk_emu = ccl.nonlin_matter_power(cosmo_emu, k, a_Pk)
plt.figure()
plt.loglog(k, Pk_lin, 'k', linewidth=2, label='Linear')
plt.loglog(k, Pk_nonlin, 'g', linewidth=2, label='Non-linear (halofit)')
plt.loglog(k, Pk_baryon, 'm', linewidth=2, linestyle=':', label='With baryonic correction')
plt.loglog(k, Pk_emu, 'b', linewidth=2, linestyle = '--', label='CosmicEmu')
plt.xlabel('$k, \\frac{1}{\\rm Mpc}$', fontsize=20)
plt.ylabel('$P(k), {\\rm Mpc^3}$', fontsize=20)
plt.xlim(0.001, 50)
plt.ylim(0.01, 10**6)
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
```
We can also compute $\sigma_{\rm R}$, the RMS variance in a top-hat of radius R Mpc, as well as the special case of $\sigma_{8}$.
```
R = np.linspace(5, 20, 15)
sigmaR = ccl.sigmaR(cosmo, R)
sigma8 = ccl.sigma8(cosmo)
print("sigma8 =", sigma8)
```
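$\sigma_{\rm R}$ is defined as $\sigma_R^2 = \frac{1}{2\pi^2}\int k^2 P(k) W^2(kR)\,dk$ with the spherical top-hat window $W(x) = 3(\sin x - x\cos x)/x^3$. A self-contained sketch of that integral; note `Pk_toy` is a made-up spectrum for illustration, not a real $P(k)$:

```python
import numpy as np

def tophat_window(x):
    """Fourier transform of a 3D spherical top-hat, W(x)=3(sin x - x cos x)/x^3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    w = np.empty_like(x)
    w[small] = 1.0 - x[small] ** 2 / 10.0   # series expansion avoids 0/0
    xs = x[~small]
    w[~small] = 3.0 * (np.sin(xs) - xs * np.cos(xs)) / xs ** 3
    return w

def sigma_R(k, Pk, R):
    """sigma_R = sqrt( 1/(2 pi^2) * int k^2 P(k) W(kR)^2 dk ), trapezoidal rule."""
    y = k ** 2 * Pk * tophat_window(k * R) ** 2
    integral = np.sum((y[1:] + y[:-1]) * np.diff(k)) / 2.0
    return np.sqrt(integral / (2.0 * np.pi ** 2))

k = np.logspace(-4, 2, 2000)
Pk_toy = k / (1.0 + (k / 0.02) ** 3)   # toy spectrum, NOT a real P(k)
print(sigma_R(k, Pk_toy, 8.0))
```

Smoothing over larger radii washes out more small-scale power, so $\sigma_R$ decreases with $R$.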
## $C_\ell$ spectra
We can compute $C_\ell$ for galaxy counts, galaxy lensing, and CMB lensing, for autocorrelations or any cross-correlation.
The first step to getting $C_\ell$'s involving galaxy counts or lensing is to define a photo-z probability function and a galaxy redshift distribution. CCL allows you to flexibly design your own photo-z function, but for the purposes of demonstration we use the included Gaussian function.
```
z_pz = np.linspace(0.3, 3., 3) # Define the edges of the photo-z bins.
pz = ccl.PhotoZGaussian(sigma_z0=0.05)
```
We get the galaxy redshift distribution for each tomographic bin, for galaxy counts and galaxy lensing.
```
dNdz_nc = [ccl.dNdz_tomog(z=z, dNdz_type='nc', zmin=z_pz[zi], zmax=z_pz[zi+1], pz_func=pz)
           for zi in range(0, len(z_pz)-1)]
dNdz_len = [ccl.dNdz_tomog(z=z, dNdz_type='wl_fid', zmin=z_pz[zi], zmax=z_pz[zi+1], pz_func=pz)
            for zi in range(0, len(z_pz)-1)]
```
Let's assume a toy linear galaxy bias for our galaxy-count tracer.
```
bias = 2.*np.ones(len(z))
```
We can now set up tracer objects for CMB lensing and for each tomographic bin of galaxy counts and galaxy lensing.
```
gal_counts = [ccl.NumberCountsTracer(cosmo, has_rsd=False,
                                     dndz=(z, dNdz_nc[zi]), bias=(z, bias))
              for zi in range(0, len(z_pz)-1)]
gal_lens = [ccl.WeakLensingTracer(cosmo, dndz=(z, dNdz_len[zi])) for zi in range(0, len(z_pz)-1)]
cmb_lens = [ccl.CMBLensingTracer(cosmo, z_source=1089.)]
all_tracers = gal_counts + gal_lens + cmb_lens
```
With these tracer objects, we can now get $C_\ell$'s.
```
ell = np.linspace(1, 2000, 2000)
n_tracer = len(all_tracers)
c_ells = [[ccl.angular_cl(cosmo, all_tracers[ni], all_tracers[nj], ell)
           for ni in range(0, n_tracer)] for nj in range(0, n_tracer)]
```
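Since $C_\ell^{ij} = C_\ell^{ji}$ (the spectrum is symmetric in its two tracers), the double loop above computes every cross-spectrum twice. A small sketch of the usual fix — fill the upper triangle and mirror — using a stand-in function, since re-running CCL here would be slow:

```python
import numpy as np

def stand_in_cl(i, j):
    # placeholder for ccl.angular_cl(cosmo, all_tracers[i], all_tracers[j], ell)
    return np.full(4, float(i + j))

n_tracer = 5
c_ells_sym = [[None] * n_tracer for _ in range(n_tracer)]
for ni in range(n_tracer):
    for nj in range(ni, n_tracer):               # upper triangle only
        c_ells_sym[ni][nj] = stand_in_cl(ni, nj)
        c_ells_sym[nj][ni] = c_ells_sym[ni][nj]  # mirror: ~2x fewer calls
```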
We can plot a couple of examples:
```
plt.figure()
plt.loglog(ell, c_ells[0][0], 'k', linewidth=2, label='gg bin 1 auto')
plt.loglog(ell, c_ells[0][3], 'g', linewidth=2, label='g1 x src2')
plt.loglog(ell, c_ells[4][4], 'm', linewidth=2, label='CMB lensing auto')
plt.xlabel('$\ell$', fontsize=20)
plt.ylabel('$C_\ell$', fontsize=20)
plt.xlim(1, 1000)
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
```
# Correlation functions
From the $C_\ell$s, we can then get correlation functions. Let's do an example of each type.
```
theta_deg = np.logspace(-1, np.log10(5.), 20) # Theta is in degrees
xi_plus = ccl.correlation(cosmo, ell, c_ells[2][2], theta_deg, corr_type='L+', method='FFTLog')
xi_minus = ccl.correlation(cosmo, ell, c_ells[2][2], theta_deg, corr_type='L-', method='FFTLog')
xi_gg = ccl.correlation(cosmo, ell, c_ells[0][0], theta_deg, corr_type='GG', method='FFTLog')
plt.figure()
plt.loglog(theta_deg, xi_plus, '+k', label='+')
plt.loglog(theta_deg, xi_minus, 'ob', label='-')
plt.xlabel('$\\theta$, deg', fontsize=20)
plt.ylabel('$\\xi_{+ / -}$', fontsize=20)
plt.xlim(0.1, 5)
plt.ylim(10**(-7), 10**(-4))
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
plt.figure()
plt.loglog(theta_deg, xi_gg, 'mo', linewidth=2)
plt.xlabel('$\\theta$, deg', fontsize=20)
plt.ylabel('$\\xi_{gg}$', fontsize=20)
plt.xlim(0.1, 5)
plt.ylim(4*10**(-5), 0.05)
plt.tick_params(labelsize=13)
plt.show()
```
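For a spin-0 field like galaxy counts, the transform behind `corr_type='GG'` is $\xi(\theta) = \sum_\ell \frac{2\ell+1}{4\pi} C_\ell P_\ell(\cos\theta)$ (CCL's FFTLog method is a fast approximation to this). A direct, slow NumPy evaluation with a made-up $C_\ell$ for illustration:

```python
import numpy as np
from numpy.polynomial import legendre

def xi_from_cl(cl, theta_rad):
    """xi(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta).

    legval evaluates sum_l coef[l] * P_l(x), so we fold the
    (2l+1)/(4 pi) prefactor into the coefficients.
    """
    ell = np.arange(len(cl))
    coef = (2 * ell + 1) / (4 * np.pi) * np.asarray(cl)
    return legendre.legval(np.cos(theta_rad), coef)

# toy C_l falling as 1/(l(l+1)), with monopole and dipole zeroed
ell = np.arange(500)
cl = np.zeros(ell.size, dtype=float)
cl[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))
print(xi_from_cl(cl, np.radians(1.0)))
```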
# Halo Mass Function & Halo Bias
We can compute the halo bias and halo mass function from Tinker et al.
```
halo_mass = np.logspace(10, 16, 200)
hmf = ccl.massfunc(cosmo, halo_mass, a=1., overdensity=200)
plt.figure()
plt.loglog(halo_mass, hmf, 'k', linewidth=2)
plt.xlabel('Halo mass, $M_\odot$', fontsize=20)
plt.ylabel('$\\frac{dn}{dlog_{10}M}$', fontsize=20)
plt.tick_params(labelsize=13)
plt.show()
halo_bias = ccl.halo_bias(cosmo, halo_mass, a=1., overdensity=200)
plt.figure()
plt.loglog(halo_mass, halo_bias, 'k', linewidth=2)
plt.xlabel('Halo mass, $M_\odot$', fontsize=20)
plt.ylabel('$b_h$', fontsize=20)
plt.tick_params(labelsize=13)
plt.show()
```
```
import numpy as np
import random
import sys
from scipy.special import expit as sigmoid
training_data_path = sys.argv[1]
testing_data_path = sys.argv[2]
output_path = sys.argv[3]
batch_size = int(sys.argv[4])
n0 = float(sys.argv[5])
activation = sys.argv[6]
hidden_layers_sizes = []
for i in range(7, len(sys.argv)):
    hidden_layers_sizes.append(int(sys.argv[i]))
# training_data_path = "../data/devnagri_train.csv"
# testing_data_path = "../data/devnagri_test_public.csv"
# output_path = "../data/nn/a/cs1160328.txt"
# batch_size = 512
# n0 = 0.01
# activation = 'sigmoid'
# hidden_layers_sizes = [100]
def relu(x):
    return (x > 0) * x
def tanh(x):
    return np.tanh(x)
def reluPrime(x):
    return (x > 0) + 0
def tanhPrime(x):
    return 1 - np.power(x, 2)
def sigmoidPrime(x):
    return x * (1 - x)
def exp_normalize(x):
    b = np.amax(x, axis=1, keepdims=True)
    y = np.exp(x - b)
    return y / y.sum(axis=1, keepdims=True)
class NeuralNetwork:
    def __init__(self, input_size, output_size, hidden_layers_sizes, activation):
        self.weights = []
        self.biases = []
        if activation == 'relu':
            self.activation = relu
            self.activationPrime = reluPrime
        elif activation == 'tanh':
            self.activation = tanh
            self.activationPrime = tanhPrime
        else:
            self.activation = sigmoid
            self.activationPrime = sigmoidPrime
        self.input_size = input_size
        self.output_size = output_size
        self.hidden_layers_sizes = hidden_layers_sizes
        prev_layer_count = input_size
        for i in range(len(hidden_layers_sizes) + 1):
            if i == len(hidden_layers_sizes):
                self.weights.append(np.random.rand(prev_layer_count, output_size)/100)
                self.biases.append(np.random.rand(1, output_size)/100)
            else:
                hidden_layer_count = hidden_layers_sizes[i]
                self.weights.append(np.random.rand(prev_layer_count, hidden_layer_count)/100)
                self.biases.append(np.random.rand(1, hidden_layer_count)/100)
                prev_layer_count = hidden_layer_count
    def train(self, inpX, inpY, batch_size, n0, max_iterations):
        max_examples = inpX.shape[0]
        max_possible_iterations = int(0.5 + max_examples / batch_size)
        num_hidden_layers = len(self.weights) - 1
        count = 0
        lr = n0
        totLoss = 0
        prevAvgLoss = sys.float_info.max
        epoch = 0
        for n in range(max_iterations):
            # Forming Mini Batches
            i_eff = n % max_possible_iterations
            # Updating Learning Rate
            if i_eff == 0 and n != 0:
                avgLoss = totLoss / max_possible_iterations
                if np.absolute(avgLoss - prevAvgLoss) < 0.0001 * prevAvgLoss:
                    stopCount += 1
                    if stopCount > 1:
                        break
                else:
                    stopCount = 0
                if avgLoss >= prevAvgLoss:
                    count += 1
                    lr = n0 / np.sqrt(count+1)
                print("Epoch = ", epoch, " Average Loss = ", avgLoss, " New Learning Rate = ", lr)
                epoch += 1
                prevAvgLoss = avgLoss
                totLoss = 0
            outputs = []
            if i_eff != max_possible_iterations - 1:
                X = inpX[i_eff*batch_size: (i_eff+1)*batch_size]
                Y = inpY[i_eff*batch_size: (i_eff+1)*batch_size]
            else:
                X = inpX[i_eff*batch_size:]
                Y = inpY[i_eff*batch_size:]
            # Neural Network Forward Propagation
            outputs.append(X)
            prev_layer_output = X
            for i in range(num_hidden_layers + 1):
                weight = self.weights[i]
                bias = self.biases[i]
                if i == num_hidden_layers:
                    prev_layer_output = sigmoid(prev_layer_output.dot(weight) + bias)
                else:
                    prev_layer_output = self.activation(prev_layer_output.dot(weight) + bias)
                outputs.append(prev_layer_output)
            # Backpropagation
            dWs = []
            dbs = []
            y_onehot = np.zeros((Y.shape[0], self.output_size))
            y_onehot[range(Y.shape[0]), Y] = 1
            for i in range(num_hidden_layers + 1, 0, -1):
                if i == num_hidden_layers + 1:
                    delta = (outputs[i] - y_onehot).dot(2/Y.shape[0]) * sigmoidPrime(outputs[i])
                else:
                    delta = delta.dot(self.weights[i].T) * self.activationPrime(outputs[i])
                dW = (outputs[i-1].T).dot(delta)
                dWs.append(dW)
                dbs.append(np.sum(delta, axis=0, keepdims=True))
            if n % 100 == 0:
                loss_ = np.sum(np.power(outputs[-1] - y_onehot, 2)) / Y.shape[0]
                labels_ = np.argmax(outputs[-1], axis=1)
                accuracy_ = 100 * np.sum(labels_ == Y) / Y.shape[0]
                print("Iteration ", n, "\tLoss = ", loss_, "\tAccuracy = ", accuracy_, "%")
            dWs.reverse()
            dbs.reverse()
            # Gradient Descent Parameter Update
            for i in range(len(dWs)):
                self.weights[i] += dWs[i].dot(-1 * lr)
                self.biases[i] += dbs[i].dot(-1 * lr)
            loss = np.sum(np.power(outputs[-1] - y_onehot, 2)) / Y.shape[0]
            totLoss += loss
    def predict(self, X):
        return self.forward_run(X)
    def forward_run(self, X):
        prev_layer_output = X
        num_hidden_layers = len(self.weights) - 1
        for i in range(num_hidden_layers + 1):
            weight = self.weights[i]
            bias = self.biases[i]
            if i == num_hidden_layers:
                probabilities = sigmoid(prev_layer_output.dot(weight) + bias)
                labels = np.argmax(probabilities, axis=1)
                return labels
            else:
                prev_layer_output = self.activation(prev_layer_output.dot(weight) + bias)
def load_data(path, avg, std):
    if avg is None:
        input_data = np.loadtxt(open(path, "rb"), delimiter=",")
        Y = input_data[:, 0].copy()
        X = input_data[:, 1:].copy()
        avg = np.average(X, axis=0)
        X = X - avg
        std = np.std(X, axis=0)
        std[(std == 0)] = 1
        X = X / std
        return X, Y, avg, std
    else:
        input_data = np.loadtxt(open(path, "rb"), delimiter=",")
        X = input_data[:, 1:].copy()
        X = (X - avg) / std
        return X
inpX,Y,avg,std = load_data(training_data_path,None,None)
X = inpX.copy()
input_size = X.shape[1]
output_size = int(np.amax(Y))+1
num_examples = X.shape[0]
max_iterations = int(40*(num_examples/batch_size))
if max_iterations < 25000:
    max_iterations = 25000
network = NeuralNetwork(input_size,output_size,hidden_layers_sizes,activation)
network.train(X,Y.astype(int),batch_size,n0,max_iterations)
predictions = network.predict(X.copy())
print("Accuracy on Training Data = ", 100 * np.sum(predictions == Y)/Y.shape[0])
# print("Average of predictions on Training Data = ",np.average(predictions))
testX = load_data(testing_data_path,avg,std)
predictions = network.predict(testX)
np.savetxt(output_path,predictions,fmt="%i")
```
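One detail worth flagging in the network above: `sigmoidPrime(x) = x*(1-x)` expects the sigmoid's *output*, not its input — the backprop loop correctly passes `outputs[i]`, which are post-activation values. A quick numerical check of that convention (sigmoid redefined locally so the snippet is self-contained):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoidPrime(a):
    # NOTE: takes the sigmoid *output* a = sigmoid(z), not z itself,
    # matching how the training loop above calls it on `outputs[i]`
    return a * (1.0 - a)

z = np.linspace(-3, 3, 7)
a = sigmoid(z)
analytic = sigmoidPrime(a)
# central-difference derivative of sigmoid w.r.t. z
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))  # should be ~0 (round-off only)
```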
```
from pyesasky import ESASkyWidget
from pyesasky import Catalogue
from pyesasky import CatalogueDescriptor
from pyesasky import MetadataDescriptor
from pyesasky import MetadataType
from pyesasky import CooFrame
# instantiating pyESASky instance
esasky = ESASkyWidget()
# loading pyESASky instance
esasky
# Go to the Cosmos field in ESASky (as resolved by SIMBAD):
esasky.goToTargetName('Cosmos Field')
#####################################################
# EX.1 creating a user defined catalogue on the fly #
#####################################################
catalogue = Catalogue('test catalogue name', CooFrame.FRAME_J2000, '#ee2345', 10)
# adding sources to the catalogue
catalogue.addSource('source name A', '150.44963', '2.24640', 1, [{"name":"Flux 1", "value":"10.5", "type":"STRING" },{"name":"Flux 2", "value":"1.7", "type":"STRING" }])
catalogue.addSource('source name B', '150.54963', '2.34640', 2, [{"name":"Flux 1", "value":"11.5", "type":"STRING" },{"name":"Flux 2", "value":"2.7", "type":"STRING" }])
catalogue.addSource('source name c', '150.34963', '2.44640', 3, [{"name":"Flux 1", "value":"12.5", "type":"STRING" },{"name":"Flux 2", "value":"0.7", "type":"STRING" }])
# overlay catalogue in pyESASky
esasky.overlayCatalogueWithDetails(catalogue)
############################################
# EX.2 importing a catalogue from CSV file #
############################################
# CatalogueDescriptor('<catName>', '<HTMLcolor>', <lineWidth>, '<idColumn>', '<nameColumn>', '<RAColumn>', '<DecColumn>', Metadata)
# where:
# - <catName> : name of the catalogue that will be used in pyESASky as label
# - <HTMLcolor> : HTML color. It could be a "Color name", "Hex color code" or "RGB color code"
# - <lineWidth> : width used to draw sources. From 1 to 10
# - <idColumn> : name of the column containing a unique identifier for sources if any. None if not applicable
# - <nameColumn> : name of the column with the name of the source
# - <RAColumn> : name of the RA column in degrees
# - <DecColumn> : name of the Dec column in degrees
# - Metadata : list of pyesasky.pyesasky.MetadataDescriptor in case it has been defined. [] otherwise.
catalogueDesc =CatalogueDescriptor('my test', 'yellow', 5, 'id', 'name', 'ra', 'dec', [])
# parse, import and overlay a catalogue from a CSV
esasky.overlayCatalogueFromCSV('./testcat', ',', catalogueDesc, 'J2000')
###################################################################
# EX.3 importing a catalogue from AstropyTable using Gaia archive #
###################################################################
from astroquery.gaia import Gaia
job = Gaia.launch_job("select top 10\
ra, dec, source_id, designation, ref_epoch,ra_dec_corr,astrometric_n_obs_al,matched_observations,duplicated_source,phot_variable_flag \
from gaiadr2.gaia_source order by source_id", verbose=True)
myGaiaData = job.get_results()
print(myGaiaData)
job.get_data()
# overlayCatalogueFromAstropyTable('<catName>', '<cooFrame>', <HTMLcolor>, <lineWidth>, '<(astropy.table)>', '<RAColumn>', '<DecColumn>', '<nameColumn>')
# where:
# - <catName> : name of the catalogue that will be used in pyESASky as label
# - <HTMLcolor> : HTML color. It could be a "Color name", "Hex color code" or "RGB color code"
# - <lineWidth> : width used to draw sources. From 1 to 10
# - <idColumn> : name of the column containing a unique identifier for sources if any. None if not applicable
# - <nameColumn> : name of the column with the name of the source
# - <RAColumn> : name of the RA column in degrees
# - <DecColumn> : name of the Dec column in degrees
esasky.overlayCatalogueFromAstropyTable('Gaia DR2', 'J2000', '#a343ff', 5, myGaiaData, '','','')
# Import the VizieR Astroquery module
from astroquery.vizier import Vizier
# Search for 'The XMM-Newton survey of the COSMOS field (Brusa+, 2010)':
catalog_list = Vizier.find_catalogs('Brusa+, 2010')
print({k:v.description for k,v in catalog_list.items()})
# Get the above list of catalogues:
Vizier.ROW_LIMIT = -1
catalogs = Vizier.get_catalogs(catalog_list.keys())
print(catalogs)
# Access one table:
Brusa = catalogs['J/ApJ/716/348/table2']
print(Brusa)
# Visualise the table in ESASky:
esasky.overlayCatalogueFromAstropyTable('Brusa', CooFrame.FRAME_J2000, '#00ff00', 5, Brusa, 'RAJ2000','DEJ2000','Name')
# Go to the LMC in ESASky (as resolved by SIMBAD):
esasky.goToTargetName('LMC')
# Search for 'The HIZOA-S survey':
catalog_list2 = Vizier.find_catalogs('HIZOA-S survey 2016') #HIZOA-S survey 2016
print({k:v.description for k,v in catalog_list2.items()})
# Get the above list of catalogues:
Vizier.ROW_LIMIT = -1
# Vizier.COLUMN_LIMIT = 20 Can't find the way to get all the columns rather than just the default columns. Going to try the TAP+ module
catalog = Vizier.get_catalogs(catalog_list2.keys())
print(catalog)
# Access the catalogue table:
HIZOA = catalog['J/AJ/151/52/table2'] #
print(HIZOA)
# Visualise the table in ESASky:
###### NOTE: NOT PLOTTING GALACTIC COORDS CORRECTLY
esasky.overlayCatalogueFromAstropyTable('HIZOA', CooFrame.FRAME_GALACTIC, '#0000ff', 7, HIZOA, 'GLON','GLAT','HIZOA')
# TRYING THE SAME BUT USING THE TAP/TAP+ ASTROQUERY MODULE:
# Import the TAP/TAP+ Astroquery module
from astroquery.utils.tap.core import TapPlus
vizier = TapPlus(url="http://tapvizier.u-strasbg.fr/TAPVizieR/tap")
tables = vizier.load_tables(only_names=True)
for table in tables:
    print(table.get_qualified_name())
#ONLY TAP+ compatible, so doesn't seem to work
table = vizier.load_table('viz7."J/AJ/128/16/table2"')
for column in table.get_columns():
    print(column.get_name())
# This works in TOPCAT to download the whole table: SELECT * FROM "J/AJ/128/16/table2"
# This also works in TOPCAT : SELECT * FROM viz7."J/AJ/128/16/table2"
job = vizier.launch_job("SELECT * FROM "'viz7."J/AJ/128/16/table2"'"")
#This also works:
#job = vizier.launch_job("SELECT * FROM "+str('viz7."J/AJ/128/16/table2"')+"")
print(job)
Koribalski = job.get_results()
print(Koribalski['HIPASS', 'RAJ2000', 'DEJ2000'])
# Visualise the table in ESASky:
esasky.overlayCatalogueFromAstropyTable('Koribalski', CooFrame.FRAME_J2000, '#ff0000', 6, Koribalski, 'RAJ2000','DEJ2000','HIPASS')
```
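EX.2 above reads a file called `./testcat` that is not shown in this notebook. The exact layout pyESASky expects is an assumption here, but given the `CatalogueDescriptor('my test', 'yellow', 5, 'id', 'name', 'ra', 'dec', [])`, a plausible shape is a comma-separated file whose header row carries those column names — a sketch that writes such a file:

```python
import csv

# Hypothetical layout for the './testcat' file used in EX.2:
# header row with the column names named in the CatalogueDescriptor,
# then one source per row (this layout is an assumption, not pyESASky spec).
rows = [
    {"id": 1, "name": "src A", "ra": 150.44963, "dec": 2.24640},
    {"id": 2, "name": "src B", "ra": 150.54963, "dec": 2.34640},
]
with open("testcat", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "ra", "dec"])
    writer.writeheader()
    writer.writerows(rows)
```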
```
%pylab
%matplotlib inline
%run pdev notebook
```
# Radiosonde SONDE
```
ident = "SONDE"
plt.rcParams['figure.figsize'] = [12.0, 6.0]
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['font.size'] = 15
yplevs = np.array([10,100,200,300,400,500,700,925])*100
save = True
!mkdir -p figures
rt.load_config()
rt.config
isonde = rt.cls.Radiosonde(ident)
#
# All the data available
#
isonde.list_store()
```
## Load Data
```
# close=False -> stay on disk,
# =True -> load to memory
close = False
```
### ERA5
```
if False:
    isonde.add('ERA5', filename='ERA5_*.nc', cfunits=True, close=close, verbose=1)
if False:
    isonde.add('ERA5_meta', filename='*_ERA5_station.nc', cfunits=True, close=close, verbose=1)
```
### ERA Interim
```
if False:
    isonde.add('ERAI', filename='ERAI_*.nc', cfunits=True, close=close, verbose=1)
```
### IGRA v2
```
if False:
    isonde.add('IGRAv2', cfunits=True, close=close, verbose=1)
```
### Upper Air Database (UADB)
```
if False:
    isonde.add('UADB', cfunits=True, close=close, verbose=1)
```
### JRA-55
```
if False:
    isonde.add('JRA55', close=close, verbose=1)
```
### CERA-20C
```
if False:
    isonde.add('CERA20C', close=close, verbose=1)
```
### Standardized Combined Data
```
idata = None
#
# ERA5
#
if isonde.in_store('dataE5JC'):
    isonde.add('dataE5JC', verbose=1)
    idata = isonde.data.dataE5JC
#
# ERA Interim
#
if isonde.in_store('dataEIJC') and idata is None:
    isonde.add('dataEIJC', verbose=1)
    idata = isonde.data.dataEIJC
#
# IGRA
#
if isonde.in_store('dataIE5JC') and idata is None:
    isonde.add('dataIE5JC', verbose=1)
    idata = isonde.data.dataIE5JC
```
### Experiment Data
```
isonde.list_store(pattern='exp')
ivar = 'dpd'
version = 'v1'
isrc = 'mars5'
ires = 'era5'
expdata = None
#
# ERA5
#
if isonde.in_store('exp{}{}_{}_{}.nc'.format(ivar,version,isrc,ires)):
    isonde.add('exp{}{}_{}_{}'.format(ivar,version,isrc,ires), verbose=1)
    expdata = isonde.data['exp{}{}_{}_{}'.format(ivar,version,isrc,ires)]
#
# ERA Interim
#
if expdata is None:
    isrc = 'marsi'
    ires = 'erai'
    if isonde.in_store('exp{}{}_{}_{}.nc'.format(ivar,version,isrc,ires)):
        isonde.add('exp{}{}_{}_{}'.format(ivar,version,isrc,ires), verbose=1)
        expdata = isonde.data['exp{}{}_{}_{}'.format(ivar,version,isrc,ires)]
#
# JRA55
#
if expdata is None:
    isrc = 'mars5'
    ires = 'jra55'
    if isonde.in_store('exp{}{}_{}_{}.nc'.format(ivar,version,isrc,ires)):
        isonde.add('exp{}{}_{}_{}'.format(ivar,version,isrc,ires), verbose=1)
        expdata = isonde.data['exp{}{}_{}_{}'.format(ivar,version,isrc,ires)]
if idata is None:
    print("No data ?")
    exit()
#
# Some definitions
#
times = [0, 12]
start = '1979'
ende = '2019'
period = slice(start, ende)
period_str = "%s-%s" % (start, ende)
#
# Subset to only that period
#
idata = idata.sel(time=period, hour=times)
```
## Station Map
```
rt.plot.map.station_class(isonde, states=True, rivers=True, land=True, lakes=True)
if save:
    savefig('figures/%s_station.png' % ident)
```
# Data Availability
```
dpdvars = []
tvars = []
for jvar in list(idata.data_vars):
    if 'dpd_' in jvar:
        if not any([i in jvar for i in ['err', '_fg_', 'snht']]):
            dpdvars.append(jvar)
    if 't_' in jvar:
        if not any([i in jvar for i in ['err', '_fg_', 'snht']]):
            tvars.append(jvar)
print(dpdvars)
print(tvars)
```
## Dewpoint depression
```
counts = idata.reset_coords()[dpdvars].count('time').sum('hour').to_dataframe()
counts.index /= 100.
counts.plot()
xticks(yplevs/100)
grid()
title("%s Counts %s" % (ident, period_str))
ylabel("Total counts [1]")
if save:
    savefig('figures/%s_dpd_counts.png' % ident)
```
## Temperature
```
counts = idata.reset_coords()[tvars].count('time').sum('hour').to_dataframe()
counts.index /= 100.
counts.plot()
xticks(yplevs/100)
grid()
title("%s Counts %s" % (ident, period_str))
ylabel("Total counts [1]")
if save:
    savefig('figures/%s_t_counts.png' % ident)
```
## Annual
```
counts = idata.reset_coords()[dpdvars].count('plev').resample(time='A').sum().to_dataframe()
n = len(idata.hour.values)
f, ax = subplots(n,1, sharex=True)
ax[0].set_title("%s Annual counts %s" % (ident, period_str))
for i, ihour in enumerate(idata.hour.values):
    counts.xs(ihour, level=0).plot(grid=True, ax=ax[i], legend=True if i == 0 else False)
    ax[i].set_ylabel("%02d Z" % (ihour))
    ax[i].set_xlabel('Years')
tight_layout()
if save:
    savefig('figures/%s_dpd_ancounts.png' % (ident))
counts = idata.reset_coords()[tvars].count('plev').resample(time='A').sum().to_dataframe()
n = len(idata.hour.values)
f, ax = subplots(n,1, sharex=True)
ax[0].set_title("%s Annual counts %s" % (ident, period_str))
for i, ihour in enumerate(idata.hour.values):
    counts.xs(ihour, level=0).plot(grid=True, ax=ax[i], legend=True if i == 0 else False)
    ax[i].set_ylabel("%02d Z" % (ihour))
    ax[i].set_xlabel('Years')
tight_layout()
if save:
    savefig('figures/%s_t_ancounts.png' % (ident))
```
# Dewpoint depression
```
obs = 'dpd_{}'.format(isrc)
hdim = 'hour'
for ihour in idata[hdim].values:
    rt.plot.time.var(idata[obs].sel(**{hdim: ihour}), dim='time', lev='plev',
                     title='%s %s Radiosonde at %02d Z' % (ident, obs, ihour))
```
# Temperature
```
obs = 't_{}'.format(isrc)
hdim = 'hour'
for ihour in idata[hdim].values:
    rt.plot.time.var(idata[obs].sel(**{hdim: ihour}), dim='time', lev='plev',
                     title='%s %s Radiosonde at %02d Z' % (ident, obs, ihour))
```
# Comparison with Reanalysis
```
dim = 'time'
hdim = 'hour'
lev = 'plev'
ivar = 'dpd'
obs = '{}_{}'.format(ivar, isrc)
plotvars = []
#
# Select Variables
#
for jvar in list(idata.data_vars):
    if '_' in jvar:
        iname = jvar.split('_')[1]
        if jvar == "%s_%s" % (ivar, iname):
            plotvars += [jvar]
print(plotvars)
#
# Select Level
#
ipres=10000
#
# Plot
#
ylims = (np.round(idata[obs].min()), np.round(idata[obs].max()))
for i, j in idata[plotvars].groupby(hdim):
    m = j.sel(**{lev: ipres}).resample(**{dim: 'M'}).mean(dim)
    f, ax = plt.subplots(figsize=(16, 4))
    for jvar in plotvars:
        rt.plot.time.var(m[jvar], ax=ax, dim=dim, label=jvar.replace(ivar+'_', ''))
    ax.set_ylabel("%s [%s]" % (ivar, idata[jvar].attrs['units']))
    ax.set_xlabel('Time [M]')
    ax.set_title('%s %s Comparison %s %02dZ at %d hPa' % (ident, ivar, period_str, i, ipres/100))
    ax.legend(ncol=len(plotvars))
    ax.set_ylim(ylims)
    tight_layout()
    if save:
        savefig('figures/%s_%s_comparison_%04d_%02dZ.png' % (ident, ivar, ipres/100, i))
```
## Departures
```
dim = 'time'
hdim = 'hour'
lev = 'plev'
ivar = 'dpd'
obs = '{}_{}'.format(ivar, isrc)
plotvars = []
#
# Select Variables
#
for jvar in list(idata.data_vars):
    if '_' in jvar:
        iname = jvar.split('_')[1]
        if jvar == "%s_%s" % (ivar, iname):
            plotvars += [jvar]
print(plotvars)
#
# Select Level
#
ipres=30000
#
# Plot
#
ylims = (-10,10) # Manual
for i, j in idata[plotvars].groupby(hdim):
    m = j.sel(**{lev: ipres}).resample(**{dim: 'M'}).mean(dim)
    f, ax = plt.subplots(figsize=(16, 4))
    for jvar in plotvars:
        if jvar == obs:
            continue
        rt.plot.time.var(m[obs] - m[jvar], ax=ax, dim=dim, label=jvar.replace(ivar+'_', ''))
    ax.set_ylabel("%s [%s]" % (ivar, idata[jvar].attrs['units']))
    ax.set_xlabel('Time [M]')
    ax.set_title('%s Departures %s (OBS-BG) %s %02dZ at %d hPa' % (ident, ivar, period_str, i, ipres/100))
    ax.legend(ncol=len(plotvars))
    ax.set_ylim(ylims)
    tight_layout()
    if save:
        savefig('figures/%s_%s_dep_%04d_%02dZ.png' % (ident, ivar, ipres/100, i))
```
# Adjustment Process
```
if expdata is None:
    #
    # Make Experiments
    #
    expdata = idata.copy()
else:
    expdata = expdata.sel(**{dim: period})
```
## SNHT
```
dim = 'time'
hdim = 'hour'
ivar = 'dpd'
obs = '{}_{}'.format(ivar, isrc)
res = '{}_{}'.format(ivar, ires)
#
# Execute SNHT ?
#
if not '{}_snht'.format(obs) in expdata.data_vars:
    #
    # Calculate SNHT values with Parameters (window and missing)
    #
    expdata = rt.bp.snht(expdata, var=obs, dep=res, dim=dim,
                         window=1460,
                         missing=600,
                         verbose=1)
    #
    # Apply Threshold (threshold) and detect Peaks
    # allowed distances between peaks (dist)
    # minimum required significant levels (min_levels)
    #
    expdata = expdata.groupby(hdim).apply(rt.bp.apply_threshold,
                                          threshold=50,
                                          dist=730,
                                          min_levels=3,
                                          var=obs + '_snht',
                                          dim=dim)
#
# Plot SNHT
#
for i, j in expdata.groupby(hdim):
    ax = rt.plot.time.threshold(j[obs + '_snht'], dim=dim, lev=lev, logy=False,
                                title=" %s SNHT %s at %02dZ" % (ident, period_str, i),
                                figsize=(12, 4),
                                yticklabels=yplevs)
    rt.plot.time.breakpoints(j[obs + '_snht_breaks'], ax=ax, startend=True)
    tight_layout()
    if save:
        savefig('figures/%s_%s_snht_%s_%02dZ.png' % (ident, obs, ires, i))
```
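`rt.bp.snht` wraps a windowed variant of the Standard Normal Homogeneity Test. For intuition, a minimal sketch of the classic unwindowed statistic (after Alexandersson): split the series at every candidate point, standardize, and score $T(k) = k\,\bar z_1^2 + (n-k)\,\bar z_2^2$; the break candidate is the argmax of $T$:

```python
import numpy as np

def snht(x):
    """Classic (unwindowed) SNHT statistic.

    T(k) = k*z1^2 + (n-k)*z2^2, where z1/z2 are the means of the
    standardized series before and after candidate break k.
    """
    n = x.size
    z = (x - x.mean()) / x.std()
    T = np.empty(n - 1)
    for k in range(1, n):
        T[k - 1] = k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
    return T

series = np.r_[np.zeros(100), np.ones(100)]  # clean mean shift at index 100
T = snht(series)
print(np.argmax(T) + 1)  # 100
```

The windowed production version additionally requires a minimum number of valid values (`missing`), a significance `threshold`, a minimum `dist` between accepted peaks, and `min_levels` significant pressure levels, as passed above.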
## Breakpoints
```
#
# Give Breakpoint Information
#
for i, j in expdata.groupby(hdim):
    _ = rt.bp.get_breakpoints(j[obs + '_snht_breaks'], dim=dim, verbose=1)
```
## Adjustments
```
dim = 'time'
hdim = 'hour'
ivar = 'dpd'
obs = '{}_{}'.format(ivar, isrc)
res = '{}_{}'.format(ivar, ires)
# plotvars = [i for i in expdata.data_vars if '_dep' in i]
adjvars = "{obs},{obs}_m,{obs}_q,{obs}_qa".format(obs=obs)
adjvars = adjvars.split(',')
print(adjvars)
missing = False
for jvar in adjvars:
if jvar not in expdata.data_vars:
missing = True
```
### Run standard adjustment process
```
if missing:
    from detect import run_standard
    expdata = run_standard(idata, obs, res, meanadj=True, qadj=True, qqadj=True, verbose=1)
```
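`run_standard` lives in a local `detect` module that is not shown here. Conceptually, the mean adjustment (the `_m` variable) shifts each segment between detected breaks so that its mean departure from the reanalysis background matches the most recent, assumed-good segment. A toy sketch under that assumption (segment boundaries and the "last segment is the reference" convention are illustrative, not the module's exact algorithm):

```python
import numpy as np

def mean_adjust(obs, bg, breaks):
    """Shift pre-break segments so their mean departure (obs - bg)
    matches the departure of the last (reference) segment.
    `breaks` are indices splitting the series into segments."""
    adj = obs.astype(float)
    edges = [0] + sorted(breaks) + [obs.size]
    ref = slice(edges[-2], edges[-1])          # last segment = reference
    ref_dep = np.mean(obs[ref] - bg[ref])
    for lo, hi in zip(edges[:-2], edges[1:-1]):
        seg = slice(lo, hi)
        adj[seg] += ref_dep - np.mean(obs[seg] - bg[seg])
    return adj

bg = np.zeros(200)
obs = np.r_[np.full(100, 2.0), np.full(100, 0.5)]  # +1.5 bias before the break
adjusted = mean_adjust(obs, bg, [100])
print(adjusted[:100].mean(), adjusted[100:].mean())  # both segments now average 0.5
```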
## Breakpoint Stats
```
ipres=85000
#
# MEAN ADJ
#
bins = np.round(np.nanpercentile(np.ravel(expdata[obs].sel(**{lev:ipres}).values), [1,99]))
bins = np.arange(bins[0]-2,bins[1]+2,1)
for i, j in expdata.groupby(hdim):
    rt.plot.breakpoints_histograms(j.sel(**{lev: ipres}),
                                   obs, '{}_m'.format(obs), '{}_snht_breaks'.format(obs),
                                   figsize=(18, 8),
                                   other_var=res,
                                   bins=bins)
    if save:
        savefig('figures/%s_bhist_m_%s_%02dZ_%04dhPa.png' % (ident, ivar, i, ipres/100))
ipres=85000
#
# QUANTIL ADJ
#
bins = np.round(np.nanpercentile(np.ravel(expdata[obs].sel(**{lev:ipres}).values), [1,99]))
bins = np.arange(bins[0]-2,bins[1]+2,1)
for i, j in expdata.groupby(hdim):
    rt.plot.breakpoints_histograms(j.sel(**{lev: ipres}),
                                   obs, '{}_q'.format(obs), '{}_snht_breaks'.format(obs),
                                   figsize=(18, 8),
                                   other_var=res,
                                   bins=bins)
    if save:
        savefig('figures/%s_bhist_q_%s_%02dZ_%04dhPa.png' % (ident, ivar, i, ipres/100))
ipres=85000
#
# QUANTIL ADJ
#
bins = np.round(np.nanpercentile(np.ravel(expdata[obs].sel(**{lev:ipres}).values), [1,99]))
bins = np.arange(bins[0]-2,bins[1]+2,1)
for i, j in expdata.groupby(hdim):
    rt.plot.breakpoints_histograms(j.sel(**{lev: ipres}),
                                   obs, '{}_qa'.format(obs), '{}_snht_breaks'.format(obs),
                                   figsize=(18, 8),
                                   other_var=res,
                                   bins=bins)
    if save:
        savefig('figures/%s_bhist_qa_%s_%02dZ_%04dhPa.png' % (ident, ivar, i, ipres/100))
```
## Adjustment methods
```
bvar = '{}_snht_breaks'.format(obs)
#
# Select Level
#
ipres=30000
#
# Plot
#
ylims = np.round(np.nanpercentile(np.ravel(expdata[obs].sel(**{lev:ipres}).rolling(**{dim:30, 'center':True, 'min_periods':10}).mean().values), [1,99]))
ylims += [-2,2]
for i, j in expdata[adjvars].groupby(hdim):
    m = j.sel(**{lev: ipres}).rolling(**{dim: 30, 'center': True, 'min_periods': 10}).mean()
    f, ax = plt.subplots(figsize=(16, 4))
    for jvar in adjvars:
        rt.plot.time.var(m[jvar], ax=ax, dim=dim, label=jvar[-1:].upper() if jvar != obs else ivar, ls='-' if jvar == obs else '--')
    if bvar in expdata.data_vars:
        rt.plot.time.breakpoints(expdata[bvar].sel(**{hdim: i}), ax=ax, color='k', lw=2, ls='--')
    ax.set_ylabel("%s [%s]" % (ivar, expdata[jvar].attrs['units']))
    ax.set_xlabel('Time [M]')
    ax.set_title('%s Adjustments %s %s %02dZ at %d hPa' % (ident, ivar, period_str, i, ipres/100))
    ax.legend(ncol=len(adjvars))
    ax.set_ylim(ylims)
    tight_layout()
    if save:
        savefig('figures/%s_%s_adj_%04d_%02dZ.png' % (ident, ivar, ipres/100, i))
```
# Analysis
```
#
# Monthly Means
#
variables = list(unique(dpdvars + tvars + adjvars))
for jvar in variables[:]:
    if jvar not in expdata.data_vars:
        variables.remove(jvar)
print(variables)
mdata = expdata[variables].resample(**{dim:'M'}).mean(keep_attrs=True)
```
## Trends
```
trends = rt.met.time.trend(mdata, period=period, dim=dim, only_slopes=True)
with xr.set_options(keep_attrs=True):
    trends = trends*3650.  # Trends per Decade
for jvar in trends.data_vars:
    trends[jvar].attrs['units'] = trends[jvar].attrs['units'].replace('day', 'decade')
xlims = (np.round(trends.min().to_array().min()), np.round(trends.max().to_array().max()))
n = mdata[hdim].size
f,ax = rt.plot.init_fig_horizontal(n=n, ratios=tuple([2]*n), sharey=True)
for i, ihour in enumerate(trends[hdim].values):
    for jvar in variables:
        rt.plot.profile.var(trends[jvar].sel(**{hdim: ihour}), ax=ax[i], label=jvar[-1:].upper() if jvar != obs else ivar)
    ax[i].set_title('%02d' % ihour)
    ax[i].set_xlim(xlims)
    ax[i].set_xlabel("%s [%s]" % (mdata[obs].attrs['standard_name'], trends[jvar].attrs['units']))
f.suptitle('%s %s Trends %s' % (ident, ivar.upper(), period_str))
if save:
    savefig('figures/%s_trends_%s.png' % (ident, ivar))
```
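`rt.met.time.trend` comes from the local toolkit, but the scaling applied above is plain: a least-squares slope in units per day, multiplied by 3650 to express it per decade. A minimal NumPy sketch of that computation:

```python
import numpy as np

def trend_per_decade(days, values):
    """Least-squares slope in units/day, scaled by 3650 to units/decade,
    mirroring the scaling applied to `trends` above."""
    slope_per_day = np.polyfit(days, values, 1)[0]
    return slope_per_day * 3650.0

# synthetic ~monthly series with a known trend of +0.2 units/decade
days = np.arange(0, 40 * 365, 30, dtype=float)
values = 0.2 / 3650.0 * days
print(trend_per_decade(days, values))  # ~0.2
```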
## Statistics
```
from detect import skills_table
for jvar in mdata.data_vars:
    if jvar == obs or jvar == res:
        continue
    _, ytable = skills_table(mdata[obs], mdata[res], mdata[jvar])
    print("#"*50)
    print(ident, obs, res, jvar)
    print(ytable)
    print("#"*50)
```
# Number Pyramids (Rechenpyramiden)
Run a cell by pressing 'Shift'+'Enter' at the same time. <br>
First, execute the cell below so that the pyramid generator is ready. <br>
The function Pyramide() produces the pyramid and the solution: <br>
`Pyramide(7)` <br>
=> creates a pyramid with 7 random base numbers, where the largest base number is 2. <br>
`Pyramide(9,1)` <br>
=> creates a pyramid with 9 random base numbers and both addition and subtraction in the pyramid. <br>
`Pyramide(9,7)` <br>
=> creates a pyramid that shows fewer numbers and is therefore harder to solve. <br>
`Pyramide([1,2,1,2,3,2,1,2,0,3,2,3])` <br>
=> creates a pyramid with the base numbers 1, 2, 1, 2, 3, 2, 1, 2, 0, 3, 2, 3 and addition only. <br>
`Pyramide([11,2])` <br>
=> creates a pyramid with 11 random base numbers, where the largest base number may be 2. <br>
`Pyramide([Höhe,MaxBasis],Schwierigkeit)` <br>
=> creates a pyramid with `Höhe` (height) random base numbers, where the largest base number may be `MaxBasis` and the difficulty `Schwierigkeit` should lie between 0 and `MaxBasis`. <br>
#### There is no guarantee that every pyramid is solvable, in particular when the difficulty is close to MaxBasis.
#### All inputs are positive integers.
```
# Run this cell first with `Shift`+`Enter`
from random import randint as randd  # randint includes both endpoints

def cre_pyramid(numbers):
    # Build the pyramid: accepts a base list, [height], [height, max_base],
    # or a plain int (height); random base numbers are drawn below nmax.
    if type(numbers) is list:
        if len(numbers) == 1:
            n = numbers[0]+1
            nmax = 2
            numbers = [randd(0, nmax-1) for ii in range(n)]
        elif len(numbers) == 2:
            n = numbers[0]+1
            nmax = numbers[1]
            numbers = [randd(0, nmax-1) for ii in range(n)]
        else:
            n = len(numbers)
            nmax = max(numbers)+1
    elif type(numbers) is int:
        n = numbers+1
        nmax = 2
        numbers = [randd(0, nmax-1) for ii in range(n)]
    else:
        return None
    pyramid = [numbers]
    for ii in range(n-1):
        # each new level sums adjacent pairs of the level below
        pyramid.append([numbers[iii]+numbers[iii+1] for iii in range(n-ii-1)])
        numbers = pyramid[-1]
    return pyramid

def cancalc(holes, level, index):
    # check whether the cell at (level, index) can still be computed
    # from below (its two children)
    if level > 0 and index < len(holes[level-1]) and bool(holes[level-1][index]*holes[level-1][index+1]):
        return True
    # from the left neighbour and the parent
    elif index > 0 and level < len(holes)-1 and bool(holes[level][index-1]*holes[level+1][index-1]):
        return True
    # from the right neighbour and the parent
    elif level < len(holes)-1 and index < len(holes[level])-1 and bool(holes[level][index+1]*holes[level+1][index]):
        return True
    else:
        return False

def pri_solution(py):
    center = round((len(str(py[0]))+len(py[0])-1)/2)+1
    n = len(py)
    for ii in range(n):
        nwidth = len(str(py[n-ii-1]))+len(py[n-ii-1])+1
        start = center - round(nwidth/2-0.5)
        print(start*' '+nwidth*'-')
        ss = '| '
        for jj in range(len(py[n-ii-1])):
            ss += str(py[n-ii-1][jj])+' | '
        print(start*' '+ss)

def rxy(holes):
    # pick a random (level, index) position in the pyramid
    level = randd(0, len(holes)-1)
    index = randd(0, len(holes[level])-1)
    return level, index

def pri_pyramid(py, sol=False, hardness=0):
    holes = [[1 for ii in jj] for jj in py]  # 1 = shown, 0 = hidden
    n = len(py)
    if not sol:
        if hardness > 0:
            for ii in range(800):
                level, index = rxy(holes)
                if cancalc(holes, level, index):
                    holes[level][index] = 0
            for hard in range(hardness):
                for level in [ss+hard*2 for ss in range(n-hard*2)]:
                    if bool(randd(0, 1)):
                        for index in range(len(py[level])):
                            if cancalc(holes, level, len(py[level])-index-1):
                                holes[level][len(py[level])-index-1] = 0
                    else:
                        for index in range(len(py[level])):
                            if cancalc(holes, level, index):
                                holes[level][index] = 0
                for level in [ss+hard for ss in range(n-hard)]:
                    for index in range(len(py[level])-1):
                        # if a cell and its right neighbour are both shown but
                        # their parent is hidden, move the hole down one level
                        if bool(holes[level][index]*holes[level][index+1]):
                            if level < n-1 and holes[level+1][index] == 0:
                                holes[level+1][index] = 1
                                holes[level][index] = 0
                for level in [ss+hard for ss in range(n-hard)]:
                    if bool(randd(0, 1)):
                        for index in range(len(py[level])):
                            if cancalc(holes, level, len(py[level])-index-1):
                                holes[level][len(py[level])-index-1] = 0
                    else:
                        for index in range(len(py[level])):
                            if cancalc(holes, level, index):
                                holes[level][index] = 0
    else:
        holes[1:] = [[0 for ii in jj] for jj in py[1:]]
    center = len(py[0])*3+1
    ll = 0
    for ii in range(n):
        ss = '| '
        for jj in range(len(py[n-ii-1])):
            if holes[n-ii-1][jj] == 1:
                sadd = str(py[n-ii-1][jj])
                if len(sadd) == 2:
                    sadd = ' '+sadd
                elif len(sadd) == 1:
                    sadd = ' '+sadd+' '
            else:
                sadd = max(len(str(py[n-ii-1][jj])), 3)*' '
            ss += sadd+' | '
        dw = ll+3-len(ss)
        for oo in range(dw):
            # widen the row to line up with the row below, never inside a number
            pos = round((oo+1)*len(ss)/(dw+1))
            while ss[pos-1].isdigit() and ss[pos].isdigit():
                pos += -1
            ss = ss[:pos]+' '+ss[pos:]
        nwidth = len(ss)-1
        start = round(center - nwidth/2+0.5)
        print(start*' '+nwidth*'-')
        print(start*' '+ss)
        ll = nwidth
    print(start*' '+nwidth*'-')

def Pyramide(py, hardness=0):
    pyramid = cre_pyramid(py)
    print()
    print()
    print()
    pri_pyramid(pyramid, False, hardness)
    print()
    print()
    pri_pyramid(pyramid, True, hardness)
    print()
    print()

Pyramide(7)
Pyramide(9,1)
Pyramide(9,7)
Pyramide([1,2,1,2,3,2,1,2,0,3,2,3])
Pyramide([1,2,1,2,3,2,1,2,0,3,2,3],1)
Pyramide([1,2,1,2,3,2,1,2,0,3,2,3],8)
Pyramide([9,3],3)
```
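The rule the generator above implements is simple: every cell is the sum of the two cells directly beneath it, up to a single apex. As a compact, deterministic restatement of that rule, here is a minimal sketch; `build_pyramid` is a hypothetical helper for illustration, not part of the notebook's code.

```python
# Each level is formed by summing adjacent pairs of the level below,
# until a single apex value remains -- the same rule `cre_pyramid` uses.
def build_pyramid(base):
    levels = [list(base)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prev[i] + prev[i + 1] for i in range(len(prev) - 1)])
    return levels

# The base [1, 2, 1, 2] climbs to [3, 3, 3], then [6, 6], then the apex [12].
pyramid = build_pyramid([1, 2, 1, 2])
```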