# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''sports-ball-j0_vIAAz-py3.9'': poetry)'
# name: python3
# ---
# # Shot Data ETL
#
# This notebook ETLs the CSV play-by-play data into the MySQL database service defined in `docker-compose.yml`. The ETL notebooks and processes in this repository use a plugin architecture with a registry of plugins, each tagged `"row"` or `"game"` to indicate whether it handles game-level metadata or play-by-play cleaning, calculation, etc.
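# A minimal sketch of the tag-based registry idea (hypothetical; the real `ETL_FUNCS` imported from `etl` below may differ in details):

```python
# Hypothetical sketch of a tag-based plugin registry; the real ETL_FUNCS
# imported from etl may differ.
class Registry:
    def __init__(self):
        self._funcs = {}  # name -> (tag, func), insertion-ordered

    def register(self, tag):
        # decorator that stores the function under its own name with a tag
        def decorator(func):
            self._funcs[func.__name__] = (tag, func)
            return func
        return decorator

    def find(self, tag):
        # all registered function names carrying the given tag
        return [name for name, (t, _) in self._funcs.items() if t == tag]

    def __getitem__(self, name):
        return self._funcs[name][1]

ETL_FUNCS = Registry()

@ETL_FUNCS.register("row")
def drop_incomplete_rows(df):
    return df.dropna()  # placeholder row-level cleaning step

@ETL_FUNCS.register("game")
def tag_game_metadata(df):
    return df  # placeholder game-level metadata step

print(ETL_FUNCS.find("row"))  # ['drop_incomplete_rows']
```

# The cleaning loops below iterate over `ETL_FUNCS.find("row")` in exactly this fashion.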
# ### Imports
# +
import pathlib
import yaml
import glob
import random
from functools import reduce, partial
import numpy as np
import pandas as pd
import dask.bag as db
import sqlalchemy
from dask.distributed import Client, Lock
from etl import ETL_FUNCS
# -
# ### Config
with open("config.yaml") as f:
CONFIG = yaml.safe_load(f)
# ### Dask
client = Client()
client
# ### Get Data Information
filepaths = glob.glob(str(pathlib.Path.home() / CONFIG["SOURCE_DATA_PATH"] / "**/*.csv"))
filepaths = [fp for fp in filepaths if "combined" not in fp]
fp_bag = db.from_sequence(filepaths)
def get_data_info(fp):
df = pd.read_csv(fp, encoding="utf-8")
return df.dtypes
# +
# this cell reads in all the CSVs and computes the set of columns shared by every file
# from functools import reduce
# dtype_info = fp_bag.map(get_data_info).compute()
# cols = reduce(lambda x, y: {*x}.intersection({*y}), [d.keys() for d in dtype_info])
# print(cols)
# output:
# {'a2', 'play_length', 'player', 'num', 'steal', 'left', 'shot_distance', 'reason', 'team', 'original_x', 'converted_x',
# 'outof', 'event_type', 'opponent', 'h5', 'entered', 'elapsed', 'game_id', 'h1', 'assist', 'a1', 'h4', 'block', 'remaining_time',
# 'h3', 'away', 'possession', 'a4', 'h2', 'home_score', 'period', 'away_score', 'converted_y', 'play_id', 'a3', 'original_y',
# 'date', 'points', 'home', 'result', 'a5', 'data_set', 'description', 'type'}
# -
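# The intersection trick in the commented cell above works like this on toy data (hypothetical column sets, not the real dtypes):

```python
# Reduce an intersection over per-file column sets to find the shared schema.
from functools import reduce

dtype_info = [
    {"game_id": "int64", "player": "object", "points": "int64"},
    {"game_id": "int64", "player": "object", "elapsed": "object"},
    {"game_id": "int64", "player": "object", "points": "int64", "steal": "object"},
]
cols = reduce(lambda x, y: {*x}.intersection({*y}), [d.keys() for d in dtype_info])
print(sorted(cols))  # ['game_id', 'player']
```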
# ### Get game count
from collections import Counter
games = Counter(fp_bag.map(lambda fp: pd.read_csv(fp).loc[0, "data_set"]).compute())
pd.DataFrame.from_dict({k: v for k,v in games.items()}, orient="index") \
.reset_index(drop=False) \
.rename({"index" : "season", 0 : "games"}, axis=1) \
.sort_values("season") \
.reset_index(drop=True)
# ### Test data cleaning
# +
# test the cleaning functions on a random sample before running
# on the full set of ~13K files
def clean_data(fp: str):
df = pd.read_csv(fp)
for k in ETL_FUNCS.find("row"):
df = ETL_FUNCS[k](df)
return df
tmp_bag = db.from_sequence(random.sample(filepaths, 100))
_ = tmp_bag.map(clean_data).compute()
print("Successfully ran 100 files")
# -
# ### Read in and clean data
# +
# helper function to apply the registry's "row" funcs to every CSV that is read in
def clean_data(df: pd.DataFrame):
for k in ETL_FUNCS.find("row"):
df = ETL_FUNCS[k](df)
return df
def load_shot_data(engine, fp):
# read in and clean data
df = pd.read_csv(fp)
df = clean_data(df)
# columns subset
col_list = [
"game_id",
"play_id",
"player",
"result",
"points",
"period",
"elapsed",
"converted_x",
"converted_y",
]
# filter data
df = df.loc[df["event_type"].isin(["shot", "miss"]), col_list]
df = df.dropna(subset=["player", "result", "converted_x", "converted_y"])
    # write data to the shots table
    df.to_sql("shots", con=engine, if_exists="append", index=False)
return len(df)
# -
# ### Configure Database
# +
# db config
config = {
"host": "localhost",
"port": 3306,
"user": "root",
"password": "<PASSWORD>",
}
db_user = config.get("user")
db_pwd = config.get("password")
db_host = config.get("host")
db_port = config.get("port")
# specify connection string
connection_str = f"mysql+pymysql://{db_user}:{db_pwd}@{db_host}:{db_port}"
engine = sqlalchemy.create_engine(connection_str)
# -
# ### Test Database Connection
try:
engine.execute(f"CREATE DATABASE {CONFIG['DB_NAME']};")
except sqlalchemy.exc.ProgrammingError as e:
print("most likely: database already exists")
# ### Run Data Loading
# +
# select the database
engine.execute(f"USE {CONFIG['DB_NAME']}")
# process all csvs
load_shot_data(engine=engine, fp=filepaths[0])
fp_bag = db.from_sequence(filepaths[1:])
shot_data = fp_bag.map(partial(load_shot_data, engine=engine)).compute()
# -
# ### Query speed check
# %%time
# before index
query = f"""
select * from {CONFIG['TABLE_NAME']}
where player = "<NAME>"
"""
pd.read_sql(query, engine).head()
# +
# create indices on player, game_id, play_id (commonly looked up columns)
_ = engine.execute(f"""
create index player_index
on {CONFIG['TABLE_NAME']}(player)
""")
_ = engine.execute(f"""
create index game_id_index
on {CONFIG['TABLE_NAME']}(game_id)
""")
_ = engine.execute(f"""
create index play_id_index
on {CONFIG['TABLE_NAME']}(play_id)
""")
# -
# %%time
# after index
query = f"""
select * from {CONFIG['TABLE_NAME']}
where player = "<NAME>"
"""
pd.read_sql(query, engine).head()
| material/etl/shot_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: r
# ---
# # Text Analytics with Microsoft Cognitive Services
#
# [Cognitive Services](https://www.microsoft.com/cognitive-services) consists of a series of APIs for advanced text, vision, and speech integration.
#
# In order to take advantage of these APIs such as Text Analytics, you can sign up in the [Azure Portal](https://portal.azure.com). You can also sign up for Preview APIs such as Linguistic Analysis directly from the Cognitive Services website.
#
# 
#
# ## What does the Text Analytics API offer?
#
# This demo focuses on connecting to the [Text Analytics API](https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation) to obtain sentiment scores and extract key phrases. In addition to those endpoints, the API also offers language and topic detection. The R language is used for the demo, but any language or application that can send a POST request can take advantage of the API.
#
# 
#
#
# ## Background
#
# For data, we use a small sample of Amazon reviews ([original source](https://www.kaggle.com/snap/amazon-fine-food-reviews)). We will connect to and download a CSV file from Azure Blob storage, then process the data using R. We connect to the Sentiment endpoint to obtain a score from 0 (negative) to 1 (positive). We then connect to the Key Phrases endpoint to get a list of words or phrases that are helpful for categorizing each review.
#
# Note that by using the API free tier, you can process 5,000 transactions per month. In the case of this demo, we send 100 records to the API and obtain sentiment scores and phrases for each record. Since we connect to two endpoints with 100 records, we use 200 transactions.
#
# ## Sign Up in the Azure Portal
#
# If you do not already have a key for the Text Analytics API, you can sign up in the Azure Portal. If you have an existing API key, you can skip this section.
#
# Go to http://portal.azure.com, login, and go to New (+), Data + Analytics, and select Cognitive Services APIs.
#
# 
#
# Enter an Account Name, select your Azure subscription, and choose "Text Analytics API" as your API Type. Select "Free" or another option as your Pricing Tier, then complete the rest of the form. When ready, click Create at the bottom of the panel.
#
# 
#
# Open the Cognitive Services account and click Settings, then Keys. Copy the KEY 1 value for later use.
#
# 
# ## Setup R environment
#
# Once you have an API key, you can run the following R code to define the base URL for the Text Analytics API and create a few helper functions for connecting to various API endpoints. This code is hosted in a Jupyter notebook, but it should run without issue in any R environment.
# +
library(httr)
library(jsonlite)
library(dplyr)
base.url <- "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/"
Request <- function(call.url, call.source){
headers <- add_headers("Content-Type" = "application/json", "Ocp-Apim-Subscription-Key" = key)
raw.result <- POST(call.url, headers, body = call.source)
text.result <- fromJSON(content(raw.result, "text"))
final.result <- as.data.frame(text.result[1])
return(final.result)
}
Sentiment <- function(source){
sentiment.url <- paste0(base.url, "sentiment")
sentiment.body <- toJSON(list(documents = source))
sentiment.result <- Request(sentiment.url, sentiment.body)
colnames(sentiment.result) <- c("sentiment", "id")
return(sentiment.result)
}
KeyPhrases <- function(source){
phrases.url <- paste0(base.url, "keyPhrases")
phrases.body <- toJSON(list(documents = source))
phrases.result <- Request(phrases.url, phrases.body)
colnames(phrases.result) <- c("key.phrases", "id")
return(phrases.result)
}
Languages <- function(source){
languages.url <- paste0(base.url, "languages")
languages.body <- toJSON(list(documents = source))
languages.result <- Request(languages.url, languages.body)
colnames(languages.result) <- c("id", "detected.languages")
return(languages.result)
}
# -
# ## API key
#
# Enter your API key within quotation marks and run the following code.
# +
#enter key manually for now as readLine or other key storage does not work for Jupyter input
#you can regenerate your key in the Azure Portal so that anyone who publicly views it can no longer use it
key <- "[Enter API key]"
# -
# ## Download sample file
#
# Run the following code block to download the sample of 100 Amazon reviews from Azure Blob storage.
GET("https://drecognitive.blob.core.windows.net/samples/amazon-fine-food-samples.csv",
write_disk("amazon-fine-food-samples.csv", overwrite=TRUE))
# ## Prepare the data
#
# Run the following code to read the sample CSV file into an R data frame, then select and rename the relevant columns.
# The output displays a preview of the data that will be sent to the Text Analytics API.
raw <- read.csv("amazon-fine-food-samples.csv", stringsAsFactors = FALSE)
language <- "en"
text.source <- select(raw, Id, Text)
text.source <- data.frame(language, text.source, stringsAsFactors = FALSE)
colnames(text.source) <- c("language", "id", "text")
text.source$id <- as.character(text.source$id)
head(text.source)
# ## Get sentiment
#
# Run the following code to pass the review data to the previously defined *Sentiment* function and store the response in a variable called *text.sentiment*. The output displays a preview of the response with sentiment scores ranging from 0 (negative) to 1 (positive).
text.sentiment <- Sentiment(text.source)
head(text.sentiment)
# ## Get key phrases
#
# Run the following code to pass the review data to the *KeyPhrases* function and store the response in a variable called *text.key.phrases*. The output displays a preview of the response with a selection of key phrases from the review text.
text.key.phrases <- KeyPhrases(text.source)
head(text.key.phrases)
# ## Combine outputs
#
# Run the following code to join the sentiment and key phrase responses back to the original data frame.
# The output displays a preview of the final data frame.
combined <- list(text.source, text.sentiment, text.key.phrases)
text.results <- Reduce(inner_join, combined)
head(text.results)
# ## Next steps
#
# The final output can then be stored, analyzed, or used elsewhere as needed. For example, you can rank reviews by sentiment or categorize them by parsing key phrases. You can even visualize this data using R or another application such as Microsoft Power BI.
text.results
| Text Analytics - Cognitive Services - Amazon Reviews.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
from matplotlib import gridspec
import os
for dirname, _, filenames in os.walk('../input/NASA-bearing-dataset'):
for filename in filenames:
print(os.path.join(dirname, filename))
# -
# ## Have a look at a sample time series
# Set No. 2, first file:
# - 2kHz acquisition rate, 1s in total
sample2 = pd.read_csv("../input/NASA-bearing-dataset/sample_3rd.csv",index_col=0)
sample2.shape
# Early portion of the signal time series
figure(figsize=(15, 6), dpi=80)
sample2['Bearing 1'].loc[:1000].plot()
plt.title('Raw signal from Bearing 1')
plt.xlabel('Point no.')
plt.ylabel('Acceleration')
plt.show()
# # Dataset preprocessing
# Read the CSV file and set first column as the dataframe index
dataset = pd.read_csv("../input/NASA-bearing-dataset/merged_dataset_BearingTest_3.csv",
index_col=0)
dataset.describe()
# +
x_ticks_span = 50
dataset.plot(figsize=(18, 4))
plt.xlabel('Timestamp')
plt.xticks(np.arange(0, dataset.shape[0], x_ticks_span), fontsize=10, rotation = 30)
plt.ylabel('Acceleration')
plt.legend(loc="upper left")
plt.title('Time series for all 4 accelerometers. Set no.3', fontweight ="bold")
plt.show()
# -
# ## Normalize the dataset
# +
from sklearn import preprocessing
# Dataset is scaled so that maximum for every column is 1
scaler = preprocessing.MinMaxScaler()
dataset_scaled = pd.DataFrame(scaler.fit_transform(dataset),
columns=dataset.columns,
index=dataset.index)
dataset_scaled.describe()
# +
x_ticks_span = 50
dataset_scaled.plot(figsize=(18, 4))
plt.xlabel('Timestamp')
plt.xticks(np.arange(0, dataset_scaled.shape[0], x_ticks_span), fontsize=10, rotation = 30)
plt.ylabel('Acceleration')
plt.legend(loc="upper left")
plt.title('Time series for all 4 accelerometers (normalized signals)', fontweight ="bold")
plt.show()
# -
# ## Build train and test datasets
# - We want the training set to contain only "normal" data
# - The remaining points go into the test set, which will contain both "normal" and anomalous data
# Split baseline and analysis set with a ratio 1:3
row_slice = round( 0.25*dataset_scaled.shape[0] )
index_slice = dataset_scaled.index[row_slice]
index_slice_ = dataset_scaled.index[row_slice + 1]
print("dataset_scaled shape is", dataset_scaled.shape, "and will be sliced at timestamp", index_slice)
print("Analysis set will start at timestamp", index_slice_)
# +
dataset_train = dataset_scaled[:index_slice]
dataset_test = dataset_scaled[index_slice_:]
# Row order is irrelevant for the covariance-based model below, so no shuffling is needed
# (note that `sample(frac=1)` returns a shuffled copy rather than shuffling in place)
print("Train dataset has length", dataset_train.shape[0], "while test dataset is", dataset_test.shape[0],
"TOTAL=", dataset_train.shape[0]+dataset_test.shape[0])
# +
x_ticks_span = 50
dataset_train.plot(figsize = (6,6), title ='Left time series with "normal" data (normalized signals)')
plt.xticks(np.arange(0, dataset_train.shape[0], x_ticks_span), fontsize=10, rotation = 30)
plt.ylim(0,1)
plt.legend(loc="upper left")
plt.show()
dataset_test.plot(figsize = (18,6), title='Right time series with "normal" & "anomalous" data (normalized signals)')
plt.xticks(np.arange(0, dataset_test.shape[0], x_ticks_span), fontsize=10, rotation = 30)
plt.ylim(0,1)
plt.legend(loc="upper left")
plt.show()
# +
fig, axes = plt.subplots(1,3, figsize=(18, 5))
fig.suptitle('TRAINING SET: Comparison against Bearing 1', fontsize=20)
axes[0].scatter(np.array(dataset_train['Bearing 1']), np.array(dataset_train['Bearing 2']))
axes[0].set_xlabel('Bearing 1')
axes[0].set_ylabel('Bearing 2')
axes[0].set_title('Bearing 1 vs. 2')
axes[0].set_xlim(0,1)
axes[0].set_ylim(0,1)
axes[1].scatter(np.array(dataset_train['Bearing 1']), np.array(dataset_train['Bearing 3']))
axes[1].set_xlabel('Bearing 1')
axes[1].set_ylabel('Bearing 3')
axes[1].set_title('Bearings 1 vs. 3')
axes[1].set_xlim(0,1)
axes[1].set_ylim(0,1)
axes[2].scatter(np.array(dataset_train['Bearing 1']), np.array(dataset_train['Bearing 4']))
axes[2].set_xlabel('Bearing 1')
axes[2].set_ylabel('Bearing 4')
axes[2].set_title('Bearings 1 vs. 4')
axes[2].set_xlim(0,1)
axes[2].set_ylim(0,1)
plt.show()
# +
fig, axes = plt.subplots(1,3, figsize=(15, 5))
fig.suptitle('TEST SET: Comparison against Bearing 1', fontsize=20)
axes[0].scatter(np.array(dataset_test['Bearing 1']), np.array(dataset_test['Bearing 2']))
axes[0].set_xlabel('Bearing 1')
axes[0].set_ylabel('Bearing 2')
axes[0].set_xlim(0,1)
axes[0].set_ylim(0,1)
axes[0].set_title('Bearing 1 vs. 2')
axes[1].scatter(np.array(dataset_test['Bearing 1']), np.array(dataset_test['Bearing 3']))
axes[1].set_xlabel('Bearing 1')
axes[1].set_ylabel('Bearing 3')
axes[1].set_xlim(0,1)
axes[1].set_ylim(0,1)
axes[1].set_title('Bearings 1 vs. 3')
axes[2].scatter(np.array(dataset_test['Bearing 1']), np.array(dataset_test['Bearing 4']))
axes[2].set_xlabel('Bearing 1')
axes[2].set_ylabel('Bearing 4')
axes[2].set_xlim(0,1)
axes[2].set_ylim(0,1)
axes[2].set_title('Bearings 1 vs. 4')
plt.show()
# -
# # PCA model: Principal Component Analysis
# Apply dimensionality reduction to scale down from 4 dimensions to only 2 signals
# +
from sklearn.decomposition import PCA
n_components = 4 # keep all 4 components for now, to inspect the explained variance
pca = PCA(n_components=n_components, svd_solver= 'full')
# +
# Compute all PCA components FOR THE TRAINING SET
X_train_PCA = pca.fit_transform(dataset_train)
X_train_PCA = pd.DataFrame(X_train_PCA)
X_train_PCA.index = dataset_train.index
# Project the TEST SET onto the PCA space
X_test_PCA = pca.transform(dataset_test)
X_test_PCA = pd.DataFrame(X_test_PCA)
X_test_PCA.index = dataset_test.index
# +
fig = plt.figure(figsize=(12, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1])
fig.suptitle('TRAINING & TEST datasets in PCA axes', fontsize=20)
ax0 = plt.subplot(gs[0])
ax0.scatter(X_train_PCA.loc[:,0], X_train_PCA.loc[:,1])
ax0.set_xlabel('PC1')
ax0.set_ylabel('PC2')
ax0.set_xlim(-0.1,0.1)
ax0.set_ylim(-0.1,0.1)
ax0.set_title('Training set Principal Components')
ax1 = plt.subplot(gs[1])
ax1.scatter(X_test_PCA.loc[:,0], X_test_PCA.loc[:,1])
ax1.set_xlabel('PC1')
ax1.set_ylabel('PC2')
ax1.set_title('Test set Principal Components')
plt.show()
# -
# ## Variance ratio (as per training set)
# We can see that in the PCA space, the variance is maximized along PC1 (explains 93% of the variance) and PC2 (explains 5.5%)
# - Components are listed per importance (greater variance contribution)
# +
np.set_printoptions(precision=3, suppress=True) # 3 decimal places and don't use scientific notation
pca.fit_transform(dataset_train)
print(pca.explained_variance_ratio_)
# -
np.set_printoptions(precision=3, suppress=False) # Use scientific notation
print(pca.explained_variance_)
# ### As per the result above, components 1 and 2 account for more than 90% of the total variance
# ## Reduce the analysis to the first two PCA components
# +
pca = PCA(n_components= 2, svd_solver= 'full')
# Compute (2) PCA most relevant components FOR THE TRAINING SET
X_train_PCA = pca.fit_transform(dataset_train)
X_train_PCA = pd.DataFrame(X_train_PCA)
X_train_PCA.index = dataset_train.index
# Project the TEST SET onto the PCA space (2 dimensions)
X_test_PCA = pca.transform(dataset_test)
X_test_PCA = pd.DataFrame(X_test_PCA)
X_test_PCA.index = dataset_test.index
# -
print(pca.explained_variance_ratio_)
# # Computing the Mahalanobis distance
# The Mahalanobis distance is widely used in cluster analysis and classification techniques. In order to use the Mahalanobis distance to classify a test point as belonging to one of N classes, one first estimates the covariance matrix of each class, usually based on samples known to belong to each class.
#
# In our case, as we are only interested in classifying “normal” vs “anomaly”, we use training data that only contains normal operating conditions to calculate the covariance matrix.
# - This explains why we built the training dataset as described above
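# A toy illustration of the idea (synthetic correlated data, not the bearing set): the Mahalanobis distance accounts for correlations between variables, so a point lying against the correlation structure is "farther" than an equally Euclidean-distant point lying along it:

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated 2-D "normal operation" data
normal = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=500)
mean = normal.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x, mean, inv_cov):
    d = x - mean
    return np.sqrt(d @ inv_cov @ d)

on_axis = mahalanobis(np.array([2.0, 2.0]), mean, inv_cov)    # along the correlation
off_axis = mahalanobis(np.array([2.0, -2.0]), mean, inv_cov)  # against it
print(on_axis < off_axis)  # the off-axis point is "more anomalous"
```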
# ### Functions' definition
# +
# CALCULATE THE COVARIANCE
def cov_matrix(data, verbose=False):
covariance_matrix = np.cov(data, rowvar=False)
if is_pos_def(covariance_matrix):
inv_covariance_matrix = np.linalg.inv(covariance_matrix)
if is_pos_def(inv_covariance_matrix):
return covariance_matrix, inv_covariance_matrix
else:
print("Error: Inverse of Covariance Matrix is not positive definite!")
else:
print("Error: Covariance Matrix is not positive definite!")
# CALCULATE THE MAHALANOBIS DISTANCE
def MahalanobisDist(inv_cov_matrix, mean_distr, data, verbose=False):
inv_covariance_matrix = inv_cov_matrix
vars_mean = mean_distr
diff = data - vars_mean
md = []
for i in range(len(diff)):
md.append(np.sqrt(diff[i].dot(inv_covariance_matrix).dot(diff[i])))
return md
# CHECK IF MATRIX IS POSITIVE DEFINITE
def is_pos_def(A):
if np.allclose(A, A.T):
try:
np.linalg.cholesky(A)
return True
except np.linalg.LinAlgError:
return False
else:
return False
# -
# ## Set up PCA model
# Define train/test set from the two main principal components:
data_train = np.array(X_train_PCA.values)
data_test = np.array(X_test_PCA.values)
# Calculate the covariance matrix and its inverse, based on data in the training set:
cov_mat, inv_cov_matrix = cov_matrix(data_train)  # avoid shadowing the cov_matrix function
# We also calculate the mean value for the input variables in the training set, as this is used later to calculate the Mahalanobis distance to datapoints in the test set:
# Mean of each column: PCA1, PCA2
## - It should be very close to zero
mean_distr = data_train.mean(axis=0) # axis=0 means that average is computed per column
np.set_printoptions(precision=3, suppress=False)
mean_distr
# Using the covariance matrix and its inverse, we can calculate the Mahalanobis distance for the training data defining “normal conditions”, and find the threshold value to flag datapoints as an anomaly.
# Then calculate the Mahalanobis distance for the datapoints in the test set, and compare that with the anomaly threshold:
# +
dist_test = MahalanobisDist(inv_cov_matrix, mean_distr, data_test, verbose=False)
dist_train = MahalanobisDist(inv_cov_matrix, mean_distr, data_train, verbose=False)
print("Minimum & maximum MD in training set:", min(dist_train), max(dist_train) )
print("Minimum & maximum MD in test set :", min(dist_test), max(dist_test) )
# -
np.array(dist_test).shape
figure(figsize=(15, 4), dpi=80)
plt.plot(np.array(dist_train) , label="Baseline (25% of dataset, i.e. first 225 points)")
plt.plot(np.array(dist_test)[:600], label="Analysis data (first 600 out of 759 points)")
plt.legend(loc="upper left")
plt.title("Comparison of test set (analysis set) distance over train set (baseline)")
plt.show()
# You can see that there is a large portion of *normal* data in the analysis (test) set. This validates the length of the dataset we selected for establishing the baseline (train set).
# ## Threshold value for flagging an anomaly
# +
# CALCULATE THRESHOLD FOR CLASSIFYING AS ANOMALY
def MD_threshold(dist, extreme=False, verbose=False):
k = 3. if extreme else 2.
threshold = np.mean(dist) * k
return threshold
threshold = MD_threshold(dist_train, extreme = True)
# * extreme = True => three times the mean of incoming data (dist_train)
# * extreme = False => twice the mean
print("Threshold value for flagging an anomaly is", "{:.2f}".format(threshold) )
# -
# The square of the Mahalanobis distance to the centroid of the distribution should follow a χ2 distribution if the assumption of normal distributed input variables is fulfilled. This is also the assumption behind the above calculation of the “threshold value” for flagging an anomaly. As this assumption is not necessarily fulfilled in our case, it is beneficial to visualize the distribution of the Mahalanobis distance to set a good threshold value for flagging anomalies.
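# To illustrate the χ² argument (a sketch under the Gaussian assumption, not part of the original analysis): with 2 retained principal components, MD² follows a χ² distribution with 2 degrees of freedom, whose CDF is 1 - exp(-x/2), so the p-quantile of the distance has a closed form:

```python
import math

def md_threshold_chi2(p=0.999, dof=2):
    # only valid for dof == 2, where the chi-squared p-quantile is -2*ln(1-p);
    # the MD threshold is the square root of that quantile
    assert dof == 2
    return math.sqrt(-2.0 * math.log(1.0 - p))

print(round(md_threshold_chi2(0.999), 2))  # 3.72
```

# Interestingly, this lands close to the empirically chosen threshold in this notebook.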
import seaborn as sns
plt.figure()
sns.distplot(dist_train,
bins = 20,
kde= True,
color = 'green');
plt.xlim([0.0,5])
plt.xlabel('Mahalanobis dist')
plt.title('Mahalanobis distance distribution for the baseline (train set)')
plt.show()
# From the above distribution, the calculated threshold value of 3.66 for flagging an anomaly seems reasonable (defined as three times the mean Mahalanobis distance of the baseline).
#
# We can then save the Mahalanobis distance, as well as the threshold value and “anomaly flag” variable for both train and test data in a dataframe:
# ### Outliers in the baseline (*train set*)
# +
anomaly_train = pd.DataFrame()
anomaly_train['Mob dist']= dist_train
anomaly_train['Thresh'] = threshold
# If Mob dist above threshold: Flag as anomaly
anomaly_train['Anomaly'] = anomaly_train['Mob dist'] > anomaly_train['Thresh']
anomaly_train.index = X_train_PCA.index
n_outliers_train = anomaly_train[ anomaly_train['Anomaly'] == True].shape[0]
print("There are", n_outliers_train, "anomalies in the train set out of", anomaly_train.shape[0], "points")
anomaly_train.head()
# -
anomaly_train.plot(logy=True, figsize = (15,6), ylim = [1e-1,1e3], color = ['green','red'])
plt.xticks(np.arange(0, anomaly_train.shape[0], 50), fontsize=10, rotation = 30)
plt.title('Baseline plot against anomaly threshold')
plt.show()
# ### Outliers in the analysis set (*test set*)
# +
anomaly = pd.DataFrame()
anomaly['Mob dist']= dist_test
anomaly['Thresh'] = threshold
# If Mob dist above threshold: Flag as anomaly
anomaly['Anomaly'] = anomaly['Mob dist'] > anomaly['Thresh']
anomaly.index = X_test_PCA.index
n_outliers = anomaly[ anomaly['Anomaly'] == True].shape[0]
print("There are", n_outliers, "anomalies in the test set out of", anomaly.shape[0], "points")
anomaly.head()
# -
# Based on the calculated statistics, any distance above the threshold value will be flagged as an anomaly.
#
# We can now merge the data in a single dataframe and save it as a .csv file:
anomaly_alldata = pd.concat([anomaly_train, anomaly])
#anomaly_alldata.to_csv('Anomaly_distance.csv')
# We can now plot the calculated anomaly metric (Mob dist), and check when it crosses the anomaly threshold (note the logarithmic y-axis).
anomaly_alldata.plot(logy=True, figsize = (15,6), ylim = [1e-1,1e3], color = ['green','red'])
plt.xticks(np.arange(0, anomaly_alldata.shape[0], 50), fontsize=10, rotation = 30)
plt.title('Whole dataset plot against anomaly threshold')
plt.show()
| data/PCA-MDistance_outliers_detection/nasabearingdataset-pca-outliers-detection_SetNo3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="IASaUioGaTdW"
# # Simple MIDI Chorder
#
# ***
#
# ### A simple, yet very capable MIDI chords detector and annotator
#
# ***
#
# ### Based upon Yating Music repo/code:
#
# https://github.com/YatingMusic/compound-word-transformer
#
# ### And chorder repo/code by <NAME>:
#
# https://github.com/joshuachang2311/chorder
#
# ***
#
# #### Project Los Angeles
#
# #### Tegridy Code 2021
#
# ***
#
# + [markdown] id="4q3vvWAueGiv"
# # Setup Environment
# + id="TqnjCzqsWIEs" cellView="form"
#@title Install dependencies
# !pip install miditoolkit
# !pip install chorder
# + cellView="form" id="1cFvJby0bhl8"
#@title Import needed modules and create IO dirs
import os
import copy
import numpy as np
import multiprocessing as mp
import miditoolkit
from miditoolkit.midi import parser as mid_parser
from miditoolkit.pianoroll import parser as pr_parser
from miditoolkit.midi.containers import Marker, Instrument, TempoChange
from chorder import Dechorder
if not os.path.exists('./Input_MIDIs'):
os.mkdir('./Input_MIDIs')
if not os.path.exists('./Chorded_MIDIs'):
os.mkdir('./Chorded_MIDIs')
# + [markdown] id="0mJ_dRpLeI4-"
# # Chord MIDIs
#
# Default input dir is ./Input_MIDIs. Upload your MIDIs in this dir.
#
# Default output dir is ./Chorded_MIDIs. Pick-up your chorded MIDIs from this dir.
# + id="jAcwRokPWGAc" cellView="form"
#@title Run this code to chord your MIDIs
num2pitch = {
0: 'C',
1: 'C#',
2: 'D',
3: 'D#',
4: 'E',
5: 'F',
6: 'F#',
7: 'G',
8: 'G#',
9: 'A',
10: 'A#',
11: 'B',
}
def traverse_dir(
root_dir,
extension=('mid', 'MID', 'midi'),
amount=None,
str_=None,
is_pure=False,
verbose=False,
is_sort=False,
is_ext=True):
if verbose:
print('[*] Scanning...')
file_list = []
cnt = 0
for root, _, files in os.walk(root_dir):
for file in files:
if file.endswith(extension):
if (amount is not None) and (cnt == amount):
break
if str_ is not None:
if str_ not in file:
continue
mix_path = os.path.join(root, file)
pure_path = mix_path[len(root_dir)+1:] if is_pure else mix_path
if not is_ext:
ext = pure_path.split('.')[-1]
pure_path = pure_path[:-(len(ext)+1)]
if verbose:
print(pure_path)
file_list.append(pure_path)
cnt += 1
if verbose:
print('Total: %d files' % len(file_list))
print('Done!!!')
if is_sort:
file_list.sort()
return file_list
def proc_one(path_infile, path_outfile):
print('----')
print(' >', path_infile)
print(' >', path_outfile)
# load
midi_obj = miditoolkit.midi.parser.MidiFile(path_infile)
midi_obj_out = copy.deepcopy(midi_obj)
notes = midi_obj.instruments[0].notes
notes = sorted(notes, key=lambda x: (x.start, x.pitch))
# --- chord --- #
    # extract chords
chords = Dechorder.dechord(midi_obj)
markers = []
for cidx, chord in enumerate(chords):
if chord.is_complete():
chord_text = num2pitch[chord.root_pc] + '_' + chord.quality + '_' + num2pitch[chord.bass_pc]
else:
chord_text = 'N_N_N'
markers.append(Marker(time=int(cidx*480), text=chord_text))
# dedup
prev_chord = None
dedup_chords = []
for m in markers:
if m.text != prev_chord:
prev_chord = m.text
dedup_chords.append(m)
# --- global properties --- #
# global tempo
tempos = [b.tempo for b in midi_obj.tempo_changes][:40]
tempo_median = np.median(tempos)
    global_bpm = int(tempo_median)
print(' > [global] bpm:', global_bpm)
# === save === #
# mkdir
fn = os.path.basename(path_outfile)
os.makedirs(path_outfile[:-len(fn)], exist_ok=True)
# markers
midi_obj_out.markers = dedup_chords
midi_obj_out.markers.insert(0, Marker(text='global_bpm_'+str(int(global_bpm)), time=0))
# save
midi_obj_out.instruments[0].name = 'piano'
midi_obj_out.dump(path_outfile)
if __name__ == '__main__':
# paths
path_indir = './Input_MIDIs'
path_outdir = './Chorded_MIDIs'
os.makedirs(path_outdir, exist_ok=True)
# list files
midifiles = traverse_dir(
path_indir,
is_pure=True,
is_sort=True)
n_files = len(midifiles)
    print('num files:', n_files)
# collect
data = []
for fidx in range(n_files):
path_midi = midifiles[fidx]
print('{}/{}'.format(fidx, n_files))
# paths
path_infile = os.path.join(path_indir, path_midi)
path_outfile = os.path.join(path_outdir, path_midi)
# append
data.append([path_infile, path_outfile])
    # run with a multiprocessing pool
pool = mp.Pool()
pool.starmap(proc_one, data)
# + [markdown] id="SDoNb1T2d42v"
# # Congrats! You did it! :)
| tegridy-tools/notebooks/Simple_MIDI_Chorder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from optics_calcs.opticscalcs import deltaKZ
# -
deltaKZ(1030,1030,0,0,'fusedsilica','water')
angles = np.arange(0,66.4,0.1)
dkzs = np.array([deltaKZ(1030, 1030, x, x, 'fusedsilica', 'water') for x in angles])
plt.figure()
plt.plot(angles,dkzs)
anglesDF = pd.DataFrame(angles)
anglesDF.to_clipboard(excel=True)
dkzsDF = pd.DataFrame(dkzs)
dkzsDF.to_clipboard(excel=True)
deltaKZ(1030,1030,66,66,'fusedsilica','water')
| Processing/dkz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from numpy import array
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import string
import os
from PIL import Image
import glob
from pickle import dump, load
from time import time
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import LSTM, Embedding, TimeDistributed, Dense, RepeatVector,\
Activation, Flatten, Reshape, concatenate, Dropout, BatchNormalization
from keras.optimizers import Adam, RMSprop
from keras.layers.wrappers import Bidirectional
from keras.layers.merge import add
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras import Input, layers
from keras import optimizers
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
# +
def load_doc(filename):
    # open the file read-only, read all text, and close it automatically
    with open(filename, 'r') as file:
        return file.read()
filename = "C:\\Users\<NAME>\Anaconda3\AAIC\Flickr8k_text\Flickr8k.token.txt"
#filename=open(r'''C:\Users\<NAME>\Anaconda3\AAIC\Flickr8k_text\Flickr8k.token.txt''')
# load descriptions
doc = load_doc(filename)
print(doc[:300])
# +
def load_descriptions(doc):
mapping = dict()
# process lines
for line in doc.split('\n'):
# split line by white space
tokens = line.split()
        if len(tokens) < 2:  # skip empty or malformed lines
            continue
# take the first token as the image id, the rest as the description
image_id, image_desc = tokens[0], tokens[1:]
# extract filename from image id
image_id = image_id.split('.')[0]
# convert description tokens back to string
image_desc = ' '.join(image_desc)
# create the list if needed
if image_id not in mapping:
mapping[image_id] = list()
# store description
mapping[image_id].append(image_desc)
return mapping
# parse descriptions
descriptions = load_descriptions(doc)
print('Loaded: %d ' % len(descriptions))
# -
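# The parsing above can be traced on a single, made-up Flickr8k-style token line
# (`image_id#caption_number`, a tab, then the caption); the sample text below is
# illustrative only:

```python
# One hypothetical line of the token file, parsed the same way as in
# load_descriptions above.
sample = "1000268201_693b08cb0e.jpg#0\tA child in a pink dress ."

tokens = sample.split()
image_id, image_desc = tokens[0], tokens[1:]
image_id = image_id.split('.')[0]   # drop the ".jpg#0" suffix
image_desc = ' '.join(image_desc)
print(image_id, '->', image_desc)
```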
list(descriptions.keys())[:5]
descriptions['1000268201_693b08cb0e']
descriptions['1001773457_577c3a7d70']
# +
def clean_descriptions(descriptions):
# prepare translation table for removing punctuation
table = str.maketrans('', '', string.punctuation)
for key, desc_list in descriptions.items():
for i in range(len(desc_list)):
desc = desc_list[i]
# tokenize
desc = desc.split()
# convert to lower case
desc = [word.lower() for word in desc]
# remove punctuation from each token
desc = [w.translate(table) for w in desc]
# remove hanging 's' and 'a'
desc = [word for word in desc if len(word)>1]
# remove tokens with numbers in them
desc = [word for word in desc if word.isalpha()]
# store as string
desc_list[i] = ' '.join(desc)
# clean descriptions
clean_descriptions(descriptions)
# -
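# The cleaning steps above (lowercase, strip punctuation, drop one-letter and
# non-alphabetic tokens) can be traced on one made-up caption:

```python
import string

# same translation table as clean_descriptions above
table = str.maketrans('', '', string.punctuation)
desc = "A dog's 2 toys, RED and blue .".split()
desc = [w.lower() for w in desc]            # lowercase
desc = [w.translate(table) for w in desc]   # strip punctuation
desc = [w for w in desc if len(w) > 1]      # drop hanging single characters
desc = [w for w in desc if w.isalpha()]     # drop non-alphabetic tokens
cleaned = ' '.join(desc)
print(cleaned)   # dogs toys red and blue
```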
descriptions['1000268201_693b08cb0e']
descriptions['1001773457_577c3a7d70']
# +
# convert the loaded descriptions into a vocabulary of words
def to_vocabulary(descriptions):
# build a list of all description strings
all_desc = set()
for key in descriptions.keys():
[all_desc.update(d.split()) for d in descriptions[key]]
return all_desc
# summarize vocabulary
vocabulary = to_vocabulary(descriptions)
print('Original Vocabulary Size: %d' % len(vocabulary))
# +
# save descriptions to file, one per line
def save_descriptions(descriptions, filename):
lines = list()
for key, desc_list in descriptions.items():
for desc in desc_list:
lines.append(key + ' ' + desc)
data = '\n'.join(lines)
file = open(filename, 'w')
file.write(data)
file.close()
save_descriptions(descriptions, 'descriptions.txt')
# +
# load a pre-defined list of photo identifiers
def load_set(filename):
doc = load_doc(filename)
dataset = list()
# process line by line
for line in doc.split('\n'):
# skip empty lines
if len(line) < 1:
continue
# get the image identifier
identifier = line.split('.')[0]
dataset.append(identifier)
return set(dataset)
# load training dataset (6K)
filename="C:\\Users\<NAME>\Anaconda3\AAIC\Flickr8k_text\Flickr_8k.trainImages.txt"
train = load_set(filename)
print('Dataset: %d' % len(train))
# -
#images="C:\\Users\<NAME>\Anaconda3\AAIC\Flickr8k_Dataset\Flicker8k_Dataset"
images="C://Users/<NAME>/Anaconda3/AAIC/Flickr8k_Dataset/Flicker8k_Dataset/"
#this path contains all the images
img=glob.glob(images + '*.jpg') #create a list of all images in directory
len(img)
# +
# Below file contains the names of images to be used in train data
train_images_file = "C:\\Users\<NAME>\Anaconda3\AAIC\Flickr8k_text\Flickr_8k.trainImages.txt"
# Read the train image names in a set
train_images = set(open(train_images_file, 'r').read().strip().split('\n'))
# Create a list of all the training images with their full path names
train_img = []
for i in img: # img is list of full path names of all images
if i[len(images):] in train_images: # Check if the image belongs to training set
train_img.append(i) # Add it to the list of train images
# +
# Below file contains the names of images to be used in test data
test_images_file = "C:\\Users\<NAME>\Anaconda3\AAIC\Flickr8k_text\Flickr_8k.testImages.txt"
# Read the test image names in a set
test_images = set(open(test_images_file, 'r').read().strip().split('\n'))
# Create a list of all the test images with their full path names
test_img = []
for i in img: # img is list of full path names of all images
if i[len(images):] in test_images: # Check if the image belongs to test set
test_img.append(i) # Add it to the list of test images
# +
# load clean descriptions into memory
def load_clean_descriptions(filename, dataset):
# load document
doc = load_doc(filename)
descriptions = dict()
for line in doc.split('\n'):
# split line by white space
tokens = line.split()
# split id from description
image_id, image_desc = tokens[0], tokens[1:]
# skip images not in the set
if image_id in dataset:
# create list
if image_id not in descriptions:
descriptions[image_id] = list()
# wrap description in tokens
desc = 'startseq ' + ' '.join(image_desc) + ' endseq'
# store
descriptions[image_id].append(desc)
return descriptions
# descriptions
train_descriptions = load_clean_descriptions('descriptions.txt', train)
print('Descriptions: train=%d' % len(train_descriptions))
# -
def preprocess(image_path):
# Convert all the images to size 299x299 as expected by the inception v3 model
img = image.load_img(image_path, target_size=(299, 299))
# Convert PIL image to numpy array of 3-dimensions
x = image.img_to_array(img)
# Add one more dimension
x = np.expand_dims(x, axis=0)
# preprocess the images using preprocess_input() from inception module
x = preprocess_input(x)
return x
# Load the inception v3 model
model = InceptionV3(weights='imagenet')
# Create a new model, by removing the last layer (output layer) from the inception v3
model_new = Model(model.input, model.layers[-2].output)
# Function to encode a given image into a vector of size (2048, )
def encode(image):
image = preprocess(image) # preprocess the image
fea_vec = model_new.predict(image) # Get the encoding vector for the image
fea_vec = np.reshape(fea_vec, fea_vec.shape[1]) # reshape from (1, 2048) to (2048, )
return fea_vec
# Call the function to encode all the train images
# This will take a while on CPU - Execute this only once
start = time()
encoding_train = {}
for img in train_img:
encoding_train[img[len(images):]] = encode(img)
print("Time taken in seconds =", time()-start)
import pickle
# Save the bottleneck train features to disk
with open("C:\\Users\<NAME>\Anaconda3\AAIC\encoded_train_images.pkl", "wb") as encoded_pickle:
pickle.dump(encoding_train, encoded_pickle)
# Call the function to encode all the test images - Execute this only once
start = time()
encoding_test = {}
for img in test_img:
encoding_test[img[len(images):]] = encode(img)
print("Time taken in seconds =", time()-start)
# Save the bottleneck test features to disk
with open("C:\\Users\<NAME>\Anaconda3\AAIC\encoded_test_images.pkl", "wb") as encoded_pickle:
pickle.dump(encoding_test, encoded_pickle)
train_features = load(open("C:\\Users\<NAME>\Anaconda3\AAIC\encoded_train_images.pkl", "rb"))
print('Photos: train=%d' % len(train_features))
# Create a list of all the training captions
all_train_captions = []
for key, val in train_descriptions.items():
for cap in val:
all_train_captions.append(cap)
len(all_train_captions)
# +
# Consider only words which occur at least 10 times in the corpus
word_count_threshold = 10
word_counts = {}
nsents = 0
for sent in all_train_captions:
nsents += 1
for w in sent.split(' '):
word_counts[w] = word_counts.get(w, 0) + 1
vocab = [w for w in word_counts if word_counts[w] >= word_count_threshold]
print('preprocessed words %d -> %d' % (len(word_counts), len(vocab)))
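# The frequency-threshold filtering above can be illustrated on a toy corpus
# (the captions here are made up):

```python
from collections import Counter

toy_captions = ["a dog runs", "a cat runs", "a dog sleeps"]
threshold = 2
counts = Counter(w for c in toy_captions for w in c.split())
# only words seen at least `threshold` times survive into the vocabulary
toy_vocab = sorted(w for w, n in counts.items() if n >= threshold)
print(toy_vocab)   # ['a', 'dog', 'runs']
```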
# +
ixtoword = {}
wordtoix = {}
ix = 1
for w in vocab:
wordtoix[w] = ix
ixtoword[ix] = w
ix += 1
# -
vocab_size = len(ixtoword) + 1 # one for appended 0's
vocab_size
# +
# convert a dictionary of clean descriptions to a list of descriptions
def to_lines(descriptions):
all_desc = list()
for key in descriptions.keys():
[all_desc.append(d) for d in descriptions[key]]
return all_desc
# calculate the length of the description with the most words
def max_length(descriptions):
lines = to_lines(descriptions)
return max(len(d.split()) for d in lines)
# determine the maximum sequence length
max_length = max_length(train_descriptions)
print('Description Length: %d' % max_length)
# -
# data generator, intended to be used in a call to model.fit_generator()
def data_generator(descriptions, photos, wordtoix, max_length, num_photos_per_batch):
X1, X2, y = list(), list(), list()
n=0
    # loop forever over images
while 1:
for key, desc_list in descriptions.items():
n+=1
# retrieve the photo feature
photo = photos[key+'.jpg']
for desc in desc_list:
# encode the sequence
seq = [wordtoix[word] for word in desc.split(' ') if word in wordtoix]
# split one sequence into multiple X, y pairs
for i in range(1, len(seq)):
# split into input and output pair
in_seq, out_seq = seq[:i], seq[i]
# pad input sequence
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
# encode output sequence
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
# store
X1.append(photo)
X2.append(in_seq)
y.append(out_seq)
# yield the batch data
if n==num_photos_per_batch:
yield [[array(X1), array(X2)], array(y)]
X1, X2, y = list(), list(), list()
n=0
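# The inner loop of data_generator turns each encoded caption into
# (input-prefix, next-word) training pairs. A minimal sketch, with manual
# left-padding standing in for keras' pad_sequences (the toy sequence is made up):

```python
def split_into_pairs(seq, max_len):
    # one (padded prefix, next word) pair per position in the sequence
    pairs = []
    for i in range(1, len(seq)):
        in_seq, out_seq = seq[:i], seq[i]
        padded = [0] * (max_len - len(in_seq)) + in_seq  # left-pad with zeros
        pairs.append((padded, out_seq))
    return pairs

print(split_into_pairs([2, 5, 7], 4))
# [([0, 0, 0, 2], 5), ([0, 0, 2, 5], 7)]
```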
# +
# Load GloVe vectors; glove_dir points at the folder containing the GloVe files
glove_dir = 'C:\\Users\<NAME>\Anaconda3\AAIC\glove'
embeddings_index = {} # empty dictionary
f = open(os.path.join(glove_dir, 'glove.6B.200d.txt'), encoding="utf-8")
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
# +
embedding_dim = 200
# Get a 200-dim dense GloVe vector for each word in our vocabulary
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in wordtoix.items():
#if i < max_words:
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# Words not found in the embedding index will be all zeros
embedding_matrix[i] = embedding_vector
# -
embedding_matrix.shape
inputs1 = Input(shape=(2048,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.summary()
model.layers[2]
model.layers[2].set_weights([embedding_matrix])
model.layers[2].trainable = False
model.compile(loss='categorical_crossentropy', optimizer='adam')
epochs = 10
number_pics_per_batch = 3
steps = len(train_descriptions)//number_pics_per_batch
for i in range(epochs):
    generator = data_generator(train_descriptions, train_features, wordtoix, max_length, number_pics_per_batch)
    model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
    model.save('C:\\Users\<NAME>\Anaconda3\AAIC\Model weights\model' + str(i) + '.h5')
for i in range(epochs):
    generator = data_generator(train_descriptions, train_features, wordtoix, max_length, number_pics_per_batch)
    model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
    model.save('C:\\Users\Debojit Dutta\Anaconda3\AAIC\Model weights\model' + str(i) + '.h5')
model.save_weights('C:\\Users\<NAME>\Anaconda3\AAIC\Model weights\model_30.h5')
# +
model.load_weights('C:\\Users\Debojit Dutta\Anaconda3\AAIC\Model weights\model_30.h5')
# -
images="C://Users/<NAME>/Anaconda3/AAIC/Flickr8k_Dataset/Flicker8k_Dataset/"
with open("C:\\Users\<NAME>\Anaconda3\AAIC\encoded_test_images.pkl", "rb") as encoded_pickle:
encoding_test = load(encoded_pickle)
def greedySearch(photo):
in_text = 'startseq'
for i in range(max_length):
sequence = [wordtoix[w] for w in in_text.split() if w in wordtoix]
sequence = pad_sequences([sequence], maxlen=max_length)
yhat = model.predict([photo,sequence], verbose=0)
yhat = np.argmax(yhat)
word = ixtoword[yhat]
in_text += ' ' + word
if word == 'endseq':
break
final = in_text.split()
final = final[1:-1]
final = ' '.join(final)
return final
z=7
z+=1
pic = list(encoding_test.keys())[z]
image = encoding_test[pic].reshape((1,2048))
x=plt.imread(images+pic)
plt.imshow(x)
plt.show()
print("Greedy:",greedySearch(image))
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from integrators import contact as ic
from integrators.common import rk4, pad_and_cumsum
# +
import numpy as np
from scipy.ndimage.interpolation import shift
import matplotlib.pyplot as plt
plt.style.use("fast") # alt: 'seaborn-white'
# plt.rcParams.update({'font.size': 20, 'font.family': 'serif', 'font.weight':'normal'})
plt.rcParams["font.size"] = 16
plt.rcParams["font.family"] = "serif"
plt.rcParams["axes.labelsize"] = 20
plt.rcParams["xtick.labelsize"] = 14
plt.rcParams["xtick.direction"] = "in"
plt.rcParams["xtick.bottom"] = True
plt.rcParams["xtick.major.size"] = 5
plt.rcParams["ytick.labelsize"] = 14
plt.rcParams["ytick.direction"] = "in"
plt.rcParams["ytick.left"] = True
plt.rcParams["ytick.major.size"] = 5
plt.rcParams["legend.fontsize"] = 16
plt.rcParams["mathtext.fontset"] = "cm"
plt.rcParams["savefig.bbox"] = "tight"
# -
class Osc:
def __init__(self, alpha):
self.alpha = alpha
    def f(self, t):
        return self.alpha
def V(self, q, t):
return q ** 2 / 2
def Vq(self, q, t):
return q
# +
upper_error_bound = lambda a, dt, p0, q0: dt ** 3 / 12 * abs(q0 * a + p0 * (a ** 2 - 1))
upper_error_bound_p = (
lambda a, dt, p0, q0: dt ** 3 / 12 * abs(p0 * a + q0 * (2 + a ** 2))
)
total_error_bound = lambda a, dt, p0, q0: np.linalg.norm(
[upper_error_bound(a, dt, p0, q0), upper_error_bound_p(a, dt, p0, q0)]
)
def exact(a, t):
discriminant = np.lib.scimath.sqrt(a ** 2 - 4)
return np.real(
np.exp(-1 / 2 * (discriminant + a) * t)
* ((discriminant + 2 + a) * np.exp(discriminant * t) + discriminant - 2 - a)
/ (2 * discriminant)
)
def exactp(a, t):
discriminant = np.lib.scimath.sqrt(a ** 2 - 4)
return np.real(
np.exp(-1 / 2 * (discriminant + a) * t)
* (2 + a + discriminant + (discriminant - 2 - a) * np.exp(discriminant * t))
/ (2 * discriminant)
)
# -
idx = 0
for t0, tf, dt in [(0.0, 50.0, 0.5), (0.0, 50.0, 0.001)]:
tspan = np.arange(t0, tf, dt)
steps = len(tspan)
err = np.empty([steps], dtype=np.float64)
for (alpha, p0, q0) in [
(0.125, 1.0, 1.0)
]: # ,(0.2, 1.0, 1.0),(0.5, 1.0, 1.0), (1, 1.0, 1.0), (5, 1.0, 1.0)]:
do = Osc(alpha)
sol, _, _ = ic.integrate(ic.step, do, tspan, p0, q0, 0.0)
ex = lambda tspan: exact(alpha, tspan)
ex1 = lambda tspan: exactp(alpha, tspan)
plt.figure(figsize=(15, 10))
plt.suptitle(
f"$\\gamma = {do.alpha}$, $(p_0, q_0) = {p0}, {q0}$, $\\tau = {dt}$",
size=16,
)
################
plt.subplot(221)
plt.plot(tspan, ex(tspan), linewidth=1, label="Exact")
plt.plot(tspan, sol[:, 1], linewidth=1, label="Numerical")
plt.legend()
plt.xlabel("$t$")
plt.ylabel("$q$")
################
plt.subplot(223)
plt.plot(tspan, (abs(sol[:, 1] - ex(tspan))), linewidth=1, label="Numerical")
plt.plot(
tspan,
pad_and_cumsum([total_error_bound(alpha, dt, p0, q0) for p0, q0 in sol[:]]),
linewidth=1,
label="Estimated",
)
plt.legend(loc="lower right")
plt.yscale("log")
plt.xlabel("$t$")
plt.ylabel("Error on $q$")
################
plt.subplot(222)
plt.plot(tspan, ex1(tspan), linewidth=1, label="Exact")
plt.plot(tspan, sol[:, 0], linewidth=1, label="Numerical")
plt.legend()
plt.xlabel("$t$")
plt.ylabel("$p$")
################
plt.subplot(224)
plt.plot(tspan, (abs(sol[:, 0] - ex1(tspan))), linewidth=1, label="Numerical")
plt.plot(
tspan,
pad_and_cumsum([total_error_bound(alpha, dt, p0, q0) for p0, q0 in sol[:]]),
linewidth=1,
label="Estimated",
)
plt.legend(loc="lower right")
plt.yscale("log")
plt.xlabel("$t$")
plt.ylabel("Error on $p$")
plt.subplots_adjust(wspace=0.25, hspace=0.25, top=0.93)
idx += 1
plt.savefig(f"damped_{idx}.pdf")
plt.show()
# +
t0, tf = (0.0, 100.0)
dt = 0.001  # previously dt carried over implicitly from the last loop iteration above
tspan = np.arange(t0, tf, dt)
tspansmall = np.arange(t0, tf, dt / 8)
for (alpha, p0, q0) in [(0.01, 1.0, 1.0), (0.1, 1.0, 1.0), (1.0, 1.0, 1.0)]:
do = Osc(alpha)
plt.figure(figsize=(15, 10))
plt.suptitle(f"$\\alpha = {do.alpha}$, $(p_0, q_0) = {p0}, {q0}$")
plt.subplot(221)
plt.title(f"dt = {dt}")
sol, _, _ = ic.integrate(ic.step, do, tspan, p0, q0, 0.0)
solsmall, _, _ = ic.integrate(ic.step, do, tspansmall, p0, q0, 0.0)
ex = lambda tspan: exact(alpha, tspan)
plt.plot(tspan, ex(tspan))
plt.plot(tspan, sol[:, 1])
plt.subplot(222)
plt.title(f"dt = {dt/8}")
plt.plot(tspansmall, ex(tspansmall), linewidth=1)
plt.plot(tspansmall, solsmall[:, 1], linewidth=1)
plt.subplot(223)
plt.plot(tspan, abs(sol[:, 1] - ex(tspan)), linewidth=1)
plt.plot(tspan, np.cumsum(abs(sol[:, 1] - ex(tspan))), linewidth=1)
plt.plot(
tspan,
pad_and_cumsum([total_error_bound(alpha, dt, p0, q0) for p0, q0 in sol[:]]),
linewidth=1,
)
plt.yscale("log")
plt.subplot(224)
plt.plot(tspansmall, abs(solsmall[:, 1] - ex(tspansmall)), linewidth=1)
plt.plot(tspansmall, np.cumsum(abs(solsmall[:, 1] - ex(tspansmall))), linewidth=1)
plt.plot(
tspansmall,
pad_and_cumsum(
[total_error_bound(alpha, dt, p0, q0) for p0, q0 in solsmall[:]]
),
linewidth=1,
)
plt.yscale("log")
plt.show()
# +
dt = 0.2
t0 = 0.0
tf = 120.0
tspan = np.arange(t0, tf, dt)
plt.figure(figsize=(15, 10))
plt.subplot(211)
for (alpha, p0, q0) in [(0.01, 1.0, 0.0), (0.1, 1.0, 0.0), (1.0, 1.0, 0.0)]:
do = Osc(alpha)
sol, _, _ = ic.integrate(ic.step, do, tspan, p0, q0, 0.0)
soll, _, _ = ic.integrate(ic.variational_step, do, tspan, p0, q0, 0.0)
plt.plot(sol[:, 1], sol[:, 0], linewidth=0.8)
plt.plot(soll[:, 1], soll[:, 0], linewidth=0.8)
plt.subplot(212)
for (alpha, p0, q0) in [(0.01, 1.0, 0.0), (0.1, 1.0, 0.0), (1.0, 1.0, 0.0)]:
    do = Osc(alpha)  # rebuild the oscillator; `do` was previously left stale from the loop above
    # FIXME: pointless to do it twice...
    sol, _, _ = ic.integrate(ic.step, do, tspan, p0, q0, 0.0)
    soll, _, _ = ic.integrate(ic.variational_step, do, tspan, p0, q0, 0.0)
plt.plot(tspan, sol[:, 0], linewidth=0.8)
plt.plot(tspan, soll[:, 0], linewidth=0.8)
plt.show()
# -
| Damped Oscillator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Node Attribute Inference (multi-class) using GraphSAGE and the Pubmed-Diabetes citation network with calibration
# This notebook demonstrates probability calibration for multi-class node attribute inference. The classifier used is GraphSAGE and the dataset is the citation network Pubmed-Diabetes. Our task is to predict the subject of a paper (the nodes in the graph) that is one of 3 classes. The data are the network structure and for each paper a 500-dimensional TF/IDF word vector.
#
# The notebook demonstrates the use of `stellargraph`'s `TemperatureCalibration` and `IsotonicCalibration` classes as well as supporting methods for calculating the Expected Calibration Error (ECE) and plotting reliability diagrams.
#
# Since the focus of this notebook is to demonstrate the calibration of `stellargraph`'s graph neural network models for classification, we do not go into detail on the training and evaluation of said models. We suggest the reader considers the following notebook for more details on how to train and evaluate a GraphSAGE model for node attribute inference,
#
# [Stellargraph example: GraphSAGE on the CORA citation network](https://github.com/stellargraph/stellargraph/blob/master/demos/node-classification-graphsage/graphsage-cora-node-classification-example.ipynb)
#
# **References**
# 1. Inductive Representation Learning on Large Graphs. <NAME>, <NAME>, and <NAME>. arXiv:1706.02216
# [cs.SI], 2017. ([link](http://snap.stanford.edu/graphsage/))
#
# 2. On Calibration of Modern Neural Networks. <NAME>, <NAME>, <NAME>, and <NAME>.
# ICML 2017. ([link](https://geoffpleiss.com/nn_calibration))
# +
import networkx as nx
import pandas as pd
import os
import itertools
import stellargraph as sg
from stellargraph.mapper import GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE
from tensorflow.keras import layers, optimizers, losses, metrics, Model
import tensorflow as tf
import numpy as np
from sklearn import preprocessing, feature_extraction, model_selection
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegressionCV
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import accuracy_score
from stellargraph import TemperatureCalibration, IsotonicCalibration
from stellargraph import plot_reliability_diagram, expected_calibration_error
# -
# Given a GraphSAGE model, a node generator, and the number of predictions per point
# this method makes n_predictions number of predictions and then returns the average
# prediction for each query node.
def predict(model, node_generator, n_predictions=1):
preds = []
for i in range(n_predictions):
preds.append(model.predict_generator(node_generator))
preds_ar = np.array(preds)
print(preds_ar.shape)
return np.mean(preds_ar, axis=0)
# ### Some global parameters
epochs = 20  # Number of training epochs for the GraphSAGE model.
n_predictions = 5 # number of predictions per query node
# ### Loading the Pubmed-Diabetes network data
# **Downloading the dataset:**
#
# The dataset used in this demo can be downloaded from https://linqs-data.soe.ucsc.edu/public/Pubmed-Diabetes.tgz
#
# The following is the description of the dataset:
#
# > The Pubmed Diabetes dataset consists of 19717 scientific publications from PubMed database
# > pertaining to diabetes classified into one of three classes. The citation network consists
# > of 44338 links. Each publication in the dataset is described by a TF/IDF weighted word
# > vector from a dictionary which consists of 500 unique words.
#
# Download and unzip the `Pubmed-Diabetes.tgz` file to a location on your computer.
#
# Set the `data_dir` variable to point to the location of the dataset.
data_dir = "~/data/pubmed/Pubmed-Diabetes/data"
# Now prepare the data so that we can create a `networkx` object that can be used by `stellargraph`.
# Load the graph from edgelist
edgelist = pd.read_csv(os.path.join(data_dir, 'Pubmed-Diabetes.DIRECTED.cites.tab'),
sep="\t", skiprows=2,
header=None )
edgelist.drop(columns=[0,2], inplace=True)
edgelist.columns = ['source', 'target']
# delete the unnecessary "paper:" prefix (str.lstrip strips a character set,
# not a literal prefix, so slice it off instead)
edgelist['source'] = edgelist['source'].map(lambda x: x[len('paper:'):])
edgelist['target'] = edgelist['target'].map(lambda x: x[len('paper:'):])
edgelist["label"] = "cites" # set the edge type
edgelist.head()
Gnx = nx.from_pandas_edgelist(edgelist, edge_attr="label")
# Load the features and subject for the nodes
# +
nodes_as_dict = []
with open(os.path.join(os.path.expanduser(data_dir), "Pubmed-Diabetes.NODE.paper.tab")) as fp:
for line in itertools.islice(fp, 2, None):
line_res = line.split("\t")
pid = line_res[0]
feat_name = ['pid'] + [l.split("=")[0] for l in line_res[1:]][:-1] # delete summary
feat_value = [l.split("=")[1] for l in line_res[1:]][:-1] # delete summary
feat_value = [pid] + [ float(x) for x in feat_value ] # change to numeric from str
row = dict(zip(feat_name, feat_value))
nodes_as_dict.append(row)
# Create a Pandas dataframe holding the node data
node_data = pd.DataFrame(nodes_as_dict)
node_data.fillna(0, inplace=True)
node_data['label'] = node_data['label'].astype(int)
node_data['label'] = node_data['label'].astype(str)
# -
node_data.head()
set(node_data["label"])
node_data['pid'].dtype
node_data['pid'] = node_data.pid.astype(str)
node_data.index = node_data['pid']
node_data.drop(columns=['pid'], inplace=True)
node_data.head()
# ### Splitting the data
# For machine learning we want to take a subset of the nodes for training, and use the rest for testing. We'll use scikit-learn again to do this
# +
train_data, test_data = model_selection.train_test_split(node_data,
train_size=0.75,
test_size=None,
stratify=node_data['label'])
train_data, val_data = model_selection.train_test_split(train_data,
train_size=0.75,
test_size=None,
stratify=train_data['label'])
# -
train_data.shape, val_data.shape, test_data.shape
train_data.head()
val_data.head()
test_data.head()
# Note using stratified sampling gives the following counts:
from collections import Counter
Counter(train_data['label']), Counter(val_data['label']), Counter(test_data['label'])
# The training set has class imbalance that might need to be compensated, e.g., via using a weighted cross-entropy loss in model training, with class weights inversely proportional to class support. However, we will ignore the class imbalance in this example, for simplicity.
# ### Converting to numeric arrays
# For our categorical target, we will use one-hot vectors that will be fed into a soft-max Keras layer during training. To do this conversion we use scikit-learn's `DictVectorizer`.
# +
target_encoding = feature_extraction.DictVectorizer(sparse=False)
train_targets = target_encoding.fit_transform(train_data[["label"]].to_dict('records'))
val_targets = target_encoding.transform(val_data[["label"]].to_dict('records'))  # transform only; the vectorizer is already fitted on the training labels
test_targets = target_encoding.transform(test_data[["label"]].to_dict('records'))
# -
train_targets
# We now do the same for the node attributes we want to use to predict the subject. These are the feature vectors that the Keras model will use as input. The dataset contains attributes 'w-*' that correspond to words found in that publication.
node_features = node_data.drop(columns=['label'])
node_features.head()
# ## Creating the GraphSAGE model in Keras
# Now create a StellarGraph object from the NetworkX graph and the node features and targets. It is StellarGraph objects that we use in this library to perform machine learning tasks on.
G = sg.StellarGraph(Gnx, node_features=node_features)
print(G.info())
# To feed data from the graph to the Keras model we need a node generator. The node generators are specialized to the model and the learning task so we choose the `GraphSAGENodeGenerator` as we are predicting node attributes with a GraphSAGE model.
#
# We need two other parameters, the `batch_size` to use for training and the number of nodes to sample at each level of the model. Here we choose a two-level model with 10 nodes sampled in the first layer, and 5 in the second.
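# With `num_samples = [10, 5]`, each head node draws at most 10 first-hop
# neighbours and 5 second-hop neighbours per first-hop sample, i.e. up to
# 1 + 10 + 50 = 61 sampled nodes per head node. A quick check of that arithmetic
# (the figures simply restate the parameters chosen below):

```python
samples = [10, 5]  # neighbours sampled per hop, as in num_samples below
total = 1          # the head node itself
per_layer = 1
for n in samples:
    per_layer *= n   # nodes sampled at this depth
    total += per_layer
print(total)   # 61
```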
batch_size = 50
num_samples = [10, 5]
# A `GraphSAGENodeGenerator` object is required to send the node features in sampled subgraphs to Keras
generator = GraphSAGENodeGenerator(G, batch_size, num_samples)
# For training we map only the training nodes returned from our splitter and the target values.
train_gen = generator.flow(train_data.index, train_targets)
# Now we can specify our machine learning model, we need a few more parameters for this:
#
# * the `layer_sizes` is a list of hidden feature sizes of each layer in the model. In this example we use 32-dimensional hidden node features at each layer.
# * The `bias` and `dropout` are internal parameters of the model.
graphsage_model = GraphSAGE(
layer_sizes=[32, 32],
generator=generator,
bias=True,
dropout=0.5,
)
# Now we create a model to predict the 3 categories using Keras softmax layers.
# +
x_inp, x_out = graphsage_model.build()
logits = layers.Dense(units=train_targets.shape[1], activation="linear")(x_out)
prediction = layers.Activation(activation="softmax")(logits)
# -
prediction.shape
# ### Training the model
# Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `graph_model` and outputs being the predictions from the softmax layer
model = Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=optimizers.Adam(lr=0.005),
loss=losses.categorical_crossentropy,
metrics=[metrics.categorical_accuracy],
)
# Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the validation set (we need to create generators over the validation and test data for this)
val_gen = generator.flow(val_data.index, val_targets)
test_gen = generator.flow(test_data.index, test_targets)
history = model.fit_generator(
train_gen,
epochs=epochs,
validation_data=val_gen,
verbose=0,
shuffle=True,
)
# +
import matplotlib.pyplot as plt
# %matplotlib inline
def plot_history(history):
metrics = sorted(history.history.keys())
metrics = metrics[:len(metrics)//2]
for m in metrics:
# summarize history for metric m
plt.plot(history.history[m])
plt.plot(history.history['val_' + m])
plt.title(m)
plt.ylabel(m)
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
# -
plot_history(history)
# Now we have trained the model we can evaluate on the test set.
test_metrics = model.evaluate_generator(test_gen)
print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
print("\t{}: {:0.4f}".format(name, val))
# ## Calibration Curves
#
#
# We want to determine if the classifier produces well-calibrated probabilities. Calibration curves, also known as reliability diagrams, are a visual method for this task; see reference [2] for a description.
#
# Diagnosis of model miscalibration should be performed on a held-out dataset that was not used for training. We are going to utilise our test set for this purpose. Equivalently, we can use our validation dataset.
test_nodes = test_data.index
test_node_generator = generator.flow(test_nodes)
# test_predictions holds the model's probabilistic output predictions
test_predictions = predict(model, test_node_generator, n_predictions=n_predictions)
# This produces a list of dictionaries (one entry per point in test set) and one entry
# in the dictionary for each class label, label=1, label=2, label=3.
y_test_pred = target_encoding.inverse_transform(test_predictions)
# Convert the list of dictionaries to a dataframe so that it is easier to work with the data
test_pred_results = pd.DataFrame(y_test_pred)
test_pred_results.index = test_data.index
test_pred_results.head()
# We are going to draw one calibration curve for each column in `test_pred_results`.
test_pred = test_pred_results.values
test_pred.shape
calibration_data = []
for i in range(test_pred.shape[1]): # iterate over classes
calibration_data.append(calibration_curve(y_prob=test_pred[:, i],
y_true=test_targets[:, i],
n_bins=10,
normalize=True))
calibration_data[0], type(calibration_data[0])
# Also calculate Expected Calibration Error (ECE) for each class. See reference [2] for the definition of ECE.
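# As a hedged sketch (the notebook itself uses stellargraph's
# `expected_calibration_error`), ECE is a support-weighted average of
# |accuracy − confidence| over confidence bins; the toy numbers below are made up:

```python
import numpy as np

def ece_sketch(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece_val = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece_val += mask.mean() * gap   # weight by the bin's share of samples
    return ece_val

# all four toy predictions fall in the (0.5, 1] bin:
# accuracy 0.75 vs mean confidence 0.7875 -> ECE 0.0375
print(ece_sketch([0.6, 0.7, 0.95, 0.9], [1, 1, 0, 1], n_bins=2))
```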
ece = []
for i in range(test_pred.shape[1]):
fraction_of_positives, mean_predicted_value = calibration_data[i]
ece.append(expected_calibration_error(prediction_probabilities=test_pred[:, i],
accuracy=fraction_of_positives,
confidence=mean_predicted_value))
ece
# Draw the reliability diagrams for each class
plot_reliability_diagram(calibration_data, test_pred, ece=ece)
# ## Temperature scaling calibration
# Temperature scaling is an extension of [Platt scaling](https://en.wikipedia.org/wiki/Platt_scaling) for calibrating multi-class classification models. It was proposed in reference [2].
#
# Temperature scaling uses a single parameter called the `temperature` to scale a classifier's non-probabilistic outputs (logits) before the application of the softmax operator that generates the model's probabilistic outputs.
#
# $\hat{q}_i = \max\limits_{k} \sigma_{SM}(\mathbf{z}_i/T)^{(k)}$
#
# where $\hat{q}_i$ is the calibrated probability for the predicted class of the i-th node; $\mathbf{z}_i$ is the vector of logits; $T$ is the temperature; $k$ is the k-th class; and, $\sigma_{SM}$ is the softmax function.
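# The mechanics of the formula above can be illustrated with plain NumPy (a sketch using
# made-up logits, independent of the model in this notebook): dividing the logits by $T>1$
# softens the softmax output while leaving the argmax, and hence the predicted class, unchanged.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scale(logits, T):
    """sigma_SM(z / T): T > 1 softens the distribution, 0 < T < 1 sharpens it."""
    return softmax(np.asarray(logits, dtype=float) / T)

logits = np.array([[2.0, 1.0, 0.0]])   # made-up logits for a single node
p1 = temperature_scale(logits, T=1.0)  # ordinary softmax
p2 = temperature_scale(logits, T=2.0)  # softer, less confident probabilities
```

# The predicted class (`argmax`) is identical for `p1` and `p2`, which is why temperature
# scaling leaves the classifier's accuracy unchanged.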
# This model outputs the non-probabilistic logits required for temperature scaling.
score_model = Model(inputs=x_inp, outputs=logits)
# Prepare the training data such that the inputs are the model's output logits and the corresponding true class labels are one-hot encoded.
#
# We are going to train the calibration model on the validation dataset.
val_nodes = val_data.index
val_node_generator = generator.flow(val_nodes)
test_score_predictions = predict(score_model, test_node_generator, n_predictions=n_predictions)
val_score_predictions = predict(score_model, val_node_generator, n_predictions=n_predictions)
test_score_predictions.shape, val_score_predictions.shape
x_cal_train_all = val_score_predictions
y_cal_train_all = val_targets
# We are going to split the above data to a training and validation set. We are going to use the former for training the calibration model and the latter for early stopping.
x_cal_train, x_cal_val, y_cal_train, y_cal_val = model_selection.train_test_split(x_cal_train_all, y_cal_train_all)
x_cal_train.shape, x_cal_val.shape, y_cal_train.shape, y_cal_val.shape
# Create the calibration object
calibration_model_temperature = TemperatureCalibration(epochs=1000)
calibration_model_temperature
# Now call the `fit` method to train the calibration model.
calibration_model_temperature.fit(x_train=x_cal_train,
y_train=y_cal_train,
x_val=x_cal_val,
y_val=y_cal_val)
calibration_model_temperature.plot_training_history()
# Now we can take the GraphSAGE logits, scale them by `temperature` and then apply the `softmax` to obtain the calibrated probabilities for each class.
#
# **Note** that scaling the logits by `temperature` does not change the predicted classes, so the model's accuracy will not change and there is no need to recalculate it.
test_predictions_calibrated_temperature = calibration_model_temperature.predict(x=test_score_predictions)
test_predictions_calibrated_temperature.shape
# Now plot the calibration curves and calculate the ECE for each class. We should expect the ECE to be lower after calibration. If not, then a different calibration method should be considered, e.g., Isotonic Regression as described later in this notebook.
calibration_data_after_temperature_scaling = []
for i in range(test_predictions_calibrated_temperature.shape[1]): # iterate over classes
calibration_data_after_temperature_scaling.append(calibration_curve(y_prob=test_predictions_calibrated_temperature[:, i],
y_true=test_targets[:, i],
n_bins=10,
normalize=True))
ece_after_scaling_temperature = []
for i in range(test_predictions_calibrated_temperature.shape[1]):
fraction_of_positives, mean_predicted_value = calibration_data_after_temperature_scaling[i]
ece_after_scaling_temperature.append(expected_calibration_error(prediction_probabilities=test_predictions_calibrated_temperature[:, i],
accuracy=fraction_of_positives,
confidence=mean_predicted_value))
ece_after_scaling_temperature
plot_reliability_diagram(calibration_data_after_temperature_scaling,
test_predictions_calibrated_temperature,
ece=ece_after_scaling_temperature )
# ## Isotonic Regression
#
# We extend [Isotonic calibration](https://scikit-learn.org/stable/modules/generated/sklearn.isotonic.IsotonicRegression.html#sklearn.isotonic.IsotonicRegression) to the multi-class case by calibrating one-vs-rest (OVR) models, one for each class.
#
# At test time, we calibrate the predictions for each class and then normalize the vector so that its entries sum to one, making the output of the calibration a probability distribution.
# **Note** that the input to the Isotonic Calibration model is the classifier's probabilistic outputs as compared to Temperature scaling where the input was the logits.
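# This per-class scheme can be sketched directly with scikit-learn's `IsotonicRegression`
# (the probabilities and one-hot labels below are hypothetical; the notebook's
# `IsotonicCalibration` class wraps the same idea but its implementation may differ):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical uncalibrated probabilities (rows sum to 1) and one-hot true labels.
val_probs = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6], [0.6, 0.3, 0.1]])
val_onehot = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])

# Fit one isotonic regressor per class (one-vs-rest).
models = [IsotonicRegression(out_of_bounds="clip").fit(val_probs[:, k], val_onehot[:, k])
          for k in range(val_probs.shape[1])]

def calibrate(probs):
    cal = np.column_stack([m.predict(probs[:, k]) for k, m in enumerate(models)])
    return cal / cal.sum(axis=1, keepdims=True)  # renormalise rows to a distribution
```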
test_pred.shape # Holds the probabilistic predictions for each query node
# The probabilistic predictions for the validation set
val_predictions = predict(model, val_node_generator, n_predictions=n_predictions)
val_predictions.shape
# Create the calibration object of type `IsotonicCalibration`.
isotonic_calib = IsotonicCalibration()
# Now call the `fit` method to train the calibration model.
isotonic_calib.fit(x_train=val_predictions, y_train=val_targets)
test_pred_calibrated_isotonic = isotonic_calib.predict(test_pred)
test_pred_calibrated_isotonic.shape
# Now plot the calibration curves and calculate the ECE for each class. We should expect the ECE to be lower after calibration. If not, then a different calibration method should be considered, e.g., Temperature Scaling as described earlier in this notebook.
calibration_data_after_isotonic_scaling = []
for i in range(test_pred_calibrated_isotonic.shape[1]): # iterate over classes
calibration_data_after_isotonic_scaling.append(calibration_curve(y_prob=test_pred_calibrated_isotonic[:, i],
y_true=test_targets[:, i],
n_bins=10,
normalize=True))
ece_after_scaling_isotonic = []
for i in range(test_pred_calibrated_isotonic.shape[1]):
fraction_of_positives, mean_predicted_value = calibration_data_after_isotonic_scaling[i]
ece_after_scaling_isotonic.append(expected_calibration_error(prediction_probabilities=test_pred_calibrated_isotonic[:, i],
accuracy=fraction_of_positives,
confidence=mean_predicted_value))
ece_after_scaling_isotonic
plot_reliability_diagram(calibration_data_after_isotonic_scaling,
test_pred_calibrated_isotonic,
ece=ece_after_scaling_isotonic)
# ### Compare ECE before and after calibration.
#
# Let's print the ECE for the original model before calibration and for the model after calibration using Temperature Scaling and Isotonic Regression.
#
# If model calibration is successful then either one or both of the calibrated models should have reduced ECE across all or most of the classes.
cal_error = ",".join(format(e, " 0.4f") for e in ece)
print("ECE before calibration: {}".format(cal_error))
cal_error = ",".join(format(e, " 0.4f") for e in ece_after_scaling_temperature)
print("ECE after Temperature Scaling: {}".format(cal_error))
cal_error = ",".join(format(e, " 0.4f") for e in ece_after_scaling_isotonic)
print("ECE after Isotonic Calibration: {}".format(cal_error) )
# ### Recalculate classifier accuracy before and after calibration
y_pred = np.argmax(test_pred, axis=1)
y_pred_calibrated_temperature = np.argmax(test_predictions_calibrated_temperature, axis=1)
y_pred_calibrated_isotonic = np.argmax(test_pred_calibrated_isotonic, axis=1)
print("Accuracy before calibration: {:.2f}".format(accuracy_score(y_pred=y_pred,
                                                                  y_true=np.argmax(test_targets, axis=1))))
print("Accuracy after Temperature Scaling: {:.2f}".format(accuracy_score(y_pred=y_pred_calibrated_temperature,
                                                                         y_true=np.argmax(test_targets, axis=1))))
print("Accuracy after Isotonic Calibration: {:.2f}".format(accuracy_score(y_pred=y_pred_calibrated_isotonic,
                                                                          y_true=np.argmax(test_targets, axis=1))))
# ## Conclusion
#
# This notebook demonstrated how to use temperature scaling and isotonic regression to calibrate the output probabilities of a GraphSAGE model used for multi-class node attribute inference.
| demos/calibration/calibration-pubmed-node-classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# # MNIST handwritten digits classification with parameter grid search for SVM
#
# In this notebook, we'll use [grid search](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and a validation set to find optimal values for our SVM model's hyperparameters.
#
# First, the needed imports.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
# %matplotlib inline
from pml_utils import get_mnist
import numpy as np
from sklearn import svm, datasets, __version__
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, PredefinedSplit
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# Suppress annoying warnings...
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
from distutils.version import LooseVersion as LV
assert(LV(__version__) >= LV("0.20")), "Version >= 0.20 of sklearn is required."
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# Then we load the MNIST data. The first time it downloads the data, which can take a while.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
X_train, y_train, X_test, y_test = get_mnist('MNIST')
print('MNIST data loaded: train:',len(X_train),'test:',len(X_test))
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_test', X_test.shape)
print('y_test', y_test.shape)
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# ## Linear SVM
#
# Let's start with the linear SVM trained with a subset of training data. `C` is the penalty parameter that we need to specify. Let's first try with just some guess, e.g., `C=1.0`.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
# %%time
clf_lsvm = svm.LinearSVC(C=1.0)
print(clf_lsvm.fit(X_train[:10000,:], y_train[:10000]))
pred_lsvm = clf_lsvm.predict(X_test)
print('Predicted', len(pred_lsvm), 'digits with accuracy:', accuracy_score(y_test, pred_lsvm))
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# Next, let's try grid search, i.e., we try several different values for the parameter `C`. Remember that it's important to *not* use the test set for evaluating hyperparameters. Instead we opt to set aside the last 1000 images as a validation set.
#
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
# %%time
# The values for C that we will try out
param_grid = {'C': [1, 10, 100, 1000]}
# Define the validation set
valid_split = PredefinedSplit(9000*[-1] + 1000*[0])
clf_lsvm_grid = GridSearchCV(clf_lsvm, param_grid, cv=valid_split, verbose=2)
print(clf_lsvm_grid.fit(X_train[:10000,:], y_train[:10000]))
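# In `PredefinedSplit`, a test-fold entry of `-1` keeps a sample in the training fold for every
# split, while `0` assigns it to validation fold 0. The grid search above therefore trains on the
# first 9000 images and evaluates each `C` on the last 1000. A tiny example of the semantics:

```python
from sklearn.model_selection import PredefinedSplit

# Three training samples (-1) and two validation samples (fold 0).
split = PredefinedSplit(test_fold=3 * [-1] + 2 * [0])
(train_idx, valid_idx), = split.split()
print(train_idx, valid_idx)  # [0 1 2] [3 4]
```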
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# We can now see what was the best value for C that was selected.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
print(clf_lsvm_grid.best_params_)
best_C = clf_lsvm_grid.best_params_['C']
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# Let's try predicting with our new model with optimal hyperparameters.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
clf_lsvm2 = svm.LinearSVC(C=best_C)
print(clf_lsvm2.fit(X_train[:10000,:], y_train[:10000]))
pred_lsvm2 = clf_lsvm2.predict(X_test)
print('Predicted', len(pred_lsvm2), 'digits with accuracy:', accuracy_score(y_test, pred_lsvm2))
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# ## Kernel SVM
#
# The kernel SVM typically has two hyperparameters that need to be set. For example, for a Gaussian (or RBF) kernel we also have `gamma` (Greek $\gamma$) in addition to `C`. Let's first try with some initial guesses for the values.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
# %%time
clf_ksvm = svm.SVC(decision_function_shape='ovr', kernel='rbf', C=1.0, gamma=1e-6)
print(clf_ksvm.fit(X_train[:10000,:], y_train[:10000]))
pred_ksvm = clf_ksvm.predict(X_test)
print('Predicted', len(pred_ksvm), 'digits with accuracy:', accuracy_score(y_test, pred_ksvm))
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# Now we can try grid search again, this time with two parameters. We use an even smaller subset of the training set, as it would otherwise be too slow.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
# %%time
param_grid = {'C': [1, 10, 100],
'gamma': [1e-8, 5e-8, 1e-7, 5e-7, 1e-6]}
train_items = 3000
valid_items = 500
tot_items = train_items + valid_items
valid_split = PredefinedSplit(train_items*[-1] + valid_items*[0])
clf_ksvm_grid = GridSearchCV(clf_ksvm, param_grid, cv=valid_split, verbose=2)
print(clf_ksvm_grid.fit(X_train[:tot_items,:], y_train[:tot_items]))
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# Again, let's see what parameters were selected.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
print(clf_ksvm_grid.best_params_)
best_C = clf_ksvm_grid.best_params_['C']
best_gamma = clf_ksvm_grid.best_params_['gamma']
# + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"}
# As we did the grid search on a small subset of the training set it probably makes sense to retrain the model with the selected parameters using a bigger part of the training data.
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
clf_ksvm2 = svm.SVC(decision_function_shape='ovr', kernel='rbf', C=best_C, gamma=best_gamma)
print(clf_ksvm2.fit(X_train[:10000,:], y_train[:10000]))
pred_ksvm2 = clf_ksvm2.predict(X_test)
print('Predicted', len(pred_ksvm2), 'digits with accuracy:', accuracy_score(y_test, pred_ksvm2))
# + ein.hycell=false ein.tags="worksheet-0" jupyter={"outputs_hidden": false} slideshow={"slide_type": "-"}
| notebooks/sklearn-mnist-grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import os, sys, time, copy
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import multiprocessing
from functools import partial
from tqdm import tqdm
import collections
from typing import List
import myokit
sys.path.append('../')
sys.path.append('../Protocols')
sys.path.append('../Models')
sys.path.append('../Lib')
import protocol_lib
import mod_trace
import simulator_myokit
import simulator_scipy
import vc_protocols
# +
def find_closest_index(array, t):
"""Given an array, return the index with the value closest to t."""
return (np.abs(np.array(array) - t)).argmin()
# def get_currents_with_constant_dt(xs, window=1, step_size=1):
# times = xs[0]
# currents = xs[1:]
# data_li = []
# for I in currents:
# data_temp = []
# t = 0
# while t <= times[-1] - window:
# start_index = find_closest_index(times, t)
# end_index = find_closest_index(times, t + window)
# I_window = I[start_index: end_index + 1]
# data_temp.append(sum(I_window)/len(I_window))
# t += step_size
# data_li.append(data_temp)
# return data_li
def get_currents_with_constant_dt(param, x):
window = param['window']
step_size = param['step_size']
times = x[0]
i_ion = x[1]
i_ion_window = []
t = 0
while t <= times[-1] - window:
start_index = find_closest_index(times, t)
end_index = find_closest_index(times, t + window)
I_window = i_ion[start_index: end_index + 1]
i_ion_window.append(sum(I_window)/len(I_window))
t += step_size
return i_ion_window
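# A quick self-contained check of the same windowed-averaging logic on a synthetic trace
# (a 2 ms window advanced in 1 ms steps; the values are made up for illustration):

```python
import numpy as np

def find_closest_index(array, t):
    """Index of the sample whose time is closest to t."""
    return int(np.abs(np.asarray(array) - t).argmin())

# Regularly sampled current trace; average over 2 ms windows every 1 ms.
times = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
i_ion = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])

window, step_size, out = 2.0, 1.0, []
t = 0.0
while t <= times[-1] - window:
    s = find_closest_index(times, t)
    e = find_closest_index(times, t + window)
    out.append(i_ion[s:e + 1].mean())  # mean current over [t, t + window]
    t += step_size
print(out)  # [2.0, 4.0, 6.0]
```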
# +
# VC_protocol = vc_protocols.hERG_CiPA()
# VC_protocol = vc_protocols.cav12_CiPA()
# VC_protocol = vc_protocols.lateNav15_CiPA()
VC_protocol = protocol_lib.VoltageClampProtocol() # steps=steps
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-90, duration=100) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-35, duration=40) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=200) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-40, duration=40) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=0, duration=40) ) # <- why?? vo
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=40, duration=500) )
VC_protocol.add( protocol_lib.VoltageClampRamp(voltage_start=40, voltage_end=-120, duration=200)) # ramp step
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=0, duration=100) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=60, duration=500) )
# VC_protocol.add( protocol_lib.VoltageClampRamp(voltage_start=60, voltage_end=-80, duration=200)) # ramp step
vhold = VC_protocol.steps[0].voltage
print(f'The protocol is {VC_protocol.get_voltage_change_endpoints()[-1]} ms')
# -
gen_params = {
'end_time': VC_protocol.get_voltage_change_endpoints()[-1],
'log_li' : ['ina.INa', 'inal.INaL', 'ito.Ito', 'ical.ICaL', 'ical.ICaNa', 'ical.ICaK', 'ikr.IKr', 'iks.IKs', 'ik1.IK1', 'inaca.INaCa', 'inacass.INaCa_ss', 'inak.INaK', 'ikb.IKb', 'inab.INab', 'icab.ICab', 'ipca.IpCa'],
'save_log_li' : ['ina.INa', 'ikr.IKr', 'iks.IKs', 'ito.Ito', 'ical.ICaL', 'ik1.IK1', 'inal.INaL'],
'nData' : 5,
'dataset_dir' : '../../Dataset/ohara2017_LeemV1',
'data_file_name' : 'currents',
'window' : 10,
'step_size' : 5,
}
# gen_params['dataset_dir'] = gen_params['dataset_dir'] + f"_w{gen_params['window']}_s{gen_params['step_size']}"
print( gen_params['dataset_dir'] )
cell_types = {
'Endocardial' : 0,
'Epicardial' : 1,
'Mid-myocardial' : 2,
}
sys.path.append(gen_params['dataset_dir'])
from agetdata import get_dataset, get_dataset2
gen_params['dataset_dir']
# +
# for i in range(5):
# xs, ys = get_dataset(file_numbers=range(1, 3), multi=True, use_torch=True)
# xs, ys = get_dataset(file_numbers=range(1, 31), window=10, step_size=5, multi=False, use_torch=False, get_raw=False)
xs, ys = get_dataset2(file_numbers=range(1, 31), window=10, step_size=5, multi=True, use_torch=True) # <-- fast
# -
print(xs.shape, ys.shape)
# +
dataNo = 0
sol1 = {}
sol1["I_ion"] = xs[dataNo]
y = ys[dataNo]
# sol1 = pd.DataFrame(data=sol1)
# sol1.head()
# +
def find_closest_index(array, t):
"""Given an array, return the index with the value closest to t."""
return (np.abs(np.array(array) - t)).argmin()
def get_currents_with_constant_dt(sol, window=1, step_size=1):
times = sol['Time'].values
currents = sol.drop(['Time'], axis=1)
avg_currents = collections.defaultdict(list)
t = 0
while t <= times[-1] - window:
start_index = find_closest_index(times, t)
end_index = find_closest_index(times, t + window)
currents_in_window = currents[start_index: end_index + 1]
window_avg_currents = {}
for name in currents.columns:
window_avg_currents[name] = currents_in_window[name].sum()/len(currents_in_window[name])
# window_avg_currents[name] = currents_in_window[name].min()
# window_avg_currents[name] = currents_in_window[name].max()
avg_currents['Time Start'].append(t)
avg_currents['Time End'].append(t + window)
avg_currents['Time Mid'].append((2*t + window)/2)
for key, val in window_avg_currents.items():
# print(key, val)
avg_currents[key].append(val)
t += step_size
return avg_currents
# +
start_time = time.time()
model, p, s = myokit.load("../mmt-model-files/ohara-cipa-v1-2017_JK-v1.mmt")
sim = simulator_myokit.Simulator(model, VC_protocol, max_step=1.0, abs_tol=1e-06, rel_tol=1e-6, vhold=-80) # 1e-12, 1e-14 # 1e-08, 1e-10
sim.name = "ohara2017"
f = 1.5
params = {
'cell.mode': cell_types['Mid-myocardial'],
'setting.simType': 1, # 0: AP | 1: VC
'ina.gNa' : 75.0 * f,
'inal.gNaL' : 0.0075 * 2.661 * f,
'ito.gto' : 0.02 * 4 * f,
'ical.PCa' : 0.0001 * 1.007 * 2.5 * f,
'ikr.gKr' : 4.65854545454545618e-2 * 1.3 * f, # [mS/uF]
'iks.gKs' : 0.0034 * 1.87 * 1.4 * f,
'ik1.gK1' : 0.1908 * 1.698 * 1.3 * f,
'inaca.gNaCa' : 0.0008 * 1.4,
'inak.PNaK' : 30 * 0.7,
'ikb.gKb' : 0.003,
'inab.PNab' : 3.75e-10,
'icab.PCab' : 2.5e-8,
'ipca.GpCa' : 0.0005,
}
sim.set_simulation_params(params)
print("--- %s seconds ---"%(time.time()-start_time))
# +
start_time = time.time()
g_adj_li= {
'ina.g_adj' : y[0],
'inal.g_adj' : y[1],
'ito.g_adj' : y[2],
'ical.g_adj' : y[3],
'ikr.g_adj' : y[4],
'iks.g_adj' : y[5],
'ik1.g_adj' : y[6],
# 'if.g_adj' : g_fc[7]
}
sim.set_simulation_params(g_adj_li)
sim.pre_simulate(5000, sim_type=1)
d = sim.simulate( gen_params['end_time'], extra_log=['membrane.VC', 'membrane.i_ion']+gen_params['log_li'])
sol2 = {}
sol2["Time"] = d['engine.time']
# sol2["Voltage"] = d['membrane.VC']
sol2["I_ion"] = d['membrane.i_ion'] #+ np.random.normal(0, 2, d['membrane.i_ion'].shape) # add noise
sol2["I_Na"] = sim.current_response_info.get_current(['INa'])
sol2["I_NaL"] = sim.current_response_info.get_current(['INaL'])
sol2["I_To"] = sim.current_response_info.get_current(['Ito'])
sol2["I_CaL"] = sim.current_response_info.get_current(['ICaL'])
sol2["I_Kr"] = sim.current_response_info.get_current(['IKr'])
sol2["I_Ks"] = sim.current_response_info.get_current(['IKs'])
sol2["I_K1"] = sim.current_response_info.get_current(['IK1'])
sol2 = pd.DataFrame(data=sol2)
sol2.head()
# np.random.normal(0, noise_sigma, current.shape) # add noise
print("--- %s seconds ---"%(time.time()-start_time))
# -
sol2
sol2 = get_currents_with_constant_dt(sol2, window=gen_params['window'], step_size=gen_params['step_size'])
# +
'''
Plot
'''
fig, ax = plt.subplots(2,1, figsize=(10,10))
# fig.suptitle(sim.name, fontsize=14)
axNo = 0
for name, value in sol1.items():
if name!='Time Start' and name!='Time End' and name!='Time Mid':
# ax.set_title('Simulation %d'%(simulationNo))
# axes[i].set_xlim(model_scipy.times.min(), model_scipy.times.max())
# ax.set_ylim(ylim[0], ylim[1])
ax[axNo].set_xlabel('Time (ms)')
ax[axNo].set_ylabel(f'{name}')
ax[axNo].plot( value, label='control', color='k', linewidth=5)
ax[axNo].plot( sol2[name], label='treatment', color='r', linewidth=2)
ax[axNo].legend()
ax[axNo].grid()
axNo += 1
# ax[-1].set_ylim(-5, 5)
plt.subplots_adjust(left=0.07, bottom=0.05, right=0.95, top=0.95, wspace=0.5, hspace=0.15)
plt.show()
# fig.savefig(os.path.join('Results', "C.jpg"), dpi=100)
# -
| Generate_dataset_ohara2017/check_dataset.ipynb |
;; -*- coding: utf-8 -*-
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .scm
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: Calysto Scheme 3
;; language: scheme
;; name: calysto_scheme
;; ---
;; ### Exercise 2.28
;; Write a procedure fringe that takes as argument a tree (represented as a list)
;; and returns a list whose elements are all the leaves of the tree arranged in
;; left-to-right order. For example:
;;
;; (define x (list (list 1 2) (list 3 4)))
;; (fringe x)
;; (1 2 3 4)
;; (fringe (list x x))
;; (1 2 3 4 1 2 3 4)
;; #### Notes
;; An answer can be found in "Sequence Operations" in Section 2.2.3.
;; My solution did not match the one given there.
;; This one was quite difficult.
;; The append procedure joins lists into a new list, so it never nests lists;
;; using append is therefore effective for this exercise.
(define (fringe l)
(define (iter ll)
  (cond ((not (pair? ll)) ll) ; return the leaf
        ((pair? (car ll)) (append (fringe (car ll)) (fringe (cdr ll)))) ; append requires both arguments to be lists
(else (cons (fringe (car ll)) (fringe (cdr ll))))
)
)
(iter l)
)
(define x (list (list 1 2) (list 3 4)))
x
(fringe x)
(list x x)
(fringe (list x x))
(define y (list (list (list (list 1 2) (list 3 4)) (list 5 6)) (list (list 7 8) (list 9 10)) (list 11 12)))
y
(fringe y)
(define z (list (list 1 2) (list 3 4) (list 5 6)))
z
(fringe z)
; Solution after reading Section 2.2.3
(define (fringe l)
(define (iter ll)
  (cond ((null? ll) '())
        ((not (pair? ll)) (list ll)) ; wrap the leaf in a list, since append requires list arguments
(else (append (fringe (car ll)) (fringe (cdr ll))))
)
)
(iter l)
)
(fringe x)
(fringe y)
(fringe z)
| exercises/2.28.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# *Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [<NAME>](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).*
#
# Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
# %load_ext watermark
# %watermark -a '<NAME>' -v -p torch
# - Runs on CPU or GPU (if available)
# # Model Zoo -- Autoencoder
# A simple, single-layer autoencoder that compresses the 784-pixel MNIST images into 32-dimensional vectors (roughly 24.5-times smaller representations).
# ## Imports
# +
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Device:', device)
# Hyperparameters
random_seed = 123
learning_rate = 0.005
num_epochs = 5
batch_size = 128
# Architecture
num_features = 784
num_hidden_1 = 32
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
# -
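# A quick back-of-the-envelope check of what the settings above imply: the 784-pixel images
# are compressed into 32-dimensional codes, and the two linear layers together hold about
# 51k parameters (a sketch following directly from `num_features = 784`, `num_hidden_1 = 32`):

```python
num_features = 784   # 28 x 28 MNIST pixels
num_hidden_1 = 32    # size of the compressed code

compression = num_features / num_hidden_1                # 24.5x smaller representation
enc_params = num_features * num_hidden_1 + num_hidden_1  # encoder weights + biases
dec_params = num_hidden_1 * num_features + num_features  # decoder weights + biases
print(compression, enc_params + dec_params)  # 24.5 50992
```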
# ## Model
# +
##########################
### MODEL
##########################
class Autoencoder(torch.nn.Module):
def __init__(self, num_features):
super(Autoencoder, self).__init__()
### ENCODER
self.linear_1 = torch.nn.Linear(num_features, num_hidden_1)
        # The following two lines are not necessary,
        # but used here to demonstrate how to access the weights
        # and use a different weight initialization.
        # By default, PyTorch uses Kaiming/He uniform initialization,
        # which should usually be preferred.
self.linear_1.weight.detach().normal_(0.0, 0.1)
self.linear_1.bias.detach().zero_()
### DECODER
self.linear_2 = torch.nn.Linear(num_hidden_1, num_features)
        self.linear_2.weight.detach().normal_(0.0, 0.1)
        self.linear_2.bias.detach().zero_()
def forward(self, x):
### ENCODER
encoded = self.linear_1(x)
encoded = F.leaky_relu(encoded)
### DECODER
logits = self.linear_2(encoded)
        decoded = torch.sigmoid(logits)  # F.sigmoid is deprecated
return decoded
torch.manual_seed(random_seed)
model = Autoencoder(num_features=num_features)
model = model.to(device)
##########################
### COST AND OPTIMIZER
##########################
cost_fn = torch.nn.BCELoss() #torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# ## Training
for epoch in range(num_epochs):
for batch_idx, (features, targets) in enumerate(train_loader):
# don't need labels, only the images (features)
features = features.view(-1, 28*28).to(device)
### FORWARD AND BACK PROP
decoded = model(features)
cost = cost_fn(decoded, features)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
# ## Evaluation
# +
# %matplotlib inline
import matplotlib.pyplot as plt
##########################
### VISUALIZATION
##########################
n_images = 15
image_width = 28
fig, axes = plt.subplots(nrows=2, ncols=n_images,
sharex=True, sharey=True, figsize=(20, 2.5))
orig_images = features[:n_images]
decoded_images = decoded[:n_images]
for i in range(n_images):
for ax, img in zip(axes, [orig_images, decoded_images]):
        ax[i].imshow(img[i].detach().cpu().reshape((image_width, image_width)), cmap='binary')
| code/model_zoo/pytorch_ipynb/autoencoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Crocoddyl: Contact RObot COntrol by Differential DYnamic programming Library
#
#
# ## I. Welcome to crocoddyl
# Crocoddyl is an **optimal control library for robot control under contact sequence**. Its solver is based on an efficient Differential Dynamic Programming (DDP) algorithm. Crocoddyl computes optimal trajectories along with optimal feedback gains. It uses Pinocchio for fast computation of robot dynamics and its analytical derivatives.
#
# Crocoddyl is focused on multi-contact optimal control problems (MCOP), which have the form:
#
# $$\mathbf{X}^*,\mathbf{U}^*=
# \begin{Bmatrix} \mathbf{x}^*_0,\cdots,\mathbf{x}^*_N \\
# \mathbf{u}^*_0,\cdots,\mathbf{u}^*_N
# \end{Bmatrix} =
# \arg\min_{\mathbf{X},\mathbf{U}} \sum_{k=1}^N \int_{t_k}^{t_k+\Delta t} l(\mathbf{x},\mathbf{u})dt$$
# subject to
# $$ \mathbf{\dot{x}} = \mathbf{f}(\mathbf{x},\mathbf{u}),$$
# $$ \mathbf{x}\in\mathcal{X}, \mathbf{u}\in\mathcal{U}, \boldsymbol{\lambda}\in\mathcal{K}.$$
# where
# - the state $\mathbf{x}=(\mathbf{q},\mathbf{v})$ lies in a manifold, e.g. Lie manifold $\mathbf{q}\in SE(3)\times \mathbb{R}^{n_j}$, $n_j$ being the number of degrees of freedom of the robot.
# - the system has underactuated dynamics, i.e. $\mathbf{u}=(\mathbf{0},\boldsymbol{\tau})$,
# - $\mathcal{X}$, $\mathcal{U}$ are the state and control admissible sets, and
# - $\mathcal{K}$ represents the contact constraints.
#
# Note that $\boldsymbol{\lambda}=\mathbf{g}(\mathbf{x},\mathbf{u})$ denotes the contact force, and is dependent on the state and control.
#
# Let's start by understanding the concept behind crocoddyl design.
# # II. Action models
#
# In crocoddyl, an action model combines dynamics and cost models. Each node, in our optimal control problem, is described through an action model. In order to describe a problem, we need to provide ways of computing the dynamics, the cost functions and their derivatives. All these are described inside the action model.
#
# To understand the mathematical aspects behind an action model, let's first get a locally linearize version of our optimal control problem as:
#
# $$\mathbf{X}^*(\mathbf{x}_0),\mathbf{U}^*(\mathbf{x}_0)
# =
# \arg\min_{\mathbf{X},\mathbf{U}} cost_T(\delta\mathbf{x}_N) + \sum_{k=1}^N cost_t(\delta\mathbf{x}_k, \delta\mathbf{u}_k)$$
# subject to
# $$dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k)=\mathbf{0},$$
#
# where
# $$cost_T(\delta\mathbf{x}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top \\
# \mathbf{l_x} & \mathbf{l_{xx}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}
# $$
#
# $$cost_t(\delta\mathbf{x},\delta\mathbf{u}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top & \mathbf{l_u}^\top\\
# \mathbf{l_x} & \mathbf{l_{xx}} & \mathbf{l_{ux}}^\top\\
# \mathbf{l_u} & \mathbf{l_{ux}} & \mathbf{l_{uu}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}
# $$
#
# $$
# dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k) = \delta\mathbf{x}_{k+1} - (\mathbf{f_x}\delta\mathbf{x}_k + \mathbf{f_u}\delta\mathbf{u}_k)
# $$
#
# Here an action model defines one time interval of this problem:
# - $actions = dynamics + cost$
#
# ### Important notes:
# - An action model describes the dynamics and cost functions for a node in our optimal control problem.
# - Action models lie in the discrete time space.
# - For debugging and prototyping, we have also implemented numerical-differentiation (NumDiff) abstractions. These computations depend only on the definition of the dynamics equation and cost functions. However, for efficiency, crocoddyl uses **analytical derivatives** computed from Pinocchio.
#
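# The NumDiff idea can be illustrated with central finite differences. The snippet below is a minimal NumPy sketch (the helper name and the linear dynamics are made up for illustration; this is not crocoddyl's NumDiff API):

```python
import numpy as np

def numdiff_jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian of f at x with central finite differences."""
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

# Check against the analytic Jacobian of simple linear dynamics f(x) = A x
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
f = lambda x: A @ x
x0 = np.array([0.3, -0.7])
assert np.allclose(numdiff_jacobian(f, x0), A, atol=1e-5)
```

# This is the kind of consistency check NumDiff abstractions enable: any analytical derivative can be compared against its finite-difference approximation.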
#
# ## II.a Differential and Integrated Action Models
# Optimal control solvers require the time-discrete model of the cost and the dynamics. However, it's often convenient to implement them in continuous time (e.g. to combine with abstract integration rules). In crocoddyl, these continuous-time action models are called "Differential Action Models (DAM)". Together with predefined "Integrated Action Models (IAM)", it is possible to retrieve the time-discrete action model.
#
# At the moment, we have:
# - a symplectic Euler and
# - a Runge-Kutta 4 integration rule.
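# For a Euclidean state, a symplectic Euler step updates the velocity first and then the configuration with the new velocity. A minimal sketch in plain Python (an illustration of the rule, not crocoddyl's IntegratedActionModelEuler API):

```python
def symplectic_euler_step(q, v, acceleration, dt):
    """One symplectic Euler step: velocity first, then the configuration."""
    v_next = v + dt * acceleration(q, v)
    q_next = q + dt * v_next   # on a manifold this '+' becomes 'integrate'
    return q_next, v_next

# Example: undamped unit-mass spring, a(q, v) = -q
q, v = 1.0, 0.0
for _ in range(1000):
    q, v = symplectic_euler_step(q, v, lambda q, v: -q, 1e-2)

# Symplectic integrators approximately preserve energy over long horizons
energy = 0.5 * v ** 2 + 0.5 * q ** 2   # the initial energy was 0.5
```

# Note the order of the updates: using the new velocity for the configuration update is what makes the scheme symplectic.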
#
# An optimal control problem can be written from a set of DAMs as:
# $$\mathbf{X}^*(\mathbf{x}_0),\mathbf{U}^*(\mathbf{x}_0)
# =
# \arg\min_{\mathbf{X},\mathbf{U}} cost_T(\delta\mathbf{x}_N) + \sum_{k=1}^N \int_{t_k}^{t_k+\Delta t} cost_t(\delta\mathbf{x}_k, \delta\mathbf{u}_k) dt$$
# subject to
# $$dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k)=\mathbf{0},$$
#
# where
# $$cost_T(\delta\mathbf{x}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top \\
# \mathbf{l_x} & \mathbf{l_{xx}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x}
# \end{bmatrix}
# $$
#
# $$cost_t(\delta\mathbf{x},\delta\mathbf{u}) = \frac{1}{2}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}^\top
# \begin{bmatrix}
# 0 & \mathbf{l_x}^\top & \mathbf{l_u}^\top\\
# \mathbf{l_x} & \mathbf{l_{xx}} & \mathbf{l_{ux}}^\top\\
# \mathbf{l_u} & \mathbf{l_{ux}} & \mathbf{l_{uu}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u}
# \end{bmatrix}
# $$
#
# $$
# dynamics(\delta\mathbf{\dot{x}},\delta\mathbf{x},\delta\mathbf{u}) = \delta\mathbf{\dot{x}} - (\mathbf{f_x}\delta\mathbf{x} + \mathbf{f_u}\delta\mathbf{u})
# $$
# ### Building a differential action model for robot forward dynamics
# #### Loading the robot
#
# Crocoddyl offers several robot models (e.g. manipulators, humanoids, quadrupeds) for benchmarking our optimal control solvers. The collection of Talos models can be downloaded in Ubuntu with the APT package *robotpkg-talos-data*.
#
# Let's load a single Talos arm (left one):
# +
from crocoddyl import *
import numpy as np
talos_arm = loadTalosArm()
# Defining an initial state
q0 = [0.173046, 1., -0.52366, 0., 0., 0.1, -0.005]
x0 = np.hstack([q0, np.zeros(talos_arm.model.nv)])
# -
# ### calc and calcDiff
# Optimal control solvers often need to compute a quadratic approximation of the action model (as previously described) to provide a search direction (computeDirection). Then the solver needs to try a step along this direction (tryStep).
#
# Typically, calc and calcDiff do the precomputations that are required before computeDirection and tryStep, respectively (inside the solver). These functions update the following information:
# - **calc**: updates the next state and its cost value
# $$\mathbf{x}_{k+1} = \mathbf{f}(\mathbf{x}_k,\mathbf{u}_k)$$
# - **calcDiff**: updates the derivatives of the dynamics and cost (quadratic approximation)
# $$\mathbf{f_x}, \mathbf{f_u} \hspace{1em} (dynamics)$$
# $$\mathbf{l_x}, \mathbf{l_u}, \mathbf{l_{xx}}, \mathbf{l_{ux}}, \mathbf{l_{uu}} \hspace{1em} (cost)$$
#
# **Crocoddyl keeps all this information inside data**, thus avoiding dynamic memory allocation.
# +
import numpy as np
import pinocchio
class DifferentialActionModelABA:
def __init__(self,pinocchioModel):
# The forward dynamics and its derivatives are computed with Pinocchio
self.pinocchio = pinocchioModel
self.nq,self.nv = self.pinocchio.nq, self.pinocchio.nv
# Describes integrate, difference, Jacobian integrate and Jacobian difference
# for any Pinocchio model
self.State = StatePinocchio(self.pinocchio)
# Keeps a stack of cost functions
self.costs = CostModelSum(self.pinocchio)
# Dimension of the state, and its tangent space, and control
self.nx = self.State.nx
self.ndx = self.State.ndx
self.nout = self.nv
self.nu = self.nv
self.unone = np.zeros(self.nu)
@property
def ncost(self): return self.costs.ncost
def createData(self): return DifferentialActionDataABA(self) # create the data needed for this model
def calc(model,data,x,u=None):
if u is None: u=model.unone
nx,nu,nq,nv,nout = model.nx,model.nu,model.nq,model.nv,model.nout
q = a2m(x[:nq]) # from np array to matrix
v = a2m(x[-nv:]) # from np array to matrix
tauq = a2m(u) # from np array to matrix
# Computes the next state through ABA
data.xout[:] = pinocchio.aba(model.pinocchio,data.pinocchio,q,v,tauq).flat
# Updates the kinematics needed for cost computation
pinocchio.forwardKinematics(model.pinocchio,data.pinocchio,q,v)
pinocchio.updateFramePlacements(model.pinocchio,data.pinocchio)
# Computes the cost from a set of single cost functions
data.cost = model.costs.calc(data.costs,x,u)
return data.xout,data.cost
def calcDiff(model,data,x,u=None,recalc=True):
if u is None: u=model.unone
if recalc: xout,cost = model.calc(data,x,u)
nx,ndx,nu,nq,nv,nout = model.nx,model.State.ndx,model.nu,model.nq,model.nv,model.nout
q = a2m(x[:nq]) # from np array to matrix
v = a2m(x[-nv:]) # from np array to matrix
tauq = a2m(u) # from np array to matrix
# Computes the ABA derivatives (dynamics), and keeps them inside data
pinocchio.computeABADerivatives(model.pinocchio,data.pinocchio,q,v,tauq)
data.Fx[:,:nv] = data.pinocchio.ddq_dq
data.Fx[:,nv:] = data.pinocchio.ddq_dv
        data.Fu[:,:] = data.pinocchio.Minv
# Updates the kinematics Jacobians needed for getting the derivatives of the cost function
pinocchio.computeJointJacobians(model.pinocchio,data.pinocchio,q)
pinocchio.updateFramePlacements(model.pinocchio,data.pinocchio)
# Computes all derivatives of cost function
model.costs.calcDiff(data.costs,x,u,recalc=False)
return data.xout,data.cost
class DifferentialActionDataABA:
def __init__(self,model):
self.pinocchio = model.pinocchio.createData()
self.costs = model.costs.createData(self.pinocchio)
self.cost = np.nan
self.xout = np.zeros(model.nout)
nx,nu,ndx,nq,nv,nout = model.nx,model.nu,model.State.ndx,model.nq,model.nv,model.nout
self.F = np.zeros([ nout,ndx+nu ])
self.costResiduals = self.costs.residuals
self.Fx = self.F[:,:ndx]
self.Fu = self.F[:,-nu:]
self.g = self.costs.g
self.L = self.costs.L
self.Lx = self.costs.Lx
self.Lu = self.costs.Lu
self.Lxx = self.costs.Lxx
self.Lxu = self.costs.Lxu
self.Luu = self.costs.Luu
self.Rx = self.costs.Rx
self.Ru = self.costs.Ru
# -
# ## II.b State and its integrate and difference rules
# Generally speaking, the system's state can lie on a manifold $M$, where the state's rate of change lies in its tangent space $T_\mathbf{x}M$. A few operators need to be defined for different routines inside our solvers:
# - $\mathbf{x}_{k+1} = integrate(\mathbf{x}_k,\delta\mathbf{x}_k) = \mathbf{x}_k \oplus \delta\mathbf{x}_k$
# - $\delta\mathbf{x}_k = difference(\mathbf{x}_{k+1},\mathbf{x}_k) = \mathbf{x}_{k+1} \ominus \mathbf{x}_k$
#
# where $\mathbf{x}\in M$ and $\delta\mathbf{x}\in T_\mathbf{x} M$.
#
#
# And we also need to define the Jacobians of these operators with respect to their first and second arguments:
# - $\frac{\partial \mathbf{x}\oplus\delta\mathbf{x}}{\partial \mathbf{x}}, \frac{\partial \mathbf{x}\oplus\delta\mathbf{x}}{\partial\delta\mathbf{x}} = Jintegrate(\mathbf{x},\delta\mathbf{x})$
# - $\frac{\partial\mathbf{x}_2\ominus\mathbf{x}_1}{\partial \mathbf{x}_2}, \frac{\partial \mathbf{x}_2\ominus\mathbf{x}_1}{\partial\mathbf{x}_1} = Jdifference(\mathbf{x}_2,\mathbf{x}_1)$
#
# For instance, a state that lies in Euclidean space has the typical operators:
# - $integrate(\mathbf{x},\delta\mathbf{x}) = \mathbf{x} + \delta\mathbf{x}$
# - $difference(\mathbf{x}_2,\mathbf{x}_1) = \mathbf{x}_2 - \mathbf{x}_1$
# - $Jintegrate(\cdot,\cdot) = Jdifference(\cdot,\cdot) = \mathbf{I}$
#
#
# These definitions are encapsulated inside the State class. **For Pinocchio models, we have implemented the StatePinocchio class, which can be used for any robot model**.
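# For the Euclidean case these rules can be sketched as a small class (an illustrative stand-in, not crocoddyl's actual StateVector/StatePinocchio implementation):

```python
import numpy as np

class EuclideanState:
    """State in Euclidean space: integrate/difference are '+' and '-'."""
    def __init__(self, nx):
        self.nx = nx
    def integrate(self, x, dx):
        return x + dx                      # x (+) dx
    def difference(self, x2, x1):
        return x2 - x1                     # x2 (-) x1
    def Jintegrate(self, x, dx):
        return np.eye(self.nx), np.eye(self.nx)
    def Jdifference(self, x2, x1):
        # d(x2 - x1)/dx2 = I and d(x2 - x1)/dx1 = -I
        return np.eye(self.nx), -np.eye(self.nx)

state = EuclideanState(3)
x = np.array([1.0, 2.0, 3.0])
dx = np.array([0.1, -0.2, 0.3])
# Consistency check: difference(integrate(x, dx), x) recovers dx
assert np.allclose(state.difference(state.integrate(x, dx), x), dx)
```

# The same interface generalizes to Lie groups, where the operators become the exponential/logarithm maps and their Jacobians are no longer identities.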
# # III. Solving optimal control problems with DDP
#
# ## III.a ABA dynamics for reaching a goal with Talos arm
#
# Our optimal control solver interacts with a defined ShootingProblem. A shooting problem represents a stack of action models in which each action model defines a specific knot along the OC problem.
#
# First we need to create an action model from DifferentialActionModelABA. We use it for building the terminal and running action models. In this example, we employ a symplectic Euler integration rule as follows:
# +
# Running and terminal action models
runningModel = IntegratedActionModelEuler(DifferentialActionModelABA(talos_arm.model))
terminalModel = IntegratedActionModelEuler(DifferentialActionModelABA(talos_arm.model))
# Defining the time duration for running action models and the terminal one
dt = 1e-3
runningModel.timeStep = dt
# -
# Next we define the set of cost functions for this problem. For this particular example, we formulate three running-cost functions:
# - a goal-tracking cost, $\log({}^{f_d}\mathbf{X}_o \,^o\mathbf{X}_f)$, and
#
# - state and control regularization, $\|\mathbf{x}-\mathbf{x}_{ref}\|$ and $\|\mathbf{u}\|$;
#
# and one terminal-cost function:
# - a goal cost, $\log({}^{f_d}\mathbf{X}_o \,^o\mathbf{X}_f)$.
#
# First, let's create the common cost functions.
# First, let's create the common cost functions.
# +
from pinocchio.utils import *
# Goal tracking cost
frameName = 'gripper_left_joint'  # alternatively: 'gripper_left_fingertip_2_link'
state = StatePinocchio(talos_arm.model)
SE3ref = pinocchio.SE3(np.eye(3), np.array([ [.0],[.0],[.4] ]))
goalTrackingCost = CostModelFramePlacement(talos_arm.model,
nu=talos_arm.model.nv,
frame=talos_arm.model.getFrameId(frameName),
ref=SE3ref)
# State and control regularization
xRegCost = CostModelState(talos_arm.model,
state,
ref=state.zero(),
nu=talos_arm.model.nv)
uRegCost = CostModelControl(talos_arm.model,nu=talos_arm.model.nv)
# Adds the running and terminal cost functions
runningCostModel = runningModel.differential.costs
runningCostModel.addCost( name="pos", weight = 1e-3, cost = goalTrackingCost)
runningCostModel.addCost( name="regx", weight = 1e-7, cost = xRegCost)
runningCostModel.addCost( name="regu", weight = 1e-7, cost = uRegCost)
terminalCostModel = terminalModel.differential.costs
terminalCostModel.addCost( name="pos", weight = 1, cost = goalTrackingCost)
# Let's compute the cost and its derivatives
robot_data = talos_arm.model.createData() # Pinocchio data
data = goalTrackingCost.createData(robot_data)
# Update kinematics
q = pinocchio.randomConfiguration(talos_arm.model)
v = rand(talos_arm.model.nv)
x = m2a(np.concatenate([q,v]))
u = m2a(rand(talos_arm.model.nv))
pinocchio.forwardKinematics(talos_arm.model,robot_data,q,v)
pinocchio.computeJointJacobians(talos_arm.model,robot_data,q)
pinocchio.updateFramePlacements(talos_arm.model,robot_data)
print('cost =', goalTrackingCost.calc(data, x, u))
print('cost =', goalTrackingCost.calcDiff(data, x, u))
print()
print('lx =', data.Lx)
print('lu =', data.Lu)
print()
print('lxx =', data.Lxx)
print('luu =', data.Luu)
# -
# We create a trajectory with 250 knots
# For this optimal control problem, we define 250 knots (or running action
# models) plus a terminal knot
T = 250
problem = ShootingProblem(x0, [ runningModel ]*T, terminalModel)
# Once we have defined our shooting problem, we create a DDP solver object and pass some callback functions for analysing its performance.
#
# Please note that:
# - CallbackDDPLogger: stores the solution information.
# - CallbackDDPVerbose(level): prints messages during the iterations.
# - CallbackSolverDisplay(robot,rate): displays the state trajectory using the Gepetto viewer.
# +
# Creating the DDP solver for this OC problem, defining a logger
ddp = SolverDDP(problem)
cameraTF = [2., 2.68, 0.54, 0.2, 0.62, 0.72, 0.22]
ddp.callback = [CallbackDDPLogger(), CallbackDDPVerbose(1), CallbackSolverDisplay(talos_arm,4,1,cameraTF)]
# Solving it with the DDP algorithm
ddp.solve()
# Printing the reached position
log = ddp.callback[0]
frame_idx = talos_arm.model.getFrameId(frameName)
xT = log.xs[-1]
qT = np.asmatrix(xT[:talos_arm.model.nq]).T
print()
print("The reached pose by the wrist is")
print(talos_arm.framePlacement(qT, frame_idx))
# -
# Let's plot the results and display the final trajectory
# +
# %matplotlib inline
# Plotting the solution and the DDP convergence
log = ddp.callback[0]
plotOCSolution(log.xs, log.us)
plotDDPConvergence(log.costs,log.control_regs,
log.state_regs,log.gm_stops,
log.th_stops,log.steps)
# Visualizing the solution in gepetto-viewer
CallbackSolverDisplay(talos_arm)(ddp)
# -
# ## III.b Multi-Contact dynamics for biped walking (Talos legs)
# In crocoddyl, we can describe the multi-contact dynamics through holonomic constraints for the support legs. From the Gauss principle, we have derived the model as:
# $$
# \left[\begin{matrix}
# \mathbf{M} & \mathbf{J}^{\top}_c \\
# {\mathbf{J}_{c}} & \mathbf{0} \\
# \end{matrix}\right]
# \left[\begin{matrix}
# \dot{\mathbf{v}} \\ -\boldsymbol{\lambda}
# \end{matrix}\right]
# =
# \left[\begin{matrix}
# \boldsymbol{\tau} - \mathbf{h} \\
# -\dot{\mathbf{J}}_c \mathbf{v} \\
# \end{matrix}\right].$$
#
# This DAM is defined in "DifferentialActionModelFloatingInContact" class.
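# The KKT system above can be solved directly for the acceleration and the contact force. A minimal NumPy sketch with made-up quantities (illustrative values, not a real robot model):

```python
import numpy as np

nv, nc = 3, 1                        # velocity and contact-constraint dims
M = np.diag([2.0, 2.0, 1.0])         # joint-space inertia matrix (SPD)
Jc = np.array([[1.0, 0.0, 0.0]])     # contact Jacobian
tau_minus_h = np.array([0.5, -0.2, 0.1])
Jcdot_v = np.zeros(nc)               # drift term Jc_dot * v

# Assemble and solve the KKT system for (v_dot, -lambda)
K = np.block([[M, Jc.T], [Jc, np.zeros((nc, nc))]])
sol = np.linalg.solve(K, np.concatenate([tau_minus_h, -Jcdot_v]))
v_dot, lam = sol[:nv], -sol[nv:]

assert np.allclose(M @ v_dot, tau_minus_h + Jc.T @ lam)  # dynamics balance
assert np.allclose(Jc @ v_dot, -Jcdot_v)                 # holonomic constraint
```

# In practice, the inertia's sparsity and Cholesky factorizations make this much cheaper than a dense solve, but the structure is the same.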
#
# Given a predefined contact sequence and timings, we build a specific multi-contact dynamics for each phase. Indeed, we need to describe a multi-phase optimal control problem. One can formulate the multi-contact optimal control problem (MCOP) as follows:
#
#
# $$\mathbf{X}^*,\mathbf{U}^*=
# \begin{Bmatrix} \mathbf{x}^*_0,\cdots,\mathbf{x}^*_N \\
# \mathbf{u}^*_0,\cdots,\mathbf{u}^*_N
# \end{Bmatrix} =
# \arg\min_{\mathbf{X},\mathbf{U}} \sum_{p=0}^P \sum_{k=1}^{N(p)} \int_{t_k}^{t_k+\Delta t} l_p(\mathbf{x},\mathbf{u})dt$$
# subject to
# $$ \mathbf{\dot{x}} = \mathbf{f}_p(\mathbf{x},\mathbf{u}), \text{for } t \in [\tau_p,\tau_{p+1}]$$
#
# $$ \mathbf{g}(\mathbf{v}^{p+1},\mathbf{v}^p) = \mathbf{0}$$
#
# $$ \mathbf{x}\in\mathcal{X}_p, \mathbf{u}\in\mathcal{U}_p, \boldsymbol{\lambda}\in\mathcal{K}_p.$$
#
# where $\mathbf{g}(\cdot,\cdot)$ describes the impact dynamics, which acts as a terminal constraint in each walking phase. In this example we use the following impact model:
#
# $$\mathbf{M}(\mathbf{v}_{next}-\mathbf{v}) = \mathbf{J}_{impulse}^T \boldsymbol{\Lambda}$$
#
# $$\mathbf{J}_{impulse} \mathbf{v}_{next} = \mathbf{0}$$
#
# $$\mathbf{J}_{c} \mathbf{v}_{next} = \mathbf{J}_{c} \mathbf{v}$$
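# The impact model above is again a KKT system, now in velocities instead of accelerations. A minimal NumPy sketch with made-up pre-impact values (not a real robot model):

```python
import numpy as np

M = np.diag([2.0, 1.0])          # inertia matrix (illustrative)
J = np.array([[1.0, 0.0]])       # impulse Jacobian (illustrative)
v = np.array([1.0, -0.5])        # pre-impact velocity

# Solve  M (v_next - v) = J^T Lam  together with  J v_next = 0
K = np.block([[M, J.T], [J, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([M @ v, np.zeros(1)]))
v_next, Lam = sol[:2], -sol[2:]

assert np.allclose(J @ v_next, 0.0)              # contact point stops
assert np.allclose(M @ (v_next - v), J.T @ Lam)  # impulse balance
```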
#
# ### Note:
# You can find an example of this kind of problem in bipedal_walking_from_foot_traj.ipynb.
# ## Reference
#
# The material presented in this notebook was previously presented at ICHR 2018. For more information, please read the following paper:
#
# <NAME>, <NAME>, <NAME> and <NAME>. *Differential Dynamic Programming for Multi-Phase Rigid Contact Dynamics*
# +
from IPython.display import HTML
# Youtube
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/X82tFTR4Mcc?start=11" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
# -
#
#
# 
| examples/notebooks/introduction_to_crocoddyl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # SHANGRLA - Fisher Combination
# This hypothetical election follows the example from CORLA18's `fisher_combined_pvalue.ipynb`.
#
# We'll set up a hypothetical election and a single sample of ballots to illustrate how to combine a ballot-polling audit with a ballot-comparison audit using Fisher's combining function.
#
# There are two strata. One contains every CVR county and the other contains every no-CVR county.
# There were 11,000 ballots cast in the election, 10,000 in the CVR stratum and 1,000 in the no-CVR stratum.
#
# In the CVR stratum, there were 4,550 votes reported for A, 4,950 votes for candidate B, and 500 invalid ballots.
# In the no-CVR stratum, there were 750 votes reported for A, 150 votes for B, and 100 invalid ballots.
# A won overall, with 5,300 votes to B's 5,100, but not in the CVR stratum.
# The reported vote margin between A and B is 200 votes, a "diluted margin" of $200/11,000 = 1.8\%$.
#
#
# Candidate | Stratum 1 | Stratum 2 | total
# ---|---|---|---
# A | 4,550 | 750 | 5,300
# B | 4,950 | 150 | 5,100
# Ballots | 10,000 | 1,000 | 11,000
# Diluted margin | -4% | 60% | 1.8%
#
# We want to limit the risk of certifying an incorrect outcome to at most $\alpha=10\%$.
#
# In the CVR stratum, we sample 500 ballots and find one 1-vote overstatement.
#
# In the no-CVR stratum, we sample 250 ballots. We are unusually lucky and the vote proportions in the sample match those in the population. There are $187$ ballots for A and $37$ ballots for B.
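# As a reminder, Fisher's combining function turns the stratum $P$-values into the statistic $-2\sum_i \ln p_i$, which under the null is distributed as $\chi^2$ with $2k$ degrees of freedom. A minimal SciPy sketch of the combination step (an assumption on my part: the imported fisher_combined_pvalue uses this standard chi-square form):

```python
import numpy as np
import scipy.stats

def fisher_combined(pvalues):
    """Fisher's combination: -2 * sum(log p_i) ~ chi2(2k) under the null."""
    statistic = -2.0 * np.sum(np.log(pvalues))
    return scipy.stats.chi2.sf(statistic, df=2 * len(pvalues))

# Two stratum p-values of 0.2 and 0.05 combine to roughly 0.056
combined = fisher_combined([0.2, 0.05])
```

# Note that neither stratum's p-value needs to be below the risk limit on its own; only the combined p-value matters.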
# + tags=[]
import numpy as np
import scipy as sp
import scipy.stats
import scipy.optimize
import json
from assertion_audit_utils import TestNonnegMean
from fishers_combination import fisher_combined_pvalue, maximize_fisher_combined_pvalue, calculate_beta_range, create_modulus
import matplotlib.pyplot as plt
# +
N1 = 10000
N2 = 1000
N_w1 = 4550
N_l1 = 4950
N_w2 = 750
N_l2= 150
n1 = 500
n2 = 250
# -
# Sample array
# + tags=[]
# cvr/mvr arrays
cvr_array_c = np.array([0]*int(n1*N_l1/N1)+[1]*int(n1*N_w1/N1)+ [1/2]*int(n1*(N1-N_l1-N_w1)/N1 + 1))
# 0 o1, o2, u1, u2
#cvr_array_m = np.array([0]*int(n1*N_l1/N1)+[1]*int(n1*N_w1/N1)+ [1/2]*int(n1*(N1-N_l1-N_w1)/N1 + 1))
# 1 o1
cvr_array_m = np.array([0]*int(n1*N_l1/N1)+[1]*int(n1*N_w1/N1)+ [1/2]*int(n1*(N1-N_l1-N_w1)/N1)+[0])
overstatement = cvr_array_c-cvr_array_m
margin = 2*np.mean(cvr_array_c)-1
cvr_array = (1-overstatement)/(2-margin)
nocvr_array = np.array([0]*int(n2*N_l2/N2)+[1]*int(n2*N_w2/N2)+ [1/2]*int(n2*(N2-N_l2-N_w2)/N2 + 1))
# -
# Define functions for computing $P$-values with input $\beta$
# + tags=[]
g_0 = 0.1
margin = 0.01
upper_bound = 1
#risk_fns = ["kaplan_martingale", "kaplan_martingale"]
#cvr_pvalue = lambda beta: TestNonnegMean.kaplan_martingale(x=cvr_array, N=N1+N2, t=beta*(N1+N2)/N1, random_order=False)
#nocvr_pvalue = lambda beta: TestNonnegMean.kaplan_martingale(x=nocvr_array, N=N1+N2, t=(1/2-beta)*(N1+N2)/N2, random_order=False)
risk_fns = ["kaplan_kolmogorov", "kaplan_kolmogorov"]
cvr_pvalue = lambda beta: TestNonnegMean.kaplan_kolmogorov(x=cvr_array, N=N1+N2, t=beta*(N1+N2)/N1, g=g_0,random_order=False)
nocvr_pvalue = lambda beta: TestNonnegMean.kaplan_kolmogorov(x=nocvr_array, N=N1+N2, t=(1/2-beta)*(N1+N2)/N2, g=g_0, random_order=False)
# -
# Maximizing the $P$-value over $\beta$
# + tags=[]
(beta_lower, beta_upper) = calculate_beta_range(N1, N2)
beta_test_count_0 = 10
test_betas = np.array(np.linspace(beta_lower, beta_upper, beta_test_count_0))
print("beta limits:", beta_lower, beta_upper)
fisher_pvalues = []
cvr_pvalues = []
nocvr_pvalues = []
for b in test_betas:
cvr_pvalues.append(cvr_pvalue(b))
nocvr_pvalues.append(nocvr_pvalue(b))
fisher_pvalues.append(fisher_combined_pvalue([cvr_pvalues[-1], nocvr_pvalues[-1]]))
plt.scatter(test_betas, cvr_pvalues, color='r', label='CVR')
plt.scatter(test_betas, nocvr_pvalues, color='b', label='no-CVR')
plt.legend()
plt.xlabel('beta')
plt.ylabel('p-value')
plt.ylim(0, 1)
plt.show()
# + tags=[]
plt.scatter(test_betas, fisher_pvalues, color='black')
plt.axhline(y=0.1, linestyle='--', color='gray')
plt.title("Fisher's combined P-value")
plt.xlabel("beta")
plt.ylabel("P-value")
plt.show()
print('(max p-value, beta): ', (max(fisher_pvalues), test_betas[fisher_pvalues.index(max(fisher_pvalues))]))
# + tags=[]
mod = create_modulus(risk_fns, N1, N2, n1, n2, margin, upper_bound, g_0, cvr_array, nocvr_array)
m = maximize_fisher_combined_pvalue(N1, N2, pvalue_funs=[cvr_pvalue, nocvr_pvalue], beta_test_count=10, modulus=mod, alpha=0.10, feasible_beta_range=(beta_lower, beta_upper))
print(json.dumps(m, indent=4))
# + tags=[]
plt.scatter(test_betas, fisher_pvalues, color='black')
plt.scatter(m['allocation beta'], m['max_pvalue'], color='black', marker='x')
plt.axhline(y=0.1, linestyle='--', color='gray')
plt.title("Fisher's combined P-value")
plt.xlabel("beta")
plt.ylabel("P-value")
plt.show()
| Code/fisher_combined_pvalue.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit ('.tgy_veribilimi')
# metadata:
# interpreter:
# hash: ba55969ab8b665a0d7ebc430170b2908b5d222710046023b08b5a17088334349
# name: Python 3.8.2 64-bit ('.tgy_veribilimi')
# ---
# # CATEGORY BOOSTING ( CATBOOST )
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
from sklearn import model_selection
df = pd.read_csv("verisetleri/Hitters.csv")
df = df.dropna()
dms = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
y = df["Salary"]
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
X = pd.concat([X_, dms[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# !pip install catboost
from catboost import CatBoostRegressor
# + tags=["outputPrepend"]
catb_model = CatBoostRegressor().fit(X_train, y_train)
# -
y_pred = catb_model.predict(X_test)
np.sqrt(mean_squared_error(y_test, y_pred))
# +
## MODEL TUNNING
# -
# iterations --> the number of trees, i.e. the number of models to be fit.
# On a personal computer, CatBoost will be the slowest of these algorithms to run, so keep the parameter grid as small as possible. Adding just a few more parameter values can push the grid search to around 45 minutes.
catb_params = {"iterations" : [200, 500, 1000], "learning_rate" : [0.01, 0.1], "depth" : [3, 6, 8]}
catb_model = CatBoostRegressor()
catb_cv_model = GridSearchCV(catb_model, catb_params, cv=5, n_jobs=-1, verbose=2).fit(X_train, y_train)
catb_cv_model.best_params_
# + tags=[]
catb_tuned = CatBoostRegressor(depth=3, iterations=200, learning_rate=0.1).fit(X_train, y_train)
# -
y_pred = catb_tuned.predict(X_test)
np.sqrt(mean_squared_error(y_test, y_pred))
# +
# Compared with the previous algorithms, this one found the best result.
| 8 3 9 Category Boosting - CATBOOST Regresyon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "4a3fa57346cb495363568cadf58daf99", "grade": false, "grade_id": "cell-6fa1bcf98952e899", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Exercise Sheet No. 6
#
# ---
#
# > Machine Learning for Natural Sciences, Summer 2021, Jun.-Prof. <NAME>, <EMAIL>
# >
# > Deadline: 31.05.2021, 8 am
# >
# > Tutor: <NAME> [<EMAIL>](mailto:<EMAIL>)
# > **Please ask questions in the forum and only contact the tutor for issues regarding the grading**
#
# ---
#
# **Topic**: This exercise sheet will focus on feed-forward neural networks, their implementation and training, as well as an application to materials science.
#
# **You have two weeks to finish because of the Pentecost (Pfingst) holidays :)**
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "a6f070a7393873bbf78caedcd7581fc1", "grade": false, "grade_id": "cell-f67fa73d6edbbdd0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# 
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "d6fb2fec63b2df60a203d396e2013a96", "grade": false, "grade_id": "cell-7edbf20d7f15f37b", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Preliminaries
#
# As we had problems with the nbgrader versions, **everyone needs to rebuild their environment**. When downloading the assignment you were provided with an environment.yml file.
# Please shutdown jupyter and run on the command line / anaconda prompt using the new environment file:
# ```
# # Deactivate aimat
# conda deactivate aimat
#
# # Remove aimat
# conda env remove -n aimat
#
# # Rebuild the conda environment. From here it is all the initial setup:
# conda env create -f environment.yml
#
# # add nbgrader extensions needed to validate your results locally
# conda run -n aimat jupyter nbextension install --sys-prefix --py nbgrader --overwrite
# conda run -n aimat jupyter nbextension enable --sys-prefix --py nbgrader
# conda run -n aimat jupyter serverextension enable --sys-prefix --py nbgrader
#
# # activate the conda environment, otherwise, your previously installed
# # packages won't be accessible
# conda activate aimat
#
# # Start jupyter again
# jupyter notebook Exercise06.ipynb
# ```
#
# # General remarks
#
# We do our best to provide you with interesting, as well as challenging exercises. Many of you submit high quality notebooks and also start playing around with the existing code to go further. We like to see that and strongly encourage you to go beyond what is covered in the exercises if time and motivation allows.
#
# **Nevertheless, it makes your life and our life much easier if you use another copy of the notebook as playground and try to stay as concise as possible in the submitted notebooks.** We are a small team and it is impossible to check all the exercise sheets manually. Therefore when things fail because you did additional things in the notebooks, we won't see that in the first run and it is quite some effort to debug this. So please don't add additional cells or delete cells in you submission.
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "bf4ea73bd54b5e65bbf4a1d29d832931", "grade": false, "grade_id": "cell-bac561682fa2ef9d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ---
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "108b8b05489da7623bceafd770079e9d", "grade": false, "grade_id": "cell-3236a919b585de02", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Application
#
# Next week you will start learning about applications of NNs in materials science. To give you a little teaser, we will use an application in materials science already here:
#
# # Organic Solar Cells
# For organic materials to become semi-conducting, electrons must be delocalized in the molecule. For electrons to be delocalized, a high level of conjugation is necessary:
# When single and double bonds are alternating in an organic molecule, electrons can move. When we think about an aromatic ring, like benzene, it is not defined where the double bonds would form, so they can move around the ring and are delocalized along the whole aromatic system:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "b8cb042b4c739fccbbfd613d887c252f", "grade": false, "grade_id": "cell-1cd6d0e1688214d5", "locked": true, "schema_version": 3, "solution": false, "task": false}
from rdkit import Chem
mol = Chem.MolFromSmiles('c1ccccc1')
mol
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "b810eb8c6b4f7d11efcd6bedef31bf20", "grade": false, "grade_id": "cell-ad8bd72c4fa018c6", "locked": true, "schema_version": 3, "solution": false, "task": false}
# These electrons have higher energies and are part of what is called the highest occupied molecular orbit (HOMO). This is basically the equivalent of the valence band in classical semiconductors.
#
# The equivalent of the conduction band is the lowest level at higher energies that is unoccupied or the lowest unoccupied molecular orbit (LUMO). The gap between those two levels is the bandgap of organic semiconductors.
#
# For OPV (organic photovoltaic) solar cells this bandgap needs to be small enough so that visible light can excite an electron from the HOMO to the LUMO. This requires a high level of conjugation and hence aromatic systems (Figure 1)
#
# <a title="Alevina89, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Homo-lumo_gap.tif"><img width="512" alt="Homo-lumo gap" src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/Homo-lumo_gap.tif/lossless-page1-607px-Homo-lumo_gap.tif.png"></a>
# <p style="text-align:center;font-size:80%;font-style:italic">
# Figure 1: Conjugation induced LUMO reduction.
# </p>
#
# For the development of organic semiconductors, HOMO and LUMO can be simulated by density functional theory (DFT) using different levels of theory. You will learn about this in later lectures. All you need to know now, is that depending on the level of theory - or better the details of the approximations taken - the calculation of properties like HOMO and LUMO from the molecule can take hours.
#
# This is a problem for high-throughput screening if one wants to discover new materials. It is extremely costly to evaluate e.g. 100,000 molecules for their properties with methods that are precise enough. Hence, one usually tries to do a detailed simulation only on a subset of molecules and then train a ML-model to predict the properties of interest on the labeled data.
#
# ## Dataset
# The dataset of [Lopez et Al. 2017](https://www.sciencedirect.com/science/article/pii/S2542435117301307) contains the simulated LUMO and HOMO values for 51,247 organic molecules. Additionally, it contains 63 molecule descriptors that were calculated using [rdkit](https://www.rdkit.org/docs/index.html):
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "52a7c4d7aafb014fb308b246ff590ce1", "grade": false, "grade_id": "cell-868d8be608b5077e", "locked": true, "schema_version": 3, "solution": false, "task": false}
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
df = pd.read_hdf('OPV.h5')
df.describe()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "633636d6cac62ef0484bb5972c74ef3d", "grade": false, "grade_id": "cell-42ef56ae22c7077a", "locked": true, "schema_version": 3, "solution": false, "task": false}
# The dataframe has a two-level column index so you can index the labels by calling `df['labels']`:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "4d0e91dfe3c2384ab4e53d11aa8dbdeb", "grade": false, "grade_id": "cell-996e09cffe1ea828", "locked": true, "schema_version": 3, "solution": false, "task": false}
df['labels'].hist(bins=100)
plt.show()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "3a5a3615b53471b9b911b0ddb1e7641d", "grade": false, "grade_id": "cell-cff397671bc615ac", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Linear regression benchmark
# For a first shot we can train a ridge regression to predict the `labels` from the `mol_descriptors`.
#
# This time we will use `sklearn`. Additionally we will also pre-process the features with a standard scaler to shift them to zero mean and scale them to unit variance.
#
# As already mentioned, the calculation of DFT properties can be very costly. We therefore want a model that extrapolates well to unseen data, so we will train on only 20% of the dataset and test on the remaining 80%.
#
#
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "4144c4650fe99794fb819743ad51fc90", "grade": false, "grade_id": "cell-200cc37a5b72eb23", "locked": true, "schema_version": 3, "solution": false, "task": false}
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
X = df['mol_descriptors'].values
y = df['labels'].values
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "ebbd6b8923aa3df805a0742916705d33", "grade": false, "grade_id": "cell-5c747b57ec6724fa", "locked": true, "schema_version": 3, "solution": false, "task": false}
# First use the [`StandardScaler()`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) to scale the features (`X`) to mean of 0 and unit variance:
# + deletable=false nbgrader={"cell_type": "code", "checksum": "eea93bf92b9f5d8b68c9c2c751580c46", "grade": false, "grade_id": "cell-c9f2df20a8d0cf01", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Assign a StandardScaler object to scaler and obtain the scaled features as X_scaled
scaler = None
X_scaled = None
# YOUR CODE HERE
scaler = StandardScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
# raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "2c8a5ab885363529eebab69020604548", "grade": true, "grade_id": "Scaler-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Scaler - 1 point
assert isinstance(scaler, StandardScaler), "The scaler should be an instance of the sklearn StandardScaler"
np.testing.assert_almost_equal(np.mean(X_scaled), 0, 10)
np.testing.assert_almost_equal(np.var(X_scaled, axis=0).sum(), 63.)
# Possible hidden tests
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "2c3b0c0a2f1b58b77004de8fae20133b", "grade": false, "grade_id": "cell-d0ed310e0c9ef260", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Next we use the [`train_test_split()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html?highlight=train_test_split#sklearn.model_selection.train_test_split) to split off 20% of `X_scaled` and `y` as the train set and use the rest as the test set:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "6a78eed9979bc872c50db84a260c8a5c", "grade": false, "grade_id": "cell-6e293b67656d2a39", "locked": true, "schema_version": 3, "solution": false, "task": false}
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.8, random_state=0)
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "b29110c13e1d3ff7772eed7a5d1796a4", "grade": false, "grade_id": "cell-eabac20dab75b13d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now use the [`Ridge()`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html?highlight=ridge#sklearn.linear_model.Ridge) model to fit a ridge regression with standard parameters to the train data, and assign the predictions of the fitted model on the test data to `y_pred`:
# + deletable=false nbgrader={"cell_type": "code", "checksum": "2e3006652180e3013f24e9b708880ca4", "grade": false, "grade_id": "cell-b1b48084492b70d2", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Instantiate a Ridge() model as model, fit it and assign the predictions to y_pred
model = None
y_pred = None
# YOUR CODE HERE
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "22cb08f34d47366286109fcf1dae3801", "grade": false, "grade_id": "cell-69dfde6ff482f06b", "locked": true, "schema_version": 3, "solution": false, "task": false}
r2_lumo_ridge = r2_score(y_test[:,0], y_pred[:,0])
r2_homo_ridge = r2_score(y_test[:,1], y_pred[:,1])
print(f'R2 LUMO: {r2_lumo_ridge}\nR2 HOMO: {r2_homo_ridge}')
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "438eb3187e157479740bea551785acab", "grade": true, "grade_id": "Ridge_Regression-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Ridge Regression - 1 point
assert y_pred.shape[0] == y_test.shape[0]
assert y_pred.shape[1] == y_test.shape[1]
assert r2_lumo_ridge > 0.70
assert r2_homo_ridge > 0.78
# Possible hidden tests
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "1f58d0ad84f10801c84341ddf6c449e2", "grade": false, "grade_id": "cell-372ca63030fe0b3d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now we can plot the correlations of the true and predicted values:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "79c871a6eeacb2bb48c3b833a50c8f16", "grade": false, "grade_id": "cell-0a06841aa21a0f40", "locked": true, "schema_version": 3, "solution": false, "task": false}
fig, axs = plt.subplots(1, 2, figsize=(12,6))
axs[0].scatter(y_test[:,0], y_pred[:,0], alpha=0.2, s=5)
axs[0].plot([-6, -1], [-6, -1], 'k')
axs[0].set_title(f'R2 LUMO: {r2_lumo_ridge}')
axs[0].set_xlabel('true LUMO')
axs[0].set_ylabel('predicted LUMO')
axs[0].set_xlim([-6, -1])
axs[0].set_ylim([-6, -1])
axs[1].scatter(y_test[:,1], y_pred[:,1], alpha=0.2, s=5)
axs[1].plot([-9, -3], [-9, -3], 'k')
axs[1].set_title(f'R2 HOMO: {r2_homo_ridge}')
axs[1].set_xlabel('true HOMO')
axs[1].set_ylabel('predicted HOMO')
axs[1].set_xlim([-9, -3])
axs[1].set_ylim([-9, -3])
plt.show()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "bba25e25f53ae08a4b56bf28e54f03e5", "grade": false, "grade_id": "cell-b045f7ffe8f279f8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# As you can see, the ridge regression learns something, but the predictions still deviate considerably from the true values.
#
# Hence, we will apply a non-linear model, namely a:
#
# # Neural Network
#
# ## A little bit of biology background
#
# The rough idea behind today's neural networks stems from biological neural networks. A neuron integrates information from its upstream neurons at the dendrites, creating a post-synaptic potential. At the axon hillock this potential is encoded into a spike train of action potentials, which are sent down the axon until they reach a synapse at an axon terminal. At the synapse the spike train is decoded into a pre-synaptic potential again, and depending on the incoming spike frequency more or fewer neurotransmitters are released to pass the signal on to the next neuron (Figure 2).
# <p style="text-align:center;font-size:80%;font-style:italic">
# <img src="https://recurrence-reviewed.de/content/images/2019/04/Neuron2.png" width="50%">
# <br>
# Figure 2: Simplified sketch of a neuron.
# </p>
#
#
# ## Forward Pass
#
# For today's artificial neural networks this was simplified in the following way:
# Instead of simulating individual spikes, we only simulate a spike frequency as a continuous value. Depending on the post-synaptic potential, this frequency can be higher or lower. For example, neurons do not respond at all until a certain post-synaptic potential is reached, and they also have a maximum spike frequency. This implies two things:
# **1. The transformation from post-synaptic potential to spike-frequency is non-linear. (activation function)**
# **2. Each neuron has a different base activity. (bias)**
#
# In the following mathematical definitions, lower-case letters denote scalars, while upper-case letters denote vectors or matrices. A dot product of vectors or matrices is written with $\cdot$ or by simple juxtaposition.
#
# ### Single neuron
# Suppose a neuron $l$ receives inputs from an upstream layer $k$ that consists of three neurons with outputs $x_1, x_2, x_3$, via connections with weights $\theta_{1,l}, \theta_{2,l}, \theta_{3,l}$ (Figure 3). We can then compute the weighted sum, i.e. the state $h_l(X)$ of the neuron, by linear algebra as the product of the row vector $X_k$ and the column vector $\Theta_{k,l}$. Additionally, we add a bias $b_l$ to the neuron.
#
# \begin{align}
# h_l(X) &= X_k \cdot \Theta_{k,l} + b_l\\
# &=
# \begin{bmatrix}
# x_1 & x_2 & x_3
# \end{bmatrix}
# \cdot
# \begin{bmatrix}
# \theta_{1,l} \\ \theta_{2,l} \\ \theta_{3,l}
# \end{bmatrix} + b_l \\
# &=x_1 \theta_{1,l} + x_2 \theta_{2,l} + x_3 \theta_{3,l} + b_l
# \end{align}
#
# And the output or activation $a_l(x)$ of the subjected neuron in terms of spike-frequency is computed using a non-linear activation function $\sigma()$:
#
# \begin{align}
# a_l(X) &= \sigma(h_l(X))
# \end{align}
#
# <div>
# <img src="attachment:ff_b1.png" width="30%"/>
# </div>
# <p style="text-align:center;font-size:80%;font-style:italic">
# Figure 3: A neuron layer k with three neurons has feed-forward connections to a single neuron l.
# </p>
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "1b73b00204c130c25d426c5b774fd2c1", "grade": false, "grade_id": "cell-65de2b486b4e7331", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Neuron Layer
# Now suppose that $l$ is not a single neuron but e.g. a layer of two neurons (Figure 4). The notation stays the same, with the only difference that we add a column to the weight vector $\Theta_{k,l}$, turning it into a $3 \times 2$ matrix, in order to calculate the $1 \times 2$ row state vector $H_l(X)$:
#
# \begin{align}
# H_l(X) &= X_k \cdot \Theta_{k,l} + B_l\\
# &=
# \begin{bmatrix}
# x_1 & x_2 & x_3
# \end{bmatrix}
# \cdot
# \begin{bmatrix}
# \theta_{1,1} & \theta_{1,2} \\
# \theta_{2,1} & \theta_{2,2} \\
# \theta_{3,1} & \theta_{3,2}
# \end{bmatrix}
# +
# \begin{bmatrix}
# b_1 & b_2
# \end{bmatrix} \\
# &=
# \begin{bmatrix}
# \theta_{1,1} x_1 + \theta_{2,1} x_2 + \theta_{3,1} x_3 + b_1 &
# \theta_{1,2} x_1 + \theta_{2,2} x_2 + \theta_{3,2} x_3 + b_2
# \end{bmatrix}
# \end{align}
#
# And the activation of the layer:
#
# \begin{align}
# A_l(X) &= \sigma(H_l(X))
# \end{align}
#
# <div>
# <img src="attachment:ff_b2.png" width="30%"/>
# </div>
# <p style="text-align:center;font-size:80%;font-style:italic">
# Figure 4: A neuron layer j with three neurons has feed-forward connections to a neuron layer i with two neurons.
# </p>
#
# For the bias there exist two approaches:
# 1. One can see the bias as an additional input with a constant $1$ from the previous layer and a learnable weight vector.
# 2. Simply adding the bias as a row vector to the neuron state $H(X)$.
#
# Here we use approach 2.
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "3eb59ad77ff00a4cf7b946eba55b6d74", "grade": false, "grade_id": "cell-635ae832ad50c9c8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Batches
# Usually, data is processed in batches: not one single sample at a time but multiple samples at once, since this is much faster when calculated using vectorization. In our case, samples are rows with the length of the features. So when we pass 10 samples of our features (the `mol_descriptors`), we pass a matrix of shape $10 \times 63$. This already fits well with our current notation: e.g., a hidden layer with $32$ neurons fed with these features returns a $10 \times 32$ state matrix.
#
#
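# The shape bookkeeping above can be sanity-checked with random data. A minimal
# sketch (the names `batch`, `theta_demo`, `bias_demo` are illustrative only):

```python
import numpy as np

batch = np.random.rand(10, 63)           # 10 samples, 63 features per row
theta_demo = np.random.rand(63, 32)      # weights of a 32-neuron layer
bias_demo = np.zeros((1, 32))            # one bias per neuron
states = batch @ theta_demo + bias_demo  # broadcasting adds the bias to each row
print(states.shape)                      # (10, 32)
```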
# ### Weight initialization
# To learn anything, the weights must not all be initialized to $0$. There are several approaches to initialize weights. We will use the approach of [Glorot et al., 2010](http://proceedings.mlr.press/v9/glorot10a.html).
# The initial weights are drawn from a uniform distribution $U(-z, z)$ with the limit $z$ calculated as:
# \begin{align}
# z &= \sqrt{\frac{6}{k+l}}
# \end{align}
# ...with input length $k$ and output length $l$.
#
#
# Just to get an idea how this all works in Python, we will build a layer with 32 neurons and feed it all 63 `mol_descriptors`.
# Initialize the weight matrix `theta` using numpy functions:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "6ba0a8c79eb613fdfb568ef468171880", "grade": false, "grade_id": "cell-6da9b09109577066", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Take 10 rows as one batch
sample = X_train[0:10]
n_inputs = sample.shape[1]
n_outputs = 32
# First, be clear about what the rows and columns are:
# each row is one observation, each column is a feature.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "e12c0498043d82f9f0df88e867892a89", "grade": false, "grade_id": "cell-31a3010d3f198150", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Initialize the weights theta after Glorot et al. 2010 (glorot uniform):
theta = None
# YOUR CODE HERE
k = n_inputs
l = n_outputs
z = np.sqrt(6/(k+l))
theta = np.random.uniform(-z,z,(k,l))
# raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "249e1c571db8c4153d3bfd53d467a62e", "grade": true, "grade_id": "Weights-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Weights - 1 point
assert theta.shape[0] == n_inputs, "Your weight shape doesn't match!"
assert theta.shape[1] == n_outputs, "Your weight shape doesn't match!"
# Hidden asserts for the limits of the uniform distribution
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "0a8d40b239a972514487a815cce117a4", "grade": false, "grade_id": "cell-6f705f2c4c9570d4", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now calculate h(x) for the sample using your weights:
# + deletable=false nbgrader={"cell_type": "code", "checksum": "9b6f77c2eaeb20c4ad68cd763fa03328", "grade": false, "grade_id": "cell-e47a93a7f3cc2876", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Initialize the bias row vector with zeros:
b = None
# YOUR CODE HERE
b = np.zeros((1, n_outputs))
# raise NotImplementedError()
# Calculate h(x) using sample, theta and b
# The bias won't change anything but it will check for correct shapes.
h = None
# YOUR CODE HERE
h = np.dot(sample,theta)+b
# raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "6dc5b9220c0b0d8f8f843afe4da33f68", "grade": true, "grade_id": "States-1", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
# States - 2 points
assert h.shape[0] == sample.shape[0]
assert h.shape[1] == n_outputs
# Possible hidden asserts
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "dd3e2974ab3824eda9befd7c5d835c67", "grade": false, "grade_id": "cell-1b37830e571f872c", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Activation functions
#
# Our model will be a regression model. For the output of the model we need to fit the unscaled `y`. Hence we need an unbounded linear activation function for the output, which is basically an identity function:
#
# \begin{align}
# linear(H(x)) &= H(x)
# \end{align}
#
# As non-linear activation function for the hidden layers we will use the ReLu function:
#
# \begin{align}
# relu(H(x)) &=
# \begin{cases}
# 0~\text{ if }~h_l(x) \leq 0\\
# h_l(x)~\text{ if }~h_l(x)>0
# \end{cases}
# \end{align}
#
# So it sets all values below zero to zero.
# The ReLu has the biological motivation of having a minimal threshold until it starts outputting values different from 0. Additionally, it yields sparse activations in the network, which is usually favorable, and its gradient is trivial to calculate.
#
# Please calculate the ReLu of your previously calculated `h`:
# + deletable=false nbgrader={"cell_type": "code", "checksum": "9ab1acb5fc88c13ad757486c80348314", "grade": false, "grade_id": "cell-5c8a3e28ebedb119", "locked": false, "schema_version": 3, "solution": true, "task": false}
# Calculate the activation a by computing the ReLu of h
a = None
# YOUR CODE HERE
a = h.copy()
a[a<0] = 0
# raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "2b6cf6d2a764b2c0113a0678acb0e9cf", "grade": true, "grade_id": "ReLu-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# ReLu - 1 point
assert np.min(a) >= 0
# Possible hidden asserts
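# For reference, NumPy's `np.maximum` computes the same ReLu in one idiomatic
# step; a small sketch with made-up values:

```python
import numpy as np

h_demo = np.array([[-1.5, 0.0, 2.0],
                   [3.0, -0.5, 0.25]])
a_demo = np.maximum(h_demo, 0)  # elementwise max with 0, i.e. ReLu
print(a_demo)
```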
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "06675eb32f5b39cd71405148def6b4d0", "grade": false, "grade_id": "cell-b2b2af32ce0ca084", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Backpropagation
#
# ### Weights
# From the lecture recall the following chain to get the gradient of the error $J$ with respect to the weights $\Theta$ of the output layer:
#
# \begin{align}
# \frac{\partial J}{\partial \Theta} &= \underset{(1)}{\frac{\partial J}{\partial a}}
# \underset{(2)}{\frac{\partial a}{\partial h}}
# \underset{(3)}{\frac{\partial h}{\partial \Theta}} \\
# \end{align}
#
# For our case the error is the MSE defined as:
# \begin{align}
# J &= \frac{1}{m} \sum_{i=1}^m \left( a^{(i)} - y^{(i)} \right)^2
# \end{align}
# ...for $m$ samples $i$.
#
# Everything else is given. Now calculate the partial derivatives for a single sample $i$ of:
# **(1) $\frac{\partial \text{J}}{\partial a^{(i)}}$ with the MSE as $J$
# (2) $\frac{\partial a^{(i)}}{\partial h^{(i)}}$ with the linear and ReLu functions as $a$
# (3) $\frac{\partial h^{(i)}}{\partial \theta^{(i)}}$ using the calculation for a neuron layer**
#
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "650a0f838bd12eee53c46634e7114de0", "grade": false, "grade_id": "cell-c639ca9b3959931f", "locked": true, "schema_version": 3, "solution": false, "task": false}
from IPython.display import display, Markdown
display(Markdown(r"\begin{align}"
r"\frac{\partial \text{MSE}}{\partial a^{(i)}} &= \frac{1}{m} \left( a^{(i)} - y^{(i)} \right) \tag{1}\\"
r"\frac{\partial \text{ReLu}^{(i)}}{\partial h^{(i)}} &= "
r"\begin{cases} "
r"0~\text{ if }~h^{(i)}(x) \leq 0\\"
r"1~\text{ if }~h^{(i)}(x)>0"
r"\end{cases} \tag{2}\\"
r"\frac{\partial \text{linear}^{(i)}}{\partial h^{(i)}} &= 1 \tag{2}\\"
r"\frac{\partial h^{(i)}}{\partial \theta^{(i)}} &= x^{(i)} \tag{3}"
r"\end{align}"))
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "1ab2acada8e49cf1b058b83d633c227a", "grade": false, "grade_id": "cell-c47108b07d99fbf0", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Now let's take the above calculated `a` as output and calculate some random true labels `y_` for testing:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "984ef23cfd5fda32d981d71af0c46445", "grade": false, "grade_id": "cell-1b8e39a0adf8c3a9", "locked": true, "schema_version": 3, "solution": false, "task": false}
y_ = np.random.uniform(low=0, high=3, size=[10, 32])
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "07198fe2ede8e6228e69e802a8f44f20", "grade": false, "grade_id": "cell-00bda07a4a44647d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Next we can use the derivatives from above to calculate the gradient for one layer with ReLu activation:
#
# \begin{align}
# \frac{\partial J}{\partial \Theta} &= \underset{(1)}{\frac{\partial J}{\partial a}}
# \underset{(2)}{\frac{\partial a}{\partial h}}
# \underset{(3)}{\frac{\partial h}{\partial \Theta}} \\
# \end{align}
# + deletable=false nbgrader={"cell_type": "code", "checksum": "92779ac0363dc94a79abde8b4453a8f7", "grade": false, "grade_id": "cell-d04d29795455d3b4", "locked": false, "schema_version": 3, "solution": true, "task": false}
# We start from the back:
# (3) is trivial
dh_dtheta = None
# YOUR CODE HERE
dh_dtheta = sample
# raise NotImplementedError()
# (2) for ReLu: Here you need a reference to h and maybe two steps
da_dh = None
# YOUR CODE HERE
da_dh = (h > 0).astype(float)  # derivative of ReLu: 1 where h > 0, else 0
# raise NotImplementedError()
# (1) m is the batch size!
m = a.shape[0]
dJ_da = 1/m*(a-y_)
# YOUR CODE HERE
# raise NotImplementedError()
dJ_dTheta = np.dot(dh_dtheta.T, (dJ_da * da_dh))
# The error gradient with respect to the weights and the shape of the weights should agree:
print(dJ_dTheta.shape, theta.shape)
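# Analytic gradients like `dJ_dTheta` can be verified with central finite
# differences. A self-contained sketch on tiny random data (all names here are
# illustrative; the 1/2 factor in the loss makes its derivative match the
# $\frac{1}{m}(a - y)$ convention used above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
th = rng.normal(size=(3, 2))
y_true = rng.normal(size=(4, 2))
m = x.shape[0]

def loss(theta_):
    a = np.maximum(x @ theta_, 0)               # forward pass with ReLu
    return np.sum((a - y_true) ** 2) / (2 * m)  # 1/2 factor -> dJ/da = (a - y)/m

# analytic gradient, following the chain derived above
h = x @ th
dJ_da = (np.maximum(h, 0) - y_true) / m
da_dh = (h > 0).astype(float)
grad_analytic = x.T @ (dJ_da * da_dh)

# central finite differences, one weight at a time
eps = 1e-6
grad_numeric = np.zeros_like(th)
for i in range(th.shape[0]):
    for j in range(th.shape[1]):
        d = np.zeros_like(th)
        d[i, j] = eps
        grad_numeric[i, j] = (loss(th + d) - loss(th - d)) / (2 * eps)

print(np.max(np.abs(grad_analytic - grad_numeric)))  # close to zero
```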
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "4234c64dbac518d3e8e61520d3f745b2", "grade": true, "grade_id": "dh_dtheta-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# dh dtheta - 1 point
assert dh_dtheta.shape[0] == 10
assert dh_dtheta.shape[1] == 63
# Hidden test for the content of dh_dtheta
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "d11c554e8aecdf1be6ebfc92d4f1637a", "grade": true, "grade_id": "da_dh-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# da dh - 1 point
assert da_dh.shape[0] == 10
assert da_dh.shape[1] == 32
# Hidden test for the content of drelu_dh
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "bee76a01284e330535afe1d7ba5b6d95", "grade": true, "grade_id": "dJ_da-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# dJ da - 1 point
assert dJ_da.shape[0] == 10
assert dJ_da.shape[1] == 32
# Hidden test for the content of dJ_da
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "c3b2f296fac8196ac43a71c7f3b13a99", "grade": false, "grade_id": "cell-ba0b36989b2f7672", "locked": true, "schema_version": 3, "solution": false, "task": false}
# We can then update the weights for the next step using:
# \begin{align}
# \Theta_{t+1} &= \Theta_t - \alpha \cdot \frac{\partial J}{\partial \Theta} \\
# \end{align}
# ...with learning rate $\alpha$.
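# The update rule in one dimension, as a quick hypothetical sketch: minimizing
# $J(\theta) = (\theta - 3)^2$ with gradient $2(\theta - 3)$ drives $\theta$
# towards the minimum at $3$:

```python
# hypothetical one-dimensional example, not part of the graded cells
theta_t = 0.0
alpha = 0.1
for _ in range(50):
    grad = 2 * (theta_t - 3)          # dJ/dtheta for J = (theta - 3)^2
    theta_t = theta_t - alpha * grad  # theta_{t+1} = theta_t - alpha * dJ/dtheta
print(theta_t)  # converges towards 3
```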
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "6a723e4505dbb32f796f5f0d187891f0", "grade": false, "grade_id": "cell-b7e8e14e2d041ed8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ### Bias
#
# Besides the weights we also need to fit the bias. The bias can be derived in the same manner, except that its input is the constant $1$ instead of $x$: we simply multiply $1$ with the bias weights. Therefore, for the bias, (3) collapses to:
# \begin{align}
# \frac{\partial h^{(i)}_b}{\partial \theta^{(i)}_b} &= x^{(i)}_b = 1\tag{3}
# \end{align}
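# Because the bias "input" is the constant $1$, its gradient is simply the sum
# of the error terms over the batch axis. A shape sketch with illustrative names:

```python
import numpy as np

delta = np.random.rand(10, 32)  # stand-in for dJ/dh of a batch of 10, 32 neurons
dJ_db = np.sum(delta, axis=0)   # collapse the batch axis: one gradient per bias
print(dJ_db.shape)              # (32,)
```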
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "72d713aef4e8a41b69704987b4972425", "grade": false, "grade_id": "cell-25e02e4c3321aa4d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Next we will build a simple feed forward network.
#
# Recall from the lecture (Figure 5):
#
# <div>
# <img src="attachment:backprop_1.png" width="80%"/>
# </div>
# <p style="text-align:center;font-size:80%;font-style:italic">
# Figure 5: Backpropagation.
# </p>
#
# For backpropagation we are still missing the one element connecting the layers, namely $\frac{\partial a^2}{\partial a^1}$:
# \begin{align}
# a^2 &= \Theta^2 a^1 \\
# \frac{\partial a^2}{\partial a^1} &= \Theta^2
# \end{align}
#
# With this and the above derived equations we can calculate the gradient of the first (hidden) layer weights with respect to the output error as:
# \begin{align}
# \frac{\partial J}{\partial \Theta^1} &= \underbrace{\frac{\partial J}{\partial a^2}
# \frac{\partial a^2}{\partial a^1}}_\text{(a.)}
# \underbrace{\frac{\partial a^1}{\partial h^1}
# \frac{\partial h^1}{\partial \Theta^1}}_\text{(b.)} \\
# \end{align}
# **The part (a.) can be calculated in the second (output) layer and is returned as upstream gradient.**
#
# **In the first (hidden) layer we can then use this gradient and combine it with part (b.) to obtain the gradient for the weights of layer 1 with respect to the output error.**
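# The two bold statements above can be condensed into a few lines of numpy.
# This is only an illustrative sketch (tiny random data, no biases), not the
# graded implementation below:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 4))        # batch of 5 samples, 4 input features
theta1 = rng.normal(size=(4, 3))   # hidden layer weights
theta2 = rng.normal(size=(3, 2))   # output layer weights
y = rng.normal(size=(5, 2))

h1 = x @ theta1
a1 = np.maximum(h1, 0)             # hidden layer with ReLu
a2 = a1 @ theta2                   # linear output layer

m = x.shape[0]
dJ_da2 = (a2 - y) / m                    # (1) computed in the output layer
dJ_da1 = dJ_da2 @ theta2.T               # part (a.): passed down to layer 1
dJ_dtheta1 = x.T @ (dJ_da1 * (h1 > 0))   # part (b.) applied in the hidden layer
print(dJ_dtheta1.shape)                  # matches theta1: (4, 3)
```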
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "8074640cf9b5e7032a04ff486ef892db", "grade": false, "grade_id": "cell-336b1a80f598e3c7", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Let's put this into work by building a feed forward network for our regression task with:
#
# 1. The 63 `mol_descriptors` as input $x^{(i)}$
# 2. One hidden layer $1$ with 32 neurons and a ReLu activation: `class HiddenLayer`
# 3. One output layer $2$ with two outputs (LUMO and HOMO) with a linear (no) activation function: `class OutputLayer`
#
# Both have a forward pass, which calculates the output of the layer, a backward pass which calculates the gradients and an update function that updates the weights using the gradients and the learning rate.
#
# Only the `OutputLayer` has to return the part (a.) from above to then pass it on to the backward pass of the `HiddenLayer`.
#
# We start implementing from the bottom up with the `OutputLayer` (layer 2):
#
# + deletable=false nbgrader={"cell_type": "code", "checksum": "722562420e313f738a93b8bf4320811d", "grade": false, "grade_id": "cell-5cec7f87a6783db2", "locked": false, "schema_version": 3, "solution": true, "task": false}
class OutputLayer:
def __init__(self, n_inputs: int, n_outputs: int):
# Initialize (n_inputs X n_outputs)-dimensional weight matrix self.theta with a
# Glorot et Al. 2010 uniform initialization:
self.theta = None
# YOUR CODE HERE
k = n_inputs
l = n_outputs
z = np.sqrt(6/(k+l))
self.theta = np.random.uniform(-z,z,(k,l))
# raise NotImplementedError()
# Initialize the bias vector self.b to zeros:
self.b = None
# YOUR CODE HERE
self.b = np.zeros(n_outputs)
# raise NotImplementedError()
def forward(self, input_vector):
self.input = input_vector
# Compute the states h(x) as self.h
self.h = None
# YOUR CODE HERE
self.h = np.dot(self.input,self.theta)
self.h = self.h + self.b
# raise NotImplementedError()
# As this is a linear layer a(x) = h(x)
self.a = self.h.copy()
return self.a
def backward(self, y_predicted, y_true):
# HINT: as we do things backwards you might have to transpose some matrices
# partial derivative of the states with respect to the weights
dh2_dtheta2 = None
# YOUR CODE HERE
# We work backwards because it is easiest to start from the output.
dh2_dtheta2 = self.input
# raise NotImplementedError()
# partial derivative of activations with respect to the states
# One as we have a linear/no activation for the regression output
da2_dh2 = 1
# partial derivative of the error (MSE) with respect to the activation
# infer the batch size from the input shape for normalization
dJ_da2 = None
# YOUR CODE HERE
m = self.a.shape[0]
dJ_da2 = 1/m*(y_predicted-y_true)
# raise NotImplementedError()
# Gradient of the weights with respect to the error
# for the weight updates for this layer:
self.dJ_dTheta2 = None
# YOUR CODE HERE
self.dJ_dTheta2 = np.dot(dh2_dtheta2.T, (dJ_da2 * da2_dh2))
# raise NotImplementedError()
# Gradient of the bias with respect to the error
# Recall using dh2_db2 = 1 and handle the batch size by summation
self.dJ_db2 = None
# YOUR CODE HERE
self.dJ_db2 = np.sum(dJ_da2,axis=0)
# raise NotImplementedError()
# The downstream gradient for layer 1.
# employing da2_da1 = theta
downstream_gradient = None
# YOUR CODE HERE
# the downstream gradient is dJ/da1
downstream_gradient = np.dot(dJ_da2, self.theta.T)
# raise NotImplementedError()
return downstream_gradient
def update(self, learning_rate):
# You don't need to change this
# HINT:
# If your model gets worse instead of better make sure you calculate the correct MSE
self.theta = self.theta - learning_rate*self.dJ_dTheta2
self.b = self.b - learning_rate*self.dJ_db2
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "01dbdad3a58c5a41b09b3aadef59de4a", "grade": false, "grade_id": "cell-a2f72c3745dfbb62", "locked": true, "schema_version": 3, "solution": false, "task": false}
X_sample = X_train[0:100, :]
y_sample = y_train[0:100, :]
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "47b1bbc1e90b5ad8a1412d3dbbf97303", "grade": true, "grade_id": "Output_Forward_Pass-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Output Forward Pass - 1 point
l2 = OutputLayer(X_sample.shape[1], y_sample.shape[1])
y_pred_sample = l2.forward(X_sample)
assert y_sample.shape[0] == y_pred_sample.shape[0]
assert y_sample.shape[1] == y_pred_sample.shape[1]
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "f0c70551912c6b9a0e4549980fdc7477", "grade": true, "grade_id": "Output_Backward_Pass-1", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
# Output Backward Pass - 2 points
downstream_gradient = l2.backward(y_pred_sample, y_sample)
assert downstream_gradient.shape[0] == X_sample.shape[0]
assert downstream_gradient.shape[1] == X_sample.shape[1]
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "3b40b8e942762ae0a2e992501e963507", "grade": true, "grade_id": "Output_Weight_Update-1", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
# Output Weight Update - 2 points
l2 = OutputLayer(X_sample.shape[1], y_sample.shape[1])
y_pred_before = l2.forward(X_sample)
for i in range(100):
y_pred_sample = l2.forward(X_sample)
l2.backward(y_pred_sample, y_sample)
l2.update(0.05)
y_pred_after = l2.forward(X_sample)
r2_before = r2_score(y_sample, y_pred_before)
r2_after = r2_score(y_sample, y_pred_after)
assert r2_before < r2_after
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "b095fdf2cfd03d569292a1962be9d662", "grade": false, "grade_id": "cell-677fcd4d04d26c49", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Next is the `HiddenLayer`. There are two main differences you have to keep in mind here:
# 1. You have to work with the ReLu activation, so $\frac{\partial a}{\partial h}$ isn't $1$ anymore.
# 2. We use the `upstream_gradient` for the backward pass, provided by the `backward` method of the `OutputLayer`.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "5f8582b27cf50feb008670fa26c584e2", "grade": false, "grade_id": "cell-10128093607d268a", "locked": false, "schema_version": 3, "solution": true, "task": false}
class HiddenLayer:
def __init__(self, n_inputs: int, n_outputs: int):
# Initialize the weight matrix self.theta with a
# Glorot et Al. 2010 uniform initialization:
self.theta = None
# YOUR CODE HERE
k = n_inputs
l = n_outputs
z = np.sqrt(6/(k+l))
self.theta = np.random.uniform(-z,z,(k,l))
# raise NotImplementedError()
# Initialize the bias vector to zeros:
self.b = None
# YOUR CODE HERE
self.b = np.zeros(n_outputs)
# raise NotImplementedError()
def forward(self, input_vector):
self.input = input_vector
# Compute the states h(x) as self.h
self.h = None
# YOUR CODE HERE
self.h = np.dot(self.input,self.theta)
self.h += self.b
# raise NotImplementedError()
# Compute the activations a(x) as self.a with ReLu activation
self.a = None
# YOUR CODE HERE
self.a = np.maximum(self.h, 0)
# raise NotImplementedError()
return self.a
def backward(self, upstream_gradient):
# HINT: as we do things backwards you might have to transpose some matrices
# Gradient of the states with respect to the weights (trivial):
dh1_dtheta1 = None
# YOUR CODE HERE
dh1_dtheta1 = self.input
# raise NotImplementedError()
# Gradient of the activations with respect to the states:
# Remember to apply ReLu here
da1_dh1 = None
# YOUR CODE HERE
        da1_dh1 = (self.h > 0).astype(float)  # ReLu derivative: 1 where h > 0, else 0
# raise NotImplementedError()
# Gradient of the error with respect to the weights:
# Now we can finally use the upstream gradient...
self.dJ_dTheta1 = None
# YOUR CODE HERE
self.dJ_dTheta1 = np.dot(dh1_dtheta1.T, (upstream_gradient * da1_dh1))
# raise NotImplementedError()
# And for the bias, similar to the output layer but this time
# using the upstream gradient:
self.dJ_db1 = None
# YOUR CODE HERE
        self.dJ_db1 = np.sum(upstream_gradient * da1_dh1, axis=0)
# raise NotImplementedError()
def update(self, learning_rate):
# You don't need to change this
self.theta = self.theta - learning_rate*self.dJ_dTheta1
self.b = self.b - learning_rate*self.dJ_db1
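# A quick way to convince yourself that the backward pass above is correct is a finite-difference check against the analytic gradient. A self-contained NumPy sketch (the shapes and the stand-in upstream gradient here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))      # small input batch
theta = rng.normal(size=(3, 2))  # hypothetical weights
b = np.zeros(2)
G = rng.normal(size=(5, 2))      # stand-in for the upstream gradient

def J(theta):
    # forward pass through the layer, then dot with the upstream gradient
    a = np.maximum(X @ theta + b, 0)
    return np.sum(G * a)

# analytic gradient: X^T (upstream * relu'(h)), as in HiddenLayer.backward
h = X @ theta + b
analytic = X.T @ (G * (h > 0))

# central finite difference for the single entry theta[0, 0]
eps = 1e-6
d = np.zeros_like(theta)
d[0, 0] = eps
numeric = (J(theta + d) - J(theta - d)) / (2 * eps)

assert abs(numeric - analytic[0, 0]) < 1e-4
```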
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "420fd29ad8936e8d861cd74478c91ca7", "grade": false, "grade_id": "cell-8c9fba61d9a7e611", "locked": true, "schema_version": 3, "solution": false, "task": false}
# We create a sample from the existing data that matches the output dimension
# and also shift it above zero, so it is learnable with ReLu:
n_hidden = 32
X_sample = X_train[0:100, :]
y_sample = np.array([y_train[i*100:i*100+100, 0] for i in range(n_hidden)]).T
y_sample = y_sample-y_sample.min()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "0cd3ab232551bde363a738e85c024eaa", "grade": true, "grade_id": "Hidden_Forward_Pass-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Hidden Forward Pass - 1 point
l1 = HiddenLayer(X_sample.shape[1], n_hidden)
y_pred_sample = l1.forward(X_sample)
assert y_pred_sample.shape[0] == X_sample.shape[0]
assert y_pred_sample.shape[1] == n_hidden
assert y_pred_sample.min() >= 0
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "151acb70a0dde1032957c65aa983ade5", "grade": true, "grade_id": "Hidden_Weight_Update-1", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
# Hidden Weight Update - 2 points
l1 = HiddenLayer(X_sample.shape[1], n_hidden)
y_pred_before = l1.forward(X_sample)
for i in range(100):
y_pred_sample = l1.forward(X_sample)
downstream_gradient = 1/100 * (y_pred_sample-y_sample)
l1.backward(downstream_gradient)
l1.update(0.05)
y_pred_after = l1.forward(X_sample)
r2_before = r2_score(y_sample, y_pred_before)
r2_after = r2_score(y_sample, y_pred_after)
assert r2_before < r2_after
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "83f716735540190bf138fe343af3ca71", "grade": false, "grade_id": "cell-354dc7f5d7680449", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## NN Training
#
# Okay, now let's see whether our network can beat the benchmark ridge regression.
#
# As mentioned before, we will train our network in batches (`batch_size`) and for `n_epochs`.
# Per epoch we therefore have to pass `X_train.shape[0] // batch_size` batches.
# For each epoch we randomly shuffle the dataset: using `np.random.permutation(X_train.shape[0])` we generate a randomly shuffled index, and indexing X and y with slices of this index creates differently shuffled batches for each epoch.
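# The shuffling-and-slicing idea in miniature (illustrative only):

```python
import numpy as np

X = np.arange(10).reshape(5, 2)  # 5 toy samples
batch_size = 2
n_batches = X.shape[0] // batch_size  # 2 full batches; the leftover sample is dropped

perm = np.random.permutation(X.shape[0])
batches = [X[perm[i * batch_size:(i + 1) * batch_size]] for i in range(n_batches)]

seen = np.concatenate(batches)
assert seen.shape == (4, 2)  # 4 distinct samples, no repeats within an epoch
```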
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "5e9496a196e31d24b70904c2356e32c8", "grade": false, "grade_id": "cell-db017859cd6974c3", "locked": true, "schema_version": 3, "solution": false, "task": false}
n_hidden = 64
lr = 0.05
n_epochs = 60
batch_size = 100
n_batches = X_train.shape[0] // batch_size
l1 = HiddenLayer(X_train.shape[1], n_hidden)
l2 = OutputLayer(n_hidden, y_train.shape[1])
y_pred_before = l2.forward(l1.forward(X_test))
# + deletable=false nbgrader={"cell_type": "code", "checksum": "cf403da0b634f56dccf95367f6737e90", "grade": false, "grade_id": "cell-5e91f098bcd19cff", "locked": false, "schema_version": 3, "solution": true, "task": false}
for epoch in range(n_epochs):
permutation = np.random.permutation(X_train.shape[0])
for batch in range(n_batches):
start = batch*batch_size
end = start+batch_size
X_batch = X_train[permutation[start:end]]
y_batch = y_train[permutation[start:end]]
# Get the predictions
y_pred = None
# YOUR CODE HERE
y_pred = l2.forward(l1.forward(X_batch))
# raise NotImplementedError()
# Do the backward pass of both layers and update the weights
# YOUR CODE HERE
downstream_gradient = l2.backward(y_pred,y_batch)
l2.update(lr)
l1.backward(downstream_gradient)
l1.update(lr)
# raise NotImplementedError()
y_pred = l2.forward(l1.forward(X_test))
if epoch%4 == 0:
print(f"Epoch {epoch}: Test R2 = {r2_score(y_test, y_pred)}")
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "512a47b90437c3dd95c5aa3b998738f8", "grade": true, "grade_id": "Full_Training-1", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false}
# Full Training - 2 points
y_pred_after = l2.forward(l1.forward(X_test))
r2_before = r2_score(y_test, y_pred_before)
r2_after = r2_score(y_test, y_pred_after)
assert r2_before < r2_after
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "571d91bbddda6804f430eb9deebdb6d4", "grade": false, "grade_id": "cell-aa9f5444c49aa249", "locked": true, "schema_version": 3, "solution": false, "task": false}
y_pred = l2.forward(l1.forward(X_test))
r2_lumo_nn = r2_score(y_test[:,0], y_pred[:,0])
r2_homo_nn = r2_score(y_test[:,1], y_pred[:,1])
print(f'R2 LUMO: {r2_lumo_nn}\nR2 HOMO: {r2_homo_nn}')
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "769d31191675e9386f0c2cf068095f59", "grade": true, "grade_id": "Bonus_Beating_Ridge-1", "locked": true, "points": 1, "schema_version": 3, "solution": false, "task": false}
# Bonus Beating Ridge - 1 point
print(f'\tRidge\t\t\tNN\nLUMO\t{r2_lumo_ridge}\t{r2_lumo_nn}\nHOMO\t{r2_homo_ridge}\t{r2_homo_nn}')
assert r2_lumo_nn > r2_lumo_ridge and r2_homo_nn > r2_homo_ridge
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "ba0d17f177b2dd43f1e8a233e262d2c7", "grade": false, "grade_id": "cell-1d228c998d797378", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Quite possibly you were able to beat the ridge regression at this point, with only one hidden ReLu layer. Of course, the hyperparameters of both models haven't been optimized yet, so you can get even better results. But please do this in a separate notebook ;)
#
# Let's have a look at the final plots:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "18674f487fd01c6bbf8033925f5c83b4", "grade": false, "grade_id": "cell-afdaac610e54e883", "locked": true, "schema_version": 3, "solution": false, "task": false}
fig, axs = plt.subplots(1, 2, figsize=(12,6))
axs[0].scatter(y_test[:,0], y_pred[:,0], alpha=0.2, s=5)
axs[0].plot([-6, -1], [-6, -1], 'k')
axs[0].set_title(f'R2 LUMO: {r2_lumo_nn}')
axs[0].set_xlabel('true LUMO')
axs[0].set_ylabel('predicted LUMO')
axs[0].set_xlim([-6, -1])
axs[0].set_ylim([-6, -1])
axs[1].scatter(y_test[:,1], y_pred[:,1], alpha=0.2, s=5)
axs[1].plot([-9, -3], [-9, -3], 'k')
axs[1].set_title(f'R2 HOMO: {r2_homo_nn}')
axs[1].set_xlabel('true HOMO')
axs[1].set_ylabel('predicted HOMO')
axs[1].set_xlim([-9, -3])
axs[1].set_ylim([-9, -3])
plt.show()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "4fbdcb385dc9820f3017fac00e1b0e90", "grade": false, "grade_id": "cell-1e05cf72bc62162b", "locked": true, "schema_version": 3, "solution": false, "task": false}
# **You finished the Exercise!**
#
# Next time we will start using Tensorflow and Keras to build neural networks.
#
# Here is a little example for our application. Be aware that Tensorflow does some things slightly differently internally, so it might give results that differ from your own implementation.
#
# You can use this example as a starting point to experiment with your own implementations **in another notebook**:
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "2a605a09476140fb59538cee7f352f40", "grade": false, "grade_id": "cell-b41ed4cb02d0788f", "locked": true, "schema_version": 3, "solution": false, "task": false} pycharm={"name": "#%%\n"}
from tensorflow import keras
from tensorflow.keras import layers
n_hidden = 64
lr = 0.05
# A low number of epochs so the nb doesn't slow down during grading
n_epochs = 5
batch_size = 100
inputs = keras.Input(shape=(X.shape[1],))
hidden1 = layers.Dense(n_hidden, activation='relu')(inputs)
outputs = layers.Dense(2, activation='linear')(hidden1)
model = keras.Model(inputs=inputs, outputs=outputs, name="simple_ff")
print(model.summary())
model.compile(
loss=keras.losses.MeanSquaredError(),
optimizer=keras.optimizers.SGD(),
metrics=["MSE"],
)
model.fit(X_train, y_train, batch_size=batch_size, epochs=n_epochs)
y_pred = model.predict(X_test)
r2_lumo_keras = r2_score(y_test[:,0], y_pred[:,0])
r2_homo_keras = r2_score(y_test[:,1], y_pred[:,1])
print(f'R2 LUMO: {r2_lumo_keras}\nR2 HOMO: {r2_homo_keras}')
fig, axs = plt.subplots(1, 2, figsize=(12,6))
axs[0].scatter(y_test[:,0], y_pred[:,0], alpha=0.2, s=5)
axs[0].plot([-6, -1], [-6, -1], 'k')
axs[0].set_title(f'R2 LUMO: {r2_lumo_keras}')
axs[0].set_xlabel('true LUMO')
axs[0].set_ylabel('predicted LUMO')
axs[0].set_xlim([-6, -1])
axs[0].set_ylim([-6, -1])
axs[1].scatter(y_test[:,1], y_pred[:,1], alpha=0.2, s=5)
axs[1].plot([-9, -3], [-9, -3], 'k')
axs[1].set_title(f'R2 HOMO: {r2_homo_keras}')
axs[1].set_xlabel('true HOMO')
axs[1].set_ylabel('predicted HOMO')
axs[1].set_xlim([-9, -3])
axs[1].set_ylim([-9, -3])
plt.show()
| RepeatExpriment/Exercise06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
# %matplotlib inline
# +
# Jul. 12, 2017 class
r = 256
def color(x, y, n):
    if (x ** 2 + y ** 2) <= 1:
        t = (np.arctan2(y, x) / np.pi + 1) / 2  # angle rescaled to [0, 1]; avoids shadowing the image radius r
        c = int(n * t) / n                      # quantize into n levels
        c = int(255 * (c + 1) / 2.)
        return (255, c, 255)
    return (0, 0, 0)
for n in range(1, 11):
print(n)
img = np.array([[color(x / r,y / r, n) for x in range(-r, r)] for y in range(-r, r)])
img = img.astype('uint8')
plt.figure();plt.imshow(img)
# -
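# The angle-to-value mapping used in `color` above, in isolation: `np.arctan2` returns angles in (-π, π], which `(... / np.pi + 1) / 2` rescales to [0, 1]:

```python
import numpy as np

# angles for the points (1,0), (0,1), (-1,0), (0,-1): 0, pi/2, pi, -pi/2
angles = np.arctan2([0, 1, 0, -1], [1, 0, -1, 0])
scaled = (angles / np.pi + 1) / 2  # rescale (-pi, pi] to [0, 1]

print(scaled)  # [0.5  0.75 1.   0.25]
```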
# Jul. 13, 2017 class
img = np.zeros((400, 400, 3)).astype('uint8')
cv2.line(img, (100,100), (200,100), color=(255, 0, 0), thickness=3)
cv2.line(img, (100,100), (100,200), color=(255, 128, 0), thickness=3)
cv2.line(img, (200,100), (200,200), color=(0, 255, 0), thickness=3)
cv2.line(img, (100,200), (200,200), color=(0, 0, 255), thickness=3)
plt.imshow(img)
# +
print(5)
img = np.zeros((400, 400, 3)).astype('uint8')
cv2.line(img, (0,100), (100,0), color=(255, 0, 0), thickness=3)
cv2.line(img, (100,0), (200,100), color=(255, 128, 0), thickness=3)
cv2.line(img, (0,100), (100,200), color=(0, 255, 0), thickness=3)
cv2.line(img, (100,200), (200,100), color=(0, 0, 255), thickness=3)
plt.imshow(img)
# -
def deg(d):
return d / 180. * np.pi
img = np.zeros((400, 400, 3)).astype('uint8')
cv2.line(img, (100,300), (300,300), color=(255, 128, 0), thickness=3)
x1 = int(100+200*np.cos(deg(60)))
y1 = int(300-200*np.sin(deg(60)))
cv2.line(img, (100,300), (x1, y1), color=(255, 0, 0), thickness=3)
cv2.line(img, (x1,y1), (300,300), color=(0, 255, 0), thickness=3)
plt.imshow(img)
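# Side note: NumPy already ships this degree-to-radian conversion as `np.deg2rad`, which matches the `deg` helper above:

```python
import numpy as np

def deg(d):
    # same conversion as the helper above
    return d / 180. * np.pi

assert np.isclose(deg(60), np.deg2rad(60))
assert np.isclose(np.cos(np.deg2rad(60)), 0.5)
```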
| Sectors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Nigeria COVID-19 Data Prep Tool
# ### 0. Import Python module
# +
# import modules
import datetime
from datetime import date
import numpy as np
import pandas as pd
import geopandas as gpd
from geopandas import GeoDataFrame as gdf
import fiona
import json
import lxml
import requests
from datetime import date
from bs4 import BeautifulSoup
# -
# ### 1. Scrape data from website
# +
# make request to website for data
res = requests.get('https://covid19.ncdc.gov.ng/')
status_code = res.status_code
text = res.text
if status_code == 200:
soup = BeautifulSoup(text, 'lxml')
table_html = soup.find_all('table', {'id':'custom1'})
table_df = pd.read_html(str(table_html))[0]
table_df = table_df.rename(columns={'States Affected': 'STATE', 'No. of Cases (Lab Confirmed)': 'CASES', 'No. of Cases (on admission)': 'HOSPITALISED', 'No. Discharged': 'RECOVERED', 'No. of Deaths': 'DEATHS'})
# export table2 to csv
# file_name = "ncdc_covid_"+ str(date.today()) +".csv"
# table_df.to_csv(file_name, index=False, encoding='utf-8')
# print(table_df)
else:
print("Unable to fetch data from URL.")
# +
# read states shapefile data
states_shp_df = gpd.read_file("../data/shp/ncdc-covid19-states.shp")
# select columns
states_shp_df = states_shp_df[["OBJECTID", "CODE", "STATE", "ADMIN_NAME", "GEO_ZONE", "AREA_SQKM", "POP_2016", "CENTER_Y", "CENTER_X", "SCREENED", "ACTIVE", "geometry"]]
# merge dataframes
df = pd.merge(states_shp_df, table_df, on='STATE', how='outer')
# remove NAN
df['CASES'] = df['CASES'].replace(np.nan, 0)
df['HOSPITALISED'] = df['HOSPITALISED'].replace(np.nan, 0)
df['RECOVERED'] = df['RECOVERED'].replace(np.nan, 0)
df['DEATHS'] = df['DEATHS'].replace(np.nan, 0)
# change columns data type
df = df.astype({"CASES": int, "HOSPITALISED": int, "RECOVERED": int, "DEATHS": int})
# reorder columns
df = df[["OBJECTID", "CODE", "STATE", "ADMIN_NAME", "GEO_ZONE", "AREA_SQKM", "POP_2016", "CENTER_Y", "CENTER_X", "CASES", "HOSPITALISED", "RECOVERED", "DEATHS", "SCREENED", "ACTIVE", "geometry"]]
# convert dataframe to geodataframe
gdf = gpd.GeoDataFrame(df, geometry='geometry')
# set projection
gdf.crs = "EPSG:4326"
# export to shp
gdf.to_file("../data/shp/ncdc-covid19-states.shp")
# view data
gdf.head()
# -
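# The outer-merge-then-fillna pattern above, shown on toy frames (the state names and counts here are made up for illustration):

```python
import numpy as np
import pandas as pd

left = pd.DataFrame({"STATE": ["Lagos", "Kano", "Oyo"]})              # all states
right = pd.DataFrame({"STATE": ["Lagos", "Kano"], "CASES": [10, 5]})  # scraped counts

merged = pd.merge(left, right, on="STATE", how="outer")
merged["CASES"] = merged["CASES"].replace(np.nan, 0)  # states with no reported cases
merged = merged.astype({"CASES": int})

print(sorted(merged["CASES"].tolist()))  # [0, 5, 10]
```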
# ### 2. Prep States SHP data
# +
# read states shapefile data
states_shp_df = gpd.read_file("../data/shp/ncdc-covid19-states.shp")
# calculate number of ACTIVE cases
states_shp_df['ACTIVE'] = states_shp_df['CASES'] - (states_shp_df['RECOVERED'] + states_shp_df['DEATHS'])
# reorder columns
states_shp_df = states_shp_df[["OBJECTID", "CODE", "STATE", "ADMIN_NAME", "GEO_ZONE", "AREA_SQKM", "POP_2016", "CENTER_Y", "CENTER_X", "CASES", "DEATHS", "RECOVERED", "ACTIVE", "SCREENED", "geometry"]]
# export to shp
states_shp_df.to_file("../data/shp/ncdc-covid19-states.shp")
# export to geojson
states_shp_df.to_file("../data/geojson/ncdc-covid19-states.geojson", driver='GeoJSON')
# export to csv
states_shp_df.rename(columns={'CENTER_Y':'LAT', 'CENTER_X':'LONG'}, inplace=True)
states_shp_df.drop('geometry',axis=1).to_csv("../data/csv/ncdc-covid19-states.csv")
# export to json
states_shp_df = pd.read_csv("../data/csv/ncdc-covid19-states.csv", index_col=0)
states_shp_df.to_json("../data/json/ncdc-covid19-states.json", orient='records')
# view data
states_shp_df.head()
# -
# ### 3. Prep DAILYUPDATES csv data
# +
# load dailyupdates csv data
df = pd.read_csv("../data/csv/ncdc-covid19-dailyupdates.csv")
# read states shapefile data
states_shp_df = gpd.read_file("../data/shp/ncdc-covid19-states.shp")
values = []
# todays date
today_date = date.today().strftime("%m/%d/%Y")
values.append(str(today_date))
# delete new_row if exists
if str(today_date) in df.index:
    df = df.drop(str(today_date))
# values
prev_cases = int(df[-1:]['TOTAL CONFIRMED'])
prev_deaths = int(df[-1:]['DEATHS'])
prev_recovered = int(df[-1:]['RECOVERED'])
# TOTAL CASES
total_cases = sum(states_shp_df['CASES'])
values.append(total_cases)
# NEW CASES
new_cases = total_cases - prev_cases
values.append(new_cases)
# TOTAL ACTIVE
total_active = sum(states_shp_df['ACTIVE'])
values.append(total_active)
# TOTAL DEATHS
total_deaths = sum(states_shp_df['DEATHS'])
values.append(total_deaths)
# TOTAL RECOVERED
total_recovered = sum(states_shp_df['RECOVERED'])
values.append(total_recovered)
# DAILY DEATHS
new_deaths = total_deaths - prev_deaths
values.append(new_deaths)
# DAILY RECOVERED
new_recovered = total_recovered - prev_recovered
values.append(new_recovered)
# add new row to df
df.loc[len(df)] = values
# export to csv
df.to_csv('../data/csv/ncdc-covid19-dailyupdates.csv', index=False)
# view data
df.tail()
# -
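# The bookkeeping above in miniature: the new daily count is the current total minus the last stored total, and the row is appended with `df.loc[len(df)]` (toy numbers for illustration):

```python
import pandas as pd

# a hypothetical running-totals table
df = pd.DataFrame({"DATE": ["06/01/2020"], "TOTAL": [100], "NEW": [100]})

total_today = 130
prev_total = int(df["TOTAL"].iloc[-1])  # last stored total
df.loc[len(df)] = ["06/02/2020", total_today, total_today - prev_total]

print(df["NEW"].tolist())  # [100, 30]
```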
# ### 4. Prep States Daily CASES csv data
# +
# update daily cases from shapefile
df = pd.read_csv("../data/csv/ncdc-covid19-states-daily-cases.csv", index_col=0)
states_shp_df = gpd.read_file("../data/shp/ncdc-covid19-states.shp")
# todays date
today_date = date.today().strftime("%m/%d/%Y")
# delete new_row if exists
if str(today_date) in df.index:
    df = df.drop(str(today_date))
# create array of all new cases
values = []
for index, row in states_shp_df.iterrows():
# values.append(str(today_date))
values.append(row['CASES'])
# add new row to df
df.loc[str(today_date)] = values
# convert the 'Date' column to datetime format
df = df.reset_index()
df['Date']= pd.to_datetime(df['Date'])
# export to csv
df.to_csv('../data/csv/ncdc-covid19-states-daily-cases.csv', index=False)
# view data
df.tail()
# -
# ### 5. Prep States Daily RECOVERED csv data
# +
# update daily cases from shapefile
df = pd.read_csv("../data/csv/ncdc-covid19-states-daily-recovered.csv", index_col=0)
states_shp_df = gpd.read_file("../data/shp/ncdc-covid19-states.shp")
# todays date
today_date = date.today().strftime("%m/%d/%Y")
# delete new_row if exists
if str(today_date) in df.index:
    df = df.drop(str(today_date))
# create array of all new cases
values = []
for index, row in states_shp_df.iterrows():
values.append(row['RECOVERED'])
# add new row to df
df.loc[str(today_date)] = values
# export to csv
df.to_csv('../data/csv/ncdc-covid19-states-daily-recovered.csv')
# view data
df.tail()
# -
# ### 6. Prep States Daily DEATHS csv data
# +
# update daily cases from shapefile
df = pd.read_csv("../data/csv/ncdc-covid19-states-daily-deaths.csv", index_col=0)
states_shp_df = gpd.read_file("../data/shp/ncdc-covid19-states.shp")
# todays date
today_date = date.today().strftime("%m/%d/%Y")
# delete new_row if exists
if str(today_date) in df.index:
    df = df.drop(str(today_date))
# create array of all new cases
values = []
for index, row in states_shp_df.iterrows():
values.append(row['DEATHS'])
# add new row to df
df.loc[str(today_date)] = values
# export to csv
df.to_csv('../data/csv/ncdc-covid19-states-daily-deaths.csv')
# view data
df.tail()
# -
| tools/.ipynb_checkpoints/data_prep-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
result_tf = pd.read_csv(r'F:\AppliedDataScience\data\multivariate_result_total.csv')
df = pd.read_csv(r'F:\AppliedDataScience\data\dat.csv')
result_tf.head()
df.head()
result_tf.columns
df.columns
mat = df[['session_id', 'session_position', 'session_length',
'track_id', 'skip_1', 'skip_2', 'skip_3', 'not_skipped',]]
mat.head()
len(mat['session_id'].value_counts().unique())
mat['session_id'].value_counts().unique()
user = pd.DataFrame(mat['session_id'].unique())
len(user)
user.head()
mat.head()
user1 = user.rename(columns = {0: "session_id", 1: "session_id_new"})
user1.head()
# +
### track id ##
# -
track = pd.DataFrame(mat['track_id'].unique())
track.head()
R_df = df.pivot(index = 'session_id', columns ='track_id', values = 'not_skipped').fillna(0)
R_df.head()
df_session = pd.DataFrame(df.session_id.unique())
df_track = pd.DataFrame(df.track_id.unique())
mat = df[['session_id', 'session_position', 'session_length',
'track_id', 'skip_1', 'skip_2', 'skip_3', 'not_skipped',]]
mat = mat.head(500)
mat
mat2 = mat.drop_duplicates('session_id')
len(mat)
len(mat2)
mat2
matrix = mat2.pivot(index = 'session_id', columns ='track_id', values = 'not_skipped').fillna(0)
matrix
matrix_arr = matrix.values
matrix_arr
matrix_arr[matrix_arr==True]=1
matrix_arr[matrix_arr==False]=0.5
matrix_arr
matrix_df = pd.DataFrame(matrix_arr)
matrix_df.head()
R = matrix_df.to_numpy()  # DataFrame.as_matrix() was removed in pandas 1.0
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
R_demeaned = pd.DataFrame(R_demeaned)
R_demeaned
R_demeaned.shape
R_demeaned_arr = R_demeaned.values
from scipy.sparse.linalg import svds
# svds requires k < min(matrix.shape); a column vector reshaped to (-1, 1) has
# min(shape) == 1, so factorize the 2-D demeaned matrix instead
U, sigma, Vt = svds(R_demeaned_arr, k = 1)
sigma = np.diag(sigma)  # promote the singular values to a diagonal matrix
sigma
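# For reference, `svds` requires `k < min(matrix.shape)`, so the factorization is normally run on the full 2-D demeaned user-item matrix. A hedged sketch with a made-up rating matrix:

```python
import numpy as np
from scipy.sparse.linalg import svds

R = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 1.0],
              [0.5, 1.0, 0.0],
              [1.0, 0.5, 0.0]])          # made-up session x track matrix
means = R.mean(axis=1, keepdims=True)
R_demeaned = R - means

U, sigma, Vt = svds(R_demeaned, k=2)        # k must be < min(4, 3)
R_approx = U @ np.diag(sigma) @ Vt + means  # rank-2 reconstruction

assert R_approx.shape == R.shape
```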
| soptify_matrix_factorization_try.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + language="html"
# <style>.container { width:75% !important; }</style>
# <link rel='stylesheet' type='text/css' href='static/css_backup/mobile-package-094623.css'>
# <link rel='stylesheet' type='text/css' href='static/css_backup/homepage-package-3bd234.css'>
#
# <body>
# <article id="homepage">
# <a id="library-section"></a>
# <div class="library-section">
# <div class="section-separator library-section-separator">
# <center><img src="static/images/logoNipype_tutorial.png" width=700></center>
# <p>Welcome to the Nipype Tutorial! It covers the basic concepts and most common use cases of Nipype and will teach
# you everything so that you can start creating your own workflows in no time. We recommend that you start with
# the introduction section to familiarize yourself with the tools used in this tutorial and then move on to the
# basic concepts section to learn everything you need to know for your everyday life with Nipype. The workflow
# examples section shows you a real example of how you can use Nipype to analyze an actual dataset. For a very
# quick non-imaging introduction, you can check the Nipype Quickstart notebook in the introduction section.
# </p><p>
# All of the notebooks used in this tutorial can be found on <a href="https://github.com/miykael/nipype_tutorial">github.com/miykael/nipype_tutorial</a>.
# But if you want the real experience and want to go through the computations yourself, we highly
# recommend using a Docker container. More about the Docker image that can be used to run the tutorial can be found
# <a href="https://miykael.github.io/nipype_tutorial/notebooks/introduction_docker.html">here</a>.
# This docker container gives you the opportunity to adapt the commands to your liking and discover the flexibility and real power of
# Nipype yourself.
# </p><p>
# To run the tutorial locally on your system, we will use a <a href="http://www.docker.com/">Docker</a> container. For this you
# need to install Docker and download a docker image that provides you a neuroimaging environment based on a Debian system,
# with working Python 3 software (including Nipype, dipy, matplotlib, nibabel, nipy, numpy, pandas, scipy, seaborn and more),
# FSL, ANTs and SPM12 (no license needed). We used <a href="https://github.com/kaczmarj/neurodocker">Neurodocker</a> to create this docker image.
# </p><p>
# If you do not want to run the tutorial locally, you can also use the
# <a href="https://mybinder.org/v2/gh/miykael/nipype_tutorial/master">Binder service</a>.
# Binder automatically launches the Docker container for you, and you have access to all of the notebooks.
# Note that Binder provides between 1 GB and 4 GB of RAM, so some notebooks from the Workflow Examples might not work.
# All notebooks from the Introduction and Basic Concepts parts should work.
# </p><p>
# For everything that isn't covered in this tutorial, check out the <a href="http://nipype.readthedocs.io/en/latest/">main homepage</a>.
# And if you haven't had enough and want to learn even more about Nipype and Neuroimaging, make sure to look at
# the <a href="https://miykael.github.io/nipype-beginner-s-guide/">detailed beginner's guide</a>.
# </p>
# </div>
#
# <!--Comment: to change the color of the title or section, change the second h2 class argument and the third div
# argument to either science, computing, humanities, test-prep or economics-finance-domain-->
#
# <!--to change the number of rows per column, change the last number in 'pure-u-1-3'.
# For example, to have three columns, change the value to 'pure-u-1-3'-->
#
# <h2 class="domain-header economics-finance-domain"><a class="domain-title">Introduction</a></h2>
# <div class="pure-g domain-table-container economics-finance-domain">
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/introduction_nipype.ipynb">Nipype</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/introduction_jupyter-notebook.ipynb">Jupyter-Notebook</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/introduction_dataset.ipynb">BIDS & Tutorial Dataset</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/introduction_docker.ipynb">Docker</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/introduction_python.ipynb">Python</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/introduction_quickstart.ipynb">Nipype Quickstart</a>
# </div>
# <p>This section is meant as a general overview. It should give you a short introduction to the main topics that
# you need to understand to use Nipype and this tutorial.
# The section also contains a very quick non-imaging introduction to Nipype workflows.</p>
#
# <h2 class="domain-header humanities"><a class="domain-title">Basic Concepts</a></h2>
# <div class="pure-g domain-table-container humanities">
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_interfaces.ipynb">Interfaces</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_nodes.ipynb">Nodes</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_workflow.ipynb">Workflow</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_interfaces_caching.ipynb">Interfaces Caching</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_graph_visualization.ipynb">Graph Visualization</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_data_input.ipynb">Data Input</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_data_input_bids.ipynb">Data Input with BIDS</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_data_output.ipynb">Data Output</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_iteration.ipynb">Iteration / Iterables</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_mapnodes.ipynb">MapNodes</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_function_nodes.ipynb">Function Nodes</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_joinnodes.ipynb">JoinNodes</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_model_specification.ipynb">Model Specification</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_import_workflows.ipynb">Import existing Workflows</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_plugins.ipynb">Execution Plugins</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_configuration.ipynb">Execution Configuration</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/basic_error_and_crashes.ipynb">Errors & Crashes</a>
# </div>
# <p>This section will introduce you to all of the key players in Nipype. Basic concepts that you need to learn to
# fully understand and appreciate Nipype. Once you understand this section, you will know all that you need to know
# to create any kind of Nipype workflow.</p>
#
# <h2 class="domain-header science"><a class="domain-title">Workflow Examples</a></h2>
# <div class="pure-g domain-table-container science">
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/example_preprocessing.ipynb">Preprocessing</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/example_1stlevel.ipynb">1st-level Analysis</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/example_normalize.ipynb">Normalize Data</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/example_2ndlevel.ipynb">2nd-level Analysis</a>
# </div>
# <p>In this section you will find some practical examples that show you how to use Nipype in a "real world" scenario.</p>
#
# <h2 class="domain-header test-prep"><a class="domain-title">Useful Resources & Links</a></h2>
# <div class="pure-g domain-table-container test-prep">
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/resources_installation.ipynb">Install Nipype</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/resources_resources.ipynb">Useful Resources & Links</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/resources_help.ipynb">Where to find Help</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="notebooks/resources_python_cheat_sheet.ipynb">Python Cheat Sheet</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="http://nipype.readthedocs.io/en/latest/">Nipype (main homepage)</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="https://miykael.github.io/nipype-beginner-s-guide/">Nipype Beginner's Guide</a>
# <a class="subject-link pure-u-1-4" target="_blank" href="https://github.com/miykael/nipype_tutorial">Github of Nipype Tutorial</a>
# </div>
# <p>This section will give you helpful links and resources, so that you always know where to go to learn more.</p>
#
# </div>
# </article>
# </body>
#
# <!--The following code will cause the code cell to disappear-->
#
# <script>
# code_show=true;
# function code_toggle() {
# if (code_show){
# $('div.input').hide();
# } else {
# $('div.input').show();
# }
# code_show = !code_show
# }
# $( document ).ready(code_toggle);
# </script>
#
# <hr/>
#
# <h2>You want to help with this tutorial?</h2>
# <p>Find the github repo of this tutorial under <a href="https://github.com/miykael/nipype_tutorial">https://github.com/miykael/nipype_tutorial</a>.
# Feel free to send a pull request or leave an <a href="https://github.com/miykael/nipype_tutorial/issues">issue</a> with your feedback or ideas.
# </p>
# To inspect the html code of this page, click: <form action="javascript:code_toggle()"><input type="submit" value="Show HTML code"></form>
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
class read_data():
def __init__(self,data_directory, batch_size):
self.data_directory = data_directory
self.batch_size = batch_size
def next_batch(self):
return np.random.rand(4,32,64,64,3), np.random.rand(4,32,64,64,3)
# return X, y as per batch size ...
# infinite batch generation
def val_batch_init(self):
pass
# reset validation iterator
def val_next_batch(self):
return np.random.rand(4,32,64,64,3), np.random.rand(4,32,64,64,3)
# +
# TensorFlow Model !
import os
import shutil
import tensorflow as tf
from cell import ConvLSTMCell
class conv_lstm_model():
def __init__(self):
        # Run when you're in trouble ... !
tf.reset_default_graph()
"""Parameter initialization"""
self.batch_size = 4 #128
self.timesteps = 32
self.shape = [64, 64] # Image shape
self.kernel = [3, 3]
self.channels = 3
self.filters = [32,128,32,3] # 4 stacked conv lstm filters
# Create a placeholder for videos.
self.inputs = tf.placeholder(tf.float32, [self.batch_size, self.timesteps] + self.shape + [self.channels]) # (batch_size, timestep, H, W, C)
self.outputs_exp = tf.placeholder(tf.float32, [self.batch_size, self.timesteps] + self.shape + [self.channels] ) # (batch_size, timestep, H, W, C)
# model output
self.model_output = None
# loss
self.l2_loss = None
# optimizer
self.optimizer = None
def create_model(self):
cells = []
for i, each_filter in enumerate(self.filters):
cell = ConvLSTMCell(self.shape, each_filter, self.kernel)
cells.append(cell)
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)
states_series, current_state = tf.nn.dynamic_rnn(cell, self.inputs, dtype=self.inputs.dtype)
# current_state => Not used ...
self.model_output = states_series
def loss(self):
frames_difference = tf.subtract(self.outputs_exp, self.model_output)
batch_l2_loss = tf.nn.l2_loss(frames_difference)
# divide by batch size ...
l2_loss = tf.divide(batch_l2_loss, float(self.batch_size))
self.l2_loss = l2_loss
def optimize(self):
train_step = tf.train.AdamOptimizer().minimize(self.l2_loss)
self.optimizer = train_step
def build_model(self):
self.create_model()
self.loss()
self.optimize()
log_dir_file_path = "../logs/"
model_save_file_path = "../checkpoint/"
checkpoint_iterations = 100
best_model_iterations = 25
best_l2_loss = float("inf")
iterations = "iterations/"
best = "best/"
def log_directory_creation():
if tf.gfile.Exists(log_dir_file_path):
tf.gfile.DeleteRecursively(log_dir_file_path)
tf.gfile.MakeDirs(log_dir_file_path)
# model save directory
if os.path.exists(model_save_file_path):
shutil.rmtree(model_save_file_path)
os.makedirs(model_save_file_path+iterations)
os.makedirs(model_save_file_path+best)
def save_model_session(sess,file_name):
saver = tf.train.Saver()
save_path = saver.save(sess, model_save_file_path+file_name+".ckpt")
def restore_model_session(sess,file_name):
saver = tf.train.Saver()
saver.restore(sess, model_save_file_path+file_name+".ckpt")
def train():
global best_l2_loss
# clear logs !
log_directory_creation()
# data read iterator
data = read_data("../data/UCF-101/",128)
# conv lstm model
model = conv_lstm_model()
model.build_model()
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
sess = tf.InteractiveSession()
sess.run(init)
# Tensorflow Summary
tf.summary.scalar("train_l2_loss",model.l2_loss)
summary_merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(log_dir_file_path+"/train", sess.graph)
test_writer = tf.summary.FileWriter(log_dir_file_path+"/test", sess.graph)
global_step=0
while True:
X_batch, y_batch = data.next_batch()
_, summary = sess.run([model.optimizer, summary_merged], feed_dict={model.inputs: X_batch, model.outputs_exp: y_batch})
train_writer.add_summary(summary,global_step)
global_step += 1
if global_step%checkpoint_iterations==0:
save_model_session(sess,iterations+"conv_lstm_model")
if global_step%best_model_iterations==0:
data.val_batch_init()
val_l2_loss_history = list()
# iterate on validation batch ...
# for X_val, y_val in data.val_next_batch():
X_val, y_val = data.val_next_batch()
test_summary, val_l2_loss = sess.run([summary_merged, model.l2_loss], feed_dict={model.inputs: X_val, model.outputs_exp: y_val})
test_writer.add_summary(test_summary,global_step)
val_l2_loss_history.append(val_l2_loss)
temp_loss = sum(val_l2_loss_history) * 1.0 / len(val_l2_loss_history)
# save if better !
if best_l2_loss > temp_loss:
best_l2_loss = temp_loss
save_model_session(sess,best+"conv_lstm_model")
if global_step%100==0:
print ("Iteration ",global_step, " best_l2_loss ", best_l2_loss)
train_writer.close()
test_writer.close()
# -
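# As a sanity check on the loss defined above: `tf.nn.l2_loss(t)` computes `sum(t ** 2) / 2`, and `loss()` then divides by the batch size. The NumPy sketch below mirrors that arithmetic on toy tensors (the shapes here are illustrative stand-ins, not the real video batches):

```python
import numpy as np

batch_size = 4
# Toy stand-ins for outputs_exp and model_output.
expected = np.ones((batch_size, 2, 4, 4, 3))
predicted = np.zeros((batch_size, 2, 4, 4, 3))

diff = expected - predicted
l2 = 0.5 * np.sum(diff ** 2)          # mirrors tf.nn.l2_loss: sum(t**2) / 2
per_example_loss = l2 / batch_size    # the division done in loss()
```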
train()
| notebooks/Conv_LSTM_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NumPy Array Basics - Querying, Slicing, Combining, and Splitting Arrays
import sys
print(sys.version)
import numpy as np
print(np.__version__)
# I'll be covering querying, slicing, combining, and splitting arrays. Now this information is really important because it will consistently come up when we’re working in pandas. Overall it’s a pretty simple idea and it’s fairly declarative.
np.random.seed(10)
#
# First let’s generate some random arrays. We’ll generate some that are 1 dimension, 2 dimensional, and 3 dimensional.
#
ar = np.arange(12)
ar
ar2 = np.random.randint(1, 13, size=12)  # randint replaces the deprecated random_integers; its upper bound is exclusive
ar2
ndim_ar = np.arange(12).reshape(3,4)
ndim_ar
ndim_ar2 = np.random.randint(1, 13, size=(3,4))
ndim_ar2
ndim_ar3d = np.arange(8).reshape(2,2,2)
ndim_ar3d
#
# Querying 1 dimensional arrays is easy, we just perform the lookup like we would a regular array.
#
ar[5]
ar[5:]
ar[1:6:2]
ar[-1:-6:-2]
ndim_ar
# Querying 2 dimensional arrays is a bit more interesting: we use commas to separate the axes.
#
ndim_ar[:,1:3]
ndim_ar[1:3,:]
ndim_ar[1:3,1:3]
ndim_ar3d
#
# Things get even more interesting with 3+ dimensions. Obviously it's a lot to keep track of in your head, but we just go dimension by dimension.
#
# We’ll get the first dimension, then all the items in the second dimension, then everything beyond the first item in the 3rd dimension.
#
ndim_ar3d[0,:,1:]
#
# Now that we’ve got some experience querying, let’s go over combining different arrays.
#
# Now let’s stack the first two arrays vertically; we’ll do that with vstack. We can do the same with multidimensional arrays.
#
np.vstack((ar,ar2))
np.vstack((ndim_ar, ndim_ar2))
#
# Now you can probably guess what horizontal stacking is. That would be hstack.
#
#
np.hstack((ndim_ar, ndim_ar2))
ar3 = np.hstack((ar,ar2))
ar3
ar
# Of course we aren’t limited to two arrays, we can stack as many as we like.
#
# We can also use concatenate to join them together. We can specify the axis to do so.
np.concatenate((ar,ar2))
# We can stack them depth-wise with dstack. Now we’ve got a 3-dimensional join of these two 2-dimensional arrays.
#
ndim_ar3 = np.concatenate((ndim_ar,ndim_ar2), axis=0)
ndim_ar3
np.concatenate((ndim_ar,ndim_ar2), axis=1)
ndim_ar3d = np.dstack((ndim_ar,ndim_ar2))
ndim_ar3d
# We can split them back with the dsplit command. This gives us back our original arrays.
ndim_ar3d_split = np.dsplit(ndim_ar3d, 2)
ndim_ar3d_split
ndim_ar3d_split[0].flatten()
ndim_ar3d_split[1].flatten()
ar
# We can do the same with hsplit and vsplit.
#
np.hsplit(ar3,2)
ndim_ar3
np.vsplit(ndim_ar3,2)
# Now that's all that I wanted to cover. I will mention that there is a dedicated matrix type, but going over it is outside the scope of this tutorial.
#
#
# At this point we’ve covered a lot of numpy. Now I’m sure you’re thinking this isn’t quite applied, but a lot of this functionality will be embedded into pandas, so it’s great to review.
| Data_Analysis_with_Pandas/01-Numpy Basics/1-5 NumPy Array Basics - Querying, Slicing, Combining, and Splitting Arrays.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # vis_using_node_titles_html
#
# The vis.js website provides a 'getting started' example for its Network tool: http://visjs.org/docs/network/
#
# This notebook looks at recreating this in Jupyter notebooks, and adding node titles which display when a node is hovered over.
# ## Using an external html file with local links
from IPython.display import IFrame
IFrame('vis_using_node_titles_local_links.html',650,450)
# ## Using an external file with web links
IFrame('vis_using_node_titles_web_links.html',650,450)
# This also works and it renders on nbviewer.
#
#
| vis_using_node_titles/vis_using_node_titles_html.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [LEGALST-123] Lab 03: Dataframe Operations and Simple Visualizations
# +
# install required libraries
# # !pip install numpy
# # !pip install pandas
# import libraries
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ## Data Cleaning
#
# We are going to be using the ANES data set, which contains survey responses regarding political attitudes from before and after the 2016 presidential election.
anes_raw = pd.read_csv('../data/anes/ANES_legalst123.csv')
anes_raw.head()
# ### Renaming Columns
# Wow that's a lot of data! You may have noticed that our column names are not very informative. This means we need to clean our data before we can use it.
#
# First, let's rename our columns so that it is more clear what kind of information they contain. To do this we can use the `rename` method for `pandas` dataframes. You can find the documentation for this method [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html).
#
# For more basic statistics and more information about what these column names mean, please refer to the codebook.
# Dictionary to pass to `rename` mapping the old name to the new name.
# Feel free to look through this to see the kind of attitudes this survey aimed to capture.
# If in the future you'd like to go back to the codebook for more information about what a column name
# means, you can use this as a reference to find the column ID in the codebook.
new_names = {
"V160101f": "pre_election_weight_ftf",
"V160102f": "post_election_weight_ftf",
"V161024x": "pre_voting_status",
"V161140x": "pre_economy_last_year",
"V161158x": "pre_party_id",
"V161188": "pre_gun_access_importance",
"V161192": "pre_unauthorized_immigrants",
"V161194x": "pre_birthright_citizenship",
"V161198": "pre_govt_assist_to_blacks",
"V161204x": "pre_affirmative_action",
"V161208": "pre_crime_budget",
"V161209": "pre_welfare_budget",
"V161210": "pre_childcare_budget",
"V161211": "pre_aid_to_poor_budget",
"V161212": "pre_environment_budget",
"V161213x": "pre_troops_to_fight_isis",
"V161214x": "pre_syrian_refugees",
"V161215": "pre_trust_washington",
"V161216": "pre_interests_of_few_or_many",
"V161217": "pre_govt_waste_tax_money",
"V161218": "pre_govt_corruption",
"V161219": "pre_are_people_trustworthy",
"V161220": "pre_govt_attention",
"V161221": "pre_global_warming",
"V161225x": "pre_govt_action_rising_temp",
"V161227x": "pre_govt_services_same_sex_couples",
"V161228x": "pre_transgender_policy",
"V161229x": "pre_lgbt_protection_laws",
"V161231": "pre_gay_marriage",
"V161232": "pre_abortion",
"V161233x": "pre_death_penalty",
"V161235x": "pre_economy_since_2008",
"V161236": "pre_angry_at_obama",
"V161237": "pre_proud_of_obama",
"V161241": "pre_religion_importance",
"V161242": "pre_religion_provides_guidance",
"V161243": "pre_bible_word_of_god_or_men",
"V161244": "pre_attend_religious_services",
"V161245": "pre_how_often_religious_services",
"V161267x": "pre_age_group",
"V161268": "pre_marital_status",
"V161270": "pre_education_level",
"V161276x": "pre_occupation_status",
"V161310x": "pre_race",
"V161316": "pre_place_of_birth",
"V161324": "pre_children_in_household",
"V161326": "pre_home_internet_use",
"V161327": "pre_cell_or_landline",
"V161331x": "pre_length_in_current_community",
"V161334": "pre_home_ownership",
"V161340": "pre_unexpired_passport",
"V161342": "pre_gender",
"V161343": "pre_roughing_up_protestors",
"V161344": "pre_justified_use_of_violence",
"V161345": "pre_feminist",
"V161361x": "pre_income",
"V161362": "pre_political_correctness",
"V161496": "pre_gun_ownership",
"V161507": "pre_sexist_remarks",
"V161508": "pre_women_appreciating_men",
"V161509": "pre_women_power_over_men",
"V161510": "pre_men_on_leash",
"V161511": "pre_sexual_orientation",
"V161515": "pre_party_representation_house",
"V161516": "pre_party_representation_senate",
"V161522": "pre_general_satisfaction",
"V162010": "pre_talk_about_voting",
"V162011": "pre_political_meetings",
"V162012": "pre_political_visibility",
"V162013": "pre_work_for_party",
"V162014": "pre_monetary_contribution_to_party",
"V162014a": "pre_party_contributed_to",
"V162016": "post_monetary_contribution_to_party",
"V162016a": "post_party_contributed_to",
"V162018a": "post_protest_participation",
"V162018b": "post_signed_petition",
"V162018c": "post_give_money_to_relig_org",
"V162018d": "post_give_money_to_soc_pol_org",
"V162018e": "post_social_media_political_message",
"V162019": "post_contact_representative",
"V162030x": "post_party_registration",
"V162031x": "post_voted_in_2016",
"V162062x": "post_pres_vote_admin",
"V162066x": "post_strength_of_vote",
"V162067x": "post_house_vote",
"V162068x": "post_senate_vote",
"V162069x": "post_governor_vote",
"V162078": "post_clinton_rating",
"V162079": "post_trump_rating",
"V162095": "post_christian_fundamentalist_rating",
"V162096": "post_feminist_rating",
"V162097": "post_liberal_rating",
"V162098": "post_labor_unions_rating",
"V162099": "post_poor_people_rating",
"V162100": "post_big_business_rating",
"V162101": "post_conservative_rating",
"V162102": "post_supreme_court_rating",
"V162103": "post_lgbt_rating",
"V162104": "post_congress_rating",
"V162105": "post_rich_people_rating",
"V162106": "post_muslims_rating",
"V162107": "post_christians_rating",
"V162108": "post_jews_rating",
"V162109": "post_tea_party_rating",
"V162110": "post_police_rating",
"V162111": "post_transgender_rating",
"V162112": "post_scientists_rating",
"V162113": "post_blm_rating",
"V162123": "post_world_like_america",
"V162125x": "post_american_flag",
"V162136x": "post_economic_mobility",
"V162147x": "post_vaccines",
"V162150x": "post_gender_income_equality",
"V162157": "post_immigration_levels",
"V162158": "post_immigration_takes_away_jobs",
"V162160": "post_worry_terrorist_attack",
"V162168": "post_need_free_thinkers",
"V162169": "post_rotten_apples",
"V162170": "post_strong_leader",
"V162171": "post_liberal_conservative",
"V162174a": "post_discuss_politics",
"V162178": "post_wiretaps",
"V162179": "post_marijuana",
"V162180x": "post_bank_regulation",
"V162188x": "post_trump_towards_women",
"V162191a": "post_which_is_conservative_party",
"V162193x": "post_healthcare_spending",
"V162207": "post_attitude_toward_changing_world",
"V162209": "post_tolerate_other_morals",
"V162210": "post_trad_values",
"V162211": "post_no_favors_for_blacks",
"V162212": "post_slavery_impact",
"V162213": "post_blacks_deserve_more",
"V162214": "post_blacks_should_try_harder",
"V162229x": "post_bond_with_child",
"V162230x": "post_man_works",
"V162231x": "post_women_discrimination",
"V162238x": "post_preferential_hiring",
"V162239": "post_child_indep_respect",
"V162240": "post_child_curiosity_manners",
"V162241": "post_child_obedience_self_reliance",
"V162242": "post_child_considerate_behave",
"V162254": "post_govt_knew_9_11",
"V162255x": "post_obama_muslim",
"V162262": "post_politicians_are_problem",
"V162263": "post_strong_leader_bend_rules",
"V162266": "post_minorities_should_adapt",
"V162268": "post_immigrants_good_for_economy",
"V162269": "post_immigrants_harm_culture",
"V162270": "post_immigrants_increase_crime",
"V162271": "post_truly_american_us_born",
"V162272": "post_truly_american_us_ancestry",
"V162273": "post_truly_american_speak_english",
"V162274": "post_truly_american_follow_trad",
"V162290": "post_satisfied_with_democracy",
"V162310": "post_asian_american_feeling_therm",
"V162311": "post_hispanics_feeling_therm",
"V162312": "post_blacks_feeling_therm",
"V162313": "post_illegal_imm_feeling_therm",
"V162314": "post_whites_feeling_therm",
"V162316": "post_whites_work_together",
"V162317": "post_hiring_minorities",
"V162318": "post_govt_treatment_whites_blacks",
"V162319": "post_govt_treatment_degree",
"V162320": "post_police_treatment_whites_blacks",
"V162321": "post_police_treatment_degree",
"V162322": "post_white_influence",
"V162323": "post_black_influence",
"V162324": "post_hispanic_influence",
"V162325": "post_asian_influence",
"V162345": "post_whites_hardworking",
"V162346": "post_blacks_hardworking",
"V162347": "post_hispanics_hardworking",
"V162348": "post_asians_hardworking",
"V162349": "post_whites_violent",
"V162350": "post_blacks_violent",
"V162351": "post_hispanics_violent",
"V162352": "post_asians_violent",
"V162353": "post_muslims_violent",
"V162354": "post_christians_violent",
"V162355": "post_muslims_patriotic",
"V162356": "post_christians_patriotic",
"V162357": "post_discrim_blacks",
"V162358": "post_discrim_hispanics",
"V162359": "post_discrim_asians",
"V162360": "post_discrim_whites",
"V162361": "post_discrim_lgbt",
"V162362": "post_discrim_women",
"V162363": "post_discrim_men",
"V162364": "post_discrim_muslim",
"V162365": "post_discrim_christian",
"V162366": "post_discrim_transgender",
"V162367": "post_discrim_personal",
"V162368": "post_skintone",
"V162369": "post_discrim_skintone",
"V168112": "post_inform_level",
"V168113": "post_intelligence"
}
anes = anes_raw.rename(index = str, columns=new_names)
anes.head()
# ### Null Values
# Our data also contain a lot of missing values. The creators of the survey wanted a way to encode different reasons why a value is missing by assigning different reasons to different negative numbers. However, for our purposes, we would just like to know if a value is missing, so we can just replace these with `np.NaN`. `NaN` stands for "not a number," and is just a handy way for us to indicate missing values.
anes[anes < 0] = np.nan
anes[anes > 100] = np.nan
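# As a sketch of what this masking does (toy data below, not the ANES file): comparing the whole frame against a scalar builds a boolean mask, and assigning `np.nan` through it blanks every flagged cell in one step.

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the survey's negative missing-value codes.
toy = pd.DataFrame({"rating": [55, -9, 30, -1, 120]})
toy[toy < 0] = np.nan     # negative codes -> missing
toy[toy > 100] = np.nan   # out-of-range values -> missing

n_missing = int(toy["rating"].isna().sum())
```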
# ## Groupby and Summary Statistics
# With data this large, it's often difficult to know where to start looking. A handy place to start is looking at individual columns and getting some basic information about how different variables interact. Groupby operations generally follow a similar format: first `groupby` your category (or categories) of interest, then select columns to compare, and finally apply `agg`regator function(s). Here are some examples of what this looks like:
# +
# First, we need to change the values from the survey data from numbers into
# more easily understandable information. Here is an example of how you can do
# this using the pd.Series.map function. You will need to refer to the
# codebook to find out what the values mean. If you need to change the values
# of any other columns, you can use this function.
def change_values(column, new_values):
anes[column] = anes[column].map(new_values, na_action="ignore")
parties = {
1.0: "dem",
2.0: "rep",
3.0: "indep",
4.0: "other"
}
change_values("post_party_registration", parties)
# -
# Grouping by a single column, then performing multiple agg functions on multiple columns
demographics = anes.groupby("post_party_registration")[["post_whites_violent",
"post_blacks_violent",
"post_muslims_violent",
"post_christians_violent"]].agg(["mean", "std"])
demographics
# From this simple groupby, we can see that the different political parties, on average, do have different attitudes about different racial and religious groups.
# +
# Grouping by multiple columns. Here, the selected column is a dummy column that
# I know contains no NA values so that our aggregator function will count the
# full size of each group.
support_marijuana = {
1.0: "support",
2.0: "oppose",
3.0: "neither"
}
change_values("post_marijuana", support_marijuana)
party_by_race = anes.groupby(["post_party_registration", "post_marijuana"])["pre_voting_status"].agg(['count'])
party_by_race
# -
# From this table, we can see that different parties do have generally different opinions regarding the legalization of marijuana.
# As you can see, you can group by multiple columns, select multiple columns to aggregate on, and even use multiple aggregator functions. More information can be found in the documentation for [`.groupby`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) and [`.agg`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html).
#
# Now see if you can use these methods to do your own groupby operation on how the different parties viewed crime spending before the election. Refer back to the dictionary above if you are having trouble finding the name of the column you need. You will need to change the values in the column in order to have easily interpretable results.
# +
# EXAMPLE
crime = {
1.0: "increase",
2.0: "decrease",
3.0: "same"
}
change_values("pre_crime_budget", crime)
anes.groupby(["post_party_registration", "pre_crime_budget"])["pre_voting_status"].agg(["count"])
# -
# ## Visualizations
# ### Histograms
# Histograms are a nifty way to display quantitative information. The x-axis is typically a quantitative variable of interest, and the y-axis is generally a frequency. Plot a histogram of the losses, and then experiment with the bin sizes.
anes.hist('post_rich_people_rating', bins=range(0,100,10));
anes.hist('post_rich_people_rating', bins=range(0,100,1));
# #### Question 1: Histograms
# What happens as you increase the number of bins to 100? Does having too many bins hinder interpretation?
# As the bin number increases, bins get smaller. Since this variable is a respondent generated rating, many of the responses cluster around nonrandom values (i.e. 50/100), and we get a few very tall bars with a lot of very short bars. In this case, having fewer bins allows us to more easily interpret the data by minimizing the effect of numbers that are more likely to be chosen.
# ### Scatter Plots
# Scatter plots are generally used to relate two variables to one another. They can be useful when trying to infer relationships between variables, visualize simple regressions, and get a general sense of the "spread" of your data. Run the following code to plot each individual's response about whether minorities should adapt to American culture against the response about whether speaking English is required to be "truly American."
# +
# Since our data is all categorical, an unaltered scatterplot would have many overlapping points
# that would mask the density of our data. In order to see all the points so we can better
# understand the distribution of our data, we can use a technique called jittering, where the
# values are all adjusted slightly by a random amount so that they no longer overlap. It's ok if
# you don't understand what this code is doing!
x = [i + np.random.normal(scale = 0.25) for i in anes["post_minorities_should_adapt"]]
y = [i + np.random.normal(scale = 0.25) for i in anes["post_truly_american_speak_english"]]
plt.scatter(x, y, alpha=0.5)
plt.xlabel("post_minorities_should_adapt")
plt.ylabel("post_truly_american_speak_english");
# -
# #### Question 2: Scatterplots
# Looking at the scatterplot above, what can you infer about attitudes towards immigrants? What other variables might you want to compare to corroborate this?
# In general, the majority of the clustering is around values of 0 for both variables, meaning that most respondents do not strongly agree with the statements. Additionally, the density of points along the x-axis is more spread out, which indicates that respondents in general agree more with minorities needing to adapt to majority culture than with minorities needing to speak English to be truly American. Very few respondents thought that minorities should speak English but do not need to adapt to the majority culture, while a moderate number of respondents agreed with the reverse.
# ### Boxplots
# Boxplots can be used to get a general idea of the spread of your data, and are especially useful if you need to compare across more than two categories. For example, if we want to look at how different political parties perceive different groups, we can use a boxplot to easily construct these comparisons.
anes.boxplot(column=["post_illegal_imm_feeling_therm", "post_christians_rating"], by="post_party_registration")
plt.suptitle('');
# #### Question 3: Boxplots
# In the past few sections we have seen some ways that a lack of continuity in our data can affect the visualizations we produce. Is it OK to use data we know are discrete in these boxplots? Why or why not?
# For these box plots, we are just getting a general idea of the perceptions the different parties have about the different groups. Even though our data is neither continuous nor nicely distributed, we can still get an idea of the differences between parties using this data.
# ### Practice with Plots
# Practice on your own! Try plotting a visualization of some variables that you find interesting in the data, then interpret them. If you're feeling ambitious, try creating a graph that we haven't described here by looking at the [matplotlib](https://matplotlib.org/gallery/index.html) documentation!
# EXAMPLE
anes.boxplot(column=["post_discrim_lgbt", "post_discrim_transgender"], by="post_party_registration")
plt.suptitle('');
# Generally, Democrats and others believe that LGBT and transgender individuals are victims of discrimination to a greater degree than Republicans do. We can also see the effect that our discrete data have on these visualizations, because in some of our boxes the median and the edge of a quartile are in the same place.
# ## Lopsided Distributions
# With symmetrical data, we expect measures of central tendency (mean, median, and mode) to all overlap. When data are not distributed symmetrically, we often say that the data is _skewed right_ (right tailed) or _skewed left_ (left-tailed), and the mean, median, and mode typically do not overlap.
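# A quick numeric illustration of that claim, using only the standard library on a small left-skewed toy sample (invented numbers, not ANES data): most values pile up at the high end, and the long low tail pulls the mean below the median.

```python
from statistics import mean, median, mode

sample = [1, 2, 4, 6, 6, 7, 7, 7, 7, 7]   # long tail toward low values

m, med, mo = mean(sample), median(sample), mode(sample)
# For a left skew: mean < median, and the mode sits at the high end.
```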
mobility = anes["post_economic_mobility"]
mobility.hist(bins = np.arange(8))
plt.axvline(x = mobility.mean(), color="red", label = "mean")
plt.axvline(x = mobility.median(), color="green", label = "median")
plt.axvline(x = mobility.mode()[0], color="yellow", label = "mode")
plt.legend();
# Notice that this variable is skewed left, and the mean has been pulled to the left when compared to the median. Since this variable has a particularly extreme skew, the mode is at the extreme right of the graph and is not a particularly good measure of central tendency.
# Sometimes, just looking at the histogram can hide trends. For instance, look at the histogram of the column asking how often the respondent felt proud of President Obama. Note that 1 = "Never" and 5 = "Always".
proud = anes["pre_proud_of_obama"]
proud.hist(bins=np.arange(1, 6))
plt.axvline(x = proud.mean(), color="red", label = "mean")
plt.axvline(x = proud.median(), color="green", label = "median")
plt.axvline(x = proud.mode()[0], color="yellow", label = "mode")
plt.legend();
# There doesn't appear to be a significant trend based on this histogram. However, this particular example should make you itch to do a groupby on party affiliation to see if there are any trends we can tease out.
anes.boxplot(column="pre_proud_of_obama", by="post_party_registration")
plt.suptitle("");
# The green line in each box shows the median rating for that party. As expected, the distribution of this variable is very different when different political parties are considered separately. Plotting Democrat and Republican responses in the same histogram makes this even clearer.
proud_dem = anes[anes["post_party_registration"] == "dem"]["pre_proud_of_obama"]
proud_rep = anes[anes["post_party_registration"] == "rep"]["pre_proud_of_obama"]
plt.hist([proud_dem, proud_rep], color=["blue", "green"], range=(1, 6), label = ["dem", "rep"], rwidth=1)
plt.legend();
# ----------
# You're all done! The plotting functions we used today all have many different parameters that can be adjusted to create different-looking graphs. If you have some time, try playing around with these functions and see what kind of graphs you can make!
| labs/03_Dataframe Operations and Simple Visualizations/03_Dataframe_Operations_and_Simple_Visualizations_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
# !pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
from fastbook import *
# + active=""
# [[chapter_multicat]]
# -
# # Other Computer Vision Problems
# In the previous chapter you learned some important practical techniques for training models in practice. Considerations like selecting learning rates and the number of epochs are very important to getting good results.
#
# In this chapter we are going to look at two other types of computer vision problems: multi-label classification and regression. The first one is when you want to predict more than one label per image (or sometimes none at all), and the second is when your labels are one or several numbers—a quantity instead of a category.
#
# In the process we will study more deeply the output activations, targets, and loss functions in deep learning models.
# ## Multi-Label Classification
# Multi-label classification refers to the problem of identifying the categories of objects in images that may not contain exactly one type of object. There may be more than one kind of object, or there may be no objects at all in the classes that you are looking for.
#
# For instance, this would have been a great approach for our bear classifier. One problem with the bear classifier that we rolled out in <<chapter_production>> was that if a user uploaded something that wasn't any kind of bear, the model would still say it was either a grizzly, black, or teddy bear—it had no ability to predict "not a bear at all." In fact, after we have completed this chapter, it would be a great exercise for you to go back to your image classifier application, and try to retrain it using the multi-label technique, then test it by passing in an image that is not of any of your recognized classes.
#
# In practice, we have not seen many examples of people training multi-label classifiers for this purpose—but we very often see both users and developers complaining about this problem. It appears that this simple solution is not at all widely understood or appreciated! Because in practice it is probably more common to have some images with zero matches or more than one match, we should probably expect in practice that multi-label classifiers are more widely applicable than single-label classifiers.
#
# First, let's see what a multi-label dataset looks like, then we'll explain how to get it ready for our model. You'll see that the architecture of the model does not change from the last chapter; only the loss function does. Let's start with the data.
# ### The Data
# For our example we are going to use the PASCAL dataset, which can have more than one kind of classified object per image.
#
# We begin by downloading and extracting the dataset as per usual:
from fastai.vision.all import *
path = untar_data(URLs.PASCAL_2007)
# This dataset is different from the ones we have seen before, in that it is not structured by filename or folder but instead comes with a CSV (comma-separated values) file telling us what labels to use for each image. We can inspect the CSV file by reading it into a Pandas DataFrame:
df = pd.read_csv(path/'train.csv')
df.head()
# As you can see, the list of categories in each image is shown as a space-delimited string.
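# For instance, a label cell like "chair diningtable" can be split into a list of tags with a plain string split (a toy row below, not the actual PASCAL CSV):

```python
import pandas as pd

toy = pd.DataFrame({"fname": ["000005.jpg"], "labels": ["chair diningtable"]})
tags = toy["labels"].str.split(" ")   # Series of lists of tags
```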
# ### Sidebar: Pandas and DataFrames
# No, it’s not actually a panda! *Pandas* is a Python library that is used to manipulate and analyze tabular and time series data. The main class is `DataFrame`, which represents a table of rows and columns. You can get a DataFrame from a CSV file, a database table, Python dictionaries, and many other sources. In Jupyter, a DataFrame is output as a formatted table, as shown here.
#
# You can access rows and columns of a DataFrame with the `iloc` property, as if it were a matrix:
df.iloc[:,0]
df.iloc[0,:]
# Trailing :s are always optional (in numpy, pytorch, pandas, etc.),
# so this is equivalent:
df.iloc[0]
# You can also grab a column by name by indexing into a DataFrame directly:
df['fname']
# You can create new columns and do calculations using columns:
tmp_df = pd.DataFrame({'a':[1,2], 'b':[3,4]})
tmp_df
tmp_df['c'] = tmp_df['a']+tmp_df['b']
tmp_df
# Pandas is a fast and flexible library, and an important part of every data scientist’s Python toolbox. Unfortunately, its API can be rather confusing and surprising, so it takes a while to get familiar with it. If you haven’t used Pandas before, we’d suggest going through a tutorial; we are particularly fond of the book [*Python for Data Analysis*](http://shop.oreilly.com/product/0636920023784.do) by Wes McKinney, the creator of Pandas (O'Reilly). It also covers other important libraries like `matplotlib` and `numpy`. We will try to briefly describe Pandas functionality we use as we come across it, but will not go into the level of detail of McKinney’s book.
# ### End sidebar
# Now that we have seen what the data looks like, let's make it ready for model training.
# ### Constructing a DataBlock
# How do we convert from a `DataFrame` object to a `DataLoaders` object? We generally suggest using the data block API for creating a `DataLoaders` object, where possible, since it provides a good mix of flexibility and simplicity. Here we will show you the steps that we take to use the data blocks API to construct a `DataLoaders` object in practice, using this dataset as an example.
#
# As we have seen, PyTorch and fastai have two main classes for representing and accessing a training set or validation set:
#
# - `Dataset`:: A collection that returns a tuple of your independent and dependent variable for a single item
# - `DataLoader`:: An iterator that provides a stream of mini-batches, where each mini-batch is a tuple of a batch of independent variables and a batch of dependent variables
# On top of these, fastai provides two classes for bringing your training and validation sets together:
#
# - `Datasets`:: An object that contains a training `Dataset` and a validation `Dataset`
# - `DataLoaders`:: An object that contains a training `DataLoader` and a validation `DataLoader`
#
# Since a `DataLoader` builds on top of a `Dataset` and adds additional functionality to it (collating multiple items into a mini-batch), it’s often easiest to start by creating and testing `Datasets`, and then look at `DataLoaders` after that’s working.
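# The split of responsibilities above can be sketched in plain Python. These are illustrative stand-ins, not the real fastai/PyTorch classes: a dataset returns one `(x, y)` tuple per index, and a dataloader collates consecutive items into mini-batches.

```python
class TinyDataset:
    # returns a tuple of independent and dependent variable for one item
    def __init__(self, xs, ys): self.xs, self.ys = xs, ys
    def __len__(self): return len(self.xs)
    def __getitem__(self, i): return self.xs[i], self.ys[i]

def tiny_dataloader(ds, bs):
    # collate consecutive items into mini-batches of (batch of xs, batch of ys)
    for i in range(0, len(ds), bs):
        items = [ds[j] for j in range(i, min(i + bs, len(ds)))]
        yield [x for x, _ in items], [y for _, y in items]
```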
# When we create a `DataBlock`, we build up gradually, step by step, and use the notebook to check our data along the way. This is a great way to make sure that you maintain momentum as you are coding, and that you keep an eye out for any problems. It’s easy to debug, because you know that if a problem arises, it is in the line of code you just typed!
#
# Let’s start with the simplest case, which is a data block created with no parameters:
dblock = DataBlock()
# We can create a `Datasets` object from this. The only thing needed is a source—in this case, our DataFrame:
dsets = dblock.datasets(df)
# This contains a `train` and a `valid` dataset, which we can index into:
len(dsets.train),len(dsets.valid)
x,y = dsets.train[0]
x,y
# As you can see, this simply returns a row of the DataFrame, twice. This is because by default, the data block assumes we have two things: input and target. We are going to need to grab the appropriate fields from the DataFrame, which we can do by passing `get_x` and `get_y` functions:
x['fname']
dblock = DataBlock(get_x = lambda r: r['fname'], get_y = lambda r: r['labels'])
dsets = dblock.datasets(df)
dsets.train[0]
# As you can see, rather than defining a function in the usual way, we are using Python’s `lambda` keyword. This is just a shortcut for defining and then referring to a function. The following more verbose approach is identical:
def get_x(r): return r['fname']
def get_y(r): return r['labels']
dblock = DataBlock(get_x = get_x, get_y = get_y)
dsets = dblock.datasets(df)
dsets.train[0]
# Lambda functions are great for quickly iterating, but they are not compatible with serialization, so we advise you to use the more verbose approach if you want to export your `Learner` after training (lambdas are fine if you are just experimenting).
# We can see that the independent variable will need to be converted into a complete path, so that we can open it as an image, and the dependent variable will need to be split on the space character (which is the default for Python’s `split` function) so that it becomes a list:
def get_x(r): return path/'train'/r['fname']
def get_y(r): return r['labels'].split(' ')
dblock = DataBlock(get_x = get_x, get_y = get_y)
dsets = dblock.datasets(df)
dsets.train[0]
# To actually open the image and do the conversion to tensors, we will need to use a set of transforms; block types will provide us with those. We can use the same block types that we have used previously, with one exception: the `ImageBlock` will work fine again, because we have a path that points to a valid image, but the `CategoryBlock` is not going to work. The problem is that block returns a single integer, but we need to be able to have multiple labels for each item. To solve this, we use a `MultiCategoryBlock`. This type of block expects to receive a list of strings, as we have in this case, so let’s test it out:
dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
get_x = get_x, get_y = get_y)
dsets = dblock.datasets(df)
dsets.train[0]
# As you can see, our list of categories is not encoded in the same way that it was for the regular `CategoryBlock`. In that case, we had a single integer representing which category was present, based on its location in our vocab. In this case, however, we instead have a list of zeros, with a one in any position where that category is present. For example, if there is a one in the second and fourth positions, then that means that vocab items two and four are present in this image. This is known as *one-hot encoding*. The reason we can’t easily just use a list of category indices is that each list would be a different length, and PyTorch requires tensors, where everything has to be the same length.
# > jargon: One-hot encoding: Using a vector of zeros, with a one in each location that is represented in the data, to encode a list of integers.
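# As a concrete sketch of the encoding (plain Python, not fastai's implementation), `one_hot` here is a hypothetical helper name:

```python
def one_hot(labels, vocab):
    # 1.0 in each position whose vocab item appears in `labels`, else 0.0;
    # every encoded item has length len(vocab), regardless of how many labels it has
    return [1.0 if v in labels else 0.0 for v in vocab]
```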
# Let’s check what the categories represent for this example (we are using the convenient `torch.where` function, which tells us all of the indices where our condition is true or false):
idxs = torch.where(dsets.train[0][1]==1.)[0]
dsets.train.vocab[idxs]
# With NumPy arrays, PyTorch tensors, and fastai’s `L` class, we can index directly using a list or vector, which makes a lot of code (such as this example) much clearer and more concise.
#
# We have ignored the column `is_valid` up until now, which means that `DataBlock` has been using a random split by default. To explicitly choose the elements of our validation set, we need to write a function and pass it to `splitter` (or use one of fastai's predefined functions or classes). It will take the items (here our whole DataFrame) and must return two (or more) lists of integers:
# +
def splitter(df):
train = df.index[~df['is_valid']].tolist()
valid = df.index[df['is_valid']].tolist()
return train,valid
dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
splitter=splitter,
get_x=get_x,
get_y=get_y)
dsets = dblock.datasets(df)
dsets.train[0]
# -
# As we have discussed, a `DataLoader` collates the items from a `Dataset` into a mini-batch. This is a tuple of tensors, where each tensor simply stacks the items from that location in the `Dataset` item.
#
# Now that we have confirmed that the individual items look okay, there's one more step before we can create our `DataLoaders`: every item must be the same size. To do this, we can use `RandomResizedCrop`:
dblock = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
splitter=splitter,
get_x=get_x,
get_y=get_y,
item_tfms = RandomResizedCrop(128, min_scale=0.35))
dls = dblock.dataloaders(df)
# And now we can display a sample of our data:
dls.show_batch(nrows=1, ncols=3)
# Remember that if anything goes wrong when you create your `DataLoaders` from your `DataBlock`, or if you want to view exactly what happens with your `DataBlock`, you can use the `summary` method we presented in the last chapter.
# Our data is now ready for training a model. As we will see, nothing is going to change when we create our `Learner`, but behind the scenes, the fastai library will pick a new loss function for us: binary cross-entropy.
# ### Binary Cross-Entropy
# Now we'll create our `Learner`. We saw in <<chapter_mnist_basics>> that a `Learner` object contains four main things: the model, a `DataLoaders` object, an `Optimizer`, and the loss function to use. We already have our `DataLoaders`, we can leverage fastai's `resnet` models (which we'll learn how to create from scratch later), and we know how to create an `SGD` optimizer. So let's focus on ensuring we have a suitable loss function. To do this, let's use `cnn_learner` to create a `Learner`, so we can look at its activations:
learn = cnn_learner(dls, resnet18)
# We also saw that the model in a `Learner` is generally an object of a class inheriting from `nn.Module`, and that we can call it using parentheses and it will return the activations of a model. You should pass it your independent variable, as a mini-batch. We can try it out by grabbing a mini batch from our `DataLoader` and then passing it to the model:
x,y = to_cpu(dls.train.one_batch())
activs = learn.model(x)
activs.shape
# Think about why `activs` has this shape—we have a batch size of 64, and we need to calculate the probability of each of 20 categories. Here’s what one of those activations looks like:
activs[0]
# > note: Getting Model Activations: Knowing how to manually get a mini-batch and pass it into a model, and look at the activations and loss, is really important for debugging your model. It is also very helpful for learning, so that you can see exactly what is going on.
# They aren’t yet scaled to between 0 and 1, but we learned how to do that in <<chapter_mnist_basics>>, using the `sigmoid` function. We also saw how to calculate a loss based on this—this is our loss function from <<chapter_mnist_basics>>, with the addition of `log` as discussed in the last chapter:
def binary_cross_entropy(inputs, targets):
inputs = inputs.sigmoid()
    return -torch.where(targets==1, inputs, 1-inputs).log().mean()
# Note that because we have a one-hot-encoded dependent variable, we can't directly use `nll_loss` or `softmax` (and therefore we can't use `cross_entropy`):
#
# - `softmax`, as we saw, requires that all predictions sum to 1, and tends to push one activation to be much larger than the others (due to the use of `exp`); however, we may well have multiple objects that we're confident appear in an image, so restricting the maximum sum of activations to 1 is not a good idea. By the same reasoning, we may want the sum to be *less* than 1, if we don't think *any* of the categories appear in an image.
# - `nll_loss`, as we saw, returns the value of just one activation: the single activation corresponding with the single label for an item. This doesn't make sense when we have multiple labels.
#
# On the other hand, the `binary_cross_entropy` function, which is just `mnist_loss` along with `log`, provides just what we need, thanks to the magic of PyTorch's elementwise operations. Each activation will be compared to each target for each column, so we don't have to do anything to make this function work for multiple columns.
# > j: One of the things I really like about working with libraries like PyTorch, with broadcasting and elementwise operations, is that quite frequently I find I can write code that works equally well for a single item or a batch of items, without changes. `binary_cross_entropy` is a great example of this. By using these operations, we don't have to write loops ourselves, and can rely on PyTorch to do the looping we need as appropriate for the rank of the tensors we're working with.
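# To make the arithmetic concrete, here is a plain-Python version of the same computation on already-sigmoided probabilities (an illustrative sketch with a hypothetical name, not the fastai implementation):

```python
import math

def bce_from_probs(probs, targets):
    # where the target is 1 we take -log(p); where it is 0 we take -log(1 - p)
    terms = [-math.log(p if t == 1 else 1 - p) for p, t in zip(probs, targets)]
    return sum(terms) / len(terms)
```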
# PyTorch already provides this function for us. In fact, it provides a number of versions, with rather confusing names!
#
# `F.binary_cross_entropy` and its module equivalent `nn.BCELoss` calculate cross-entropy on a one-hot-encoded target, but do not include the initial `sigmoid`. Normally for one-hot-encoded targets you'll want `F.binary_cross_entropy_with_logits` (or `nn.BCEWithLogitsLoss`), which do both sigmoid and binary cross-entropy in a single function, as in the preceding example.
#
# The equivalent for single-label datasets (like MNIST or the Pet dataset), where the target is encoded as a single integer, is `F.nll_loss` or `nn.NLLLoss` for the version without the initial softmax, and `F.cross_entropy` or `nn.CrossEntropyLoss` for the version with the initial softmax.
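# A quick numeric check of the naming convention above (assuming PyTorch is installed; the values are arbitrary):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[0.5, -1.0, 2.0]])
targets = torch.tensor([[1.0, 0.0, 1.0]])

# the "with_logits" version is sigmoid + binary cross-entropy fused into one call
with_logits = F.binary_cross_entropy_with_logits(logits, targets)
manual = F.binary_cross_entropy(torch.sigmoid(logits), targets)
```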
#
# Since we have a one-hot-encoded target, we will use `BCEWithLogitsLoss`:
loss_func = nn.BCEWithLogitsLoss()
loss = loss_func(activs, y)
loss
# We don't actually need to tell fastai to use this loss function (although we can if we want) since it will be automatically chosen for us. fastai knows that the `DataLoaders` has multiple category labels, so it will use `nn.BCEWithLogitsLoss` by default.
#
# One change compared to the last chapter is the metric we use: because this is a multilabel problem, we can't use the accuracy function. Why is that? Well, accuracy was comparing our outputs to our targets like so:
#
# ```python
# def accuracy(inp, targ, axis=-1):
# "Compute accuracy with `targ` when `pred` is bs * n_classes"
# pred = inp.argmax(dim=axis)
# return (pred == targ).float().mean()
# ```
#
# The class predicted was the one with the highest activation (this is what `argmax` does). Here it doesn't work because we could have more than one prediction on a single image. After applying the sigmoid to our activations (to make them between 0 and 1), we need to decide which ones are 0s and which ones are 1s by picking a *threshold*. Each value above the threshold will be considered as a 1, and each value lower than the threshold will be considered a 0:
#
# ```python
# def accuracy_multi(inp, targ, thresh=0.5, sigmoid=True):
# "Compute accuracy when `inp` and `targ` are the same size."
# if sigmoid: inp = inp.sigmoid()
# return ((inp>thresh)==targ.bool()).float().mean()
# ```
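# A tiny numeric check of that thresholding logic, in plain Python (illustrative only; the real `accuracy_multi` operates on tensors):

```python
def accuracy_multi_plain(probs, targs, thresh=0.5):
    # compare thresholded probabilities against 0/1 targets, elementwise
    hits = [(p > thresh) == bool(t)
            for row_p, row_t in zip(probs, targs)
            for p, t in zip(row_p, row_t)]
    return sum(hits) / len(hits)
```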
# If we pass `accuracy_multi` directly as a metric, it will use the default value for `thresh`, which is 0.5. We might want to adjust that default and create a new version of `accuracy_multi` that has a different default. To help with this, there is a function in Python's `functools` module called `partial`. It allows us to *bind* a function with some arguments or keyword arguments, making a new version of that function that, whenever it is called, always includes those arguments. For instance, here is a simple function taking two arguments:
def say_hello(name, say_what="Hello"): return f"{say_what} {name}."
say_hello('Jeremy'),say_hello('Jeremy', 'Ahoy!')
# We can switch to a French version of that function by using `partial`:
f = partial(say_hello, say_what="Bonjour")
f("Jeremy"),f("Sylvain")
# We can now train our model. Let's try setting the accuracy threshold to 0.2 for our metric:
learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi, thresh=0.2))
learn.fine_tune(3, base_lr=3e-3, freeze_epochs=4)
# Picking a threshold is important. If you pick a threshold that's too low, you'll often be failing to select correctly labeled objects. We can see this by changing our metric, and then calling `validate`, which returns the validation loss and metrics:
learn.metrics = partial(accuracy_multi, thresh=0.1)
learn.validate()
# If you pick a threshold that's too high, you'll only be selecting the objects for which your model is very confident:
learn.metrics = partial(accuracy_multi, thresh=0.99)
learn.validate()
# We can find the best threshold by trying a few levels and seeing what works best. This is much faster if we just grab the predictions once:
preds,targs = learn.get_preds()
# Then we can call the metric directly. Note that by default `get_preds` applies the output activation function (sigmoid, in this case) for us, so we'll need to tell `accuracy_multi` to not apply it:
accuracy_multi(preds, targs, thresh=0.9, sigmoid=False)
# We can now use this approach to find the best threshold level:
xs = torch.linspace(0.05,0.95,29)
accs = [accuracy_multi(preds, targs, thresh=i, sigmoid=False) for i in xs]
plt.plot(xs,accs);
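# The same sweep can be sketched without fastai, for any set of held-out probabilities (`best_threshold` is an illustrative helper, not part of the library):

```python
def best_threshold(probs, targs, grid):
    # pick the grid value that maximizes elementwise thresholded accuracy
    def acc(thresh):
        hits = [(p > thresh) == bool(t)
                for rp, rt in zip(probs, targs)
                for p, t in zip(rp, rt)]
        return sum(hits) / len(hits)
    return max(grid, key=acc)
```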
# In this case, we're using the validation set to pick a hyperparameter (the threshold), which is the purpose of the validation set. Sometimes students have expressed their concern that we might be *overfitting* to the validation set, since we're trying lots of values to see which is the best. However, as you see in the plot, changing the threshold in this case results in a smooth curve, so we're clearly not picking some inappropriate outlier. This is a good example of where you have to be careful of the difference between theory (don't try lots of hyperparameter values or you might overfit the validation set) versus practice (if the relationship is smooth, then it's fine to do this).
#
# This concludes the part of this chapter dedicated to multi-label classification. Next, we'll take a look at a regression problem.
# ## Regression
# It's easy to think of deep learning models as being classified into domains, like *computer vision*, *NLP*, and so forth. And indeed, that's how fastai classifies its applications—largely because that's how most people are used to thinking of things.
#
# But really, that's hiding a more interesting and deeper perspective. A model is defined by its independent and dependent variables, along with its loss function. That means that there's really a far wider array of models than just the simple domain-based split. Perhaps we have an independent variable that's an image, and a dependent that's text (e.g., generating a caption from an image); or perhaps we have an independent variable that's text and dependent that's an image (e.g., generating an image from a caption—which is actually possible for deep learning to do!); or perhaps we've got images, texts, and tabular data as independent variables, and we're trying to predict product purchases... the possibilities really are endless.
#
# To be able to move beyond fixed applications, to crafting your own novel solutions to novel problems, it helps to really understand the data block API (and maybe also the mid-tier API, which we'll see later in the book). As an example, let's consider the problem of *image regression*. This refers to learning from a dataset where the independent variable is an image, and the dependent variable is one or more floats. Often we see people treat image regression as a whole separate application—but as you'll see here, we can treat it as just another CNN on top of the data block API.
#
# We're going to jump straight to a somewhat tricky variant of image regression, because we know you're ready for it! We're going to do a key point model. A *key point* refers to a specific location represented in an image—in this case, we'll use images of people and we'll be looking for the center of the person's face in each image. That means we'll actually be predicting *two* values for each image: the row and column of the face center.
# ### Assemble the Data
# We will use the [Biwi Kinect Head Pose dataset](https://icu.ee.ethz.ch/research/datsets.html) for this section. We'll begin by downloading the dataset as usual:
path = untar_data(URLs.BIWI_HEAD_POSE)
#hide
Path.BASE_PATH = path
# Let's see what we've got!
path.ls().sorted()
# There are 24 directories numbered from 01 to 24 (they correspond to the different people photographed), and a corresponding *.obj* file for each (we won't need them here). Let's take a look inside one of these directories:
(path/'01').ls().sorted()
# Inside the subdirectories, we have different frames, each of which comes with an image (*\_rgb.jpg*) and a pose file (*\_pose.txt*). We can easily get all the image files recursively with `get_image_files`, then write a function that converts an image filename to its associated pose file:
img_files = get_image_files(path)
def img2pose(x): return Path(f'{str(x)[:-7]}pose.txt')
img2pose(img_files[0])
# Let's take a look at our first image:
im = PILImage.create(img_files[0])
im.shape
im.to_thumb(160)
# The Biwi dataset website used to explain the format of the pose text file associated with each image, which shows the location of the center of the head. The details of this aren't important for our purposes, so we'll just show the function we use to extract the head center point:
cal = np.genfromtxt(path/'01'/'rgb.cal', skip_footer=6)
def get_ctr(f):
ctr = np.genfromtxt(img2pose(f), skip_header=3)
c1 = ctr[0] * cal[0][0]/ctr[2] + cal[0][2]
c2 = ctr[1] * cal[1][1]/ctr[2] + cal[1][2]
return tensor([c1,c2])
# This function returns the coordinates as a tensor of two items:
get_ctr(img_files[0])
# We can pass this function to `DataBlock` as `get_y`, since it is responsible for labeling each item. We'll resize the images to half their input size, just to speed up training a bit.
#
# One important point to note is that we should not just use a random splitter. The reason for this is that the same people appear in multiple images in this dataset, but we want to ensure that our model can generalize to people that it hasn't seen yet. Each folder in the dataset contains the images for one person. Therefore, we can create a splitter function that returns true for just one person, resulting in a validation set containing just that person's images.
#
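# A plain-Python sketch of that person-wise split (holding out person '13', as we do with `FuncSplitter` in this section; the helper name and path layout are illustrative):

```python
def person_split(paths, held_out='13'):
    # each path looks like '<person>/<frame>_rgb.jpg'; hold out one whole person
    train = [p for p in paths if p.split('/')[-2] != held_out]
    valid = [p for p in paths if p.split('/')[-2] == held_out]
    return train, valid
```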
# The only other difference from the previous data block examples is that the second block is a `PointBlock`. This is necessary so that fastai knows that the labels represent coordinates; that way, it knows that when doing data augmentation, it should do the same augmentation to these coordinates as it does to the images:
biwi = DataBlock(
blocks=(ImageBlock, PointBlock),
get_items=get_image_files,
get_y=get_ctr,
splitter=FuncSplitter(lambda o: o.parent.name=='13'),
batch_tfms=[*aug_transforms(size=(240,320)),
Normalize.from_stats(*imagenet_stats)]
)
# > important: Points and Data Augmentation: We're not aware of other libraries (except for fastai) that automatically and correctly apply data augmentation to coordinates. So, if you're working with another library, you may need to disable data augmentation for these kinds of problems.
# Before doing any modeling, we should look at our data to confirm it seems okay:
dls = biwi.dataloaders(path)
dls.show_batch(max_n=9, figsize=(8,6))
# That's looking good! As well as looking at the batch visually, it's a good idea to also look at the underlying tensors (especially as a student; it will help clarify your understanding of what your model is really seeing):
xb,yb = dls.one_batch()
xb.shape,yb.shape
# Make sure that you understand *why* these are the shapes for our mini-batches.
# Here's an example of one row from the dependent variable:
yb[0]
# As you can see, we haven't had to use a separate *image regression* application; all we've had to do is label the data, and tell fastai what kinds of data the independent and dependent variables represent.
# It's the same for creating our `Learner`. We will use the same function as before, with one new parameter, and we will be ready to train our model.
# ### Training a Model
# As usual, we can use `cnn_learner` to create our `Learner`. Remember way back in <<chapter_intro>> how we used `y_range` to tell fastai the range of our targets? We'll do the same here (coordinates in fastai and PyTorch are always rescaled between -1 and +1):
learn = cnn_learner(dls, resnet18, y_range=(-1,1))
# `y_range` is implemented in fastai using `sigmoid_range`, which is defined as:
def sigmoid_range(x, lo, hi): return torch.sigmoid(x) * (hi-lo) + lo
# This is set as the final layer of the model, if `y_range` is defined. Take a moment to think about what this function does, and why it forces the model to output activations in the range `(lo,hi)`.
#
# Here's what it looks like:
plot_function(partial(sigmoid_range,lo=-1,hi=1), min=-4, max=4)
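# A plain-Python version makes the bounds easy to verify (a sketch using `math.exp` rather than torch; the name is hypothetical):

```python
import math

def sigmoid_range_plain(x, lo, hi):
    # sigmoid squashes x to (0, 1); scaling and shifting maps that to (lo, hi)
    return 1 / (1 + math.exp(-x)) * (hi - lo) + lo
```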
# We didn't specify a loss function, which means we're getting whatever fastai chooses as the default. Let's see what it picked for us:
dls.loss_func
# This makes sense, since when coordinates are used as the dependent variable, most of the time we're likely to be trying to predict something as close as possible; that's basically what `MSELoss` (mean squared error loss) does. If you want to use a different loss function, you can pass it to `cnn_learner` using the `loss_func` parameter.
#
# Note also that we didn't specify any metrics. That's because the MSE is already a useful metric for this task (although it's probably more interpretable after we take the square root).
#
# We can pick a good learning rate with the learning rate finder:
learn.lr_find()
# We'll try an LR of 1e-2:
lr = 1e-2
learn.fine_tune(3, lr)
# Generally when we run this we get a loss of around 0.0001, which corresponds to an average coordinate prediction error of:
math.sqrt(0.0001)
# This sounds very accurate! But it's important to take a look at our results with `Learner.show_results`. The left side shows the actual (*ground truth*) coordinates and the right side shows our model's predictions:
learn.show_results(ds_idx=1, nrows=3, figsize=(6,8))
# It's quite amazing that with just a few minutes of computation we've created such an accurate key points model, and without any special domain-specific application. This is the power of building on flexible APIs, and using transfer learning! It's particularly striking that we've been able to use transfer learning so effectively even between totally different tasks; our pretrained model was trained to do image classification, and we fine-tuned for image regression.
# ## Conclusion
# In problems that are at first glance completely different (single-label classification, multi-label classification, and regression), we end up using the same model with just different numbers of outputs. The loss function is the one thing that changes, which is why it's important to double-check that you are using the right loss function for your problem.
#
# fastai will automatically try to pick the right one from the data you built, but if you are using pure PyTorch to build your `DataLoader`s, make sure you think hard when you have to decide on your choice of loss function, and remember that you most probably want:
#
# - `nn.CrossEntropyLoss` for single-label classification
# - `nn.BCEWithLogitsLoss` for multi-label classification
# - `nn.MSELoss` for regression
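# As a quick sanity check of those pairings (assuming PyTorch is installed; the shapes and targets here are illustrative):

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)  # batch of 4; 3 classes, 3 labels, or 3 floats

single = nn.CrossEntropyLoss()(logits, torch.tensor([0, 2, 1, 0]))           # class ids
multi = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 3)).float())  # one-hot floats
regress = nn.MSELoss()(logits, torch.randn(4, 3))                            # continuous targets
```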
# ## Questionnaire
# 1. How could multi-label classification improve the usability of the bear classifier?
# 1. How do we encode the dependent variable in a multi-label classification problem?
# 1. How do you access the rows and columns of a DataFrame as if it was a matrix?
# 1. How do you get a column by name from a DataFrame?
# 1. What is the difference between a `Dataset` and `DataLoader`?
# 1. What does a `Datasets` object normally contain?
# 1. What does a `DataLoaders` object normally contain?
# 1. What does `lambda` do in Python?
# 1. What are the methods to customize how the independent and dependent variables are created with the data block API?
# 1. Why is softmax not an appropriate output activation function when using a one hot encoded target?
# 1. Why is `nll_loss` not an appropriate loss function when using a one-hot-encoded target?
# 1. What is the difference between `nn.BCELoss` and `nn.BCEWithLogitsLoss`?
# 1. Why can't we use regular accuracy in a multi-label problem?
# 1. When is it okay to tune a hyperparameter on the validation set?
# 1. How is `y_range` implemented in fastai? (See if you can implement it yourself and test it without peeking!)
# 1. What is a regression problem? What loss function should you use for such a problem?
# 1. What do you need to do to make sure the fastai library applies the same data augmentation to your input images and your target point coordinates?
# ### Further Research
# 1. Read a tutorial about Pandas DataFrames and experiment with a few methods that look interesting to you. See the book's website for recommended tutorials.
# 1. Retrain the bear classifier using multi-label classification. See if you can make it work effectively with images that don't contain any bears, including showing that information in the web application. Try an image with two different kinds of bears. Check whether the accuracy on the single-label dataset is impacted using multi-label classification.
| fastbook/06_multicat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
text = '''This paper quantifies the impact of inundation risk and
salinization on the family structure and economic welfare
of coastal households in Bangladesh. These households
are already on the “front line” of climate change, so their
adaptation presages the future for hundreds of millions of
families worldwide who will face similar threats by 2100.
The analysis is based on a household decision model that
relates spatial deployment of working-age, migration-capable members to inundation and salinization threats. The
analysis uses appropriate estimation techniques, including adjustments for spatial autocorrelation, and finds
that households subject to high inundation and salinization threats have significantly higher out-migration rates
for working-age adults (particularly males), dependency
ratios, and poverty incidence than their counterparts in
non-threatened areas. The findings indicate that the critical
zone for inundation risk lies within four kilometers of the
coast, with attenuated impacts for coastal-zone households
at higher elevations. The results paint a sobering picture
of life at the coastal margin for Bangladeshi households
threatened by inundation and salinization, particularly
households that are relatively isolated from market centers.
They respond by “hollowing out,” as economic necessity
drives more working-age adults to seek outside earnings.
Those left behind face a far greater likelihood of extreme
poverty than their counterparts in less-threatened areas. The
powerful results for market access, coupled with previous
findings on salinity and road maintenance, suggest that
infrastructure investment may offer a promising option.
Road improvements that reduce travel times for isolated
settlements compensate them for an increase in salinity.
Thus, road improvement may warrant particular attention
as an attractive adaptation investment in coastal Bangladesh.'''
with open('/home/wb536061/wb-nlp/CORPUS/WB/TXT_ORIG/wb_10000577.txt', 'rb') as fl:
text = fl.read().decode('utf-8', errors='ignore')
import re
text = re.sub(r'[\r\n]+', '. ', text)
# + jupyter={"outputs_hidden": true}
import spacy
nlp = spacy.load("en_core_web_sm")
# text = "But Google is starting from behind. The company made a late push\ninto hardware, and Apple’s Siri, available on iPhones, and Amazon’s Alexa\nsoftware, which runs on its Echo and Dot devices, have clear leads in\nconsumer adoption."
doc = nlp(text)
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
# -
import pandas as pd
entities = pd.DataFrame([{'label': ent.label_, 'text': ent.text} for ent in doc.ents])
entities[entities.label=='GPE'].text.value_counts()
for i in entities[entities.label=='ORG'].text.value_counts().index:
print(i)
| notebooks/archive/SCRIPTS/ner/Named Entity Recognition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import json
import numpy as np
import matplotlib.pyplot as plt
# +
# %%time
fileName = 'exportUsageAllNoHeuristic.json'
usage = pd.read_json(fileName, orient='records')
usingPackages = usage[usage['PackagesUsedCount'] > 0]
Y = []
for pu in usingPackages['PercentageUsed']:
    used = 0  # avoid shadowing the built-in sum()
    total = 0
    for key, value in pu.items():
        if value > 0:
            used = used + value
            total = total + 1.0
    if total > 0:
        Y.append(used/total)
Y = np.array(Y)
# -
plt.figure(figsize=(8,6), dpi=100)
n, bins, patches = plt.hist(Y, bins='doane')
plt.grid(axis='y', alpha=0.75)
plt.gca().set_xticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_xticks()])
plt.xlabel('Percentage of an API used')
plt.ylabel('Packages')
print(n,bins)
plt.savefig('{}.png'.format(fileName))
# +
# %%time
packageCosts = pd.read_csv("packageCost.csv")
packageCostMap = {}
for index, row in packageCosts.iterrows():
pkg = row['package']
packageCostMap[pkg] = row['cost']
# +
# %%time
costSavings = []
for pu in usingPackages['PercentageUsed']:
sumCosts = 0
for key,value in pu.items():
if value > 0 and value <= 0.1:
if key in packageCostMap:
cost = packageCostMap[key]
# adds removed package itself to the savings
sumCosts = sumCosts + cost + 1
costSavings.append(sumCosts)
costSavings = np.array(costSavings)
# +
import matplotlib
matplotlib.rcParams.update({'font.size': 14})
bars = [
costSavings[costSavings == 0],
costSavings[(costSavings >= 1) & (costSavings <= 9)],
costSavings[(costSavings >= 10) & (costSavings <= 99)],
costSavings[(costSavings >= 100) & (costSavings <= 999)]
]
Y = []
for b in bars:
Y.append(len(b))
X = np.arange(0, len(Y), step=1)
maxSize = costSavings.max()
plt.figure(figsize=(9,6), dpi=90)
plt.bar(X, Y, color="blue")
# for x,y in zip(X,Y):
# plt.text(x, y, y, ha='center', va= 'bottom')
# plt.ylim(0, 700000)
plt.xticks(np.arange(0, len(Y), step=1), ['0', '1-9', '10-99', '100-{}'.format(maxSize)])
plt.xlabel('Packages saved')
plt.ylabel('Packages')
plt.savefig('costSavings.png')
| jupyterlab/graphs/ExportUsage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from DeepSky import deepsky
SkyTrainer = deepsky.Trainer()
SkyTrainer.slugs_list
SkyTrainer.table_names
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': ['2015-06-23', ''],
'Landsat-7-Surface-Reflectance': ['1999-01-01', ''],
'Landsat-8-Surface-Reflectance': ['2013-04-01', ''],
'USDA-NASS-Cropland-Data-Layers': ['1997-01-01', ''],
'USGS-National-Land-Cover-Database': ['1992-01-01', '2017-01-01'],
'Lake-Water-Quality-100m': ['2019-01-01', '2019-12-31']
}
SkyTrainer.images
SkyTrainer.db.Query("select * from jobs")
SkyTrainer.db.insert('image', [{'dataset_id':1,'bands_selections':['B2', 'B3', 'B4', 'B5', 'ndvi', 'ndwi']},{'dataset_id':2,'bands_selections':['B2', 'B3', 'B4', 'B5', 'ndvi', 'ndwi']}])
SkyTrainer.db.delete('jobs', 1)
SkyTrainer.db.update('model', {'model_name':'test2'}, 4)
# +
def df_to_db(df, db, table_name, id, operation):
"""Save DataFrames into database."""
data = df.iloc[id].to_dict()
if operation == 'insert':
return db.insert(table_name, [data])
elif operation == 'update':
return db.update(table_name, data, id)
else:
        raise ValueError(f"operation {operation} not in [insert, update]")
df_to_db(SkyTrainer.versions, SkyTrainer.db, 'model_versions', 2, 'update')
# -
| notebooks/dbclass_usage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import numpy as np
import torch
import time
from PIL import Image
import torchvision.transforms as transforms
# Neural_Style
from Net1_Res import Net1_3Res
# define Neural_style_transfer model and transform tool
def NeuralStyle_init(weight_path):
model = Net1_3Res()
model.load_state_dict(torch.load(weight_path))
model.cuda()
model.eval()
return model
# define how to transform an image
# 'img' here is a BGR frame, e.g. as returned by cv2.imread or cv2.VideoCapture.read
def transform(img, model):
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (320, 240)).astype(np.float32)
img = torch.from_numpy(img.transpose(2,0,1))
img = img.cuda()
# style transfer
t_img = model(img.unsqueeze(0)).data.squeeze(0).cpu()
# process after transferring
#t_img /=255
t_img[t_img > 255] = 255
t_img[t_img < 0] = 0
img = t_img.numpy().transpose(1,2,0)
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
return img
# -
img = cv2.imread("images/test3.jpg")
model2 = NeuralStyle_init("checkpoints/GB_Net3_ResInConv.pth")
out = transform(img, model2)
cv2.imwrite("images/out_net3_ResInConv.jpg", out)
| Neural_Style/.ipynb_checkpoints/test_for_Net1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from collections import Counter
class Solution:
def checkRecord(self, s: str) -> bool:
cnt = Counter(s)
if cnt['A'] > 1:
return False
for ss in s.split('A'):
if False in [len(x) < 3 for x in ss.split('P') if x]:
return False
return True
# -
s = Solution()
s.checkRecord("LPLPLPLPLPLLL")
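# The split-based check above is equivalent to a more direct formulation: a record is
# rewardable iff it contains fewer than two 'A' characters and no run of three 'L's.
# A minimal sketch (the standalone function name is illustrative):

```python
def check_record(s: str) -> bool:
    # Fewer than two absences and never three consecutive late days.
    return s.count('A') < 2 and 'LLL' not in s

print(check_record("LPLPLPLPLPLLL"))  # ends in "LLL", so False
```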
| algorithms/551-student-attendance-record-i.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:benchmet]
# language: python
# name: conda-env-benchmet-py
# ---
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import zscore, spearmanr
from tqdm import tqdm
# # Read Human Ratings
# ### Simplicity DA
df_ratings_simp = pd.read_csv("../ratings_per_annotator/simplicity_DA_ratings.csv")
# +
def standardise_ratings(df, rater_id, aspect):
return df.groupby(by=rater_id)[aspect].transform(lambda x: zscore(x))
df_ratings_simp["simplicity_zscore"] = standardise_ratings(df_ratings_simp, rater_id='rater_id', aspect="simplicity")
# -
# ### Simplicity Gain
df_ratings_simpgain = pd.read_csv("../ratings_per_annotator/simplicity_gain_ratings.csv")
# ### Structural Simplicity
df_ratings_struct = pd.read_csv("../ratings_per_annotator/structural_simplicity_ratings.csv")
# # Compute ICC
# ### Simplicity-DA
# +
# Reformat the dataset
df_ratings_simp['segment_id'] = df_ratings_simp['sent_id'].astype(str) + df_ratings_simp['sys_name']
df_ratings_simp["rater_num"] = df_ratings_simp.groupby(["segment_id"]).cumcount()
# Compute Intraclass Correlation Coefficient (ICC)
icc = pg.intraclass_corr(data=df_ratings_simp, targets='segment_id', raters='rater_num', ratings='simplicity_zscore').round(3)
icc
# -
# ### Simplicity Gain
# +
# Reformat the dataset
df_ratings_simpgain['segment_id'] = df_ratings_simpgain['sent_id'].astype(str) + df_ratings_simpgain['sys_name']
df_ratings_simpgain["rater_num"] = df_ratings_simpgain.groupby(["segment_id"]).cumcount()
# Compute Intraclass Correlation Coefficient (ICC)
icc = pg.intraclass_corr(data=df_ratings_simpgain, targets='segment_id', raters='rater_num', ratings='simplicity_gain').round(3)
icc
# -
# ### Structural Simplicity
# +
# Reformat the dataset
df_ratings_struct['segment_id'] = df_ratings_struct['sent_id'].astype(str) + df_ratings_struct['sys_name']
# Compute Intraclass Correlation Coefficient (ICC)
icc = pg.intraclass_corr(data=df_ratings_struct, targets='segment_id', raters='rater_id', ratings='structural_simplicity').round(3)
icc
# -
# # Compute Correlation
def simulate_two_annotators(ratings, num_ratings_annotatorA=1):
ratings_shuffled = np.random.permutation(ratings)
ratingA = np.mean(ratings_shuffled[:num_ratings_annotatorA])
ratingB = np.mean(ratings_shuffled[num_ratings_annotatorA:])
return [ratingA, ratingB]
def compute_correlation(df_ratings, segment_id, aspects, n_simulations=1000):
corr_per_aspect = {}
for aspect in aspects:
df_scores = df_ratings[[segment_id, aspect]]
corr_values = []
for _ in tqdm(range(n_simulations)):
ratings_simulation = df_scores.groupby(segment_id)[aspect].apply(simulate_two_annotators).to_list()
raterA, raterB = zip(*ratings_simulation)
corr_values.append(spearmanr(raterA, raterB)[0])
corr_per_aspect[aspect] = (np.mean(corr_values), np.std(corr_values))
return corr_per_aspect
# ### Simplicity-DA
compute_correlation(df_ratings=df_ratings_simp, segment_id="segment_id", aspects=["simplicity_zscore"])
# ### Simplicity Gain
compute_correlation(df_ratings=df_ratings_simpgain, segment_id="segment_id", aspects=["simplicity_gain"])
# ### Structural Simplicity
compute_correlation(df_ratings=df_ratings_struct, segment_id="segment_id", aspects=["structural_simplicity"])
| notebooks/agreement.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <NAME> - Homework no. 3
# <h1 id="tocheading">Table of Contents</h1>
# <div id="toc"></div>
# + language="javascript"
# $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
# -
# # Preparing the subsets for classification
# ## Loading and inspection
# And here we go, another interesting dataset! And once again such a practical one - honestly, in machine-learning-like courses I had feared run-of-the-mill datasets, or ones force-fitted to a given problem just for the students, yet assignment after assignment I find myself smiling at the very specification and the information contained in the frame.
#
# What is more, this time everything is already prepared - from the column descriptions all the way to the handling of missing data. **Could it be any nicer?**
#
# First I will load the required packages and the prepared *.csv* file. Since I have already worked through a **supervised learning in Python course using *sklearn*** (which I warmly recommend - so far it is one of the best courses I have taken on), I intend to keep sharpening my fluency with that library while doing this homework.
import pandas as pd
import numpy as np
import sklearn
import matplotlib.pyplot as plt
# Let's open our dataset and quickly make sure that everything is indeed fine with it \[not that I didn't believe it\].
df = pd.read_csv('australia.csv')
df.sample(10)
# ## Target variable
# The features and target are also given - we want to predict whether it will rain the next day, so the target variable will of course be *RainTomorrow*.
X = df.drop(['RainTomorrow'], axis = 1)
y = df[['RainTomorrow']]
X
# ## Train-test split
# ... knowing the *sklearn* function, it can be done in a single line.
#
# Let's take the standard split of 80% of the rows for training / 20% for testing - why not?
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 6)
# I will work on these four sub-frames until the very end - thanks to that, each of the three classifications will be evaluated on comparable data.
#
# We are dealing with as many as 56,420 rows in the input data frame - so by the Law of Large Numbers we may expect that our classifiers should not lose quality due to an "unlucky" train-test shuffle.
# # Models and their results
# ## k-Nearest Neighbors
# Let's start with a classic, why not.
#
# I honestly do not know how many nearest neighbors to choose - so I will test many parameter values and pick the best of them (I know I am getting a bit ahead of myself, but curiosity would have eaten me alive, and once something is done, why not show it off?).
#
# As a helper, I will use the best-parameter search tool **GridSearchCV** with 5-fold cross-validation.
# +
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
param_grid = {'n_neighbors': np.arange(1, 10)}
knn = KNeighborsClassifier()
knn_cv = GridSearchCV(knn, param_grid, cv=5)
knn_cv.fit(X_train, y_train.values.ravel())
knn_cv.best_params_
# -
# Searching for the best *n_neighbors* value among the first nine natural numbers, *GridSearchCV* pointed to 9. I had guessed that a small number like 3 or 4 would work best and the search could stop there, but apparently it will be more.
#
# Let's search linearly through the 5-fold cross-validation scores for consecutive neighbor counts, starting from 10, printing each score along the way... until we reach a situation where, for some *n_neighbors*, the scores for the next two larger values are both smaller. That will suggest that the model's accuracy will only keep dropping with every further increase of *n_neighbors*.
# +
from sklearn.model_selection import cross_val_score
knn_scores_by_no_neighbors = []
i = 10
while True:
knn = KNeighborsClassifier(i)
cv_scores = cross_val_score(knn, X_train, y_train.values.ravel(), cv = 5)
mean_score = np.mean(cv_scores)
    print("Score for ", i, " neighbors: ", mean_score, sep = "")
knn_scores_by_no_neighbors.append(mean_score)
if len(knn_scores_by_no_neighbors) > 3:
if(knn_scores_by_no_neighbors[-3] > knn_scores_by_no_neighbors[-2] and
knn_scores_by_no_neighbors[-3] > knn_scores_by_no_neighbors[-1]):
result = i - 2
break
i += 1
print("Best parameter:", result)
# -
# Wonderful, **17**! I was born on the 17th day of the month, so this result pleases me.
# Given that the search idea is simple but arguably not silly, I think we can trust the 17 nearest neighbors. And even if it is not the most optimal value - note that for neighbor counts between 10 and 19 the scores differ by *less than 0.3%*.
# Let's fit the model with 17 neighbors on the training set and see how it performs on the test set.
#
# Let's also generate a *confusion_matrix* and a *classification_report*.
# +
knn = KNeighborsClassifier(n_neighbors = 17)
knn.fit(X_train, y_train.values.ravel())
y_pred_knn = knn.predict(X_test)
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_pred_knn))
print(classification_report(y_test, y_pred_knn))
# -
# Let's also generate the ROC curve
# +
from sklearn.metrics import roc_curve
y_pred_knn_prob = knn.predict_proba(X_test)[:,-1]
fpr_knn, tpr_knn, thresholds = roc_curve(y_test, y_pred_knn_prob)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_knn,tpr_knn)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
# -
# ... and let's also compute the AUC.
# +
from sklearn.metrics import roc_auc_score
print("AUC: {}".format(roc_auc_score(y_test, y_pred_knn_prob)))
cv_auc_knn = cross_val_score(knn, X_test, y_test.values.ravel(), cv = 5, scoring = 'roc_auc')
print("AUC scores computed using 5-fold cross-validation: {}".format(cv_auc_knn))
# -
# I think we can feel quite satisfied.
#
# Only the *recall* for class 1 came out mediocre - meaning that, despite generally very good predictions for the days when no rain fell, only half of the days with actual rainfall were successfully predicted.
#
# In return, all the metrics for the no-rain class are excellent - so whenever the model assumed no rainfall, it was almost always right.
# ## Decision Tree Classifier
# I feel a certain strong fondness for this model, even though I know it is not perfect. Either way, I will take it on second.
#
# This time I want to take more hyperparameters into account - and to avoid waiting hours for a reasonably good result, I will use the **RandomizedSearchCV** tool instead.
# I will let that tool sample legitimate values for three hyperparameters:
#
# 1. *criterion* - "gini" / "entropy" - the criterion for computing information gain
# 2. *max_depth* - the maximum depth of the tree. With *None* there is no limit
# 3. *max_features* - the largest number of columns taken into account
# +
from scipy.stats import randint
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV
param_dist = {"max_depth": np.append(np.arange(1, 26), None),
"max_features": np.arange(4, 18),
"criterion": ["gini", "entropy"]}
tree = DecisionTreeClassifier()
tree_cv = RandomizedSearchCV(tree, param_dist, cv=10)
tree_cv.fit(X_train, y_train)
print("Tuned Decision Tree Parameters: {}".format(tree_cv.best_params_))
print("Best score is {}".format(tree_cv.best_score_))
# -
# The hyperparameter tuning above is not perfect either - I checked only the first three attributes offered by *DecisionTreeClassifier*, while there are many more.
# All right then, we now know which parameters are reasonably good. Let's see how the model with them performs on our test set, and what the confusion matrix looks like.
# +
y_pred_tree = tree_cv.predict(X_test)
print(confusion_matrix(y_test, y_pred_tree))
print(classification_report(y_test, y_pred_tree))
# -
# Let's also generate the ROC curve
# +
y_pred_tree_prob = tree_cv.predict_proba(X_test)[:,-1]
fpr_tree, tpr_tree, thresholds = roc_curve(y_test, y_pred_tree_prob)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_tree,tpr_tree)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
# -
# And the AUC metric, of course.
# +
print("AUC: {}".format(roc_auc_score(y_test, y_pred_tree_prob)))
cv_auc_tree = cross_val_score(tree_cv, X_test, y_test.values.ravel(), cv = 5, scoring = 'roc_auc')
print("AUC scores computed using 5-fold cross-validation: {}".format(cv_auc_tree))
# -
# Remarkably, k-Nearest Neighbors and the Decision Tree **performed very similarly**.
#
# For the no-rain day estimates **the metrics are identical; only in precision for rainy days does the DTC lose marginally** - by exactly two percentage points (74% > 72%).
#
# Looking at the concrete numbers of the *confusion matrix*, the values are also very close - what stands out is only a somewhat larger count of misclassifications for the decision tree model, which marginally more often predicted rain that did not actually occur (hence the slightly worse *precision*).
#
# We can therefore conclude that choosing between these two models has almost no effect on predicting whether it will rain in Australia the next day.
# ## Logistic Regression
# I am only starting my adventure with machine learning, and in this dataset we are dealing with binary classification - it would be a sin not to practice classic logistic regression! I have already played a bit with simple hyperparameter tuning, so this time I will use the plainest function without any fuss.
#
# I will only set the maximum number of iterations to 1000.
# +
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(max_iter = 1000)
logreg.fit(X_train, y_train.values.ravel())
y_pred_log = logreg.predict(X_test)
print(confusion_matrix(y_test, y_pred_log))
print(classification_report(y_test, y_pred_log))
# -
# Let's also look at the ROC curve.
# +
y_pred_logreg_prob = logreg.predict_proba(X_test)[:,-1]
fpr_logreg, tpr_logreg, thresholds = roc_curve(y_test, y_pred_logreg_prob)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_logreg,tpr_logreg)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
# -
# Finally, the AUC metric.
print("AUC: {}".format(roc_auc_score(y_test, y_pred_logreg_prob)))
# ## Comparison and summary
# It is remarkable how similar the results of these three models are. Especially the seemingly worlds-apart *k-Nearest Neighbors* and *Decision tree* - their metrics came out almost identical, with the most general measure, AUC, equal to **86.7%** and **86.4%** respectively.
#
# Despite the hyperparameter tuning for the first two, **logistic regression turned out best, with AUC = 87.9%**. For every other metric as well, the *Logistic regression* result was equal to the percentage point or slightly better. Let's still compare the ROC curves.
# +
plt.plot(fpr_knn, tpr_knn)
plt.plot(fpr_tree, tpr_tree)
plt.plot(fpr_logreg, tpr_logreg)
plt.plot([0, 1], [0, 1], 'k--')
plt.legend(['knn', 'tree', 'logreg'])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
# -
# These curves, too, confirm that the results are very close.
#
# Although one can read off that the **decision tree lags slightly** and that **logreg is indeed somewhat better than the other two**, still **the curves look almost the same**.
# The best metric? Given the complexity of the problem, I think **AUC** is the most representative one here - and indeed, at least for these three models, it in a way "averages" the results of the other metrics.
# # Bonus task
# ### Loading and inspection
# I admit this went more smoothly than the *encoding* task, and it is only 13:33 on Friday after the lecture. Not to mention that, although I am perfectly aware of how pointless it is, I am one of those people who, semester after semester since the 4th grade of primary school, fights for the best possible grade average. I have some time until Monday evening - so *Allegro* dataset, welcome back!!
allegro = pd.read_csv("allegro-api-transactions.csv")
allegro.sample(10)
# Which columns will I take into account, and which not?
#
# * *lp* no - for an obvious reason,
# * *date* may be useful - I will convert the dates and times to an integer. Trends surely change over time.
# +
from datetime import datetime
newDate = [0] * len(allegro["date"])
for i in range(len(allegro["date"])):
newOne = datetime.strptime(allegro["date"][i], "%Y-%m-%d %H:%M:%S")
newOne = int(newOne.timestamp())
newDate[i] = newOne
# -
# Let's check that everything really went according to plan.
from random import sample
sample(newDate, 10)
# Bueno!
#
# Let's replace the *date* column in our input data frame with the newly created one.
allegro[["date"]] = newDate
allegro.sample(10)
# ... Moving on:
#
# * *item_id* - definitely useless,
# * *categories* - here one could extract the listed categories from each row and, for instance, mark membership with an extra binary column per category, but that would take quite some work and is probably not the goal of this exercise,
# * *pay_option_on_delivery* - absolutely,
# * *pay_option_transfer* - same as above,
# * *seller* - in practice it would surely turn out that some users sell relatively more expensive products and others cheaper ones, but there are simply too many categories here - no model would consider this column with a feature per user \[and good grief, how many features that would make - not to mention the time, and we would run out of memory\],
# * *price* - well, this will be our target variable,
# * *it_is_allegro_standard* - will no doubt be useful and is already in a nice form,
# * *it_quantity* - we take it!
# * *it_is_brand_zone* - this one too,
# * *it_seller_rating* - and this one as well.
#
# Still left to think over are the features *it_location* and *main_category* - these we will definitely take into account, but, following homework 2, we will apply encoding.
#
# Encoding *it_location* with target encoding seemed sensible, so we will repeat that operation. Before that, though - let's take a closer look at the column. Could anything else sensible be done with it?
allegro[["it_location"]].groupby('it_location').size().reset_index()
# Well, rather so-so, I would say. As you can see, the "city" column was filled in by hand, and given the number of small towns and the various spelling formats, out of the 420k+ input observations we ended up with over 10,000 unique cities. Can we do something about it? Improve it somehow, for sure, but that is a topic for another assignment. In any case, converting all characters to lowercase certainly cannot hurt - that way at least 'Warszawa' == 'warszawa', which will improve the result.
allegro["it_location"] = allegro["it_location"].str.lower()
allegro[["it_location"]].groupby('it_location').size().reset_index()
# A bit better! Over 20% fewer cities, and yet exactly the same data.
#
# Let's perform **target encoding**.
# +
import category_encoders
te = category_encoders.target_encoder.TargetEncoder(allegro)
encoded = te.fit_transform(allegro['it_location'], allegro['price'])
encoded.sample(10)
# -
allegro['it_location'] = encoded
# Fancy. Following the instructions, we will encode *main_category* in three different ways - but first, let's slim our data frame down by the columns we will not need.
allegro = allegro.drop(columns = ['lp', 'item_id', 'categories', 'seller'])
allegro.sample(10)
# Nice, nothing but tidy numbers. With the exception of *main_category*... Let's create three new frames, each corresponding to one of the recently chosen encodings.
# #### One-hot encoding
# +
values = np.array(allegro["main_category"])  # 1-D array - LabelEncoder expects a flat vector
from sklearn.preprocessing import LabelEncoder
# integer encode
le = LabelEncoder()
integer_encoded = le.fit_transform(values)
print(integer_encoded)
# invert back to the original labels
print(le.inverse_transform(integer_encoded))
# -
category_series = pd.concat([pd.DataFrame(integer_encoded), pd.DataFrame(le.inverse_transform(integer_encoded))], axis = 1)
category_series = category_series.drop_duplicates()
category_series.columns = ["Index", "Category"]
category_series = category_series.sort_values("Index")
category_series
# +
from sklearn.preprocessing import OneHotEncoder
# one hot encode
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
print(onehot_encoded)
# invert
inverted = onehot_encoder.inverse_transform(onehot_encoded)
print(inverted.transpose())
# -
onehot_encoded = pd.DataFrame(onehot_encoded)
onehot_encoded.columns = category_series["Category"]
onehot_encoded
allegro_onehot = pd.concat([allegro, onehot_encoded], axis = 1).drop(columns = 'main_category')
allegro_onehot.sample(10)
# #### Binary encoding
bin_e = category_encoders.BinaryEncoder(cols = ['main_category'])
allegro_binary = bin_e.fit_transform(allegro)
allegro_binary
# How satisfyingly narrow this binary encoding is!
# #### Polynomial encoding
pe = category_encoders.PolynomialEncoder(cols = ['main_category'])
allegro_polynomial = pe.fit_transform(allegro)
allegro_polynomial
# ### Splitting into features and target variables
# As the task description indicates, the target will be *price*.
Xa = allegro.drop(['price'], axis = 1)
ya = allegro[['price']]
Xa
Xa_onehot = allegro_onehot.drop(['price'], axis = 1)
ya_onehot = allegro_onehot[['price']]
Xa_binary = allegro_binary.drop(['price'], axis = 1)
ya_binary = allegro_binary[['price']]
Xa_polynomial = allegro_polynomial.drop(['price'], axis = 1)
ya_polynomial = allegro_polynomial[['price']]
# ### Test and training sets
# ... which in this case - and, frankly, in almost every one at this level of sophistication - is rather a formality. Since there is a lot of data, but also quite a number of unique cities, I will allow myself a 9 / 1 ratio of training rows to test rows.
Xa_train, Xa_test, ya_train, ya_test = train_test_split(Xa, ya, test_size = 0.1, random_state = 6)
Xa_onehot_train, Xa_onehot_test, ya_onehot_train, ya_onehot_test = train_test_split(Xa_onehot, ya_onehot, test_size = 0.1, random_state = 6)
Xa_binary_train, Xa_binary_test, ya_binary_train, ya_binary_test = train_test_split(Xa_binary, ya_binary, test_size = 0.1, random_state = 6)
Xa_polynomial_train, Xa_polynomial_test, ya_polynomial_train, ya_polynomial_test = train_test_split(Xa_polynomial, ya_polynomial, test_size = 0.1, random_state = 6)
# ### Machine learning algorithm - Ridge
# As we read in the supervised learning in Python course with *scikit-learn*:
#
# *Lasso is great for feature selection, but when building regression models, Ridge regression should be your first choice.*
#
# So, following that advice, I will not go for Lasso but for the Ridge algorithm.
ya_onehot_train
# +
from math import sqrt
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import mean_squared_error

clf_onehot = RidgeClassifier()
clf_onehot.fit(np.array(Xa_onehot_train).astype(int), np.array(ya_onehot_train).astype(int))
ya_onehot_predicted = clf_onehot.predict(Xa_onehot_test)
rms = sqrt(mean_squared_error(ya_onehot_test, ya_onehot_predicted))
rms
# -
# Bad luck, a **MemoryError**! Darn it, I have to take a lighter training set...
sample_indexes = sample(list(range(1,378019)), 100000)
sample_indexes[1:10]
# +
from math import sqrt
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
clf_onehot = RidgeClassifier()
clf_onehot.fit(np.array(Xa_onehot_train.iloc[sample_indexes, :]).astype(int),
np.array(ya_onehot_train.iloc[sample_indexes, :]).astype(int).ravel())
ya_onehot_predicted = clf_onehot.predict(Xa_onehot_test)
rms = sqrt(mean_squared_error(ya_onehot_test, ya_onehot_predicted))
print("RMS for One-Hot:", rms)
coefficient_of_determination = r2_score(ya_onehot_test, ya_onehot_predicted)
print("R2 for One-Hot:", coefficient_of_determination)
# +
clf_binary = RidgeClassifier()
clf_binary.fit(np.array(Xa_binary_train.iloc[sample_indexes, :]).astype(int),
np.array(ya_binary_train.iloc[sample_indexes, :]).astype(int).ravel())
ya_binary_predicted = clf_binary.predict(Xa_binary_test)
rms = sqrt(mean_squared_error(ya_binary_test, ya_binary_predicted))
print("RMS for Binary:", rms)
coefficient_of_determination = r2_score(ya_binary_test, ya_binary_predicted)
print("R2 for Binary:", coefficient_of_determination)
# +
clf_polynomial = RidgeClassifier()
clf_polynomial.fit(np.array(Xa_polynomial_train.iloc[sample_indexes, :]).astype(int),
np.array(ya_polynomial_train.iloc[sample_indexes, :]).astype(int).ravel())
ya_polynomial_predicted = clf_polynomial.predict(Xa_polynomial_test)
rms = sqrt(mean_squared_error(ya_polynomial_test, ya_polynomial_predicted))
print("RMS for Polynomial:", rms)
coefficient_of_determination = r2_score(ya_polynomial_test, ya_polynomial_predicted)
print("R2 for Polynomial:", coefficient_of_determination)
# -
# **Shock and disbelief!!**
#
# As far as comparing the values goes, the results came out exactly opposite to what I expected - looking at the root mean square error, **the most complex polynomial encoding gave the worst result, and binary the best**. Note, however, that in all three cases the result is almost the same - the values range between 395.5 and 396.4, i.e. they differ by less than 1!! Presumably, with a differently drawn training and test set (and without the rows skipped because of the *MemoryError*), the situation could reverse. Looking at the RMS, one could therefore venture the claim that **the choice of encoding has no effect on how *RidgeClassifier()* performs**.
#
# Similarly, the coefficients of determination are very close, with the same "ranking", down to a very small difference.
#
# Summing up, for our dataset **it mattered little which encoding we used** - *one-hot*, *binary*, and *polynomial* all gave very similar results.
| Prace_domowe/Praca_domowa3/Grupa2/KosternaJakub/pd3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 2.2
#
# ## Goal
#
# The performance of machine learning systems directly depends on the quality of input features. In this exercise, you will investigate the impact of individual features on a system for named entity recognition: what does the inclusion of each individual feature do to the results? And what happens when they are combined?
#
#
#
# ## Acknowledgement
#
# This exercise made use of examples from the following exercise (in the HLT course):
#
# https://github.com/cltl/ma-hlt-labs/
#
# Lab3: machine learning
#
#
# ## Procedure
#
# This notebook will provide the code for running the experiments outlined above. You will only need to make minor adaptations to run the feature ablation analysis. This notebook was developed for another more introductory course where students did not need to generate their own features. Please take that into account while reading this code (i.e. you can use this as an example, but it will not work one-to-one on your own data).
#
# The notebooks and set up have been designed for educational purposes: design choices are based on clearly illustrating what is going on and facilitating the exercises.
#
# ## The Data
#
# The data of the original assignment has been preprocessed to make some useful features directly available (as you were recommended to do as well in Assignment 2).
#
# The format of the conll files provided to the students in this original exercise was:
#
# Token Preceding_token Capitalization POS-tag Chunklabel Goldlabel
#
# The first lines look like this:
#
# -DOCSTART- FULLCAP -X- -X- O
#
# EU FULLCAP NNP B-NP B-ORG
#
# rejects EU LOWCASE VBZ B-VP O
#
# Preceding_token:
# This column provides the token preceding the current token. (This is an empty space if there is no previous token).
#
# Capitalization:
# This column provides information on the capitalization of the token.
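# As a minimal parsing sketch (using the "rejects" sample line shown above; the column names follow this notebook's descriptions and the rows are tab-separated, as the reading code below assumes), each row maps to the six columns like this:

```python
# Column names as described above; "Goldlabel" sits in the last position.
columns = ["Token", "Prevtoken", "Cap", "Pos", "Chunklabel", "Goldlabel"]
line = "rejects\tEU\tLOWCASE\tVBZ\tB-VP\tO"
row = dict(zip(columns, line.split("\t")))
print(row["Token"], row["Prevtoken"], row["Goldlabel"])  # rejects EU O
```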
# ## Packages
#
# We will make use of the following packages:
#
# * scikit-learn : provides lots of useful implementations for machine learning (and has relatively good documentation!)
# * csv: a light-weight package to deal with data represented in csv (or related formats such as tsv)
# * gensim: a useful package for working with word embeddings
# * numpy: a package that (among others) provides useful data structures and operations to work with vectors
#
# Some notes on design decisions (feel free to ignore these if this is all new to you):
#
# * We are using csv rather than (the more common) pandas for working with the conll files, because pandas applies type conversion by default, which we do not want when dealing with text that contains numbers (fixing this would make the code look more complex).
# * scikit-learn provides several machine learning algorithms, but this is not the focus of this exercise. We are using logistic regression, because it serves the purpose of our experiments and is relatively efficient.
#
#
# +
#this cell imports all the modules we'll need. Make sure to run this once before running the other cells
#sklearn is scikit-learn
import sklearn
import csv
import gensim
import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
# -
# # Part 1: Traditional Features
#
# In this first part, we will explore the impact of various features on named entity recognition.
# We will use so-called traditional features, where the feature values (strings) are represented by one-hot encodings.
#
# ## Step 1: A Basic Classifier
#
# We will first walk through the process of creating and evaluating a simple classifier that only uses the token itself as a feature. In the next step, we will run evaluations on this basic system.
#
# This is generally a good way to start experimenting: first walk through the entire experimental process with a very basic, easy-to-create system to check that everything works, that there are no problems with the data, etc. You can then build up from there towards a more sophisticated system.
#
# +
#functions for feature extraction and training a classifier
## For documentation on how to create input representations of features in scikit-learn:
# https://scikit-learn.org/stable/modules/feature_extraction.html
#Setting some variables that we will use multiple times
trainfile = '../data/conll_train_cap.txt'
testfile = '../data/conll_test_cap.txt'
def extract_features_token_only_and_labels(conllfile):
'''Function that extracts features and gold label from preprocessed conll (here: tokens only).
:param conllfile: path to the (preprocessed) conll file
:type conllfile: string
:return features: a list of dictionaries, with key-value pair providing the value for the feature `token' for individual instances
:return labels: a list of gold labels of individual instances
'''
features = []
labels = []
conllinput = open(conllfile, 'r')
#delimiter indicates we are working with a tab separated value (default is comma)
#quotechar has as default value '"', which is used to indicate the borders of a cell containing longer pieces of text
#in this file, we have only one token as text, but this token can be '"', which then messes up the format. We set quotechar to a character that does not occur in our file
csvreader = csv.reader(conllinput, delimiter='\t',quotechar='|')
for row in csvreader:
#I preprocessed the file so that all rows with instances should contain 6 values, the others are empty lines indicating the beginning of a sentence
if len(row) == 6:
#structuring feature value pairs as key-value pairs in a dictionary
#the first column in the conll file represents tokens
feature_value = {'Token': row[0]}
features.append(feature_value)
#The last column provides the gold label (= the correct answer).
labels.append(row[-1])
return features, labels
def create_vectorizer_and_classifier(features, labels):
'''
Function that takes feature-value pairs and gold labels as input and trains a logistic regression classifier
:param features: feature-value pairs
:param labels: gold labels
:type features: a list of dictionaries
:type labels: a list of strings
:return lr_classifier: a trained LogisticRegression classifier
:return vec: a DictVectorizer to which the feature values are fitted.
'''
vec = DictVectorizer()
#fit creates a mapping between observed feature values and dimensions in a one-hot vector, transform represents the current values as a vector
tokens_vectorized = vec.fit_transform(features)
lr_classifier = LogisticRegression(solver='saga')
lr_classifier.fit(tokens_vectorized, labels)
return lr_classifier, vec
#extract features and labels:
feature_values, labels = extract_features_token_only_and_labels(trainfile)
#create vectorizer and trained classifier:
lr_classifier, vectorizer = create_vectorizer_and_classifier(feature_values, labels)
# -
# ## Step 2: Evaluation
#
# We will now run a basic evaluation of the system on a test file.
# Two important properties of the test file:
#
# 1. the test file and training file are independent sets (if they contain identical examples, this is coincidental)
# 2. the test file is preprocessed in the exact same way as the training file
#
# The first function runs our classifier on the test data.
#
# The second function prints out a confusion matrix (comparing predictions and gold labels per class).
# You can find more information on confusion matrices here: https://www.geeksforgeeks.org/confusion-matrix-machine-learning/
#
# The third function prints out the macro precision, recall and f-score of the system
# +
def get_predicted_and_gold_labels_token_only(testfile, vectorizer, classifier):
'''
Function that extracts features and runs classifier on a test file returning predicted and gold labels
:param testfile: path to the (preprocessed) test file
:param vectorizer: vectorizer in which the mapping between feature values and dimensions is stored
:param classifier: the trained classifier
:type testfile: string
:type vectorizer: DictVectorizer
:type classifier: LogisticRegression()
:return predictions: list of output labels provided by the classifier on the test file
:return goldlabels: list of gold labels as included in the test file
'''
#we use the same function as above (guarantees features have the same name and form)
sparse_feature_reps, goldlabels = extract_features_token_only_and_labels(testfile)
#we need to use the same fitting as before, so now we only transform the current features according to this mapping (using only transform)
test_features_vectorized = vectorizer.transform(sparse_feature_reps)
predictions = classifier.predict(test_features_vectorized)
return predictions, goldlabels
def print_confusion_matrix(predictions, goldlabels):
'''
Function that prints out a confusion matrix
:param predictions: predicted labels
:param goldlabels: gold standard labels
:type predictions, goldlabels: list of strings
'''
#based on example from https://datatofish.com/confusion-matrix-python/
data = {'Gold': goldlabels, 'Predicted': predictions }
df = pd.DataFrame(data, columns=['Gold','Predicted'])
confusion_matrix = pd.crosstab(df['Gold'], df['Predicted'], rownames=['Gold'], colnames=['Predicted'])
print (confusion_matrix)
def print_precision_recall_fscore(predictions, goldlabels):
'''
Function that prints out precision, recall and f-score
:param predictions: predicted output by classifier
:param goldlabels: original gold labels
:type predictions, goldlabels: list of strings
'''
precision = metrics.precision_score(y_true=goldlabels,
y_pred=predictions,
average='macro')
recall = metrics.recall_score(y_true=goldlabels,
y_pred=predictions,
average='macro')
fscore = metrics.f1_score(y_true=goldlabels,
y_pred=predictions,
average='macro')
print('P:', precision, 'R:', recall, 'F1:', fscore)
#vectorizer and lr_classifier are the vectorizer and classifiers created in the previous cell.
#it is important that the same vectorizer is used for both training and testing: they should use the same mapping from values to dimensions
predictions, goldlabels = get_predicted_and_gold_labels_token_only(testfile, vectorizer, lr_classifier)
print_confusion_matrix(predictions, goldlabels)
print_precision_recall_fscore(predictions, goldlabels)
# -
# ## Step 3: A More Elaborate System
#
# Now that we have run a basic experiment, we are going to investigate alternatives. In this exercise, we only focus on features. We will continue to use the same logistic regression classifier throughout the exercise.
#
# We want to investigate the impact of individual features. We will thus use a function that allows us to specify whether we include a specific feature or not. The features we have at our disposal are:
#
# * the token itself (as used above)
# * the preceding token
# * the capitalization indication (see above for values that this takes)
# * the pos-tag of the token
# * the chunklabel of the chunk the token is part of
# +
# the functions with multiple features and analysis
#defines the column in which each feature is located (note: you can also define headers and use csv.DictReader)
feature_to_index = {'Token': 0, 'Prevtoken': 1, 'Cap': 2, 'Pos': 3, 'Chunklabel': 4}
def extract_features_and_gold_labels(conllfile, selected_features):
'''Function that extracts the selected features and gold labels from preprocessed conll.
:param conllfile: path to the (preprocessed) conll file
:param selected_features: names of the features to extract (keys of feature_to_index)
:type conllfile: string
:type selected_features: list of strings
:return features: a list of dictionaries, with key-value pairs providing the values of the selected features for individual instances
:return labels: a list of gold labels of individual instances
'''
features = []
labels = []
conllinput = open(conllfile, 'r')
#delimiter indicates we are working with a tab separated value (default is comma)
#quotechar has as default value '"', which is used to indicate the borders of a cell containing longer pieces of text
#in this file, we have only one token as text, but this token can be '"', which then messes up the format. We set quotechar to a character that does not occur in our file
csvreader = csv.reader(conllinput, delimiter='\t',quotechar='|')
for row in csvreader:
#I preprocessed the file so that all rows with instances should contain 6 values, the others are empty lines indicating the beginning of a sentence
if len(row) == 6:
#structuring feature value pairs as key-value pairs in a dictionary
#the first column in the conll file represents tokens
feature_value = {}
for feature_name in selected_features:
row_index = feature_to_index.get(feature_name)
feature_value[feature_name] = row[row_index]
features.append(feature_value)
#The last column provides the gold label (= the correct answer).
labels.append(row[-1])
return features, labels
def get_predicted_and_gold_labels(testfile, vectorizer, classifier, selected_features):
'''
Function that extracts features and runs classifier on a test file returning predicted and gold labels
:param testfile: path to the (preprocessed) test file
:param vectorizer: vectorizer in which the mapping between feature values and dimensions is stored
:param classifier: the trained classifier
:type testfile: string
:type vectorizer: DictVectorizer
:type classifier: LogisticRegression()
:return predictions: list of output labels provided by the classifier on the test file
:return goldlabels: list of gold labels as included in the test file
'''
#we use the same function as above (guarantees features have the same name and form)
features, goldlabels = extract_features_and_gold_labels(testfile, selected_features)
#we need to use the same fitting as before, so now we only transform the current features according to this mapping (using only transform)
test_features_vectorized = vectorizer.transform(features)
predictions = classifier.predict(test_features_vectorized)
return predictions, goldlabels
#define which from the available features will be used (names must match key names of dictionary feature_to_index)
all_features = ['Token','Prevtoken','Cap','Pos','Chunklabel']
sparse_feature_reps, labels = extract_features_and_gold_labels(trainfile, all_features)
#we can use the same function as before for creating the classifier and vectorizer
lr_classifier, vectorizer = create_vectorizer_and_classifier(sparse_feature_reps, labels)
#when applying our model to new data, we need to use the same features
predictions, goldlabels = get_predicted_and_gold_labels(testfile, vectorizer, lr_classifier, all_features)
print_confusion_matrix(predictions, goldlabels)
print_precision_recall_fscore(predictions, goldlabels)
# -
# ## Step 4: Feature Ablation Analysis
#
# If all worked well, the system that made use of all features worked better than the system with just the tokens.
# We now want to know which of the features contributed to this improvement: do we want to include all features?
# Or just some?
#
# We can investigate this using *feature ablation analysis*. This means that we systematically test what happens if we add or remove a specific feature. Ideally, we investigate all possible combinations.
#
# The cell below illustrates how you can use the code above to investigate a system with three features. You can modify the selected features to try out different combinations. You can either do this manually and rerun the cell, or write a function that creates a list of all the combinations you want to test and runs them one after the other.
#
# Include your results in the report of this week.
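# If you prefer generating the combinations programmatically rather than editing the cell by hand, a sketch along these lines (the function name is illustrative) enumerates every non-empty feature subset; each subset can then be passed to extract_features_and_gold_labels and create_vectorizer_and_classifier exactly as in the cell below.

```python
from itertools import combinations

all_features = ['Token', 'Prevtoken', 'Cap', 'Pos', 'Chunklabel']

def feature_subsets(features):
    """Yield every non-empty subset of the feature list, smallest first."""
    for k in range(1, len(features) + 1):
        for subset in combinations(features, k):
            yield list(subset)

subsets = list(feature_subsets(all_features))
print(len(subsets))  # 2**5 - 1 = 31 subsets to train and evaluate
```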
# +
# example of system with just one additional feature
#define which from the available features will be used (names must match key names of dictionary feature_to_index)
selected_features = ['Token','Prevtoken','Pos']
feature_values, labels = extract_features_and_gold_labels(trainfile, selected_features)
#we can use the same function as before for creating the classifier and vectorizer
lr_classifier, vectorizer = create_vectorizer_and_classifier(feature_values, labels)
#when applying our model to new data, we need to use the same features
predictions, goldlabels = get_predicted_and_gold_labels(testfile, vectorizer, lr_classifier, selected_features)
print_confusion_matrix(predictions, goldlabels)
print_precision_recall_fscore(predictions, goldlabels)
# -
# # Part 2: One-hot versus Embeddings
#
# In this second part of the exercise, we will compare results using one-hot encodings to pretrained word embeddings.
#
# ## One-hot representation
#
# In one-hot representation, each feature value is represented by an n-dimensional vector, where n corresponds to the number of possible values the feature can take. In our system, the Token feature can take the value of each token that occurs at least once in the corpus. This feature thus uses a vector with the size of the vocabulary in the corpus. Each possible value is associated with a specific dimension. If this value is represented, that dimension will receive the value 1 and all other dimensions will have the value 0.
#
# The system receives a concatenation of all feature representations as input.
#
#
# ## What does one-hot look like?
#
# We will start with an illustration of a one-hot representation. We will use the capitalization feature for this: it has 6 possible values and is therefore represented by a 6-dimensional vector. If you would like a more precise look, you may consider creating a toy example of a few lines, in which the capitalization feature has different values.
# +
# create classifier with caps feature only and print vectorizer, then with token only (but you see less)
selected_features = ['Cap']
feature_values, labels = extract_features_and_gold_labels(trainfile, selected_features)
#creating a vectorizer
vectorizer = DictVectorizer()
#fitting the values to dimensions (creating a mapping) and transforming the current observations according to this mapping
capitalization_vectorized = vectorizer.fit_transform(feature_values)
print(capitalization_vectorized.toarray())
# -
# ## Using word embeddings
#
# We are now going to use word embeddings to represent tokens. We load a pretrained distributional semantic model.
# You can use the same model as in Exercise 2.1; we also tested this exercise with that model (GoogleNews, negative sampling, 300 dimensions).
#
# Note: loading the model may take a while. You probably want to run that only once.
#
# this step takes a while
word_embedding_model = gensim.models.KeyedVectors.load_word2vec_format('../models/GoogleNews-vectors-negative300.bin.gz', binary=True)
# +
def extract_embeddings_as_features_and_gold(conllfile,word_embedding_model):
'''
Function that extracts features and gold labels using word embeddings
:param conllfile: path to conll file
:param word_embedding_model: a pretrained word embedding model
:type conllfile: string
:type word_embedding_model: gensim.models.keyedvectors.Word2VecKeyedVectors
:return features: list of vector representation of tokens
:return labels: list of gold labels
'''
labels = []
features = []
conllinput = open(conllfile, 'r')
csvreader = csv.reader(conllinput, delimiter='\t',quotechar='|')
for row in csvreader:
if len(row) == 6:
if row[0] in word_embedding_model:
vector = word_embedding_model[row[0]]
else:
vector = [0]*300
features.append(vector)
labels.append(row[-1])
return features, labels
def create_classifier(features, labels):
'''
Function that creates classifier from features represented as vectors and gold labels
:param features: list of vector representations of tokens
:param labels: list of gold labels
:type features: list of vectors
:type labels: list of strings
:returns trained logistic regression classifier
'''
lr_classifier = LogisticRegression(solver='saga')
lr_classifier.fit(features, labels)
return lr_classifier
def label_data_using_word_embeddings(testfile, word_embedding_model, classifier):
'''
Function that extracts word embeddings as features and gold labels from test data and runs a classifier
:param testfile: path to test file
:param word_embedding_model: distributional semantic model
:param classifier: trained classifier
:type testfile: string
:type word_embedding_model: gensim.models.keyedvectors.Word2VecKeyedVectors
:type classifier: LogisticRegression
:return predictions: list of predicted labels
:return labels: list of gold labels
'''
dense_feature_representations, labels = extract_embeddings_as_features_and_gold(testfile,word_embedding_model)
predictions = classifier.predict(dense_feature_representations)
return predictions, labels
# print announcements of where the code is (since some of these steps take a while)
print('Extracting dense features...')
dense_feature_representations, labels = extract_embeddings_as_features_and_gold(trainfile,word_embedding_model)
print('Training classifier....')
classifier = create_classifier(dense_feature_representations, labels)
print('Running evaluation...')
predicted, gold = label_data_using_word_embeddings(testfile, word_embedding_model, classifier)
print_confusion_matrix(predicted, gold)
print_precision_recall_fscore(predicted, gold)
# -
# ## Including the preceding token
#
# We can include the preceding token as a feature in a similar way. We simply concatenate the two vectors.
# +
def extract_embeddings_of_current_and_preceding_as_features_and_gold(conllfile,word_embedding_model):
'''
Function that extracts features and gold labels using word embeddings for current and preceding token
:param conllfile: path to conll file
:param word_embedding_model: a pretrained word embedding model
:type conllfile: string
:type word_embedding_model: gensim.models.keyedvectors.Word2VecKeyedVectors
:return features: list of vector representation of tokens
:return labels: list of gold labels
'''
labels = []
features = []
conllinput = open(conllfile, 'r')
csvreader = csv.reader(conllinput, delimiter='\t',quotechar='|')
for row in csvreader:
if len(row) == 6:
if row[0] in word_embedding_model:
vector1 = word_embedding_model[row[0]]
else:
vector1 = [0]*300
if row[1] in word_embedding_model:
vector2 = word_embedding_model[row[1]]
else:
vector2 = [0]*300
features.append(np.concatenate((vector1,vector2)))
labels.append(row[-1])
return features, labels
def label_data_using_word_embeddings_current_and_preceding(testfile, word_embedding_model, classifier):
'''
Function that extracts word embeddings as features (of current and preceding token) and gold labels from test data and runs a trained classifier
:param testfile: path to test file
:param word_embedding_model: distributional semantic model
:param classifier: trained classifier
:type testfile: string
:type word_embedding_model: gensim.models.keyedvectors.Word2VecKeyedVectors
:type classifier: LogisticRegression
:return predictions: list of predicted labels
:return labels: list of gold labels
'''
features, labels = extract_embeddings_of_current_and_preceding_as_features_and_gold(testfile,word_embedding_model)
predictions = classifier.predict(features)
return predictions, labels
print('Extracting dense features...')
features, labels = extract_embeddings_of_current_and_preceding_as_features_and_gold(trainfile,word_embedding_model)
print('Training classifier...')
#we can use the same function as for just the tokens itself
classifier = create_classifier(features, labels)
print('Running evaluation...')
predicted, gold = label_data_using_word_embeddings_current_and_preceding(testfile, word_embedding_model, classifier)
print_confusion_matrix(predicted, gold)
print_precision_recall_fscore(predicted, gold)
# -
# ## A mixed system
#
# The code below combines traditional features with word embeddings. Note that we only include features with a limited range of possible values. Combining one-hot token representations (using highly sparse dimensions) with dense representations is generally not a good idea.
# +
def extract_word_embedding(token, word_embedding_model):
'''
Function that returns the word embedding for a given token from a distributional semantic model, or a 300-dimensional vector of zeros if the token is not in the model
:param token: the token
:param word_embedding_model: the distributional semantic model
:type token: string
:type word_embedding_model: gensim.models.keyedvectors.Word2VecKeyedVectors
:returns a vector representation of the token
'''
if token in word_embedding_model:
vector = word_embedding_model[token]
else:
vector = [0]*300
return vector
def extract_feature_values(row, selected_features):
'''
Function that extracts feature value pairs from row
:param row: row from conll file
:param selected_features: list of selected features
:type row: string
:type selected_features: list of strings
:returns: dictionary of feature value pairs
'''
feature_values = {}
for feature_name in selected_features:
r_index = feature_to_index.get(feature_name)
feature_values[feature_name] = row[r_index]
return feature_values
def create_vectorizer_traditional_features(feature_values):
'''
Function that creates vectorizer for set of feature values
:param feature_values: list of dictionaries containing feature-value pairs
:type feature_values: list of dictionaries (keys and values are strings)
:returns: vectorizer with feature values fitted
'''
vectorizer = DictVectorizer()
vectorizer.fit(feature_values)
return vectorizer
def combine_sparse_and_dense_features(dense_vectors, sparse_features):
'''
Function that takes sparse and dense feature representations and appends their vector representation
:param dense_vectors: list of dense vector representations
:param sparse_features: list of sparse vector representations
:type dense_vector: list of arrays
:type sparse_features: list of lists
:returns: list of arrays in which sparse and dense vectors are concatenated
'''
combined_vectors = []
sparse_vectors = np.array(sparse_features.toarray())
for index, vector in enumerate(sparse_vectors):
combined_vector = np.concatenate((vector,dense_vectors[index]))
combined_vectors.append(combined_vector)
return combined_vectors
def extract_traditional_features_and_embeddings_plus_gold_labels(conllfile, word_embedding_model, vectorizer=None):
'''
Function that extracts traditional features as well as embeddings and gold labels using word embeddings for current and preceding token
:param conllfile: path to conll file
:param word_embedding_model: a pretrained word embedding model
:type conllfile: string
:type word_embedding_model: gensim.models.keyedvectors.Word2VecKeyedVectors
:return features: list of vector representation of tokens
:return labels: list of gold labels
'''
labels = []
dense_vectors = []
traditional_features = []
conllinput = open(conllfile, 'r')
csvreader = csv.reader(conllinput, delimiter='\t',quotechar='|')
for row in csvreader:
if len(row) == 6:
token_vector = extract_word_embedding(row[0], word_embedding_model)
pt_vector = extract_word_embedding(row[1], word_embedding_model)
dense_vectors.append(np.concatenate((token_vector,pt_vector)))
#mixing very sparse representations (for one-hot tokens) and dense representations is a bad idea
#we thus only use other features with limited values
other_features = extract_feature_values(row, ['Cap','Pos','Chunklabel'])
traditional_features.append(other_features)
#adding gold label to labels
labels.append(row[-1])
#create vector representation of traditional features
if vectorizer is None:
#creates vectorizer that provides mapping (only if not created earlier)
vectorizer = create_vectorizer_traditional_features(traditional_features)
sparse_features = vectorizer.transform(traditional_features)
combined_vectors = combine_sparse_and_dense_features(dense_vectors, sparse_features)
return combined_vectors, vectorizer, labels
def label_data_with_combined_features(testfile, classifier, vectorizer, word_embedding_model):
'''
Function that labels data with model using both sparse and dense features
'''
feature_vectors, vectorizer, goldlabels = extract_traditional_features_and_embeddings_plus_gold_labels(testfile, word_embedding_model, vectorizer)
predictions = classifier.predict(feature_vectors)
return predictions, goldlabels
print('Extracting Features...')
feature_vectors, vectorizer, gold_labels = extract_traditional_features_and_embeddings_plus_gold_labels(trainfile, word_embedding_model)
print('Training classifier....')
lr_classifier = create_classifier(feature_vectors, gold_labels)
print('Running the evaluation...')
predictions, goldlabels = label_data_with_combined_features(testfile, lr_classifier, vectorizer, word_embedding_model)
print_confusion_matrix(predictions, goldlabels)
print_precision_recall_fscore(predictions, goldlabels)
| code/assignment3/sample_code_features_ablation_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="z2VHzhy6-YVW" colab_type="text"
# # Double Layer Network Example
# + [markdown] id="zv1yWJVN-duo" colab_type="text"
# Note: This notebook is designed to run with Python3 and CPU (no GPU) runtime.
#
# 
# + [markdown] id="mzChqMYe_a2e" colab_type="text"
# This notebook uses TensorFlow 2.x.
# + id="6dTIqB-q88xc" colab_type="code" outputId="48451d30-a038-4303-f195-d30e50fcb49d" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %tensorflow_version 2.x
# + [markdown] id="VJO3PPzqsq8d" colab_type="text"
# ####[DNE-01]
# Import modules and set a random seed.
# + id="gB5UUoAXIVmC" colab_type="code" colab={}
import numpy as np
from numpy.random import multivariate_normal, permutation
import pandas as pd
from pandas import DataFrame
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models, initializers
np.random.seed(20190830)
tf.random.set_seed(20190703)
# + [markdown] id="yz2h7_8St1wi" colab_type="text"
# ####[DNE-02]
# Generate a training dataset.
# + id="ASgzWK5AjWvn" colab_type="code" colab={}
def generate_datablock(n, mu, var, t):
data = multivariate_normal(mu, np.eye(2)*var, n)
df = DataFrame(data, columns=['x1', 'x2'])
df['t'] = t
return df
df0 = generate_datablock(30, [-7, -7], 18, 1)
df1 = generate_datablock(30, [-7, 7], 18, 0)
df2 = generate_datablock(30, [7, -7], 18, 0)
df3 = generate_datablock(30, [7, 7], 18, 1)
df = pd.concat([df0, df1, df2, df3], ignore_index=True)
train_set = df.reindex(permutation(df.index)).reset_index(drop=True)
# + [markdown] id="GeoERRDrryKF" colab_type="text"
# ####[DNE-03]
# Store the coordinates $(x_1,x_2)$ and label values $t$ into NumPy arrays.
# + id="Sb0P6z0h4nMF" colab_type="code" colab={}
train_x = train_set[['x1', 'x2']].values
train_t = train_set['t'].values
# + [markdown] id="qdQ0Tp2IvFy8" colab_type="text"
# ####[DNE-04]
# Define a model for the binary classification using two hidden layers.
#
# Weights in the hidden layers are initialized with random values.
# + id="tpL_niBTXggS" colab_type="code" outputId="c5dc55b4-730f-46fa-886a-20100d1664fd" colab={"base_uri": "https://localhost:8080/", "height": 255}
model = models.Sequential()
model.add(layers.Dense(2, activation='tanh',input_shape=(2,),
kernel_initializer=initializers.TruncatedNormal(),
name='hidden1'))
model.add(layers.Dense(2, activation='tanh',
kernel_initializer=initializers.TruncatedNormal(),
name='hidden2'))
model.add(layers.Dense(1, activation='sigmoid',
name='output'))
model.summary()
# + [markdown] id="fBltXsSRvZn0" colab_type="text"
# ####[DNE-05]
# Compile the model using the Adam optimizer and cross entropy as the loss function.
# + id="BakcuKxdQoSL" colab_type="code" colab={}
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['acc'])
# + [markdown] id="r8LmLDYO_Vf7" colab_type="text"
# ####[DNE-06]
# Train the model.
# + id="LlQCTsKKXkr5" colab_type="code" outputId="7ac2de88-4b5b-415a-8a99-8525eb1c8ffd" colab={"base_uri": "https://localhost:8080/", "height": 88}
history = model.fit(train_x, train_t,
batch_size=len(train_set), epochs=2000, verbose=0)
# + [markdown] id="gqLORk3JsMmb" colab_type="text"
# ####[DNE-07]
# Plot charts for the accuracy and loss values.
# + id="5HYXAt_0vXwr" colab_type="code" outputId="c8b86491-2400-4f07-e253-c89c0787a154" colab={"base_uri": "https://localhost:8080/", "height": 538}
DataFrame({'acc': history.history['acc']}).plot(grid=False)
DataFrame({'loss': history.history['loss']}).plot(grid=False)
# + [markdown] id="MpSePiAysZHn" colab_type="text"
# ####[DNE-08]
# Plot charts for the final result.
# + id="jY2TeEWCwxix" colab_type="code" outputId="228fa6c4-b2fb-4eee-dcf5-e314b3a28384" colab={"base_uri": "https://localhost:8080/", "height": 449}
train_set1 = train_set[train_set['t']==1]
train_set2 = train_set[train_set['t']==0]
fig = plt.figure(figsize=(7, 7))
subplot = fig.add_subplot(1, 1, 1)
subplot.set_ylim([-15, 15])
subplot.set_xlim([-15, 15])
subplot.scatter(train_set1.x1, train_set1.x2, marker='x')
subplot.scatter(train_set2.x1, train_set2.x2, marker='o')
locations = [[x1, x2] for x2 in np.linspace(-15, 15, 100)
for x1 in np.linspace(-15, 15, 100)]
p_vals = model.predict(np.array(locations)).reshape((100, 100))
subplot.imshow(p_vals, origin='lower', extent=(-15, 15, -15, 15),
cmap=plt.cm.gray_r, alpha=0.5)
| Chapter03/4. Double layer network example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Fashion MNIST
# ### After downloading our dataset, we see it's encoded in the IDX ubyte format
# - We then use the following function to read the data and return it as a numpy array
# +
import struct
import numpy as np
def read_idx(filename):
"""Credit: https://gist.github.com/tylerneylon"""
with open(filename, 'rb') as f:
zero, data_type, dims = struct.unpack('>HBB', f.read(4))
shape = tuple(struct.unpack('>I', f.read(4))[0] for d in range(dims))
return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)
# -
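Before pointing `read_idx` at the real files, the header layout can be sanity-checked on a synthetic IDX file: two zero bytes, a type byte (0x08 = uint8), a dimension count, one big-endian uint32 per dimension, then the raw data. The function is re-declared here so the sketch runs standalone:

```python
import struct
import tempfile

import numpy as np

def read_idx(filename):
    """Credit: https://gist.github.com/tylerneylon"""
    with open(filename, 'rb') as f:
        zero, data_type, dims = struct.unpack('>HBB', f.read(4))
        shape = tuple(struct.unpack('>I', f.read(4))[0] for d in range(dims))
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)

# Build a synthetic 2x3 uint8 IDX file and read it back.
payload = np.arange(6, dtype=np.uint8)
header = struct.pack('>HBB', 0, 0x08, 2) + struct.pack('>II', 2, 3)
with tempfile.NamedTemporaryFile(suffix='.idx', delete=False) as tmp:
    tmp.write(header + payload.tobytes())
    path = tmp.name

arr = read_idx(path)
print(arr.shape)  # (2, 3)
```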
# ### We use the function to extract our training and test datasets
x_train = read_idx("./fashion_mnist/train-images-idx3-ubyte")
y_train = read_idx("./fashion_mnist/train-labels-idx1-ubyte")
x_test = read_idx("./fashion_mnist/t10k-images-idx3-ubyte")
y_test = read_idx("./fashion_mnist/t10k-labels-idx1-ubyte")
# ### Let's inspect our dataset
# +
# printing the number of samples in x_train, x_test, y_train, y_test
print("Initial shape or dimensions of x_train", str(x_train.shape))
print ("Number of samples in our training data: " + str(len(x_train)))
print ("Number of labels in our training data: " + str(len(y_train)))
print ("Number of samples in our test data: " + str(len(x_test)))
print ("Number of labels in our test data: " + str(len(y_test)))
print()
print ("Dimensions of x_train:" + str(x_train[0].shape))
print ("Labels in x_train:" + str(y_train.shape))
print()
print ("Dimensions of x_test:" + str(x_test[0].shape))
print ("Labels in y_test:" + str(y_test.shape))
# -
# ### Let's view some sample images
# +
# Let's do the same thing but using matplotlib to plot 6 images
import matplotlib.pyplot as plt
# Plots 6 images; note subplot's arguments are nrows, ncols, index
# we set the color map to grey since our image dataset is grayscale
plt.subplot(331)
random_num = np.random.randint(0,len(x_train))
plt.imshow(x_train[random_num], cmap=plt.get_cmap('gray'))
plt.subplot(332)
random_num = np.random.randint(0,len(x_train))
plt.imshow(x_train[random_num], cmap=plt.get_cmap('gray'))
plt.subplot(333)
random_num = np.random.randint(0,len(x_train))
plt.imshow(x_train[random_num], cmap=plt.get_cmap('gray'))
plt.subplot(334)
random_num = np.random.randint(0,len(x_train))
plt.imshow(x_train[random_num], cmap=plt.get_cmap('gray'))
plt.subplot(335)
random_num = np.random.randint(0,len(x_train))
plt.imshow(x_train[random_num], cmap=plt.get_cmap('gray'))
plt.subplot(336)
random_num = np.random.randint(0,len(x_train))
plt.imshow(x_train[random_num], cmap=plt.get_cmap('gray'))
# Display our plots
plt.show()
# -
# ### Let's create our model
# +
from keras.datasets import mnist
from keras.utils import np_utils
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras import backend as K
# Training Parameters
batch_size = 128
epochs = 1
# Let's store the number of rows and columns
img_rows = x_train[0].shape[0]
img_cols = x_train[0].shape[1]
# Getting our data in the right 'shape' needed for Keras
# We need to add a 4th dimension to our data, thereby changing our
# original image shape of (60000,28,28) to (60000,28,28,1)
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
# store the shape of a single image
input_shape = (img_rows, img_cols, 1)
# change our image type to float32 data type
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalize our data by changing the range from (0 to 255) to (0 to 1)
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Now we one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
# Let's count the number columns in our hot encoded matrix
print ("Number of Classes: " + str(y_test.shape[1]))
num_classes = y_test.shape[1]
num_pixels = x_train.shape[1] * x_train.shape[2]
# create model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = keras.optimizers.Adadelta(),
metrics = ['accuracy'])
print(model.summary())
# -
# ### Let's train our model
# +
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# -
# ### Let's test out our model
# +
import cv2
import numpy as np
def getLabel(input_class):
number = int(input_class)
if number == 0:
return "T-shirt/top "
if number == 1:
return "Trouser"
if number == 2:
return "Pullover"
if number == 3:
return "Dress"
if number == 4:
return "Coat"
if number == 5:
return "Sandal"
if number == 6:
return "Shirt"
if number == 7:
return "Sneaker"
if number == 8:
return "Bag"
if number == 9:
return "Ankle boot"
def draw_test(name, pred, actual, input_im):
BLACK = [0,0,0]
res = getLabel(pred)
actual = getLabel(actual)
expanded_image = cv2.copyMakeBorder(input_im, 0, 0, 0, 4*imageL.shape[0] ,cv2.BORDER_CONSTANT,value=BLACK)
expanded_image = cv2.cvtColor(expanded_image, cv2.COLOR_GRAY2BGR)
cv2.putText(expanded_image, "Predicted - " + str(res), (152, 70) , cv2.FONT_HERSHEY_COMPLEX_SMALL,1, (0,255,0), 1)
cv2.putText(expanded_image, " Actual - " + str(actual), (152, 90) , cv2.FONT_HERSHEY_COMPLEX_SMALL,1, (0,0,255), 1)
cv2.imshow(name, expanded_image)
for i in range(0,10):
rand = np.random.randint(0,len(x_test))
input_im = x_test[rand]
actual = y_test[rand].argmax(axis=0)
imageL = cv2.resize(input_im, None, fx=4, fy=4, interpolation = cv2.INTER_CUBIC)
input_im = input_im.reshape(1,28,28,1)
## Get Prediction
res = str(model.predict_classes(input_im, 1, verbose = 0)[0])
draw_test("Prediction", res, actual, imageL)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
| 13. Building LeNet and AlexNet in Keras/13.4 Fashion MNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="I8B-quBwqVVD" colab_type="code" colab={}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten,Conv2D, MaxPooling2D
from keras.optimizers import Adam,SGD
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
# Disable Tensorflow Warnings
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# + id="aHSTJltNqVVJ" colab_type="code" outputId="bcd8f746-8988-4e84-a85d-a8f23395404e" colab={"base_uri": "https://localhost:8080/", "height": 85}
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("X_train original shape", X_train.shape)
print("y_train original shape", y_train.shape)
print("X_test original shape", X_test.shape)
print("y_test original shape", y_test.shape)
# + id="qF8ShUAyqVVQ" colab_type="code" outputId="974c1533-cec1-4d50-cd02-1c36ea68b2a8" colab={"base_uri": "https://localhost:8080/", "height": 298}
plt.imshow(X_train[0], cmap='gray')
plt.title('Class '+ str(y_train[0]))
# + id="eOyrpeAREwQp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="b9cfdf61-165f-4fed-86c7-b49f65e09edc"
print(f"X-train Shape:{X_train.shape}")
print(f"y-train Shape:{y_train.shape}")
# + id="aqYPvvrdqVVT" colab_type="code" outputId="1f90ce09-6979-4c72-f44c-a898b62f019a" colab={"base_uri": "https://localhost:8080/", "height": 34}
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# Normalizing the Pixel Intensity (0-255 => 0-1)
X_train/=255
X_test/=255
print(f"X-train Shape(after reshape):{X_train.shape}")
# + id="SvBkgZO6qVVX" colab_type="code" outputId="96d6570f-e5a2-497d-fe16-a1fa0f6cfc74" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Number of Output Classes
number_of_classes = 10
# Convert to One-hot Vector
Y_train = np_utils.to_categorical(y_train, number_of_classes)
Y_test = np_utils.to_categorical(y_test, number_of_classes)
print(f"Before:{y_train[0]},After: {Y_train[0]}")
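The `to_categorical` call above can be mirrored with plain numpy, which makes the one-hot layout explicit (a sketch of the idea, not the Keras implementation):

```python
import numpy as np

# One-hot encode integer labels by indexing rows of an identity matrix.
labels = np.array([5, 0, 4])
one_hot = np.eye(10)[labels]
print(one_hot.shape)  # (3, 10)
```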
# + id="3T0vzZmfqVVa" colab_type="code" outputId="816df7b1-62f3-4e74-a799-b634bc7895fa" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Three steps to Convolution
# 1. Convolution
# 2. Activation
# 3. Pooling
# Repeat Steps 1,2,3 for adding more hidden layers
# 4. After that, add a fully connected network
# This fully connected network gives the CNN the ability
# to classify the samples
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
# Fully connected layer
model.add(BatchNormalization())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
# + id="ch06DtDIqVVf" colab_type="code" outputId="53acd694-6660-40b5-ca68-854aef138320" colab={"base_uri": "https://localhost:8080/", "height": 527}
model.summary()
# + id="aXiVTu9eqVVj" colab_type="code" colab={}
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
# + id="wzZww8QOqVVt" colab_type="code" outputId="8933d606-50e3-4478-85a6-0a1da00ab83d" colab={"base_uri": "https://localhost:8080/", "height": 187}
model.fit(X_train, Y_train, batch_size=128, epochs=3, validation_data=(X_test, Y_test))
# + id="Mkk66ViwqVVw" colab_type="code" outputId="c3ebfcfb-015f-4eb0-b690-7c1988bcc361" colab={"base_uri": "https://localhost:8080/", "height": 68}
score = model.evaluate(X_test, Y_test)
print()
print('Test accuracy: ', score[1])
# + id="I7l8TOffqVWI" colab_type="code" colab={}
| ANN and CNN/CNN_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PaddlePaddle 2.1.0 (Python 3.5)
# language: python
# name: py35-paddle1.2.0
# ---
from paddle.vision.transforms import Compose, Normalize
import paddle
import paddle.nn.functional as F
import numpy as np
from paddle.metric import Accuracy
import random
from paddle import fluid
from visualdl import LogWriter
log_writer = LogWriter("data/data65")  # log writer
transform = Compose([Normalize(mean=[127.5],
std=[127.5],
data_format='CHW')])
# +
train_dataset = paddle.vision.datasets.MNIST(mode='train', transform=transform)
test_dataset = paddle.vision.datasets.MNIST(mode='test', transform=transform)
# +
class InceptionA(paddle.nn.Layer):  # used as a single layer of the network
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch3x3_1 = paddle.nn.Conv2D(in_channels, 16, kernel_size=1)  # first branch
        self.branch3x3_2 = paddle.nn.Conv2D(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = paddle.nn.Conv2D(24, 24, kernel_size=3, padding=1)
        self.branch5x5_1 = paddle.nn.Conv2D(in_channels, 16, kernel_size=1)  # second branch
        self.branch5x5_2 = paddle.nn.Conv2D(16, 24, kernel_size=5, padding=2)
        self.branch1x1 = paddle.nn.Conv2D(in_channels, 16, kernel_size=1)  # third branch
        self.branch_pool = paddle.nn.Conv2D(in_channels, 24, kernel_size=1)  # fourth branch
    def forward(self, x):
        # branch 1
        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)
        # branch 2
        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)
        # branch 3
        branch1x1 = self.branch1x1(x)
        # branch 4
        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)
        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]  # gather the four branch outputs
        return fluid.layers.concat(outputs, axis=1)  # concatenate along channels: 24+24+16+24 = 88 in total
class Net(paddle.nn.Layer):  # conv, pool, inception, conv, pool, inception, fully connected
    def __init__(self):
        super(Net, self).__init__()
        # define two convolutional layers
        self.conv1 = paddle.nn.Conv2D(1, 10, kernel_size=5)
        self.conv2 = paddle.nn.Conv2D(88, 20, kernel_size=5)
        # both Inception modules output 88 channels
        self.incep1 = InceptionA(in_channels=10)
        self.incep2 = InceptionA(in_channels=20)
        self.mp = paddle.nn.MaxPool2D(2)
        self.fc = paddle.nn.Linear(1408, 10)  # image height * width * channels
    def forward(self, x):
        x = F.relu(self.mp(self.conv1(x)))  # conv + pool + relu: output is 14*14*10
        x = self.incep1(x)                  # 14*14*88
        x = F.relu(self.mp(self.conv2(x)))  # conv + pool + relu: output is 5*5*20
        x = self.incep2(x)                  # 5*5*88
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.fc(x)
        return x
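The four InceptionA branches are concatenated along the channel axis to give 24+24+16+24 = 88 channels. Dummy numpy arrays in the same (N, C, H, W) layout (standing in for the framework tensors) confirm the arithmetic:

```python
import numpy as np

# Shapes follow (N, C, H, W); branch channel counts match InceptionA above.
n, h, w = 1, 14, 14
branches = [np.zeros((n, c, h, w)) for c in (16, 24, 24, 24)]
out = np.concatenate(branches, axis=1)
print(out.shape)  # (1, 88, 14, 14)
```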
# +
model = paddle.Model(Net())  # wrap the model with the high-level API
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())  # Adam optimizer
# configure the model
model.prepare(
    optim,
    paddle.nn.CrossEntropyLoss(),
    Accuracy()
)
# train the model
model.fit(train_dataset, epochs=2, batch_size=64, verbose=1)
# evaluate
model.evaluate(test_dataset, batch_size=64, verbose=1)
# -
def train(model, Batch_size=64):
    train_loader = paddle.io.DataLoader(train_dataset, batch_size=Batch_size, shuffle=True)
    model.train()
    iterator = 0
    epochs = 10
    total_steps = (int(50000 // Batch_size) + 1) * epochs
    lr = paddle.optimizer.lr.PolynomialDecay(learning_rate=0.01, decay_steps=total_steps, end_lr=0.001)
    # use Adam with the polynomial-decay schedule as the optimizer
    optim = paddle.optimizer.Adam(learning_rate=lr, parameters=model.parameters())
    for epoch in range(epochs):
        for batch_id, data in enumerate(train_loader()):
            x_data = data[0]
            y_data = data[1]
            predicts = model(x_data)
            # compute the loss
            loss = F.cross_entropy(predicts, y_data)
            acc = paddle.metric.accuracy(predicts, y_data)
            loss.backward()
            if batch_id % 200 == 0:
                print("epoch: {}, batch_id: {}, loss is: {}, acc is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
                log_writer.add_scalar(tag='acc', step=iterator, value=acc.numpy())
                log_writer.add_scalar(tag='loss', step=iterator, value=loss.numpy())
                iterator += 200
            optim.step()
            optim.clear_grad()
        paddle.save(model.state_dict(), './data/checkpoint/mnist_epoch{}'.format(epoch) + '.pdparams')
        paddle.save(optim.state_dict(), './data/checkpoint/mnist_epoch{}'.format(epoch) + '.pdopt')
# +
def test(model):
    # load the test dataset
    test_loader = paddle.io.DataLoader(test_dataset, places=paddle.CPUPlace(), batch_size=64)
    model.eval()
    for batch_id, data in enumerate(test_loader()):
        x_data = data[0]
        y_data = data[1]
        predicts = model(x_data)
        # get predictions and metrics
        loss = F.cross_entropy(predicts, y_data)
        acc = paddle.metric.accuracy(predicts, y_data)
        if batch_id % 20 == 0:
            print("batch_id: {}, loss is: {}, acc is: {}".format(batch_id, loss.numpy(), acc.numpy()))
# -
def random_test(model, num=100):
    select_id = random.sample(range(1, 10000), num)  # indices of `num` random test images
    test_loader = paddle.io.DataLoader(test_dataset, places=paddle.CPUPlace(), batch_size=64)
    for batch_id, data in enumerate(test_loader()):
        x_data = data[0]
        label = data[1]
        predicts = model(x_data)
        # report the accuracy
        acc = paddle.metric.accuracy(predicts, label)
        print("Accuracy: {}".format(acc.numpy()))
# +
if __name__ == '__main__':
model = Net()
train(model)
test(model)
random_test(model)
| examples/minist_paddle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: miniconda3
# language: python
# name: miniconda3
# ---
# +
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import seaborn as sns
from sequencing_tools.viz_tools import okabeito_palette, color_encoder, simpsons_palette
from sequencing_tools.stats_tools import p_adjust
from scipy.special import ndtr
from collections import defaultdict
from sequencing_tools.fastq_tools import reverse_complement
from sequencing_tools.bam_tools import get_strand
import RNA
from multiprocessing import Pool
import random
import pysam
import glob
import re
from pybedtools import BedTool
import mappy as mp
from plotting_utils import figure_path
from matplotlib import rcParams
from peak_utils import *
plt.rc('axes', labelsize=15)
plt.rc('xtick', labelsize = 15)
plt.rc('ytick', labelsize = 15)
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']
# -
project_path = '/stor/work/Lambowitz/cdw2854/cfNA/tgirt_map'
peak_path = project_path + '/CLAM/bed_files/peaks/annotated/'
peak_tsv = peak_path + '/unfragmented.bed'
peak_df = load_peaks(peak_tsv) \
.assign(sense_gtype = lambda d: np.where(d.sense_gtype == ".", 'Unannotated', d.sense_gtype))\
.assign(antisense_gtype = lambda d: np.where(d.antisense_gtype == ".", 'Unannotated', d.antisense_gtype)) \
.sort_values('pileup', ascending=False)
peak_df.head()
peak_df \
.query('pileup >= %i & sample_count >= %i ' %(pileup_cutoff, sample_cutoff))\
.assign(seq = lambda d: list(map(fetch_seq, d.chrom, d.start, d.end, d.strand)))\
.assign(is_mt = lambda d: d.seq.map(is_mt)) \
.query('is_mt =="not_MT"').shape
# +
from sequencing_tools.fastq_tools.cutadapt_align import Aligner
import pysam
pcr_r1 = 'CTACACGTTCAGAGTTCTACAGTCCGACGATC'
pcr_r2 = 'GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT'
r1_searcher = Aligner(pcr_r1, max_error_rate = 0.1)
r2_searcher = Aligner(pcr_r2, max_error_rate = 0.1)
def is_pcr_artifacts(chrom, start, end , strand):
if strand == '+':
seq = fa.fetch(chrom, end, end + 10)
elif strand == '-':
seq = fa.fetch(chrom, start - 10, start)
seq = reverse_complement(seq)
seq = seq.upper()
r1_check = r1_searcher.locate(seq)
r2_check = r2_searcher.locate(seq)
if r1_check and r1_check[-2] > 3:
return 'R1'
elif r2_check and r2_check[-2] > 3:
return 'R2'
else:
return 'No'
is_pcr_artifacts('chr19',2271477,2271501,'-')
# -
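`reverse_complement` above is imported from `sequencing_tools`; a minimal stand-in with the assumed behaviour (complement each base, then reverse the sequence) is:

```python
# Minimal sketch of reverse complementation; N maps to N, case is preserved.
COMPLEMENT = str.maketrans('ACGTNacgtn', 'TGCANtgcan')

def reverse_complement(seq: str) -> str:
    # Complement each base, then reverse.
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement('AAC'))  # GTT
```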
ce = color_encoder()
colors = simpsons_palette()
#colors.extend(['black','white'])
random.seed(12)
colors = random.sample(colors, k = len(peak_df.sense_gtype.unique()))
ce.fit(peak_df.sense_gtype, colors)
ce.encoder = {'Long RNA': '#370335',
'RBP': '#91331F',
'Repeats': '#197EC0',
'Unannotated': '#46732E',
'miRNA': '#FD7446',
'misc RNA': '#FD8CC1',
'piRNA': '#D5E4A2',
'snRNA': '#8A9197',
'tRNA':'black',
'tRF3':'black',
'tRF5':'black',
'snoRNA': '#FED439'}
# peak_df\
# .assign(peak_count = 1)\
# .groupby(['sense_gtype', 'pileup'], as_index=False)\
# .agg({'peak_count':'sum'}) \
# .sort_values('pileup')\
# .reset_index() \
# .assign(cum_count = lambda d: d.groupby('sense_gtype').peak_count.cumsum())\
# .assign(log_pile = lambda d: d.pileup.transform(np.log10))\
# .query('sense_gtype == "Long RNA"')
peak_df\
.query('sense_gtype == "Long RNA"')\
.sort_values('pileup', ascending=False)\
.query('pileup >= %i & sample_count >= %i' %(pileup_cutoff, 8))\
.assign(seq = lambda d: list(map(fetch_seq, d.chrom, d.start, d.end, d.strand)))\
.assign(is_mt = lambda d: d.seq.map(is_mt))
pd.DataFrame({'gt':peak_df.sense_gtype,
'col':peak_df.sense_gtype.map(ce.encoder)}).drop_duplicates()
# +
fig = plt.figure(figsize=(10,10))
strand_ax = fig.add_axes([-0.1, 0.5, 0.45, 0.45])
pie_ax = fig.add_axes([0.6, 0.5, 0.5, 0.5])
rbp_ax = fig.add_axes([-0.1, 0, 0.35, 0.5])
long_ax = fig.add_axes([0.38, 0, 0.35, 0.5])
misc_ax = fig.add_axes([0.84, 0, 0.35, 0.5])
top_n = 15
plot_peak_strand(peak_df, strand_ax)
sense_peaks = peak_df.query('is_sense == "Sense"')
plot_peak_pie(sense_peaks, pie_ax, ce)
plot_RNA(sense_peaks, misc_ax, ce, rnatype='Repeats', top_n = top_n)
rbp_df = plot_rbp(sense_peaks, rbp_ax, ce, top_n = top_n)
plot_long_RNA_peak(peak_df, long_ax, ce, top_n = top_n, y_val = 'log10p')
l1 = mlines.Line2D([0.3,0.85],[0.9,0.955], color= 'black',
figure = fig, transform=fig.transFigure)
l2 = mlines.Line2D([0.3,0.7],[0.58,0.51], color= 'black',
figure = fig, transform=fig.transFigure)
fig.lines.extend([l1, l2])
figure_name = figure_path + '/peak_figure.pdf'
fig.savefig(figure_name, bbox_inches = 'tight')
print('Saved:', figure_name)
# -
peak_df\
.query('sense_gtype == "snoRNA"')\
.query('pileup >= %i & sample_count > %i' %(pileup_cutoff, sample_cutoff))\
.sort_values('log10p', ascending=False)\
.assign(seq = lambda d: list(map(fetch_seq, d.chrom, d.start, d.end, d.strand)))\
.assign(is_mt = lambda d: d.seq.map(is_mt))
sns.jointplot(peak_df.pileup.transform(np.log),
peak_df.sample_count)
','.join(rbp_df.head(15).index)
peak_df \
.query('pileup >= %i & sample_count > %i' %(pileup_cutoff, sample_cutoff))\
.query('sense_gtype == "RBP"')
peak_df\
.query('pileup >= %i' %pileup_cutoff)\
.to_csv(peak_path + '/peaks.tsv',sep='\t', index=False)
# +
ax = plt.subplot()
pdf = peak_df\
.pipe(lambda d: d[~d.sense_gtype.str.contains('tRF')])\
.query('pileup >= %i' %pileup_cutoff)\
.assign(peak_width = lambda d: d.end-d.start)\
.assign(log_pile = lambda d: d.pileup.transform(np.log10))
pdf.plot.scatter('peak_width','pileup',
color = ce.transform(pdf.sense_gtype), ax = ax,
alpha = 0.2)
ax.set_xscale('log')
ax.set_yscale('log')
ce.show_legend(ax = ax, bbox_to_anchor =(1,1), frameon=False)
sns.despine()
# +
ax = plt.subplot()
for gt, gd in peak_df\
.query('pileup >= %i' %(pileup_cutoff))\
.assign(peak_width = lambda d: np.log10(d.end-d.start))\
.groupby('sense_gtype'):
alpha = 1 if gt in ["Long RNA"] else 0.15
sns.distplot(gd.peak_width, ax = ax, kde_kws={'alpha':alpha},
label = gt, color = ce.encoder[gt],
hist=False)
lgd = ax.legend(frameon=False)
for lh in lgd.legendHandles:
lh.set_alpha(1)
ax.set_ylabel('Density')
ax.set_xlabel('Peak width ($log_{10}$ nt)')
x_range = np.arange(1,4, 0.5)
ax.set_xlim(x_range.min(), x_range.max())
ax.set_xticks(x_range)
for xt, x in zip(ax.get_xticklabels(), x_range):
xt.set_text(r'$10^{%s}$' %(x))
# +
fig = plt.figure(figsize=(10,8))
cov_ax = fig.add_subplot(221)
number_ax = fig.add_subplot(223)
dist_cov_ax = fig.add_subplot(222)
#peak_annotation_ax = fig.add_subplot(224)
plot_peak_coverage(peak_df, cov_ax)
#plot_cov_density(peak_df, dist_cov_ax)
plot_peak_cum_cov(peak_df, dist_cov_ax)
plot_peak_number(peak_df, number_ax, ce)
#### add hepG2
#combined_peaks = pd.concat([peak_df.assign(annotation = 'K562'),
# hep_peak_df.assign(annotation = 'K562 + HepG2')])
#plot_peak_bar(peak_annotation_ax, combined_peaks)
fig.tight_layout()
figurename = figure_path + '/peak_qc.pdf'
fig.savefig(figurename, bbox_inches = 'tight')
print('Plotted: ', figurename)
# -
anti_peaks = peak_df.query('is_sense == "Antisense"')
fig = plt.figure(figsize=(10,5))
ax = fig.add_axes([0,0.1,0.4,0.8])
plot_peak_pie(anti_peaks, ax, ce, gtype='antisense_gtype')
ax = fig.add_axes([0.7, 0, 0.4, 1])
anti_plot = anti_peaks.nlargest(15, 'log10p')
anti_plot.plot\
.bar('antisense_gname', 'log10p',
color = ce.transform(anti_plot\
.antisense_gtype),
ax = ax)
ax.legend().set_visible(False)
ax.set_xlabel('RNA type')
ax.set_ylabel('-$log_{10}$ p-value')
sns.despine()
fig.tight_layout()
figurename = figure_path + '/peak_anti.pdf'
fig.savefig(figurename, bbox_inches = 'tight')
print('Plotted: ', figurename)
# +
bam_path = '/stor/work/Lambowitz/cdw2854/cell_Free_nucleotides/tgirt_map/merged_bam'
ref_path = '/stor/work/Lambowitz/ref/hg19'
tracks = {'DNase I': bam_path + '/unfragmented.bam',
'NaOH': bam_path + '/alkaline_hydrolysis.bam',
'sncRNA': ref_path + '/new_genes/sncRNA_viz.bed',
'Protein': ref_path + '/new_genes/genes.bed12.bed'}
genome = ref_path + '/genome/hg19_genome.fa'
def color_func(interval):
return 'salmon' if get_strand(interval.read) == '+' else 'steelblue'
# -
# regions = 'chr14:50329268-50329569'
# matches = re.search('(chr[0-9XY]+):([0-9]+)-([0-9]+)', regions)
# chrom, start, end = matches.groups()
#
# viz = genomeview.visualize_data(tracks, chrom, int(start)-400, int(end)+400, genome)
# for track in ['DNase I', 'NaOH']:
# tr = genomeview.get_one_track(viz, track)
# tr.color_fn = color_func
# if track == "DNase I":
# tr.row_height = 0.02
#
# viz
fa = pysam.FastaFile('/stor/work/Lambowitz/ref/hg19/genome/hg19_genome.fa')
fa.fetch('chr12',22158771,22158870)
columns = peak_df.columns
columns = np.append(columns,['intron_chrom','intron_start','intron_end',
'intron_gene','intron_score','intron_strand'])
intron_df = BedTool()\
.from_dataframe(peak_df )\
.intersect('/stor/work/Lambowitz/ref/hg19/genome/independent_intron.bed',
f= 0.8,F=0.8,wb=True)\
.to_dataframe(names = columns)
intron_df.shape
intron_df \
.query('pileup >= 5' )
# +
ss_dinucleotide = defaultdict(int)
ss_dinucleotide_seq = defaultdict(list)
seqs = []
fa = pysam.Fastafile('/stor/work/Lambowitz/ref/hg19/genome/hg19_genome.fa')
def fetch_seq(chrom, start, end, strand):
intron_seq = fa.fetch(chrom, start - 1, end)
intron_seq = intron_seq if strand == "+" else reverse_complement(intron_seq)
return intron_seq
intron_df = intron_df.query('pileup >=3') \
.assign(seq = lambda d: list(map(fetch_seq, d.chrom, d.start, d.end, d.strand))) \
.assign(dinucleotide = lambda d: d.seq.str.slice(0,2) + ':' + (d.seq + 'N').str.slice(-3,-1))
intron_df.head()
# -
tablename = figure_path + '/intron_table.csv'
intron_df \
.filter(regex='chrom|start|end|log10|pileup|intron_gene|seq') \
.sort_values('pileup', ascending=False)\
.to_csv(tablename, index=False)
print('Written: ', tablename)
intron_df.query('pileup >= %i' %pileup_cutoff) \
.assign(length = lambda d: d.end - d.start) \
.describe()
# +
# %load_ext autoreload
# %autoreload 2
import mygene as mg
import gseapy as gsp
mgi = mg.MyGeneInfo()
glist = intron_df.query('pileup >= %i' %pileup_cutoff) \
.filter(['pileup','gid']) \
.assign(ensg = lambda d: d.gid.str.extract('(ENSG[0-9]+)')) \
.assign(symbol = lambda d: list(map(lambda x: x['symbol'], mgi.getgenes(d.ensg))))
# -
glist
# +
# %tb
rnk = glist\
.filter(['symbol','pileup']) \
.pipe(lambda d: d[~d.symbol.str.contains('^AC')])\
.rename(columns={'symbol':'gene_name'})
#res = gsp.prerank(rnk = rnk, gene_sets='/stor/work/Lambowitz/ref/gene_sets/c2.all.v6.2.symbols.gmt')
print('\n'.join(rnk.gene_name.tolist()))
# -
peaks\
.query('merged_type == "miRNA"')\
.filter(regex='log10p|picked_RNA_sense')\
.set_index('picked_RNA_sense')\
.nlargest(10, 'log10p')\
.plot.bar()
peaks.pipe(lambda d: d[d.picked_RNA_sense.str.contains("CGGA")])
peaks\
.assign(anti_merged_type = lambda d: d.picked_type_anti.map(merge_type)) \
.query('merged_type == "Repeats" | (anti_merged_type == "Repeats" & is_sense != "Sense")')
peaks\
.query('merged_type=="RBP"')\
.pipe(lambda d: d[~d.gtype.str.contains(lrna_regex)])
import gseapy as gsp
res = gsp.prerank(rnk = rbp_df.sort_values(0,ascending=False),
gene_sets = 'KEGG_2016')
res.res2d
aligner = mp.Aligner('/stor/work/Lambowitz/ref/hg19/genome/chrM.minimap2_idx', preset='sr')
aln = aligner.map(fa.fetch('chr17',33981908,33982067))
print(next(aln))
def check_MT(peaks, return_column=False):
mt = 0
aligner = mp.Aligner('/stor/work/Lambowitz/ref/hg19/genome/chrM.minimap2_idx', preset='sr')
fa = pysam.FastaFile('/stor/work/Lambowitz/ref/hg19/genome/hg19_genome.fa')
mts = []
for peak_count, row in peaks.reset_index().iterrows():
seq = fa.fetch(row['chrom'], row['start'], row['end'])
seq = seq if row['strand'] == "+" else reverse_complement(seq)
alns = aligner.map(seq)
try:
aln = next(alns)
mt += 1
mts.append('MT')
#print(aln.cigar_str)
except StopIteration:
#print(row)
mts.append('no')
pass
print('%i seq: %i in MT' %(peak_count, mt))
if return_column:
return mts
anti = peak_df.query('pileup >= 5').query('is_sense == "Unannotated"')
anti['MT'] = check_MT(anti, return_column=True)
peak_df.query("sense_gtype == 'tRF3'")
| plots/clam_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# +
plt.figure(figsize=(10, 5))
mu, sigma = 100, 15
x = mu + sigma*np.random.randn(10000)
# the histogram of the data
plt.hist(x, 50, density=True, facecolor='green', alpha=0.65)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=%f,\ \sigma=%f$'%(np.mean(x),np.std(x)))
plt.grid(True)
plt.show()
# +
plt.figure(figsize=(10, 5))
q75, q25 = np.percentile(x, [75 ,25])
x = (x - q25) / (q75 - q25)
plt.hist(x, 50, facecolor='green', alpha=0.65)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=%f,\ \sigma=%f$'%(np.mean(x),np.std(x)))
plt.grid(True)
plt.show()
# -
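The cell above rescales with the interquartile range. The same transform on a tiny hand-checkable array shows why it resists outliers: a single extreme value moves neither q25 nor q75, so the bulk of the data lands on a sensible scale.

```python
import numpy as np

# Robust (IQR) scaling: subtract the 25th percentile, divide by the IQR.
data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one extreme outlier
q75, q25 = np.percentile(data, [75, 25])
scaled = (data - q25) / (q75 - q25)
print(q25, q75)  # 2.0 4.0 — unaffected by the outlier
print(scaled)    # bulk lands in [-0.5, 1]; the outlier stays visibly extreme
```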
| Statistics-Python-Tutorial/scaling/robust.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/CanopySimulations/canopy-python-examples/blob/master/fitting_aero_data_to_polynomial_and_rbf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="NtIv7oIVvn5i"
# # Upgrade Runtime
# This cell ensures the runtime supports `asyncio` async/await, and is needed on Google Colab. If the runtime is upgraded, you will be prompted to restart it, which you should do before continuing execution.
# + id="E-AnefvPvmtR"
# !pip install "ipython>=7"
# + [markdown] id="_61Bg2_KwIR2"
# # Set Up Environment
# + [markdown] id="zX-xTWzWG1bt"
# ### Import required libraries
# + id="gNnEvtkmwL0W"
# !pip install -q canopy
# + id="0bBsqDyhwN_8"
import canopy
import logging
import numpy as np
from numpy.matlib import repmat
import matplotlib.pyplot as plt
import pandas as pd
import json
from typing import Sequence, Optional, NamedTuple, Any
import nest_asyncio
logging.basicConfig(level=logging.INFO)
np.set_printoptions(suppress=True)
nest_asyncio.apply()
# + [markdown] id="nH5-pXaZw5jg"
# ### Authenticate
# + id="SAnwvDofwhw1"
authentication_data = canopy.prompt_for_authentication()
session = canopy.Session(authentication_data)
# + [markdown] id="VSDal4PawY8-"
# # Set Up Example
# + [markdown] id="t2NztzxRQtGm"
# Later in this example we render the aero component we have generated as JSON. The `np.ndarray` type is not JSON serializable by default, so this simple encoder converts `np.ndarray` to a Python array during serialization.
#
# + id="fDWyk1suQcyq"
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self, obj)
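A quick usage sketch of the encoder on a hypothetical payload (the key names and values here are made up for illustration):

```python
import json

import numpy as np

class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        # Convert ndarrays to plain lists during serialization.
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return json.JSONEncoder.default(self, obj)

payload = {'coefficients': np.array([1.0, 2.5]), 'label': 'CLiftBodyF'}
serialized = json.dumps(payload, cls=NumpyEncoder)
print(serialized)  # {"coefficients": [1.0, 2.5], "label": "CLiftBodyF"}
```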
# + [markdown] id="tnJeZcc0vHif"
# # Example: Fitting Aero Data to a Polynomial and RBF.
#
# Using a CSV of aero data containing the columns `hRidef`, `hRideR`, `aRollAbs`, `CLiftBodyF`, `CLiftBodyR` and `CDragBody` we will fit the data to a polynomial and then fit the residuals of the polynomial to an RBF.
#
# We will then generate a Canopy aero component from the data and save a new car to the Canopy platform containing the new aero data.
# + [markdown] id="zOJAdFYTNtIH"
# ## Create Helper Functions
# + [markdown] id="0f5HddYrN1fK"
# This function returns a random set of indices, given a data length and a maximum number of positions.
# + id="YjT25ZJENw60"
def create_basis_position_indices(data_length: int, max_positions: int) -> Sequence[int]:
n = min(max_positions, data_length)
random_index_permutation = np.random.permutation(data_length)
basis_position_indices = random_index_permutation[0:n]
return basis_position_indices
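A usage sketch (the function is re-declared here so the cell runs standalone): with a fixed seed, requesting 4 of 10 possible positions returns distinct, in-range indices.

```python
import numpy as np

def create_basis_position_indices(data_length, max_positions):
    # Take the first n entries of a random permutation: distinct by construction.
    n = min(max_positions, data_length)
    return np.random.permutation(data_length)[0:n]

np.random.seed(0)
idx = create_basis_position_indices(data_length=10, max_positions=4)
print(len(idx))  # 4
```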
# + [markdown] id="1tu4NwABOE0s"
# This function does the fitting and returns a `FittedAeroData` named tuple containing the information we need to construct the car component.
#
# + id="fYuVTy4hWIzz" pycharm={"is_executing": false}
class FittedAeroData(NamedTuple):
polynomial_residuals: pd.DataFrame
polynomial_rbf_residuals: pd.DataFrame
polynomial_coefficients: np.ndarray
rbf_basis_weights: np.ndarray
input_offsets: np.ndarray
input_scales: np.ndarray
input_length_scales: np.ndarray
input_basis_positions: np.ndarray
# input_basis_position_indices defaults to a random 200 indices.
# input_length_scales defaults to all ones.
def fit_aero_data(
data: pd.DataFrame,
input_names: Sequence[str],
output_names: Sequence[str],
maximum_input_basis_positions: Optional[int] = 200,
input_basis_position_indices: Optional[Sequence[int]] = None,
input_length_scales: Optional[Sequence[float]] = None) -> FittedAeroData:
data_length = len(data)
input_count = len(input_names)
inputs = data[input_names].to_numpy()
outputs = data[output_names].to_numpy()
# Set default parameter values if not provided.
if input_basis_position_indices is None:
input_basis_position_indices = create_basis_position_indices(
data_length,
maximum_input_basis_positions)
if input_length_scales is None:
input_length_scales = np.ones((input_count,1))
else:
input_length_scales = np.reshape(input_length_scales, (input_count, 1))
# Scale all the input data so it ranges between +/- 1.
inputs_max = inputs.max(0)
inputs_min = inputs.min(0)
input_offsets = -0.5 * (inputs_max + inputs_min)
input_scales = 0.5 * (inputs_max - inputs_min)
input_scaled = inputs + repmat(input_offsets, data_length, 1)
input_scaled = input_scaled / repmat(input_scales, data_length, 1)
# Now fit a simple polynomial. This consists of:
    # 1. Cubic terms;
    # 2. Quadratic terms;
    # 3. All linear cross terms;
    # 4. All linear terms;
    # 5. A constant.
# It's not meant to be a perfect fit, just a passable one.
polynomial = np.concatenate((
inputs**3,
inputs**2),
axis=1)
for i in range(0, input_count):
for j in range(i+1, input_count):
polynomial = np.concatenate((
polynomial,
(inputs[:,i] * inputs[:,j])[:, np.newaxis]),
axis=1)
polynomial = np.concatenate((
polynomial,
inputs,
(1+0*inputs[:, 0])[:, np.newaxis]),
axis=1)
polynomial_coefficients_result = np.linalg.lstsq(polynomial, outputs, rcond=None)
polynomial_coefficients = polynomial_coefficients_result[0]
polynomial_y = polynomial @ polynomial_coefficients
# Compute the fit residuals: we're going to fit the RBF to these.
polynomial_residuals = outputs - polynomial_y
input_basis_positions = input_scaled[input_basis_position_indices, :]
input_basis_positions_length = len(input_basis_positions)
# Compute the matrix of mahalanobis distances between all basis function
# positions and all data points.
mahalanobis_distances = np.zeros((data_length, input_basis_positions_length))
for k in range(0,input_count):
mahalanobis_distances = \
mahalanobis_distances + \
input_length_scales[k] * (
repmat(np.vstack(input_scaled[:,k]), 1, input_basis_positions_length) -
repmat(input_basis_positions[:, k].T, len(input_scaled), 1)
)**2
# Take the exponential to yield the learning matrix.
    rbf = np.exp(-mahalanobis_distances)  # equivalent to e**-d; np.math was removed in newer NumPy
# And compute the basis weights.
rbf_basis_weights_result = np.linalg.lstsq(rbf, polynomial_residuals, rcond=None)
rbf_basis_weights = rbf_basis_weights_result[0]
# The final estimate is just the polynomial estimate + the rbf correction.
polynomial_rbf_y = rbf @ rbf_basis_weights + polynomial_y
polynomial_rbf_residuals = polynomial_rbf_y - outputs
return FittedAeroData(
polynomial_residuals,
polynomial_rbf_residuals,
polynomial_coefficients,
rbf_basis_weights,
input_offsets,
input_scales,
input_length_scales.flatten(),
input_basis_positions)
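# The input scaling used above maps each input channel into [-1, 1]; a minimal stdlib illustration of the same offset/scale arithmetic on one column:

```python
# Min-max scaling to [-1, 1], as in fit_aero_data above.
x = [0.0, 5.0, 10.0]
offset = -0.5 * (max(x) + min(x))   # shifts the midpoint to zero
scale = 0.5 * (max(x) - min(x))     # half the range
scaled = [(v + offset) / scale for v in x]
print(scaled)  # [-1.0, 0.0, 1.0]
```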
# + [markdown] id="3i45tJepOhvg"
# This function will take the output of the `fit_aero_data` function and return a component we can put on a car.
# + id="saWZ2nKuOdfF"
def get_fit_as_component(
fitted_aero_data: FittedAeroData,
input_names: Sequence[str],
output_names: Sequence[str]) -> Any:
polynomial_coefficients = fitted_aero_data.polynomial_coefficients
input_offsets = fitted_aero_data.input_offsets
input_scales = fitted_aero_data.input_scales
input_length_scales = fitted_aero_data.input_length_scales
input_basis_positions = fitted_aero_data.input_basis_positions
rbf_basis_weights = fitted_aero_data.rbf_basis_weights
aero_freedoms = input_names
polymaps = [f'Polynomial{output_name}Definition' for output_name in output_names]
config = {}
for i in range(0, len(polymaps)):
polymap = polymaps[i]
maps = []
config[polymap] = maps
k = 0
for j in range(0, len(aero_freedoms)):
maps.append({
'expression': f'{aero_freedoms[j]} * {aero_freedoms[j]} * {aero_freedoms[j]}',
'coefficient': polynomial_coefficients[k,i],
})
k += 1
for j in range(0, len(aero_freedoms)):
maps.append({
'expression': f'{aero_freedoms[j]} * {aero_freedoms[j]}',
'coefficient': polynomial_coefficients[k,i],
})
k += 1
for m in range(0, len(aero_freedoms)):
for j in range(m+1, len(aero_freedoms)):
maps.append({
'expression': f'{aero_freedoms[m]} * {aero_freedoms[j]}',
'coefficient': polynomial_coefficients[k,i],
})
k += 1
for j in range(0, len(aero_freedoms)):
maps.append({
'expression': aero_freedoms[j],
'coefficient': polynomial_coefficients[k,i],
})
k += 1
maps.append({
'expression': 'Const',
'coefficient': polynomial_coefficients[k,i],
})
radial_basis_function_aero_map = {
'basisVariables': input_names,
'xOffset': input_offsets,
'xScaling': input_scales,
'xLengthScales': input_length_scales,
'xBasisPositions': input_basis_positions
}
for i, name in enumerate([f'{output_name}BasisWeights' for output_name in output_names]):
radial_basis_function_aero_map[name] = rbf_basis_weights[:, i]
config['radialBasisFunctionAeroMap'] = radial_basis_function_aero_map
return config
# + [markdown] id="niOCpN3TO0o_"
# ## Load data
# + [markdown] id="vcPQr5UkO73z"
# We are loading the sample data from a CSV file in our `canopy-python-examples` GitHub repository.
# + id="66qecRbyPFUg" colab={"base_uri": "https://localhost:8080/", "height": 415} outputId="aa7b5bb3-86dd-4198-eaa6-769ec55da8e5"
sample_data = pd.read_csv('https://raw.githubusercontent.com/CanopySimulations/canopy-python-examples/master/data/canopy_f1_90_percent_barc_t9_t10_rh_scan_for_aeromap_rbf_sine_wave.csv')
sample_data
# + [markdown] id="HBdj74c1RG3A"
# ## Perform the fit
# + id="MoEJ2TCRKbhs"
input_names = ['hRideF', 'hRideR', 'aRollAbs']
output_names = ['CLiftBodyF', 'CLiftBodyR', 'CDragBody']
# We'll override the length scales to something more appropriate for this data.
input_length_scales = [10., 10., 10.]
fit_result = fit_aero_data(
sample_data,
input_names,
output_names,
input_length_scales=input_length_scales)
# + [markdown] id="r5h6Ou9CKcJE"
# ## Plot the sorted residuals
# + pycharm={"name": "#%%\n", "is_executing": false} id="GrxUa-LyWCRv" colab={"base_uri": "https://localhost:8080/", "height": 413} outputId="e55899ab-e058-4eb5-c059-1a765290707b"
plt.figure(figsize=(8, 6), dpi=80)
for r, output_name in zip(fit_result.polynomial_residuals.T, output_names):
plt.plot(sorted(abs(r)), '-', lw=1, label=f'{output_name} Poly Residual')
for r, output_name in zip(fit_result.polynomial_rbf_residuals.T, output_names):
plt.plot(sorted(abs(r)), ',', lw=1, label=f'{output_name} Poly+RBF Residual')
plt.legend()
plt.show()
# + [markdown] id="TW-ZcA1aRJgp"
# ## Create the car component
# + id="ynaamkaIPTy0" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="0d70245e-97bf-42cc-b819-ba45580acd8e"
aero_component = get_fit_as_component(
fit_result,
input_names,
output_names)
print(json.dumps(aero_component, cls=NumpyEncoder, indent=2))
# + [markdown] id="oBUw0_zGRPM2"
# ## Save a new car to the platform
# + [markdown] id="j_-7-xwDdpPy"
# For this example we'll load the `Canopy F1 Car 2019` and merge in the new aero data before saving it as a new car.
# + id="fqjAZ2x0RRcW" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d6488fd6-a080-4f79-f4d2-69ac4211225f"
input_car = await canopy.load_default_config(
session,
'car',
'Canopy F1 Car 2019')
# Set these to zero as they are accounted for in the aero data we have fitted.
input_car.data.aero.CDragBodyUserOffset = 0
input_car.data.aero.CLiftBodyUserOffset = 0
# Merge in the new aero component.
input_car.data.aero = {**input_car.data.aero, **aero_component}
new_config_id = await canopy.create_config(
session,
'car',
'Canopy F1 Car 2019 with Fitted Aero',
input_car.data)
new_config_id
| fitting_aero_data_to_polynomial_and_rbf.ipynb |
# ## MLflow Quick Start: Training and Logging
# This is a Quick Start notebook based on [MLflow's tutorial](https://mlflow.org/docs/latest/tutorial.html). In this tutorial, we’ll:
# * Install the MLflow library on a Databricks cluster
# * Train a diabetes progression model and log metrics, parameters, models, and a .png plot from the training to the MLflow tracking server
# * View the training results in the MLflow tracking UI
#
# This notebook uses the `diabetes` dataset in scikit-learn and predicts the progression metric (a quantitative measure of disease progression one year after baseline) based on BMI, blood pressure, etc. It uses the scikit-learn ElasticNet linear regression model, where we vary the `alpha` and `l1_ratio` parameters for tuning. For more information on ElasticNet, refer to:
# * [Elastic net regularization](https://en.wikipedia.org/wiki/Elastic_net_regularization)
# * [Regularization and Variable Selection via the Elastic Net](https://web.stanford.edu/~hastie/TALKS/enet_talk.pdf)
# **Note:** This notebook expects that you use a Databricks hosted MLflow tracking server. If you would like to preview the Databricks MLflow tracking server, contact your Databricks sales representative to request access. To set up your own tracking server, see the instructions in [MLflow Tracking Servers](https://www.mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) and configure your connection to your tracking server by running [mlflow.set_tracking_uri](https://www.mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri).
# ## Setup
# 1. Ensure you are using or create a cluster specifying
# * **Databricks Runtime Version:** Databricks Runtime 5.0 or above
# * **Python Version:** Python 3
# 1. Install required libraries or if using Databricks Runtime 5.1 or above, run Cmd 5.
# 1. Create required libraries.
# * Source **PyPI** and enter `mlflow[extras]`.
# * Source **PyPI** and enter `matplotlib==2.2.2`.
# 1. Install the libraries into the cluster.
# 1. Attach this notebook to the cluster.
# +
#dbutils.library.installPyPI("mlflow", extras="extras")
#dbutils.library.installPyPI("matplotlib", "2.2.2")
#dbutils.library.restartPython()
# -
# #### Write Your ML Code Based on the`train_diabetes.py` Code
# This tutorial is based on the MLflow's [train_diabetes.py](https://github.com/mlflow/mlflow/blob/master/examples/sklearn_elasticnet_diabetes/train_diabetes.py) example, which uses the `sklearn.diabetes` built-in dataset to predict disease progression based on various factors.
# +
dbutils.widgets.text("alpha", " ", label="alpha")
dbutils.widgets.text("l1_ratio", " ", label="l1_ratio")
alpha = dbutils.widgets.get("alpha")
l1_ratio = dbutils.widgets.get("l1_ratio")
# +
# Import various libraries including matplotlib, sklearn, mlflow
import os
import warnings
import sys
import pandas as pd
import numpy as np
from itertools import cycle
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import lasso_path, enet_path
from sklearn import datasets
# Import mlflow
import mlflow
import mlflow.sklearn
mlflow.set_experiment("/Shared/diabetes")
# Load Diabetes datasets
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
# Create pandas DataFrame for sklearn ElasticNet linear_model
Y = np.array([y]).transpose()
d = np.concatenate((X, Y), axis=1)
cols = ['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6', 'progression']
data = pd.DataFrame(d, columns=cols)
# -
# #### Train the Diabetes Model
# The next function trains ElasticNet linear regression based on the input parameters of `alpha (in_alpha)` and `l1_ratio (in_l1_ratio)`.
#
# In addition, this function uses MLflow Tracking to record its
# * parameters
# * metrics
# * model
# * arbitrary files, such as the Lasso descent path plot noted in the original tutorial.
#
# **Tip:** Use `with mlflow.start_run():` in the Python code to create a new MLflow run. This is the recommended way to use MLflow in notebook cells. Whether your code completes or exits with an error, the `with` context will make sure to close the MLflow run, so you don't have to call `mlflow.end_run`.
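# The guarantee the `with` block gives can be illustrated with a stand-in context manager (not MLflow itself): the run is closed even when the body raises.

```python
class FakeRun:
    """Stand-in for mlflow.start_run(): records whether the run is open."""
    def __init__(self):
        self.active = False
    def __enter__(self):
        self.active = True
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.active = False
        return False  # do not swallow exceptions

run = FakeRun()
try:
    with run:
        raise RuntimeError("training failed mid-run")
except RuntimeError:
    pass

print(run.active)  # False: the run was closed despite the error
```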
# train_diabetes
# Uses the sklearn Diabetes dataset to predict diabetes progression using ElasticNet
# The predicted "progression" column is a quantitative measure of disease progression one year after baseline
# http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html
def train_diabetes(data, in_alpha, in_l1_ratio):
# Evaluate metrics
def eval_metrics(actual, pred):
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
r2 = r2_score(actual, pred)
return rmse, mae, r2
warnings.filterwarnings("ignore")
np.random.seed(40)
# Split the data into training and test sets. (0.75, 0.25) split.
train, test = train_test_split(data)
# The predicted column is "progression" which is a quantitative measure of disease progression one year after baseline
train_x = train.drop(["progression"], axis=1)
test_x = test.drop(["progression"], axis=1)
train_y = train[["progression"]]
test_y = test[["progression"]]
  # Fall back to defaults when alpha/l1_ratio are missing or not numeric;
  # note float(x) raises rather than returning None, so the original
  # `if float(in_alpha) is None` check could never fire.
  try:
    alpha = float(in_alpha)
  except (TypeError, ValueError):
    alpha = 0.05
  try:
    l1_ratio = float(in_l1_ratio)
  except (TypeError, ValueError):
    l1_ratio = 0.05
# Start an MLflow run; the "with" keyword ensures we'll close the run even if this cell crashes
with mlflow.start_run():
lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
lr.fit(train_x, train_y)
predicted_qualities = lr.predict(test_x)
(rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)
# Print out ElasticNet model metrics
print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
print(" RMSE: %s" % rmse)
print(" MAE: %s" % mae)
print(" R2: %s" % r2)
# Set tracking_URI first and then reset it back to not specifying port
# Note, we had specified this in an earlier cell
#mlflow.set_tracking_uri(mlflow_tracking_URI)
# Log mlflow attributes for mlflow UI
mlflow.log_param("alpha", alpha)
mlflow.log_param("l1_ratio", l1_ratio)
mlflow.log_metric("rmse", rmse)
mlflow.log_metric("r2", r2)
mlflow.log_metric("mae", mae)
mlflow.sklearn.log_model(lr, "model")
modelpath = "/dbfs/mlflow/test_diabetes/model-%f-%f" % (alpha, l1_ratio)
mlflow.sklearn.save_model(lr, modelpath)
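# The `eval_metrics` helper above computes standard regression metrics; their definitions can be checked by hand on a tiny example (pure stdlib, independent of scikit-learn):

```python
import math

actual = [3.0, 5.0, 7.0]
pred = [2.0, 5.0, 9.0]

errors = [a - p for a, p in zip(actual, pred)]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root mean squared error
mae = sum(abs(e) for e in errors) / len(errors)             # mean absolute error
mean_a = sum(actual) / len(actual)
ss_res = sum(e * e for e in errors)
ss_tot = sum((a - mean_a) ** 2 for a in actual)
r2 = 1 - ss_res / ss_tot                                    # coefficient of determination

print(round(rmse, 3), mae, r2)  # 1.291 1.0 0.375
```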
# #### Experiment with Different Parameters
#
# Call `train_diabetes` with different parameters. Later, you'll be able to visualize all these runs in the MLflow experiment.
# %fs rm -r dbfs:/mlflow/test_diabetes
# train with the alpha and l1_ratio values supplied by the widgets above
train_diabetes(data, alpha, l1_ratio)
| notebooks/diabetes_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ESPEI
#
# ### Extensible Self-optimizing Phase Equilibria Infrastructure
#
# Documentation for internal and external APIs can be found at https://espei.org
#
# Solutions to this notebook can be found at https://github.com/materialsgenomefoundation/2021-workshop-material
#
# ## Starting an assessment
#
# To perform thermodynamic assessments in ESPEI you need:
#
# 1. A JSON-formatted file that describes the phases to be assessed
# 2. A directory containing one or more JSON-formatted files (following the schema at https://espei.org/en/latest/input_data.html#making-espei-datasets) of non-equilibrium thermochemical data, equilibrium thermochemical data, and phase boundary data
# 3. A YAML file that declares what fitting steps should be run and allows for tuning hyperparameters
#
# The names these files are given do not matter. Some are named by convention (e.g. `phases.json` to hold the phase descriptions) and others can be used more descriptively (e.g. `CR-NI-HM_FORM-FCC_A1-watson1995enthapies.json` for binary enthalpy of formation data from a paper referenced as `watson1995enthapies`).
#
# ### Phase descriptions
#
# See detailed documentation at https://espei.org/en/latest/input_data.html#phase-descriptions
#
# The `phases.json` file contains the species and phases in the assessment.
#
# Each phase requires a definition for:
#
# * `sublattice_model`: how many sublattices will be used to model this phase and which chemical species may enter
# * `sublattice_site_ratios`: the stoichiometric ratios for each sublattice
# * (optional) `aliases`: a list of alternative names that the phase can have
#
# Using Cr-Ni as a simple example, we can model three solution phases using one sublattice each.
#
# ```json
# {
# "components": ["CR", "NI"],
# "phases": {
# "LIQUID" : {
# "sublattice_model": [["CR", "NI"]],
# "sublattice_site_ratios": [1]
# },
# "BCC_A2": {
# "sublattice_model": [["CR", "NI"]],
# "sublattice_site_ratios": [1]
# },
# "FCC_A1": {
# "sublattice_model": [["CR", "NI"]],
# "sublattice_site_ratios": [1]
# }
# }
# }
# ```
#
# ### JSON files
#
# Many different types and representations of thermodynamic data exist. ESPEI currently targets 3 classes of data and is designed to be extensible to support any type of data that could be forward-calculated using pycalphad.
#
# * **Non-equilibrium thermochemical data** (where the thermodynamic quantity and internal degrees of freedom for a stable, metastable, or unstable phase is known)
# * **Equilibrium thermochemical data** (where the thermodynamic quantity of a stable or metastable phase is known)
# * **Phase boundary data** (the conditions where a new phase becomes stable or an existing phase becomes metastable)
#
# Parameter generation (this notebook) only supports non-equilibrium thermochemical data. Markov Chain Monte Carlo fitting and uncertainty quantification can handle any type of data. In this workshop, we will only use non-equilibrium thermochemical data and phase boundary data. These data are described in detail in [ESPEI's datasets documentation](https://espei.org/en/latest/input_data.html).
#
# #### Non-equilibrium thermochemical data
#
# ```json
# {
# "components": ["CR", "NI"],
# "phases": ["FCC_A1"],
# "solver": {
# "mode": "manual",
# "sublattice_site_ratios": [1],
# "sublattice_configurations": [ [["CR", "NI"]], [["CR", "NI"]], [["CR", "NI"]], [["CR", "NI"]]],
# "sublattice_occupancies": [ [[0.10, 0.90]], [[0.10, 0.90]], [[0.40, 0.60]], [[0.40, 0.60]]]
# },
# "conditions": {
# "P": 101325,
# "T": 1583
# },
# "output": "HM_FORM",
# "values": [[[-1100, -1510, 2880, 3280]]],
# "reference": "watson1995enthapies",
# "comment": "From Table 5. Converted from kJ to J. Two phase data neglected."
# }
#
# ```
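# A quick structural check of such a dataset (plain Python, not part of ESPEI's API): the number of occupancy rows should match the number of configurations, and each configuration should have a corresponding value.

```python
# Minimal version of the HM_FORM dataset shown above, built as a dict.
dataset = {
    "solver": {
        "sublattice_configurations": [[["CR", "NI"]]] * 4,
        "sublattice_occupancies": [[[0.10, 0.90]], [[0.10, 0.90]],
                                   [[0.40, 0.60]], [[0.40, 0.60]]],
    },
    "values": [[[-1100, -1510, 2880, 3280]]],
}

n_configs = len(dataset["solver"]["sublattice_configurations"])
n_occupancies = len(dataset["solver"]["sublattice_occupancies"])
n_values = len(dataset["values"][0][0])
print(n_configs, n_occupancies, n_values)  # 4 4 4
```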
#
# #### Phase boundary data
#
# ```json
# {
# "components": ["CR", "NI"],
# "phases": ["BCC_A2", "FCC_A1"],
# "broadcast_conditions": false,
# "conditions": {
# "T": [1073, 1173, 1273, 1373, 1548],
# "P": [101325.0]
# },
# "output": "ZPF",
# "values": [
# [["FCC", ["CR"], [0.3866]], ["BCC", ["NI"], [null]]],
# [["FCC", ["CR"], [0.3975]], ["BCC", ["NI"], [null]]],
# [["FCC", ["CR"], [0.4480]], ["BCC", ["NI"], [null]]],
# [["FCC", ["CR"], [0.4643]], ["BCC", ["NI"], [null]]],
# [["FCC", ["CR"], [0.4984]], ["BCC", ["NI"], [null]]]
# ],
# "reference": "bechtoldt1961redetermination",
# "comment": "Digitized from figure 5, points in figure 4 were too far apart to identify a boundary properly."
# }
# ```
#
#
# ### Input YAML files
#
# ```yaml
# system:
# phase_models: phases.json
# datasets: input-data
#
# output:
# verbosity: 1
# output_db: dft.tdb
#
# generate_parameters:
# excess_model: linear
# ref_state: SGTE91
# ```
# ## Parameter generation
import yaml
from espei import run_espei
from pycalphad import Database, binplot, equilibrium, variables as v
with open('generate_params_settings.yaml') as fp:
generate_params_settings = yaml.safe_load(fp)
dbf = run_espei(generate_params_settings)
comps = ['CR', 'NI']
phases = ['FCC_A1', 'BCC_A2', 'LIQUID']
conds = {v.N: 1.0, v.P: 101325, v.T: (300, 2300, 20), v.X('NI'): (0, 1, 0.02)}
binplot(dbf, comps, phases, conds)
# Now that we have an initial fit to only the derivatives of the Gibbs energy functions, we can judge our initial parameterization against the phase diagram data.
#
# All data is stored in JSON files, so we can load that into an in-memory database of `datasets`.
# +
from espei.plot import dataplot
from espei.datasets import recursive_glob, load_datasets
# load our JSON datasets into an in-memory database
datasets = load_datasets(recursive_glob('input-data', '*.json'))
# -
# Then plot the binary phase diagram with these new datasets on the same axes
# +
# plot the binary phase diagram, saving the output matplotlib axes as a variable
ax = binplot(dbf, comps, phases, conds)
# plot the phase boundary data as marked points
dataplot(comps, phases, conds, datasets, ax=ax)
# -
# ## Understanding parameter generation
#
# We can use pycalphad to plot the Gibbs energy surface to get an idea about what is causing BCC to become stable:
from pycalphad import calculate
import matplotlib.pyplot as plt
calc_res = calculate(dbf, comps, phases, T=1500, N=1, P=101325)
for phase_name in phases:
mask = calc_res.Phase == phase_name
plt.scatter(calc_res.X.sel(component='NI'), calc_res.GM.where(mask), s=2, label=phase_name)
plt.legend()
plt.xlabel("X(NI)")
plt.ylabel("GM (J/mol-atom)")
plt.title("Cr-Ni Energy Surface")
# The BCC phase seems to have a strange shape. Investigating the parameters and data that were fit may reveal the underlying issue.
#
# ESPEI provides `plot_interaction` to visualize how ESPEI fit our thermochemical data to the interaction parameters.
from espei.plot import plot_interaction
plot_interaction(dbf, comps, 'BCC_A2', (('CR', 'NI'),), 'HM_MIX', datasets=datasets, symmetry=None)
plot_interaction(dbf, comps, 'FCC_A1', (('CR', 'NI'),), 'HM_MIX', datasets=datasets, symmetry=None)
plot_interaction(dbf, comps, 'LIQUID', (('CR', 'NI'),), 'HM_MIX', datasets=datasets, symmetry=None)
# To be more quantitative, the parameters in the pycalphad Database can be searched for the excess parameters that were fit.
#
# The mathematical expression for each excess parameter is found in the `parameter` key, i.e. for the first parameter:
#
# $$ {}^{0} L^{\mathrm{BCC\_A2}} = \mathrm{VV0006} + \mathrm{VV0005} T $$
#
# where $ \mathrm{VV0005} $ and $ \mathrm{VV0006} $ are symbols that are generated by ESPEI during the fitting.
import tinydb
dbf._parameters.search(
    (tinydb.where('phase_name') == 'BCC_A2') &  # BCC_A2 phase parameters
(tinydb.where('parameter_type') == 'L') # Parameters of type L (i.e. excess parameters)
)
# The `dbf.symbols` dictionary maps the name of symbols in the database to their values. We can look up the values of $ \mathrm{VV0005} $ and $ \mathrm{VV0006} $:
print(f"L0-BCC_A2 = {dbf.symbols['VV0006']} + {dbf.symbols['VV0005']}*T")
# ## Tuning parameter selection
#
# Looking again at the parameterization for BCC_A2, we can see that the optimal tradeoff between the goodness of fit and the number of parameters, according to the corrected Akaike information criterion (AICc), was to choose 4 excess parameters ($ {}^{0} L $, $ {}^{1} L $, $ {}^{2} L $, and $ {}^{3} L $).
#
# Judging by eye, we may have some intuition that the AICc was wrong.
plot_interaction(dbf, comps, 'BCC_A2', (('CR', 'NI'),), 'HM_MIX', datasets=datasets, symmetry=None)
# In ESPEI, there are a variety of ways to tune and use modeling expertise to influence the automated parameterization. One of them is by using the `aicc_penalty_factor` option ([documented in more detail here](https://espei.org/en/latest/writing_input.html#aicc-penalty-factor)) to increase the penalty for introducing more fitting parameters.
#
# The default is `aicc_penalty_factor = 1.0` for every phase and data type. `aicc_penalty_factor > 1` will increase the penalty for increasing the number of parameters. For the liquid phase, we can increase the penalty factor as follows (`generate_params_settings_aicc_penalty.yaml`):
#
# ```yaml
# system:
# phase_models: Cr-Ni_phases.json
# datasets: input-data
# tags:
# dft:
# excluded_model_contributions: ['idmix', 'mag']
# weight: 0.1
# nomag:
# excluded_model_contributions: ['mag']
# estimated-entropy:
# excluded_model_contributions: ['idmix', 'mag']
# weight: 0.1
# generate_parameters:
# excess_model: linear
# ref_state: SGTE91
# aicc_penalty_factor:
# BCC_A2:
# HM: 2.0
# SM: 2.0
# LIQUID:
# HM: 10
# SM: 10
# output:
# verbosity: 1
# output_db: generated-aicc.tdb
# ```
#
# Then we can fit a new database:
with open('generate_params_settings-aicc_penalty.yaml') as fp:
generate_params_settings = yaml.safe_load(fp)
dbf_aicc_penalty = run_espei(generate_params_settings)
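# As rough intuition for what the penalty factor does, here is a sketch of a least-squares AICc in which the parameter count is scaled by the factor. This is only an illustration; ESPEI's exact formula may differ.

```python
import math

def aicc_sketch(n, k, rss, penalty_factor=1.0):
    # n data points, k fitted parameters, residual sum of squares rss;
    # penalty_factor > 1 makes extra parameters more expensive
    k_eff = penalty_factor * k
    aic = n * math.log(rss / n) + 2 * k_eff
    return aic + (2 * k_eff * (k_eff + 1)) / (n - k_eff - 1)

# with a higher penalty, the same 4-parameter fit scores worse,
# so the selection will tend to choose fewer parameters
print(aicc_sketch(20, 4, 1.0) < aicc_sketch(20, 4, 1.0, penalty_factor=2.0))  # True
```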
# Now we can verify the new parameterization for the BCC_A2 phase:
dbf_aicc_penalty._parameters.search(
    (tinydb.where('phase_name') == 'BCC_A2') &  # BCC_A2 phase parameters
(tinydb.where('parameter_type') == 'L') # Parameters of type L (i.e. excess parameters)
)
plot_interaction(dbf_aicc_penalty, comps, 'BCC_A2', (('CR', 'NI'),), 'HM_MIX', datasets=datasets, symmetry=None)
calc_res = calculate(dbf_aicc_penalty, comps, phases, T=1500, N=1, P=101325)
for phase_name in phases:
mask = calc_res.Phase == phase_name
plt.scatter(calc_res.X.sel(component='NI'), calc_res.GM.where(mask), s=2, label=phase_name)
plt.legend()
plt.xlabel("X(NI)")
plt.ylabel("GM (J/mol-atom)")
plt.title("Cr-Ni Energy Surface")
# +
# Original database with four BCC_A2 excess parameters
ax = binplot(dbf, comps, phases, conds)
dataplot(comps, phases, conds, datasets, ax=ax)
# New database with two excess parameters
ax = binplot(dbf_aicc_penalty, comps, phases, conds)
dataplot(comps, phases, conds, datasets, ax=ax)
| ESPEI/ESPEI Parameter Selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This assignment provides a list of cuisines with their ingredients and asks us to predict which country a recipe is
# from, given a list of its ingredients. The dataset includes the recipe id, the type of cuisine, and the list of
# ingredients of each recipe.
#
# The data is stored in JSON format and here is the snippet.
#
# {
# "id": 24717,
# "cuisine": "indian",
# "ingredients": [
# "tumeric",
# "vegetable stock",
# "tomatoes",
# "garam masala",
# "naan",
# "red lentils",
# "red chili peppers",
# "onions",
# "spinach",
# "sweet potatoes"
# ]
# }
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
# %matplotlib inline
plt.style.use('ggplot')
# Before processing, the data is visualised below using matplotlib.
df_train = pd.read_json('train.json')
df_train.head(7)
# Now we will plot a graph to see how many recipes each cuisine has in the data set.
df_train['cuisine'].value_counts().plot(kind='bar')
# Italian and Mexican have the largest numbers of recipes and Brazilian has the least, which may influence the training and output. There are some ingredients, like "Salt", "Eggs", "Tomatoes", etc., that appear in many recipes, so we need to find the unique ingredients in order to predict the correct cuisine.
# # Training with logistic regression classifier
# We will use scikit-learn to perform classification. To do this, we first create a new column in our dataframe by simply concatenating the ingredients into a single string. Mainly we need to check whether or not an ingredient is available in a given cuisine, which can be done using `CountVectorizer`.
df_train['all_ingredients'] = df_train['ingredients'].map(";".join)
df_train.head()
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
X = cv.fit_transform(df_train['all_ingredients'].values)
X.shape
print(list(cv.vocabulary_.keys())[:100])
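# What `CountVectorizer` builds can be illustrated with a tiny stdlib bag-of-words on made-up recipes: one row per recipe, one column per vocabulary token, each entry a token count.

```python
from collections import Counter

recipes = ["romaine lettuce;black olives", "black olives;garlic"]
tokens = [r.replace(";", " ").split() for r in recipes]
vocab = sorted({t for row in tokens for t in row})
matrix = [[Counter(row)[t] for t in vocab] for row in tokens]

print(vocab)   # ['black', 'garlic', 'lettuce', 'olives', 'romaine']
print(matrix)  # [[1, 0, 1, 1, 1], [1, 1, 0, 1, 0]]
```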
from sklearn.preprocessing import LabelEncoder
lenc = LabelEncoder()
y = lenc.fit_transform(df_train.cuisine)
y[:100]
# "y" is now a vector with number instead of strings for each cuisine.
lenc.classes_
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
from sklearn.linear_model import LogisticRegression
logistic = LogisticRegression()
trained_model = logistic.fit(X_train, y_train)
predictions = trained_model.predict(X_test)
print (predictions)
logistic.score(X_test, y_test)
| What'sCooking/cooking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Check the correctness of your metadata generation: compare lengths of vectors 'coords', 'labels', 'local_density', 'nucleus_size' & 'fluo_signal'
# +
import h5py
import sys
sys.path.append("../")
from Movie_Analysis_Pipeline.Single_Movie_Processing.Server_Movies_Paths import Get_MDCK_Movies_Paths
movies = Get_MDCK_Movies_Paths()
# -
for movie in movies:
hdf5_file = movie + "HDF/segmented.hdf5"
print (hdf5_file)
with h5py.File(hdf5_file, 'r') as f:
c = len(f["objects"]["obj_type_1"]["coords"])
l = len(f["objects"]["obj_type_1"]["labels"])
d = len(f["objects"]["obj_type_1"]["local_density"])
n = len(f["objects"]["obj_type_1"]["nucleus_size"])
f = len(f["objects"]["obj_type_1"]["fluo_signal_sum"])
print ("{}\t{}\t{}\t{}\t{}\t\t\t{}".format(c, l, d, n, f, c == l == d == n ==f))
for movie in movies:
hdf5_file = movie + "HDF/segmented.hdf5"
print (hdf5_file)
with h5py.File(hdf5_file, 'r') as f:
m = len(f["tracks"]["obj_type_1"]["map"])
l = len(f["tracks"]["obj_type_1"]["LBEPR"])
c = len(f["tracks"]["obj_type_1"]["fa"])
f = len(f["tracks"]["obj_type_1"]["Ch_Ch_Gen_CCT"])
print ("{}\t{}\t{}\t\t\t{}".format(m, l, f, m == l == f))
| HDF_MetaData_Generation/Check_Whole_HDF5_File_Datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import scipy as sp
import scipy.io as sio
import numpy as np
import os
import sqlite3
data_path = r'C:\data\experiment_db_data'
data_list = os.listdir(data_path)
# +
sqlite_file = 'experiments.sqlite'
table_name1 = 'overview'
new_column1 = 'date'
new_column2 = 'decoder_type'
column_type1 = 'TEXT'
column_type2 = 'TEXT'
conn = sqlite3.connect(os.path.join(data_path, sqlite_file))
c = conn.cursor()
c.execute('CREATE TABLE {tn} ({nc} {ct})'.format(tn=table_name1, nc=new_column1, ct=column_type1))
c.execute("ALTER TABLE {tn} ADD COLUMN '{nc}' {ct}".format(tn=table_name1, nc=new_column2, ct=column_type2))
conn.commit()
conn.close()
# -
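# The same CREATE/ALTER sequence can be verified against an in-memory database (so no file on disk is touched), reading the resulting schema back with `PRAGMA table_info`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE overview (date TEXT)")
c.execute("ALTER TABLE overview ADD COLUMN decoder_type TEXT")
# table_info rows are (cid, name, type, notnull, dflt_value, pk)
columns = [row[1] for row in c.execute("PRAGMA table_info(overview)")]
conn.close()

print(columns)  # ['date', 'decoder_type']
```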
# Find kalmanInitParams .mat files
kalman_files = [name for name in data_list if name[-4:]=='.mat' and name[0:6]=='kalman']
print('First file found:', kalman_files[0])
kalman_files_with_path = [os.path.join(data_path, name) for name in kalman_files]
current_info = sio.loadmat(kalman_files_with_path[0])['kalmanInitParams'][0,0]
print(current_info['date'][0,0])
print(current_info['includeEye'][0,0])
| matlab-sqlite-experiment-integration-test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# ## _*Comparing Classical and Quantum Finite Automata (QFA)*_
#
# The finite automaton has been a mathematical model for computation since its invention in the 1940s. The purpose of a finite state machine is to recognize patterns within an input taken from some character set and accept or reject the input based on whether the pattern defined by the machine occurs in the input. The machine requires a list of states, the initial state, and the conditions for each transition from state to state. Classical examples include vending machines, coin-operated turnstiles, elevators, traffic lights, etc.
#
# In the classical algorithm, the machine begins in the start state and makes a transition only if the next character in the input string matches the label on the transition from the current state to the next. The machine continues making transitions on each input character until no move is possible. The string is accepted if the machine ends in an accept state and rejected otherwise.
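# The classical procedure above can be sketched as a tiny automaton in Python (an illustration added here, not part of the original notebook, using the divisibility language of the next section as the pattern):

```python
# A minimal classical automaton recognizing strings of the letter "a"
# whose length is divisible by p, with one state per residue class modulo p.
def dfa_accepts(string, p=3):
    state = 0                        # start state
    for ch in string:
        if ch != 'a':                # no transition defined: reject
            return False
        state = (state + 1) % p      # count letters modulo p
    return state == 0                # accept only if we end in the accept state

print(dfa_accepts("a" * 6))   # True: 6 is divisible by 3
print(dfa_accepts("a" * 4))   # False
```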
#
# As for Quantum Finite Automata (QFA), the machine works by accepting a finite-length string of letters from a finite alphabet and utilizing quantum properties such as superposition to assign the string a probability of being in either the accept or reject state.
#
# ***
# ### Contributors
# <NAME>, <NAME>
#
# ### Qiskit Package Versions
import qiskit
qiskit.__qiskit_version__
# ## Prime Divisibility Algorithm
#
# Let's say that we have a string $ a^i $ of $ i $ letters and we want to know whether the string is in the language $ L $ = {$ a^i $ | $ i $ is divisible by $ p $}, where $ p $ is a prime number. If $ i $ is divisible by $ p $, we want to accept the string into the language; if not, we want to reject it.
# $|0\rangle $ and $ |1\rangle $ serve as our accept and reject states.
#
# Classically, this algorithm requires a minimum of $ log(p) $ bits to store the information, whereas the quantum algorithm requires only $ log(log(p)) $ qubits. For example, using the largest known prime, the classical algorithm requires **a minimum of 77,232,917 bits**, whereas the quantum algorithm **requires only 27 qubits**.
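# The quoted resource count can be reproduced directly (a sketch assuming base-2 logarithms and the Mersenne prime $ p = 2^{77232917} - 1 $, whose base-2 logarithm is just under 77,232,917):

```python
import math

# For p = 2**77232917 - 1, log2(p) is just under 77,232,917, so a classical
# machine needs about that many bits, while the QFA needs ceil(log2(log2(p))) qubits.
classical_bits = 77232917                         # ~log2(p)
quantum_qubits = math.ceil(math.log2(classical_bits))
print(classical_bits, quantum_qubits)             # 77232917 27
```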
# ## Introduction <a id='introduction'></a>
#
# The algorithm in this notebook follows that in [Ambainis et al. 1998](https://arxiv.org/pdf/quant-ph/9802062.pdf). We assume that we are given a string and a prime integer. If the user does not input a prime number, a ValueError is raised. First, we demonstrate a simpler version of the quantum algorithm that uses $ log(p) $ qubits to store the information. Then, we use this to more easily understand the quantum algorithm that requires only $ log(log(p)) $ qubits.
# ## The Algorithm for Log(p) Qubits
#
# The algorithm is quite simple as follows.
# 1. Prepare quantum and classical registers for $ log(p) $ qubits initialized to zero.
# $$ |0\ldots 0\rangle $$
# 2. Prepare $ log(p) $ random numbers $ k $ in the range {$ 1 $... $ p-1 $}. These numbers will be used to decrease the probability of a string being accepted when $ p $ does not divide $ i $.
# 3. Perform $ i $ Y-rotations on each qubit, where $ \theta $ is initially zero and $ \Phi $ is the angle of rotation for each unitary. $$ \Phi = \frac{2 \pi k}{p} $$
# 4. In the final state:
# $$ \cos \theta |0\rangle + \sin \theta |1\rangle $$
# $$ \theta = \frac{2 \pi k i}{p} $$
# 5. Measure each of the qubits in the classical register. If $ p $ divides $ i $, $ \cos \theta $ will be one for every qubit and the state will collapse to $ |0\rangle $ to demonstrate an accept state with a probability of one. Otherwise, the output will consist of a small probability of accepting the string into the language and a higher probability of rejecting the string.
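# The steps above can be checked without a quantum simulator. Following the equations as written (a sketch added here, with $ \theta_j = 2\pi k_j i / p $ per qubit), each qubit ends in $ \cos\theta_j|0\rangle + \sin\theta_j|1\rangle $, so the probability of reading all zeros (accept) is the product of $ \cos^2\theta_j $:

```python
import numpy as np

# Acceptance probability of the log(p)-qubit QFA, computed classically:
# the product over qubits of cos(2*pi*k_j*i/p)**2.
def accept_probability(i, p, ks):
    thetas = 2 * np.pi * np.asarray(ks) * i / p
    return float(np.prod(np.cos(thetas) ** 2))

rng = np.random.default_rng(0)
p = 11
ks = rng.integers(1, p, size=4)             # one random k per qubit
print(accept_probability(22, p, ks))        # p divides i -> probability ~1.0
print(accept_probability(23, p, ks) < 1.0)  # otherwise strictly smaller
```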
# ## The Circuit <a id="circuit"></a>
#
# We now implement the QFA Prime Divisibility algorithm with QISKit by first preparing the environment.
# +
# useful additional packages
import random
import math
from sympy.ntheory import isprime
# importing QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, IBMQ, execute
from qiskit.tools.monitor import job_monitor
from qiskit.providers.ibmq import least_busy
from qiskit.tools.visualization import plot_histogram
# -
IBMQ.load_account()
sim_backend = Aer.get_backend('qasm_simulator')
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
# We then use QISKit to program the algorithm.
#Function that takes in a prime number and a string of letters and returns a quantum circuit
def qfa_algorithm(string, prime):
    if not isprime(prime):
        raise ValueError("This number is not a prime") #Raises a ValueError if the input number is not prime
    else:
        n = math.ceil(math.log(prime)) #Rounds log(prime) up to the next integer
qr = QuantumRegister(n) #Creates a quantum register of length log(prime) for log(prime) qubits
cr = ClassicalRegister(n) #Creates a classical register for measurement
qfaCircuit = QuantumCircuit(qr, cr) #Defining the circuit to take in the values of qr and cr
for x in range(n): #For each qubit, we want to apply a series of unitary operations with a random int
random_value = random.randint(1,prime - 1) #Generates the random int for each qubit from {1, prime -1}
for letter in string: #For each letter in the string, we want to apply the same unitary operation to each qubit
qfaCircuit.ry((2*math.pi*random_value) / prime, qr[x]) #Applies the Y-Rotation to each qubit
qfaCircuit.measure(qr[x], cr[x]) #Measures each qubit
return qfaCircuit #Returns the created quantum circuit
# The qfa_algorithm function returns the Quantum Circuit qfaCircuit.
# ## Experiment with Simulators
#
# We can run the above circuit on the simulator.
#A function that returns a string saying if the string is accepted into the language or rejected
#Note: it reads the global `result` produced by the most recent execute() call, so run the analysis first
def accept(parameter):
    states = list(result.get_counts(parameter)) #measured bitstrings, e.g. ['000', '010']
    for s in states:
        if "1" in s: #any measured 1 means a qubit collapsed to the reject state
            return "Reject: the string is not accepted into the language"
    return "Accept: the string is accepted into the language"
# Insert your own parameters and try even larger prime numbers.
range_lower = 0
range_higher = 36
prime_number = 11
for length in range(range_lower,range_higher):
params = qfa_algorithm("a"* length, prime_number)
job = execute(params, sim_backend, shots=1000)
result = job.result()
print(accept(params), "\n", "Length:",length," " ,result.get_counts(params))
# ### Drawing the circuit of the QFA
#
# Below is a snapshot of the QFA reading a bitstring of length $3$. It can be seen that there are independent QFAs, each of which performs the $Y$ rotation $3$ times.
qfa_algorithm("a"* 3, prime_number).draw(output='mpl')
# ## The Algorithm for Log(Log(p)) Qubits
#
# The algorithm is quite simple as follows.
# 1. Prepare a quantum register for $ log(log(p)) + 1 $ qubits initialized to zero. The $ log(log(p))$ qubits will act as your control bits and the 1 extra will act as your target bit. Also prepare a classical register for 1 bit to measure the target.
# $$ |0\ldots 0\rangle |0\rangle $$
# 2. Hadamard the control bits to put them in a superposition so that we can perform multiple QFA's at the same time.
# 3. For each of $s $ states in the superposition, we can perform an individual QFA with the control qubits acting as the random integer $ k $ from the previous algorithm. Thus, we need $ n $ values from $ 1... log(p)$ for $ k $. For each letter $ i $ in the string, we perform a controlled y-rotation on the target qubit, where $ \theta $ is initially zero and $ \Phi $ is the angle of rotation for each unitary. $$ \Phi = \frac{2 \pi k_{s}}{p} $$
# 4. The target qubit in the final state:
# $$ \cos \theta |0\rangle + \sin \theta |1\rangle $$
# $$ \theta = \sum_{s=0}^{n} \frac{2 \pi k_{s} i}{p} $$
# 5. Measure the target qubit in the classical register. If $ p $ divides $ i $, $ \cos \theta $ will be one for every QFA and the state of the target will collapse to $ |0\rangle $ to demonstrate an accept state with a probability of one. Otherwise, the output will consist of a small probability of accepting the string into the language and a higher probability of rejecting the string.
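# The controlled version can also be sketched classically (an assumed illustration): the $ n $ control qubits put $ 2^n $ branches in superposition, branch $ s $ rotating the target by $ \theta_s = 2\pi k_s i / p $ with $ k_s $ the binary value of the controls, so the probability of measuring the target in $ |0\rangle $ is the average of $ \cos^2\theta_s $ over the branches:

```python
import numpy as np

# Probability that the target qubit reads 0 in the log(log(p))-qubit QFA,
# averaged over the 2**n superposed branches.
def target_zero_probability(i, p, n):
    ks = np.arange(2 ** n)                  # one k per superposed branch
    thetas = 2 * np.pi * ks * i / p
    return float(np.mean(np.cos(thetas) ** 2))

p, n = 11, 3
print(target_zero_probability(22, p, n))         # p divides i -> ~1.0
print(target_zero_probability(23, p, n) < 1.0)   # otherwise smaller
```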
# ## The Circuit <a id="circuit"></a>
# We then use QISKit to program the algorithm.
#Function that takes in a prime number and a string of letters and returns a quantum circuit
def qfa_controlled_algorithm(string, prime):
    if not isprime(prime):
        raise ValueError("This number is not a prime") #Raises a ValueError if the input number is not prime
    else:
        n = math.ceil(math.log(math.log(prime, 2), 2)) #Represents log(log(p)) control qubits
states = 2 ** (n) #Number of states that the qubits can represent/Number of QFA's to be performed
qr = QuantumRegister(n+1) #Creates a quantum register of log(log(prime)) control qubits + 1 target qubit
        cr = ClassicalRegister(1) #Creates a classical register of 1 bit to measure the target qubit
control_qfaCircuit = QuantumCircuit(qr, cr) #Defining the circuit to take in the values of qr and cr
for q in range(n): #We want to take each control qubit and put them in a superposition by applying a Hadamard Gate
control_qfaCircuit.h(qr[q])
for letter in string: #For each letter in the string, we want to apply a series of Controlled Y-rotations
for q in range(n):
control_qfaCircuit.cu3(2*math.pi*(2**q)/prime, 0, 0, qr[q], qr[n]) #Controlled Y on Target qubit
control_qfaCircuit.measure(qr[n], cr[0]) #Measure the target qubit
return control_qfaCircuit #Returns the created quantum circuit
# The qfa_algorithm function returns the Quantum Circuit control_qfaCircuit.
# ## Experiment with Simulators
#
# We can run the above circuit on the simulator.
for length in range(range_lower,range_higher):
params = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(params, sim_backend, shots=1000)
result = job.result()
print(accept(params), "\n", "Length:",length," " ,result.get_counts(params))
# ### Drawing the circuit of the QFA
#
# Below is the snapshot of the QFA for reading the bitstring of length $3$. It can be seen that there is a superposition of QFAs instead of independent QFAs.
qfa_controlled_algorithm("a"* 3, prime_number).draw(output='mpl')
# ## Experimenting with Real Devices
#
# Real-device backends are noisy; when the above QFAs are executed on them, strings that should have been accepted may be rejected in error. Let us see how well the real-device backends can realize the QFAs.
# First, consider an example where the QFA should reject the bitstring because its length is not divisible by the prime number.
prime_number = 3
length = 2 # set the length so that it is not divisible by the prime_number
print("The length of a is", length, " while the prime number is", prime_number)
qfa1 = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(qfa1, backend=device_backend, shots=100)
job_monitor(job)
result = job.result()
plot_histogram(result.get_counts())
# In the above, we can see that the probability of observing "1" is quite significant. Let us see what the circuit looks like.
qfa1.draw(output='mpl')
# Now, let us see what happens when the QFAs should accept the input string.
length = 3 # set the length so that it is divisible by the prime_number
print("The length of a is", length, " while the prime number is", prime_number)
qfa2 = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(qfa2, backend=device_backend, shots=100)
job_monitor(job)
result = job.result()
plot_histogram(result.get_counts())
# The error of rejecting the bitstring is equal to the probability of observing "1", which can be checked in the above histogram. We can see that the noise of real-device backends prevents us from obtaining the correct answer. How to mitigate backend errors in the QFA models is left as future work.
qfa2.draw(output='mpl')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # 1-5.1 Python Intro
# ## conditionals, type, and mathematics extended
# - **conditionals: `elif`**
# - **casting**
# - basic math operators
#
# -----
#
# ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - **code more than two choices using `elif`**
# - **gather numeric input using type casting**
# - perform subtraction, multiplication and division operations in code
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## conditional `elif`
#
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/a2ac5f4b-0400-4a60-91d5-d350c3cc0515/Unit1_Section5.1-elif.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/a2ac5f4b-0400-4a60-91d5-d350c3cc0515/Unit1_Section5.1-elif.vtt","srclang":"en","kind":"subtitles","label":"english"}])
# ### a little review
# - **`if`** means "**if** a condition exists then do some task." **`if`** is usually followed by **`else`**
# - **`else`** means "**or else** after we have tested **if**, then do an alternative task"
#
# When there is a need to test for multiple conditions there is **`elif`**
# - **`elif`** statement follows **`if`**, and means **"else, if "** another condition exists do something else
# - **`elif`** can be used many times
# - **`else`** is used after the last test condition (**`if`** or **`elif`**)
#
# #### in pseudo code
# **If** it is raining bring an umbrella
# or **Else If** (`elif`) it is snowing bring a warm coat
# or **Else** go as usual
#
# Like **`else`**, the **`elif`** only executes when the previous conditional is False
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# +
# [ ] review the code then run testing different inputs
# WHAT TO WEAR
weather = input("Enter weather (sunny, rainy, snowy): ")
if weather.lower() == "sunny":
print("Wear a t-shirt")
elif weather.lower() == "rainy":
print("Bring an umbrella and boots")
elif weather.lower() == "snowy":
print("Wear a warm coat and hat")
else:
print("Sorry, not sure what to suggest for", weather)
# +
# [ ] review the code then run testing different inputs
# SECRET NUMBER GUESS
secret_num = "2"
guess = input("Enter a guess for the secret number (1-3): ")
if guess.isdigit() == False:
print("Invalid: guess should only use digits")
elif guess == "1":
print("Guess is too low")
elif guess == secret_num:
print("Guess is right")
elif guess == "3":
print("Guess is too high")
else:
print(guess, "is not a valid guess (1-3)")
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
#
# ## Program: Shirt Sale
# ### Complete program using `if, elif, else`
# - Get user input for variable size (S, M, L)
# - reply with each shirt size and price (Small = \$ 6, Medium = \$ 7, Large = \$ 8)
# - if the reply is other than S, M, L, give a message for not available
# - *optional*: add additional sizes
# +
# [ ] code and test SHIRT SALE
# -
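# One way the Shirt Sale task could be completed (a solution sketch, with the prices taken from the task description; it is wrapped in a function so the logic can be checked without `input()`):

```python
# SHIRT SALE sketch: if/elif/else over the size, with a fallback for
# sizes that are not available.
def shirt_price(size):
    if size.upper() == "S":
        return "Small: $6"
    elif size.upper() == "M":
        return "Medium: $7"
    elif size.upper() == "L":
        return "Large: $8"
    else:
        return size + " is not available"

print(shirt_price("m"))    # Medium: $7
print(shirt_price("XL"))   # XL is not available
```

# In the notebook, the argument would come from `input("enter shirt size (S, M, L): ")`.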
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## casting
# Casting is the conversion from one data type to another, such as converting from **`str`** to **`int`**.
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/4cbf7f96-9ddd-4962-88a8-71081d7d5ef6/Unit1_Section5.1-casting-input.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/4cbf7f96-9ddd-4962-88a8-71081d7d5ef6/Unit1_Section5.1-casting-input.vtt","srclang":"en","kind":"subtitles","label":"english"}])
# ### `int()`
# the **`int()`** function can convert strings that represent whole counting numbers into integers, and truncate decimals to convert float numbers to integers
# - `int("1") = 1` the string representing the integer character `"1"`, cast to a number
# - `int(5.1) = 5` the decimal (float), `5.1`, truncated into a non-decimal (integer)
# - `int("5.1") = ValueError` `"5.1"` isn't a string representation of integer, `int()` can cast only strings representing integer values
#
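# The three casting rules above can be demonstrated directly (a short sketch):

```python
# int() casting rules in action
print(int("1"))      # 1: a string holding an integer casts cleanly
print(int(5.1))      # 5: the decimal part is truncated
try:
    int("5.1")       # "5.1" is not a string representation of an integer...
except ValueError as err:
    print("ValueError:", err)   # ...so int() raises ValueError
```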
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Example</B></font>
weight1 = '60' # a string
weight2 = 170 # an integer
# add 2 integers
total_weight = int(weight1) + weight2
print(total_weight)
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
# ## casting with `int()` & `str()`
# +
str_num_1 = "11"
str_num_2 = "15"
int_num_3 = 10
# [ ] Add the 3 numbers as integers and print the result
# +
str_num_1 = "11"
str_num_2 = "15"
int_num_3 = 10
# [ ] Add the 3 numbers as strings and print the result
# -
# <font size="4" color="#B24C00" face="verdana"> <B>Task 2 cont...</B></font>
# ### Program: adding using `int` casting
# - **[ ]** initialize **`str_integer`** variable to a **string containing characters of an integer** (quotes)
# - **[ ]** initialize **`int_number`** variable with an **integer value** (no quotes)
# - **[ ]** initialize **`number_total`** variable and **add int_number + str_integer** using **`int`** casting
# - **[ ]** print the sum (**`number_total`**)
# [ ] code and test: adding using int casting
str_integer = "2"
int_number = 10
number_total = int(str_integer) + int_number
print(number_total)
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ## `input()` strings that represent numbers can be "cast" to integer values
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Example</B></font>
# [ ] review and run code
student_age = input('enter student age (integer): ')
age_next_year = int(student_age) + 1
print('Next year student will be',age_next_year)
# +
# [ ] review and run code
# cast to int at input
student_age = int(input('enter student age (integer): '))
age_in_decade = student_age + 10
print('In a decade the student will be', age_in_decade)
# -
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 3</B></font>
# ## Program: adding calculator
# - get input of 2 **integer** numbers
# - cast the input and print the input followed by the result
# - Output Example: **`9 + 13 = 22`**
#
# Optional: check if input .isdigit() before trying integer addition to avoid errors in casting invalid inputs
# +
# [ ] code and test the adding calculator
# -
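# A possible sketch of the adding calculator (written as a function so it can be checked without `input()`; the optional `.isdigit()` check guards the `int()` casts):

```python
# Adding calculator sketch: validate, cast, add, and echo the expression.
def add_calculator(a, b):
    if a.isdigit() and b.isdigit():
        return a + " + " + b + " = " + str(int(a) + int(b))
    return "Invalid input: both entries must be whole numbers"

print(add_calculator("9", "13"))   # 9 + 13 = 22
print(add_calculator("9", "x"))    # Invalid input: both entries must be whole numbers
```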
# [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# I'm a little confused by lognormal distributions.
# Let's say I have a parameter that is distributed as:
#
# \begin{equation}
# V \sim \mathcal{N}(5, 2) \, ,
# \end{equation}
#
# how would I represent this with a lognormal distribution so it can't fall below zero? A lognormal distribution is defined by:
#
# \begin{equation}
# \ln(V) \sim \mathcal{N}(\mu, \sigma) \, ,
# \end{equation}
#
# > In probability theory, a lognormal distribution is a ... probability distribution of a random variable whose *logarithm is normally distributed*.
#
# > Thus if $X$ is log-normally distributed, then $Y = \ln(X)$ is normally distributed.
import numpy as np
import seaborn as sns
import pylab as plt
npts = 5000
mu = 10
sig = 2
v = np.random.normal(mu, sig, npts)
sns.distplot(v)
plt.xlabel('V')
plt.axvline(mu)
# This extends below 0, which we don't want it to do. What does the logarithm look like?
sns.distplot(np.log(v))
plt.xlabel(r'$\ln(V)$')
plt.axvline(np.log(mu))
plt.show()
# If we wanted a lognormal distribution to recapture this same distribution, should we be using
#
# \begin{equation}
# \ln(V) \sim \log\mathcal{N}(\ln(10), \ln(2)) \, ?
# \end{equation}
lnv = np.random.lognormal(np.log(mu), np.log(2), npts)
sns.distplot(lnv, label='LogNormal')
sns.distplot(np.log(v), label='ln(V)')
plt.legend()
# Okay, these don't line up...
sns.distplot(lnv, label='LogNormal')
sns.distplot(v, label='V')
# I guess this only truly works if the logarithm of V is normally distributed, but in this case we want it to be the other way round?
import mystyle as ms
with plt.style.context(ms.ms):
sns.distplot(v)
sns.distplot(lnv)
plt.axvline(np.median(v), c='r', label='Median V')
plt.axvline(np.median(lnv), c='g', label='Median LnV')
plt.axvline(np.mean(lnv), c='b', label='Mean LnV')
plt.legend()
# In both cases the median values intersect at $\mu$, but with $V$ distributed lognormally, it can not fall below zero.
# Let's try a more realistic example.
npts = 10000
mu = 0.7
sigma = .1
v = np.random.normal(mu, sigma, npts)
lnv = np.random.lognormal(np.log(mu), sigma, npts)
with plt.style.context(ms.ms):
sns.distplot(v, label='Normally distributed')
sns.distplot(lnv, label='Lognormally distributed')
plt.legend()
print(f'Median of Normal: {np.median(v)}')
print(f'Median of LogNormal: {np.median(lnv)}')
# This example indicates that $\sigma$ should remain as is.
# The equations seem to agree. Let's check our previous example:
npts = 10000
mu = 10.
sigma = 2.
v = np.random.normal(mu, sigma, npts)
lnv = np.random.lognormal(np.log(mu), sigma, npts)
with plt.style.context(ms.ms):
sns.distplot(v, label='Normally distributed')
sns.distplot(lnv, label='Lognormally distributed')
plt.legend()
print(f'Median of Normal: {np.median(v)}')
print(f'Median of LogNormal: {np.median(lnv)}')
# Yep, the plots are unsightly but otherwise okay.
#
# Conclusions: our parameters that can't go below zero should be distributed as
#
# \begin{equation}
# V \sim \rm{LogNormal}(\ln(\mu), \sigma)
# \end{equation}
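# The conclusion can be verified numerically (a quick sketch): samples from $ \rm{LogNormal}(\ln(\mu), \sigma) $ are strictly positive and their median sits at $ \mu $.

```python
import numpy as np

# Check positivity and the median of V ~ LogNormal(ln(mu), sigma).
rng = np.random.default_rng(42)
mu, sigma, npts = 10.0, 2.0, 200_000
lnv = rng.lognormal(np.log(mu), sigma, npts)
print(bool(np.all(lnv > 0)))   # True: the distribution never falls below zero
print(float(np.median(lnv)))   # close to mu = 10
```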
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <img src='img/logo.png' alt='Drawing' style='width:2000px;'/>
# # <font color=blue>3. SDOF Systems</font>
# <img src='https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171213112855424-0593:9781316761403:fig2_21.png?pub-status=live' alt='Drawing' style='height:200px;'/>
#
# ## <font color=blue>3.1. Required Libraries</font>
# Although only the OpenSeesPy library is required to perform structural analysis, we can make use of other libraries for other tasks. This is the nicest thing about Python!
# <font color=red><div style="text-align: right"> **Documentation for**
# [**`openseespy`**](https://openseespydoc.readthedocs.io/en/latest/)
# [**`numpy`**](https://docs.scipy.org/doc/numpy/)
# [**`matplotlib.pyplot`**](https://matplotlib.org/api/pyplot_api.html)</div></font>
# +
# import libraries
# ------------------------------------------------------------------------
# A library to use OpenSees via Python
import openseespy.opensees as ops
# A library provides high-performance vector, matrix and higher-dimensional data structures for Python
import numpy as np
# A Library to visualize data from Python
import matplotlib.pyplot as plt
# A command required to print the figures on notebook
# %matplotlib inline
# -
# ## <font color=blue>3.2. Unit Definitions</font>
# We can use any units in definition of parameters as long as we properly define other unit measures. In our examples we will use meters (m) for displacements, kiloNewtons (kN) for forces, and seconds (sec) for time as basic unit definitions. In other words, the results obtained from OpenSeesPy will be in these units.
# +
# Define units
# ------------------------------------------------------------------------
# Basic Units
m = 1.0
kN = 1.0
sec = 1.0
# Length
mm = m / 1000.0
cm = m / 100.0
inch = 25.4 * mm
ft = 12.0 * inch
# Force
N = kN / 1000.0
kips = kN * 4.448221615
lb = kips / 1.0e3
# Stress (kN/m2 or kPa)
Pa = N / (m ** 2)
kPa = Pa * 1.0e3
MPa = Pa * 1.0e6
GPa = Pa * 1.0e9
ksi = 6.8947573 * MPa
psi = 1e-3 * ksi
# Mass - Weight
tonne = kN * sec ** 2 / m
kg = N * sec ** 2 / m
# Gravitational acceleration
g = 9.81*m/sec**2
# Time
min = 60*sec
hr = 60*min
# -
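# A few sanity checks on the unit scheme above (a self-contained sketch; the basic definitions are restated here so the cell runs on its own):

```python
# Restate the basic units and verify a few derived conversions.
m, kN, sec = 1.0, 1.0, 1.0
mm = m / 1000.0
inch = 25.4 * mm
ft = 12.0 * inch
kips = kN * 4.448221615
hr = 60 * (60 * sec)
print(abs(inch - 0.0254) < 1e-12)   # True: 1 in = 25.4 mm
print(abs(ft - 0.3048) < 1e-12)     # True: 1 ft = 12 in
print(abs(hr - 3600.0) < 1e-9)      # True: 1 hr = 3600 sec
```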
# ## <font color=blue>3.3. Numerical Model of an Elastic Single Degree of Freedom (SDOF) System </font>
# In this section, an elastic SDoF system is modelled using OpenSeesPy library. Various OpenSeesPy commands are introduced.
# <font color=red><div style="text-align: right"> **Documentation for**
# [**`ops.wipe`**](https://openseespydoc.readthedocs.io/en/latest/src/wipe.html)
# [**`ops.model`**](https://openseespydoc.readthedocs.io/en/latest/src/model.html)
# [**`ops.node`**](https://openseespydoc.readthedocs.io/en/latest/src/node.html)
# [**`ops.mass`**](https://openseespydoc.readthedocs.io/en/latest/src/mass.html)
# [**`ops.fix`**](https://openseespydoc.readthedocs.io/en/latest/src/fix.html)
# [**`ops.uniaxialMaterial`**](https://openseespydoc.readthedocs.io/en/latest/src/uniaxialMaterial.html)
# [**`ops.element`**](https://openseespydoc.readthedocs.io/en/latest/src/element.html#)
# </div></font>
# +
# Single degree of freedom (SDOF) system properties
# ------------------------------------------------------------------------
# Dynamic properties
mass = 1*tonne # mass
T_n = 0.7*sec # natural period
omega_n = 2*np.pi/T_n # natural circular frequency
# Spring properties
k_el = mass*omega_n**2
# Dashpot properties
xi = 0.05 # damping ratio
omega_n = 2*np.pi/T_n # natural circular frequency
C = 2*xi*omega_n*mass # viscous damping coefficient
alpha = 1 # power factor (=1 means linear damping)
# Wipe any existing model
# ------------------------------------------------------------------------
ops.wipe()
# Create ModelBuilder (with 1-dimension and 1 DOF/node)
# ------------------------------------------------------------------------
ops.model('basic', '-ndm', 1, '-ndf', 1)
# Define nodes
# ------------------------------------------------------------------------
node1 = 1 # Tag for node 1 (fixed node)
node2 = 2 # Tag for node 2 (free node)
coord1 = 0.0 # 1 dimensional coordinate for node 1
coord2 = 0.0 # 1 dimensional coordinate for node 2
ops.node(node1, coord1)
ops.node(node2, coord2)
# Define single-point constraints
# ------------------------------------------------------------------------
ops.fix(node1, 1) # Fix node 1,
ops.fix(node2, 0) # release node 2 (this is optional, by default it is unrestrained)
# Define the nodal mass
# ------------------------------------------------------------------------
ops.mass(node2, mass)
# Define materials
# ------------------------------------------------------------------------
# spring
spring_tag = 1 # tag for spring material
ops.uniaxialMaterial('Elastic', spring_tag, k_el)
# dashpot
dashpot_tag = 2 # tag for dashpot material
ops.uniaxialMaterial('Viscous', dashpot_tag, C, alpha)
# Define elements
# ------------------------------------------------------------------------
ops.element('zeroLength', spring_tag, node1, node2, "-mat", spring_tag, "-dir", 1)
ops.element('zeroLength', dashpot_tag, node1, node2, "-mat", dashpot_tag, "-dir", 1)
# Check if assumed period is the same as computed period
# ------------------------------------------------------------------------
eigen_values = ops.eigen('-fullGenLapack', 1)
omega_computed = eigen_values[0]**0.5
T_computed = 2*np.pi/omega_computed
if np.isclose(T_computed, T_n): # compare floats with a tolerance rather than ==
    print('Both assumed and computed periods are equal.')
# -
# Define the load pattern
# ------------------------------------------------------------------------
with open('Records//GMR_names.txt') as file: # read the record names
gm_names = [line.rstrip() for line in file]
dts = np.loadtxt('Records//GMR_dts.txt') # load the time steps for records
gm_idx = 3 # index for record being applied
A_g = np.loadtxt('Records//'+gm_names[gm_idx]) # load the record file as an array
dt = dts[gm_idx] # time step of record
tsTag = 1 # tag for time series to use
pTag = 1 # tag for load pattern to use
ops.timeSeries('Path', tsTag, '-dt', dt, '-values', *A_g, '-factor', g) # time series object
ops.pattern('UniformExcitation', pTag, 1, '-accel', tsTag) # pattern object
# +
# Set analysis settings
# ------------------------------------------------------------------------
# Wipe any previous analysis object
ops.wipeAnalysis()
# Convergence Test -- determines when convergence has been achieved.
tol = 1.0e-8 # Set the tolerance (default)
iterMax = 50                            # Set the max number of iterations (default)
pFlag = 0 # Optional print flag (default is 0). Valid options: 0-5
nType = 2 # optional type of norm (default is 2). Valid options: 0-2
ops.test('NormDispIncr', tol, iterMax, pFlag, nType)
# SolutionAlgorithm -- determines the sequence of steps taken to solve the non-linear equation at the current time step
ops.algorithm('Newton', '-initial')
# DOF_Numberer -- determines the mapping between equation numbers and degrees-of-freedom
ops.numberer('RCM')
# SystemOfEqn/Solver -- within the solution algorithm, it specifies how to store and solve the system of equations in the analysis
ops.system('BandGeneral')
# Constraints handler: determines how the constraint equations are enforced in the analysis -- how it handles the boundary conditions/imposed displacements
ops.constraints('Transformation')
# Integrator -- determines the predictive step for time t+dt
# About Newmark Integrator;
# gamma = 1/2, beta = 1/4 --> Average Acceleration Method; Unconditionally stable
# gamma = 1/2, beta = 1/6 --> Linear Acceleration Method; Conditionally stable: requires Dt / T <= 0.551
gamma = 0.5 # Set Newmark gamma coefficient
beta = 0.25 # Set Newmark beta coefficient
ops.integrator('Newmark', gamma, beta)
# AnalysisType -- defines what type of analysis is to be performed ('Static', 'Transient' etc.)
ops.analysis('Transient')
# Initialize some parameters
# ------------------------------------------------------------------------
analysis_time = (len(A_g) - 1) * dt
analysis_dt = 0.001
outputs = {
"time": [0],
"rel_disp": [0],
"rel_accel": [0],
"rel_vel": [0],
"spring_force": [0],
"dashpot_force": [0],
"inertia_force": [0]
}
# Perform step by step analysis
# ------------------------------------------------------------------------
while ops.getTime() < analysis_time:
curr_time = ops.getTime()
ops.analyze(1, analysis_dt)
# Save outputs, you can use but you do not need recorders!
outputs["time"].append(curr_time)
outputs["rel_disp"].append(ops.nodeDisp(node2, 1))
outputs["rel_vel"].append(ops.nodeVel(node2, 1))
outputs["rel_accel"].append(ops.nodeAccel(node2, 1))
outputs["spring_force"].append(ops.basicForce(spring_tag)[0])
outputs["dashpot_force"].append(ops.basicForce(dashpot_tag)[0])
outputs["inertia_force"].append(ops.nodeAccel(node2, 1) * ops.nodeMass(node2, 1))
# convert lists to array
for item in outputs:
outputs[item] = np.array(outputs[item])
# Check for equality: left-hand side (LHS) of the equation of motion
outputs["LHS"]=outputs["inertia_force"] + outputs["dashpot_force"] + outputs["spring_force"]
# Check for equality: right-hand side (RHS) of the equation of motion
inputs = {"RHS": -A_g*g*mass/tonne, "time": np.arange(0,len(A_g)*dt,dt)}
# +
# Plot the results
# ------------------------------------------------------------------------
plt.figure()
plt.plot(outputs["time"], outputs["rel_disp"])
plt.xlabel('Time [sec]')
plt.ylabel('Relative Displacement [m]')
plt.grid(True)
plt.show()
plt.figure()
plt.plot(inputs["time"], inputs["RHS"], label = 'RHS')
plt.plot(outputs["time"], outputs["LHS"], label = 'LHS')
plt.xlabel('Time [sec]')
plt.ylabel('Force [kN]')
plt.grid(True)
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.show()
plt.figure()
plt.plot(outputs["rel_disp"], outputs["spring_force"], label='Spring')
plt.plot(outputs["rel_disp"], outputs["dashpot_force"], label='Dashpot')
plt.xlabel('Relative Displacement [m]')
plt.ylabel('Force [kN]')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.grid(True)
plt.show()
# -
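# As an independent cross-check (a sketch added here, not part of the original notebook): the Newmark average-acceleration scheme used above can be written directly in NumPy and run in free vibration, where the exact solution is known, to confirm it conserves amplitude and period.

```python
import numpy as np

# Newmark time stepping (Chopra's linear-system formulation) for a SDOF
# m*u'' + c*u' + k*u = 0, starting from an initial displacement.
def newmark_sdof(m, c, k, u0, v0, dt, nsteps, gamma=0.5, beta=0.25):
    u, v = u0, v0
    a = (-c * v - k * u) / m                        # initial acceleration
    us = [u]
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for _ in range(nsteps):
        # effective load for zero external force
        dp = m * (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1) * a) \
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a)
        u_new = dp / keff
        v_new = gamma / (beta * dt) * (u_new - u) + (1 - gamma / beta) * v \
                + dt * (1 - gamma / (2 * beta)) * a
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1) * a
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return np.array(us)

m, T = 1.0, 0.7
k = m * (2 * np.pi / T) ** 2                        # undamped spring, c = 0
us = newmark_sdof(m, 0.0, k, u0=1.0, v0=0.0, dt=0.001, nsteps=700)
print(round(us[-1], 3))                             # after one period, back near 1.0
```

# With gamma = 1/2 and beta = 1/4 the scheme is unconditionally stable and introduces no amplitude decay, only a small period elongation of order (omega*dt)**2.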
# ## <font color=blue>3.4. Uniaxial Material Testing via Static Analysis</font>
# OpenSees has a variety of *uniaxial* materials. These materials can be tested via displacement-controlled static analysis. As an example, three different materials will be tested.
#
# - [Elastic](https://openseespydoc.readthedocs.io/en/latest/src/ElasticUni.html) (Materal1): Is a elastic uniaxial material.
#
# - [ElasticPP](https://openseespydoc.readthedocs.io/en/latest/src/ElasticPP.html) (Materal2): Is a elastic perfectly-plastic uniaxial material.
#
# - [Steel01](https://openseespydoc.readthedocs.io/en/latest/src/steel01.html) (Materal3): Is a bilinear steel material with kinematic hardening and optional isotropic hardening described by a non-linear evolution equation.
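# The three backbones can be previewed without OpenSees: under monotonic loading, the elastic-perfectly-plastic response is simply the elastic force capped at the yield strength. The sketch below is plain NumPy with assumed `k` and `F_y` values, not the OpenSees implementation.

```python
import numpy as np

def elastic_pp_force(u, k=25.0, F_y=1.0):
    # Monotonic backbone of an elastic-perfectly-plastic spring:
    # linear up to the yield displacement F_y / k, constant beyond it
    return np.clip(k * np.asarray(u, dtype=float), -F_y, F_y)

u = np.array([0.0, 0.02, 0.04, 0.2])
print(elastic_pp_force(u))   # forces saturate at F_y once u exceeds F_y / k
```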
# ### <font color=blue>3.4.1 Monotonic Loading Test</font>
def monotonic_test(dref, nSteps, Material):
"""
Run monotonic loading test on a uniaxial material
    Args:
        dref: Reference displacement to which the test is run
        nSteps: Number of displacement increments
        Material: List containing the material properties
    Returns:
        Deformation: List containing the deformation of the material throughout the analysis
        Force: List containing the force in the material throughout the analysis
"""
# Wipe any existing model
# ------------------------------------------------------------------------
ops.wipe()
# Create ModelBuilder (with 1-dimension and 1 DOF/node)
# ------------------------------------------------------------------------
ops.model('basic', '-ndm', 1, '-ndf', 1)
# Define nodes
# ------------------------------------------------------------------------
node1 = 1 # Tag for node 1 (fixed node)
node2 = 2 # Tag for node 2 (free node)
coord1 = 0.0 # 1 dimensional coordinate for node 1
coord2 = 0.0 # 1 dimensional coordinate for node 2
ops.node(node1, coord1)
ops.node(node2, coord2)
# Define single-point constraints
# ------------------------------------------------------------------------
ops.fix(node1, 1) # Fix node 1,
ops.fix(node2, 0) # release node 2 (this is optional, by default it is unrestrained)
# Define the nodal mass
# ------------------------------------------------------------------------
ops.mass(node2, mass)
# Define materials
# ------------------------------------------------------------------------
# spring
spring_tag = 1
spring_type = Material[0]
spring_props = Material[1:]
ops.uniaxialMaterial(spring_type, spring_tag, *spring_props)
# low stiffness material (for numerical purposes)
dummy_tag = 2 # tag for low stiffness material
ops.uniaxialMaterial('Elastic', dummy_tag, 1e-5)
# Define elements
# ------------------------------------------------------------------------
ops.element('zeroLength', spring_tag, node1, node2, "-mat", spring_tag, "-dir", 1)
ops.element('zeroLength', dummy_tag, node1, node2, "-mat", dummy_tag, "-dir", 1)
# Define the load pattern
# ------------------------------------------------------------------------
ops.timeSeries('Linear', 1)
ops.pattern('Plain', 1, 1)
ops.load(2, 1)
# Set analysis settings
# ------------------------------------------------------------------------
# Wipe any previous analysis object
ops.wipeAnalysis()
# Convergence Test -- determines when convergence has been achieved.
tol = 1.0e-8 # Set the tolerance (default)
    iterMax = 50     # Set the max number of iterations (default)
pFlag = 0 # Optional print flag (default is 0). Valid options: 0-5
nType = 2 # optional type of norm (default is 2). Valid options: 0-2
ops.test('NormDispIncr', tol, iterMax, pFlag, nType)
# SolutionAlgorithm -- determines the sequence of steps taken to solve the non-linear equation at the current time step
ops.algorithm('Newton', '-initial')
# DOF_Numberer -- determines the mapping between equation numbers and degrees-of-freedom
ops.numberer('RCM')
# SystemOfEqn/Solver -- within the solution algorithm, it specifies how to store and solve the system of equations in the analysis
ops.system('BandGeneral')
# Constraints handler: determines how the constraint equations are enforced in the analysis -- how it handles the boundary conditions/imposed displacements
ops.constraints('Transformation')
# Integrator -- determines the predictive step for time t+dt
dU = dref/nSteps # displacement increment
ops.integrator('DisplacementControl', 2, 1, dU)
# AnalysisType -- defines what type of analysis is to be performed ('Static', 'Transient' etc.)
ops.analysis('Static')
# Initialize some parameters
# ------------------------------------------------------------------------
Force = [0]
Deformation = [0]
# Perform step by step analysis
# ------------------------------------------------------------------------
for l in range(0,nSteps,1):
ok = ops.analyze(1)
Force.append(ops.basicForce(spring_tag)[0])
Deformation.append(ops.basicDeformation(spring_tag)[0])
        if ok != 0:
            print("DispControl analysis FAILED")
            print("Analysis failed at step: %s" % (l))
print('-------------------------------------------------------------------------')
break
return [Deformation,Force]
# +
# Material properties
# ------------------------------------------------------------------------
eta = 0.2 # strength factor, reduce this to make it nonlinear
F_y = eta*mass*g # Yield strength
u_y = F_y/k_el # yield displacement
r_post = 0.1 # strain-hardening ratio (ratio between post-yield tangent and initial elastic tangent)
Material1 = ['Elastic', k_el] # Elastic
Material2 = ['ElasticPP', k_el, u_y] # Elastic-Perfectly Plastic
Material3 = ['Steel01', F_y, k_el, r_post] # Bilinear with kinematic hardening
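# With assumed numbers, the parameter sets are easy to sanity-check: the yield displacement must equal F_y / k_el, and the post-yield tangent of Steel01 is r_post * k_el. The values below are illustrative only; `mass`, `g` and `k_el` are defined earlier in the notebook.

```python
# Assumed values, standing in for the ones defined earlier in the notebook
g = 9.81        # gravity [m/s^2]
mass = 10.0     # mass [tonne]
k_el = 2000.0   # elastic stiffness [kN/m]

eta = 0.2
F_y = eta * mass * g      # yield strength [kN]
u_y = F_y / k_el          # yield displacement [m]
r_post = 0.1
k_post = r_post * k_el    # post-yield tangent stiffness [kN/m]
print(F_y, u_y, k_post)
```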
# Perform the analyses
# ------------------------------------------------------------------------
dref = 5*cm # maximum displacement to run the analysis
nSteps = 2000 # number of steps to run the analysis
outputs1 = monotonic_test(dref, nSteps, Material1)
outputs2 = monotonic_test(dref, nSteps, Material2)
outputs3 = monotonic_test(dref, nSteps, Material3)
# Plot the results
# ------------------------------------------------------------------------
plt.figure()
plt.plot(outputs1[0], outputs1[1], label='Elastic Material')
plt.plot(outputs2[0], outputs2[1], label='Elastic-Perfectly Plastic Material')
plt.plot(outputs3[0], outputs3[1], label='Bilinear material with kinematic hardening')
plt.xlabel('Deformation [m]')
plt.ylabel('Force')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.grid(True)
plt.show()
# -
# ### <font color=blue>3.4.2 Cyclic Loading Test</font>
def cyclic_test(dref, numCycles, nSteps, Material):
"""
Run cyclic loading test on a uniaxial material
Args:
        dref: Reference displacement to which cycles are run
        numCycles: Number of cycles; valid options are 1 to 6
        nSteps: Number of displacement increments per load reversal
        Material: List containing the material properties
    Returns:
        Deformation: List containing the deformation of the material throughout the analysis
        Force: List containing the force in the material throughout the analysis
"""
# Wipe any existing model
# ------------------------------------------------------------------------
ops.wipe()
# Create ModelBuilder (with 1-dimension and 1 DOF/node)
# ------------------------------------------------------------------------
ops.model('basic', '-ndm', 1, '-ndf', 1)
# Define nodes
# ------------------------------------------------------------------------
node1 = 1 # Tag for node 1 (fixed node)
node2 = 2 # Tag for node 2 (free node)
coord1 = 0.0 # 1 dimensional coordinate for node 1
coord2 = 0.0 # 1 dimensional coordinate for node 2
ops.node(node1, coord1)
ops.node(node2, coord2)
# Define single-point constraints
# ------------------------------------------------------------------------
ops.fix(node1, 1) # Fix node 1,
ops.fix(node2, 0) # release node 2 (this is optional, by default it is unrestrained)
# Define the nodal mass
# ------------------------------------------------------------------------
ops.mass(node2, mass)
# Define materials
# ------------------------------------------------------------------------
# spring
spring_tag = 1
spring_type = Material[0]
spring_props = Material[1:]
ops.uniaxialMaterial(spring_type, spring_tag, *spring_props)
# low stiffness material (for numerical purposes)
dummy_tag = 2 # tag for low stiffness material
ops.uniaxialMaterial('Elastic', dummy_tag, 1e-5)
# Define elements
# ------------------------------------------------------------------------
ops.element('zeroLength', spring_tag, node1, node2, "-mat", spring_tag, "-dir", 1)
ops.element('zeroLength', dummy_tag, node1, node2, "-mat", dummy_tag, "-dir", 1)
# Define the load pattern
# ------------------------------------------------------------------------
ops.timeSeries('Linear', 1)
ops.pattern('Plain', 1, 1)
ops.load(2, 1)
# Initialize some parameters
# ------------------------------------------------------------------------
Force = [0]
Deformation = [0]
# Set analysis settings
# ------------------------------------------------------------------------
# Wipe any previous analysis object
ops.wipeAnalysis()
# Convergence Test -- determines when convergence has been achieved.
tol = 1.0e-8 # Set the tolerance (default)
    iterMax = 50     # Set the max number of iterations (default)
pFlag = 0 # Optional print flag (default is 0). Valid options: 0-5
nType = 2 # optional type of norm (default is 2). Valid options: 0-2
ops.test('NormDispIncr', tol, iterMax, pFlag, nType)
# SolutionAlgorithm -- determines the sequence of steps taken to solve the non-linear equation at the current time step
ops.algorithm('Newton', '-initial')
# DOF_Numberer -- determines the mapping between equation numbers and degrees-of-freedom
ops.numberer('RCM')
# SystemOfEqn/Solver -- within the solution algorithm, it specifies how to store and solve the system of equations in the analysis
ops.system('BandGeneral')
# Constraints handler: determines how the constraint equations are enforced in the analysis -- how it handles the boundary conditions/imposed displacements
ops.constraints('Transformation')
    # Create the list of displacement reversals (each cycle: +dref, -2*dref, +dref)
    if numCycles in (1, 2, 3, 4, 5, 6):
        dispList = numCycles * [dref, -2 * dref, dref]
        dispNoMax = 3 * numCycles
    else:
        print("ERROR: Value for numCycles is not a valid choice. Choose an integer between 1 and 6")
        print('-------------------------------------------------------------------------')
# Iterate for each load reversal
for d in range(1,dispNoMax+1,1):
# Integrator -- determines the predictive step for time t+dt
dU = dispList[d-1]/nSteps # displacement increment
ops.integrator('DisplacementControl', 2, 1, dU)
# AnalysisType -- defines what type of analysis is to be performed ('Static', 'Transient' etc.)
ops.analysis('Static')
# Perform step by step analysis
# ------------------------------------------------------------------------
for l in range(0,nSteps,1):
ok = ops.analyze(1)
Force.append(ops.basicForce(spring_tag)[0])
Deformation.append(ops.basicDeformation(spring_tag)[0])
            if ok != 0:
                print("DispControl analysis FAILED")
                print("Analysis failed at cycle: %s and step: %s" % (d, l))
print('-------------------------------------------------------------------------')
break
return [Deformation,Force]
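# Each cycle in the reversal list above is `[dref, -2*dref, dref]`: push to `+dref`, reverse through zero down to `-dref`, then return to zero. Accumulating the increments (a standalone sketch, not part of the OpenSees model) makes those peak targets explicit.

```python
from itertools import accumulate

dref = 0.1    # reference displacement [m]
numCycles = 2
dispList = numCycles * [dref, -2 * dref, dref]

# Running sum of the reversal increments gives the peak targets:
# +dref, then -dref, then back to zero, for every cycle
targets = list(accumulate(dispList))
print(targets)
```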
# +
# Perform the analyses
# ------------------------------------------------------------------------
dref = 10*cm
numCycles = 2
nSteps = 1000
outputs1 = cyclic_test(dref, numCycles, nSteps, Material1)
outputs2 = cyclic_test(dref, numCycles, nSteps, Material2)
outputs3 = cyclic_test(dref, numCycles, nSteps, Material3)
# Plot the results
# ------------------------------------------------------------------------
plt.figure()
plt.plot(outputs1[0], outputs1[1], label='Elastic Material')
plt.plot(outputs2[0], outputs2[1], label='Elastic-Perfectly Plastic Material')
plt.plot(outputs3[0], outputs3[1], label='Bilinear material with kinematic hardening')
plt.xlabel('Deformation [m]')
plt.ylabel('Force')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.grid(True)
plt.show()
# -
# ## <font color=blue>3.5. Dynamic Analysis of Various SDOF Systems</font>
# Instead of performing a single dynamic analysis, a parametric study can be run over various SDOF systems, for example with different inputs (ground-motion signals, material properties, etc.).
def sdof_response(mass, motion, dt, Material, C, alpha = 1):
"""
Run seismic analysis of a nonlinear SDOF
:param mass: SDOF mass
:param motion: list, acceleration values
:param dt: float, time step of acceleration values
    :param Material: list containing the material properties
    :param C: damping coefficient of the viscous dashpot
    :param alpha: velocity exponent of the dashpot (1 = linear viscous)
    :return: dict of response time histories
"""
# Wipe any existing model
# ------------------------------------------------------------------------
ops.wipe()
# Create ModelBuilder (with 1-dimension and 1 DOF/node)
# ------------------------------------------------------------------------
ops.model('basic', '-ndm', 1, '-ndf', 1)
# Define nodes
# ------------------------------------------------------------------------
node1 = 1 # Tag for node 1 (fixed node)
node2 = 2 # Tag for node 2 (free node)
coord1 = 0.0 # 1 dimensional coordinate for node 1
coord2 = 0.0 # 1 dimensional coordinate for node 2
ops.node(node1, coord1)
ops.node(node2, coord2)
# Define single-point constraints
# ------------------------------------------------------------------------
ops.fix(node1, 1) # Fix node 1,
ops.fix(node2, 0) # release node 2 (this is optional, by default it is unrestrained)
# Define the nodal mass
# ------------------------------------------------------------------------
ops.mass(node2, mass)
# Define materials
# ------------------------------------------------------------------------
# spring
spring_tag = 1
spring_type = Material[0]
spring_props = Material[1:]
ops.uniaxialMaterial(spring_type, spring_tag, *spring_props)
# dashpot
dashpot_tag = 2 # tag for dashpot material
ops.uniaxialMaterial('Viscous', dashpot_tag, C, alpha)
# Define elements
# ------------------------------------------------------------------------
ops.element('zeroLength', spring_tag, node1, node2, "-mat", spring_tag, "-dir", 1)
ops.element('zeroLength', dashpot_tag, node1, node2, "-mat", dashpot_tag, "-dir", 1)
# Define the load pattern
# ------------------------------------------------------------------------
tsTag = 1 # tag for time series to use
pTag = 1 # tag for load pattern to use
    values = list(motion)           # the caller supplies the already-negated record
    ops.timeSeries('Path', tsTag, '-dt', dt, '-values', *values, '-factor', g)  # time series object
ops.pattern('UniformExcitation', pTag, 1, '-accel', tsTag) # pattern object
# Set analysis settings
# ------------------------------------------------------------------------
# Wipe any previous analysis object
ops.wipeAnalysis()
# Convergence Test -- determines when convergence has been achieved.
tol = 1.0e-8 # Set the tolerance (default)
    iterMax = 50     # Set the max number of iterations (default)
pFlag = 0 # Optional print flag (default is 0). Valid options: 0-5
nType = 2 # optional type of norm (default is 2). Valid options: 0-2
ops.test('NormDispIncr', tol, iterMax, pFlag, nType)
# SolutionAlgorithm -- determines the sequence of steps taken to solve the non-linear equation at the current time step
ops.algorithm('Newton', '-initial')
# DOF_Numberer -- determines the mapping between equation numbers and degrees-of-freedom
ops.numberer('RCM')
# SystemOfEqn/Solver -- within the solution algorithm, it specifies how to store and solve the system of equations in the analysis
ops.system('BandGeneral')
# Constraints handler: determines how the constraint equations are enforced in the analysis -- how it handles the boundary conditions/imposed displacements
ops.constraints('Transformation')
# Integrator -- determines the predictive step for time t+dt
# About Newmark Integrator;
# gamma = 1/2, beta = 1/4 --> Average Acceleration Method; Unconditionally stable
    # gamma = 1/2, beta = 1/6 --> Linear Acceleration Method; conditionally stable, requires dt / T <= 0.551
gamma = 0.5 # Set Newmark gamma coefficient
beta = 0.25 # Set Newmark beta coefficient
ops.integrator('Newmark', gamma, beta)
# AnalysisType -- defines what type of analysis is to be performed ('Static', 'Transient' etc.)
ops.analysis('Transient')
# Initialize some parameters
# ------------------------------------------------------------------------
analysis_time = (len(values) - 1) * dt
analysis_dt = 0.001
outputs = {
"time": [0],
"rel_disp": [0],
"rel_accel": [0],
"rel_vel": [0],
"spring_force": [0],
"dashpot_force": [0]
}
# Perform step by step analysis
# ------------------------------------------------------------------------
while ops.getTime() < analysis_time:
curr_time = ops.getTime()
ops.analyze(1, analysis_dt)
        # Save outputs (recorders could be used here, but are not needed)
outputs["time"].append(curr_time)
outputs["rel_disp"].append(ops.nodeDisp(node2, 1))
outputs["rel_vel"].append(ops.nodeVel(node2, 1))
outputs["rel_accel"].append(ops.nodeAccel(node2, 1))
outputs["spring_force"].append(ops.basicForce(spring_tag)[0])
outputs["dashpot_force"].append(ops.basicForce(dashpot_tag)[0])
for item in outputs:
outputs[item] = np.array(outputs[item])
return outputs
# Perform sequential dynamic analyses for 3 models
# ------------------------------------------------------------------------
for idx in range(len(gm_names)):  # loop through all the records
    A_g = -np.loadtxt('Records/' + gm_names[idx])  # record (g)
dt = dts[idx] # time step of record (sec)
outputs1 = sdof_response(mass, A_g, dt, Material1, C, alpha = 1)
outputs2 = sdof_response(mass, A_g, dt, Material2, C, alpha = 1)
outputs3 = sdof_response(mass, A_g, dt, Material3, C, alpha = 1)
acc1 = np.max(outputs1["rel_accel"]); vel1 = np.max(outputs1["rel_vel"]); disp1 = np.max(outputs1["rel_disp"]);
acc2 = np.max(outputs2["rel_accel"]); vel2 = np.max(outputs2["rel_vel"]); disp2 = np.max(outputs2["rel_disp"]);
acc3 = np.max(outputs3["rel_accel"]); vel3 = np.max(outputs3["rel_vel"]); disp3 = np.max(outputs3["rel_disp"]);
print('Ground Motion Record File: %s' %gm_names[idx])
print('Peak Accelerations (g) \nModel 1: %.3f\nModel 2: %.3f\nModel 3: %.3f' % (acc1/g, acc2/g, acc3/g))
print('Peak Velocities (m/s) \nModel 1: %.3f\nModel 2: %.3f\nModel 3: %.3f' % (vel1, vel2, vel3))
print('Peak Displacements (m) \nModel 1: %.3f\nModel 2: %.3f\nModel 3: %.3f' % (disp1, disp2, disp3))
print('------------------------------------------------------------------------')
# +
# Plot the results from the final analyses
# ------------------------------------------------------------------------
plt.figure()
plt.plot(outputs1["time"], outputs1["rel_disp"], label='Elastic Material')
plt.plot(outputs2["time"], outputs2["rel_disp"], label='Elastic-Perfectly Plastic Material')
plt.plot(outputs3["time"], outputs3["rel_disp"], label='Bilinear material with kinematic hardening')
plt.xlabel('Time [sec]')
plt.ylabel('Relative Displacement [m]')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.grid(True)
plt.show()
plt.figure()
plt.plot(outputs1["rel_disp"], outputs1["spring_force"], label='Elastic Material')
plt.plot(outputs2["rel_disp"], outputs2["spring_force"], label='Elastic-Perfectly Plastic Material')
plt.plot(outputs3["rel_disp"], outputs3["spring_force"], label='Bilinear material with kinematic hardening')
plt.xlabel('Relative Displacement [m]')
plt.ylabel('Force [kN]')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.grid(True)
plt.show()
plt.figure()
plt.plot(outputs1["rel_disp"], outputs1["dashpot_force"], label='Elastic Material')
plt.plot(outputs2["rel_disp"], outputs2["dashpot_force"], label='Elastic-Perfectly Plastic Material')
plt.plot(outputs3["rel_disp"], outputs3["dashpot_force"], label='Bilinear material with kinematic hardening')
plt.xlabel('Relative Displacement [m]')
plt.ylabel('Force [kN]')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)
plt.grid(True)
plt.show()
# -
# [Back to 2. Python for Beginners](./2.%20Python.ipynb)
#
# [Jump to 4. 2D Frame Systems](./4.%20Frame.ipynb)
| 3. SDOF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Archivos y Bases de datos
import mysql.connector
# The idea of this workshop is to manipulate files (read, parse, and write them) and to do the same with structured databases.
# ## Exercise 1
#
# Download the "All associations with added ontology annotations" file from the GWAS Catalog:
# + https://www.ebi.ac.uk/gwas/docs/file-downloads
#
# Describe the columns of the file (_what information are we looking at? What is it for? Why was it created?_)
import pandas as pd
df= pd.read_csv('C:/Users/Alex/Documents/eafit/semestres/X semestre/programacion/taller2.tsv', sep = '\t')
df[:1]
# Which entities (tables) can you define?
# 1. Disease (`enfermedad`)
# 2. Platform (`plataforma`, the sequencing technology)
# 3. Loci (`loci`)
# 4. Disease-loci (`enfermedad_loci`)
# 5. Journal (`journal`)
# 6. Study (`estudio`)
# 7. Publication (`publicacion`)
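# A quick way to see what each entity table will hold is to pull the distinct values of the corresponding column from the DataFrame. The sketch below uses a tiny synthetic frame; the real notebook works with the GWAS columns such as `DISEASE/TRAIT` and `PLATFORM [SNPS PASSING QC]`.

```python
import pandas as pd

# Tiny synthetic stand-in for the GWAS associations table
toy = pd.DataFrame({
    "DISEASE/TRAIT": ["Asthma", "Asthma", "Obesity"],
    "PLATFORM [SNPS PASSING QC]": ["Affymetrix", "Illumina", "Illumina"],
})

# Each distinct value becomes one row of the corresponding entity table
diseases = sorted(toy["DISEASE/TRAIT"].unique())
platforms = sorted(toy["PLATFORM [SNPS PASSING QC]"].unique())
print(diseases, platforms)
```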
# Create the database (copy the SQL code that was used)
# +
CREATE TABLE enfermedad
(
id_enfermedad int PRIMARY KEY,
nombre varchar(255)
);
create table plataforma
(
id_plataforma int primary key,
nombre varchar(255)
);
CREATE TABLE loci
(
id_loci int NOT NULL PRIMARY KEY,
region varchar(255),
chrom varchar(255),
pos int,
genes_reportados int,
gen_mapped varchar(255),
gen_upstream int,
gen_downstream int,
SNP_GENE_IDS int,
UPSTREAM_GENE_DISTANCE int,
DOWNSTREAM_GENE_DISTANCE int,
STRONGEST_SP_RISK varchar(255),
SNPS varchar(255),
MERGED int,
SNP_ID_CURRENT varchar(255),
CONTEXTO varchar(255),
risk_allele varchar(255),
pval float,
PVALUE_MLOG float,
PVALUE_txt varchar(255),
BETA float,
novCI varchar(255),
id_plataforma int,
foreign key (id_plataforma) references plataforma(id_plataforma)
);
CREATE TABLE enfermedad_loci
(
id_enfermedad int,
id_loci int,
PRIMARY KEY (id_enfermedad, id_loci),
foreign key (id_enfermedad) references enfermedad(id_enfermedad),
foreign key (id_loci) references loci(id_loci)
);
CREATE TABLE journal
(
id_journal int primary key,
nombre varchar(255)
);
create table publicacion
(
id_publicacion int primary key,
id_pubmed int,
autor varchar (255),
fecha_pub varchar (20),
link varchar (255),
id_journal int,
id_estudio int,
foreign key (id_journal) references journal(id_journal)
);
CREATE TABLE estudio
(
id_estudio int primary key,
nombre varchar(255),
id_enfermedad int,
id_publicacion int,
tamano_muestra int,
replicas int,
foreign key (id_publicacion) references publicacion(id_publicacion),
foreign key (id_enfermedad) references enfermedad(id_enfermedad)
);
-- publicacion and estudio reference each other, so the second
-- foreign key is added once both tables already exist:
ALTER TABLE publicacion
ADD FOREIGN KEY (id_estudio) REFERENCES estudio(id_estudio);
# -
# ## Ejercicio 2
#
# Read the file and store the information in the database, in the tables defined in __Exercise 1__.
df.head(1)
# +
hostname = '127.0.0.1'
username = 'alexacl95'
password = '<PASSWORD>'
database = 'programacion'
def doQuery( conn ) :
cur = conn.cursor()
cur.execute( "select * from enfermedad" )
for id_nombre, nombre_enf in cur.fetchall() :
print (id_nombre, nombre_enf)
myConnection = mysql.connector.connect( host=hostname, user=username, passwd=password, db=database )
doQuery( myConnection )
myConnection.close()
# -
def get_diseaseId(disease_name):
cur = myConnection.cursor()
cur.execute( """select * from enfermedad where nombre = "%s" """ % (disease_name) )
id_enf = None
for id_, nombre_enf in cur.fetchall() :
id_enf = id_
if not id_enf:
cur.execute("""insert into enfermedad values (NULL, "%s" )""" % (disease_name))
cur.execute("SELECT LAST_INSERT_ID()")
id_enf = cur.fetchall()[0][0]
myConnection.commit()
return id_enf
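# `get_diseaseId` implements a get-or-create pattern: look the name up, and only insert (then read back `LAST_INSERT_ID()`) when it is missing. The control flow can be tested without a MySQL server by letting a dict stand in for the table; the dict and its ids are purely illustrative.

```python
def get_or_create(table, name):
    # `table` is a dict {name: id} standing in for the SQL table;
    # len(table) + 1 plays the role of LAST_INSERT_ID()
    if name not in table:
        table[name] = len(table) + 1
    return table[name]

enfermedad = {}
ids = [get_or_create(enfermedad, n) for n in ["Asthma", "Obesity", "Asthma"]]
print(ids)   # the repeated name gets its existing id back
```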
# +
def get_platId(plat_name):
cur = myConnection.cursor()
cur.execute( """select * from plataforma where nombre = "%s" """ % (plat_name) )
id_plat = None
for id_, nombre_plat in cur.fetchall() :
id_plat = id_
if not id_plat:
print("""insert into plataforma values (NULL, "%s" )""" % (plat_name))
cur.execute("""insert into plataforma values (NULL, "%s" )""" % (plat_name))
cur.execute("SELECT LAST_INSERT_ID()")
id_plat = cur.fetchall()[0][0]
myConnection.commit()
return id_plat
for index, row in df.iterrows():
plat_name = row['PLATFORM [SNPS PASSING QC]']
plat_id = get_platId(plat_name)
# -
def get_lociId(loci_name):
    cur = myConnection.cursor()
    cur.execute( """select id_loci from loci where region = "%s" """ % (loci_name) )
    id_loci = None
    for (id_,) in cur.fetchall() :
        id_loci = id_
    if not id_loci:
        # NOTE: a real insert must fill in all the loci columns defined above;
        # only the region is shown here for brevity
        cur.execute("""insert into loci (region) values ("%s")""" % (loci_name))
        cur.execute("SELECT LAST_INSERT_ID()")
        id_loci = cur.fetchall()[0][0]
    myConnection.commit()
    return id_loci
# +
hostname = '127.0.0.1'
username = 'alexacl95'
password = '<PASSWORD>'
database = 'programacion'
myConnection = mysql.connector.connect( host=hostname, user=username, passwd=password, db=database )
for index, row in df.iterrows():
dis_name = row['DISEASE/TRAIT']
dissease_id = get_diseaseId(dis_name)
print()
myConnection.close()
# -
# ## Exercise 3
#
# Run a query against the database that answers a biological question
# (e.g. which genes are related to which diseases)
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_query.html
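# With a live connection, `pandas.read_sql_query` turns the SQL result directly into a DataFrame, and `to_csv` covers Exercise 4. The sketch below uses an in-memory SQLite database as a stand-in for MySQL so the pattern is runnable anywhere; the table and column names mirror the schema above.

```python
import sqlite3
import pandas as pd

# In-memory SQLite stands in for the MySQL server
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE enfermedad (id_enfermedad INTEGER PRIMARY KEY, nombre TEXT);
    INSERT INTO enfermedad (nombre) VALUES ('Asthma'), ('Obesity');
""")

df = pd.read_sql_query("SELECT id_enfermedad, nombre FROM enfermedad", conn)
df.to_csv("consulta.csv", index=False)   # Exercise 4: dump the query result
print(df["nombre"].tolist())
```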
# ## Exercise 4
#
# Save the result of the previous query in a CSV file
| Alex/Taller 2 - Archivos y Bases de Datos.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.3
# language: julia
# name: julia-1.5
# ---
# ### Vanden-Eijnden
using PyPlot, StatsBase, Printf, DelimitedFiles, Combinatorics;
using Revise;
using MDToolbox;
PyPlot.plt.style.use("seaborn-colorblind");
ENV["COLUMNS"] = 110; #display width for MDToolbox
# ### Potential energy function and its gradients
V(x; k=1.0) = sum(- (1.0 / 2.0) .* k .* x.^2 .+ (1.0 ./ 4.0) .* k .* x.^4)
#V(x; k=1.0) = sum((1.0 .- x.^2).^2 .- 0.25 .* x)
V([0.0])
grad(x; k=1.0) = - k .* x .+ k .* x.^3
#grad(x; k=1.0) = -2.0.*(1.0 .- x.^2).*2.0.*x .- 0.25
#ISgrad(x, beta_replica, beta_sigma; k=1.0) = (-k .* x .+ k .* x.^3) ./ beta_replica .* beta_sigma
ISgrad(x, beta_replica, beta_sigma; k=1.0) = (-2.0.*(1.0 .- x.^2).*2.0.*x .- 0.25) ./ beta_replica .* beta_sigma
# ### Define functions for infinite swapping
# The pdf of the i-th replica:
# \begin{align}
# {\rho}_i({\bf x_i}) = e^{-V({\bf x_i})/T_i}
# \end{align}
# Letting Q(σ, X) be the pdf of the whole system,
# the pdf under a permutation σ is
# \begin{align}
# Q(\sigma, X) = \frac{1}{N!} \, {\rho}_{\sigma(1)}({\bf x_1}) \cdots {\rho}_{\sigma(N)}({\bf x_N})
# \end{align}
# The potential under a permutation σ:
# \begin{align}
# V(X, \sigma) = -\beta^{-1}{\bf log}Q(\sigma,X)=\beta^{-1}\sum_{i=1}^{N}\beta_{\sigma(i)}V(x_i)+const
# \end{align}
# Equation of motion:
# \begin{align}
# ma_j=-\beta^{-1}\beta_{\sigma(j)}\nabla V(x_j)-\gamma mv+\sqrt{2\gamma m \beta^{-1}}
# \end{align}
# Coordinate update for the next step:
# \begin{align}
# x_{next}=x_{current} -{\Delta t} \nabla{V(x_{current})} + \sqrt{2 \Delta t\beta^{-1}}(rand)
# \end{align}
#
# Equation of motion for ISREMD:
# \begin{align}
# ma_j=-\beta^{-1}\sum_{\sigma} \beta_{\sigma(j)}\omega_x(\sigma)\nabla V(x_j)-\gamma mv+ \sqrt{2\gamma m \beta^{-1}}
# \end{align}
# Coordinate update for the next ISREMD step:
# \begin{align}
# x_{next}=x_{current} -{\Delta t}\nabla{V(x_{current})}\beta^{-1}_{replica}\sum_{\sigma}\beta _{\sigma(replica)} \omega_X(\sigma) + \sqrt{2 \Delta t\beta^{-1}}(rand)
# \end{align}
#
#
# Weight of each permutation:
#
# \begin{align}
# \omega_X({\sigma}):= \frac{ Q(\sigma,X)}{ \sum_{{\sigma}'}Q({\sigma}',X)}
# \end{align}
# \begin{align}
# <A>_j=\int A(x)\rho_j(x)dx
# =\int \sum_{i=1}^N A(x_i)\sum_{\sigma}1_{j=\sigma(i)}\omega_X(\sigma)
# \end{align}
# ## The distribution of x at temperature T is proportional to the following π
#
# \begin{align}
# \pi({\bf x},T) = e^{-V({\bf x})/T}
# \end{align}
rho(x, T) = exp.(-V(x) ./ T)
function calc_omega!(omega, x, T, perm)
nreplica = length(x)
nperm=length(perm)
qu = ones(Float64,nperm)
qu_sum = 0.0
for n = 1:nperm
for i = 1:nreplica
qu[n] *= rho(x[i], T[perm[n][i]])
end
qu[n] = 1/nperm * qu[n]
qu_sum += qu[n]
end
for n = 1:nperm
omega[n] = qu[n] / qu_sum
end
end
function calc_betasigma!(beta_sigma, beta_replica, perm,omega)
beta_sigma .= 0.0
nreplica=length(beta_replica)
nperm=length(perm)
for i=1:nreplica
for n=1:nperm
beta_sigma[i] += beta_replica[perm[n][i]] * omega[n]
end
end
end
function flush_weight(io::IOStream, m, omega, perm)
nperm = length(perm)
nreplica = length(perm[1])
weight = zeros(Float64, nreplica)
for n = 1:nperm
#id_replica = perm[n][m]
id_replica = findall(iszero, perm[n] .- m)[1]
weight[id_replica] += omega[n]
end
for i = 1:nreplica
@printf(io, "%f ", weight[i])
end
@printf(io, "\n")
end
# ### Infinite swap MD
# +
nreplica = 4
temperature_replica = [0.01, 0.10, 0.30, 0.40];
#temperature_replica = [0.01, 0.10, 0.50, 10.0];
#temperature_replica = [0.04, 1.25];
#temperature_replica = [0.04, 0.05, 0.06, 1.25];
nstep = 1000000;
perm=collect(permutations(1:nreplica))
nperm=factorial(nreplica)
omega=zeros(Float64,nperm)
beta_sigma=zeros(Float64,nreplica)
beta_replica=1 ./ temperature_replica
x_replica = []
for i = 1:nreplica
x = [0.0]
push!(x_replica, x)
end
io_replica = []
for i = 1:nreplica
filename = "is_replica$(i).dat"
io = open(filename, "w")
push!(io_replica, io)
end
io_weight=[]
for m = 1:nreplica
filename = "is_weight$(m).dat"
io = open(filename, "w")
push!(io_weight, io)
end
# -
icount = 0
for istep = 1:nstep
for i = 1:nreplica
calc_omega!(omega, x_replica, temperature_replica, perm)
calc_betasigma!(beta_sigma, beta_replica, perm, omega)
x_replica[i] = propagate_md(y -> ISgrad(y,beta_replica[i], beta_sigma[i], k=1.0), x_replica[i], temperature_replica[i], nstep=1, io=io_replica[i]);
#x_replica[i] = propagate_md(y -> ISgrad(y, beta_replica[1], beta_sigma[i], k=1.0), x_replica[i], temperature_replica[1], nstep=1, io=io_replica[i]);
end
for m=1:nreplica
flush_weight(io_weight[m], m, omega, perm)
end
end
for i = 1:nreplica
close(io_replica[i])
close(io_weight[i])
end
# ### Trajectory analysis
# +
traj_replica = []
temp_replica = []
for i = 1:nreplica
filename = "is_replica$(i).dat"
data = readdlm(filename);
push!(temp_replica, data[:, 1])
push!(traj_replica, data[:, 2])
end
# +
fig, ax = subplots(figsize=(8, 6))
for i = 1:nreplica
ax.plot(traj_replica[i][:], linewidth=0.5)
end
xlabel("step",fontsize=20)
ylabel("x(step)",fontsize=20)
ax.legend(["replica 1", "replica 2", "replica 3", "replica 4"])
tight_layout()
# -
filename = "is_weight1.dat"
weight_replica = readdlm(filename);
# +
fig, ax = subplots(figsize=(8, 6))
ax.plot(weight_replica[:, 1], linewidth=0.3)
xlabel("step",fontsize=20)
ylabel("weight(step)",fontsize=20)
ax.legend(["replica 1", "replica 2", "replica 3", "replica 4"])
tight_layout()
# +
traj = traj_replica[1]
weight = weight_replica[:, 1]
for i = 2:nreplica
traj = [traj; traj_replica[i]]
weight = [weight; weight_replica[:, i]]
end
weight .= weight ./ sum(weight)
# -
#x_grid = range(-1.3, 1.3, length=100);
x_grid = collect(-1.3:0.1:1.3)
pmf_theory = V.(x_grid) ./ temperature_replica[1]
pmf_theory .= pmf_theory .- minimum(pmf_theory);
size(weight)
pmf_observed, _ = compute_pmf(traj_replica[1], weight=weight_replica[:, 1], grid_x = collect(x_grid), bandwidth=0.05);
# +
fig, ax = subplots(figsize=(8, 6))
ax.plot(x_grid, pmf_theory, linewidth=3)
xlabel("x",fontsize=20)
ylabel("PMF (KBT)",fontsize=20)
ax.plot(x_grid, pmf_observed, linewidth=3)
ax.legend(["theory", "observed"])
ax.xaxis.set_tick_params(which="major",labelsize=15)
ax.yaxis.set_tick_params(which="major",labelsize=15)
ax.grid(linestyle="--", linewidth=0.5)
tight_layout()
savefig("md_infinite_swap.png", dpi=350)
# -
x_grid[4], x_grid[24]
abs(pmf_observed[4] - pmf_observed[24])
| infinite_swapping/md_infinite_swap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.7 64-bit (''azureml'': conda)'
# name: python3
# ---
# ## Install the Azure ML Python SDK:
# !pip install azureml-sdk
# ## Create Workspace
# +
from azureml.core import Workspace
ws = Workspace.create(name='aml-workshop', # provide a name for your workspace
subscription_id='<KEY>', # provide your subscription ID
resource_group='aml-workshop', # provide a resource group name
create_resource_group=True,
location='australiaeast') # e.g. 'westeurope' or 'eastus2' or 'westus2' or 'southeastasia'.
# write out the workspace details to a configuration file: .azureml/config.json
ws.write_config(path='.azureml')
# -
# ## Create Compute Target
# The following example creates a compute target in your workspace with:
#
# * VM type: CPU
# * VM size: STANDARD_D2_V2
# * Cluster size: up to 4 nodes
# * Idle time: 2400s before the node scales down automatically
# Modify this code to update to GPU, or to change the SKU of your VMs.
# +
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
ws = Workspace.from_config() # automatically looks for a directory .azureml/
# name for your cluster
cpu_cluster_name = "cpu-cluster"
try:
# check if cluster already exists
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# if not, create it
compute_config = AmlCompute.provisioning_configuration(
vm_size='STANDARD_D2_V2',
max_nodes=4,
idle_seconds_before_scaledown=2400,)
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
# -
| deploy/install.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="zEktOX6vkP-z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34}
print("Hello Github")
# + id="_iXhAhzkkW5z" colab_type="code" colab={}
| HelloGithub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### This notebook will be an initial exploration of Folium's capabilities.
#
# Per the use cases supplied by ACEP, we want to create a map of Alaska with icons of solar installations over the various regions. When the cursor hovers over the region, a box will pop up which displays the number of installations that were analyzed in that region along with the highest normalized annual production, and the average normalized annual production.
import os
import folium
import pandas as pd
alaska_map = folium.Map(location = [63.977161, -152.024667], zoom_start = 4)
alaska_map
alaska_terrain = folium.Map(location = [63.977161, -152.024667], tiles = 'Stamen Terrain', zoom_start = 4)
alaska_terrain
tooltip = 'Click for more info'
#add a marker for the site of Fairbanks. Also, add a hoverover text box.
icon_type = 'star'
folium.Marker([64.838033, -147.668970], popup = 'Fairbanks, AK.', icon = folium.Icon(icon = icon_type),
tooltip = tooltip).add_to(alaska_terrain)
alaska_terrain
#Add markers for other major cities.
#Anchorage
folium.Marker([61.193625, -149.694974], popup = 'Anchorage, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Kodiak
folium.Marker([57.794934, -152.392466], popup = 'Kodiak, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Nome
folium.Marker([64.516804, -165.399455], popup = 'Nome, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Juneau
folium.Marker([58.306096, -134.422802], popup = 'Juneau, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Prudhoe Bay
folium.Marker([70.212820, -148.408886], popup = 'Prudhoe Bay, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Kotzebue
folium.Marker([66.888948, -162.512280], popup = 'Kotzebue, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Bethel
folium.Marker([60.794938, -161.770716], popup = 'Bethel, AK.',
tooltip = tooltip).add_to(alaska_terrain)
#Function that enables the map to show a lattitude and longitude when clicked.
alaska_terrain.add_child(folium.LatLngPopup())
#Display the map we've made.
alaska_terrain
# You can also embed into each of these markers things like graphs. We would just need three lines of text, which would probably have to be hand-updated. Or, it could reference a variable which is set by a function call? Unclear what the right strategy is now, but we can definitely find an answer here.
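# A minimal sketch of one way to do this (the region numbers here are made up purely for
# illustration): render a small pandas table to HTML and hand that string to a popup.

```python
import pandas as pd

# Hypothetical per-region statistics -- not real ACEP data.
stats = pd.DataFrame(
    {"Quantity": [3, 1150.4, 982.7]},
    index=["Installations", "Max annual kWh/kW", "Mean annual kWh/kW"],
)
popup_html = stats.to_html()

# Embedding it into a marker would then look like (requires folium):
# folium.Marker([64.84, -147.67], popup=folium.Popup(popup_html)).add_to(alaska_terrain)
```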
# +
#To save this graphic as an interactive map:
#alaska_terrain.save("Practice Marker Map.html")
# -
# For us, the next step then would be to find a way to express data in these popups. I'll just do it for Fairbanks, as right now the S.o.A. Fairbanks panels are the ones we've been working with.
#First step is to write the code that pulls in the data. We'll work with pre-formatted data.
panel_data = pd.read_excel(io = 'spirit_of_alaska_fairbanks_cleaned.xlsx')
panel_data.head()
#Next, we'll need to generate a representation of the data we want to present as a table, in HTML
#First step: Calculate the normalized annual production.
normalized_annual_production = panel_data.mean(axis = 0, skipna = True)
print(normalized_annual_production)
print(normalized_annual_production[1])
#Ok, and the last thing is to normalize by the capacity of the Johansen Solar array - 2.8 kW
johansen_normalized = normalized_annual_production[1]/2.8
print(johansen_normalized)
# +
#Ok, so we can also use PVWatts to calculate what a similar system should produce in a year.
#To make it comparable, we'll normalize the predictions based on the installation capacity.
#We'll simulate a 4 kW system, using normal pvwatts parameters and for Fairbanks AK.
#Our results is 3,698 kWh per year for a 4 kW DC system.
pvwatts = 3698
pvwatts_normalized = pvwatts/4
#Next, we'll make an entry for the number of installations: 1
number_of_installations = 1
#And finally, make the dataframe we'll use:
fairbanks_display_dataframe = pd.DataFrame(data = [['Number of Installations', int(number_of_installations)],
['PVWatts Predicted Annual Power', pvwatts_normalized],
['Average Produced Annual Power', johansen_normalized],
['Maximum Produced Annual Power', johansen_normalized]],
columns = ['Local Values', 'Quantity'])
print(fairbanks_display_dataframe)
# -
#Ok, and now it's time to embed that dataframe!
html = fairbanks_display_dataframe.to_html()
fairbanks_popup = folium.Popup(html)
folium.Marker([64.838033, -147.668970], popup = fairbanks_popup).add_to(alaska_terrain)
alaska_terrain
# +
#And then save this graphic too.
#alaska_terrain.save(os.path.join('alaska map.html', 'html_popups.html'))
# +
#Exploring making the map icon interactive:
#Pulled directly offline - note: this snippet assumes HERE.com app_id and app_code
#credentials are defined beforehand; it will raise a NameError otherwise.
import folium
from ipywidgets import interact
cities = ["Berlin", "Paris", "Chicago", "Singapore"]
examples = ["Traffic", "Truck info", "Transit", "Regular", "Satellite"]
@interact(city=cities, example=examples)
def show_canned_examples(city, example):
attr = "HERE.com"
latlon_for_city = {
"Berlin": (52.518611, 13.408333),
"Paris": (48.8567, 2.3508),
"Chicago": (41.88416, -87.63243),
"Singapore": (1.283333, 103.833333)
}
zoom = 14
queries = {
"Traffic":
"https://1.traffic.maps.api.here.com/maptile/2.1/traffictile/newest/normal.traffic.day/{z}/{x}/{y}/256/png8?lg=eng&app_id=%s&app_code=%s" % (app_id, app_code),
"Regular":
"https://1.base.maps.api.here.com/maptile/2.1/maptile/newest/normal.day/{z}/{x}/{y}/256/png8?lg=eng&app_id=%s&app_code=%s" % (app_id, app_code),
"Truck info":
"https://1.base.maps.api.here.com/maptile/2.1/trucktile/newest/normal.day.grey/{z}/{x}/{y}/256/png8?lg=eng&app_id=%s&app_code=%s" % (app_id, app_code),
"Transit":
"https://1.base.maps.api.here.com/maptile/2.1/maptile/newest/normal.day.transit/{z}/{x}/{y}/256/png8?lg=eng&app_id=%s&app_code=%s" % (app_id, app_code),
"Satellite":
"https://1.aerial.maps.api.here.com/maptile/2.1/maptile/newest/hybrid.day/{z}/{x}/{y}/256/png8?lg=eng&app_id=%s&app_code=%s" % (app_id, app_code),
}
return folium.Map(location=latlon_for_city[city], tiles=queries[example],attr=attr, zoom_start=zoom)
# +
from ipywidgets import interact
cities = ["Berlin", "Paris", "Chicago", "Singapore"]
#examples = ["Traffic", "Truck info", "Transit", "Regular", "Satellite"]
@interact(city=cities)
def show_canned_examples(city):
attr = "HERE.com"
latlon_for_city = {
"Berlin": (52.518611, 13.408333),
"Paris": (48.8567, 2.3508),
"Chicago": (41.88416, -87.63243),
"Singapore": (1.283333, 103.833333)
}
zoom = 14
return folium.Map(location = latlon_for_city[city])
# -
#
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
def f(x):
return x
#This is the slider bar code exactly that we would use for the tilt angle graph.
interact(f, x = widgets.IntSlider(min = 5, max = 80, step = 5, value = 45));
import vincent
import os
import json
import requests
import numpy as np
import pandas as pd
vincent.core.initialize_notebook()
tilt = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80]
nrel_list_of_responses = []
for i in range(len(tilt)):
list_parameters = {"format": 'JSON', "api_key": "DEMO_KEY", "system_capacity": 4, "module_type": 0, "losses": 14.08,
"array_type": 0, "tilt": tilt[i], "azimuth": 180, "lat": 64.82, "lon": -147.87, "dataset": 'tmy2'}
json_response = requests.get("https://developer.nrel.gov/api/pvwatts/v6", params = list_parameters).json()
new_dataframe = pd.DataFrame(data = json_response['outputs'])
nrel_list_of_responses.append(new_dataframe)
print(nrel_list_of_responses[2])
def disp_tilt_monthly(tilt_angle):
index = tilt.index(tilt_angle)
tilt_line_graph = vincent.Line(nrel_list_of_responses[index]['ac_monthly'])
tilt_line_graph.axis_titles(x = "Month", y = "Energy Produced (kWh)")
tilt_line_graph.display()
disp_tilt_monthly(45)
interactive_tilt_angle_plot = interact(disp_tilt_monthly, tilt_angle = widgets.IntSlider(min = 5, max = 80, step = 5, value = 45));
# +
#Now we need to save that as an html file to embed in the popup.
from ipywidgets.embed import embed_minimal_html
embed_minimal_html('tiltangleplot.html', views=[interactive_tilt_angle_plot], title='TiltAnglePlot')
# +
# from ipywidgets import IntSlider
# from ipywidgets.embed import embed_minimal_html
# slider = IntSlider(value=40)
# embed_minimal_html('export.html', views=[slider], title='Widgets export')
# -
import bokeh
from bokeh.io import output_file, show
from bokeh.models.widgets import Button
output_file("button.html")
button = Button(label="Foo", button_type="success")
show(button)
# +
from bokeh.io import output_file, show
from bokeh.models.widgets import Slider
output_file("slider.html")
slider = Slider(start=0, end=10, value=1, step=.1, title="Stuff")
show(slider)
# +
from bokeh.io import output_file, show
from bokeh.models.widgets import Div
output_file("div.html")
div = Div(text="""Your <a href="https://en.wikipedia.org/wiki/HTML">HTML</a>-supported text is initialized with the <b>text</b> argument. The
remaining div arguments are <b>width</b> and <b>height</b>. For this example, those values
are <i>200</i> and <i>100</i> respectively.""",
width=200, height=100)
show(div)
# -
#Ok, now we're going to try to put the figure into the Folium popup
import branca
# +
# #Homer, with an interactive graph maybe
# folium.Marker([59.645449, -151.543460], popup = ,
# tooltip = "Homer, AK").add_to(alaska_terrain)
| Initial Folium Exploration.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Listing R Packages
# Packages loaded by default
getOption("defaultPackages")
# Currently attached packages
(.packages())
# Installed packages
(.packages(all.available=TRUE));
library();
# # Loading R Packages
library(rpart)
(.packages())
# installed.packages() # information on installed packages
# available.packages() # packages available from the repositories
# old.packages() # installed packages with newer versions available
# new.packages() # repository packages not yet installed
# download.packages("abc", destdir="~/Downloads/") # download a package locally
install.packages(c("abc")) # download and install a package from the repositories
remove.packages(c("abc"), .Library) # remove an installed package
# ?setRepositories # configure the list of repositories
# +
# Install a package from GitHub
# install.packages("devtools")
# library(devtools)
# devtools::install_github("tidyverse/ggplot2")
# -
# # Writing a Custom R Package
#
# `package.skeleton(..)`
#
# See "Writing R Extensions".
#
#
| code-snippet/R/notebook/r-packages.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: skforecast_dev
# language: python
# name: skforecast_dev
# ---
#
# %load_ext autoreload
# %autoreload 2
import sys
#sys.path.insert(1, '/home/ximo/Documents/GitHub/skforecast')
# %config Completer.use_jedi = False
# Libraries
# ==============================================================================
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from xgboost import XGBRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
# XGBoost, an acronym for Extreme Gradient Boosting, is a very efficient implementation of the stochastic gradient boosting algorithm that has become a benchmark in the field of machine learning. In addition to its own API, the XGBoost library includes the XGBRegressor class, which follows the scikit-learn API and is therefore compatible with skforecast.
#
# > **NOTE:**
# > Since the success of XGBoost as a machine learning algorithm, new implementations have been developed that also achieve excellent results, two of them are: [LightGBM](https://lightgbm.readthedocs.io/en/latest/) and [CatBoost](https://catboost.ai/). A more detailed example can be found [here](https://www.cienciadedatos.net/documentos/py39-forecasting-time-series-with-skforecast-xgboost-lightgbm-catboost.html).
#
# +
# Download data
# ==============================================================================
url = ('https://raw.githubusercontent.com/JoaquinAmatRodrigo/skforecast/master/data/h2o_exog.csv')
data = pd.read_csv(url, sep=',', header=0, names=['date', 'y', 'exog_1', 'exog_2'])
# Data preprocessing
# ==============================================================================
data['date'] = pd.to_datetime(data['date'], format='%Y/%m/%d')
data = data.set_index('date')
data = data.asfreq('MS')
steps = 36
data_train = data.iloc[:-steps, :]
data_test = data.iloc[-steps:, :]
# +
# Create and fit forecaster
# ==============================================================================
forecaster = ForecasterAutoreg(
regressor = XGBRegressor(),
lags = 8
)
forecaster.fit(y=data['y'], exog=data[['exog_1', 'exog_2']])
forecaster
# -
# Predict
# ==============================================================================
forecaster.predict(steps=10, exog=data_test[['exog_1', 'exog_2']])
# Predictors importance
# ==============================================================================
print(forecaster.get_feature_importance().to_markdown(tablefmt="github"))
| notebooks/notebooks_for_docs/xgboost.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from astropy.io import fits
from astropy.wcs import WCS
from astropy.nddata import Cutout2D
from astropy.coordinates import SkyCoord
hdu = fits.open('HFI_SkyMap_100_2048_R3.01_full.fits')[0]
hdu.header
hdu = fits.open('HFI_SkyMap_100_2048_R3.01_full.fits')[1]
hdu.header
# Relevant WCS keywords from the header above (shown as a comment -- this is header
# output, not executable Python):
#
# NAXIS1  = 10240        / Axis Length
# NAXIS2  = 10240        / Axis Length
# CRPIX1  = 5120.5       / Coordinate reference pixel
# CRPIX2  = 5120.5       / Coordinate reference pixel
# PC1_1   = 0.70710677   / Transformation matrix element
# PC1_2   = 0.70710677   / Transformation matrix element
# PC2_1   = -0.70710677  / Transformation matrix element
# PC2_2   = 0.70710677   / Transformation matrix element
# CDELT1  = -0.031074028 / [deg] Coordinate increment
# CDELT2  = 0.031074028  / [deg] Coordinate increment
# CTYPE1  = 'GLON-HPX'   / Galactic longitude in an HPX projection
# CTYPE2  = 'GLAT-HPX'   / Galactic latitude in an HPX projection
# CRVAL1  = 0            / [deg] Galactic longitude at the reference point
# CRVAL2  = 0            / [deg] Galactic latitude at the reference point
# PV2_1   = 4            / HPX H parameter (longitude)
# PV2_2   = 3            / HPX K parameter (latitude)
hdu.data
wcs = WCS(hdu.header)
wcs
wcs.celestial
size = (400, 400)
position = SkyCoord(8.4744770, -56.3420895, frame='galactic', unit='deg') # GLON,deg;GLAT,deg
#position = SkyCoord(334.4436414, -35.7155681, frame='fk5', unit='deg', equinox='J2000.0') # RA,deg;Dec,deg
cutout = Cutout2D(hdu.data, position, size, wcs=wcs)
pf = fits.open('HFI_SkyMap_100_2048_R3.01_full.fits')  # re-open so we can write out the cutout
pf[0].data = cutout.data
pf[0].header['CRPIX1'] = cutout.wcs.wcs.crpix[0]
pf[0].header['CRPIX2'] = cutout.wcs.wcs.crpix[1]
pf.writeto('PSZ2_G008.47-56.34_545_GHz.fits')
pf.close()
psz2_pos = SkyCoord.from_name("PSZ2 G006.68-35.55")
cutout = Cutout2D(hdu.data, psz2_pos, size, wcs=wcs)
| rebuild_dataset/original.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import tqdm
import numpy as np
import tensorflow as tf
class Config:
data_path='data/zh.tsv'
dict_path='data/dict.txt'
model_path='logs/model'
new_dict='data/ndict.txt'
embed_size = 300
lr = 0.0003
is_training = True
epochs = 25
batch_size = 4
num_heads = 8
num_blocks = 6
input_vocab_size = 50
label_vocab_size = 50
max_length = 100
hidden_units = 512
dropout_rate = 0.2
def read_dict(dict_path=Config.dict_path):
    """
    Read the pinyin and hanzi vocabularies from dict_path
    return: pny2idx idx2pny hanzi2idx idx2hanzi
    """
    pnys=['<PAD>']
    hanzis=['<PAD>']
    with open(dict_path,'r',encoding='utf-8') as file:
for line in file:
pny,hanzi=line.strip('\n').split('\t')
pnys.append(pny)
hanzis+=list(hanzi)
hanzis=list(set(hanzis))
pny2idx={pny:idx for idx,pny in enumerate(pnys)}
idx2pny={idx:pny for idx,pny in enumerate(pnys)}
hanzi2idx={hanzi:idx for idx,hanzi in enumerate(hanzis)}
idx2hanzi={idx:hanzi for idx,hanzi in enumerate(hanzis)}
Config.input_vocab_size=len(pny2idx)
Config.label_vocab_size=len(hanzi2idx)
return pny2idx,idx2pny,hanzi2idx,idx2hanzi
a,b,c,d=read_dict()
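# The forward and inverse index maps that read_dict returns follow a standard
# pattern; a standalone sketch with made-up tokens:

```python
tokens = ["<PAD>", "ni3", "hao3"]  # illustrative vocabulary; index 0 is reserved for padding
tok2idx = {tok: idx for idx, tok in enumerate(tokens)}
idx2tok = {idx: tok for idx, tok in enumerate(tokens)}
# tok2idx maps a token to its integer id; idx2tok inverts that mapping.
```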
def read_data():
"""
    Read the pinyin-to-hanzi sentence pairs from data_path
    return: inputs -> pinyin -> [[pinyin tokens of one sentence], ...]  labels -> hanzi -> [[hanzi tokens of one sentence], ...]
"""
inputs=[]
labels=[]
hanzid=[]
pnysd=[]
phdict={}
with open(Config.dict_path,'r',encoding='utf-8') as file:
for line in file:
pny,hanzi=line.strip('\n').split('\t')
phdict[pny]=list(hanzi)
with open(Config.data_path,'r',encoding='utf-8') as file:
for line in file:
key,pny,hanzi=line.strip('\n').strip().split('\t')
pnys=pny.strip().split(' ')
hanzis=hanzi.strip().split(' ')
inputs.append(pnys)
labels.append(hanzis)
assert len(pnys)==len(hanzis)
if '' in pnys:
hanzis.remove('')
pnys.remove('')
for ipny,ihanzi in zip(pnys,hanzis):
if ipny in phdict.keys():
if ihanzi not in phdict[ipny]:
phdict[ipny].append(ihanzi)
else:
                    phdict[ipny] = [ihanzi]  # store as a list so the later append / ' '.join work
with open(Config.new_dict,'w',encoding='utf-8') as file:
for key,value in phdict.items():
file.write(key+'\t'+' '.join(value)+'\n')
pny2idx,idx2pny,hanzi2idx,idx2hanzi=read_dict(Config.new_dict)
input_num = [[pny2idx[pny] for pny in line ] for line in inputs]
label_num = [[hanzi2idx[han] for han in line] for line in labels]
return input_num,label_num
def get_batch(input_data, label_data, batch_size):
batch_num = len(input_data) // batch_size
for k in range(batch_num):
begin = k * batch_size
end = begin + batch_size
input_batch = input_data[begin:end]
label_batch = label_data[begin:end]
max_len = max([len(line) for line in input_batch])
input_batch = np.array([line + [0] * (max_len - len(line)) for line in input_batch])
label_batch = np.array([line + [0] * (max_len - len(line)) for line in label_batch])
yield input_batch, label_batch
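# The padding step inside get_batch can be isolated as a small helper (a sketch,
# not part of the original code): every sequence in a batch is right-padded with
# 0, the <PAD> index, up to the batch's longest sequence.

```python
import numpy as np

def pad_batch(seqs):
    # Right-pad variable-length id sequences with 0 to the batch max length,
    # mirroring the padding inside get_batch.
    max_len = max(len(s) for s in seqs)
    return np.array([s + [0] * (max_len - len(s)) for s in seqs])

batch = pad_batch([[3, 1], [7, 2, 9]])
# batch.shape == (2, 3); the short sequence becomes [3, 1, 0]
```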
def normalize(inputs,
epsilon = 1e-8,
scope="ln",
reuse=None):
'''Applies layer normalization.
Args:
inputs: A tensor with 2 or more dimensions, where the first dimension has
`batch_size`.
epsilon: A floating number. A very small number for preventing ZeroDivision Error.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns:
A tensor with the same shape and data dtype as `inputs`.
'''
with tf.variable_scope(scope, reuse=reuse):
inputs_shape = inputs.get_shape()
params_shape = inputs_shape[-1:]
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
beta= tf.Variable(tf.zeros(params_shape))
gamma = tf.Variable(tf.ones(params_shape))
normalized = (inputs - mean) / ( (variance + epsilon) ** (.5) )
outputs = gamma * normalized + beta
return outputs
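# For reference, the same computation in plain NumPy, with gamma fixed to 1 and
# beta to 0 (in the TensorFlow version above these are learned variables):

```python
import numpy as np

def layer_norm_np(x, eps=1e-8):
    # Subtract the mean and divide by the std over the last axis,
    # exactly as `normalize` does before applying gamma and beta.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

out = layer_norm_np(np.array([[1.0, 2.0, 3.0]]))
# each row of `out` has (approximately) zero mean and unit variance
```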
def embedding(inputs,
vocab_size,
num_units,
zero_pad=True,
scale=True,
scope="embedding",
reuse=None):
'''Embeds a given tensor.
Args:
inputs: A `Tensor` with type `int32` or `int64` containing the ids
to be looked up in `lookup table`.
vocab_size: An int. Vocabulary size.
num_units: An int. Number of embedding hidden units.
      zero_pad: A boolean. If True, all the values of the first row (id 0)
        should be constant zeros.
      scale: A boolean. If True, the outputs are multiplied by sqrt(num_units).
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns:
A `Tensor` with one more rank than inputs's. The last dimensionality
should be `num_units`.
For example,
```
import tensorflow as tf
inputs = tf.to_int32(tf.reshape(tf.range(2*3), (2, 3)))
outputs = embedding(inputs, 6, 2, zero_pad=True)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
        print(sess.run(outputs))
>>
[[[ 0. 0. ]
[ 0.09754146 0.67385566]
[ 0.37864095 -0.35689294]]
[[-1.01329422 -1.09939694]
[ 0.7521342 0.38203377]
[-0.04973143 -0.06210355]]]
```
```
import tensorflow as tf
inputs = tf.to_int32(tf.reshape(tf.range(2*3), (2, 3)))
outputs = embedding(inputs, 6, 2, zero_pad=False)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
        print(sess.run(outputs))
>>
[[[-0.19172323 -0.39159766]
[-0.43212751 -0.66207761]
[ 1.03452027 -0.26704335]]
[[-0.11634696 -0.35983452]
[ 0.50208133 0.53509563]
[ 1.22204471 -0.96587461]]]
```
'''
with tf.variable_scope(scope, reuse=reuse):
lookup_table = tf.get_variable('lookup_table',
dtype=tf.float32,
shape=[vocab_size, num_units],
initializer=tf.contrib.layers.xavier_initializer())
if zero_pad:
lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),
lookup_table[1:, :]), 0)
outputs = tf.nn.embedding_lookup(lookup_table, inputs)
if scale:
outputs = outputs * (num_units ** 0.5)
return outputs
def multihead_attention(emb,
queries,
keys,
num_units=None,
num_heads=8,
dropout_rate=0,
is_training=True,
causality=False,
scope="multihead_attention",
reuse=None):
'''Applies multihead attention.
Args:
queries: A 3d tensor with shape of [N, T_q, C_q].
keys: A 3d tensor with shape of [N, T_k, C_k].
num_units: A scalar. Attention size.
dropout_rate: A floating point number.
is_training: Boolean. Controller of mechanism for dropout.
causality: Boolean. If true, units that reference the future are masked.
num_heads: An int. Number of heads.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns
A 3d tensor with shape of (N, T_q, C)
'''
with tf.variable_scope(scope, reuse=reuse):
# Set the fall back option for num_units
if num_units is None:
            num_units = queries.get_shape().as_list()[-1]
# Linear projections
Q = tf.layers.dense(queries, num_units, activation=tf.nn.relu) # (N, T_q, C)
K = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)
V = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)
# Split and concat
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0) # (h*N, T_q, C/h)
K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0) # (h*N, T_k, C/h)
V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0) # (h*N, T_k, C/h)
# Multiplication
outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1])) # (h*N, T_q, T_k)
# Scale
outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)
# Key Masking
key_masks = tf.sign(tf.abs(tf.reduce_sum(emb, axis=-1))) # (N, T_k)
key_masks = tf.tile(key_masks, [num_heads, 1]) # (h*N, T_k)
key_masks = tf.tile(tf.expand_dims(key_masks, 1), [1, tf.shape(queries)[1], 1]) # (h*N, T_q, T_k)
paddings = tf.ones_like(outputs)*(-2**32+1)
outputs = tf.where(tf.equal(key_masks, 0), paddings, outputs) # (h*N, T_q, T_k)
# Causality = Future blinding
if causality:
diag_vals = tf.ones_like(outputs[0, :, :]) # (T_q, T_k)
tril = tf.contrib.linalg.LinearOperatorTriL(diag_vals).to_dense() # (T_q, T_k)
masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(outputs)[0], 1, 1]) # (h*N, T_q, T_k)
paddings = tf.ones_like(masks)*(-2**32+1)
outputs = tf.where(tf.equal(masks, 0), paddings, outputs) # (h*N, T_q, T_k)
# Activation
outputs = tf.nn.softmax(outputs) # (h*N, T_q, T_k)
# Query Masking
query_masks = tf.sign(tf.abs(tf.reduce_sum(emb, axis=-1))) # (N, T_q)
query_masks = tf.tile(query_masks, [num_heads, 1]) # (h*N, T_q)
query_masks = tf.tile(tf.expand_dims(query_masks, -1), [1, 1, tf.shape(keys)[1]]) # (h*N, T_q, T_k)
outputs *= query_masks # broadcasting. (N, T_q, C)
# Dropouts
outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=is_training)
# Weighted sum
outputs = tf.matmul(outputs, V_) # ( h*N, T_q, C/h)
# Restore shape
outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2 ) # (N, T_q, C)
# Residual connection
outputs += queries
# Normalize
outputs = normalize(outputs) # (N, T_q, C)
return outputs
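# Stripped of masking, dropout, and the head split/concat bookkeeping, the core
# of each head is plain scaled dot-product attention; a NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V -- the multiplication, scaling, and
    # activation steps of multihead_attention, without masking or dropout.
    d_k = K.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

Q = np.random.rand(2, 4, 8)   # (N, T_q, d_k)
K = np.random.rand(2, 5, 8)   # (N, T_k, d_k)
V = np.random.rand(2, 5, 8)   # (N, T_k, d_v)
out = scaled_dot_product_attention(Q, K, V)
# out has shape (N, T_q, d_v)
```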
def feedforward(inputs,
num_units=[2048, 512],
                scope="feedforward",
reuse=None):
'''Point-wise feed forward net.
Args:
inputs: A 3d tensor with shape of [N, T, C].
num_units: A list of two integers.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns:
A 3d tensor with the same shape and dtype as inputs
'''
with tf.variable_scope(scope, reuse=reuse):
# Inner layer
params = {"inputs": inputs, "filters": num_units[0], "kernel_size": 1,
"activation": tf.nn.relu, "use_bias": True}
outputs = tf.layers.conv1d(**params)
# Readout layer
params = {"inputs": outputs, "filters": num_units[1], "kernel_size": 1,
"activation": None, "use_bias": True}
outputs = tf.layers.conv1d(**params)
# Residual connection
outputs += inputs
# Normalize
outputs = normalize(outputs)
return outputs
def label_smoothing(inputs, epsilon=0.1):
'''Applies label smoothing. See https://arxiv.org/abs/1512.00567.
Args:
inputs: A 3d tensor with shape of [N, T, V], where V is the number of vocabulary.
epsilon: Smoothing rate.
For example,
```
import tensorflow as tf
inputs = tf.convert_to_tensor([[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]],
[[1, 0, 0],
[1, 0, 0],
[0, 1, 0]]], tf.float32)
outputs = label_smoothing(inputs)
with tf.Session() as sess:
print(sess.run([outputs]))
>>
[array([[[ 0.03333334, 0.03333334, 0.93333334],
[ 0.03333334, 0.93333334, 0.03333334],
[ 0.93333334, 0.03333334, 0.03333334]],
[[ 0.93333334, 0.03333334, 0.03333334],
[ 0.93333334, 0.03333334, 0.03333334],
[ 0.03333334, 0.93333334, 0.03333334]]], dtype=float32)]
```
'''
K = inputs.get_shape().as_list()[-1] # number of channels
return ((1-epsilon) * inputs) + (epsilon / K)
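# The same formula in NumPy, reproducing the numbers from the docstring example
# above:

```python
import numpy as np

def label_smoothing_np(one_hot, epsilon=0.1):
    # (1 - eps) * y + eps / K, with K the number of classes.
    K = one_hot.shape[-1]
    return (1 - epsilon) * one_hot + epsilon / K

smoothed = label_smoothing_np(np.eye(3))
# the "hot" entries become 0.9333..., the zeros become 0.0333...
```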
class Graph():
def __init__(self, is_training=True):
tf.reset_default_graph()
self.is_training = Config.is_training
self.hidden_units = Config.hidden_units
self.input_vocab_size = Config.input_vocab_size
self.label_vocab_size = Config.label_vocab_size
self.num_heads = Config.num_heads
self.num_blocks = Config.num_blocks
self.max_length = Config.max_length
self.lr = Config.lr
self.dropout_rate = Config.dropout_rate
# input
self.x = tf.placeholder(tf.int32, shape=(None, None))
self.y = tf.placeholder(tf.int32, shape=(None, None))
# embedding
self.emb = embedding(self.x, vocab_size=self.input_vocab_size, num_units=self.hidden_units, scale=True, scope="enc_embed")
self.enc = self.emb + embedding(tf.tile(tf.expand_dims(tf.range(tf.shape(self.x)[1]), 0), [tf.shape(self.x)[0], 1]),
vocab_size=self.max_length,num_units=self.hidden_units, zero_pad=False, scale=False,scope="enc_pe")
## Dropout
self.enc = tf.layers.dropout(self.enc,
rate=self.dropout_rate,
training=self.is_training)
## Blocks
for i in range(self.num_blocks):
with tf.variable_scope("num_blocks_{}".format(i)):
### Multihead Attention
self.enc = multihead_attention(emb = self.emb,
queries=self.enc,
keys=self.enc,
num_units=self.hidden_units,
num_heads=self.num_heads,
dropout_rate=self.dropout_rate,
is_training=self.is_training,
causality=False)
### Feed Forward
self.outputs = feedforward(self.enc, num_units=[4*self.hidden_units, self.hidden_units])
# Final linear projection
self.logits = tf.layers.dense(self.outputs, self.label_vocab_size)
self.preds = tf.to_int32(tf.argmax(self.logits, axis=-1))
self.istarget = tf.to_float(tf.not_equal(self.y, 0))
self.acc = tf.reduce_sum(tf.to_float(tf.equal(self.preds, self.y))*self.istarget)/ (tf.reduce_sum(self.istarget))
tf.summary.scalar('acc', self.acc)
if is_training:
# Loss
self.y_smoothed = label_smoothing(tf.one_hot(self.y, depth=self.label_vocab_size))
self.loss = tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.y_smoothed)
self.mean_loss = tf.reduce_sum(self.loss*self.istarget) / (tf.reduce_sum(self.istarget))
# Training Scheme
self.global_step = tf.Variable(0, name='global_step', trainable=False)
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.lr, beta1=0.9, beta2=0.98, epsilon=1e-8)
self.train_op = self.optimizer.minimize(self.mean_loss, global_step=self.global_step)
# Summary
tf.summary.scalar('mean_loss', self.mean_loss)
self.merged = tf.summary.merge_all()
def train():
input_num,label_num=read_data()
batch_size=Config.batch_size
g = Graph()
saver =tf.train.Saver()
with tf.Session() as sess:
merged = tf.summary.merge_all()
sess.run(tf.global_variables_initializer())
if os.path.exists('logs/model.meta'):
saver.restore(sess, 'logs/model')
writer = tf.summary.FileWriter('tensorboard/lm', tf.get_default_graph())
for k in range(Config.epochs):
total_loss = 0
batch_num = len(input_num) // batch_size
batch = get_batch(input_num, label_num, batch_size)
for i in range(batch_num):
input_batch, label_batch = next(batch)
feed = {g.x: input_batch, g.y: label_batch}
cost,_ = sess.run([g.mean_loss,g.train_op], feed_dict=feed)
total_loss += cost
if (k * batch_num + i) % 10 == 0:
rs=sess.run(merged, feed_dict=feed)
writer.add_summary(rs, k * batch_num + i)
if (k+1) % 5 == 0:
print('epochs', k+1, ': average loss = ', total_loss/batch_num)
saver.save(sess, 'logs/model')
writer.close()
def test():
Config.is_training = False
g = Graph()
saver =tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, 'logs/model')
        pny2idx, idx2pny, hanzi2idx, idx2hanzi = read_dict(Config.new_dict)
        while True:
            line = input('Enter pinyin to test: ')
            if line == 'exit': break
            line = line.strip('\n').split(' ')
            x = np.array([pny2idx[pny] for pny in line])
            x = x.reshape(1, -1)
            preds = sess.run(g.preds, {g.x: x})
            got = ''.join(idx2hanzi[idx] for idx in preds[0])
            print(got)
train()
| language_model/Transform_self.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="j5KeMucPswTR" outputId="f96d1e4b-99a8-4b1b-9866-27e488160119"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/", "height": 37} id="tZoa4U1hwxjU" outputId="b3adb032-1dce-4a59-9922-4f26a67a2d1e"
import os
os.getcwd()
# + colab={"base_uri": "https://localhost:8080/"} id="zEH8ebgTw-T9" outputId="ac7ba1ec-b4cf-45d9-b842-2c4bf1cf055f"
# !pip install mglearn
# + id="LHhOJ6uGxC-8"
# import libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import mglearn
# %matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, accuracy_score
# + colab={"base_uri": "https://localhost:8080/", "height": 268} id="L71cch_qx338" outputId="74c4a203-d001-4b05-ebfc-a259c8f9e9ee"
gen = pd.read_csv('drive/MyDrive/Predict_Voice_From_Gender_Application/voice.csv')
gen_data = pd.DataFrame(gen)
gen_data.head()
# + colab={"base_uri": "https://localhost:8080/"} id="UJbfy1sgyq7p" outputId="39930849-c598-4984-c764-ecbc139edf42"
# Check for any null values
gen_data.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 361} id="FU3MNiXG3oDx" outputId="a2f6b50e-b946-4f6d-da47-268464ef553b"
# Check out the description of the data in the dataframe
gen_data.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 908} id="lNW4Tbno4CYs" outputId="d5828d15-7f44-4582-dc15-ecec320eef26"
# Figure out the correlation among the columns
plt.figure(figsize=(15,15))
sns.heatmap(gen_data.corr(), annot=True, cmap='viridis', linewidths=.5);
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="746u4xSa5AiN" outputId="572dc602-1a21-4203-b6c1-04c63451f573"
# Check for class imbalance
fig, ax = plt.subplots(figsize=(4,3))
sns.countplot(gen_data['label'], ax=ax)
plt.title('Male/Female Count')
plt.show();
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="kOnxWjU06tZ6" outputId="824c1407-6943-4e7f-dec7-f4e8612987ca"
# Plot histograms
male = gen.loc[gen['label']=='male']
female = gen.loc[gen['label']=='female']
fig, axes = plt.subplots(10, 2, figsize=(10, 20))
ax = axes.ravel()
for i in range(20):
    ax[i].hist(male.iloc[:, i], bins=20, color=mglearn.cm3(0), alpha=.5)
    ax[i].hist(female.iloc[:, i], bins=20, color=mglearn.cm3(2), alpha=.5)
    ax[i].set_title(list(male)[i])
    ax[i].set_yticks(())
    ax[i].set_xlabel("Feature magnitude")
    ax[i].set_ylabel("Frequency")
    ax[i].legend(["male", "female"], loc="best")
fig.tight_layout()
# + id="btUOuwqG8riK"
# Drop columns which are not able to differentiate between male and female based on the histogram results
gen_new = gen_data.drop(['IQR', 'skew', 'kurt', 'meandom', 'mindom', 'maxdom', 'dfrange', 'modindx'], axis = 1)
# + colab={"base_uri": "https://localhost:8080/"} id="65KAGG7h-LcU" outputId="91cd9bf7-b049-4c47-e4fa-b7cc4fd63d51"
gen_new.columns
# + id="TQshwA3T-VlB"
y = gen_new['label']
X = gen_new.drop(['label'], axis = 1)
# + id="FLH-hvVS-vbo"
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=0)
# + colab={"base_uri": "https://localhost:8080/"} id="L3z6rriw-5Jn" outputId="6c4bd21c-2aff-4a13-a329-11b7f2e3b29f"
# Train SVM model
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(Xtrain, ytrain)
# + colab={"base_uri": "https://localhost:8080/"} id="Vhgpp1NrAQ0h" outputId="3879ceb4-43d6-4b1a-b40f-79a33016d11a"
print("Accuracy on training set {:.2f}".format(classifier.score(Xtrain, ytrain)))
# + colab={"base_uri": "https://localhost:8080/"} id="PJGoH2PuBApe" outputId="8fe2409d-d43b-42a3-dadb-aaf47c66b9ee"
print("Accuracy on test set: {:.2f}".format(classifier.score(Xtest, ytest)))
# + colab={"base_uri": "https://localhost:8080/"} id="h3WTHGR-BeOR" outputId="a7e3b4b2-0129-428e-abed-4b6a80fdc1ea"
# Train RandomForest model
classifier_random = RandomForestClassifier(n_estimators = 400, criterion = 'entropy', random_state=0)
classifier_random.fit(Xtrain, ytrain)
# + colab={"base_uri": "https://localhost:8080/"} id="e5rztyi-CV0q" outputId="5fb1d501-4deb-470b-8dd7-648bf595a4c7"
print("Accuracy on training set {:.2f}".format(classifier_random.score(Xtrain, ytrain)))
print("Accuracy on test set: {:.2f}".format(classifier_random.score(Xtest, ytest)))
# + [markdown] id="_a8jWldFC4jr"
# ### RandomForestClassifier performed best. Its accuracy on the training set is close to its accuracy on the test set, hence the model is not overfitting.
# + id="KoEyjO3VDF3-"
# Save the model to disk
import pickle
filename = 'drive/MyDrive/Predict_Voice_From_Gender_Application/gender_prediction.pickle'
pickle.dump(classifier_random, open(filename, 'wb'))
# Source: Voice_From_Gender_Application.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.1 64-bit
# language: python
# name: python3
# ---
# STUDENT NAME: <NAME>
#
# STUDENT NUMBER: 10592521
#
# COURSE TITLE: MASTERS IN ARTIFICIAL INTELLIGENCE
#
# LECTURER’S NAME: <NAME>
#
# MODULE/SUBJECT CODE: B9AI108
#
# MODULE/SUBJECT TITLE: PROGRAMMING FOR DATA ANALYSIS
#
# GITHUB_LINK: https://github.com/anjalisharma16/CA-1-Emp.wages-hour-
# ASSIGNMENT DETAILS:
#
# Create a class Employee, and create and test a function to compute net pay from payment, work and tax credit information.
#
# Employee should have the following attributes:
# StaffID, LastName, FirstName, RegHours, HourlyRate, OTMultiple, TaxCredit, StandardBand,
#
# For Example:
# jg= Employee(12345,'Green','Joe', 37, 16, 1.5, 72, 710)
#
# Create a method computePayment in class Employee which takes HoursWorked and date as input, and returns a payment information dictionary as follows: (if jg is an Employee object for worker Joe Green)
#
# We will assume a standard rate of 20% and a higher rate of 40%, and that PRSI at 4% is not subject to allowances. (we will ignore USC etc.)
# jg.computePayment(42, '31/10/2021')
#
# {'name': '<NAME>', 'Date':'31/10/2021', 'Regular Hours Worked':37,'Overtime Hours Worked':5,'Regular Rate':16,'Overtime Rate':24, 'Regular Pay':592,'Overtime Pay':120,'Gross Pay':712, 'Standard Rate Pay':710,'Higher Rate Pay':2, 'Standard Tax':142,'Higher Tax':0.8,'Total Tax':142.8,'Tax Credit':72, 'Net Tax':70.8, 'PRSI': 28.48,'Net Deductions':99.28, 'Net Pay': 612.72}
#
# Test your class and method thoroughly, and at a minimum include test cases testing the following:
#
# - Net pay cannot exceed gross pay.
# - Overtime pay or overtime hours cannot be negative.
# - Regular hours worked cannot exceed hours worked.
# - Higher tax cannot be negative.
# - Net pay cannot be negative.
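# Before implementing the class, the sample figures above can be reproduced with a few lines of arithmetic (a standalone sketch of the specified calculation; every number comes from the Joe Green example):

```python
# Worked example for Joe Green: 42 hours at 16/h, 1.5x overtime multiple,
# a 710 standard band and a 72 tax credit (values from the sample above).
reg_pay = 37 * 16                        # 592
ot_pay = (42 - 37) * 16 * 1.5            # 120
gross = reg_pay + ot_pay                 # 712
std_tax = min(gross, 710) * 0.20         # 142.0 (standard rate 20%)
higher_tax = max(gross - 710, 0) * 0.40  # 0.8 (higher rate 40%)
net_tax = std_tax + higher_tax - 72      # 70.8 after the tax credit
prsi = gross * 0.04                      # 28.48 (PRSI at 4%, no allowances)
net_pay = gross - net_tax - prsi         # 612.72
print(round(net_pay, 2))
```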
# +
# Imports
import unittest as ut

# Write a file with Employee details
with open('Employees.txt', 'w') as f:
    f.write('12345 <NAME> 37 16 1.5 72 710')

# Write a file with Hours details
with open('Hours.txt', 'w') as f1:
    f1.write('12345 31/10/2021 42')

# Create the Employee class and its attributes
class Employee:
    Employees = {}
    Std_rate = 0.2
    Higher_rate = 0.4

    def __init__(self, staffId, Firstname, Lastname, Reghours, Hourlyrate, OTMultiple, Taxcredit, Standardband):
        self.staffId = int(staffId)
        self.Firstname = str(Firstname)
        self.Lastname = str(Lastname)
        self.Reghours = int(Reghours)
        if Reghours <= 0:
            raise ValueError("incorrect value! Reghours should be more than 0")
        self.Hourlyrate = int(Hourlyrate)
        if Hourlyrate <= 0:
            raise ValueError("incorrect value! Hourlyrate should be more than 0")
        self.OTMultiple = float(OTMultiple)
        if OTMultiple < 0:
            raise ValueError("incorrect value! OTMultiple cannot be negative")
        self.Taxcredit = int(Taxcredit)
        if Taxcredit < 0:
            raise ValueError("incorrect value! Taxcredit cannot be negative")
        self.Standardband = int(Standardband)
        if Standardband <= 0:
            raise ValueError("incorrect value! Standardband should be more than 0")

    @staticmethod
    def configure(Employeesfile):
        with open(Employeesfile) as f:
            for line in f:
                staffId, Firstname, Lastname, Reghours, Hourlyrate, OTMultiple, Taxcredit, Standardband = line.split()
                Employee.Employees[staffId] = Employee(staffId, Firstname, Lastname, int(Reghours), int(Hourlyrate), float(OTMultiple), int(Taxcredit), int(Standardband))

    def computepayment(self, Hoursworked, Date):
        if Hoursworked < 0:
            raise ValueError('hours worked cannot be negative')
        salary = dict()
        salary['Name'] = self.Firstname + " " + self.Lastname
        salary['Date'] = Date
        salary['RHW'] = min(Hoursworked, self.Reghours)  # regular hours worked
        salary['OHW'] = max(0, Hoursworked - self.Reghours)  # overtime hours worked
        salary['Regular rate'] = self.Hourlyrate
        salary['Overtime rate'] = self.Hourlyrate * self.OTMultiple
        salary['Regular pay'] = self.Hourlyrate * salary['RHW']
        salary['Overtime pay'] = max(salary['Overtime rate'] * salary['OHW'], 0)
        salary['Gross pay'] = salary['Regular pay'] + salary['Overtime pay']
        salary['Standard rate pay'] = min(salary['Gross pay'], self.Standardband)  # standard-rate pay is capped at the band and at gross pay
        salary['Higher rate pay'] = max(salary['Gross pay'] - self.Standardband, 0)
        salary['Standard tax'] = salary['Standard rate pay'] * Employee.Std_rate
        salary['Higher tax'] = round(salary['Higher rate pay'] * Employee.Higher_rate, 2)
        salary['Total tax'] = round(salary['Standard tax'] + salary['Higher tax'], 2)
        salary['Tax credit'] = self.Taxcredit
        salary['PRSI'] = salary['Gross pay'] * 0.04  # PRSI at 4%, not subject to allowances
        salary['Net tax'] = round(salary['Total tax'] - salary['Tax credit'], 2)
        salary['Net deductions'] = round(salary['Net tax'] + salary['PRSI'], 2)
        salary['Net pay'] = max(0, salary['Gross pay'] - salary['Net deductions'])  # net pay can never be negative
        return salary

jg = Employee(12345, 'Green', 'Joe', 37, 16, 1.5, 72, 710)
jg.computepayment(42, '30/01/2022')
# -
class TestEmployee(ut.TestCase):
    def test_hoursnotnegative(self):
        jb = Employee(12345, 'Green', 'Joe', 37, 16, 1.5, 72, 710)
        with self.assertRaises(ValueError):
            jb.computepayment(-40, '31/10/2021')

    def test_negative_otpay(self):
        jb = Employee(1234, 'Green', 'Joe', 37, 16, 1.5, 72, 710)
        otpay = jb.computepayment(100, '27/03/2023')
        self.assertGreaterEqual(otpay['Overtime pay'], 0)
        self.assertGreaterEqual(otpay['OHW'], 0)

    def test_negative_hours_worked(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, 16, 1.5, 72, 710)
            jb.computepayment(-1, '30/05/2022')

    def test_negative_regularhours(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', -37, 16, 1.5, 72, 710)
            jb.computepayment(42, '16/09/1998')

    def test_zero_regularhours(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 0, 16, 1.5, 72, 710)
            jb.computepayment(42, '16/09/1998')

    def test_negative_hourlyrate(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, -16, 1.5, 72, 710)
            jb.computepayment(42, '16/09/1998')

    def test_zero_hourlyrate(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, 0, 1.5, 72, 710)
            jb.computepayment(42, '16/09/1998')

    def test_negative_OTMultiple(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, 16, -1.5, 72, 710)
            jb.computepayment(42, '16/09/1998')

    def test_negative_taxcredit(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, 16, 1.5, -72, 710)
            jb.computepayment(42, '16/09/1998')

    def test_negative_standardband(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, 16, 1.5, 72, -710)
            jb.computepayment(42, '16/09/1998')

    def test_zero_standardband(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, 16, 1.5, 72, 0)
            jb.computepayment(42, '16/09/1998')

    def test_negative_overtimepay(self):
        with self.assertRaises(ValueError):
            jb = Employee(12345, 'Green', 'Joe', 37, -16, 1.5, 72, 710)
            otp = jb.computepayment(42, '16/09/1998')
            self.assertGreaterEqual(otp['Overtime pay'], 0)
            self.assertGreaterEqual(otp['OHW'], 0)

ut.main(argv=['ignored'], exit=False)
# Source: CA_REAL.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data inspection
#
# Below we can find the performance results of a mini benchmark [`bench-stat`](https://github.com/janhybs/bench-stat)
# which were collected via `ci-hpc` framework.
#
# The benchmark `bench-stat` was executed on a `charon` resource.
#
# ## About `bench-stat` application
#
# The `bench-stat` application is a set of 3 benchmarks performing simple memory operations in the level 1, level 2 and level 3 caches.
#
# These operations can be extremely fast, thus the experiments are *repeated* `N` times to obtain a measurable duration.
#
# $$N = 1024 * 1024 * reps$$
# where $reps$ is an extra repetition coefficient that is varied across the commits.
#
# As a baseline, the commit tagged `reps-100` was selected, where $reps = 100$.
# The total number of repetitions for this commit is $N = 1024 * 1024 * 100 = 104\ 857\ 600$.
#
# The commits with the maximum and minimum number of $reps$ are tagged `reps-125` and `reps-075`, respectively.
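# As a small illustration (an added sketch, not part of the original benchmark code), the mapping from a commit tag to its total repetition count $N$ can be computed directly:

```python
# Hypothetical helper: recover the total repetition count N from a commit tag.
def total_repetitions(tag):
    reps = int(tag.split('-')[1])  # e.g. 'reps-100' -> 100
    return 1024 * 1024 * reps

print(total_repetitions('reps-100'))  # 104857600, the baseline stated above
```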
# ## Data structure
#
# In the table below we can see a *simplified* format of the collected data. Most of the fields are self-explanatory;
# however, some of them require explanation:
#
# - `tag` - a `git` tag of a commit, making the results more human-readable
# - `timepoint` - numerical value of a `tag` for further purposes
# - `no` - i-th repetition
# %matplotlib inline
from cihpc.exp import exp_02_init as env
env = env.reload(env)
np, sc, pd, plt, sea = env.np, env.sc, env.pd, env.plt, env.sea
df = env.fetch_data()
df.head()
# ## Impact of individual commits on a duration
#
# Chart below illustrates relation between `walltime [sec]` and commits, marked as
#
# $$reps^{075}, reps^{080}, \dots, reps^{095}, reps^{097}, reps^{099}, reps^{100}, reps^{101}, reps^{103}, reps^{105}, reps^{110}, \dots, reps^{125}$$
df2 = env.unwrap(df, ['walltime', 'walltime_mem_l1', 'walltime_mem_l2', 'walltime_mem_l3'], 'frame', 'walltime', ['tag', '$tag$'])
plt.figure(figsize=(18, 6))
sea.lineplot(data=df2, x='$tag$', y='walltime', style='frame', hue='frame', markers=True, ci='sd', legend='full')
plt.title('Duration in time for all frames', size=16);
# ## Data distribution for individual commits
#
# Charts below show histogram for each of the 15 commits along with `normal` fit (gray dashed line)
g = sea.FacetGrid(df, col='tag', col_wrap=3, aspect=3, height=1.6, sharex=False, hue='tag')
g.map(sea.distplot, 'walltime', bins=11, rug=True, fit=sc.stats.norm, fit_kws=env.alpha_style(0.3, '--'));
g.fig.suptitle("Distribution of individual commits", size=16)
g.fig.subplots_adjust(top=.9)
tags = sorted(list(set(df['tag'])))
print(''.join(['%s_{%s}, ' % tuple(x.split('-')) for x in tags]))
# Source: src/cihpc/exp (copy)/exp_02.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Import Libraries
import urllib
from bs4 import BeautifulSoup
from selenium import webdriver
import re
import os,sys
import time
from datetime import date
try:
    import cPickle as pickle
except:
    import pickle
import pprint
from collections import deque
from shutil import copyfile
import random
# ## Loading Gender Predictor File to predict Gender of LinkedIn Profiles
# +
# # %load Gender_Prediction.py
# Import Libraries
import os
import re
import urllib2
from zipfile import ZipFile
import csv
import cPickle as pickle

def downloadNames():
    u = urllib2.urlopen('https://www.ssa.gov/oact/babynames/names.zip')
    localFile = open('names.zip', 'w')
    localFile.write(u.read())
    localFile.close()

def getNameList():
    if not os.path.exists('names.pickle'):
        print 'names.pickle does not exist, generating'
        # https://www.ssa.gov/oact/babynames/names.zip
        if not os.path.exists('names.zip'):
            print 'names.zip does not exist, downloading'
            downloadNames()
        else:
            print 'names.zip exists, not downloading'
        print 'Extracting names from names.zip'
        namesDict = extractNamesDict()
        maleNames = list()
        femaleNames = list()
        print 'Sorting Names'
        for name in namesDict:
            counts = namesDict[name]
            entry = (name, counts[0], counts[1])
            if counts[0] > counts[1]:
                maleNames.append(entry)
            elif counts[1] > counts[0]:
                femaleNames.append(entry)
        names = (maleNames, femaleNames)
        print 'Saving names.pickle'
        fw = open('names.pickle', 'wb')
        pickle.dump(names, fw, -1)
        fw.close()
        print 'Saved names.pickle'
    else:
        print 'names.pickle exists, loading data'
        f = open('names.pickle', 'rb')
        names = pickle.load(f)
        print 'names.pickle loaded'
    print '%d male names loaded, %d female names loaded' % (len(names[0]), len(names[1]))
    return names[0], names[1]

def extractNamesDict():
    zf = ZipFile('names.zip', 'r')
    filenames = zf.namelist()
    names = dict()
    genderMap = {'M': 0, 'F': 1}
    for filename in filenames:
        fp = zf.open(filename, 'rU')
        rows = csv.reader(fp, delimiter=',', dialect=csv.excel_tab)
        try:
            for row in rows:
                try:
                    name = row[0].upper()
                    gender = genderMap[row[1]]
                    count = int(row[2])
                    if not names.has_key(name):
                        names[name] = [0, 0]
                    names[name][gender] = names[name][gender] + count
                except:
                    pass
        except:
            pass
        fp.close()
        print '\tImported %s' % filename
    return names

def find_gender_from_first_name(name):
    if name.upper() in new_male_list:
        return "Male"
    elif name.upper() in new_female_list:
        return "Female"
    else:
        return "Unknown"

if __name__ == '__main__':
    male_names, female_names = getNameList()
    new_male_list = []
    new_female_list = []
    for index, name in enumerate(male_names):
        try:
            # keep a name as male only if it is at least 4x more common for males
            if (name[1] / name[2]) >= 4:
                new_male_list.append(name[0])
        except:
            new_male_list.append(name[0])
    #print "Total number of Male Names after is %d." % len(new_male_list)
    for index, name in enumerate(female_names):
        try:
            if (name[2] / name[1]) >= 4:
                new_female_list.append(name[0])
        except:
            new_female_list.append(name[0])
    #print "Total number of Female Names after is %d." % len(new_female_list)
    #find_gender_from_first_name('Harsh')
# -
# ## Get Link of Profile Picture
def getProfilePicLink(html):
    soup = BeautifulSoup(html, "lxml")
    images = [x for x in soup.find_all('img')]
    try:
        if "shrinknp_200_200" in str(images[0]):
            imageUrlString = str(images[0]).replace("shrinknp_200_200", "shrinknp_400_400")
        else:
            imageUrlString = ""
    except:
        imageUrlString = ""
    return imageUrlString
# ## Store Profile Picture to Local Directory
def storeProfilePicture(profileUrl, profile_link):
    lst = profileUrl.split()
    userId = profile_link.split('/')[-1]
    regex = re.compile(r'(src).*')
    img_url = re.sub('src=', '', "".join([m.group(0) for l in lst for m in [regex.search(l)] if m]))
    img_url = img_url.strip('""')
    if img_url:
        urllib.urlretrieve(img_url, "Images/" + userId + ".jpg")
        print userId + ".jpg is saved."
        return img_url
    else:
        with open('ghost_person.png', 'rb') as f:
            data = f.read()
        with open("Images/" + userId + ".png", 'wb') as f:
            f.write(data)
        print userId + ".png is saved."
        return "https://static.licdn.com/scds/common/u/images/themes/katy/ghosts/person/ghost_person_100x100_v1.png"
# ## Scrape the UserIDs from the Recommended Users' list
def getRecommendedUserIds(html):
    soup = BeautifulSoup(html, "lxml")
    profLinks = [x for x in soup.find_all('li', {'class': 'profile-card'})]
    recUserIds = []
    for link in profLinks:
        urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', str(link))
        recId = urls[0].split('?')[0].split('/')[-1]
        recUserIds.append(recId)
    return recUserIds
# ## Get Full Name from title in source
def getName(html):
    soup = BeautifulSoup(html, "lxml")
    title = soup.find('title')
    name = str(title).replace('<title>', '')
    full_name = name.replace(' | LinkedIn</title>', '')
    return full_name
# ## Get All Bachelor Degree List and Make a dictionary
def getBachelorList():
    BachelorDict = {}
    regex = re.compile('[^a-zA-Z/]')
    with open('bachelor_degrees.txt') as fp:
        for line in fp.readlines():
            lineSepator = line.split('(')
            abbr = regex.sub('', lineSepator[1])
            abbr = abbr.split('/')
            for abrv in abbr:
                BachelorDict[abrv] = lineSepator[0].strip()
    return BachelorDict
# ## Calculate Age from Bachelor Degree Starting Year
def calculate_age(bachelor_year):
    # assume the person was 18 when starting the bachelor degree
    today = date.today()
    return today.year - bachelor_year + 18
# ## Find Person's All Degrees and their Duration
def get_Degree_Duration(html):
    soup = BeautifulSoup(html.encode("ascii", "ignore"), "lxml")
    schoolLinks = soup.find_all('li', {'class': 'school'})
    degreeList = []
    time_range_list = []
    for soup1 in schoolLinks:
        degreeLink = soup1.find('span', {'data-field-name': "Education.DegreeName"})
        timeRange = soup1.find('span', {'class': "date-range"})
        tempDegree = str(degreeLink).replace('<span class="translated translation" data-field-name="Education.DegreeName">', '')
        degree = tempDegree.replace('</span>', '')
        tempTime = str(timeRange).replace('<span class="date-range">', '')
        time = tempTime.replace('<time>', '')
        temp_time = time.replace('</time>', '')
        time_range = temp_time.replace('</span>', '')
        degreeList.append(degree)
        time_range_list.append(time_range)
    return degreeList, time_range_list
# ## Find the Bachelor Year from Degree List and its Duration
def find_bachelor_year(degree_list, time_list):
    bachelor_degree_duration = set()
    BachelorDict = getBachelorList()
    for index, dg in enumerate(degree_list):
        for key in BachelorDict.keys():
            if key in dg:
                bachelor_degree_duration.add(time_list[index])
                break
        for value in BachelorDict.values():
            if value in dg:
                bachelor_degree_duration.add(time_list[index])
                break
    bachelor_degree_duration = list(bachelor_degree_duration)
    try:
        if not bachelor_degree_duration:
            if time_list[0]:
                bachelor_year = int(time_list[0].split()[0]) - 5
            else:
                bachelor_year = None
        else:
            bachelor_year = int(bachelor_degree_duration[0].split()[0])
    except:
        bachelor_year = None
    return bachelor_year
# ## Find the age from LinkedIn Profile
def age_from_linkedin_profile(profileUrl):
    try:
        degree_list, time_list = get_Degree_Duration(profileUrl)
        refined_degree_list = []
        regex = re.compile('[^a-zA-Z\s+]')
        for degree in degree_list:
            refined_degree_list.append(regex.sub('', degree))
        bachelor_year = find_bachelor_year(refined_degree_list, time_list)
        if bachelor_year:
            age = calculate_age(bachelor_year)
        else:
            age = None
    except:
        age = None
    return age
# ## Make Dictionary for each Profile and and its Recommended Ids
def MakeProfileDictionary(usrid):
    profileUrl = "https://www.linkedin.com/in/" + usrid
    driver = webdriver.PhantomJS(service_log_path=os.path.devnull)
    driver.get(profileUrl)
    html = driver.page_source
    if "Parse the tracking code from cookies." in html:
        return
    else:
        with open("Profile_Source/" + usrid + ".txt", 'wb') as fp:
            fp.write(html.encode('utf-8'))
        profileDict = {}
        profileDict['User_ID'] = usrid
        profileDict['Full_Name'] = getName(html)
        profileDict['Gender'] = find_gender_from_first_name(getName(html).split()[0])
        recommended_profile_ids = getRecommendedUserIds(html)
        profileDict['Recommended_Ids'] = recommended_profile_ids
        profilePicUrl = getProfilePicLink(html)
        picUrl = storeProfilePicture(profilePicUrl, profileUrl)
        profileDict['Profile_Url'] = picUrl
        profileDict['age'] = age_from_linkedin_profile(html)
        return profileDict, recommended_profile_ids
# ## Copying Files for the backup
copyfile('linkedin_UserIds.pickle','Temp_Files/linkedin_UserIds.pickle')
copyfile('linkedin_Black_Listed_UserIds.pickle','Temp_Files/linkedin_Black_Listed_UserIds.pickle')
copyfile('linkedin_profiles.pickle','Temp_Files/linkedin_profiles.pickle')
#copyfile('linkedin_profiles_temp.pickle','Temp_Files/linkedin_profiles_temp.pickle')
# ## Load Values from Pickle File
# +
pkl_file = open("linkedin_UserIds.pickle","rb")
userIds_list=pickle.load(pkl_file) # errors out here
pkl_file.close()
pkl_file = open("linkedin_Black_Listed_UserIds.pickle","rb")
black_listed_userids=pickle.load(pkl_file) # errors out here
pkl_file.close()
pkl_fl = open("linkedin_profiles.pickle","rb")
my_original_list=pickle.load(pkl_fl) # errors out here
pkl_fl.close()
# -
len(my_original_list)
len(black_listed_userids)
# ## Main Method
if __name__ == '__main__':
    directory = "Images"
    if not os.path.exists(directory):
        os.makedirs(directory)
    directory1 = "Profile_Source"
    if not os.path.exists(directory1):
        os.makedirs(directory1)
    #userIds = deque(["harshparikh1001","marissamayer","williamhgates","mbilgic"])
    uids = [uid for uid in (d['User_ID'] for d in my_original_list)]
    uniqueIds = list(set(uids))
    userIds = userIds_list
    profiles = []
    output = open("linkedin_profiles_temp.pickle", 'wb')  # save all profiles as a pickle file
    count = 0
    temp_count = 0
    last_call = 0
    while count != 100:
        usrid = userIds.popleft()
        temp_count += 1
        print temp_count
        if (usrid not in uniqueIds) and (usrid not in black_listed_userids):
            try:
                profileDict, recommed_id_list = MakeProfileDictionary(usrid)
                userIds.extend(recommed_id_list)
                if (profileDict['age'] != None) and (profileDict['Profile_Url'].endswith('.jpg')):
                    count += 1
                    print count
                    profiles.append(profileDict)
                else:
                    black_listed_userids.append(usrid)
                time.sleep(5)  # delay for 5 seconds between requests
            except:
                try:
                    print "\n\n*******Your IP got tracked... Wait for sometime..*******\n\n"
                    last_call += 1
                    mins = 0
                    while mins != 5:
                        print ">>>>>>>>>>>>>>>>>>>>>", (mins)
                        time.sleep(60)
                        mins += 1  # increment the minute total
                except:
                    break
    pickle.dump(profiles, output, -1)
    output.close()
# ## Open pickle file and read stuff from it
# +
pkl_fl = open("linkedin_UserIds.pickle","wb")
pickle.dump(userIds,pkl_fl,-1) # errors out here
pkl_fl.close()
pk_fl = open("linkedin_Black_Listed_UserIds.pickle","wb")
pickle.dump(black_listed_userids,pk_fl,-1) # errors out here
pk_fl.close()
pkl_file = open("linkedin_profiles_temp.pickle","rb")
temp_list=pickle.load(pkl_file) # errors out here
pkl_file.close()
# -
len(temp_list)
profile_list = my_original_list + temp_list
len(profile_list)
import os
for file in os.listdir("Images"):
    file_path = os.path.join("Images", file)
    try:
        if not file.endswith('.jpg'):
            os.unlink(file_path)
    except Exception as e:
        print(e)
pkl_file = open("linkedin_profiles.pickle","wb")
pickle.dump(profile_list, pkl_file,-1) # errors out here
pkl_file.close()
# Source: src/LinkedIn_Scrapper.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# importing our libraries
import numpy as np
# scikit learn comes with a number of preloaded datasets
from sklearn.datasets import load_breast_cancer
# -
data = load_breast_cancer()
type(data) # this is a lot like python dictionaries
data.keys() # we can use these keys as labels
data.data
data.data.shape
data.target # we see that these are just a bunch of zeros and ones
# we can check what these labels mean, as 0 and 1 are only encodings of the actual target names
data.target_names
# so 0 is malignant
# and 1 is benign
data.target.shape # so we can interpret this as every row has a target
# we never make predictions on the data we used to train the model,
# as we know that could mean our model has simply memorized the results.
# that's why scikit-learn has the train_test_split function,
# which divides the dataset into two randomly generated parts:
# one is used for training and the other is used for testing
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.33)
# so according to this call 33% of the data is going to test data and rest train data
# Now it is time to make a machine learning model which will do the classification
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)
model.score(X_train,y_train)
# this gives us the accuracy of our predictions on the training set
# 1.0 means all predictions are perfect
model.score(X_test, y_test) # this gives us the accuracy of the test set
# now let us make predictions from the model we have trained
predictions = model.predict(X_test)
predictions
len(predictions)
len(y_test)
# let us check the accuracy manually
correct = 0
for i in range(len(predictions)):
    if predictions[i] == y_test[i]:
        correct = correct + 1
correct_predicts = correct / len(predictions)
correct # this gives us the number of correct predictions
correct_predicts # comes out to be the same as calculated earlier
# using the model.score() function
predict1 = model.predict([X_test[176,:]]) # making one single prediction
predict1
if predict1 == [y_test[176]]:
    print("right prediction")
# now let do the classification using deep learning
from sklearn.neural_network import MLPClassifier
# +
# it is important to scale the training data to reduce the computation load
from sklearn.preprocessing import StandardScaler
# -
scaler = StandardScaler()
X_train2 = scaler.fit_transform(X_train)
X_test2 = scaler.transform(X_test)
model = MLPClassifier()
model.fit(X_train2, y_train)
model.score(X_train2, y_train)
model.score(X_test2, y_test)
# we can see that the accuracy of the neural net based classifier is better than
# that of the RandomForestClassifier
# Source: ScikitLearn_Basics/Classification_example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Optimize prob and nms thresholds
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import glob
import sys
sys.path.append('../')
import cv2
import numpy as np
from tqdm import tqdm
from stardist.models import StarDist3D
from csbdeep.models import Config, CARE
from tifffile import imread
from vollseg.OptimizeThreshold import OptimizeThreshold
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
from pathlib import Path
# +
BaseDir = '/data/u934/service_imagerie/v_kapoor/CurieTrainingDatasets/MouseClaudia/AugmentedGreenCell3D/'
Model_Dir = '/data/u934/service_imagerie/v_kapoor/CurieDeepLearningModels/MouseClaudia/'
SaveDir = '/data/u934/service_imagerie/v_kapoor/CurieTrainingDatasets/MouseClaudia/'
StardistModelName = 'ScipyDeepGreenCells'
UNETModelName = 'UNETScipyDeepGreenCells'
NoiseModel = None
Starmodel = StarDist3D(config = None, name = StardistModelName, basedir = Model_Dir)
UnetModel = CARE(config = None, name = UNETModelName, basedir = Model_Dir)
# +
# Number of tiles to break the image into so that the prediction fits in memory
n_tiles = (1,2,2)
# Use the probability map (True) or the distance map (False) as the image to perform the watershed on
UseProbability = False
# +
Raw = sorted(glob.glob(BaseDir + '/Raw/' + '*.tif'))
RealMask = sorted(glob.glob(BaseDir + '/RealMask/' + '*.tif'))
X = list(map(imread,Raw))
Y = list(map(imread,RealMask))
OptimizeThreshold(Starmodel,UnetModel,X,Y,BaseDir, UseProbability = UseProbability, n_tiles=n_tiles)
# -
# Source: examples/Predict/Optimizer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Computing first-order interest
# The interest formula, where i is the annual interest rate and r is the daily rate:\
# $$r=((1+i)^{\frac{1}{372}}-1)$$
# Note: we assume 31 days in every month, i.e. 372 days per year.
#Parameter
FørsteOrdenReserve = 267279.3735
i=0.05
sikkerhedsTillæg=0.005
# daily rate
r = ((1+i)**(1/372)-1)
print(r)
# daily safety margin
sikkerhedsDaglig = (1+i)**(1/372) - (1+i-sikkerhedsTillæg)**(1/372)
print(sikkerhedsDaglig)
# ## Interest earned
# Compound forward 31 days with the daily rate:
# $$F(31)=F(0)*((1+r)^{31}-1)$$
# F(t) is the future value, where *t* is time.
# interest earned net of the safety margin (net daily rate = r - sikkerhedsDaglig)
print(FørsteOrdenReserve*((1+r-sikkerhedsDaglig)**31-1))
# interest earned before deducting the safety margin
print(FørsteOrdenReserve*((1+r)**31-1))
# safety margin
print(FørsteOrdenReserve*((1+sikkerhedsDaglig)**31-1))
# Effective rate
effektivRente = (1+(i-sikkerhedsTillæg))**(1/372)-1
effektivRente
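# A small sanity-check sketch of the rate definitions above (variable names here are
# illustrative English aliases of the notebook's i, r, sikkerhedsDaglig and effektivRente):
# the net daily rate equals the effective daily rate exactly, and compounding the effective
# daily rate over 372 days recovers the annual rate minus the safety margin.

```python
import math

i_annual = 0.05
safety = 0.005
r_daily = (1 + i_annual)**(1/372) - 1
daily_safety = (1 + i_annual)**(1/372) - (1 + i_annual - safety)**(1/372)
eff_daily = (1 + (i_annual - safety))**(1/372) - 1

# net daily rate == effective daily rate (the powers of (1+i) cancel)
assert math.isclose(r_daily - daily_safety, eff_daily)
# 372 daily compoundings recover the net annual rate
assert math.isclose((1 + eff_daily)**372 - 1, i_annual - safety)
```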
# ## Reserve after crediting interest
# $Ultimo = primo + primo * ((1+effektivRente)^{31}-1)$ (closing reserve = opening reserve plus interest)
# Closing reserve, 2nd order
FørsteOrdenReserve * ((1+effektivRente)**(31)-1) + FørsteOrdenReserve
# sanity check: effective-rate interest approximately equals gross interest minus the safety margin
print(FørsteOrdenReserve * ((1+effektivRente)**(31)-1))
print(FørsteOrdenReserve*((1+r)**31-1)-FørsteOrdenReserve*((1+sikkerhedsDaglig)**31-1))
| RenteFremregning1Orden.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="QxqNgxvRJZ3k" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598408352832, "user_tz": 240, "elapsed": 726, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08851466340381164023"}}
# skeleton of the full try/except/else/finally flow
try:
    pass        # code that may raise goes here
except Exception as e:
    raise       # runs only if the try block raised
else:
    pass        # runs only if the try block did NOT raise
finally:
    pass        # always runs, whether an exception occurred or not
# + id="ZaaLrqzIKXOI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 167} executionInfo={"status": "error", "timestamp": 1598408385312, "user_tz": 240, "elapsed": 667, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08851466340381164023"}} outputId="b3f6ac3b-0d62-4db8-fb3b-4956ceddbcb2"
fh = open("unexisting_file.txt", "r")
# + id="Zz-Yau5TLRsU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598408408751, "user_tz": 240, "elapsed": 817, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08851466340381164023"}} outputId="e8fe4355-6905-43f3-860c-73773b655dfa"
try:
fh = open("unexisting_file.txt", "r")
fh.write("This is my test file for exception handling!!")
except IOError:
    print("Error: can't find file or read data")
else:
print ("Written content in the file successfully")
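# Since Python 3.3, `IOError` is just an alias of `OSError`, and the failed `open` above
# actually raises the more specific `FileNotFoundError`. A sketch demonstrating the
# hierarchy and the order in which `else`/`finally` run (the file path is a throwaway
# temp file used only for this demo):

```python
import tempfile, os

# FileNotFoundError is a subclass of OSError, and IOError is the same class,
# so `except IOError` also catches a missing file.
assert issubclass(FileNotFoundError, IOError)
assert IOError is OSError

path = os.path.join(tempfile.gettempdir(), "demo_exceptions.txt")
log = []
try:
    with open(path, "w") as fh:
        fh.write("ok")
except OSError:
    log.append("error")
else:
    log.append("written")   # no exception was raised
finally:
    log.append("done")      # always runs
    if os.path.exists(path):
        os.remove(path)
print(log)  # ['written', 'done']
```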
| Module 2 Python intro + libraries/16 exceptions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import SimpleITK as sitk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import sys
import cv2
import random
from random import randrange
from scipy import signal
from numpy import linalg
from numpy import *   # keeps bare names such as matrix, array and nan used below
from pylab import *
from PIL import Image
from skimage.transform import warp
## print full matrices instead of truncating them
np.set_printoptions(threshold=sys.maxsize)
## imshow backend workaround (uncomment if figures fail to display)
#import tkinter
#import matplotlib
#matplotlib.use('TkAgg')
##################################### for windows ######################################
# directory='C:/Users/mahdi/OneDrive/Desktop/data/SP_S05_D1_RND.nii'
##################################### for linux #######################################
directory='/home/mahdi/python codes/final version/SP_S05_D1_RND.nii'
I = sitk.ReadImage(directory)
I = sitk.GetArrayFromImage(I)
## t.shape[0] ## for volume
def mask(I, volume, layer):
    # zero-pad the volume index to three digits to match the CSV file names
    name = str(volume).zfill(3)
    g = I[volume, layer, :, :]
    g = g.astype(np.float32)
    df = pd.read_csv('/home/mahdi/python codes/centerline_case2/centerline_volume'+name+'.csv', header=None)
    df.columns = ['x', 'y', 'delete']
    df = df[['x', 'y']]
    c = df.loc[layer]
    x = int(c['x'])
    y = int(c['y'])
    # 30x30 crop centred on the centerline point
    f = g[y-15:y+15, x-15:x+15]
    return f
# +
def GaussianFunction(x, sigma):
    if sigma == 0:
        return 0
    else:
        # the whole exponent -x^2/(2*sigma^2) must be inside exp()
        g = (1/math.sqrt(2*math.pi*sigma*sigma))*math.exp(-x*x/(2*sigma*sigma))
        return g
# function returns a 1-D Gaussian kernel of length 5 built from GaussianFunction
def GaussianMask(sigma):
    g = []
    for i in range(-2, 3):# sample the Gaussian at -2..2, averaging half-pixel offsets
g1 = GaussianFunction(i,sigma)
g2 = GaussianFunction(i-0.5, sigma)
g3 = GaussianFunction(i+0.5, sigma)
gaussian = (g1+g2+g3)/3
g.append(gaussian)
return g
sigma = 1.5
G = [] # Gaussian Kernel
G = GaussianMask(sigma)
def DownSample(I):
    Ix = []
I = np.array(I)
S = np.shape(I) #shape of the image
for i in range(S[0]):
Ix.extend([signal.convolve(I[i,:],G,'same')])#convolution of the I[i] with G
Ix = np.array(np.matrix(Ix))
Iy = Ix[::2, ::2]#selects the alternate column and row
return Iy
def UpSample(I):
I = np.array(I)
S = np.shape(I)
Ix = np.zeros((S[0], 2*S[1]))#inserting alternate rows of zeros
Ix[:, ::2] = I
S1 = np.shape(Ix)
Iy = np.zeros((2*S1[0], S1[1]))#inserting alternate columns of zeros
Iy[::2, :] = Ix
    Ig = cv2.GaussianBlur(Iy, (5,5), 1.5, 1.5)# using OpenCV's Gaussian blur (larger kernel) here instead of the user-defined Gaussian function
return Ig
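# The two helpers above form one Gaussian-pyramid level: blur (anti-alias), then keep every
# other row and column; UpSample does the reverse. A minimal self-contained sketch of the
# downsampling step, using `scipy.ndimage` and blurring in both directions (DownSample above
# only convolves along rows):

```python
import numpy as np
from scipy import ndimage

def pyr_down(img, sigma=1.5):
    """One pyramid level: Gaussian blur as anti-aliasing, then 2x decimation."""
    blurred = ndimage.gaussian_filter(img.astype(float), sigma)
    return blurred[::2, ::2]

demo = np.random.rand(64, 48)
small = pyr_down(demo)
assert small.shape == (32, 24)  # both dimensions halve
```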
def LucasKanade(I1, I2):
I1 = np.array(I1)
I2 = np.array(I2)
S = np.shape(I1)
Ix = signal.convolve2d(I1,[[-0.25,0.25],[-0.25,0.25]],'same') + signal.convolve2d(I2,[[-0.25,0.25],[-0.25,0.25]],'same')
Iy = signal.convolve2d(I1,[[-0.25,-0.25],[0.25,0.25]],'same') + signal.convolve2d(I2,[[-0.25,-0.25],[0.25,0.25]],'same')
It = signal.convolve2d(I1,[[0.25,0.25],[0.25,0.25]],'same') + signal.convolve2d(I2,[[-0.25,-0.25],[-0.25,-0.25]],'same')
features = cv2.goodFeaturesToTrack(I1, 10000, 0.01, 10)
    features = features.astype(int)  # np.int0 is deprecated in recent NumPy
u = np.ones((S))
v = np.ones((S))
for l in features:
j,i = l.ravel()
#IX = ([Ix[i-1,j-1],Ix[i,j-1],Ix[i+1,j+1],Ix[i-1,j],Ix[i,j],Ix[i+1,j],Ix[i-1,j+1],Ix[i,j+1],Ix[i+1,j-1]])
#IY = ([Iy[i-1,j-1],Iy[i,j-1],Iy[i+1,j+1],Iy[i-1,j],Iy[i,j],Iy[i+1,j],Iy[i-1,j+1],Iy[i,j+1],Iy[i+1,j-1]])
#IT = ([It[i-1,j-1],It[i,j-1],It[i+1,j+1],It[i-1,j],It[i,j],It[i+1,j],It[i-1,j+1],It[i,j+1],It[i+1,j-1]])
        IX = ([Ix[i-1,j-1],Ix[i-1,j],Ix[i-1,j+1],Ix[i,j-1],Ix[i,j],Ix[i,j+1],Ix[i+1,j-1],Ix[i+1,j],Ix[i+1,j+1]])
        IY = ([Iy[i-1,j-1],Iy[i-1,j],Iy[i-1,j+1],Iy[i,j-1],Iy[i,j],Iy[i,j+1],Iy[i+1,j-1],Iy[i+1,j],Iy[i+1,j+1]])
        IT = ([It[i-1,j-1],It[i-1,j],It[i-1,j+1],It[i,j-1],It[i,j],It[i,j+1],It[i+1,j-1],It[i+1,j],It[i+1,j+1]])
        # Using the least-squares solution approach
        LK = (IX,IY)
        LK = matrix(LK)
        LK_T = array(matrix(LK))             # 2x9 matrix [IX; IY]
        LK = array(np.matrix.transpose(LK))  # 9x2 design matrix
        # Pseudo-inverse: (A^T A)^+ A^T
        A1 = np.dot(LK_T,LK)
        A2 = np.linalg.pinv(A1)
        A3 = np.dot(A2,LK_T)
(u[i,j],v[i,j]) = np.dot(A3,IT) # we have the vectors with minimized square error
u = np.flipud(u)
v = np.flipud(v)
return u,v
# -
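# The per-window solve inside `LucasKanade` is ordinary least squares: stack the spatial
# gradients into a 9x2 matrix A = [Ix Iy] and solve A [u, v]^T = -It via the pseudo-inverse.
# A self-contained sketch with synthetic gradients consistent with a known flow:

```python
import numpy as np

rng = np.random.default_rng(0)
Ix = rng.normal(size=9)               # x-gradients of a 3x3 window
Iy = rng.normal(size=9)               # y-gradients
u_true, v_true = 2.0, -1.0
It = -(Ix * u_true + Iy * v_true)     # brightness-constancy: Ix*u + Iy*v + It = 0

A = np.stack([Ix, Iy], axis=1)        # 9x2 design matrix
uv = np.linalg.pinv(A) @ (-It)        # least-squares flow estimate
print(uv)  # ≈ [ 2. -1.]
```

# With noise-free gradients and a full-rank A this recovers the flow exactly; in real images
# the window must contain enough texture (A^T A well-conditioned) for the estimate to be stable.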
def LucasKanadeIterative(I1, I2, u1, v1,kernel):
I1 = np.array(I1)
I2 = np.array(I2)
S = np.shape(I1)
u1 = np.round(u1)
v1 = np.round(v1)
u = np.zeros(S)
v = np.zeros(S)
for i in range(int(kernel/2),S[0]-int(kernel/2)):
        for j in range(int(kernel/2),S[1]-int(kernel/2)):# columns span S[1] (the original S[0] only worked for square frames)
I1new = I1[i-int(kernel/2):i+int(kernel/2)+1,j-int(kernel/2):j+int(kernel/2)+1]# picking 5x5 pixels at a time
lr = (i-int(kernel/2))+v1[i,j]#Low Row Index
hr = (i+int(kernel/2))+v1[i,j]#High Row Index
lc = (j-int(kernel/2))+u1[i,j]#Low Column Index
hc = (j+int(kernel/2))+u1[i,j]#High Column Index
#window search and selecting the last window if it goes out of bounds
if(lr < 0):
lr = 0
hr = kernel-1
if(lc < 0):
lc = 0
hc = kernel-1
if(hr > (len(I1[:,0]))-1):
lr = len(I1[:,0])-kernel
hr = len(I1[:,0])-1
if(hc > (len(I1[0,:]))-1):
lc = len(I1[0,:])-kernel
hc = len(I1[0,:])-1
if(np.isnan(lr)):
lr = i-int(kernel/2)
hr = i+int(kernel/2)
if(np.isnan(lc)):
lc = j-int(kernel/2)
hc = j+int(kernel/2)
#Selecting the same window for the second frame
I2new = I2[int(lr):int((hr+1)),int(lc):int((hc+1))]
# Now applying LK for each window of the 2 images
IX = signal.convolve2d(I1new,[[-0.25,0.25],[-0.25,0.25]],'same') + signal.convolve2d(I2new,[[-0.25,0.25],[-0.25,0.25]],'same')
IY = signal.convolve2d(I1new,[[-0.25,-0.25],[0.25,0.25]],'same') + signal.convolve2d(I2new,[[-0.25,-0.25],[0.25,0.25]],'same')
IT = signal.convolve2d(I1new,[[0.25,0.25],[0.25,0.25]],'same') + signal.convolve2d(I2new,[[-0.25,-0.25],[-0.25,-0.25]],'same')
if kernel>1:
IX = np.transpose(IX[1:kernel,1:kernel])
IY = np.transpose(IY[1:kernel,1:kernel])
IT = np.transpose(IT[1:kernel,1:kernel])
IX = IX.ravel()
IY = IY.ravel()
IT = IT.ravel()
LK = (IX,IY)
LK = np.matrix(LK)
LK_T = np.array(np.matrix(LK))
LK = np.array(np.matrix.transpose(LK))
A1 = np.dot(LK_T,LK)
A2 = np.linalg.pinv(A1)
A3 = np.dot(A2,LK_T)
(u[i,j],v[i,j]) = np.dot(A3,IT)
return u,v
def LK_Pyramid(Im1, Im2, iteration, level,kernel):
I1 = np.array(Im1)
I2 = np.array(Im2)
S = np.shape(I1)
pyramid1 = np.empty((S[0],S[1],level))
pyramid2 = np.empty((S[0],S[1],level))
    pyramid1[:,:,0] = I1 #since the lowest level is the original image
    pyramid2[:,:,0] = I2 #since the lowest level is the original image
#creating the pyramid by downsampling the original image
for i in range(1, level):
I1 = DownSample(I1)
I2 = DownSample(I2)
pyramid1[0:np.shape(I1)[0], 0:np.shape(I1)[1], i] = I1
pyramid2[0:np.shape(I2)[0], 0:np.shape(I2)[1], i] = I2
level0_I1 = pyramid1[0:round(len(pyramid1[:,0])/4),0:round(len(pyramid1[0,:])/4),2]
level0_I2 = pyramid2[0:round(len(pyramid2[:,0])/4),0:round(len(pyramid2[0,:])/4),2]
(u,v) = LucasKanade(Im1, Im2)
for i in range(0, iteration):
(u,v) = LucasKanadeIterative(level0_I1, level0_I2, u, v,kernel)
u_l0 = u
v_l0 = v
I_l0 = level0_I1
#u_l0[np.where(u_l0 == 0)] = nan
#v_l0[np.where(v_l0 == 0)] = nan
#for level 1
k = 1
u1 = UpSample(u)
v1 = UpSample(v)
I1new = pyramid1[0:int(len(pyramid1[:,0])/(2**(level-k-1))),0:int(len(pyramid1[0,:])/(2**(level-k-1))),level-k-1]
I2new = pyramid2[0:int(len(pyramid2[:,0])/(2**(level-k-1))),0:int(len(pyramid2[0,:])/(2**(level-k-1))),level-k-1]
(u,v) = LucasKanadeIterative(I1new, I2new, u1, v1,kernel)
u_l1 = u
v_l1 = v
I_l1 = I1new
#u_l1[np.where(u_l1 == 0)] = nan
#v_l1[np.where(v_l1 == 0)] = nan
k = 2
u1 = UpSample(u)
v1 = UpSample(v)
I1new = pyramid1[0:int(len(pyramid1[:,0])/(2**(level-k-1))),0:int(len(pyramid1[0,:])/(2**(level-k-1))),level-k-1]
I2new = pyramid2[0:int(len(pyramid2[:,0])/(2**(level-k-1))),0:int(len(pyramid2[0,:])/(2**(level-k-1))),level-k-1]
(u,v) = LucasKanadeIterative(I1new, I2new, u1, v1,kernel)
u_l2 = u
v_l2 = v
I_l2 = I1new
#u_l2[np.where(u_l2 == 0)] = nan
#v_l2[np.where(v_l2 == 0)] = nan
nr, nc = Im1.shape
row_coords, col_coords = np.meshgrid(np.arange(nr), np.arange(nc),
indexing='ij')
im1_warp = warp(I2new, np.array([row_coords + u_l2, col_coords + v_l2]),
order=1)
return im1_warp
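# `warp` with an explicit coordinate array performs inverse mapping:
# out[r, c] = img[coords[0][r, c], coords[1][r, c]], which is how the flow field (u, v)
# is applied at the end of `LK_Pyramid`. A tiny sketch with a constant flow of +1 row:

```python
import numpy as np
from skimage.transform import warp

img = np.arange(25, dtype=float).reshape(5, 5)
rr, cc = np.meshgrid(np.arange(5), np.arange(5), indexing='ij')
# sample img one row below each output pixel -> image shifts up by one row
shifted = warp(img, np.array([rr + 1.0, cc]), order=1)
assert np.allclose(shifted[:4], img[1:])   # interior rows match the shift
assert np.allclose(shifted[4], 0)          # out-of-bounds samples filled with cval=0
```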
# # Run on the full dataset (with format change to NIfTI)
# +
import nibabel as nib
import numpy as np
from scipy import ndimage, misc
import time
import os
import subprocess
start_time = time.time()
#==============================
img = nib.load(directory)
img_mask_affine = img.affine
#################################
header = img.header
nb_img = header.get_data_shape()
nb_img_h = nb_img[0] # height
#################################
o=np.zeros((30,30,nb_img[2],nb_img[3]))
kernel=11
for v in range(0,nb_img[3]):
for s in range(0,nb_img[2]):
a=LK_Pyramid(mask(I,0,s), mask(I,v,s), 3, 3,kernel)
a=a.astype(np.int16)
o[:, :, s,v] = a.T
print("--- %s second ---" % (time.time() - start_time))
img_reg = nib.Nifti1Image(o, affine=img_mask_affine, header=header)
nib.save(img_reg,'/home/mahdi/python codes/motion result/dataT'+str(kernel))
subprocess.Popen(['fsleyes','/home/mahdi/python codes/motion result/dataT'+str(kernel)]) ## just change the output names.
# +
subprocess.Popen(['sct_fmri_compute_tsnr','-i','/home/mahdi/python codes/motion result/dataT'+str(kernel)+'.nii','-c','t2s']) ## just change the output names.
# -
| v 5.1 .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 0. Introduction
# While the previous projects dealt with medical image features, we now turn to the classification of entire time series into one of 4 classes. This time you will work with original ECG recordings of different lengths, sampled at 300 Hz, to predict the heart rhythm.
#
# X_train.csv: the training signals; each row is one sample indexed by an id, the first column contains the id, and the remaining columns hold up to 17842 sample points.
#
# X_test.csv: the test set, same structure
#
# y_train.csv: the training targets (signal classes)
#
# The problem is a classification task: we have ECG measurements from 4 classes that are unbalanced and of unequal length. We used several techniques to extract features for the classification:
#
# - For each raw ECG signal: the autocorrelation, the average, the peak-to-peak range and the indices of the 15 largest FFT coefficients.
# - Using biosppy we extracted the individual heartbeats, normalized and averaged them into one characteristic heartbeat of the same length per recording. From this average beat we extracted the energy of the wave; the T, S, P, R, Q peaks; the ST, QRS and PR intervals; the QRS/T and QRS/P ratios; the median, mean and range of the amplitude; and the db2 wavelet coefficients.
# - biosppy also gave us the locations and timings of the peaks in the original wave, as well as the heart beats and their timings. For all of them we calculated the mean, median and standard deviation, plus the same statistics of the differences between the peaks' timings (an important feature for separating noise, normal heart rate and abnormal heart rhythms).
#
# Using all of these features we trained a GradientBoosting model, fine-tuned with a cross-validated grid search. The model has a 0.817 mean score in cross-validation and 0.833 on the public scoreboard.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
import csv
import os
import biosppy as biosppy
import biosppy.signals.ecg as ecg
import pywt
from sklearn.preprocessing import normalize
from scipy import stats
from statistics import pstdev,variance
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
from sklearn.impute import SimpleImputer
# -
# # 1. Preprocessing
# ## 1.0 Read data from CSV files
# Use pandas to read the csv files, then drop the unnecessary `id` column. The correctness of the read is checked at the end of the cell.
# +
x_train = pd.read_csv("task2/X_train.csv")
y_train = pd.read_csv("task2/y_train.csv")
x_test = pd.read_csv("task2/X_test.csv")
x_train.pop("id")
y_train.pop("id")
x_test.pop("id")
x_train.head(3)
# -
print(x_train.shape, x_test.shape, y_train.shape)
# ## 1.1 Extract frequency domain features
# +
# extract signal-level features: top FFT coefficients, peak-to-peak range, average and autocorrelation
# before padding to 9000 points
autocorr = []
ptp = []
avg = []
fft = []
for i in range(len(x_train)):
# extract i-th single row as a dataframe and drop na values
signal = x_train.loc[i].dropna().to_numpy(dtype='float32')
signal_series = pd.Series(signal)
# extract autocorrelation, average, ptp(max-min)
autocorr.append(signal_series.autocorr(lag=2))
avg.append(np.average(signal))
ptp.append(np.ptp(signal))
    f_coefficients = np.fft.fft(signal)
    f_coefficients = f_coefficients[0:800]
    n = 15
    # rank by magnitude; argsort on the raw complex values would order by real part first
    f = np.abs(f_coefficients).argsort()[-n:][::-1]
    fft.append(f)
autocorr = np.transpose(np.array([autocorr]))
ptp = np.transpose(np.array([ptp]))
avg = np.transpose(np.array([avg]))
fft = np.array(fft)
# -
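# The top-n FFT selection above can be checked on a synthetic signal: a pure 5-cycle sine over
# 300 samples concentrates all its energy in bin 5 of the half-spectrum, so ranking bins by
# magnitude must put bin 5 first.

```python
import numpy as np

n_samples = 300
t = np.arange(n_samples)
signal_demo = np.sin(2 * np.pi * 5 * t / n_samples)
spectrum = np.fft.fft(signal_demo)[:n_samples // 2]   # positive-frequency half
top = np.abs(spectrum).argsort()[-3:][::-1]           # strongest bins, descending
print(top[0])  # 5
```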
# ## 1.2 Time Series Analysis using Biosppy
# function for extracting average of squared rpeaks differences
def mean_sqrd_diff(rpeaks):
diff = np.diff(rpeaks)
mean_sqrd = np.mean(diff*diff)
return mean_sqrd
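# A tiny worked example of the helper above (re-sketched here under a hypothetical name so it
# stays self-contained): with perfectly regular beats every 2 samples, every squared gap is 4.

```python
import numpy as np

def mean_sqrd_diff_demo(rpeaks):
    # mean of the squared gaps between consecutive R-peak indices;
    # large values flag long or irregular intervals between beats
    diff = np.diff(rpeaks)
    return np.mean(diff * diff)

print(mean_sqrd_diff_demo([0, 2, 4, 6]))  # 4.0
```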
# +
# Process a raw ECG signal and extract relevant signal features using default parameters
# return ts, filtered, rpeaks, templates_ts, heartbeat templates
# and heart_rate_ts, heart_rate
ts_list = []
filtered_list = []
rpeaks_list = []
templates_ts_list = []
templates_list = []
heart_rate_ts_list = []
heart_rate_list = []
for i in range(len(x_train)):
# print(i)
ts, filtered, rpeaks, templates_ts, templates, heart_rate_ts, heart_rate = \
biosppy.signals.ecg.ecg(signal = x_train.loc[i].dropna().to_numpy(dtype='float32'),
sampling_rate=300.0, show=False)
# # Correct R-peak locations to the maximum, introduce some tolerance level
# rpeaks = ecg.correct_rpeaks(signal = x_train.loc[i].dropna().to_numpy(dtype='float32'),
# rpeaks = rpeaks, sampling_rate = 300.0,
# tol = 0.01)
# # Set heart rates to array of nans if contains no elements, otherwise min and max are not defined
# if len(heart_rate) == 0:
# heart_rate = np.array([np.nan, np.nan])
# if len(heart_rate_ts) == 0:
# heart_rate_ts = np.array([np.nan, np.nan])
filtered_list.append(filtered)
rpeaks_list.append(rpeaks)
templates_ts_list.append(templates_ts)
templates_list.append(templates)
heart_rate_ts_list.append(heart_rate_ts)
heart_rate_list.append(heart_rate)
ts_list.append(ts)
# +
# Find the average characteristic heartbeat and try to plot one sample
normalized_templates = []
average_heartbeats = []
for i in range(len(templates_list)):
normalized_templates.append(normalize(templates_list[i]))
average_heartbeats.append(sum(normalized_templates[i])/len(normalized_templates[i]))
plt.plot(average_heartbeats[0])
plt.show()
# +
# Find P,Q,R,S,T
P_list = []
Q_list = []
R_list = []
S_list = []
T_list = []
P_value_list = []
Q_value_list = []
S_value_list = []
T_value_list = []
def find_points(i):
current = average_heartbeats[i]
# Find R(the peak)
sample_point = np.where(current == max(current))
R = sample_point[0]
first_half = current[0:R[0]]
sample_point = np.where(current == min(first_half[R[0]-30:R[0]]))
Q = sample_point[0]
sample_point = np.where(first_half[0:Q[0]] == max(first_half[0:Q[0]]))
P = sample_point[0]
second_half = current[R[0]+1:]
sample_point = np.where(current == min(second_half[0:30]))
S = sample_point[0]
sample_point = np.where(current == max(second_half[(S[0]-R[0]+1):]))
T = sample_point[0]
return P,Q,R,S,T
# current = average_heartbeats[256]
# plt.plot(current)
# plt.scatter(find_points(256)[0],current[find_points(256)[0]],label='P')
# plt.scatter(find_points(256)[1],current[find_points(256)[1]],label='Q')
# plt.scatter(find_points(256)[2],current[find_points(256)[2]],label='R')
# plt.scatter(find_points(256)[3],current[find_points(256)[3]],label='S')
# plt.scatter(find_points(256)[4],current[find_points(256)[4]],label='T')
# plt.plot(np.arange(0, 180),np.zeros(180), 'r--')
# plt.legend()
# plt.show()
# -
for i in range(len(average_heartbeats)):
# print(i)
P_list.append(find_points(i)[0])
Q_list.append(find_points(i)[1])
R_list.append(find_points(i)[2])
S_list.append(find_points(i)[3])
T_list.append(find_points(i)[4])
P_value_list.append(average_heartbeats[i][find_points(i)[0]])
Q_value_list.append(average_heartbeats[i][find_points(i)[1]])
S_value_list.append(average_heartbeats[i][find_points(i)[3]])
T_value_list.append(average_heartbeats[i][find_points(i)[4]])
mean_sqrd = []
for i in range(len(rpeaks_list)):
mean_sqrd.append(mean_sqrd_diff(rpeaks_list[i]))
len(mean_sqrd)
# +
# Find Intervals and Ratios of peaks
RR_list = []
PR_list = []
QRS_list = []
ST_list = []
def findInterval(i):
    # only the RR interval needs the next beat; the other intervals use the current beat,
    # so they are appended unconditionally to keep all lists the same length as the data
    if i+1 < len(R_list):
        RR_list.append(R_list[i+1]-R_list[i]) # was P_list, which measured the PP interval
    PR_list.append(R_list[i]-P_list[i])
    QRS_list.append(S_list[i]-Q_list[i])
    ST_list.append(T_list[i]-S_list[i])
for i in range(len(P_list)):
findInterval(i)
RR_list = np.array(RR_list).reshape(-1,1)
QRS_list = np.array(QRS_list).reshape(-1,1)
ST_list = np.array(ST_list).reshape(-1,1)
P_list = np.array(P_list).reshape(-1,1)
R_list = np.array(R_list).reshape(-1,1)
S_list = np.array(S_list).reshape(-1,1)
T_list = np.array(T_list).reshape(-1,1)
QRS_T_list= np.divide(QRS_list, T_list)
QRS_P_list= np.divide(QRS_list, P_list)
QRS_T_list=np.nan_to_num(QRS_T_list, nan=0.0,posinf=0.0, neginf=0.0)
QRS_P_list=np.nan_to_num(QRS_P_list, nan=0.0,posinf=0.0, neginf=0.0)
# +
max_wave = []
min_wave = []
mean_wave = []
median_wave = []
for i in range(len(average_heartbeats)):
current = average_heartbeats[i]
max_wave.append(max(current))
min_wave.append(min(current))
mean_wave.append(np.mean(current))
median_wave.append(np.median(current))
# +
# Heart rates: mean, median, variance and standard deviation
hr_mean = []
hr_std = []
hr_median = []
hr_var = []
for i in range(len(heart_rate_list)):
d = np.diff(heart_rate_list[i])
hr_mean.append(np.mean(d))
hr_std.append(np.std(d))
hr_median.append(np.median(d))
hr_var.append(np.mean(d)-np.var(d))
hr_mean=np.nan_to_num(hr_mean, nan = 0.0)
hr_std=np.nan_to_num(hr_std, nan = 0.0)
hr_median=np.nan_to_num(hr_median, nan = 0.0)
hr_var=np.nan_to_num(hr_var, nan = 0.0)
# +
# Timings of peaks: mean, median, variance and standard deviation
ts_mean = []
ts_std = []
ts_median = []
ts_var = []
for i in range(len(ts_list)):
d =np.diff(ts_list[i])
ts_mean.append(np.mean(d))
ts_std.append(np.std(d))
ts_median.append(np.median(d))
ts_var.append(np.mean(d)-np.var(d))
ts_mean=np.nan_to_num(ts_mean, nan=0.0)
ts_std=np.nan_to_num(ts_std, nan=0.0)
ts_median=np.nan_to_num(ts_median, nan=0.0)
ts_var=np.nan_to_num(ts_var, nan=0.0)
# +
# Timings of heart rates: mean, median, variance and standard deviation
hr_ts_mean = []
hr_ts_std = []
hr_ts_median = []
hr_ts_var = []
for i in range(len(heart_rate_ts_list)):
d =np.diff(heart_rate_ts_list[i])
hr_ts_mean.append(np.mean(d))
hr_ts_std.append(np.std(d))
hr_ts_median.append(np.median(d))
hr_ts_var.append(np.mean(d)-np.var(d))
hr_ts_mean=np.nan_to_num(hr_ts_mean, nan=0.0)
hr_ts_std=np.nan_to_num(hr_ts_std, nan=0.0)
hr_ts_median=np.nan_to_num(hr_ts_median, nan=0.0)
hr_ts_var=np.nan_to_num(hr_ts_var, nan=0.0)
# +
# Peaks: mean, median, variance, mode and standard deviation
peaks_mean = []
peaks_std = []
peaks_median = []
peaks_mode = []
peaks_var = []
for i in range(len(rpeaks_list)):
peaks_mean.append(np.mean(rpeaks_list[i]))
peaks_std.append(np.std(rpeaks_list[i]))
peaks_median.append(np.median(rpeaks_list[i]))
peaks_mode.append(np.mean(rpeaks_list[i])-stats.mode(rpeaks_list[i])[0])
peaks_var.append(np.var(rpeaks_list[i]))
# +
# Peaks differences: mean, median, variance, mode and standard deviation
diff_mean=[]
diff_std=[]
diff_median=[]
diff_mode=[]
diff_var = []
diff_dev = []
for i in range(len(rpeaks_list)):
d = np.diff(rpeaks_list[i])
diff_mean.append(np.mean(d))
diff_std.append(np.std(d))
diff_median.append(np.median(d))
diff_mode.append(np.mean(d)-stats.mode(d)[0])
diff_var.append(np.mean(d)-variance(d))
diff_dev.append(np.mean(d)-pstdev(d))
diff_mean=np.nan_to_num(diff_mean, nan=0.0)
diff_std=np.nan_to_num(diff_std, nan=0.0)
diff_median=np.nan_to_num(diff_median, nan=0.0)
diff_mode=np.nan_to_num(diff_mode, nan=0.0)
diff_var=np.nan_to_num(diff_var, nan=0.0)
diff_dev=np.nan_to_num(diff_dev, nan=0.0)
# -
# Energy of the signal
energy_list = []
for i in range(len(average_heartbeats)):
energy_list.append(np.sum(average_heartbeats[i] ** 2))
# +
# db2 coefficients
cA_list=[]
cD_list=[]
for i in range(len(average_heartbeats)):
cA, cD = pywt.dwt(average_heartbeats[i], 'db2', mode='periodic')
cA_list.append(cA)
cD_list.append(cD)
# +
# Prepare data
hr_mean = np.array(hr_mean).reshape(-1,1)
hr_std = np.array(hr_std).reshape(-1,1)
hr_median = np.array(hr_median).reshape(-1,1)
hr_var = np.array(hr_var).reshape(-1,1)
hr_ts_mean = np.array(hr_ts_mean).reshape(-1,1)
hr_ts_std = np.array(hr_ts_std).reshape(-1,1)
hr_ts_median = np.array(hr_ts_median).reshape(-1,1)
hr_ts_var = np.array(hr_ts_var).reshape(-1,1)
ts_mean = np.array(ts_mean).reshape(-1,1)
ts_std = np.array(ts_std).reshape(-1,1)
ts_median = np.array(ts_median).reshape(-1,1)
ts_var = np.array(ts_var).reshape(-1,1)
peaks_mean = np.array(peaks_mean).reshape(-1,1)
peaks_std = np.array(peaks_std).reshape(-1,1)
peaks_median = np.array(peaks_median).reshape(-1,1)
peaks_mode = np.array(peaks_mode).reshape(-1,1)
peaks_var = np.array(peaks_var).reshape(-1,1)
diff_mean = np.array(diff_mean).reshape(-1,1)
diff_std = np.array(diff_std).reshape(-1,1)
diff_median = np.array(diff_median).reshape(-1,1)
diff_mode = np.array(diff_mode).reshape(-1,1)
diff_var = np.array(diff_var).reshape(-1,1)
diff_dev = np.array(diff_dev).reshape(-1,1)
max_wave = np.array(max_wave).reshape(-1,1)
min_wave = np.array(min_wave).reshape(-1,1)
mean_wave = np.array(mean_wave).reshape(-1,1)
median_wave = np.array(median_wave).reshape(-1,1)
energy_list = np.array(energy_list).reshape(-1,1)
# RR_list = np.array(RR_list).reshape(-1,1)
PR_list = np.array(PR_list).reshape(-1,1)
ST_list = np.array(ST_list).reshape(-1,1)
P_list = np.array(P_list).reshape(-1,1)
Q_list = np.array(Q_list).reshape(-1,1)
R_list = np.array(R_list).reshape(-1,1)
S_list = np.array(S_list).reshape(-1,1)
T_list = np.array(T_list).reshape(-1,1)
mean_sqrd = np.array(mean_sqrd).reshape(-1,1)
# Creates array of all training data's features
feats_train = np.concatenate((fft,
autocorr,
ptp,
avg,
peaks_var,
peaks_mean,
peaks_std,
peaks_median,
peaks_mode,
P_list,
Q_list,
R_list,
S_list,
T_list,
ST_list,
QRS_list,
PR_list,
QRS_T_list,
max_wave - min_wave,
mean_wave,
median_wave,
hr_std,
hr_mean,
hr_std,
hr_var,
hr_median,
hr_ts_mean,
hr_ts_std,
hr_ts_median,
hr_ts_var,
diff_dev,
diff_var,
diff_std,
diff_mode,
diff_mean,
diff_median,
ts_mean,
ts_std,
ts_median,
ts_var,
mean_sqrd,
cD_list,
cA_list,
energy_list), axis=1)
print(feats_train.shape)
# -
# # 2. Classification using a Gradient Boosting Classifier
# +
x_training = feats_train
y_train = np.ravel(y_train)
#replacing NaNs with median of columns
impute1 = SimpleImputer(strategy = 'median')  # fill_value is only used with strategy='constant'
x_training = impute1.fit_transform(x_training)
#rescaling data
scaler = StandardScaler()
scaler.fit(x_training)
x_train = scaler.transform(x_training)
# clf = GradientBoostingClassifier(learning_rate=0.05, n_estimators=500, max_depth=7,
# min_samples_split=60, min_samples_leaf=9, subsample=1.0,
# max_features=50, random_state=0)
# using best parameter given by GS
# max_features from 60 to 50
clf = GradientBoostingClassifier(n_estimators = 250,
max_depth = 5,
learning_rate = 0.1,
max_features = 60)
scorer_f1 = make_scorer(f1_score, greater_is_better = True, average = 'micro')
cv_means = []
cv_stds = []
# changed to 5-fold
for i in np.arange(10):
scores = cross_val_score(estimator = clf,
X = x_training,
y = y_train,
scoring = scorer_f1,
cv = KFold(n_splits = 5, shuffle = True))
cv_means.append(np.mean(scores))
cv_stds.append(np.std(scores))
print("Average of F1 scores:", np.mean(cv_means))
print("Standard deviation of F1 scores:", np.mean(cv_stds))
# -
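# The evaluation scaffold above (micro-F1 scorer, shuffled 5-fold CV) can be exercised on a toy
# 4-class problem; `LogisticRegression` stands in here for the much slower
# `GradientBoostingClassifier`, and the dataset parameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import KFold, cross_val_score

# synthetic 4-class dataset as a stand-in for the ECG features
X_demo, y_demo = make_classification(n_samples=400, n_features=10, n_informative=6,
                                     n_classes=4, random_state=0)
scorer = make_scorer(f1_score, average='micro')
demo_scores = cross_val_score(LogisticRegression(max_iter=1000), X_demo, y_demo,
                              scoring=scorer,
                              cv=KFold(n_splits=5, shuffle=True, random_state=0))
assert len(demo_scores) == 5 and all(0.0 <= s <= 1.0 for s in demo_scores)
print(demo_scores.mean())
```

# With `average='micro'`, the F1 score equals plain accuracy in a single-label multiclass
# setting, which is why it is a reasonable summary metric for these unbalanced classes.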
# # 3. Extracting features from Test set
# +
# extract signal-level features: top FFT coefficients, peak-to-peak range, average and autocorrelation
# before padding to 9000 points
autocorr = []
ptp = []
avg = []
fft = []
for i in range(len(x_test)):
# extract i-th single row as a dataframe and drop na values
signal = x_test.loc[i].dropna().to_numpy(dtype='float32')
signal_series = pd.Series(signal)
# extract autocorrelation, average, ptp(max-min)
autocorr.append(signal_series.autocorr(lag=2))
avg.append(np.average(signal))
ptp.append(np.ptp(signal))
    f_coefficients = np.fft.fft(signal)
    f_coefficients = f_coefficients[0:800]
    n = 15
    # rank by magnitude; argsort on the raw complex values would order by real part first
    f = np.abs(f_coefficients).argsort()[-n:][::-1]
    fft.append(f)
autocorr = np.transpose(np.array([autocorr]))
ptp = np.transpose(np.array([ptp]))
avg = np.transpose(np.array([avg]))
fft = np.array(fft)
# +
# Process a raw ECG signal and extract relevant signal features using default parameters
# return ts, filtered, rpeaks, templates_ts, heartbeat templates
# and heart_rate_ts, heart_rate
ts_list = []
filtered_list = []
rpeaks_list = []
templates_ts_list = []
templates_list = []
heart_rate_ts_list = []
heart_rate_list = []
for i in range(len(x_test)):
# print(i)
ts, filtered, rpeaks, templates_ts, templates, heart_rate_ts, heart_rate = \
biosppy.signals.ecg.ecg(signal = x_test.loc[i].dropna().to_numpy(dtype='float32'),
sampling_rate=300.0, show=False)
# # Correct R-peak locations to the maximum, introduce some tolerance level
# rpeaks = ecg.correct_rpeaks(signal = x_test.loc[i].dropna().to_numpy(dtype='float32'),
# rpeaks = rpeaks, sampling_rate = 300.0,
# tol = 0.01)
# # Set heart rates to array of nans if contains no elements, otherwise min and max are not defined
# if len(heart_rate) == 0:
# heart_rate = np.array([np.nan, np.nan])
# if len(heart_rate_ts) == 0:
# heart_rate_ts = np.array([np.nan, np.nan])
filtered_list.append(filtered)
rpeaks_list.append(rpeaks)
templates_ts_list.append(templates_ts)
templates_list.append(templates)
heart_rate_ts_list.append(heart_rate_ts)
heart_rate_list.append(heart_rate)
ts_list.append(ts)
# Find the average characteristic heartbeat
normalized_templates = []
average_heartbeats = []
for i in range(len(templates_list)):
normalized_templates.append(normalize(templates_list[i]))
average_heartbeats.append(sum(normalized_templates[i])/len(normalized_templates[i]))
# Find P,Q,R,S,T
P_list = []
Q_list = []
R_list = []
S_list = []
T_list = []
for i in range(len(average_heartbeats)):
P_list.append(find_points(i)[0])
Q_list.append(find_points(i)[1])
R_list.append(find_points(i)[2])
S_list.append(find_points(i)[3])
T_list.append(find_points(i)[4])
mean_sqrd = []
for i in range(len(rpeaks_list)):
mean_sqrd.append(mean_sqrd_diff(rpeaks_list[i]))
# Find Intervals and Ratios of peaks
RR_list = []
PR_list = []
QRS_list = []
ST_list = []
for i in range(len(P_list)):
findInterval(i)
RR_list = np.array(RR_list).reshape(-1,1)
QRS_list = np.array(QRS_list).reshape(-1,1)
ST_list = np.array(ST_list).reshape(-1,1)
P_list = np.array(P_list).reshape(-1,1)
R_list = np.array(R_list).reshape(-1,1)
S_list = np.array(S_list).reshape(-1,1)
T_list = np.array(T_list).reshape(-1,1)
QRS_T_list= np.divide(QRS_list, T_list)
QRS_P_list= np.divide(QRS_list, P_list)
QRS_T_list=np.nan_to_num(QRS_T_list, nan=0.0,posinf=0.0, neginf=0.0)
QRS_P_list=np.nan_to_num(QRS_P_list, nan=0.0,posinf=0.0, neginf=0.0)
max_wave = []
min_wave = []
mean_wave = []
median_wave = []
for i in range(len(average_heartbeats)):
current = average_heartbeats[i]
max_wave.append(max(current))
min_wave.append(min(current))
mean_wave.append(np.mean(current))
median_wave.append(np.median(current))
# Heart rates: mean, median, variance and standard deviation
hr_mean = []
hr_std = []
hr_median = []
hr_var = []
for i in range(len(heart_rate_list)):
d = np.diff(heart_rate_list[i])
hr_mean.append(np.mean(d))
hr_std.append(np.std(d))
hr_median.append(np.median(d))
hr_var.append(np.mean(d)-np.var(d))
hr_mean=np.nan_to_num(hr_mean, nan = 0.0)
hr_std=np.nan_to_num(hr_std, nan = 0.0)
hr_median=np.nan_to_num(hr_median, nan = 0.0)
hr_var=np.nan_to_num(hr_var, nan = 0.0)
# Timings of peaks: mean, median, variance and standard deviation
ts_mean = []
ts_std = []
ts_median = []
ts_var = []
for i in range(len(ts_list)):
d =np.diff(ts_list[i])
ts_mean.append(np.mean(d))
ts_std.append(np.std(d))
ts_median.append(np.median(d))
ts_var.append(np.mean(d)-np.var(d))
ts_mean=np.nan_to_num(ts_mean, nan=0.0)
ts_std=np.nan_to_num(ts_std, nan=0.0)
ts_median=np.nan_to_num(ts_median, nan=0.0)
ts_var=np.nan_to_num(ts_var, nan=0.0)
# Timings of heart rates: mean, median, variance and standard deviation
hr_ts_mean = []
hr_ts_std = []
hr_ts_median = []
hr_ts_var = []
for i in range(len(heart_rate_ts_list)):
d =np.diff(heart_rate_ts_list[i])
hr_ts_mean.append(np.mean(d))
hr_ts_std.append(np.std(d))
hr_ts_median.append(np.median(d))
hr_ts_var.append(np.mean(d)-np.var(d))
hr_ts_mean=np.nan_to_num(hr_ts_mean, nan=0.0)
hr_ts_std=np.nan_to_num(hr_ts_std, nan=0.0)
hr_ts_median=np.nan_to_num(hr_ts_median, nan=0.0)
hr_ts_var=np.nan_to_num(hr_ts_var, nan=0.0)
# R peaks: mean, median, variance, mode and standard deviation
peaks_mean = []
peaks_std = []
peaks_median = []
peaks_mode = []
peaks_var = []
for i in range(len(rpeaks_list)):
peaks_mean.append(np.mean(rpeaks_list[i]))
peaks_std.append(np.std(rpeaks_list[i]))
peaks_median.append(np.median(rpeaks_list[i]))
peaks_mode.append(np.mean(rpeaks_list[i])-stats.mode(rpeaks_list[i])[0])
peaks_var.append(np.var(rpeaks_list[i]))
# R-peak differences: mean, median, variance, mode and standard deviation
diff_mean=[]
diff_std=[]
diff_median=[]
diff_mode=[]
diff_var = []
diff_dev = []
for i in range(len(rpeaks_list)):
d = np.diff(rpeaks_list[i])
diff_mean.append(np.mean(d))
diff_std.append(np.std(d))
diff_median.append(np.median(d))
diff_mode.append(np.mean(d)-stats.mode(d)[0])
diff_var.append(np.mean(d)-variance(d))
diff_dev.append(np.mean(d)-pstdev(d))
diff_mean=np.nan_to_num(diff_mean, nan=0.0)
diff_std=np.nan_to_num(diff_std, nan=0.0)
diff_median=np.nan_to_num(diff_median, nan=0.0)
diff_mode=np.nan_to_num(diff_mode, nan=0.0)
diff_var=np.nan_to_num(diff_var, nan=0.0)
diff_dev=np.nan_to_num(diff_dev, nan=0.0)
# Daubechies-2 (db2) wavelet coefficients of each averaged heartbeat
cA_list=[]
cD_list=[]
for i in range(len(average_heartbeats)):
cA, cD = pywt.dwt(average_heartbeats[i], 'db2', mode='periodic')
cA_list.append(cA)
cD_list.append(cD)
# Energy of the signal
energy_list = []
for i in range(len(average_heartbeats)):
energy_list.append(np.sum(average_heartbeats[i] ** 2))
# Prepare data
hr_mean = np.array(hr_mean).reshape(-1,1)
hr_std = np.array(hr_std).reshape(-1,1)
hr_median = np.array(hr_median).reshape(-1,1)
hr_var = np.array(hr_var).reshape(-1,1)
hr_ts_mean = np.array(hr_ts_mean).reshape(-1,1)
hr_ts_std = np.array(hr_ts_std).reshape(-1,1)
hr_ts_median = np.array(hr_ts_median).reshape(-1,1)
hr_ts_var = np.array(hr_ts_var).reshape(-1,1)
ts_mean = np.array(ts_mean).reshape(-1,1)
ts_std = np.array(ts_std).reshape(-1,1)
ts_median = np.array(ts_median).reshape(-1,1)
ts_var = np.array(ts_var).reshape(-1,1)
peaks_mean = np.array(peaks_mean).reshape(-1,1)
peaks_std = np.array(peaks_std).reshape(-1,1)
peaks_median = np.array(peaks_median).reshape(-1,1)
peaks_mode = np.array(peaks_mode).reshape(-1,1)
peaks_var = np.array(peaks_var).reshape(-1,1)
diff_mean = np.array(diff_mean).reshape(-1,1)
diff_std = np.array(diff_std).reshape(-1,1)
diff_median = np.array(diff_median).reshape(-1,1)
diff_mode = np.array(diff_mode).reshape(-1,1)
diff_var = np.array(diff_var).reshape(-1,1)
diff_dev = np.array(diff_dev).reshape(-1,1)
max_wave = np.array(max_wave).reshape(-1,1)
min_wave = np.array(min_wave).reshape(-1,1)
mean_wave = np.array(mean_wave).reshape(-1,1)
median_wave = np.array(median_wave).reshape(-1,1)
energy_list = np.array(energy_list).reshape(-1,1)
# RR_list = np.array(RR_list).reshape(-1,1)
PR_list = np.array(PR_list).reshape(-1,1)
ST_list = np.array(ST_list).reshape(-1,1)
P_list = np.array(P_list).reshape(-1,1)
Q_list = np.array(Q_list).reshape(-1,1)
R_list = np.array(R_list).reshape(-1,1)
S_list = np.array(S_list).reshape(-1,1)
T_list = np.array(T_list).reshape(-1,1)
mean_sqrd = np.array(mean_sqrd).reshape(-1,1)
# Stack all test-set features into a single matrix
feats_test = np.concatenate((fft,
autocorr,
ptp,
avg,
peaks_var,
peaks_mean,
peaks_std,
peaks_median,
peaks_mode,
P_list,
Q_list,
R_list,
S_list,
T_list,
ST_list,
QRS_list,
PR_list,
QRS_T_list,
max_wave - min_wave,
mean_wave,
median_wave,
hr_std,
hr_mean,
                             hr_std,  # note: hr_std also appears above, so this feature is duplicated
hr_var,
hr_median,
hr_ts_mean,
hr_ts_std,
hr_ts_median,
hr_ts_var,
diff_dev,
diff_var,
diff_std,
diff_mode,
diff_mean,
diff_median,
ts_mean,
ts_std,
ts_median,
ts_var,
mean_sqrd,
cD_list,
cA_list,
energy_list), axis=1)
print(feats_test.shape)
# -
# # 4. Write predictions to CSV
# +
# Replace NaNs with the median of each column
impute2 = SimpleImputer(strategy = 'median', fill_value = 0)
feats_test = impute2.fit_transform(feats_test)
# Rescale the features with the previously fitted scaler
feats_test = scaler.transform(feats_test)
clf.fit(x_training, y_train)
predictions = clf.predict(feats_test)
prediction_results = pd.DataFrame(data=predictions, columns=['y'])
prediction_results.insert(0, "id", list(range(len(prediction_results))))
prediction_results.to_csv('task2/result_10.csv', index=False)
# -
prediction_results
| task2_ECG_classification_ver2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
# title: Triple Pendulum Example
# type: submodule
# ---
# %matplotlib inline
# Try running with this variable set to True and to False and see the difference in the resulting equations of motion
use_constraints = False
# Import all the necessary modules
# +
# -*- coding: utf-8 -*-
"""
Written by <NAME>
Email: danaukes<at>gmail.com
Please see LICENSE for full license.
"""
import pynamics
from pynamics.frame import Frame
from pynamics.variable_types import Differentiable,Constant
from pynamics.system import System
from pynamics.body import Body
from pynamics.dyadic import Dyadic
from pynamics.output import Output,PointsOutput
from pynamics.particle import Particle
from pynamics.constraint import AccelerationConstraint
import pynamics.integration
import numpy
import sympy
import matplotlib.pyplot as plt
plt.ion()
from math import pi
# -
# The next two lines create a new system object and set that system as the global system within the module so that other variables can use and find it.
system = System()
pynamics.set_system(__name__,system)
# ## Parameterization
#
# ### Constants
#
# Declare constants and seed them with their default value. This can be changed at integration time but is often a nice shortcut when you don't want the value to change but you want it to be represented symbolically in calculations
# +
lA = Constant(1,'lA',system)
lB = Constant(1,'lB',system)
lC = Constant(1,'lC',system)
mA = Constant(1,'mA',system)
mB = Constant(1,'mB',system)
mC = Constant(1,'mC',system)
g = Constant(9.81,'g',system)
b = Constant(1e1,'b',system)
k = Constant(1e1,'k',system)
preload1 = Constant(0*pi/180,'preload1',system)
preload2 = Constant(0*pi/180,'preload2',system)
preload3 = Constant(0*pi/180,'preload3',system)
Ixx_A = Constant(1,'Ixx_A',system)
Iyy_A = Constant(1,'Iyy_A',system)
Izz_A = Constant(1,'Izz_A',system)
Ixx_B = Constant(1,'Ixx_B',system)
Iyy_B = Constant(1,'Iyy_B',system)
Izz_B = Constant(1,'Izz_B',system)
Ixx_C = Constant(1,'Ixx_C',system)
Iyy_C = Constant(1,'Iyy_C',system)
Izz_C = Constant(1,'Izz_C',system)
torque = Constant(0,'torque',system)
freq = Constant(3e0,'freq',system)
# -
# ### Differentiable State Variables
#
# Define your differentiable state variables that you will use to model the state of the system. In this case $qA$, $qB$, and $qC$ are the rotation angles of a three-link mechanism
qA,qA_d,qA_dd = Differentiable('qA',system)
qB,qB_d,qB_dd = Differentiable('qB',system)
qC,qC_d,qC_dd = Differentiable('qC',system)
# ### Initial Values
# Define a set of initial values for the position and velocity of each of your state variables. It is necessary to start the integration from a known state. This code creates a dictionary of initial values.
initialvalues = {}
initialvalues[qA]=0*pi/180
initialvalues[qA_d]=0*pi/180
initialvalues[qB]=0*pi/180
initialvalues[qB_d]=0*pi/180
initialvalues[qC]=0*pi/180
initialvalues[qC_d]=0*pi/180
# These two lines of code order the initial values in a list in such a way that the integrator can use it in the same order that it expects the variables to be supplied
statevariables = system.get_state_variables()
ini = [initialvalues[item] for item in statevariables]
# ## Kinematics
#
# ### Frames
# Define the reference frames of the system
N = Frame('N',system)
A = Frame('A',system)
B = Frame('B',system)
C = Frame('C',system)
# ### Newtonian Frame
#
# It is important to define the Newtonian reference frame as a reference frame that is not accelerating, otherwise the dynamic equations will not be correct
system.set_newtonian(N)
# Rotate each successive frame about the z axis of the previous one by its corresponding angle. This approach can produce more complex equations but is representationally simple (Minimal Representation)
A.rotate_fixed_axis(N,[0,0,1],qA,system)
B.rotate_fixed_axis(A,[0,0,1],qB,system)
C.rotate_fixed_axis(B,[0,0,1],qC,system)
# ### Vectors
# Define the vectors that describe the kinematics of a series of connected lengths
#
# * pNA - This is a vector with position at the origin.
# * pAB - This vector is length $l_A$ away from the origin along the A.x unit vector
# * pBC - This vector is length $l_B$ away from pAB along the B.x unit vector
# * pCtip - This vector is length $l_C$ away from pBC along the C.x unit vector
# Define my rigid body kinematics
#
# 
# 
pNA=0*N.x
pAB=pNA+lA*A.x
pBC = pAB + lB*B.x
pCtip = pBC + lC*C.x
# ## Centers of Mass
#
# It is important to define the centers of mass of each link. In this case, the centers of mass of links A, B, and C are halfway along the length of each
pAcm=pNA+lA/2*A.x
pBcm=pAB+lB/2*B.x
pCcm=pBC+lC/2*C.x
# ## Calculating Velocity
#
# The angular velocity between frames, and the time derivatives of vectors are extremely useful in calculating the equations of motion and for determining many of the forces that need to be applied to your system (damping, drag, etc). Thus, it is useful, once kinematics have been defined, to take or find the derivatives of some of those vectors for calculating linear or angular velocity vectors
#
# ### Angular Velocity
# The following three lines of code compute and return the angular velocity between frames N and A (${}^N\omega^A$), A and B (${}^A\omega^B$), and B and C (${}^B\omega^C$). In other cases, if the derivative expression is complex or long, you can supply pynamics with a given angular velocity between frames to speed up computation time.
wNA = N.get_w_to(A)
wAB = A.get_w_to(B)
wBC = B.get_w_to(C)
# ### Vector derivatives
# The time derivatives of vectors may also be computed, for example:
# vCtip = pCtip.time_derivative(N,system)
# ### Define Inertias and Bodies
# The next several lines compute the inertia dyadics of each body and define a rigid body on each frame. In the case of frame C, we represent the mass as a particle located at point pCcm.
# +
IA = Dyadic.build(A,Ixx_A,Iyy_A,Izz_A)
IB = Dyadic.build(B,Ixx_B,Iyy_B,Izz_B)
IC = Dyadic.build(C,Ixx_C,Iyy_C,Izz_C)
BodyA = Body('BodyA',A,pAcm,mA,IA,system)
BodyB = Body('BodyB',B,pBcm,mB,IB,system)
BodyC = Body('BodyC',C,pCcm,mC,IC,system)
#BodyC = Particle(pCcm,mC,'ParticleC',system)
# -
# ## Forces and Torques
# Forces and torques are added to the system with the generic ```addforce``` method. The first parameter supplied is a vector describing the force applied at a point or the torque applied along a given rotational axis. The second parameter is the vector describing the linear speed (for an applied force) or the angular velocity (for an applied torque)
system.addforce(torque*sympy.sin(freq*2*sympy.pi*system.t)*A.z,wNA)
# ### Damper
system.addforce(-b*wNA,wNA)
system.addforce(-b*wAB,wAB)
system.addforce(-b*wBC,wBC)
# ### Spring Forces
#
# Spring forces are a special case because the energy stored in springs is conservative and should be considered when calculating the system's potential energy. To do this, use the ```add_spring_force``` command. In this method, the first value is the linear spring constant. The second value is the "stretch" vector, indicating the amount of deflection from the neutral point of the spring. The final parameter is, as above, the linear or angular velocity vector (depending on whether your spring is a linear or torsional spring)
#
# In this case, the torques applied to each joint are dependent upon whether qA, qB, and qC are absolute or relative rotations, as defined above.
system.add_spring_force1(k,(qA-preload1)*N.z,wNA)
system.add_spring_force1(k,(qB-preload2)*A.z,wAB)
system.add_spring_force1(k,(qC-preload3)*B.z,wBC)
# ### Gravity
# Again, like springs, the force of gravity is conservative and should be applied to all bodies. To globally apply the force of gravity to all particles and bodies, you can use the special ```addforcegravity``` method, by supplying the acceleration due to gravity as a vector. This will get applied to all bodies defined in your system.
system.addforcegravity(-g*N.y)
# ## Constraints
# Constraints may be defined that prevent the motion of certain elements. Try turning on the constraints flag at the top of the script to see what happens.
if use_constraints:
eq = []
eq.append(pCtip)
eq_d=[item.time_derivative() for item in eq]
eq_dd=[item.time_derivative() for item in eq_d]
eq_dd_scalar = []
eq_dd_scalar.append(eq_dd[0].dot(N.y))
constraint = AccelerationConstraint(eq_dd_scalar)
system.add_constraint(constraint)
# ## F=ma
# This is where the symbolic expressions for F and ma are calculated. This must be done after all parts of the system have been defined. The ```getdynamics``` function uses Kane's method to derive the equations of motion.
f,ma = system.getdynamics()
f
ma
# ## Solve for Acceleration
#
# The next line of code solves the system of equations F=ma plus any constraint equations that have been added above. It returns one or two variables: func1 is the function that computes the velocity and acceleration given a certain state, and lambda1 (optional) supplies the function that computes the constraint forces as a function of the resulting states
#
# There are a few ways of solving for the acceleration. The function below inverts the mass matrix numerically at every time step. This can be slower because the matrix solution has to be recomputed each step, but it is sometimes more tractable than solving the highly nonlinear symbolic expressions that can be generated from the previous step. The other options would be to use ```state_space_pre_invert```, which pre-inverts the equations symbolically before generating a numerical function, or ```state_space_post_invert2```, which adds Baumgarte's method for intermittent constraints.
func1,lambda1 = system.state_space_post_invert(f,ma,return_lambda = True)
# ## Integration Tolerance
# Specify the precision of the integration
tol = 1e-5
# ### Time
# Define variables for time that can be used throughout the script. These get used to create the t array, a list of every time value that is solved for during integration
tinitial = 0
tfinal = 10
fps = 30
tstep = 1/fps
t = numpy.r_[tinitial:tfinal:tstep]
# ## Integrate
#
# The next line of code integrates the function calculated above
states=pynamics.integration.integrate(func1,ini,t,rtol=tol,atol=tol, args=({'constants':system.constant_values},))
# ## Outputs
#
#
# The next section simply calculates and plots a variety of data from the previous simulation
# ### States
plt.figure()
artists = plt.plot(t,states[:,:3])
plt.legend(artists,['qA','qB','qC'])
# ### Energy
KE = system.get_KE()
PE = system.getPEGravity(pNA) - system.getPESprings()
energy_output = Output([KE-PE],system)
energy_output.calc(states,t)
energy_output.plot_time()
# ### Constraint Forces
#
# This line of code computes the constraint forces once the system's states have been solved for.
if use_constraints:
lambda2 = numpy.array([lambda1(item1,item2,system.constant_values) for item1,item2 in zip(t,states)])
plt.figure()
plt.plot(t, lambda2)
# ### Motion
points = [pNA,pAB,pBC,pCtip]
points_output = PointsOutput(points,system)
y = points_output.calc(states,t)
points_output.plot_time(20)
# #### Motion Animation
# In normal Python, the next lines of code produce an animation using matplotlib
# + active=""
# points_output.animate(fps = fps,movie_name = 'triple_pendulum.mp4',lw=2,marker='o',color=(1,0,0,1),linestyle='-')
# -
# To plot the animation in jupyter you need a couple extra lines of code...
# + active=""
# from matplotlib import animation, rc
# from IPython.display import HTML
# HTML(points_output.anim.to_html5_video())
| python/pynamics_examples/triple_pendulum.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# We consider solving a system of linear equations by using Gaussian elimination. Given a nonsingular square matrix $A \in \mathbb{R}^{n \times n}$ and a column vector $\vec{b} \in \mathbb{R}^{n}$, find $\vec{x} \in \mathbb{R}^{n}$ such that
# $$A\vec{\mathstrut x} = \vec{\mathstrut b}$$
#
# # Backward and forward substitution
# Assume $A$ is an upper triangular matrix, denoted as $U$, and consider solving:
#
# $$U\vec{x} = \left[
# \begin{array}{cccc}
# u_{11} & u_{12} & \cdots & u_{1n} \\
# & u_{22} & \cdots & u_{2n} \\
# & & \ddots & \vdots \\
# & & & u_{nn}
# \end{array}\right] \cdot
# \left[
# \begin{array}{c}
# x_1 \\
# x_2 \\
# \vdots \\
# x_n
# \end{array} \right] = \left[
# \begin{array}{c}
# b_1 \\
# b_2 \\
# \vdots \\
# b_n
# \end{array} \right] = \vec{b}$$
#
# $Algorithm$: **Backward substitution**
#
# for j = n to 1
# if u_{jj} = 0, then display('Error: Singular Matrix');
# x_j = b_j /u_{jj};
# for i = 1 to j − 1
# b_i = b_i − u_{ij} x_j ;
# end
# end
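# As a minimal NumPy sketch of this algorithm (the function name and the vectorized right-hand-side update are our choices, not from the text):

```python
import numpy as np

def backward_substitution(U, b):
    """Solve U x = b for an upper triangular U, following the algorithm above."""
    n = U.shape[0]
    b = b.astype(float).copy()          # work on a copy; b is updated in place
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):      # j = n down to 1
        if U[j, j] == 0:
            raise ValueError("Error: Singular Matrix")
        x[j] = b[j] / U[j, j]
        b[:j] -= U[:j, j] * x[j]        # b_i = b_i - u_{ij} x_j for i = 1..j-1
    return x
```

# For the upper triangular factor of the worked example below, `backward_substitution(U, np.array([2., -1, -4]))` returns `[1, -1, 1]`.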
#
# And for a lower triangular matrix, denoted as $L$, consider solving:
#
# $$L\vec{x} = \left[
# \begin{array}{cccc}
# l_{11} & & & \\
# l_{21} & l_{22} & & \\
# \vdots & \vdots & \ddots & \\
# l_{n1} & l_{n2} & \cdots & l_{nn}
# \end{array}\right] \cdot
# \left[
# \begin{array}{c}
# x_1 \\
# x_2 \\
# \vdots \\
# x_n
# \end{array} \right] = \left[
# \begin{array}{c}
# b_1 \\
# b_2 \\
# \vdots \\
# b_n
# \end{array} \right] = \vec{b}$$
#
# $Algorithm$: **Forward substitution**
#
#     for j = 1 to n
#         if l_{jj} = 0, then display('Error: Singular Matrix');
#         x_j = b_j /l_{jj};
#         for i = j + 1 to n
#             b_i = b_i − l_{ij} x_j ;
#         end
#     end
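# The same NumPy sketch for the lower triangular case; only the loop direction and the update range change:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for a lower triangular L, following the algorithm above."""
    n = L.shape[0]
    b = b.astype(float).copy()           # work on a copy; b is updated in place
    x = np.zeros(n)
    for j in range(n):                   # j = 1 to n
        if L[j, j] == 0:
            raise ValueError("Error: Singular Matrix")
        x[j] = b[j] / L[j, j]
        b[j+1:] -= L[j+1:, j] * x[j]     # b_i = b_i - l_{ij} x_j for i = j+1..n
    return x
```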
#
# # Simple examples of Gaussian elimination
# Let's look at an example
#
# $$A = \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 4 & 3 & 2 \\
# 4 & 6 & 4
# \end{array}\right]$$
#
# We apply Gaussian elimination by multiplying elementary matrices to the left of $A$, we have
#
# $$\begin{align}
# M_1 A = \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# -2 & 1 & 0 \\
# -2 & 0 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 4 & 3 & 2 \\
# 4 & 6 & 4
# \end{array}\right] &= \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 2 & 0
# \end{array}\right] \\
# M_2 A = \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# 0 & 1 & 0 \\
# 0 & 2 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 2 & 0
# \end{array}\right] &= \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{array}\right]
# \end{align}$$
#
# which is upper triangular, so that now we have:
#
# $$M_2 M_1 A = \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# 0 & 1 & 0 \\
# 0 & 2 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# -2 & 1 & 0 \\
# -2 & 0 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 4 & 3 & 2 \\
# 4 & 6 & 4
# \end{array}\right] = \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{array}\right] = U$$
#
# $i.e.$
#
# $$\begin{align}
# A = M_{1}^{-1} M_{2}^{-1} U &= \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# -2 & 1 & 0 \\
# -2 & 0 & 1
# \end{array}\right]^{-1} \cdot \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# 0 & 1 & 0 \\
# 0 & 2 & 1
# \end{array}\right]^{-1} \cdot \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{array}\right] \\
# &= \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# 2 & 1 & 0 \\
# 2 & 0 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# 0 & 1 & 0 \\
# 0 & -2 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{array}\right] \\
# &= \left[
# \begin{array}{ccc}
# 1 & 0 & 0 \\
# 2 & 1 & 0 \\
# 2 & -2 & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{array}\right] = LU
# \end{align}$$
#
# where $L$ is *lower triangular* and $U$ is *upper triangular*. We call $LU$ the **LU factorization** of $A$.
#
# # Elementary matrices
# The elementary matrices used above have the following structure
#
# $$M_{k} = \left[
# \begin{array}{ccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & m_{k+1, k} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{n,k} & 0 & \cdots & 1
# \end{array}\right] $$
#
# where the only nontrivial entries are located below the $(k, k)$ diagonal on column $k$.
#
# $Conclusion$
# 1. Gaussian elimination
# $$M_{k} \vec{a} = \left[
# \begin{array}{ccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & -a_{k+1}/a_{k} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & -a_{n}/a_{k} & 0 & \cdots & 1
# \end{array}\right] \cdot \left[
# \begin{array}{c}
# a_1 \\
# a_2 \\
# \vdots \\
# a_{k} \\
# a_{k+1} \\
# \vdots \\
# a_n
# \end{array} \right] = \left[
# \begin{array}{c}
# a_1 \\
# a_2 \\
# \vdots \\
# a_{k} \\
# 0 \\
# \vdots \\
# 0
# \end{array} \right] $$
#
# 2. Inverse
# $$M_{k}^{-1} = \left[
# \begin{array}{ccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & m_{k+1, k} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{n,k} & 0 & \cdots & 1
# \end{array}\right]^{-1} =\;\; \left[
# \begin{array}{ccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & -m_{k+1, k} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & -m_{n,k} & 0 & \cdots & 1
# \end{array}\right]$$
#
# 3. Product
# $$\left[
# \begin{array}{ccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & m_{k+1, k} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{n,k} & 0 & \cdots & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & m_{j+1, j} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{n,j} & 0 & \cdots & 1
# \end{array}\right] \\ = \left[
# \begin{array}{ccccccccccc}
# 1 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 1 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & 1 & 0 & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{k+1, k} & 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{j-1, k} & 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & m_{j, k} & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
# 0 & \cdots & 0 & m_{j+1, k} & 0 & \ddots & 0 & m_{j+1, j} & 1 & \cdots & 0 \\
# \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \cdots & 0 & m_{n, k} & 0 & \cdots & 0 & m_{n, j} & 0 & \cdots & 1 \\
# \end{array}\right]$$
#
# # LU factorization
# For a general invertible matrix $A = [a_{i,j}]_{n \times n}$,
#
# $$\begin{align}
# M_1A = \left[
# \begin{array}{ccccc}
# 1 & & & & \\
# -a_{21}/a_{11} & 1 & & & \\
# -a_{31}/a_{11} & 0 & 1 & & \\
# \vdots & \vdots & \vdots & \ddots & \\
# -a_{n1}/a_{11} & 0 & 0 & \cdots & 1
# \end{array}\right] \cdot \left[
# \begin{array}{ccccc}
# a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
# a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
# a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
# \vdots & \vdots & \vdots & \ddots & \vdots \\
# a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \\
# \end{array}\right] = \left[
# \begin{array}{ccccc}
# a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
# 0 & \tilde{a}_{22} & \tilde{a}_{23} & \cdots & \tilde{a}_{2n} \\
# 0 & \tilde{a}_{32} & \tilde{a}_{33} & \cdots & \tilde{a}_{3n} \\
# \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & \tilde{a}_{n2} & \tilde{a}_{n3} & \cdots & \tilde{a}_{nn} \\
# \end{array}\right]
# \end{align}$$
#
# Continuing to multiply by these lower triangular matrices $M_k$, we finally see that
#
# $$M_{n-1}M_{n-2}\cdots M_2M_1A = \left[
# \begin{array}{cccc}
# u_{11} & u_{12} & \cdots & u_{1n} \\
# & u_{22} & \cdots & u_{2n} \\
# & & \ddots & \vdots \\
# & & & u_{nn}
# \end{array}\right] = U$$
#
# $i.e.$
#
# $$A = \left(\prod _{i = 1}^{n-1} M_{i}^{-1} \right) U = LU = \left[
# \begin{array}{cccc}
# 1 & & & \\
# l_{21} & 1 & & \\
# \vdots & \vdots & \ddots & \\
# l_{n1} & l_{n2} & \cdots & 1
# \end{array}\right] \cdot \left[
# \begin{array}{cccc}
# u_{11} & u_{12} & \cdots & u_{1n} \\
# & u_{22} & \cdots & u_{2n} \\
# & & \ddots & \vdots \\
# & & & u_{nn}
# \end{array}\right] $$
#
# where $L$ is a lower triangular matrix with $1$ on its diagonal, and $U$ is an upper triangular matrix.
#
# $Algorithm$: **LU factorization**
#
# for k = 1, . . . , n − 1 % loop over the first n − 1 columns
# if a_{kk} = 0 then stop
# for i = k + 1, . . . , n % loop over Row k + 1 through Row n
# a_ {ik} ← a_ {ik}/a_{kk} % store the multiplier, which gives L, NOT included in the flops
# for j = k + 1, . . . , n % update other entries of that row
# a_{ij} ← a_{ij} − a_ {ik} a_{kj}
# end
# end
# end
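# A direct NumPy version of this algorithm (a sketch, no pivoting); exactly as in the pseudocode, the multipliers are stored in the strictly lower triangle of the working array:

```python
import numpy as np

def lu_factorize(A):
    """LU factorization without pivoting; returns (L, U) with A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):                      # loop over the first n-1 columns
        if A[k, k] == 0:
            raise ValueError("Error: Singular Matrix")
        A[k+1:, k] /= A[k, k]                   # store the multipliers (column k of L)
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # a_ij <- a_ij - a_ik a_kj
    L = np.tril(A, -1) + np.eye(n)              # unit-diagonal lower factor
    U = np.triu(A)
    return L, U
```

# On the $3 \times 3$ matrix from the earlier example this reproduces the hand-computed $L$ and $U$.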
#
# To evaluate the *computational expense* of an algorithm we can analyze the number of floating point operations (flops). One operation of addition, subtraction, multiplication, division, etc., is called a **floating point operation**. For LU factorization, we have
#
# $$\begin{align}
# \text{flops} &= \sum_{k = 1} ^{n - 1} \sum_{i = k + 1} ^{n} \sum_{j = k + 1} ^{n} 2 \\
# &= 2 \sum_{k = 1} ^{n} (n - k)^2 = 2\sum_{l = 1} ^{n - 1} l^2 \\
# &= \frac{2} {3}\left[n^3 - 1 - \frac{3n(n - 1)} {2} - (n - 1) \right]\\
# &\approx \frac{2} {3} n^3
# \end{align}$$
#
# It is a big number, when we are dealing with a huge matrix. But with certain special structures we can modify the algorithm to make it more effective. Especially the **banded matrix**, where the nonzero entries of a matrix $A_{n \times n}$ only appear in the main $m$ off diagonals.
#
# $Algorithm$: **LU factorization for banded matrix**
#
# for k = 1, . . . , n − 1 % loop over the first n − 1 columns
#     if a_{kk} = 0 then stop
# for i = k + 1, . . . , min{k + m, n} % loop over Row k + 1 to the last nonzero entry
# a_ {ik} ← a_ {ik}/a_{kk} % store the multiplier, which gives L, NOT included in the flops
# for j = k + 1, . . . , min{k + m, n} % update other entries of that row
# a_{ij} ← a_{ij} − a_ {ik} a_{kj}
# end
# end
# end
#
# $$\begin{align}
# \text{flops} &= \sum_{k = 1} ^{n - 1} \sum_{i = k + 1} ^{k + m} \sum_{j = k + 1} ^{k + m} 2 \\
# &= 2 \sum_{k = 1} ^{n - 1} m^2 \\
# &= 2m^2 (n - 1)
# \end{align}$$
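# The banded variant is the same code with the loop ranges clipped at min{k + m, n}, which is exactly where the $2m^2(n-1)$ flop count comes from (again a sketch without pivoting):

```python
import numpy as np

def lu_banded(A, m):
    """LU factorization of a matrix whose nonzeros lie within m off-diagonals."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        if A[k, k] == 0:
            raise ValueError("Error: Singular Matrix")
        end = min(k + m + 1, n)                 # rows/columns k+1 .. min{k+m, n}
        A[k+1:end, k] /= A[k, k]
        A[k+1:end, k+1:end] -= np.outer(A[k+1:end, k], A[k, k+1:end])
    return np.tril(A, -1) + np.eye(n), np.triu(A)
```

# Note that without pivoting the fill-in stays inside the band, so no entries outside the clipped ranges are ever touched.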
# ***
# So how do we use this factorization? Actually we can use it for solving linear equations
#
# $$A\vec{x} = \vec{b} \Leftrightarrow
# \begin{cases}
# L\vec{\mathstrut y} = \vec{\mathstrut b} \\
# U\vec{\mathstrut x} = \vec{\mathstrut y}
# \end{cases}
# $$
#
# >**e.g.** Solve
# >
# >$$A\vec{x} = \left[
# \begin{array}{ccc}
# 2 & 2 & 2 \\
# 4 & 3 & 2 \\
# 4 & 6 & 4
# \end{array}\right] \cdot \left[
# \begin{array}{c}
# x_1 \\
# x_2 \\
# x_3
# \end{array}\right] = \left[
# \begin{array}{c}
# 2 \\
# 3 \\
# 2
# \end{array}\right]$$
# >
# >First we factor the matrix $A$:
# >
# >$$\begin{align}
# A\vec{x} &= LU\vec{x} = \vec{b} \\
# &= \left( \begin{bmatrix}
# 1 & 0 & 0 \\
# 2 & 1 & 0 \\
# 2 & -2 & 1
# \end{bmatrix} \cdot
# \begin{bmatrix}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{bmatrix}\right ) \cdot \left[
# \begin{array}{c}
# x_1 \\
# x_2 \\
# x_3
# \end{array}\right] = \left[
# \begin{array}{c}
# 2 \\
# 3 \\
# 2
# \end{array}\right]
# \end{align}$$
# >
# >$i.e.$
# >
# >$$L\vec{y} = \begin{bmatrix}
# 1 & 0 & 0 \\
# 2 & 1 & 0 \\
# 2 & -2 & 1
# \end{bmatrix} \cdot \left[
# \begin{array}{c}
# y_1 \\
# y_2 \\
# y_3
# \end{array}\right] = \left[
# \begin{array}{c}
# 2 \\
# 3 \\
# 2
# \end{array}\right] \\ \Downarrow \\
# \vec{y} = \left[
# \begin{array}{c}
# y_1 \\
# y_2 \\
# y_3
# \end{array}\right] = \left[
# \begin{array}{c}
# 2 \\
# -1 \\
# -4
# \end{array}\right]$$
# >
# >Then we solve
# >$$U\vec{x} = \begin{bmatrix}
# 2 & 2 & 2 \\
# 0 & -1 & -2 \\
# 0 & 0 & -4
# \end{bmatrix} \cdot \left[
# \begin{array}{c}
# x_1 \\
# x_2 \\
# x_3
# \end{array}\right] = \left[
# \begin{array}{c}
# 2 \\
# -1 \\
# -4
# \end{array}\right] \\ \Downarrow \\
# \vec{x} = \left[
# \begin{array}{c}
# x_1 \\
# x_2 \\
# x_3
# \end{array}\right] = \left[
# \begin{array}{c}
# 1 \\
# -1 \\
# 1
# \end{array}\right]$$
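# The two triangular solves in this example are easy to check numerically; here `np.linalg.solve` stands in for the forward and backward substitutions:

```python
import numpy as np

A = np.array([[2., 2, 2], [4, 3, 2], [4, 6, 4]])
b = np.array([2., 3, 2])
L = np.array([[1., 0, 0], [2, 1, 0], [2, -2, 1]])
U = np.array([[2., 2, 2], [0, -1, -2], [0, 0, -4]])

assert np.allclose(L @ U, A)       # the factorization found above
y = np.linalg.solve(L, b)          # forward substitution:  y = [2, -1, -4]
x = np.linalg.solve(U, y)          # backward substitution: x = [1, -1, 1]
assert np.allclose(A @ x, b)
```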
# # Gaussian elimination with pivoting
# The **pivot** is the entry $a_{kk}$. Elimination without pivoting may fail when meeting a situation like the following:
#
# $$A = \begin{bmatrix}
# 0 & 1 \\
# 1 & 1
# \end{bmatrix}$$
#
# First we define the **permutation matrix**, obtained by exchanging any two rows of the identity matrix. An example can show how it works.
#
# $$PMP = \begin{bmatrix}
# 1 & & & & \\
# & & & 1 & \\
# & & 1 & & \\
# & 1 & & & \\
# & & & & 1
# \end{bmatrix}\begin{bmatrix}
# 1 & & & & \\
# m_{21} & 1 & & & \\
# m_{31} & & 1 & & \\
# m_{41} & & & 1 & \\
# m_{51} & & & & 1
# \end{bmatrix}\begin{bmatrix}
# 1 & & & & \\
# & & & 1 & \\
# & & 1 & & \\
# & 1 & & & \\
# & & & & 1
# \end{bmatrix} = \begin{bmatrix}
# 1 & & & & \\
# m_{41} & 1 & & & \\
# m_{31} & & 1 & & \\
# m_{21} & & & 1 & \\
# m_{51} & & & & 1
# \end{bmatrix}$$
#
# Another property of the permutation matrix is that for any permutation matrix $P$, it holds that $P^2 = I$, $i.e.$, $P^{-1} = P$.
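# Both facts are quick to check numerically for the $5 \times 5$ example above (the multiplier values below are arbitrary placeholders):

```python
import numpy as np

P = np.eye(5)
P[[1, 3]] = P[[3, 1]]                 # exchange rows 2 and 4 of the identity

M = np.eye(5)
M[1:, 0] = [0.2, 0.3, 0.4, 0.5]       # m_21, m_31, m_41, m_51

assert np.allclose(P @ P, np.eye(5))  # P^2 = I, hence P^{-1} = P
PMP = P @ M @ P
assert PMP[1, 0] == 0.4               # m_41 moved to row 2 ...
assert PMP[3, 0] == 0.2               # ... and m_21 to row 4, as displayed above
```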
#
# Now we can get down to Gaussian elimination with pivoting. Before we eliminate column $k$, we first exchange the largest entry among all $a_{ik}, i = k, k+1, \dots, n$ into position $a_{kk}$; that is, we exchange the row containing the largest entry with row $k$.
#
# So that in general, this process can be represented as
#
# $$M_{n-1}P_{n-1}M_{n-2}P_{n-2}\cdots M_{2}P_{2}M_{1}P_{1}A = U$$
#
# $Conclusion$
#
# For any nonsingular matrix $A$, there exists a permutation matrix $P = P_{n-1}P_{n-2}\dots P_2P_1$ such that $PA = LU$.
#
# $Proof$
#
# We've already obtained $A = P_{1}^{-1}M_{1}^{-1}P_{2}^{-1}M_{2}^{-1}\dots P_{n-2}^{-1}M_{n-2}^{-1}P_{n - 1}^{-1}M_{n-1}^{-1}U = P_{1}L_{1}P_{2}L_{2}\dots P_{n-2}L_{n-2}P_{n-1}L_{n-1}U$. Each $L_i = M_i^{-1}$ is also a lower triangular matrix. So now we have
#
# $\begin{align}
# A &= P_{1}L_{1}P_{2}L_{2}\dots P_{n-2}L_{n-2}P_{n-1}L_{n-1}U\\
# P_{n-1}P_{n-2}\dots P_2P_1A &= (P_{n-1}P_{n-2}\dots P_2) L_{1}P_{2}L_{2}\dots P_{n-2}L_{n-2}P_{n-1}L_{n-1}U \\
# &= (P_{n-1}P_{n-2}\dots P_2 L_{1}P_2P_3\dots P_{n-2}P_{n-1}) (P_{n-1}P_{n-2}\dots P_2)P_2 L_2 \dots P_{n-2}L_{n-2}P_{n-1}L_{n-1}U \\
# & = \tilde{L}_1 (P_{n-1}P_{n-2}\dots P_3 L_{2}P_3\dots P_{n-2}P_{n-1}) (P_{n-1}P_{n-2}\dots P_3)P_3 L_3 \dots P_{n-2}L_{n-2}P_{n-1}L_{n-1}U \\
# &\cdots \\
# &= \tilde{L}_1\tilde{L}_2\cdots \tilde{L}_{n-1} U
# \end{align}$
#
# In summary,
#
# $$\begin{cases}
# \tilde{L}_1 &= P_{n-1}P_{n-2}\dots P_{2}L_{1}P_{2}\dots P_{n-2}P_{n-1} \\
# \tilde{L}_2 &= P_{n-1}P_{n-2}\dots P_{3}L_{2}P_{3}\dots P_{n-2}P_{n-1} \\
# \vdots &\vdots \\
# \tilde{L}_{n-2} &= P_{n-1}L_{n-2}P_{n-1} \\
# \tilde{L}_{n-1} &= L_{n-1} \\
# P &= P_{n-1} P_{n-2} \dots P_2 P_1 \\
# L &= \tilde{L}_1 \tilde{L}_2 \cdots \tilde{L}_{n-2} \tilde{L}_{n-1}
# \end{cases}$$
#
# $Algorithm$: **LU factorization using pivoting**
#
# ```
# for k = 1, . . . , n − 1
#     find the row number i^*, where |a_{i^*k}| = max_{k≤i≤n} |a_{ik}|
#     exchange the two rows A(k,:) ↔ A(i^*,:)
#     p(k) = i^*
#     for i = k + 1, . . . , n
#         a_{ik} ← a_{ik}/a_{kk}
#         for j = k + 1, . . . , n
#             a_{ij} ← a_{ij} − a_{ik} a_{kj}
#         end
#     end
# end
# ```
#
# So now, to solve a linear system $A\vec{x} = \vec{b}$, we solve $PA\vec{x} = LU\vec{x} = P\vec{b}$; besides $L$ and $U$, all that is needed is the permutation matrix $P$, which is obtained during the pivoting process.
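#
# As a sketch of the algorithm above in numpy (the function name `lu_pivot` and the test matrix are ours; a nonsingular $A$ is assumed), together with the solve step $LU\vec{x} = P\vec{b}$:

```python
import numpy as np

def lu_pivot(A):
    """LU factorization with partial pivoting: returns P, L, U with P @ A = L @ U (nonsingular A assumed)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        i_star = k + np.argmax(np.abs(A[k:, k]))     # row of the largest pivot candidate
        A[[k, i_star]] = A[[i_star, k]]              # exchange the two rows
        perm[[k, i_star]] = perm[[i_star, k]]        # p(k) = i_star
        A[k + 1:, k] /= A[k, k]                      # store multipliers below the diagonal
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    P = np.eye(n)[perm]
    return P, L, U

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
P, L, U = lu_pivot(A)
assert np.allclose(P @ A, L @ U)

# solve A x = b through P A x = L U x = P b
b = np.array([1.0, 2.0, 3.0])
y = np.linalg.solve(L, P @ b)   # forward substitution
x = np.linalg.solve(U, y)       # back substitution
assert np.allclose(A @ x, b)
```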
#
# # Stability of Gaussian elimination
# For matrix
#
# $$A = \begin{bmatrix}
# 10^{-20} & 1 \\
# 1 & 1
# \end{bmatrix}$$
#
# Easily we can find that $L = \begin{bmatrix}
# 1 & 0 \\
# 10^{20} & 1
# \end{bmatrix}$ and $U = \begin{bmatrix}
# 10^{-20} & 1 \\
# 0 & 1 - 10^{20}
# \end{bmatrix}$. But without pivoting, in floating-point arithmetic we will obtain $\tilde{U} = \begin{bmatrix}
# 10^{-20} & 1 \\
# 0 & - 10^{20}
# \end{bmatrix}$, since $1 - 10^{20}$ rounds to $-10^{20}$. And indeed
#
# $$\tilde{L}\tilde{U} = \begin{bmatrix}
# 1 & 0 \\
# 10^{20} & 1
# \end{bmatrix} \cdot \begin{bmatrix}
# 10^{-20} & 1 \\
# 0 & - 10^{20}
# \end{bmatrix} = \begin{bmatrix}
# 10^{-20} & 1 \\
# 1 & 0
# \end{bmatrix}$$
#
# So the first process should be
#
# $$P_1A = \begin{bmatrix}
# 0 & 1 \\
# 1 & 0
# \end{bmatrix}\begin{bmatrix}
# 10^{-20} & 1 \\
# 1 & 1
# \end{bmatrix} = \begin{bmatrix}
# 1 & 1 \\
# 10^{-20} & 1
# \end{bmatrix}$$
#
# Then
#
# $$M_1P_1A = \begin{bmatrix}
# 1 & 0 \\
# -10^{-20} & 1
# \end{bmatrix}\begin{bmatrix}
# 0 & 1 \\
# 1 & 0
# \end{bmatrix}\begin{bmatrix}
# 10^{-20} & 1 \\
# 1 & 1
# \end{bmatrix} = \begin{bmatrix}
# 1 & 1 \\
# 0 & 1-10^{-20}
# \end{bmatrix} \approx \begin{bmatrix}
# 1 & 1 \\
# 0 & 1
# \end{bmatrix}$$
#
# Now, $\tilde{U} = \begin{bmatrix}
# 1 & 1 \\
# 0 & 1
# \end{bmatrix}$, $\tilde{L} = \begin{bmatrix}
# 1 & 0 \\
# 10^{-20} & 1
# \end{bmatrix}$, $P = \begin{bmatrix}
# 0 & 1 \\
# 1 & 0
# \end{bmatrix}$, and we can find that
#
# $$\tilde{L}\tilde{U} = \begin{bmatrix}
# 1 & 1 \\
# 10^{-20} & 1+10^{-20}
# \end{bmatrix}$$
#
# which is the true *LU factorization* of a matrix very close to $PA$. Moreover, after pivoting, the subdiagonal entries of $L$ produced by Gaussian elimination are always at most $1$ in absolute value.
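#
# The $10^{-20}$ example above can be replayed in float64 with numpy. This is a sketch: the factor matrices are written out by hand exactly as the text derives them, with the rounded entries substituted in.

```python
import numpy as np

A = np.array([[1e-20, 1.0], [1.0, 1.0]])

# without pivoting: 1 - 10^20 rounds to -10^20 in float64
L_nopiv = np.array([[1.0, 0.0], [1e20, 1.0]])
U_nopiv = np.array([[1e-20, 1.0], [0.0, -1e20]])
print(L_nopiv @ U_nopiv)   # the (2,2) entry is 0, not 1 -- that part of A is lost

# with pivoting: 1 - 10^-20 rounds to 1
P = np.array([[0.0, 1.0], [1.0, 0.0]])
L_piv = np.array([[1.0, 0.0], [1e-20, 1.0]])
U_piv = np.array([[1.0, 1.0], [0.0, 1.0]])
print(L_piv @ U_piv)       # reproduces P @ A to machine precision
```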
#
# $Theorem$ 1
#
# Let $A = LU$ be computed by Gaussian elimination, with or without partial pivoting, on a computer with machine epsilon $\epsilon_{\text{machine}}$ . Denote the computed $L$ and $U$ matrices by $\tilde{L}$ and $\tilde{U}$, respectively. Then we have
#
# $$\tilde{L}\tilde{U} = A + \delta A$$
#
# for a certain $\delta A$ which satisfies $\displaystyle \frac{\left\| \delta A \right\|} {\left\| L \right\| \left\| U \right\|} = O(\epsilon_{\text{machine}})$
#
# This helps us estimate the relative difference between $\tilde{L}\tilde{U}$ and $A$:
#
# $$\frac{\left\| \delta A \right\|} {\left\| A \right\|} = \frac{\left\| L \right\| \left\| U \right\|} {\left\| A \right\|}O(\epsilon_{\text{machine}})$$
#
# i.e.,
#
# the stability ($\tilde{f}(x) = f(\tilde{x})$) of Gaussian elimination depends on the norms of $L$ and $U$. After pivoting, all entries of $L$ are at most $1$ in absolute value, so $\left\| L \right\|$ is not large, and the stability is then determined by $\displaystyle \frac{\left\| U \right\|} {\left\| A \right\|}$
#
# Now we define $\rho = \displaystyle \frac{\max_{ij} |u_{ij}|} {\max_{ij} |a_{ij}|}$, the **growth factor** of the LU factorization, since $\left\| U \right\| = O( \rho \left\| A \right\| )$.
#
# $Theorem$ 2
#
# Let $PA = LU$ be the LU factorization computed by Gaussian elimination **with partial pivoting** on a **computer** with machine epsilon $\epsilon_{\text{machine}}$. Denote the computed $P$, $L$ and $U$ matrices by $\tilde{P}$, $\tilde{L}$ and $\tilde{U}$, respectively. Then we have
#
# $$\tilde{L}\tilde{U} = \tilde{P}A + \delta A$$
#
# for a certain $\delta A$, which now satisfies $\displaystyle \frac{\left\|\delta A \right\|} {\left\|A\right\|} = O(\rho\, \epsilon_{\text{machine}})$
#
# A typical size of the growth factor $\rho$ for an $n \times n$ matrix is $O(\sqrt{n})$, but in the worst case $\rho = 2^{n-1}$.
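#
# The worst case is attained by the classic matrix with $1$ on the diagonal, $-1$ below it, and $1$ in the last column, for which partial pivoting performs no row exchanges. A small numpy sketch (the helper `worst_case` is our illustrative name):

```python
import numpy as np

def worst_case(n):
    """Matrix with 1 on the diagonal, -1 below it, and 1 in the last column."""
    A = np.tril(-np.ones((n, n)), -1) + np.eye(n)
    A[:, -1] = 1.0
    return A

n = 8
A = worst_case(n)
U = A.copy()
for k in range(n - 1):
    # partial pivoting would not swap any rows here: |u_kk| is already maximal in its column
    U[k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k])
rho = np.abs(U).max() / np.abs(A).max()
print(rho)   # 2^(n-1) = 128
```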
#
# # Cholesky Factorization
# If an $n$ by $n$ matrix $A$ is symmetric positive definite, i.e., for every $n\text{-dimensional}$ column vector $\vec{x} \neq \vec{0}$ we have $\vec{x}^{\mathrm{T}} A\vec{x} > 0$, then we can write the matrix $A$ as
#
# $$\begin{align}
# A &= \begin{bmatrix}
# a_{11} & \vec{v}_1^{\mathrm{T}} \\
# \vec{v}_1 & K_1
# \end{bmatrix} \\
# &= \begin{bmatrix}
# \sqrt{a_{11}} & \vec{0} \\
# \frac{\vec{v}_1}{\sqrt{a_{11}}} & I
# \end{bmatrix}
# \begin{bmatrix}
# 1 & \vec{0} \\
# \vec{0} & K_1 - \frac{\vec{v}_1\vec{v}_1^{\mathrm{T}}}{a_{11}}
# \end{bmatrix}
# \begin{bmatrix}
# \sqrt{a_{11}} & \frac{\vec{v}_1^{\mathrm{T}}}{\sqrt{a_{11}}} \\
# \vec{0} & I
# \end{bmatrix} \\
# &= L_1 A_1 L_1^{\mathrm{T}}
# \end{align}$$
#
# where $L_1$ is lower triangular and $A_1$ is also symmetric positive definite. Continuing the process, we obtain a sequence of matrices with $A_i = L_{i+1}A_{i+1}L_{i+1}^{\mathrm{T}}$, where the $L_i$ are all lower triangular and the $A_i$ are all symmetric positive definite, terminating with $A_n = I$. Therefore
#
# $$A = L_1 A_1 L_1^{\mathrm{T}} = L_1 L_2 A_2 L_2^{\mathrm{T}} L_1^{\mathrm{T}} = \cdots = LL^{\mathrm{T}}$$
#
# here
# $$\begin{align}L &= L_1 L_2 \cdots L_n \\
# &= \begin{bmatrix}
# \sqrt{a_{11}} & \vec{0} \\
# \frac{\vec{v}_1}{\sqrt{a_{11}}} & I
# \end{bmatrix}
# \left[
# \begin{array}{cc}
# 1 & \vec{0}\\
# \vec{0} & \begin{array}{cc}
# {\sqrt{a_{22}}} & \vec{0} \\
# \frac{\vec{v}_2} {\sqrt{a_{22}}} & I
# \end{array}
# \end{array}
# \right] \cdots
# \begin{bmatrix}
# 1 & & & \\
# & \ddots & & \\
# & & 1 & \\
# & & & \sqrt{a_{nn}}
# \end{bmatrix} \\
# &= \left[
# \begin{array}{cc}
# \sqrt{a_{11}} & \vec{0} \\
# \frac{\vec{v}_1} {\sqrt{a_{11}}} & \begin{array}{cc}
# \sqrt{a_{22}} & \vec{0} \\
# \frac{\vec{v}_2} {\sqrt{a_{22}}} & \begin{array}{cc}
# \ddots & \vdots \\
# \cdots & \sqrt{a_{nn}}
# \end{array}
# \end{array}
# \end{array}
# \right]
# \end{align}$$
#
# which is still a lower triangular. We call this the **Cholesky factorization** of the matrix $A$.
#
# $Algorithm$: **Cholesky Factorization**
#
# ```
# for k = 1, . . . , n % loop over columns
# for i = k + 1, . . . , n % update each row below row k
# A(i, k + 1 : n) = A(i, k + 1 : n) − a_{ik}/a_{kk} A(k, k + 1 : n)
# end
# A(k : n, k) = A(k : n, k)/√(a_{kk})
# end
# ```
#
# With this algorithm, $L$ is stored in the lower triangular part of the result matrix $A$. Since $A$ is symmetric, updating the upper triangular part is unnecessary, so here is the more efficient version:
#
# $Algorithm$: **Cholesky Factorization Efficient Version**
#
# ```
# for k = 1, . . . , n % loop over columns
# for i = k + 1, . . . , n % update each row below row k
# A(i, k + 1 : i) = A(i, k + 1 : i) − a_{ik}/a_{kk} (A(k + 1 : i, k))^T
# end
# A(k : n, k) = A(k : n, k)/√(a_{kk})
# end
# ```
#
# At last, we claim that the Cholesky factorization of symmetric positive definite matrices is **always stable**.
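#
# A numpy sketch of the efficient algorithm above (the function name `cholesky` is ours; only the lower triangle is read and updated, and the result can be checked against `np.linalg.cholesky`):

```python
import numpy as np

def cholesky(A):
    """Column-oriented Cholesky following the efficient algorithm; returns lower-triangular L."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        for i in range(k + 1, n):
            # update row i, columns k+1..i (only the lower triangle is touched)
            A[i, k + 1:i + 1] -= A[i, k] / A[k, k] * A[k + 1:i + 1, k]
        A[k:, k] /= np.sqrt(A[k, k])   # scale column k
    return np.tril(A)

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky(A)
assert np.allclose(L @ L.T, A)   # L = [[2, 0], [1, sqrt(2)]]
```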
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyze Norwegian GDP Data
# * 16-06-2019
# * <NAME>
# * ECN222 - Macroeconomics II
# * Norwegian University of Life Sciences
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import numpy as np
# +
# Open data from Statistics Norway, www.ssb.no
# Quarterly national account:
# https://www.ssb.no/en/statbank/list/knr
dataset = "http://www.ssb.no/statbank/sq/10010628/"
df = pd.read_excel(dataset, skiprows=3, skipfooter=48)
# -
df.tail(10)
# Make a quarterly time index
# remove first column
df = df.drop(['Unnamed: 0'], axis=1)
# add index using pandas.period_range; len(df) means there is
# no need to update the number of periods every quarter
df.index = pd.Index(pd.period_range('1978-01', periods=len(df), freq='Q'))
# +
# Rename columns?
#df.columns = ['Consumption', 'PublicConsumption', 'Investments', 'Error', 'Exports', 'Imports', 'GDP', 'GDPMainland']
# -
# # Figures
#
# Simple time series plot
df['Konsum i husholdninger og ideelle organisasjoner'].plot()
plt.show()
# +
# Add title and legend
df['Konsum i husholdninger og ideelle organisasjoner'].plot()
plt.title("Figure 1.1: Consumption")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('MNOK')
# save figure and use in presentation etc.
folder = './'  # e.g. 'C:\\Users\\username\\Documents\\GitHub\\MacroeconomicsII\\'
filename = folder + 'consumption1.png'
plt.savefig(filename)
plt.show()
# -
# # Variables engineering
# +
# Create growth rates:
# df[log_C] = ...
df['Dc'] = np.log(df['Konsum i husholdninger og ideelle organisasjoner']).diff(4)
Ddf = df.diff(4)
# alternative way to make absolute diffs of all variables in a dataframe
df['DC_Y'] = df['Konsum i husholdninger og ideelle organisasjoner'].diff(4)/(df['Bruttonasjonalprodukt Fastlands-Norge, markedsverdi'].shift(4))
# Remember that using the difference of the logarithms is an approximation that works as long as the relative change is small
# Note small letters for logarithms
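# A quick check of that approximation: log(1 + g) is close to the growth rate g only when g is small (the growth rates below are made up for illustration).

```python
import numpy as np

g = np.array([0.01, 0.02, 0.10, 0.50])   # hypothetical growth rates
log_diff = np.log(1 + g)                 # what the log-difference computes
error = log_diff - g
print(error)                             # tiny for small g, about -0.095 for g = 0.5
```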
# +
# Figure with title and legend
Ddf['Konsum i husholdninger og ideelle organisasjoner'].plot()
plt.title("Figure 1.2: Yearly change in consumption")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('MNOK')
# save figure and use in presentation etc.
filename = folder + 'consumption2.png'
plt.savefig(filename)
plt.show()
# +
# Figure with title and legend
df['DC_Y'].plot()
plt.title("Figure 1.3: Contributions from consumption to GDP-growth")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('Percentage points')
# save figure and use in presentation etc.
filename = folder + 'consumption3.png'
plt.savefig(filename)
plt.show()
# -
df['2018Q3':'2020Q4']
# Make main components of GDP as a share of GDP
df['C_Y'] = df['Konsum i husholdninger og ideelle organisasjoner']/df['Bruttonasjonalprodukt, markedsverdi']
df['G_Y'] = df['Konsum i offentlig forvaltning']/df['Bruttonasjonalprodukt, markedsverdi']
df['I_Y'] = df['Bruttoinvestering i fast realkapital']/df['Bruttonasjonalprodukt, markedsverdi']
df['NX'] = df['Eksport i alt']-df['Import i alt']
df['NX_Y'] = df['NX']/df['Bruttonasjonalprodukt, markedsverdi']
# Find mean of main compontents of GDP as a share of GDP
df.mean()
# or
df.describe()
# or
maincomp = ['C_Y', 'G_Y', 'I_Y', 'NX_Y']
df[maincomp].mean()
#need a filter for year?
print('Main components share of GDP')
print(df[maincomp]['2020Q1':'2020Q4'].mean())
# Next: Make pie-chart
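# A sketch of that pie chart with illustrative (made-up) share values; in the notebook you would pass `df[maincomp]['2020Q1':'2020Q4'].mean()` instead:

```python
import matplotlib
matplotlib.use("Agg")          # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# illustrative values only -- not actual Norwegian data
shares = {"C_Y": 0.45, "G_Y": 0.22, "I_Y": 0.24, "NX_Y": 0.09}
plt.pie(list(shares.values()), labels=list(shares.keys()), autopct="%.0f%%")
plt.title("Main components as a share of GDP (illustrative)")
plt.savefig("gdp_pie.png")
```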
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Zfpg-n2gNgtB" colab_type="text"
# ## T5 Implementation (2/2)
# This notebook explains the implementation of the T5 model.
#
# Before working through it, please review the following posts:
# - [Building a vocab with SentencePiece](https://paul-hyun.github.io/vocab-with-sentencepiece/)
# - [Preprocessing the Naver movie-review sentiment data](https://paul-hyun.github.io/preprocess-nsmc/)
# - [Implementing Transformer (Attention Is All You Need) (1/3)](https://paul-hyun.github.io/transformer-01/)
# - [Implementing Transformer (Attention Is All You Need) (2/3)](https://paul-hyun.github.io/transformer-02/)
# - [Implementing Transformer (Attention Is All You Need) (3/3)](https://paul-hyun.github.io/transformer-03/)
#
#
# This was run on [Colab](https://colab.research.google.com/).
# + [markdown] id="b4fLocKzS8qH" colab_type="text"
# #### 0. Pip Install
# Install the required packages with pip.
# + id="gP4qW5w6TAXe" colab_type="code" outputId="54e19570-6235-4dc3-dfa7-b0c81eb940fd" colab={"base_uri": "https://localhost:8080/", "height": 255}
# !pip install sentencepiece
# !pip install wget
# + [markdown] id="XZs93qCwS_bM" colab_type="text"
# #### 1. Google Drive Mount
# Colab cannot access files on your local machine, so upload them to Google Drive and mount it, after which it can be used like a local disk.
# 1. Run the block below and click the link that appears.
# 2. Select your Google account, click Allow, copy the code that appears, paste it into the box below, and press Enter.
#
# See the training [data and result files](https://drive.google.com/open?id=15XGr-L-W6DSoR5TbniPMJASPsA0IDTiN) for reference.
# + id="1XR4LcDdNfnW" colab_type="code" outputId="36d28670-e5bb-4880-95d3-579f6ea529f4" colab={"base_uri": "https://localhost:8080/", "height": 122}
from google.colab import drive
drive.mount('/content/drive')
# folder where the data is stored; adjust to your environment
data_dir = "/content/drive/My Drive/Data/transformer-evolution"
# + [markdown] id="FMgpP6fjTJF8" colab_type="text"
# #### 2. Imports
# + id="KRgT80wpTJiO" colab_type="code" colab={}
import os
import numpy as np
import math
from random import random, shuffle, choices, randrange
import matplotlib.pyplot as plt
import json
import pandas as pd
from IPython.display import display
from tqdm import tqdm, tqdm_notebook, trange
import sentencepiece as spm
import wget
import torch
import torch.nn as nn
import torch.nn.functional as F
# + [markdown] id="KRrdaSJ_TNAf" colab_type="text"
# #### 3. Check the folder contents
# List data_dir to verify that the Google Drive mount succeeded.
# + id="QfWB9L0_TQlP" colab_type="code" outputId="5012e254-5831-43a3-b585-e33ee1970d80" colab={"base_uri": "https://localhost:8080/", "height": 289}
for f in os.listdir(data_dir):
print(f)
# + [markdown] id="yUOwhKMyTXNQ" colab_type="text"
# #### 4. Vocab and inputs
#
# A new vocab was built for T5 (see the options below).
# + id="BvVMgq0sgMEj" colab_type="code" colab={}
# corpus, prefix and vocab_size are defined in the vocab post linked above;
# the values below are placeholders -- adjust them to your setup
corpus = f"{data_dir}/kowiki.txt"
prefix = f"{data_dir}/kowiki_t5"
vocab_size = 8000
spm.SentencePieceTrainer.train(
    f"--input={corpus} --model_prefix={prefix} --vocab_size={vocab_size + 7 + 26}" +
    " --model_type=bpe" +
    " --max_sentence_length=999999" + # max sentence length
    " --pad_id=0 --pad_piece=[PAD]" + # pad (0)
    " --unk_id=1 --unk_piece=[UNK]" + # unknown (1)
    " --bos_id=2 --bos_piece=[BOS]" + # begin of sequence (2)
    " --eos_id=3 --eos_piece=[EOS]" + # end of sequence (3)
    " --user_defined_symbols=[SEP],[CLS],[MASK],<A>,<B>,<C>,<D>,<E>,<F>,<G>,<H>,<I>,<J>,<K>,<L>,<M>,<N>,<O>,<P>,<Q>,<R>,<S>,<T>,<U>,<V>,<W>,<X>,<Y>,<Z>") # additional special tokens
# + id="1LX6VgIkTaKV" colab_type="code" outputId="a4f39958-3250-49d9-f10f-ba150219c661" colab={"base_uri": "https://localhost:8080/", "height": 34}
# vocab loading
vocab_file = f"{data_dir}/kowiki_t5.model"
vocab = spm.SentencePieceProcessor()
vocab.load(vocab_file)
# + [markdown] id="LMgziU4ATcyN" colab_type="text"
# #### 5. Config
#
# Create a config object for passing settings to the model.
# + id="GDcRg9V0Tdc1" colab_type="code" colab={}
""" configuration json을 읽어들이는 class """
class Config(dict):
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__
@classmethod
def load(cls, file):
with open(file, 'r') as f:
config = json.loads(f.read())
return Config(config)
# + id="JUztpEf6Tfx1" colab_type="code" outputId="b9c71b1c-3124-4b7b-cf78-3f0ca157507f" colab={"base_uri": "https://localhost:8080/", "height": 34}
config = Config({
"n_vocab": len(vocab),
"n_seq": 256,
"n_layer": 6,
"d_hidn": 256,
"i_pad": 0,
"d_ff": 1024,
"n_head": 4,
"d_head": 64,
"dropout": 0.1,
"layer_norm_epsilon": 1e-12
})
print(config)
# + [markdown] id="j93X24LtTijG" colab_type="text"
# #### 6. T5
#
# The T5 classes and helper functions.
# + id="z_41WubQUImx" colab_type="code" colab={}
""" attention pad mask """
def get_attn_pad_mask(seq_q, seq_k, i_pad):
batch_size, len_q = seq_q.size()
batch_size, len_k = seq_k.size()
pad_attn_mask = seq_k.data.eq(i_pad).unsqueeze(1).expand(batch_size, len_q, len_k) # <pad>
return pad_attn_mask
""" attention decoder mask """
def get_attn_decoder_mask(seq):
subsequent_mask = torch.ones_like(seq).unsqueeze(-1).expand(seq.size(0), seq.size(1), seq.size(1))
subsequent_mask = subsequent_mask.triu(diagonal=1) # upper triangular part of a matrix(2-D)
return subsequent_mask
""" scale dot product attention """
class ScaledDotProductAttention(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.dropout = nn.Dropout(config.dropout)
self.scale = 1 / (self.config.d_head ** 0.5)
self.num_buckets = 32
self.relative_attention_bias = torch.nn.Embedding(self.num_buckets, self.config.n_head)
def forward(self, Q, K, V, attn_mask, bidirectional=True):
qlen, klen = Q.size(-2), K.size(-2)
# (bs, n_head, n_q_seq, n_k_seq)
scores = torch.matmul(Q, K.transpose(-1, -2)).mul_(self.scale)
# (1, n_head, n_q_seq, n_k_seq)
position_bias = self.compute_bias(qlen, klen, bidirectional=bidirectional)
scores += position_bias
scores.masked_fill_(attn_mask, -1e9)
# (bs, n_head, n_q_seq, n_k_seq)
attn_prob = nn.Softmax(dim=-1)(scores)
attn_prob = self.dropout(attn_prob)
# (bs, n_head, n_q_seq, d_v)
context = torch.matmul(attn_prob, V)
# (bs, n_head, n_q_seq, d_v), (bs, n_head, n_q_seq, n_v_seq)
return context, attn_prob
def compute_bias(self, qlen, klen, bidirectional=True):
context_position = torch.arange(qlen, dtype=torch.long)[:, None]
memory_position = torch.arange(klen, dtype=torch.long)[None, :]
# (qlen, klen)
relative_position = memory_position - context_position
# (qlen, klen)
rp_bucket = self._relative_position_bucket(
relative_position, # shape (qlen, klen)
num_buckets=self.num_buckets,
bidirectional=bidirectional
)
# (qlen, klen)
rp_bucket = rp_bucket.to(self.relative_attention_bias.weight.device)
# (qlen, klen, n_head)
values = self.relative_attention_bias(rp_bucket)
# (1, n_head, qlen, klen)
values = values.permute([2, 0, 1]).unsqueeze(0)
return values
def _relative_position_bucket(self, relative_position, bidirectional=True, num_buckets=32, max_distance=128):
ret = 0
n = -relative_position
if bidirectional:
num_buckets //= 2
ret += (n < 0).to(torch.long) * num_buckets # mtf.to_int32(mtf.less(n, 0)) * num_buckets
n = torch.abs(n)
else:
n = torch.max(n, torch.zeros_like(n))
# half of the buckets are for exact increments in positions
max_exact = num_buckets // 2
is_small = n < max_exact
# The other half of the buckets are for logarithmically bigger bins in positions up to max_distance
val_if_large = max_exact + (
torch.log(n.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact)
).to(torch.long)
val_if_large = torch.min(val_if_large, torch.full_like(val_if_large, num_buckets - 1))
ret += torch.where(is_small, n, val_if_large)
return ret
""" multi head attention """
class MultiHeadAttention(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.W_Q = nn.Linear(self.config.d_hidn, self.config.n_head * self.config.d_head)
self.W_K = nn.Linear(self.config.d_hidn, self.config.n_head * self.config.d_head)
self.W_V = nn.Linear(self.config.d_hidn, self.config.n_head * self.config.d_head)
self.scaled_dot_attn = ScaledDotProductAttention(self.config)
self.linear = nn.Linear(self.config.n_head * self.config.d_head, self.config.d_hidn)
self.dropout = nn.Dropout(config.dropout)
def forward(self, Q, K, V, attn_mask, bidirectional=False):
batch_size = Q.size(0)
# (bs, n_head, n_q_seq, d_head)
q_s = self.W_Q(Q).view(batch_size, -1, self.config.n_head, self.config.d_head).transpose(1,2)
# (bs, n_head, n_k_seq, d_head)
k_s = self.W_K(K).view(batch_size, -1, self.config.n_head, self.config.d_head).transpose(1,2)
# (bs, n_head, n_v_seq, d_head)
v_s = self.W_V(V).view(batch_size, -1, self.config.n_head, self.config.d_head).transpose(1,2)
# (bs, n_head, n_q_seq, n_k_seq)
attn_mask = attn_mask.unsqueeze(1).repeat(1, self.config.n_head, 1, 1)
# (bs, n_head, n_q_seq, d_head), (bs, n_head, n_q_seq, n_k_seq)
context, attn_prob = self.scaled_dot_attn(q_s, k_s, v_s, attn_mask, bidirectional=bidirectional)
# (bs, n_head, n_q_seq, h_head * d_head)
context = context.transpose(1, 2).contiguous().view(batch_size, -1, self.config.n_head * self.config.d_head)
# (bs, n_head, n_q_seq, e_embd)
output = self.linear(context)
output = self.dropout(output)
# (bs, n_q_seq, d_hidn), (bs, n_head, n_q_seq, n_k_seq)
return output, attn_prob
""" feed forward """
class PoswiseFeedForwardNet(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.conv1 = nn.Conv1d(in_channels=self.config.d_hidn, out_channels=self.config.d_ff, kernel_size=1)
self.conv2 = nn.Conv1d(in_channels=self.config.d_ff, out_channels=self.config.d_hidn, kernel_size=1)
self.active = F.gelu
self.dropout = nn.Dropout(config.dropout)
def forward(self, inputs):
# (bs, d_ff, n_seq)
output = self.active(self.conv1(inputs.transpose(1, 2)))
# (bs, n_seq, d_hidn)
output = self.conv2(output).transpose(1, 2)
output = self.dropout(output)
# (bs, n_seq, d_hidn)
return output
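# As a sanity check (not part of the original code), the relative-position bucketing logic of `_relative_position_bucket` above can be re-derived in plain numpy; `relative_position_bucket` below mirrors it, with a guard added so `log(0)` is never taken (those entries are overridden by the small-distance branch anyway):

```python
import numpy as np

def relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
    """numpy mirror of T5's _relative_position_bucket, for inspection only."""
    ret = np.zeros_like(relative_position)
    n = -relative_position
    if bidirectional:
        num_buckets //= 2
        ret = ret + (n < 0).astype(np.int64) * num_buckets
        n = np.abs(n)
    else:
        n = np.maximum(n, 0)
    max_exact = num_buckets // 2
    is_small = n < max_exact
    val_if_large = max_exact + (
        np.log(np.maximum(n, 1) / max_exact) / np.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    ).astype(np.int64)
    val_if_large = np.minimum(val_if_large, num_buckets - 1)
    return ret + np.where(is_small, n, val_if_large)

d = np.arange(0, 129)
buckets = relative_position_bucket(-d)   # keys this far behind the query
print(buckets[:10])   # exact buckets 0..7 first, then log-spaced bins
```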
# + id="WDXUeMKoULa2" colab_type="code" colab={}
""" encoder layer """
class EncoderLayer(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.self_attn = MultiHeadAttention(self.config)
self.layer_norm1 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon)
self.pos_ffn = PoswiseFeedForwardNet(self.config)
self.layer_norm2 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon)
def forward(self, inputs, attn_mask):
# (bs, n_enc_seq, d_hidn), (bs, n_head, n_enc_seq, n_enc_seq)
att_outputs, attn_prob = self.self_attn(inputs, inputs, inputs, attn_mask)
att_outputs = self.layer_norm1(inputs + att_outputs)
# (bs, n_enc_seq, d_hidn)
ffn_outputs = self.pos_ffn(att_outputs)
ffn_outputs = self.layer_norm2(ffn_outputs + att_outputs)
# (bs, n_enc_seq, d_hidn), (bs, n_head, n_enc_seq, n_enc_seq)
return ffn_outputs, attn_prob
""" encoder """
class Encoder(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.layers = nn.ModuleList([EncoderLayer(self.config) for _ in range(self.config.n_layer)])
def forward(self, enc_embd, enc_self_mask):
# (bs, n_enc_seq, d_hidn)
enc_outputs = enc_embd
attn_probs = []
for layer in self.layers:
# (bs, n_enc_seq, d_hidn), (bs, n_head, n_enc_seq, n_enc_seq)
enc_outputs, attn_prob = layer(enc_outputs, enc_self_mask)
attn_probs.append(attn_prob)
# (bs, n_enc_seq, d_hidn), [(bs, n_head, n_enc_seq, n_enc_seq)]
return enc_outputs, attn_probs
""" decoder layer """
class DecoderLayer(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.self_attn = MultiHeadAttention(self.config)
self.layer_norm1 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon)
self.dec_enc_attn = MultiHeadAttention(self.config)
self.layer_norm2 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon)
self.pos_ffn = PoswiseFeedForwardNet(self.config)
self.layer_norm3 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon)
def forward(self, dec_inputs, enc_outputs, self_mask, ende_mask):
# (bs, n_dec_seq, d_hidn), (bs, n_head, n_dec_seq, n_dec_seq)
self_att_outputs, self_attn_prob = self.self_attn(dec_inputs, dec_inputs, dec_inputs, self_mask, bidirectional=False)
self_att_outputs = self.layer_norm1(dec_inputs + self_att_outputs)
# (bs, n_dec_seq, d_hidn), (bs, n_head, n_dec_seq, n_enc_seq)
dec_enc_att_outputs, dec_enc_attn_prob = self.dec_enc_attn(self_att_outputs, enc_outputs, enc_outputs, ende_mask)
dec_enc_att_outputs = self.layer_norm2(self_att_outputs + dec_enc_att_outputs)
# (bs, n_dec_seq, d_hidn)
ffn_outputs = self.pos_ffn(dec_enc_att_outputs)
ffn_outputs = self.layer_norm3(dec_enc_att_outputs + ffn_outputs)
# (bs, n_dec_seq, d_hidn), (bs, n_head, n_dec_seq, n_dec_seq), (bs, n_head, n_dec_seq, n_enc_seq)
return ffn_outputs, self_attn_prob, dec_enc_attn_prob
""" decoder """
class Decoder(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.layers = nn.ModuleList([DecoderLayer(self.config) for _ in range(self.config.n_layer)])
def forward(self, dec_embd, enc_outputs, self_mask, ende_mask):
# (bs, n_dec_seq, d_hidn)
dec_outputs = dec_embd
self_attn_probs, dec_enc_attn_probs = [], []
for layer in self.layers:
# (bs, n_dec_seq, d_hidn), (bs, n_dec_seq, n_dec_seq), (bs, n_dec_seq, n_enc_seq)
dec_outputs, self_attn_prob, dec_enc_attn_prob = layer(dec_outputs, enc_outputs, self_mask, ende_mask)
self_attn_probs.append(self_attn_prob)
dec_enc_attn_probs.append(dec_enc_attn_prob)
        # (bs, n_dec_seq, d_hidn), [(bs, n_dec_seq, n_dec_seq)], [(bs, n_dec_seq, n_enc_seq)]
return dec_outputs, self_attn_probs, dec_enc_attn_probs
""" t5 """
class T5(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.embedding = nn.Embedding(self.config.n_vocab, self.config.d_hidn)
self.encoder = Encoder(self.config)
self.decoder = Decoder(self.config)
self.projection_lm = nn.Linear(self.config.d_hidn, self.config.n_vocab, bias=False)
self.projection_lm.weight = self.embedding.weight
def forward(self, enc_inputs, dec_inputs):
enc_embd = self.embedding(enc_inputs)
dec_embd = self.embedding(dec_inputs)
enc_self_mask = get_attn_pad_mask(enc_inputs, enc_inputs, self.config.i_pad)
dec_self_mask = self.get_attn_dec_mask(dec_inputs)
dec_ende_mask = get_attn_pad_mask(dec_inputs, enc_inputs, self.config.i_pad)
# (bs, n_enc_seq, d_hidn), [(bs, n_head, n_enc_seq, n_enc_seq)]
enc_outputs, enc_self_attn_probs = self.encoder(enc_embd, enc_self_mask)
# (bs, n_dec_seq, d_hidn), [(bs, n_head, n_dec_seq, n_dec_seq)], [(bs, n_head, n_dec_seq, n_enc_seq)]
dec_outputs, dec_self_attn_probs, dec_enc_attn_probs = self.decoder(dec_embd, enc_outputs, dec_self_mask, dec_ende_mask)
# (bs, n_dec_seq, n_vocab)
dec_outputs = self.projection_lm(dec_outputs)
# (bs, n_dec_seq, n_vocab), [(bs, n_head, n_enc_seq, n_enc_seq)], [(bs, n_head, n_dec_seq, n_dec_seq)], [(bs, n_head, n_dec_seq, n_enc_seq)]
return dec_outputs, enc_self_attn_probs, dec_self_attn_probs, dec_enc_attn_probs
def get_attn_dec_mask(self, dec_inputs):
# (bs, n_dec_seq, n_dec_seq)
dec_pad_mask = get_attn_pad_mask(dec_inputs, dec_inputs, self.config.i_pad)
# (bs, n_dec_seq, n_dec_seq)
dec_ahead_mask = get_attn_decoder_mask(dec_inputs)
# (bs, n_dec_seq, n_dec_seq)
dec_self_mask = torch.gt((dec_pad_mask + dec_ahead_mask), 0)
# (bs, n_dec_seq, n_dec_seq)
return dec_self_mask
def save(self, epoch, loss, path):
torch.save({
"epoch": epoch,
"loss": loss,
"state_dict": self.state_dict()
}, path)
def load(self, path):
save = torch.load(path)
self.load_state_dict(save["state_dict"])
return save["epoch"], save["loss"]
# + [markdown] id="pZQyVfnJUcXH" colab_type="text"
# #### 7. Pretrain model
# + id="KIu5UyLUUdMW" colab_type="code" colab={}
""" T5 pretrain """
class T5Pretrain(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.t5 = T5(self.config)
def forward(self, enc_inputs, dec_inputs):
# (bs, n_dec_seq, n_vocab), [(bs, n_head, n_enc_seq, n_enc_seq)], [(bs, n_head, n_dec_seq, n_dec_seq)], [(bs, n_head, n_dec_seq, n_enc_seq)]
logits, enc_self_attn_probs, dec_self_attn_probs, dec_enc_attn_probs = self.t5(enc_inputs, dec_inputs)
return logits, enc_self_attn_probs, dec_self_attn_probs, dec_enc_attn_probs
# + [markdown] id="L_DfRSm3Uzg3" colab_type="text"
# #### 8. Prepare the pretrain data
# + id="1cyywnsK12GB" colab_type="code" outputId="39b2c6ee-1625-4a2b-bd82-bf56e20ff459" colab={"base_uri": "https://localhost:8080/", "height": 296}
SPAN_LEN = 8
SPAN_VALUE = np.array([i+1 for i in range(SPAN_LEN)])
SPAN_RATIO = np.array([1/i for i in SPAN_VALUE])
SPAN_RATIO = SPAN_RATIO / np.sum(SPAN_RATIO)
print(f"평균 mask 길이: {np.sum(SPAN_VALUE * SPAN_RATIO)}")
# graph
plt.figure(figsize=[12, 4])
plt.plot(SPAN_RATIO, label="ratio")
plt.legend()
plt.xlabel('Length')
plt.ylabel('Ratio')
plt.show()
# + id="7vR09cBp2yGH" colab_type="code" colab={}
""" SPAN 길이 """
def get_span_length():
return choices(SPAN_VALUE, SPAN_RATIO)[0]
# + id="7F8q2PatU0KH" colab_type="code" colab={}
def create_pretrain_mask(tokens, mask_cnt):
"""
    create the span masks
"""
masks = []
cand_idx = {}
index = 0
for (i, token) in enumerate(tokens):
masks.append(None)
if token == "[BOS]" or token == "[EOS]":
continue
if 0 < len(cand_idx) and not token.startswith(u"\u2581"):
cand_idx[index].append(i)
else:
index += 1
cand_idx[index] = [i]
assert len(masks) == len(tokens)
keys = list(cand_idx.keys())
shuffle(keys)
mask_lms = []
covered_idx = set()
for index in keys:
if len(mask_lms) >= mask_cnt:
break
span_len = get_span_length()
        # skip if there are not enough word groups left for this span
if len(cand_idx) <= index + span_len:
continue
index_set = []
        # record the indices of the tokens to mask
for i in range(span_len):
index_set.extend(cand_idx[index + i])
        # avoid exceeding the mask budget
if len(mask_lms) + len(index_set) > mask_cnt:
continue
        # check whether any of these indices has already been masked
is_idx_covered = False
for index in index_set:
if index in covered_idx:
is_idx_covered = True
break
if is_idx_covered:
continue
for index in index_set:
covered_idx.add(index)
masked_token = None
mask_lms.append({"index": index, "span_idx1": index_set[0] - 1, "span_idx2": index_set[-1] + 1, "label": tokens[index]})
masks[index] = tokens[index]
tokens[index] = masked_token
# span boundary
covered_idx.add(index_set[0] - 1)
covered_idx.add(index_set[-1] + 1)
enc_input, dec_input = [], []
ascii_index = 65
is_mask = False
for i, token in enumerate(tokens):
if is_mask:
if token is None:
dec_input.append(masks[i])
else:
enc_input.append(token)
is_mask = False
else:
if token is None:
enc_input.append(f"<{chr(ascii_index)}>")
dec_input.append(f"<{chr(ascii_index)}>")
dec_input.append(masks[i])
ascii_index += 1
is_mask = True
else:
enc_input.append(token)
if 0 < len(dec_input):
dec_input.append("<Z>")
return enc_input, dec_input
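# A toy, hand-built illustration (not part of the original pipeline) of the encoder/decoder format that `create_pretrain_mask` produces: sentinel tokens `<A>`, `<B>`, ... replace the masked spans in the encoder input, and the decoder target lists each sentinel followed by the tokens it hides, terminated by `<Z>`. The tokens and spans below are chosen by hand:

```python
# hand-picked tokens and spans, purely illustrative
tokens = ["thank", "you", "for", "inviting", "me", "to", "your", "party", "last", "week"]
spans = [(1, 2), (6, 6)]                 # mask "you for" and "your"

enc_input, dec_input = [], []
sentinels = iter(["<A>", "<B>", "<C>"])
pos = 0
for start, end in spans:
    sentinel = next(sentinels)
    enc_input += tokens[pos:start] + [sentinel]       # span replaced by one sentinel
    dec_input += [sentinel] + tokens[start:end + 1]   # target re-states the span
    pos = end + 1
enc_input += tokens[pos:]
dec_input.append("<Z>")                               # end-of-targets sentinel

print(enc_input)  # ['thank', '<A>', 'inviting', 'me', 'to', '<B>', 'party', 'last', 'week']
print(dec_input)  # ['<A>', 'you', 'for', '<B>', 'your', '<Z>']
```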
# + id="A7h4x5WAVE-Y" colab_type="code" colab={}
def create_pretrain_instances(docs, doc_idx, doc, n_seq, mask_prob):
"""
    create pretrain instances for a single doc
"""
# for [BOS], [EOS]
max_seq = n_seq - 2
tgt_seq = max_seq
instances = []
current_chunk = []
current_length = 0
for i in range(len(doc)):
current_chunk.append(doc[i]) # line
current_length += len(doc[i])
if i == len(doc) - 1 or current_length >= tgt_seq:
if 0 < len(current_chunk):
tokens = []
for chunk in current_chunk: tokens.extend(chunk)
tokens = tokens[:tgt_seq]
enc_input, dec_input = create_pretrain_mask(tokens, int((len(tokens) - 3) * mask_prob))
instance = {
"enc_input": enc_input,
"dec_input": dec_input,
}
instances.append(instance)
current_chunk = []
current_length = 0
return instances
# + id="dqbLKjIczUQz" colab_type="code" colab={}
""" pretrain 데이터 생성 """
def make_pretrain_data(vocab, in_file, out_file, count, n_seq, mask_prob):
line_cnt = 0
with open(in_file, "r") as in_f:
for line in in_f:
line_cnt += 1
docs = []
with open(in_file, "r") as f:
doc = []
with tqdm_notebook(total=line_cnt, desc=f"Loading") as pbar:
for i, line in enumerate(f):
line = line.strip()
if line == "":
if 0 < len(doc):
docs.append(doc)
doc = []
                    # only 100,000 docs are processed, to limit memory usage
if 100000 < len(docs): break
else:
pieces = vocab.encode_as_pieces(line)
if 0 < len(pieces):
doc.append(pieces)
pbar.update(1)
if doc:
docs.append(doc)
for index in range(count):
output = out_file.format(index)
if os.path.isfile(output): continue
with open(output, "w") as out_f:
with tqdm_notebook(total=len(docs), desc=f"Making") as pbar:
for i, doc in enumerate(docs):
instances = create_pretrain_instances(docs, i, doc, n_seq, mask_prob)
for instance in instances:
out_f.write(json.dumps(instance, ensure_ascii=False))
out_f.write("\n")
pbar.update(1)
# + id="uPEwWM_qMoYB" colab_type="code" colab={}
# number of pretrain files
count = 1
# + id="36MJkxW_zd07" colab_type="code" outputId="c6ec5083-c052-4065-9a39-e75071086347" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["3f893cef3d474a15aab57e168205eedf", "10c886e2cf7c4eb5bca64b399098d168", "b4296d0ce29a4c7f8e575506c3835911", "26daeaba3b04460d87f5e9cfd8d85563", "e8124022674d485c9316ec64b2c456c9", "ddbba6da173642558aafa24f121bef2b", "b493c59c5e8d43649d2ed8484c909235", "69dffc7115af4e6eba4c1739db730db1"]}
in_file = f"{data_dir}/kowiki.txt"
out_file = f"{data_dir}/kowiki_t5" + "_{}.json"
n_seq = 256
mask_prob = 0.15
make_pretrain_data(vocab, in_file, out_file, count, n_seq, mask_prob)
# + [markdown] id="rgAxx2YImNOw" colab_type="text"
# #### 9. Pretrain Data
# This is the T5 pretrain dataset.
# + id="G8GPvntkmRlo" colab_type="code" colab={}
""" pretrain 데이터셋 """
class PretrainDataSet(torch.utils.data.Dataset):
def __init__(self, vocab, infile):
self.vocab = vocab
self.labels = []
self.enc_inputs = []
self.dec_inputs = []
line_cnt = 0
with open(infile, "r") as f:
for line in f:
line_cnt += 1
with open(infile, "r") as f:
for i, line in enumerate(tqdm(f, total=line_cnt, desc=f"Loading {infile}", unit=" lines")):
instance = json.loads(line)
enc_input = [vocab.piece_to_id(p) for p in instance["enc_input"]]
dec_input = [vocab.piece_to_id(p) for p in instance["dec_input"]]
self.labels.append(dec_input + [vocab.piece_to_id("[EOS]")])
self.enc_inputs.append(enc_input)
self.dec_inputs.append([vocab.piece_to_id("[BOS]")] + dec_input)
def __len__(self):
assert len(self.labels) == len(self.enc_inputs)
assert len(self.labels) == len(self.dec_inputs)
return len(self.labels)
def __getitem__(self, item):
return (torch.tensor(self.labels[item]),
torch.tensor(self.enc_inputs[item]),
torch.tensor(self.dec_inputs[item]))
# + id="_ONVNep2nM05" colab_type="code" colab={}
""" pretrain data collate_fn """
def pretrin_collate_fn(inputs):
labels, enc_inputs, dec_inputs = list(zip(*inputs))
labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=-1)
enc_inputs = torch.nn.utils.rnn.pad_sequence(enc_inputs, batch_first=True, padding_value=0)
dec_inputs = torch.nn.utils.rnn.pad_sequence(dec_inputs, batch_first=True, padding_value=0)
batch = [
labels,
enc_inputs,
dec_inputs
]
return batch
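# Note that the labels are padded with -1 while the inputs are padded with 0: the -1 pairs with the `CrossEntropyLoss(ignore_index=-1)` created later for pretraining, so padded label positions contribute nothing to the loss. A rough NumPy sketch of that masking behaviour (the function name is illustrative, not part of the notebook):

```python
import numpy as np

def masked_cross_entropy(logits, labels, ignore_index=-1):
    """Mean cross-entropy over positions whose label is not ignore_index,
    mirroring torch.nn.CrossEntropyLoss(ignore_index=-1)."""
    labels = np.asarray(labels)
    mask = labels != ignore_index
    z = logits - logits.max(axis=-1, keepdims=True)           # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))  # log-softmax
    picked = logp[mask, labels[mask]]                         # log-prob of each true label
    return float(-picked.mean())
```

# Because the padded positions are masked out, any logits the model produces there are irrelevant to the gradient.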
# + id="NzPPga8ZnQsq" colab_type="code" outputId="0f2de6db-b50d-47a5-d087-399aaeaf675d" colab={"base_uri": "https://localhost:8080/", "height": 34}
""" pretrain 데이터 로더 """
batch_size = 128
dataset = PretrainDataSet(vocab, f"{data_dir}/kowiki_t5_0.json")
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, collate_fn=pretrin_collate_fn)
# + id="gO9hUtT6pgS5" colab_type="code" colab={}
""" 모델 epoch 학습 """
def train_epoch(config, epoch, model, criterion, optimizer, train_loader):
losses = []
model.train()
with tqdm(total=len(train_loader), desc=f"Train({epoch})") as pbar:
for i, value in enumerate(train_loader):
labels, enc_inputs, dec_inputs = map(lambda v: v.to(config.device), value)
optimizer.zero_grad()
outputs = model(enc_inputs, dec_inputs)
logits = outputs[0]
loss = criterion(logits.view(-1, logits.size(2)), labels.view(-1))
loss_val = loss.item()
losses.append(loss_val)
loss.backward()
optimizer.step()
pbar.update(1)
pbar.set_postfix_str(f"Loss: {loss_val:.3f} ({np.mean(losses):.3f})")
return np.mean(losses)
# + id="xl7i2KUTqInp" colab_type="code" outputId="2b489b02-ae11-416c-9b37-949a23801723" colab={"base_uri": "https://localhost:8080/", "height": 34}
config.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(config)
learning_rate = 5e-5
n_epoch = 4
# + id="6KUHixRPqNpc" colab_type="code" outputId="efb516e5-ae2f-4097-f54c-a22a93c69e10" colab={"base_uri": "https://localhost:8080/", "height": 102}
model = T5Pretrain(config)
save_pretrain = f"{data_dir}/save_t5_pretrain.pth"
best_epoch, best_loss = 0, 0
if os.path.isfile(save_pretrain):
best_epoch, best_loss = model.t5.load(save_pretrain)
print(f"load pretrain from: {save_pretrain}, epoch={best_epoch}, loss={best_loss}")
best_epoch += 1
model.to(config.device)
criterion = torch.nn.CrossEntropyLoss(ignore_index=-1, reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
losses = []
offset = best_epoch
for step in range(n_epoch):
epoch = step + offset
if 0 < step and 1 < count:
del train_loader
dataset = PretrainDataSet(vocab, f"{data_dir}/kowiki_t5_{epoch % count}.json")
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, collate_fn=pretrin_collate_fn)
loss = train_epoch(config, epoch, model, criterion, optimizer, train_loader)
losses.append(loss)
model.t5.save(epoch, loss, save_pretrain)
# + id="UO7-KlvSr8mV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 652} outputId="92bfb975-1b50-494a-abb3-46b759ce3c8e"
# data
data = {
"loss": losses
}
df = pd.DataFrame(data)
display(df)
# graph
plt.figure(figsize=[12, 4])
plt.plot(losses, label="loss")
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
| tutorial/t5_01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p36)
# language: python
# name: conda_tensorflow_p36
# ---
# # Introduction: Recurrent Neural Network Quickstart
#
# The purpose of this notebook is to serve as a rapid introduction to recurrent neural networks. All of the details can be found in `Deep Dive into Recurrent Neural Networks` while this notebook focuses on using the pre-trained network.
# %load_ext autoreload
# %autoreload 2
# +
from IPython.core.interactiveshell import InteractiveShell
from IPython.display import HTML
InteractiveShell.ast_node_interactivity = 'all'
import warnings
warnings.filterwarnings('ignore', category = RuntimeWarning)
warnings.filterwarnings('ignore', category = UserWarning)
import pandas as pd
import numpy as np
from utils import get_data, generate_output, guess_human, seed_sequence, get_embeddings, find_closest
# -
# # Fetch Training Data
#
# * Using patent abstracts from patent search for neural network
# * 3000+ patents total
#
data = pd.read_csv('../data/neural_network_patent_query.csv')
data.head()
training_dict, word_idx, idx_word, sequences = get_data('../data/neural_network_patent_query.csv', training_len = 50)
# * Sequences of text are represented as integers
# * `word_idx` maps words to integers
# * `idx_word` maps integers to words
# * Features are integer sequences of length 50
# * Label is next word in sequence
# * Labels are one-hot encoded
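# To make the encoding concrete, here is a toy sketch with a made-up four-word vocabulary (the real `word_idx`/`idx_word` come from `get_data`):

```python
import numpy as np

# hypothetical 4-word vocabulary, for illustration only
word_idx = {'neural': 1, 'network': 2, 'for': 3, 'classification': 4}
idx_word = {i: w for w, i in word_idx.items()}

# a feature sequence is a list of word indices ...
features = [word_idx[w] for w in ['neural', 'network', 'for']]

# ... and the label (the next word) is one-hot encoded over the vocabulary
label = np.zeros(len(word_idx) + 1)
label[word_idx['classification']] = 1.0

print(features)                         # [1, 2, 3]
print(idx_word[int(np.argmax(label))])  # classification
```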
training_dict['X_train'][:2]
training_dict['y_train'][:2]
for i, sequence in enumerate(training_dict['X_train'][:2]):
text = []
for idx in sequence:
text.append(idx_word[idx])
print('Features: ' + ' '.join(text) + '\n')
print('Label: ' + idx_word[np.argmax(training_dict['y_train'][i])] + '\n')
# # Make Recurrent Neural Network
#
# * Embedding dimension = 100
# * 64 LSTM cells in one layer
# * Dropout and recurrent dropout for regularization
# * Fully connected layer with 64 units on top of LSTM
# * 'relu' activation
# * Drop out for regularization
# * Output layer produces prediction for each word
# * 'softmax' activation
# * Adam optimizer with defaults
# * Categorical cross entropy loss
# * Monitor accuracy
# +
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dense, Dropout, Embedding, Masking, Bidirectional
from keras.optimizers import Adam
from keras.utils import plot_model
# +
model = Sequential()
# Embedding layer
model.add(
Embedding(
input_dim=len(word_idx) + 1,
output_dim=100,
weights=None,
trainable=True))
# Recurrent layer
model.add(
LSTM(
64, return_sequences=False, dropout=0.1,
recurrent_dropout=0.1))
# Fully connected layer
model.add(Dense(64, activation='relu'))
# Dropout for regularization
model.add(Dropout(0.5))
# Output layer
model.add(Dense(len(word_idx) + 1, activation='softmax'))
# Compile the model
model.compile(
optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
# -
# ## Load in Pre-Trained Model
#
# Rather than waiting several hours to train the model, we can load in a model trained for 150 epochs. We'll demonstrate how to train this model for another 5 epochs, which shouldn't take too long depending on your hardware.
# +
from keras.models import load_model
# Load in model and demonstrate training
model = load_model('../models/train-embeddings-rnn.h5')
h = model.fit(training_dict['X_train'], training_dict['y_train'], epochs = 5, batch_size = 2048,
validation_data = (training_dict['X_valid'], training_dict['y_valid']),
verbose = 1)
# +
model = load_model('../models/train-embeddings-rnn.h5')
print('Model Performance: Log Loss and Accuracy on training data')
model.evaluate(training_dict['X_train'], training_dict['y_train'], batch_size = 2048)
print('\nModel Performance: Log Loss and Accuracy on validation data')
model.evaluate(training_dict['X_valid'], training_dict['y_valid'], batch_size = 2048)
# -
# There is some overfitting on the training data, but it is not severe. Regularization both within the LSTM layer (dropout and recurrent dropout) and after the fully connected layer helps combat overfitting.
# # Generate Output
#
# We can use the fully trained model to generate output by starting it off with a seed sequence. The `diversity` controls the amount of stochasticity in the predictions: the next word predicted is selected based on the probabilities of the predictions.
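# The exact sampling inside `generate_output` lives in `utils`, but the temperature idea can be sketched as follows (the function name here is illustrative):

```python
import numpy as np

def sample_with_diversity(probs, diversity=0.75, rng=None):
    """Re-weight predicted next-word probabilities by a diversity
    (temperature) factor, then sample the index of the next word."""
    rng = rng or np.random.default_rng(0)
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-12) / diversity  # apply temperature in log space
    weights = np.exp(logits - logits.max())     # renormalize stably
    weights /= weights.sum()
    return int(rng.choice(len(weights), p=weights))

next_idx = sample_with_diversity([0.7, 0.2, 0.1], diversity=0.75)
```

# A diversity near zero collapses to always picking the most probable word; a large diversity flattens the distribution toward uniform.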
for i in generate_output(model, sequences, idx_word, seed_length = 50, new_words = 30, diversity = 0.75):
HTML(i)
for i in generate_output(model, sequences, idx_word, seed_length = 30, new_words = 30, diversity = 1.5):
HTML(i)
# Too high of a diversity and the output will be nearly random. Too low of a diversity and the model can get stuck outputting loops of text.
# ## Start the network with own input
#
# Here you can input your own starting sequence for the network. The network will produce `num_words` of text.
s = 'This patent provides a basis for using a recurrent neural network to '
HTML(seed_sequence(model, s, word_idx, idx_word, diversity = 0.75, num_words = 20))
s = 'The cell state is passed along from one time step to another allowing the '
HTML(seed_sequence(model, s, word_idx, idx_word, diversity = 0.75, num_words = 20))
# # Guess if Output is from network or human
#
# The next function plays a simple game: is the output from a human or the network? Two of the choices are computer generated while the third is the actual ending but the order is randomized. Try to see if you can discern the differences!
guess_human(model, sequences, idx_word)
guess_human(model, sequences, idx_word)
guess_human(model, sequences, idx_word)
# # Inspect Embeddings
#
# As a final piece of model inspection, we can look at the embeddings and find the words closest to a query word in the embedding space. This gives us an idea of what the network has learned.
embeddings = get_embeddings(model)
embeddings.shape
# Each word in the vocabulary is now represented as a 100-dimensional vector. This could be reduced to 2 or 3 dimensions for visualization. It can also be used to find the closest word to a query word.
find_closest('network', embeddings, word_idx, idx_word)
# A word should have a cosine similarity of 1.0 with itself! The embeddings are learned for a task, so the nearest words may only make sense in the context of the patents on which we trained the network.
find_closest('data', embeddings, word_idx, idx_word)
# It seems the network has learned some basic relationships between words!
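# The nearest-word lookup boils down to cosine similarity in the embedding space. A minimal NumPy sketch (the function name and tiny vocabulary below are made up; the notebook's `find_closest` lives in `utils`):

```python
import numpy as np

def closest_words(query, embeddings, word_idx, idx_word, n=5):
    """Rank vocabulary words by cosine similarity to a query word's vector."""
    vec = embeddings[word_idx[query]]
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(vec)
    sims = embeddings @ vec / (norms + 1e-12)
    order = np.argsort(sims)[::-1][:n]  # highest similarity first
    return [(idx_word[int(i)], float(sims[i])) for i in order]
```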
# # Conclusions
#
# In this notebook, we saw a rapid introduction to recurrent neural networks. The full details can be found in `Deep Dive into Recurrent Neural Networks`. Recurrent neural networks are a powerful tool for natural language processing because of their ability to keep in mind an entire input sequence as they process one word at a time. This makes them applicable to sequence learning tasks where the order of the inputs matters and there can be long-term dependencies in the input sequences.
| notebooks/Quick Start to Recurrent Neural Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import matplotlib.pyplot as plt
import sqlite3
import pandas as pd
import sys
sys.path.append("/Users/peter/PycharmProjects/depth/")
conn = sqlite3.connect("../data.db")
cur = conn.cursor()
protein_df_all = pd.read_sql_query("select * from proteins", conn)
protein_df = protein_df_all[
    (protein_df_all.method.str.match("x-ray"))
    & (protein_df_all.cov_trimmed == 100)
    & (protein_df_all.cov_total >= 80)
    & (protein_df_all.resolution <= 3.5)
    & (protein_df_all.length >= 30)
]
ds_df = pd.read_sql_query("select * from (select distinct id from datasets) a join proteins using (id)", conn)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
import numpy as np
from numpy import vectorize
import math
from time import time
# +
from mllib.retrievers import SQLRetriever
from mllib.features.binary import Bfactors
import io
from Bio import AlignIO
tf = Bfactors()
sql_retriever = SQLRetriever(conn, tf.query)
# -
r = sql_retriever.transform("4qnd_A")
r[:10]
s = tf.fit(r).transform(r)
s[:10]
i = tf.inverse_transform(s)
i[:10]
ds_df.count()
print(ds_df.thickness.mean(), ds_df.thickness.std())
print(ds_df.describe())
| jupyter/explore.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from thesis_initialise import *
from everest.funcy import Fn
myobj = Fn(slice(None))
myobj
myarr = Fn([[1, 2], [10, 20]])
myget = myarr[:, 1]
myget.value
slice(None)
help(slice)
myval = Fn(0)
myval
myval.value
myobj = Fn([1, 2, 3])
Fn(0, 1, 2)
Fn([0, 1], 2, 3)
type(slice(None)) is slice
myslice = slice(1, 2, 3)
myslice.start
myslice.stop
myslice.step
myarr = Fn([[1, 2], [10, 20]])
| dev/dev_000_seq.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graduate Rotational Internship Program - The Sparks Foundation, SG
#
# ## GRIP - Computer Vision & IoT - Task 1 (Object Detection Using MobileNet & SSD)
#
# **<NAME>**\
# [LinkedIn](https://linkedin.com/in/avishj/) [GitHub](https://github.com/avishj/) [Website](https://avishj.dev/)
# Object detection, a subset of computer vision, is an automated method for locating objects of interest in an image with respect to the background. Solving the object detection problem means placing a tight bounding box around each such object and associating the correct object category with each bounding box. As with other computer vision tasks, deep learning is the state-of-the-art approach to object detection. MobileNet is an efficient and portable CNN architecture used in real-world applications; MobileNets primarily use depthwise separable convolutions in place of the standard convolutions of earlier architectures to build lighter models. Single Shot Detectors (SSDs) locate multiple objects in a single forward pass; YOLO is another example of this family.
#
# In the current task, we will use MobileNet SSD with OpenCV's DNN module to build our object detection system.\
# Reference: [Object detection with deep learning and OpenCV - PyImageSearch](https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/)
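# The parameter savings from depthwise separable convolutions can be seen with simple arithmetic: a standard k x k convolution needs `c_in * c_out * k * k` weights, while the depthwise + pointwise factorization needs only `c_in * k * k + c_in * c_out` (biases ignored; the channel sizes below are illustrative):

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """One depthwise k x k filter per input channel + a 1x1 pointwise mix."""
    return c_in * k * k + c_in * c_out

std = standard_conv_params(32, 64, 3)        # 18432
sep = depthwise_separable_params(32, 64, 3)  # 2336
print(std, sep, round(std / sep, 1))         # 18432 2336 7.9
```

# Roughly an 8x reduction for this layer, which is why MobileNet models are so much lighter.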
# ### Step 1: Handling Imports
import numpy as np
import cv2
# ### Step 2: Initialising Classes
# +
# Generate a set of bounding box colors for each class that was used to train MobileNet
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
"bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
"dog", "horse", "motorbike", "person", "pottedplant", "sheep",
"sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
# -
# ### Step 3: Loading Pre-Trained Model & Images to be Detected
net = cv2.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt.txt','MobileNetSSD_deploy.caffemodel')
img = cv2.imread('imgs/multi2.jpg')
img = cv2.resize(img, (500, 500))
cv2.imshow("Input", img)
cv2.waitKey(0)
# ### Step 4: Detecting Objects
# +
# Extracting Dimensions & Creating 500x500 Blob, then passing it to the DNN
(h, w) = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 0.007843,(500, 500), 127.5)
net.setInput(blob)
detections = net.forward()
# -
# ### Step 5: Labelling Predictions, Boxing Objects & Displaying the Probability
for i in np.arange(0, detections.shape[2]):
# Extracting Confidence
confidence = detections[0, 0, i, 2]
# Filtering Confidence with minimum 30%
if confidence > 0.3 :
# Extracting Index of Class Label & Computing Dimensions of the Bounding Box
idx = int(detections[0, 0, i, 1])
box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
(startX, startY, endX, endY) = box.astype("int")
# Displaying the Probability
label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
print("{}".format(label))
cv2.rectangle(img, (startX, startY), (endX, endY), COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(img, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# ### Step 6: Displaying Output Image
cv2.imshow("Output", img)
cv2.waitKey(0)
# ### Step 7: Function to Detect & Output from Input
def detect_object(imgloc):
img = cv2.imread("imgs/" + imgloc)
img = cv2.resize(img, (500, 500))
cv2.imshow("The Input: ", img)
cv2.waitKey(0)
(h, w) = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 0.007843,(200, 200), 127.5)
net.setInput(blob)
detections = net.forward()
for i in np.arange(0, detections.shape[2]):
confidence = detections[0, 0, i, 2]
if confidence > 0.3:
idx = int(detections[0, 0, i, 1])
box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
(startX, startY, endX, endY) = box.astype("int")
label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
print("{}".format(label))
cv2.rectangle(img, (startX, startY), (endX, endY), COLORS[idx], 2)
y = startY - 15 if startY - 15 > 15 else startY + 15
cv2.putText(img, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
cv2.imshow("The Output:", img)
cv2.waitKey(0)
# ### Step 8: Testing the Function
print('cars.jpg:')
detect_object('cars.jpg')
print('dhs.jpg:')
detect_object('dhs.jpg')
print('dp.jpg:')
detect_object('dp.jpg')
print('horse.jpg:')
detect_object('horse.jpg')
print('multi1.jpg:')
detect_object('multi1.jpg')
print('multi2.jpg:')
detect_object('multi2.jpg')
| IoT and Computer Vision/Task 1/Object Detection Using MobileNet & SSD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="TA21Jo5d9SVq"
#
#
# 
#
# [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/ER_RXNORM.ipynb)
#
#
#
# + [markdown] id="CzIdjHkAW8TB"
# # **RxNorm coding**
# + [markdown] id="6uDmeHEFW7_h"
# To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `spark_nlp_for_healthcare.json` to the folder that opens.
# + [markdown] id="wIeCOiJNW-88"
# ## 1. Colab Setup
# + [markdown] id="HMIDv74CYN0d"
# Import license keys
# + colab={"base_uri": "https://localhost:8080/", "height": 51} executionInfo={"elapsed": 1221, "status": "ok", "timestamp": 1601204964894, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="ttHPIV2JXbIM" outputId="e2e60380-7e71-4cb2-c5bd-63ee6c2af10a"
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
# + [markdown] id="rQtc1CHaYQjU"
# Install dependencies
# + colab={"base_uri": "https://localhost:8080/", "height": 326} executionInfo={"elapsed": 78670, "status": "ok", "timestamp": 1601205044346, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="CGJktFHdHL1n" outputId="6917eee3-1eda-4919-8a51-f4b0432646d2"
# Install Java
# ! apt-get update -qq
# ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
# ! java -version
# Install pyspark
# ! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
# ! pip install --ignore-installed spark-nlp==$sparknlp_version
# ! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
# + [markdown] id="Hj5FRDV4YSXN"
# Import dependencies into Python
# + executionInfo={"elapsed": 76566, "status": "ok", "timestamp": 1601205044348, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="qUWyj8c6JSPP"
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
# + [markdown] id="ed6Htm7qDQB3"
# Start the Spark session
# + executionInfo={"elapsed": 97253, "status": "ok", "timestamp": 1601205066361, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="eaSM8-xhDRa4"
spark = sparknlp_jsl.start(secret)
# + [markdown] id="9RgiqfX5XDqb"
# ## 2. Select the Entity Resolver model and construct the pipeline
# + [markdown] id="AVKr8C2SrkZQ"
# Select the models:
#
# **RxNorm Entity Resolver models:**
#
# 1. **chunkresolve_rxnorm_cd_clinical**
# 2. **chunkresolve_rxnorm_sbd_clinical**
# 3. **chunkresolve_rxnorm_scd_clinical**
#
#
#
#
#
#
# For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
# + executionInfo={"elapsed": 1068, "status": "ok", "timestamp": 1601205489347, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="cK9xxkkfrsLc"
## important: You can change NER models and whitelist entities according to your own requirements
# ner and entity resolver mapping dict
ner_er_dict = {'chunkresolve_rxnorm_scd_clinical': 'ner_posology',
'chunkresolve_rxnorm_cd_clinical': 'ner_posology',
'chunkresolve_rxnorm_sbd_clinical': 'ner_clinical'}
# entities to whitelist, so resolver works on only required entities
wl_er_dict = {'chunkresolve_rxnorm_scd_clinical': ['DRUG'],
'chunkresolve_rxnorm_cd_clinical': ['DRUG'],
'chunkresolve_rxnorm_sbd_clinical': ['TREATMENT']}
# Change this to the model you want to use and re-run the cells below.
model = 'chunkresolve_rxnorm_sbd_clinical'
# + [markdown] id="zweiG2ilZqoR"
# Create the pipeline
# + colab={"base_uri": "https://localhost:8080/", "height": 170} executionInfo={"elapsed": 128590, "status": "ok", "timestamp": 1601205620436, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="LLuDz_t40be4" outputId="a2e97216-244f-418d-b1e5-52c67c6599ec"
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
ner_model = NerDLModel().pretrained(ner_er_dict[model], 'en', 'clinical/models')\
.setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("ner_tags")
ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "ner_tags"])\
.setOutputCol("ner_chunk").setWhiteList(wl_er_dict[model])
chunk_embeddings = ChunkEmbeddings()\
.setInputCols("ner_chunk", "embeddings")\
.setOutputCol("chunk_embeddings")
entity_resolver = \
ChunkEntityResolverModel.pretrained(model,"en","clinical/models")\
.setInputCols("tokens","chunk_embeddings").setOutputCol("resolution")
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
embeddings,
ner_model,
ner_chunker,
chunk_embeddings,
entity_resolver])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = sparknlp.base.LightPipeline(pipeline_model)
# + [markdown] id="2Y9GpdJhXIpD"
# ## 3. Create example inputs
# + executionInfo={"elapsed": 1113, "status": "ok", "timestamp": 1601205726523, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="vBOKkB2THdGI"
# Enter examples as strings in this array
input_list = [
"""The patient is a 40-year-old white male who presents with a chief complaint of "chest pain". The patient is diabetic and has a prior history of coronary artery disease. The patient presents today stating that his chest pain started yesterday evening and has been somewhat intermittent. He has been advised Aspirin 81 milligrams QDay. Humulin N. insulin 50 units in a.m. HCTZ 50 mg QDay. Nitroglycerin 1/150 sublingually PRN chest pain.""",
]
# + [markdown] id="1gmrjqHSGcJx"
# # 4. Run the pipeline
# + executionInfo={"elapsed": 10826, "status": "ok", "timestamp": 1601205738514, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="xdhgKutMHUoC"
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
# + [markdown] id="UIVShVLhI68M"
# # 5. Visualize
# + [markdown] id="472iBPpK-FvF"
# Full Pipeline
# + colab={"base_uri": "https://localhost:8080/", "height": 204} executionInfo={"elapsed": 18650, "status": "ok", "timestamp": 1601205748420, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="Qdh2BQaLI7tU" outputId="86ed0ea7-72f2-40e8-93fc-809a33609d54"
result.select(
F.explode(
F.arrays_zip('ner_chunk.result',
'ner_chunk.begin',
'ner_chunk.end',
'ner_chunk.metadata',
'resolution.metadata', 'resolution.result')
).alias('cols')
).select(
F.expr("cols['0']").alias('chunk'),
F.expr("cols['1']").alias('begin'),
F.expr("cols['2']").alias('end'),
F.expr("cols['3']['entity']").alias('entity'),
F.expr("cols['4']['resolved_text']").alias('RxNorm_description'),
F.expr("cols['5']").alias('RxNorm_code'),
).toPandas()
# + [markdown] id="1w6-BQ0MFL9Y"
# Light Pipeline
# + colab={"base_uri": "https://localhost:8080/", "height": 122} executionInfo={"elapsed": 15467, "status": "ok", "timestamp": 1601205748421, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "10508284328555930330"}, "user_tz": -300} id="LSukuO5eE1cZ" outputId="2d864f68-2216-4125-d558-7afd0419efea"
light_result[0]['resolution']
| tutorials/streamlit_notebooks/healthcare/ER_RXNORM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from rsna_retro.imports import *
from rsna_retro.metadata import *
from rsna_retro.preprocess import *
from rsna_retro.train import *
from rsna_retro.self_supervised import *
torch.cuda.set_device(2)
dls = get_ss_data(256, splits=Meta.splits_stg1)
size=128
aug = get_aug_pipe(size, min_scale=0.7, mult=1., max_rotate=20, p_lighting=.9, p_affine=.9)
aug2 = get_aug_pipe(size, min_scale=0.3, mult=2, p_lighting=1.0, p_affine=1.0)
cb = SSCallback(BatchContrastiveLoss(10.0), size=size, aug_targ=aug, aug_pos=aug2, combined_loss=True)
learn = cnn_learner(dls, xresnet18, loss_func=get_loss(), lr=3e-3,
opt_func=Adam, metrics=[], cbs=cb)
name = 'self_supervised_train_3'
learn.load(f'runs/self_supervised_train_1-1')
do_fit(learn, 6, 4e-2)
learn.save(f'runs/{name}-1')
# learn.load(f'runs/{name}-1')
size = 256
learn.dls = get_ss_data(192, splits=Meta.splits_stg1)
cb.update_size(size)
do_fit(learn, 6, 4e-3)
learn.save(f'runs/{name}-2')
# learn.load(f'runs/{name}-2')
learn.dls = get_ss_data(96, splits=Meta.splits_stg1, img_dir=path_jpg)
cb.update_size(384)
do_fit(learn, 2, 4e-4)
learn.save(f'runs/{name}-3')
| 08_train_self_supervised_train_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/drc10723/udacity_secure_private_AI/blob/master/Differential_Private_Deep_Learing_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="HsnI57x9qXlC" colab_type="text"
# # Training a Deep Learning Model with Differential Privacy
#
# If you have a private dataset and want to train a deep learning model for predictions, in most cases you can't, because true labels are unavailable. For example, a hospital may have a lot of unlabelled data for a particular disease (like lung cancer). Even if other hospitals have labelled data, they can't share it. And in most cases, a single hospital does not have enough labelled examples of its own to train a good model.
#
#
# ## Problem Assumption:-
#
# * N hospitals (teachers) have some labelled data with the same kinds of labels
# * One hospital (student) has some unlabelled data
#
# ## Problem Solution :-
#
#
# * Ask each of the N hospitals (Teachers) to train a model on their own datasets
# * Use the N teacher models to predict on your local dataset, generating N labels for each datapoint
# * Aggregate the N labels using a differentially private (DP) query
# * Train a model with the aggregated labels on your own dataset
#
#
#
# Let's start with the imports.
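# The aggregation step above is the heart of the approach (this is the PATE setup). A rough sketch of a noisy-max query over teacher votes, with illustrative class counts and epsilon:

```python
import numpy as np

def dp_aggregate(teacher_preds, num_classes=10, epsilon=0.1, rng=None):
    """Noisy-max: count teacher votes per class, add Laplace noise with
    scale 1/epsilon, and release only the argmax label."""
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

# 100 hypothetical teachers, 90 of which vote for class 7
votes = np.array([7] * 90 + [3] * 10)
student_label = dp_aggregate(votes)
```

# A smaller epsilon means more noise and stronger privacy, but a greater chance that the released label disagrees with the true majority vote.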
# + id="QjlTH2x3ehtC" colab_type="code" colab={}
# !pip install -q syft
# + id="oe_bbNgmdn43" colab_type="code" colab={}
import torch
import torchvision
import numpy as np
from torch import nn, optim
from torchvision import datasets, transforms
# + id="Ns1dKTcSoRZ4" colab_type="code" outputId="8a0cd05b-38e6-468f-d3e2-1940a02c7d44" colab={"base_uri": "https://localhost:8080/", "height": 34}
# use cuda if available
DEVICE = torch.device("cuda" if torch.cuda.is_available()
else "cpu")
print(f"Using {DEVICE} backend")
# number of teacher models.
# our student model accuracy will depend on this parameter
num_teachers = 100 #@param {type:"integer"}
# + [markdown] id="_XqEfI2DnNnu" colab_type="text"
# ## Teacher Models Training
#
# We will use MNIST data as dummy data to train Teachers and Student Models.
#
#
#
# * MNIST training data will be divided into N subsets (one per teacher), and each subset will train one teacher model.
# * MNIST Test Data will be used as private or student data and will be assumed unlabelled.
#
#
# + id="lqcK-ZRIeuB5" colab_type="code" colab={}
# convert to tensor and normalize
train_transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([.5],[.5])])
# load training data
mnsit_dataset = datasets.MNIST('./mnsit', train=True, transform=train_transform, download=True, )
# + id="7tL_ggcdfmqo" colab_type="code" colab={}
# divide mnist train data to num_teachers partitions
total_size = len(mnsit_dataset)
# length of each teacher dataset
lengths = [int(total_size/num_teachers)]*num_teachers
# list of all teacher dataset
teacher_datasets = torch.utils.data.random_split(mnsit_dataset, lengths)
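Here 60000 divides evenly by 100, so each length is exact. If `total_size` were not divisible by `num_teachers`, `random_split` would fail because the integer lengths must sum to the dataset size; a common fix is to fold the remainder into the last partition (a sketch with a hypothetical teacher count):

```python
total_size = 60000      # e.g. len(mnsit_dataset)
num_teachers = 7        # a count that does not divide 60000 evenly

# equal partitions, with the remainder absorbed by the last one
lengths = [total_size // num_teachers] * num_teachers
lengths[-1] += total_size - sum(lengths)
```

The resulting `lengths` list can then be passed to `torch.utils.data.random_split` as above.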
# + id="hgqoe8Adf64i" colab_type="code" colab={}
# We will create a basic model, which will be used for both teacher and student training
# It is not necessary for all teachers, or even the student, to share the same model structure
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
# sequential layer : input size (batch_size, 28*28)
self.layer = nn.Sequential(nn.Linear(28*28, 256),
# out size (batch_size, 256)
nn.BatchNorm1d(256),
# out size (batch_size, 256)
nn.ReLU(),
# out size (batch_size, 256)
nn.Dropout(0.5),
# out size (batch_size, 256)
nn.Linear(256, 64),
# out size (batch_size, 64)
nn.BatchNorm1d(64),
# out size (batch_size, 64)
nn.ReLU(),
# out size (batch_size, 64)
nn.Dropout(0.5),
# out size (batch_size, 64)
nn.Linear(64, 10),
# out size (batch_size, 10)
                          # we will use logsoftmax instead of softmax
                          # softmax has exponential overflow issues
nn.LogSoftmax(dim=1)
# out size (batch_size, 10)
)
def forward(self,x):
# x size : (batch_size, 1, 28, 28)
x = x.view(x.shape[0], -1)
# x size : (batch_size, 784)
x = self.layer(x)
# x size : (batch_size, 10)
return x
# + id="9vDfitR6gAxs" colab_type="code" colab={}
def train_model(dataset, checkpoint_file, num_epochs=10, do_validation=False):
"""
Train a model for given dataset for given number of epochs and
save last epoch model checkpoint
Parameters:
dataset (torch.dataset): training data
checkpoint_file (str): filename for saving model
num_epochs (int): number of training epoch
do_validation (bool): perform validation by dividing dataset in 90:10 ratio
Returns: None
"""
    # if validating, divide the dataset into train and test sets in a 90:10 ratio
if do_validation:
dataset_size = len(dataset)
train_set, test_set = torch.utils.data.random_split(dataset, [int(0.9*dataset_size), int(0.1*dataset_size)])
# create train and test dataloader
trainloader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(test_set, batch_size= 32, shuffle=True)
else:
# create train dataloader using full dataset
trainloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
# create model and send to gpu
model = Network().to(DEVICE)
    # we used logsoftmax, so we use NLLLoss here
criterion = nn.NLLLoss()
# adam optimizer for training
optimizer = optim.Adam(model.parameters(), lr=0.005)
# train for num_epochs
for epoch in range(num_epochs):
# training accuracy and loss for logging
train_accuracy = 0
train_loss = 0
# training dataloader
for images, labels in trainloader:
            # zero accumulated grads
optimizer.zero_grad()
# send images, labels to gpu
images, labels = images.to(DEVICE), labels.to(DEVICE)
# run forward propagation
output = model.forward(images)
# calculate loss
loss = criterion(output, labels)
train_loss += loss.item()
# calculate accuracy
top_out, top_class = output.topk(1, dim=1)
success = (top_class==labels.view(*top_class.shape))
train_accuracy += success.sum().item()
# do backward propagation
loss.backward()
optimizer.step()
if do_validation:
# set model to evaluation
model.eval()
test_accuracy = 0
test_loss = 0
# do forward pass and calculate loss and accuracy
with torch.no_grad():
for images, labels in testloader:
images, labels = images.to(DEVICE), labels.to(DEVICE)
output = model.forward(images)
loss = criterion(output, labels)
test_loss += loss.item()
top_out, top_class = output.topk(1, dim=1)
success = (top_class==labels.view(*top_class.shape))
test_accuracy += success.sum().item()
# log train and test metrics
print("Epoch: {}".format(epoch+1),
"Train Loss: {:.3f}".format(train_loss/len(trainloader)),
"Train Accuracy: {:.3f}".format(train_accuracy/len(train_set)),
"Test Loss: {:.3f}".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(test_accuracy/len(test_set))
)
# set model to train
model.train()
else:
# log only training metrics if no validation
print("Epoch: {}".format(epoch+1),
"Train Loss: {:.3f}".format(train_loss/len(trainloader)),
"Train Accuracy: {:.3f}".format(train_accuracy/len(dataset))
)
# save trained teacher model
torch.save(model.state_dict(), checkpoint_file)
# + id="LLEIq8vZgZd2" colab_type="code" colab={}
# train all teachers models on MNIST partition datasets
for teacher in range(num_teachers):
print("############################### Teacher {} Model Training #############################".format(teacher+1))
train_model(teacher_datasets[teacher], f"checkpoint_teacher_{teacher+1}.pth")
# + [markdown] id="wv5oN__NobyE" colab_type="text"
# ## Teacher Models Predictions
#
# Now that we have trained the N teacher models, we can share those trained models for student training.
#
#
# We assume the MNIST test dataset is the student dataset
# + id="8-HgSpTFggYZ" colab_type="code" colab={}
# student dataset transforms
test_transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([.5],[.5])])
# load private student dataset
private_dataset = datasets.MNIST('./mnsit', train=False, transform=test_transform, download=True)
# the mnist test dataset has 10000 examples
private_data_size = len(private_dataset)
# create dataloader for private train dataset
private_dataloader = torch.utils.data.DataLoader(private_dataset, batch_size=32)
# + id="BFFD5ZKPr86_" colab_type="code" colab={}
def predict_model(model_checkpoint, dataloader):
"""
Load a trained model and make predictions
Parameters:
      model_checkpoint (str): filename of the trained model checkpoint
dataloader (DataLoader): dataloader instance
Returns:
preds_list (torch.Tensor): predictions for whole dataset
"""
# create model
model = Network()
# load model from checkpoint
state_dict = torch.load(model_checkpoint)
model.load_state_dict(state_dict)
# send model to gpu
model = model.to(DEVICE)
# list for batch predictions
preds_list = []
# set model to eval mode
model.eval()
# no gradients calculation needed
with torch.no_grad():
# iterate over dataset
for images, labels in dataloader:
images = images.to(DEVICE)
# calculate predictions ( log of predictions)
preds = model.forward(images)
# calculate top_class
top_preds, top_classes = preds.topk(k=1, dim=1)
# append batch top_classes tensor
preds_list.append(top_classes.view(-1))
# concat all batch predictions
preds_list = torch.cat(preds_list).cpu()
# return predictions
return preds_list
# + id="ehvTKsEPttUc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7755d0a1-9a4c-4f87-c9f4-aaffc35baf64"
# list of all teacher model predictions
teacher_preds = []
# predict for each teacher model
for teacher in range(num_teachers):
teacher_preds.append(predict_model(f'checkpoint_teacher_{teacher+1}.pth', private_dataloader))
# stack all teacher predictions
teacher_preds = torch.stack(teacher_preds)
print(teacher_preds.shape)
# + [markdown] id="lDTfiE5N1cmW" colab_type="text"
# ## Aggregating Teacher Predictions
#
# We have N predictions for each datapoint in our private dataset. We can aggregate the N predictions using a max query on the bin counts of the different labels.
#
# Can we train a model on those aggregated labels directly? Yes, but to increase differential privacy and stay within a privacy budget, we will convert our aggregate query into a DP query by adding a controlled amount of Laplacian noise.
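The noisy-max mechanism used below can be sketched in plain NumPy, independent of the teacher models (the vote counts here are hypothetical):

```python
import numpy as np

def noisy_argmax(label_counts, epsilon):
    # Laplacian scale = sensitivity / epsilon; a counting query has sensitivity 1
    beta = 1.0 / epsilon
    noisy = label_counts + np.random.laplace(0.0, beta, size=label_counts.shape)
    return int(np.argmax(noisy))

# 100 hypothetical teacher votes over 10 classes: 60 vote "3", 40 vote "7"
counts = np.bincount([3] * 60 + [7] * 40, minlength=10)
label = noisy_argmax(counts, epsilon=0.1)
```

With a strong consensus among teachers the noisy argmax usually still returns the majority label; with a weak consensus, small epsilon values flip it more often.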
# + id="aFpP_lPd_tnq" colab_type="code" colab={}
# epsilon budget for one aggregate dp query
epsilon = 0.1 #@param {type:"number"}
# number of labels
num_classes = 10
# + [markdown] id="RIWnNSbp5Q76" colab_type="text"
# We have assumed the student data is unlabelled. For analysis purposes we will use the real labels.
# + id="a4gsYby7rirD" colab_type="code" colab={}
# real targets; these would not be available for a private dataset in a real scenario
real_targets = private_dataset.targets
# + [markdown] id="DHCbddtDss5A" colab_type="text"
# ### Teacher Argmax Aggregation
#
# Aggregate the N teacher predictions using a max query on the bin counts of the different labels
# + id="w6ByEkfPoyQw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="809d917a-89c2-4a75-fcdc-4da9b4f09ca5"
# teacher aggregation result
teachers_argmax = list()
for image_i in range(private_data_size):
# calculate bin count
label_counts = torch.bincount(teacher_preds[:, image_i], minlength=num_classes)
# take maximum bin count label
argmax_label = torch.argmax(label_counts)
teachers_argmax.append(argmax_label)
# convert the list of labels to a tensor
teachers_argmax = torch.tensor(teachers_argmax)
# correct predictions
argmax_correct = torch.sum(real_targets == teachers_argmax)
print("Teachers argmax labels accuracy", argmax_correct.item()/private_data_size)
# + [markdown] id="aV_PkE7Ys0KX" colab_type="text"
# ### Teacher Noisy Aggregation (DP query)
#
# We use Laplacian noise with scale beta equal to **(sensitivity / epsilon)**.
#
# The sensitivity of the argmax (counting) query is one.
# + id="1_S-1lZ5o93S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="45850183-9743-46a8-df9c-400d5c9d4b4b"
# dp query results
noisy_labels = list()
for image_i in range(private_data_size):
    # calculate bin count (as float, so the Laplacian noise is not truncated to an integer)
    label_counts = torch.bincount(teacher_preds[:, image_i], minlength=num_classes).float()
    # calculate beta (scale) for the Laplacian noise: sensitivity / epsilon
    beta = 1 / epsilon
    # add noise to each label count
    for i in range(len(label_counts)):
        label_counts[i] += np.random.laplace(0, beta, 1)[0]
# calculate dp label
noisy_label = torch.argmax(label_counts)
noisy_labels.append(noisy_label)
noisy_labels = torch.tensor(noisy_labels)
# accuracy for noisy or dp query results
noisy_accuracy = torch.sum(real_targets == noisy_labels)
print("Noisy label accuracy", noisy_accuracy.item()/private_data_size)
# + [markdown] id="pzvbjQ__m5Ia" colab_type="text"
# ## PATE Analysis
#
# **What epsilon budget have we used?** We will perform PATE analysis to find out.
# + id="7K22A_cbm73A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="ce833ae5-f705-416c-ed2e-6d982b7c857b"
from syft.frameworks.torch.differential_privacy import pate
# + id="04IrN0KFAYRA" colab_type="code" colab={}
# memory usage gets pretty high with all predictions in the PATE analysis,
# so we use a subset of the predictions (a subset of the MNIST test dataset);
# this will also help us understand the importance of the private data size
num_student_train = 2000 #@param {type:"integer"}
teacher_preds1 = teacher_preds[:, :num_student_train].to(DEVICE)
noisy_labels1 = noisy_labels[:num_student_train].to(DEVICE)
teachers_argmax1 = teachers_argmax[:num_student_train].to(DEVICE)
real_targets1 = real_targets[:num_student_train].to(DEVICE)
# + [markdown] id="amjcM56asCQC" colab_type="text"
# ### Noisy Labels PATE Analysis
# + id="MS2vWqN6nG9f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="7b89eea7-5a55-4c70-9130-5c9a34304196"
# Data-dependent and data-independent epsilon for noisy labels
data_dep_eps, data_ind_eps = pate.perform_analysis_torch(preds=teacher_preds1, indices=noisy_labels1,
noise_eps=epsilon, delta=1e-5, moments=10)
print(f"Data-dependent epsilon {data_dep_eps.item()}, data-independent epsilon {data_ind_eps.item()}")
# + [markdown] id="2NM0rcSKsRSC" colab_type="text"
# ### Teacher Argmax PATE Analysis
# + id="4ZlOVKzwTlys" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="c1282206-f420-412a-8822-88ce2896ccf4"
# Data-dependent and data-independent epsilon for argmax labels
data_dep_eps, data_ind_eps = pate.perform_analysis_torch(preds=teacher_preds1, indices=teachers_argmax1,
noise_eps=epsilon, delta=1e-5, moments=10)
print(f"Data-dependent epsilon {data_dep_eps.item()}, data-independent epsilon {data_ind_eps.item()}")
# + [markdown] id="NagLNxzssYLW" colab_type="text"
# ### Real Labels PATE Analysis
# + id="vPCPPiIgngxS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="05333d46-7fa6-440c-8c85-5d0270635603"
# Data-dependent and data-independent epsilon for real labels
data_dep_eps, data_ind_eps = pate.perform_analysis_torch(preds=teacher_preds1, indices=real_targets1,
noise_eps=epsilon, delta=1e-5, moments=10)
print(f"Data-dependent epsilon {data_dep_eps.item()}, data-independent epsilon {data_ind_eps.item()}")
# + [markdown] id="_oDPSMdNs9pM" colab_type="text"
# ## Student Model Training
#
# Differential privacy guarantees that no amount of postprocessing can increase the epsilon value for a given dataset, which means the epsilon value after training a deep learning model will be less than or equal to the PATE analysis values.
# + id="IhPzW671nyL_" colab_type="code" colab={}
# save real labels
private_real_labels = private_dataset.targets
# replace real labels with noisy labels in private dataset
private_dataset.targets = noisy_labels
# create training and testing subset
train_private_set = torch.utils.data.Subset(private_dataset, range(0, num_student_train))
test_private_set = torch.utils.data.Subset(private_dataset, range(num_student_train, len(private_dataset)))
# + id="uH55jgzNwXtQ" colab_type="code" colab={}
# train the student model with noisy labels (train_model saves the checkpoint and returns None)
train_model(train_private_set, 'checkpoint_student.pth', num_epochs=20)
# + id="10zc5yEly8DL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3fc99772-d0a7-4b3b-dd5d-7d46786e5e65"
# create test loader
private_testloader = torch.utils.data.DataLoader(test_private_set, batch_size=32)
# get test predictions
test_preds = predict_model(f'checkpoint_student.pth', private_testloader)
# count correct test predictions
correct = torch.sum(private_real_labels[num_student_train:] == test_preds)
# accuracy
print(f"student model test accuracy {correct.item()/(len(private_dataset)-num_student_train)}")
# + [markdown] id="Egyl4l_xXqpW" colab_type="text"
# ## Conclusion
#
# As you can see, we are able to train a model with quite good accuracy.
#
# Try different values of epsilon and different numbers of teachers; you should be able to observe the following:
#
# 1. The more teachers, the lower the data-dependent epsilon and the higher the accuracy
# 2. By adding noise, we are able to greatly reduce the privacy budget (see the difference between the data-dependent and data-independent epsilon)
# 3. The lower the value of epsilon, the more differential privacy (low data-dependent and data-independent epsilon)
# 4. Given enough examples, a deep learning model will be able to remove the noise added during the DP query without reducing differential privacy.
# 5. The more unlabelled student data, the higher the accuracy
#
#
# + id="NQAx2WJiYv9t" colab_type="code" colab={}
| Dharmendra_Choudhary/udacity_secure_private_AI/Differential_Private_Deep_Learing_.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id='header'></a>
# # Demo for data manipulation
#
# In this notebook we present *data manipulation* functionalities of the `preprocess` module.
#
# - [**Section 1**](#centering_and_scaling): Centering and scaling data sets
# - [**Section 2**](#outlier_detection): Multivariate outlier detection from data sets
# - [**Section 3**](#kernel_density): Kernel density weighting of data sets
#
# ***
# **Should plots be saved?**
save_plots = False
# ***
# +
from PCAfold import preprocess
from PCAfold import PreProcessing
from PCAfold import KernelDensity
from PCAfold import PCA
from PCAfold import reduction
import numpy as np
save_filename = None
# -
# ***
# <a id='centering_and_scaling'></a>
# ## Centering, scaling, and constant variable removal
#
# [**Go up**](#header)
# We begin by generating a dummy data set:
X = np.random.rand(100,20)
# Centering and scaling can be performed in the following way:
(X_cs, X_center, X_scale) = preprocess.center_scale(X, 'range', nocenter=False)
# Uncentering and unscaling can be performed in the following way to get back the original data set:
X = preprocess.invert_center_scale(X_cs, X_center, X_scale)
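Under the hood, 'range' scaling amounts to subtracting the column means and dividing by the column ranges (max minus min); a plain-NumPy sketch of the round trip, assuming that definition of 'range' scaling:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 20))

center = X.mean(axis=0)                 # column centers (means)
scale = X.max(axis=0) - X.min(axis=0)   # column ranges
X_cs = (X - center) / scale             # center and scale
X_back = X_cs * scale + center          # invert the transformation
```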
# If constant variables are present in the data set, they can be removed using `preprocess.remove_constant_vars` function which can be a useful pre-processing before PCA is applied on a data set. Below we inject an artificial constant column to the dummy data set:
X[:,5] = np.ones((100,))
# it can be removed by:
(X_removed, idx_removed, idx_retained) = preprocess.remove_constant_vars(X)
# Indices of any removed columns are stored in the `idx_removed` vector:
idx_removed
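What such a function has to do can be sketched directly in NumPy: a constant column has zero peak-to-peak range (this is an illustration, not PCAfold's implementation):

```python
import numpy as np

X = np.random.rand(100, 20)
X[:, 5] = 1.0                              # inject a constant column

col_range = np.ptp(X, axis=0)              # max - min per column
idx_removed = np.where(col_range == 0)[0]  # constant columns
idx_retained = np.where(col_range > 0)[0]
X_removed = X[:, idx_retained]             # data set without constant columns
```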
# In addition to that, an object of the `PreProcessing` class can be created and used to store the combination of the above pre-processing:
preprocessed = preprocess.PreProcessing(X, 'range', nocenter=False)
# Centered and scaled data set can then be accessed as class attribute:
preprocessed.X_cs
# as well as the corresponding centers and scales:
preprocessed.X_center
preprocessed.X_scale
# ***
# <a id='outlier_detection'></a>
# ## Multivariate outlier detection
#
# [**Go up**](#header)
# Generate a two-dimensional data set with artificial outliers:
# +
N = 2000
mean = [3, 3]
covariance = [[1, 0.2], [0.2, 1]]
x_data, y_data = np.random.multivariate_normal(mean, covariance, N).T
N_outliers = 20
mean_outliers = [7, 10]
covariance_outliers = [[0.2, .1], [.1, 0.2]]
x_outliers, y_outliers = np.random.multivariate_normal(mean_outliers, covariance_outliers, N_outliers).T
idx = np.zeros((N+N_outliers,))
x = np.vstack((x_data[:,np.newaxis], x_outliers[:,np.newaxis]))
y = np.vstack((y_data[:,np.newaxis], y_outliers[:,np.newaxis]))
X = np.hstack((x, y))
(n_observations, n_variables) = np.shape(X)
# -
# Visualize the data set, including the artificial outliers, using the `reduction.plot_2d_manifold` function:
if save_plots: save_filename = '../images/data-manipulation-initial-data'
plt = reduction.plot_2d_manifold(x, y, color_variable='k', x_label='$x$', y_label='$y$', colorbar_label=None, title=None, save_filename=save_filename)
# ### Find multivariate outliers using `MULTIVARIATE TRIMMING` option
(idx_outliers_removed, idx_outliers) = preprocess.outlier_detection(X, scaling='auto', method='MULTIVARIATE TRIMMING', trimming_threshold=0.6, verbose=True)
# We are going to visualize how the algorithm classified the data into outliers/not-outliers. We begin by generating the new cluster classification vector, where the cluster $k_0$ will be non-outliers and cluster $k_1$ will be outliers:
idx_new = np.zeros((n_observations,))
for i in range(0, n_observations):
if i in idx_outliers:
idx_new[i] = 1
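The classification loop above can also be vectorized with NumPy, which scales better for large data sets (equivalent logic, sketched on a toy example):

```python
import numpy as np

n_observations = 15
idx_outliers = np.array([3, 7, 11])

# 1 where the observation index is an outlier, 0 otherwise
idx_new = np.isin(np.arange(n_observations), idx_outliers).astype(float)
```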
# We can plot the partitioning:
if save_plots: save_filename = '../images/data-manipulation-outliers-multivariate-trimming-60'
plt = preprocess.plot_2d_clustering(x, y, idx_new, x_label='$x$', y_label='$y$', color_map='coolwarm', first_cluster_index_zero=True, grid_on=True, figure_size=(7, 7), title=None, save_filename=save_filename)
# If the parameter `trimming_threshold` is decreased, more points within the data cloud become classified as outliers:
(idx_outliers_removed, idx_outliers) = preprocess.outlier_detection(X, scaling='auto', method='MULTIVARIATE TRIMMING', trimming_threshold=0.3, verbose=True)
idx_new = np.zeros((n_observations,))
for i in range(0, n_observations):
if i in idx_outliers:
idx_new[i] = 1
if save_plots: save_filename = '../images/data-manipulation-outliers-multivariate-trimming-30'
plt = preprocess.plot_2d_clustering(x, y, idx_new, x_label='$x$', y_label='$y$', color_map='coolwarm', first_cluster_index_zero=True, grid_on=True, figure_size=(7, 7), title=None, save_filename=save_filename)
# ### Find multivariate outliers using `PC CLASSIFIER` option
(idx_outliers_removed, idx_outliers) = preprocess.outlier_detection(X, scaling='auto', method='PC CLASSIFIER', quantile_threshold=0.9899, verbose=True)
idx_new = np.zeros((n_observations,))
for i in range(0, n_observations):
if i in idx_outliers:
idx_new[i] = 1
if save_plots: save_filename = '../images/data-manipulation-outliers-pc-classifier'
plt = preprocess.plot_2d_clustering(x, y, idx_new, x_label='$x$', y_label='$y$', color_map='coolwarm', first_cluster_index_zero=True, grid_on=True, figure_size=(7, 7), title=None, save_filename=save_filename)
# ***
# <a id='kernel_density'></a>
# ## Kernel density weighting
#
# [**Go up**](#header)
# In this tutorial we reproduce results generated on a synthetic data set from the following paper:
#
# > [<NAME>., <NAME>., & <NAME>. (2012). Kernel density weighted principal component analysis of combustion processes. Combustion and flame, 159(9), 2844-2855.](https://www.sciencedirect.com/science/article/abs/pii/S001021801200123X)
# We begin by generating a synthetic data set:
# +
n_observations = 2021
x1 = np.zeros((n_observations,1))
x2 = np.zeros((n_observations,1))
for i in range(0,n_observations):
R = np.random.rand()
if i <= 999:
x1[i] = -1 + 20*R
x2[i] = 5*x1[i] + 100*R
if i >= 1000 and i <= 1020:
x1[i] = 420 + 8*(i+1 - 1001)
x2[i] = 5000/200 * (x1[i] - 400) + 500*R
if i >= 1021 and i <= 2020:
x1[i] = 1000 + 20*R
x2[i] = 5*x1[i] + 100*R
X = np.hstack((x1, x2))
# -
if save_plots: save_filename = '../images/kernel-density-original-data'
plt = reduction.plot_2d_manifold(x1, x2, x_label='$x_1$', y_label='$x_2$', save_filename=save_filename)
# We will use the `reduction.plot_parity` function to create parity plots of the reconstructed data sets.
# ### Reconstructing the data set without weighting
#
# We will first demonstrate how the data set is reconstructed using only the first Principal Component obtained on the unweighted data set.
pca = PCA(X, scaling='auto', n_components=1)
PCs = pca.transform(X)
X_rec = pca.reconstruct(PCs)
if save_plots: save_filename = '../images/kernel-density-original-x1'
plt = reduction.plot_parity(X[:,0], X_rec[:,0], x_label='Observed $x_1$', y_label='Reconstructed $x_1$', save_filename=save_filename)
if save_plots: save_filename = '../images/kernel-density-original-x2'
plt = reduction.plot_parity(X[:,1], X_rec[:,1], x_label='Observed $x_2$', y_label='Reconstructed $x_2$', save_filename=save_filename)
# ### Single-variable case
#
# We first compute weights using a single variable as the conditioning variable:
# %time kernd_single = KernelDensity(pca.X_cs, pca.X_cs[:,0], verbose=True)
# Obtain data set weighted by the computed weights:
X_weighted_single = kernd_single.X_weighted
# Weights vector $\mathbf{W_c}$ can also be obtained as the class attribute:
single_weights = kernd_single.weights
# Perform PCA on the weighted data set:
pca_single = PCA(X_weighted_single, 'none', n_components=1, nocenter=True)
PCs_single = pca_single.transform(pca.X_cs)
X_rec_single = pca_single.reconstruct(PCs_single)
X_rec_single = (X_rec_single * pca.X_scale) + pca.X_center
if save_plots: save_filename = '../images/kernel-density-single-x1'
plt = reduction.plot_parity(X[:,0], X_rec_single[:,0], x_label='Observed $x_1$', y_label='Reconstructed $x_1$', save_filename=save_filename)
if save_plots: save_filename = '../images/kernel-density-single-x2'
plt = reduction.plot_parity(X[:,1], X_rec_single[:,1], x_label='Observed $x_2$', y_label='Reconstructed $x_2$', save_filename=save_filename)
# ### Multi-variable case
# %time kernd_multi = KernelDensity(pca.X_cs, pca.X_cs, verbose=True)
# Obtain data set weighted by the computed weights:
X_weighted_multi = kernd_multi.X_weighted
# Weights vector $\mathbf{W_c}$ can also be obtained as the class attribute:
multi_weights = kernd_multi.weights
# Perform PCA on the weighted data set:
pca_multi = PCA(X_weighted_multi, 'none', n_components=1)
PCs_multi = pca_multi.transform(pca.X_cs)
X_rec_multi = pca_multi.reconstruct(PCs_multi)
X_rec_multi = (X_rec_multi * pca.X_scale) + pca.X_center
if save_plots: save_filename = '../images/kernel-density-multi-x1'
plt = reduction.plot_parity(X[:,0], X_rec_multi[:,0], x_label='Observed $x_1$', y_label='Reconstructed $x_1$', save_filename=save_filename)
if save_plots: save_filename = '../images/kernel-density-multi-x2'
plt = reduction.plot_parity(X[:,1], X_rec_multi[:,1], x_label='Observed $x_2$', y_label='Reconstructed $x_2$', save_filename=save_filename)
| docs/tutorials/demo-data-manipulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp core
# -
# # XGBoost
#
# > Task descriptions for the XGBoost test project.
#hide
from nbdev.showdoc import *
#export
import xgboost_ray as xgb  # assumption: `xgb` refers to the xgboost_ray wrapper (RayDMatrix, RayParams, train)
SIMPLE = True  # assumption: toggles the small sklearn dataset vs. the full HIGGS CSV
def load_dataframe() -> "ray.ObjectRef":
"""
#### build random dataframe task
"""
print("Loading CSV.")
if SIMPLE:
print("Loading simple")
from sklearn import datasets
data = datasets.load_breast_cancer(return_X_y=True)
else:
import pandas as pd
# import modin.pandas as mpd -- currently does not work.
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/" \
"00280/HIGGS.csv.gz"
colnames = ["label"] + ["feature-%02d" % i for i in range(1, 29)]
data = pd.read_csv(url, compression='gzip', names=colnames)
# data = pd.read_csv("/home/astro/HIGGS.csv.gz", names=colnames)
print("loaded higgs")
print("Loaded CSV.")
return data
#export
def create_data(data):
print("RUNNING SOME CODE!")
logfile = open("/tmp/ray/session_latest/custom.log", "w")
def write(msg):
logfile.write(f"{msg}\n")
logfile.flush()
write(f"Creating data matrix: {data, SIMPLE}")
if SIMPLE:
from sklearn.model_selection import train_test_split
write("Splitting data")
data, labels = data
train_x, test_x, train_y, test_y = train_test_split(
data, labels, test_size=0.25)
train_set = xgb.RayDMatrix(train_x, train_y)
test_set = xgb.RayDMatrix(test_x, test_y)
else:
df_train = data[(data['feature-01'] < 0.4)]
colnames = ["label"] + ["feature-%02d" % i for i in range(1, 29)]
train_set = xgb.RayDMatrix(df_train, label="label", columns=colnames)
df_validation = data[(data['feature-01'] >= 0.4)& (data['feature-01'] < 0.8)]
test_set = xgb.RayDMatrix(df_validation, label="label")
write("finished data matrix")
return train_set, test_set
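In isolation, the threshold split on `feature-01` used in the non-simple branch partitions rows like this (a small illustrative frame, not the HIGGS data):

```python
import pandas as pd

df = pd.DataFrame({"label": [0, 1, 0, 1],
                   "feature-01": [0.1, 0.5, 0.75, 0.9]})

# training rows: feature-01 below 0.4
df_train = df[df["feature-01"] < 0.4]
# validation rows: feature-01 in [0.4, 0.8); rows >= 0.8 are held out entirely
df_validation = df[(df["feature-01"] >= 0.4) & (df["feature-01"] < 0.8)]
```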
#export
def train_model(data) -> None:
logfile = open("/tmp/ray/session_latest/custom.log", "w")
def write(msg):
logfile.write(f"{msg}\n")
logfile.flush()
dtrain, dvalidation = data
evallist = [(dvalidation, 'eval')]
evals_result = {}
config = {
"tree_method": "hist",
"eval_metric": ["logloss", "error"],
}
write("Start training")
bst = xgb.train(
params=config,
dtrain=dtrain,
evals_result=evals_result,
ray_params=xgb.RayParams(max_actor_restarts=1, num_actors=2, cpus_per_actor=2),
num_boost_round=100,
evals=evallist)
write("finish training")
return bst
| 00_XGBoost.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CNN Notebook
# #### *Author: <NAME>*
# #### *University of Chicago, CAPP'20*
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
# ### Model Definition
cnn = Sequential()
# +
cnn.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(64, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(128, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
# -
cnn.add(Flatten())
cnn.add(Dense(128, activation="relu"))
cnn.add(Dense(1, activation="sigmoid"))
cnn.compile("adam", loss="binary_crossentropy", metrics=["accuracy"])
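For reference, the spatial size reaching `Flatten` can be traced by hand: each 3×3 'valid' convolution removes two pixels per dimension, and each 2×2 pooling halves (floor) it. A sketch of that arithmetic, assuming the Keras defaults used above:

```python
def conv_out(n, k=3):
    # 'valid' convolution: output side length shrinks by k - 1
    return n - k + 1

def pool_out(n, p=2):
    # non-overlapping p x p max pooling: floor division
    return n // p

n = 64
for _ in range(3):          # three Conv2D + MaxPooling2D stages
    n = pool_out(conv_out(n))
# Flatten therefore sees n * n * 128 features
```

Here `n` ends up at 6, so `Flatten` produces 6 × 6 × 128 = 4608 features feeding the `Dense(128)` layer.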
# ### Load Data
# +
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
zoom_range=0.2, horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale=1./255)
# +
training_set = train_datagen.flow_from_directory("dataset/training_set",
target_size=(64, 64),
batch_size=32,
class_mode="binary")
test_set = test_datagen.flow_from_directory("dataset/test_set",
target_size=(64, 64),
batch_size=32,
class_mode="binary")
# -
# ### Model Training
cnn.fit_generator(training_set,
steps_per_epoch=800, epochs=25,
validation_data=test_set,
validation_steps=200)
# There is some over-fitting. Try a less complex one with higher resolution.
# +
cnn = Sequential()
cnn.add(Conv2D(32, (3, 3), input_shape=(128, 128, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(32, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(32, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Flatten())
cnn.add(Dense(256, activation="relu"))
cnn.add(Dense(1, activation="sigmoid"))
cnn.compile("adam", loss="binary_crossentropy", metrics=["accuracy"])
# +
training_set = train_datagen.flow_from_directory("dataset/training_set",
target_size=(128, 128),
batch_size=32,
class_mode="binary")
test_set = test_datagen.flow_from_directory("dataset/test_set",
target_size=(128, 128),
batch_size=32,
class_mode="binary")
# -
cnn.fit_generator(training_set,
steps_per_epoch=250, epochs=25,
validation_data=test_set,
validation_steps=64)
# The accuracy rate on the test set is rather satisfying, and our CNN seems less likely to over-fit.
| Part 8 - Deep Learning/Section 40 - Convolutional Neural Networks (CNN)/CNN Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # GEOS 505 Assignment 1
#
# ## Due: Thursday, October 11, 2018
#
# ### Purpose
# In this assignment you will demonstrate your emerging research computing competencies by: (1) obtaining some dataset that is relevant to your research or research interests, (2) importing that dataset into Python, (3) creating a visualization of that dataset and describing (in text) what is being shown, and (4) saving and uploading the Jupyter Notebook you created to your Github account.
#
# ### Instructions
# Using this Jupyter notebook as a template, you should use a combination of code and Markdown cells to illustrate and describe the workflow you are using to import and visualize your dataset. The prompts below can serve as a roadmap for how this workflow should unfold.
# ### 1. Describe the dataset
# Using words in Markdown, describe that you are importing and state why it is important to your research or research interests. This should include some details about where the data came from and the format in which it is stored
# I am importing data from <NAME>' Sunnyslope AVA weather station located in Polo Cove vineyard. This data set includes temperature (°F) readings for 2017-04-01 through 2017-10-31. The number of days on which the temperature reaches 50°F or higher gives the growing degree days (GDD) of the vineyard. GDD are used as a measurement to estimate the likelihood that the fruit will reach full maturity before harvest.
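Counting qualifying days under this definition is straightforward; a small pure-Python sketch with hypothetical daily readings:

```python
# hypothetical daily maximum temperatures in degrees Fahrenheit
daily_max_f = [48.0, 55.5, 61.2, 49.9, 70.3]

BASE_TEMP_F = 50.0  # growing-degree-day threshold used above
gdd_days = sum(1 for t in daily_max_f if t >= BASE_TEMP_F)
```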
# ### 2. Import the dataset and verify that it is what you expect
# You should include a detailed description of any cleaning or manipulation done to the data prior to reading it in here, why you needed to do it, and how you did it. You should also include a justification for how you read the data into Python using the command you did. Provide examples of other ways you could have read in the data and discuss why you chose not to use those methods.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
filename_PC = "PC_GDD_17.csv"
# -
df = pd.read_csv(filename_PC)
df[0:4]
# I downloaded the data as a '.csv' and, in Excel, deleted the first column of values "1-5055" that corresponded to the reading number. Doing this makes it easier for Python to read, and the data output is neater. I imported the data using filename_PC = "####.csv". I named the file 'PC' for "Polo Cove". I created a dataframe and read the '.csv' data in using Pandas. To check if my data was imported correctly (makes sense) I asked Python to show me the dataframe for rows 0 through 4. Values printed in Python matched values found in Excel, so I know it all worked out. I could have created a graph using matplotlib.pyplot and compared it to the graph on the webpage for the data, but printing the dataframe was easier.
# ### 3. Create a visualization of some feature of your dataset
# Using a standard plotting library, create a visualization of your dataset. Describe, in words, why you are choosing to plot the data the way you are and what you intend to reveal through this visualization. Make sure your plot has a title and that axes are labeled and units are provided.
ax = df.plot(x='Date',y='Value')
plt.show()
datearr = df['Date'].values
datearr[0]
datearr_dt = pd.to_datetime(datearr)
datearr_dt[0]
type(datearr_dt)
df['NP_Datetime'] = datearr_dt
df[0:4]
# +
plt.plot(df['NP_Datetime'].values,df['Value'].values)
plt.title('Polo Cove Vineyard April 1st to October 31st')
plt.xlabel('Date')
plt.ylabel('Value [°F]')
plt.show()
# -
# This type of plot lets the reader see temperature (°F) fluctuations during the growing season in Polo Cove vineyard. More than 5000 values needed to appear on the plot, and a line graph such as this remains easy to read. I had to convert the date and time stamp into a human-readable form.
# ### 4. Describe the visualization you just created
# As if you were writing a paragraph in a results section of a technical report, peer-reviewed manuscript, or thesis, describe (again, in words) the central idea that this visualization conveys. Describe key features of the data that the outside reviewer should pay attention to. This might include a description of anything unanticipated that you see in the data, or a verification that the visualization shows what you expected to see.
# Temperature readings displayed in the figure "Polo Cove Vineyard April 1st to October 31st" show an increase in temperature during the summer months. The start of the growing season (April 1st) and the end of the season (October 31st) have values significantly lower than readings from June through August. The weather station failed to take readings at the end of August and the beginning of September, possibly due to a dead battery. The rising temperatures during the summer months are as expected and suggest the data is reliable. From the graph, we can conclude that nearly half of the growing season reaches 50°F or greater.
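# The "nearly half" estimate can be checked numerically rather than read off the graph; a sketch with made-up readings (with the real dataframe this would be `(df['Value'] >= 50).mean()`):

```python
import pandas as pd

# Hypothetical sample of readings standing in for df['Value']
values = pd.Series([45.0, 52.0, 61.0, 38.0])
fraction_at_or_above_50 = float((values >= 50).mean())
print(fraction_at_or_above_50)
```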
# ### 5. Perform any additional analyses (optional)
# If you are interested or curious, go beyond this simple visualization to perform any additional statistical analyses on your data, create any additional visualizations, or compute any additional derived variables that may help you and an external reviewer better understand the dataset that you are showing. Use words to describe any additional analyses and why they are relevant.
# +
### Python code goes here
# -
# ### 6. Upload your Jupyter Notebook to your GitHub account
# Paste the link to your completed notebook in the class etherpad: https://etherpad.boisestate.edu/p/GEOS-505-FA18
| GEOS 505 Assignment 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PLOT DATA
# This notebook imports the obtained results for varying horizon lengths, AGV group sizes and random start/goal/delay configurations.
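# Run parameters are recovered positionally from the underscore-separated result file names. A sketch with a hypothetical file name following that convention:

```python
# Hypothetical file name; the loader below splits on underscores and reads
# horizon, delay, seed and AGV count by position from the end
fname = "ICAPS/results_agvs_40_seed_7_delay_3_horizon_5.yaml"
parts = fname.split("_")
horizon = int(parts[-1].split(".")[0])
delay = int(parts[-3])
seed = int(parts[-5])
agvs = int(parts[-7])
print(agvs, seed, delay, horizon)
```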
# +
import logging
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
mpl.rcParams['ps.fonttype'] = 42
import numpy as np
import random
import seaborn as sns
import pandas as pd
import statistics as stat
import os
import yaml
import glob
# WHERE TO SAVE THE FIGURES?
save_delay_vs_improv = "/home/alberndt/Documents/research/bosch/figures/"
save_horizon_vs_improv = "/home/alberndt/Documents/research/bosch/figures/"
# -
# # 1 Import and Read Result Data
# ## 1.1 Load Data
# +
data = {"AGVs": [], "randseed": [], "delay": [], "horizon": [], "total_time": [], "improvement": []}
yaml_list = glob.glob("ICAPS/*.yaml")
horizon_0_data = {"AGVs": [], "randseed": [], "delay": [], "total_time": []}
for file in yaml_list:
split_filename = file.split("_")
horizon = str(split_filename[-1].split(".")[0])
delay = str(split_filename[-3])
# if ((int(delay) == 3) or (int(delay) == 25)):
seed = str(split_filename[-5])
AGVs = str(split_filename[-7])
with open(file, "r") as stream:
#try:
yaml_data = yaml.safe_load(stream)
cumulative_time = yaml_data["results"]["total time"]
data["AGVs"].append(int(AGVs))
data["randseed"].append(int(seed))
data["delay"].append(int(delay))
data["horizon"].append(int(horizon))
data["total_time"].append(int(cumulative_time))
        data["improvement"].append(int(cumulative_time))  # placeholder; the real metric is computed in section 1.2
# except yaml.YAMLError as exc:
# print(exc)
# -
# ## 1.2 Calculate improvement metric
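# The metric computed in the cell below, on sample numbers (made up here): each run is scored against the horizon-0 baseline with the same AGV count, seed and delay.

```python
# A run finishing in 1080 timesteps against a horizon-0 baseline of
# 1200 timesteps is a 10% improvement
baseline_time = 1200
total_time = 1080
improvement = 100 * (baseline_time - total_time) / baseline_time
print(improvement)  # 10.0
```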
# +
df = pd.DataFrame(data, columns=["AGVs", "randseed", "delay", "horizon", "total_time", "improvement"])
# Get the 0 horizon data
df_0 = df[df.horizon == 0]
newdata = {"AGVs": [], "randseed": [], "delay": [], "horizon": [], "total_time": [], "improvement": []}
no_baseline_cnt = 0
no_baseline_list = []
max_delay = 0
for index, row in df.iterrows():
AGVs = row["AGVs"]
randseed = row["randseed"]
delay = row["delay"]
horizon = row["horizon"]
total_time = row["total_time"]
try:
baseline = df_0[(df_0.AGVs == AGVs) & (df_0.randseed == randseed) & (df_0.delay == delay)].iloc[0]
baseline_time = baseline["total_time"]
improvement = 100*(baseline_time-total_time)/baseline_time
newdata["AGVs"].append(int(AGVs))
        newdata["randseed"].append(int(randseed))
newdata["delay"].append(int(delay))
newdata["horizon"].append(int(horizon))
        newdata["total_time"].append(int(total_time))
newdata["improvement"].append(float(improvement))
if max_delay < int(delay):
max_delay = int(delay)
except IndexError:
# if no baseline (Horizon = 0) is found, do not add this data: cannot be compared
no_baseline_cnt += 1
no_baseline_str = str(AGVs) + " \t " + str(randseed) + " \t " + str(delay) + " \t " + str(horizon)
no_baseline_list.append(no_baseline_str)
print("No baseline count: {}".format(no_baseline_cnt))
print("List of baselines missing:")
print("AGVs \t seed \t delay \t horizon")
print("---------------------------------")
for row in no_baseline_list:
print(row)
print("---------------------------------")
print("max delay: {}".format(max_delay))
dfnew = pd.DataFrame(newdata, columns=["AGVs", "randseed", "delay", "horizon", "total_time", "improvement"])
print(dfnew)
# -
# # 2 Delay vs Improvement results
# ## 2.1 Overlayed Plot of all AGV sizes
# +
sns.set(style="ticks")
sns.set_palette("bright")
sns_col = sns.color_palette("bright", n_colors=5)
for horizon in [5]:
df_new_hor = dfnew[dfnew.horizon == horizon]
plt.figure()
sns.lineplot(x="delay", y="improvement",
hue="AGVs",
ci=64,
data=df_new_hor,
palette=sns_col)
plt.ylim(-1,31)
plt.xlabel("Delay $k$ [timesteps]")
plt.ylabel("Improvement [%]")
plt.xlim(-1,51)
plt.grid(True)
plt.legend(loc="upper left")
ax = plt.gca()
ax.figure.set_size_inches(6,3)
plt.subplots_adjust(left=0.095, bottom=0.17, right=0.998, top=0.996, wspace=None, hspace=None)
plt.savefig(save_delay_vs_improv + "improvement_delay_all_H_{}.pdf".format(horizon), format="pdf", pad_inches=0.01, transparent=True)
# -
# ## 2.2 Individual plot for each AGV group size
# +
sns.set(style="ticks")
sns.set_palette("bright")
sns_col = sns.color_palette("bright", n_colors=5)
# print(sns_col)
for horizon in [5]:
df_new_hor = dfnew[dfnew.horizon == horizon]
idx = 0
for agv_cnt in [30,40,50,60,70]:
df_new_agvs = df_new_hor[df_new_hor.AGVs == agv_cnt]
plt.figure(idx)
sns.lineplot(x="delay", y="improvement",
hue="AGVs",
ci=100,
data=df_new_agvs,
palette=[sns_col[idx]],
legend=False)
idx += 1
plt.ylim(-1,31)
plt.xlabel("Delay $k$ [timesteps]")
plt.ylabel("Improvement [%]")
plt.xlim(-1,51)
plt.grid(True)
# plt.legend(loc="upper left")
ax = plt.gca()
ax.figure.set_size_inches(6,3)
plt.subplots_adjust(left=0.095, bottom=0.17, right=0.998, top=0.996, wspace=None, hspace=None)
plt.savefig(save_delay_vs_improv + "improvement_delay_AGVs_{}_H_{}.pdf".format(agv_cnt, horizon), format="pdf", pad_inches=0.01, transparent=True)
# -
# # 3 Horizon vs Improvement results
# ## 3.1 Delay k = 3
# +
sns.set(style="ticks")
sns.set_palette("bright")
sns_col = sns.color_palette("bright", n_colors=5)
# Delay amount
k = 3
df_improv = dfnew[dfnew.delay == k]
df_improv_30 = df_improv[df_improv.AGVs == 30]
df_improv_40 = df_improv[df_improv.AGVs == 40]
df_improv_50 = df_improv[df_improv.AGVs == 50]
df_improv_60 = df_improv[df_improv.AGVs == 60]
df_improv_70 = df_improv[df_improv.AGVs == 70]
print("Delay k = {}".format(k))
print(" sim count for 30 AGVs: {}".format(len(df_improv_30.index)))
print(" sim count for 40 AGVs: {}".format(len(df_improv_40.index)))
print(" sim count for 50 AGVs: {}".format(len(df_improv_50.index)))
print(" sim count for 60 AGVs: {}".format(len(df_improv_60.index)))
print(" sim count for 70 AGVs: {}".format(len(df_improv_70.index)))
plt.figure(1)
ax = plt.gca()
# ax.set(yscale="log")
sns.lineplot(x="horizon", y="improvement",
hue="AGVs",
ci=64,
data=df_improv,
palette=sns_col)
plt.xlabel("Horizon H")
plt.ylabel("Improvement [%]")
plt.grid()
ax = plt.gca()
plt.xlim(-0.1,5.1)
plt.ylim(-0.1,7.1)
ax.figure.set_size_inches(6,3)
plt.subplots_adjust(left=0.095, bottom=0.17, right=0.998, top=0.996, wspace=None, hspace=None)
plt.savefig(save_horizon_vs_improv + "horizon_improve_k_3_all.pdf", format="pdf", pad_inches=0.01, transparent=True)
# -
# ### Individual Plots for delay k=3
# +
sns.set(style="ticks")
sns.set_palette("bright")
sns_col = sns.color_palette("bright", n_colors=5)
# Delay amount
k = 3
df_improv = dfnew[dfnew.delay == k]
idx = 0
for agv_cnt in [30,40,50,60,70]:
df_new_agvs = df_improv[df_improv.AGVs == agv_cnt]
plt.figure(idx)
sns.lineplot(x="horizon", y="improvement",
hue="AGVs",
ci=100,
data=df_new_agvs,
palette=[sns_col[idx]],
legend=False)
idx += 1
plt.xlabel("Horizon H")
plt.ylabel("Improvement [%]")
plt.grid(True)
plt.ylim(-0.1,7.1)
plt.xlim(-0.2,5.2)
# plt.legend(loc="upper left")
ax = plt.gca()
ax.figure.set_size_inches(6,3)
plt.subplots_adjust(left=0.095, bottom=0.17, right=0.998, top=0.996, wspace=None, hspace=None)
plt.savefig(save_horizon_vs_improv + "horizon_improve_k_3_{}.pdf".format(agv_cnt), format="pdf", pad_inches=0.01, transparent=True)
# -
# ## 3.2 Delay k = 20
# +
sns.set(style="ticks")
sns.set_palette("bright")
sns_col = sns.color_palette("bright", n_colors=5)
# Delay amount
k = 20
df_improv = dfnew[dfnew.delay == k]
df_improv_30 = df_improv[df_improv.AGVs == 30]
df_improv_40 = df_improv[df_improv.AGVs == 40]
df_improv_50 = df_improv[df_improv.AGVs == 50]
df_improv_60 = df_improv[df_improv.AGVs == 60]
df_improv_70 = df_improv[df_improv.AGVs == 70]
print("Delay k = {}".format(k))
print(" sim count for 30 AGVs: {}".format(len(df_improv_30.index)))
print(" sim count for 40 AGVs: {}".format(len(df_improv_40.index)))
print(" sim count for 50 AGVs: {}".format(len(df_improv_50.index)))
print(" sim count for 60 AGVs: {}".format(len(df_improv_60.index)))
print(" sim count for 70 AGVs: {}".format(len(df_improv_70.index)))
plt.figure(2)
ax = plt.gca()
# ax.set(yscale="log")
sns.lineplot(x="horizon", y="improvement",
hue="AGVs",
ci=1,
data=df_improv,
palette=sns_col)
plt.xlabel("Horizon H")
plt.ylabel("Improvement [%]")
plt.grid(True)
plt.ylim(-1,31)
plt.xlim(-0.1,15.1)
ax = plt.gca()
ax.figure.set_size_inches(6,3)
plt.subplots_adjust(left=0.095, bottom=0.17, right=0.998, top=0.996, wspace=None, hspace=None)
plt.savefig(save_horizon_vs_improv + "horizon_improve_k_{}_all.pdf".format(k), format="pdf", pad_inches=0.01, transparent=True)
# plt.savefig(save_loc_icaps + "improvement_vs_horizon_k_25.pdf", format="pdf", pad_inches=0.01, transparent=True)
# -
# ### Individual Plots for delay k=20
# +
sns.set(style="ticks")
sns.set_palette("bright")
sns_col = sns.color_palette("bright", n_colors=5)
# Delay amount
k = 20
df_improv = dfnew[dfnew.delay == k]
idx = 0
for agv_cnt in [30,40,50,60,70]:
df_new_agvs = df_improv[df_improv.AGVs == agv_cnt]
plt.figure(idx)
sns.lineplot(x="horizon", y="improvement",
hue="AGVs",
ci=100,
data=df_new_agvs,
palette=[sns_col[idx]])
idx += 1
plt.xlabel("Horizon H")
plt.ylabel("Improvement [%]")
plt.grid(True)
plt.ylim(-1,31)
plt.xlim(-0.1,15.1)
# plt.legend(loc="upper left")
ax = plt.gca()
ax.figure.set_size_inches(6,3)
plt.subplots_adjust(left=0.095, bottom=0.17, right=0.998, top=0.996, wspace=None, hspace=None)
plt.savefig(save_horizon_vs_improv + "horizon_improve_k_20_{}.pdf".format(agv_cnt), format="pdf", pad_inches=0.01, transparent=True)
# -
| python/results/.ipynb_checkpoints/plot_improvement_vs_delay-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Auto NUTS processing
#
# Here we relabel indicators using the NUTS detector
# ## 0. Preamble
# %run ../notebook_preamble.ipy
# +
from beis_indicators.utils.nuts_utils import auto_nuts2_uk
import re
# +
def test_dataset(data_source):
'''
This function finds, for a data source, the processed csvs, and performs autonuts detection
For now it will simply print the inferred specification for each csv
Args:
    data_source (str): the folder storing indicators in /processed
'''
path = f'../../data/processed/{data_source}'
csv_files = [x for x in os.listdir(path) if 'csv' in x]
for x in csv_files:
print(x)
auton = auto_nuts2_uk(pd.read_csv(os.path.join(path,x)))
print(set(auton['nuts_year_spec']))
# -
def nuts_test_processed():
'''
Finds all csv folders in the processed folder with a yaml file (ie merged indicators)
Performs the test
'''
to_check = []
for folder in os.listdir('../../data/processed/'):
if os.path.isdir(f'../../data/processed/{folder}')==True:
#We assume that folders with yamls have been processed
yamls = [x for x in os.listdir(f'../../data/processed/{folder}') if '.yaml' in x]
#This is not always the case though
try:
for x in yamls:
                    csv = re.sub(r'\.yaml$', '.csv', x)
table = pd.read_csv(f'../../data/processed/{folder}/{csv}',index_col=None)
#Remove unnecessary indices
table = table[[x for x in table.columns if 'Unnamed' not in x]]
#Autonuts
autonuts = auto_nuts2_uk(table)
#Save
autonuts.to_csv(f'../../data/processed/{folder}/{csv}',index=False)
print(autonuts.head())
            except Exception:
                print('old schema')
nuts_test_processed()
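# A note on the yaml-to-csv name mapping: suffix swapping with `re.sub` needs an escaped, anchored pattern, since a bare `'.yaml'` lets the dot match any character; `pathlib` offers an equivalent alternative. Sketch with a made-up file name:

```python
import pathlib
import re

name = "gva_industry.yaml"  # hypothetical indicator file name
csv_re = re.sub(r"\.yaml$", ".csv", name)              # escaped and anchored
csv_path = pathlib.Path(name).with_suffix(".csv").name  # pathlib equivalent
print(csv_re, csv_path)
```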
# ?pd.read_csv
| ds/notebooks/dev/13_run_autonuts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lvm/dcgan_fashion_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="8s8SQdAZF-Kd" colab_type="text"
# # Deep convolutional generative adversarial networks (DCGAN)
#
# This tutorial fits a DC-GAN to Fashion-MNIST. The code is based on
# https://www.tensorflow.org/beta/tutorials/generative/dcgan
#
# + colab_type="code" id="J5oue0oqCkZZ" colab={}
from __future__ import absolute_import, division, print_function, unicode_literals
# + colab_type="code" id="g5RstiiB8V-z" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1930ef01-a647-45b0-88b3-6228c01c7a6c"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
# + colab_type="code" id="WZKbyU2-AiY-" colab={}
import tensorflow as tf
# + colab_type="code" id="wx-zNbLqB4K8" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="872d07d0-eae1-4ed4-c5b7-d824e8e64f52"
tf.__version__
# + colab_type="code" id="YfIk2es3hJEd" colab={}
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
# + [markdown] colab_type="text" id="iYn4MdZnKCey"
# ### Load and prepare the dataset
#
# You will use the Fashion-MNIST dataset to train the generator and the discriminator. The generator will generate clothing images resembling the Fashion-MNIST data.
# + colab_type="code" id="a4fYMGxGhrna" colab={}
(train_images, train_labels), (_, _) = tf.keras.datasets.fashion_mnist.load_data()
#(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
# + colab_type="code" id="NFC2ghIdiZYE" colab={}
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
#train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
train_images = train_images / 255 # Normalize the images to [0,1]
train_images = (train_images * 2) -1 # Normalize the images to [-1, 1]
# + colab_type="code" id="S4PIDhoDLbsZ" colab={}
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# + colab_type="code" id="-yKCCQOoJ7cn" colab={}
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
# + [markdown] colab_type="text" id="THY-sZMiQ4UV"
# ## Create the models
#
# Both the generator and discriminator are defined using the [Keras Sequential API](https://www.tensorflow.org/guide/keras#sequential_model).
# + [markdown] colab_type="text" id="-tEyxE-GMC48"
# ### The Generator
#
# The generator uses `tf.keras.layers.Conv2DTranspose` (upsampling) layers to produce an image from a seed (random noise). Start with a `Dense` layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice the `tf.keras.layers.LeakyReLU` activation for each layer, except the output layer which uses tanh.
# + colab_type="code" id="6bpTcDqoLWjY" colab={}
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh')) # assumes output is [-1,1]
#model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='sigmoid')) # assumes output is [0,1]
assert model.output_shape == (None, 28, 28, 1)
return model
# + [markdown] colab_type="text" id="GyWgG09LCSJl"
# Use the (as yet untrained) generator to create an image.
# + colab_type="code" id="gl7jcC7TdPTG" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="2f3b3f29-63ab-4c68-f16e-03e4cd2d3f70"
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='binary')
# + [markdown] colab_type="text" id="D0IKnaCtg6WE"
# ### The Discriminator
#
# The discriminator is a CNN-based image classifier.
# + colab_type="code" id="dw2tPLmk2pEP" colab={}
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
#model.add(layers.Dense(1, activation="sigmoid")) # cross-entropy loss assumes logits as input
return model
# + [markdown] colab_type="text" id="QhPneagzCaQv"
# Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
# + colab_type="code" id="gDkA05NE6QMs" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="deedbcd4-f0c6-4c6a-c865-0b4363e4f66f"
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print (decision)
# + [markdown] colab_type="text" id="0FMYgY_mPfTi"
# ## Define the loss and optimizers
#
# Define loss functions and optimizers for both models.
#
# + colab_type="code" id="psQfmXxYKU3X" colab={}
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True) # don't need sigmoid on output of discriminator
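# The `from_logits=True` form combines the sigmoid and the cross entropy in a numerically stable way. A numpy sketch (sample logits made up here) checking the logits formula against the explicit sigmoid-then-log form:

```python
import numpy as np

# Stable binary cross entropy straight from logits:
# max(z, 0) - z*y + log(1 + exp(-|z|))
def bce_from_logits(y, z):
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

y = np.array([1.0, 0.0, 1.0])          # labels
z = np.array([2.0, -1.5, 0.3])         # made-up discriminator logits
p = 1.0 / (1.0 + np.exp(-z))           # explicit sigmoid
reference = -(y * np.log(p) + (1 - y) * np.log(1 - p))
print(np.allclose(bce_from_logits(y, z), reference))  # True
```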
# + [markdown] colab_type="text" id="PKY_iPSPNWoj"
# ### Discriminator loss
#
# This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
# + colab_type="code" id="wkMNfBWlT-PV" colab={}
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
# + [markdown] colab_type="text" id="Jd-3GCUEiKtv"
# ### Generator loss
# The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminators decisions on the generated images to an array of 1s.
# + colab_type="code" id="90BIcCKcDMxz" colab={}
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
# + [markdown] colab_type="text" id="MgIc7i0th_Iu"
# The discriminator and the generator optimizers are different since we will train two networks separately.
# + colab_type="code" id="iWCn_PVdEJZ7" colab={}
#generator_optimizer = tf.keras.optimizers.Adam(1e-4)
#discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
generator_optimizer = tf.keras.optimizers.RMSprop()
discriminator_optimizer = tf.keras.optimizers.RMSprop()
# + [markdown] colab_type="text" id="mWtinsGDPJlV"
# ### Save checkpoints
# This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
# + colab_type="code" id="CA1w-7s2POEy" colab={}
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
# + [markdown] colab_type="text" id="Rw1fkAczTQYh"
# ## Define the training loop
#
#
# + colab_type="code" id="NS2GWywBbAWo" colab={}
noise_dim = 100
num_examples_to_generate = 25 # 16
# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
# + id="2xhOjwivzp8R" colab_type="code" colab={}
#http://www.datawrangling.org/python-montage-code-for-displaying-arrays/
from numpy import array,flipud,shape,zeros,rot90,ceil,floor,sqrt
import pylab
def montage(X, colormap=pylab.cm.gist_gray):
m, n, count = shape(X)
mm = int(ceil(sqrt(count)))
nn = mm
M = zeros((mm * m, nn * n))
image_id = 0
for j in range(mm):
for k in range(nn):
if image_id >= count:
break
sliceM, sliceN = j * m, k * n
M[sliceN:sliceN + n, sliceM:sliceM + m] = X[:, :, image_id]
image_id += 1
pylab.imshow(flipud(rot90(M)), cmap=colormap)
pylab.axis('off')
# We assume tensor is [N, H, W, 1].
def plot_montage(tensor):
tensor = tensor[:, :, :, 0]
X = np.transpose(tensor, [2, 1, 0])
montage(X)
# + id="Z6Be7fUHz4Q3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="7d3cbe59-72e5-4ecb-a18b-c950022671ed"
tensor = train_images[:25, :, :]
plot_montage(tensor)
# + [markdown] colab_type="text" id="jylSonrqSWfi"
# The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
# + colab_type="code" id="RmdVsmvhPxyy" colab={}
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
predictions = (predictions + 1)/2 # map back to [0,1]
plot_montage(predictions)
plt.tight_layout()
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
# + colab_type="code" id="3t5ibNo05jCB" colab={}
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
# + colab_type="code" id="2M7LmLtGEMQJ" colab={}
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
# + [markdown] colab_type="text" id="dZrd4CdjR-Fp"
# ## Train the model
# Call the `train()` method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
#
# At the beginning of the training, the generated images look like random noise. As training progresses, the generated images will look increasingly realistic. After about 50 epochs, they resemble Fashion-MNIST items. This may take about one minute / epoch with the default settings on Colab.
# + colab_type="code" id="Ly3UN0SLLY2l" colab={"base_uri": "https://localhost:8080/", "height": 332} outputId="37163cec-e23e-4c18-a4ef-704158be112d"
# %%time
EPOCHS = 10
train(train_dataset, EPOCHS)
# + [markdown] colab_type="text" id="rfM4YcPVPkNO"
# Restore the latest checkpoint.
# + colab_type="code" id="XhXsd0srPo8c" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="899d6ba6-ba3a-43f7-ec91-7ec9fc946ea9"
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# + id="R_W4cRs0sNEx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 87} outputId="a4fee3f5-2d2e-4e2c-cc56-2bca996207fc"
# !ls
# + [markdown] colab_type="text" id="P4M_vIbUi7c0"
# ## Create a GIF
#
# + colab_type="code" id="WfO5wCdclHGL" colab={}
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
# + id="zbFDPGPr-1eY" colab_type="code" colab={}
# Remove border from image
# https://gist.github.com/kylemcdonald/bedcc053db0e7843ef95c531957cb90f
def full_frame(width=None, height=None):
import matplotlib as mpl
mpl.rcParams['savefig.pad_inches'] = 0
figsize = None if width is None else (width, height)
fig = plt.figure(figsize=figsize)
ax = plt.axes([0,0,1,1], frameon=False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.autoscale(tight=True)
# + colab_type="code" id="5x3q9_Oe5q0A" colab={"base_uri": "https://localhost:8080/", "height": 929} outputId="b7e0d671-771f-4efb-d848-d0fcf0fb7cc8"
step = 5
ndx = list(range(1, EPOCHS, step))
ndx.append(EPOCHS)
for i in ndx:
img = display_image(i)
full_frame()
plt.imshow(img)
plt.axis('off')
ttl = 'epoch {}'.format(i)
plt.title(ttl)
plt.show()
| book2/supplements/gan/dcgan_fashion_tf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Functions for CMag System Specific Analysis
#
#
# +
import pandas as pd
import numpy as np
import sys
import matplotlib.pyplot as plt
import matplotlib as mpl
import scipy.io
from scipy import stats
import scipy.io as sio
import math
# %matplotlib inline
# import matplotlib as mpl
# mpl.rcParams['figure.dpi'] = 300
from matplotlib.ticker import FormatStrFormatter
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from functions.functions import load_data_forGridSearch
# -
# ## Load testing data
# step 1 - load labels and predictions
# load testing data
X_test, y_test = load_data_forGridSearch("../Data", "test")
# ## Load and organize prediction results
results = {}
results["X_test"] = X_test
results["labels"] = y_test
# +
# load matlab predictions (recalibrated)
y_pred_baseline_recalibrated = sio.loadmat('./baseline_model/magnetic_model/CalibrateSystem_CardioMag_retrained_w_SensorGrid/mpem_y_pred.mat')['fieldStrength']
y_pred_baseline_recalibrated = y_pred_baseline_recalibrated[:,0:3]
assert y_pred_baseline_recalibrated.shape == y_test.shape, "Predictions for testing set do not have the same shape as the labels"
results["linear multipole electromagnet model"] = y_pred_baseline_recalibrated
# +
# for random forest model
y_pred_rf = np.load("../Models/RF/GridSearch_RF_predictions.npy")
results["RF"] = y_pred_rf
# +
# for ANN
y_pred_MLP = np.load('../Models/ANN/predictions_ANN.npy')
results["MLP"] = y_pred_MLP
# -
# ## Error by max current among eight coils
def plot_metrics_by_max_current(results_dict):
marker_list = ["o", "D", "s"]
colour_list = ["b", "k", "g"]
def rounddown(x, level=5.0):
return int(math.floor(x / level) * level)
def metrics_by_group(grouped):
return evaluate_generic_metrics(labels=grouped[["y_true_x", "y_true_y", "y_true_z"]].values,
predictions=grouped[["y_pred_x", "y_pred_y", "y_pred_z"]].values)
def evaluate_generic_metrics(labels, predictions):
# label_norm = np.sqrt(np.sum(labels**2, axis=1))
# prediction_norm = np.sqrt(np.sum(predictions**2, axis=1))
label_norm = [np.linalg.norm(y) for y in labels]
prediction_norm = [np.linalg.norm(y) for y in predictions]
# R^2
r2_c = r2_score(y_true=labels, y_pred=predictions, multioutput='raw_values')
r2 = r2_score(y_true=labels, y_pred=predictions)
r2_norm = r2_score(y_true=label_norm, y_pred=prediction_norm)
# Root mean squared error
rmse_c = np.sqrt(mean_squared_error(y_true=labels, y_pred=predictions, multioutput='raw_values'))
rmse = np.sqrt(mean_squared_error(y_true=labels, y_pred=predictions))
rmse_norm = np.sqrt(mean_squared_error(y_true=label_norm, y_pred=prediction_norm))
return {"R2_x": round(r2_c[0], 2),
"R2_y": round(r2_c[1], 2),
"R2_z": round(r2_c[2], 2),
"R2": round(r2, 2),
"R2_norm": round(r2_norm, 2),
"RMSE_x_mT": round(rmse_c[0]*1000, 2),
"RMSE_y_mT": round(rmse_c[1]*1000, 2),
"RMSE_z_mT": round(rmse_c[2]*1000, 2),
"RMSE_mT": round(rmse*1000, 2),
"RMSE_norm_mT": round(rmse_norm*1000,2)}
def _plot(X_test, y_test, k, y_pred, idx):
model_name = k
# step 1: construct a dataframe for better data manipulation [currents, power, predictions, labels]
results_data = pd.DataFrame(data=X_test[:, 3:], columns=["I{}".format(a) for a in range(1, 9)])
results_data['max_currents_mag'] = np.max(np.fabs(results_data), axis=1)
results_data['current_level'] = results_data['max_currents_mag'].apply(rounddown)
results_data['y_pred_x'] = y_pred[:, 0]
results_data['y_pred_y'] = y_pred[:, 1]
results_data['y_pred_z'] = y_pred[:, 2]
results_data['y_true_x'] = y_test[:, 0]
results_data['y_true_y'] = y_test[:, 1]
results_data['y_true_z'] = y_test[:, 2]
# group results to evaluate for each power level
results_by_current = results_data.groupby("current_level").apply(metrics_by_group)
count_number = results_data.groupby("current_level").size().values
percentage = [round(i / len(results_data) * 100, 2) for i in count_number]
currentLists = list(results_by_current.keys())
R2_list = [results_by_current.get(l)['R2_norm'] for l in currentLists]
RMSE_list = [results_by_current.get(l)['RMSE_norm_mT'] for l in currentLists]
# plot two metrics
# axs[0].scatter(currentLists, R2_list, label=model_name)
axs[0].plot(currentLists, R2_list, linestyle = "-", marker=marker_list[idx], color=colour_list[idx], label=model_name)
# axs[0].set_xlabel("\ncurrent level (A)", size=16)
axs[0].set_ylabel(r"$R_{norm}^2$", size=16)
axs[0].yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
axs[0].legend(loc="lower left", prop={'size': 14})
axs[1].plot(currentLists, RMSE_list, linestyle = "-", marker=marker_list[idx], color=colour_list[idx], label=model_name)
axs[1].set_xlabel("\ncurrent level (A)", size=16)
axs[1].set_ylabel(r"$RMSE_{norm} (mT)$", size=16)
axs[1].yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
axs[1].legend(loc="upper left", prop={'size': 14})
print("R2:", R2_list)
print("RMSE:", RMSE_list)
# fig.suptitle(
# 'model performance evaluation stratified by maximum absolute current among eight coils'.format(
# model_name), size=18)
# TODO: why the xticklabels are shifted, temp solution is to add a blank entry on the top, what is a better solution
plt.setp(axs, xticklabels=['',
'0-5',
'5-10',
'10-15',
'15-20',
'20-25',
'25-30',
'30-35'])
fig, axs = plt.subplots(2, 1, figsize=(8, 10))
for ax in axs:
ax.tick_params(axis="x", labelsize=12)
ax.tick_params(axis="y", labelsize=12)
X_test = results_dict["X_test"]
y_test = results_dict["labels"]
prediction_list = list(results_dict)
prediction_list.remove("X_test")
prediction_list.remove("labels")
for idx, k in enumerate(prediction_list):
_plot(X_test, y_test, k, results_dict[k], idx)
# save figure
# fig.savefig("../Figures/metrics_by_current.png", dpi=300)
plot_metrics_by_max_current(results)
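# The TODO inside `plot_metrics_by_max_current` notes that the tick labels appear
# shifted and works around it by prepending a blank label. A more robust approach
# (sketched below with made-up positions and values, not the notebook's data) is to
# pin the tick positions explicitly with `set_xticks` before assigning labels, so
# matplotlib never has to guess which tick each label belongs to.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

# hypothetical current-level bins: position = lower edge of each 5 A bin
positions = [0, 5, 10, 15, 20, 25, 30]
labels = ['0-5', '5-10', '10-15', '15-20', '20-25', '25-30', '30-35']

fig, ax = plt.subplots()
ax.plot(positions, [0.90, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97], marker="o")
ax.set_xticks(positions)    # pin ticks to the bin edges first...
ax.set_xticklabels(labels)  # ...then label them, one label per tick
```

# With explicit positions there is no blank placeholder entry, and the labels stay
# attached to the right ticks even if the axis limits change.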
# # Mixing in Results from Deep Fluids
# np.load defaults to allow_pickle=False; these results files hold pickled dicts
CNN_results = np.load('/home/samuelch/src/deep-fluids/log/notebook/df_results_by_current_level', allow_pickle=True)
CNN_DF_results = np.load('/home/samuelch/src/deep-fluids/log/notebook/df_results_by_current_level_divfree', allow_pickle=True)
def create_results_dict(y_pred):
def rounddown(x, level=5.0):
return int(math.floor(x / level) * level)
def metrics_by_group(grouped):
return evaluate_generic_metrics(labels=grouped[["y_true_x", "y_true_y", "y_true_z"]].values,
predictions=grouped[["y_pred_x", "y_pred_y", "y_pred_z"]].values)
def evaluate_generic_metrics(labels, predictions):
# label_norm = np.sqrt(np.sum(labels**2, axis=1))
# prediction_norm = np.sqrt(np.sum(predictions**2, axis=1))
label_norm = [np.linalg.norm(y) for y in labels]
prediction_norm = [np.linalg.norm(y) for y in predictions]
# R^2
r2_c = r2_score(y_true=labels, y_pred=predictions, multioutput='raw_values')
r2 = r2_score(y_true=labels, y_pred=predictions)
r2_norm = r2_score(y_true=label_norm, y_pred=prediction_norm)
# Root mean squared error
rmse_c = np.sqrt(mean_squared_error(y_true=labels, y_pred=predictions, multioutput='raw_values'))
rmse = np.sqrt(mean_squared_error(y_true=labels, y_pred=predictions))
rmse_norm = np.sqrt(mean_squared_error(y_true=label_norm, y_pred=prediction_norm))
mae = mean_absolute_error(y_true=labels, y_pred=predictions)
nmae = mae / (np.max(predictions) - np.min(predictions))
return {"R2_x": r2_c[0],
"R2_y": r2_c[1],
"R2_z": r2_c[2],
"R2": r2,
"MAE_mT": 1000*mae,
"N-MAE": nmae,
"R2_norm": round(r2_norm, 2),
"RMSE_x_mT": round(rmse_c[0]*1000, 2),
"RMSE_y_mT": round(rmse_c[1]*1000, 2),
"RMSE_z_mT": round(rmse_c[2]*1000, 2),
"RMSE_mT": round(rmse*1000, 2),
"RMSE_norm_mT": round(rmse_norm*1000,2)}
# step 1: construct a dataframe for better data manipulation [currents, power, predictions, labels]
results_data = pd.DataFrame(data=X_test[:, 3:], columns=["I{}".format(a) for a in range(1, 9)])
results_data['max_currents_mag'] = np.max(np.fabs(results_data), axis=1)
results_data['current_level'] = results_data['max_currents_mag'].apply(rounddown)
results_data['y_pred_x'] = y_pred[:, 0]
results_data['y_pred_y'] = y_pred[:, 1]
results_data['y_pred_z'] = y_pred[:, 2]
results_data['y_true_x'] = y_test[:, 0]
results_data['y_true_y'] = y_test[:, 1]
results_data['y_true_z'] = y_test[:, 2]
results_by_current = results_data.groupby("current_level").apply(metrics_by_group)
return results_by_current
mlp_results = create_results_dict(y_pred_MLP)
linear_results = create_results_dict(y_pred_baseline_recalibrated)
rf_results = create_results_dict(y_pred_rf)
y_pred_s_mpem = np.load('../Models/S-MPEM/predictions_S-MPEM.npy')
s_mpem_results = create_results_dict(y_pred_s_mpem)
# +
marker_list = ["o", "D", "s", 'v', '^', '8']
colour_list = ["tab:blue", "tab:orange", "tab:green", 'tab:red', 'tab:purple', 'tab:brown']
def plot_results(results_dict, model_name, idx):
currentLists = list(CNN_results.keys())
R2_list = [results_dict.get(l)['R2'] for l in currentLists]
RMSE_list = [results_dict.get(l)['RMSE_norm_mT'] for l in currentLists]
MAE_list = [results_dict.get(l)['MAE_mT'] for l in currentLists]
axs[0].plot(currentLists, R2_list, linestyle = "-", linewidth=2.,
marker=marker_list[idx], color=colour_list[idx], label=model_name)
# axs[0].set_xlabel("\ncurrent level (A)", size=16)
axs[0].set_ylabel(r"$R^2$")
axs[0].yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
axs[0].legend(loc="lower left")
axs[0].grid(True)
axs[-1].plot(currentLists, MAE_list, linestyle = "-", linewidth=2.5,
marker=marker_list[idx], color=colour_list[idx], label=model_name)
axs[-1].set_xlabel("Current Level (A)")
axs[-1].set_ylabel(r"MAE (mT)")
#axs.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
axs[-1].legend(loc="upper left")
plt.setp(axs, xticklabels=['',
'0-5',
'5-10',
'10-15',
'15-20',
'20-25',
'25-30',
'30-35'])
plt.tight_layout()
axs[-1].grid(True)
#axs.minorticks_on()
def plot_results_single(results_dict, model_name, idx):
currentLists = list(CNN_results.keys())
R2_list = [results_dict.get(l)['R2'] for l in currentLists]
RMSE_list = [results_dict.get(l)['RMSE_norm_mT'] for l in currentLists]
MAE_list = [results_dict.get(l)['MAE_mT'] for l in currentLists]
plt.plot(currentLists, MAE_list, linestyle = "-", linewidth=1.5, markersize=4.,
marker=marker_list[idx], color=colour_list[idx], label=model_name)
plt.xlabel("Current Level (A)")
plt.ylabel(r"MAE (mT)")
#axs.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
plt.legend(loc="upper left")
plt.gca().set_xticklabels(['',
'0-5',
'5-10',
'10-15',
'15-20',
'20-25',
'25-30',
'30-35'])
plt.tight_layout()
plt.grid(True)
# +
fig, axs = plt.subplots(2, 1, figsize=(4.6, 5))
plot_results(linear_results, 'MPEM', 0)
plot_results(rf_results, 'RF', 1)
plot_results(s_mpem_results, 'S-MPEM', 2)
plot_results(mlp_results, 'ANN', 3)
plot_results(CNN_DF_results, 'CNN-DF', 4)
plot_results(CNN_results, 'CNN', 5)
#plt.savefig('../Figures/current_levels.pdf')
# +
# plot for IEEE submission
import matplotlib as mpl  # needed here: the top-of-notebook import is commented out
mpl.rcParams.update({'font.size': 8,
'lines.linewidth': 1.5})
fig = plt.figure(figsize=(3.5, 2.2))
plot_results_single(linear_results, 'MPEM',0)
#plot_results(rf_results, 'RF', 1)
plot_results_single(s_mpem_results, 'S-MPEM', 2)
plot_results_single(mlp_results, 'ANN', 3)
plot_results_single(CNN_DF_results, 'CNN-DF', 4)
plot_results_single(CNN_results, 'CNN', 5)
plt.savefig('../Figures/current_levels_ieee.pdf')
# -
# %matplotlib inline
# plot for nonlinear chapter of thesis
fig, axs = plt.subplots(2, 1, figsize=(4.6, 5))
plot_results(linear_results, 'MPEM',3)
plot_results(s_mpem_results, 'S-MPEM',4)
plt.savefig('../Figures/current_levels_s-mpem.pdf')
print('percentage error at different current levels between MPEM and S-MPEM')
for k, r_mpem, r_smpem in zip(linear_results.keys(), linear_results, s_mpem_results):
    percent_error = 100*(r_mpem['MAE_mT'] - r_smpem['MAE_mT']) / r_mpem['MAE_mT']
    print('current: {}, \terror: {:2.1f}%'.format(k, percent_error))
linear_results[10]
| Code/Misc/08_PerformanceEvaluation_SystemSpecific.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Internal Tracer Content budget
# Calculate the terms in the internal tracer budget (a generalization of the internal heat content budget of Holmes et al., 2018):
# $$ \frac{\partial \Phi_I}{\partial t}(\phi,t) = \mathcal{F}(\phi,t) + \mathcal{P}_I(\phi,t) + \mathcal{K}(\phi,t) + \mathcal{I}(\phi,t) + \mathcal{B}(\phi,t)\, ,$$
# where
# $$\Phi_I(\phi,t) = \mathcal{M}(\phi,t)[\overline{\phi} - \phi]$$
# is the "internal tracer content". Here $\mathcal{M}(\phi,t)$ is the mass of water with tracer concentration less than $\phi$ and $\overline{\phi}$ is the mass-weighted average tracer contration in that mass.
# $\mathcal{F}$ is the diffusive surface tracer flux, $\mathcal{K}$ and $\mathcal{I}$ are parameterized and numerical mixing respectively, $\mathcal{P}_I(\phi,t)$ is the internal component of the surface tracer flux due to surface mass fluxes (i.e. $P-E+R$), and $\mathcal{B}$ is a source/sink term for reactive tracers.
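# As a quick sketch of how the tendency term is discretized from snapshots later in
# this notebook (under the Boussinesq assumption $\mathcal{M} \approx \rho_0 \mathcal{V}$,
# with $\mathcal{V}$ the volume and $\mathcal{L} = \mathcal{M}\overline{\phi}$ the total
# tracer content):

```latex
\Phi_I = \mathcal{L} - \rho_0\,\phi\,\mathcal{V}
\quad\Longrightarrow\quad
\frac{\partial \Phi_I}{\partial t}
  = \frac{\partial \mathcal{L}}{\partial t}
  - \rho_0\,\phi\,\frac{\partial \mathcal{V}}{\partial t},
```

# which is the `dLdt - rho0 * dVdt * phi` combination evaluated from consecutive
# thickness and tracer snapshots below.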
import xarray as xr
import numpy as np
from matplotlib import pyplot as plt
import budgetcalcs as bc
import calc_wmt as wmt
# +
from distributed import Client, LocalCluster, progress
from dask_jobqueue import SLURMCluster
cluster = LocalCluster(
threads_per_worker=16,
n_workers=2,
dashboard_address=8726,
processes=False)
client = Client(cluster)
client
# +
# rootdir = '/archive/gam/MOM6-examples/ice_ocean_SIS2/Baltic_OM4_025/1yr/'
rootdir = '/archive/gam/ESM4/DECK/ESM4_piControl_D/gfdl.ncrc4-intel16-prod-openmp/7/history/'
averaging = '5daily'
vgrid = 'native'
dtst = '08990101'
ds = {}
files = ['common','heat','salt','thk','o2','no3']
for f in files:
print('Loading '+f+'.')
if f=='common':
filename = dtst+'.ocean_'+averaging+'_'+vgrid+'_'+dtst[0:2]+'*.nc'
else:
filename = dtst+'.ocean_'+f+'_'+averaging+'_'+vgrid+'_'+dtst[0:2]+'*.nc'
ds[f] = xr.open_mfdataset(rootdir+filename, chunks={'time': 1}, combine='by_coords')
filename_grid = dtst+'.ocean_static.nc'
delta_t = ds['heat']['average_DT'].astype('timedelta64[s]')
grid = xr.open_dataset(rootdir+filename_grid)
area = grid['areacello']
rho0 = 1035.0
cp = 3992.0
if vgrid == 'z':
    # ds is a dict of datasets, so rename each member
    ds = {f: d.rename({'z_l': 'zl'}) for f, d in ds.items()}
filename_snap = dtst+'.ocean_'+averaging+'_'+vgrid+'_snap*.nc'
ds_snap = xr.open_mfdataset(rootdir+filename_snap, chunks={'time': 1}, combine='by_coords')
# +
# Budget terms
terms = {}
terms['heat'] = ['opottemptend','T_advection_xy','Th_tendency_vert_remap',
'boundary_forcing_heat_tendency','internal_heat_heat_tendency',
'opottempdiff','opottemppmdiff','frazil_heat_tendency']
terms['salt'] = ['osalttend','S_advection_xy','Sh_tendency_vert_remap',
'boundary_forcing_salt_tendency','osaltdiff','osaltpmdiff']
terms['h'] = ['dhdt','dynamics_h_tendency','vert_remap_h_tendency',
'boundary_forcing_h_tendency']
terms['o2'] = ['o2h_tendency','o2_advection_xy','o2h_tendency_vert_remap',
'o2_dfxy_cont_tendency','o2_vdiffuse_impl','jo2']
terms['no3'] = ['no3h_tendency','no3_advection_xy','no3h_tendency_vert_remap',
'no3_dfxy_cont_tendency','no3_vdiffuse_impl','jno3']
l_is = {}
l_is['heat'] = np.arange(-4,34,0.5)
l_is['o2'] = np.arange(-5E-4,5.5E-4,0.5E-6)
l_is['no3'] = np.arange(-5E-5,3E-4,0.5E-5)
# +
# Corrections for o2 budget
# Correct MOM6 tendencies to account for mass in cell
# i.e. convert from [mol kg^-1 m s^-1] to [mol m^-2 s^-1]
for term in terms['o2'][:-1]:
ds['o2'][term] *= rho0
### THIS IS A HACK WHILE I WORK OUT THE VDIFFUSE_IMPL TERMS ###
# Calculate residual error
# OXYGEN
tendsum,error = bc.calc_budget(ds['o2'],terms['o2'][1:],terms['o2'][0],plot=False)
ds['o2']['o2_vdiffuse_impl']=error
# Corrections for no3 budget
# Correct MOM6 tendencies to account for mass in cell
# i.e. convert from [mol kg^-1 m s^-1] to [mol m^-2 s^-1]
for term in terms['no3'][:-1]:
ds['no3'][term] *= rho0
### THIS IS A HACK WHILE I WORK OUT THE VDIFFUSE_IMPL TERMS ###
# Calculate residual error
# NITRATE
tendsum,error = bc.calc_budget(ds['no3'],terms['no3'][1:],terms['no3'][0],plot=False)
ds['no3']['no3_vdiffuse_impl']=error
# +
l_name = 'o2'
if l_name == 'temp':
ds_name = 'heat'
else:
ds_name = l_name
greaterthan = False
# Time-mean fields
l = ds[ds_name][l_name] # tracer
# Snapshots: for evaluating budget tracer content tendency
# NOTE: time-mean i corresponds to the snapshots at i and i-1
# so, for example, diff(snap[1]-snap[0])/dt = mean[1]
l_snap = ds_snap[l_name] # Snapshots of volume-defining tracer
h_snap = ds_snap['thkcello'] # Snapshots of layer thickness (for tracer content calculation)
# -
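# A tiny self-contained numpy check of the snapshot convention noted above (toy
# numbers, not the model output): with a piecewise-constant tendency, the difference
# of the bounding snapshots divided by dt recovers each interval's time-mean
# tendency exactly, i.e. diff(snap[i] - snap[i-1]) / dt = mean[i].

```python
import numpy as np

dt = 5.0                                 # hypothetical 5-day averaging interval
tendency = np.array([1.0, -2.0, 0.5])    # time-mean tendency over each interval
# snapshots at the interval edges, built by integrating the tendency
snapshots = np.concatenate([[0.0], np.cumsum(tendency * dt)])
recovered = np.diff(snapshots) / dt      # finite difference of snapshots
```

# `recovered` matches `tendency` exactly, which is why the budget below pairs
# time-mean index i with snapshots i-1 and i.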
# Tricky case: all contours
l_i_vals = l_is[l_name]
# Tracer content tendency
L = wmt.calc_P(rho0*l_snap*h_snap,l_snap,l_i_vals,area,greaterthan=greaterthan)
V = wmt.calc_P(h_snap,l_snap,l_i_vals,area,greaterthan=greaterthan)
dL = L.diff('time').assign_coords({'time':l['time'][1:]})
dV = V.diff('time').assign_coords({'time':l['time'][1:]})
dLdt = dL/delta_t[1:].astype('float')
dVdt = dV/delta_t[1:].astype('float')
dLidt = (dLdt-rho0*dVdt*dVdt[l_name+'_bin'])
# Forcing tendencies
forcing = {}
for term in terms[l_name][3:]:
forcing[term] = wmt.calc_P(ds[ds_name][term],l,l_i_vals,area,greaterthan=greaterthan)[1:,:]
# External surface forcing
p_e = ds['thk']['boundary_forcing_h_tendency']
P_e = wmt.calc_P(p_e,l,l_i_vals,area,greaterthan=greaterthan)[1:,:]
P_e = rho0*P_e*P_e[l_name+'_bin']
# %%time
# Load the bits that are needed
print('Loading dLidt.')
dLidt.load()
for term in terms[l_name][3:]:
print('Loading '+term+'.')
forcing[term].load()
P_e.load()
residual = dLidt.copy()
for term in terms[l_name][3:]:
residual -= forcing[term]
residual += P_e
# +
fig,ax = plt.subplots(figsize=(12,4))
ax.plot(dLidt[l_name+'_bin'],dLidt.mean('time'),label='dLidt')
for term in terms[l_name][3:]:
ax.plot(forcing[term][l_name+'_bin'],forcing[term].mean('time'),label=term)
ax.plot(P_e[l_name+'_bin'],P_e.mean('time'),label='P_e')
ax.plot(residual[l_name+'_bin'],residual.mean('time'),label='residual')
ax.legend()
# -
l_c = 80E-6
fig,ax = plt.subplots(figsize=(12,4))
ax.plot(dLidt['time'],dLidt.sel({l_name+'_bin':l_c},method='nearest'),label='dLidt')
for term in terms[l_name][3:]:
ax.plot(forcing[term]['time'],forcing[term].sel({l_name+'_bin':l_c},method='nearest'),label=term)
ax.plot(P_e['time'],P_e.sel({l_name+'_bin':l_c},method='nearest'),label='P_e')
ax.plot(residual['time'],residual.sel({l_name+'_bin':l_c},method='nearest'),label='residual')
ax.legend()
# ***
# ### SINGLE CONTOUR
# Simple case: single contour
l_c = 0 # tracer contour value
volume = l > l_c # condition for volume defined by contour
volume_snap = l_snap > l_c
# Tracer content tendency
L = (rho0*cp*l_snap*h_snap*area).where(volume_snap,0).sum(['xh','yh','zl'])
V = (h_snap*area).where(volume_snap,0).sum(['xh','yh','zl'])
dL = L.diff('time').assign_coords({'time':l['time'][1:]})
dV = V.diff('time').assign_coords({'time':l['time'][1:]})
dLdt = dL/delta_t[1:].astype('float')
dVdt = dV/delta_t[1:].astype('float')
dLidt = dLdt-rho0*cp*l_c*dVdt
# Forcing tendencies
# NOTE: f, g, m, k and z are assumed to hold the per-term tendency fields for the
# chosen tracer (e.g. boundary forcing, mixing, sources); define them before running this cell
F = (f*area).where(volume,0).sum(['xh','yh','zl'])[1:].load()
G = (g*area).where(volume,0).sum(['xh','yh','zl'])[1:].load()
M = (m*area).where(volume,0).sum(['xh','yh','zl'])[1:].load()
K = (k*area).where(volume,0).sum(['xh','yh','zl'])[1:].load()
Z = (z*area).where(volume,0).sum(['xh','yh','zl'])[1:].load()
# Internal surface forcing
p_e = ds['thk']['boundary_forcing_h_tendency']*rho0*cp*l_c
P_e = (p_e*area).where(volume,0).sum(['xh','yh','zl'])[1:].load()
F_i = F - P_e
# Residual
residual = dLidt-F_i-G-M-K-Z
# +
fig,ax = plt.subplots(figsize=(12,4))
ax.plot(dLidt,label='dLidt')
ax.plot(F_i,label='F_i')
ax.plot(G,label='G')
ax.plot(M,label='M')
ax.plot(K,label='K')
ax.plot(Z,label='Z')
ax.plot(residual,label='residual')
ax.legend()
# -
fig,ax = plt.subplots(figsize=(4,4))
ax.plot(0,dLidt.mean('time').values,'.')
ax.plot(0,F_i.mean('time').values,'.')
ax.plot(0,G.mean('time').values,'.')
ax.plot(0,M.mean('time').values,'.')
ax.plot(0,K.mean('time').values,'.')
ax.plot(0,Z.mean('time').values,'.')
ax.plot(0,residual.mean('time').values,'.')
| notebooks/budgets/calc_internal-tracer-budget.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Regression with PyTorch
# ## 1. About Linear Regression
#
# ### 1.1 Simple Linear Regression Basics
# - Allows us to understand **relationship** between two **continuous variables**
# - Example
# - x: independent variable
# - weight
# - y: dependent variable
# - height
# - $y = \alpha x + \beta$
# ### 1.2 Example of simple linear regression
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
np.random.seed(1)
n = 50
x = np.random.randn(n)
y = x * np.random.randn(n)
colors = np.random.rand(n)
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.scatter(x, y, c=colors, alpha=0.5)
plt.show()
# -
# ### 1.3 Aim of Linear Regression
# - Minimize the distance between the points and the line ($y = \alpha x + \beta$)
# - Adjusting
# - Coefficient: $\alpha$
# - Bias/intercept: $\beta$
#
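# Minimizing that distance has a closed-form solution for simple linear regression.
# As a quick sketch in plain numpy (independent of the PyTorch model below):
# $\hat\alpha = \mathrm{cov}(x, y)/\mathrm{var}(x)$ and $\hat\beta = \bar y - \hat\alpha \bar x$.

```python
import numpy as np

# Ordinary least squares for y = alpha*x + beta, via the closed-form estimates
x = np.array([0., 1., 2., 3., 4.])
y = 2. * x + 1.                      # noiseless data from y = 2x + 1

alpha_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # cov(x, y) / var(x)
beta_hat = y.mean() - alpha_hat * x.mean()

print(alpha_hat, beta_hat)  # → 2.0 1.0
```

# On noiseless data the estimates recover the true coefficient and bias exactly;
# gradient descent (used below) approaches the same solution iteratively.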
# ## 2. Building a Linear Regression Model with PyTorch
#
# ### 2.1 Example
# - Coefficient: $\alpha = 2$
# - Bias/intercept: $\beta = 1$
# - Equation: $y = 2x + 1$
# ### 2.2 Building a Toy Dataset
x_values = [i for i in range(11)]
x_values
# Convert to numpy
x_train = np.array(x_values, dtype=np.float32)
x_train.shape
# IMPORTANT: 2D required
x_train = x_train.reshape(-1, 1)
x_train.shape
# $y = 2x + 1$
y_values = [2*i + 1 for i in x_values]
y_values
# In case you're unfamiliar with list comprehensions, here is the equivalent loop...
y_values = []
for i in x_values:
result = 2*i + 1
y_values.append(result)
y_values
y_train = np.array(y_values, dtype=np.float32)
y_train.shape
# IMPORTANT: 2D required
y_train = y_train.reshape(-1, 1)
y_train.shape
# ### 2.3 Building Model
# **Critical Imports**
import torch
import torch.nn as nn
# **Create Model**
# 1. Linear model
# - True Equation: $y = 2x + 1$
# 2. Forward
# - Example
# - Input $x = 1 $
# - Output $\hat y = ?$
# Create class
class LinearRegressionModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearRegressionModel, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = self.linear(x)
return out
# **Notes**
#
# - We subclass **nn.Module** every single time we define a model
#
# - The **super function calls nn.Module's constructor**, giving us access to all of nn.Module's functionality
#
# - In the case of **self.linear**, the input dimension is the size of x and the output dimension is the size of y
#
#
# - Every model from now on will have a **forward** method where we pass in x; it is called on every forward pass
# **Instantiate Model Class**
# - input: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# - desired output: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]
# +
input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
# -
# **Instantiate Loss Class**
# - MSE Loss: Mean Squared Error
# - $MSE = \frac{1}{n} \sum_{i=1}^n(\hat y_i - y_i)^2$
# - $\hat y$: prediction
# - $y$: true value
criterion = nn.MSELoss()
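# To make the formula concrete, here is a tiny hand check in plain numpy (a sketch;
# the training loop below uses `nn.MSELoss` directly):

```python
import numpy as np

# MSE = (1/n) * sum((y_hat - y)**2), checked by hand on a tiny example
y_hat = np.array([1.0, 3.0, 5.0])   # predictions
y = np.array([1.0, 2.0, 7.0])       # true values

mse = np.mean((y_hat - y) ** 2)     # (0 + 1 + 4) / 3 = 5/3
print(mse)
```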
# **Instantiate Optimizer Class**
# - Simplified equation
# - $\theta = \theta - \eta \cdot \nabla_\theta $
# - $\theta$: parameters (our variables)
# - $\eta$: learning rate (how fast we want to learn)
# - $\nabla_\theta$: parameters' gradients
# - Even simpler equation
# - `parameters = parameters - learning_rate * parameters_gradients`
# - parameters: $\alpha$ and $\beta$ in $ y = \alpha x + \beta$
# - desired parameters: $\alpha = 2$ and $\beta = 1$ in $ y = 2x + 1$
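# One hand-rolled gradient-descent step on $y = \alpha x + \beta$ with MSE loss — a
# numpy sketch of the update rule above, separate from the `torch.optim.SGD`
# optimizer instantiated below (toy data and learning rate are assumptions):

```python
import numpy as np

x = np.array([0., 1., 2.])
y = 2. * x + 1.                     # targets from y = 2x + 1
alpha, beta = 0.0, 0.0              # initial parameters (theta)
lr = 0.1                            # learning rate (eta)

y_hat = alpha * x + beta
grad_alpha = np.mean(2 * (y_hat - y) * x)   # dMSE/dalpha
grad_beta = np.mean(2 * (y_hat - y))        # dMSE/dbeta

# parameters = parameters - learning_rate * parameters_gradients
alpha -= lr * grad_alpha
beta -= lr * grad_beta
print(alpha, beta)
```

# Repeating this step is exactly what `optimizer.step()` does for us after
# `loss.backward()` has filled in the gradients.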
# +
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# -
# **Train Model**
# - 1 epoch: going through the whole x_train data once
# - 100 epochs:
# - 100x mapping `x_train = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`
#
# - Process
# 1. Convert inputs/labels to tensors with gradients
# 2. Clear gradient buffers
# 3. Get output given inputs
# 4. Get loss
# 5. Get gradients w.r.t. parameters
# 6. Update parameters using gradients
# - `parameters = parameters - learning_rate * parameters_gradients`
# 7. REPEAT
epochs = 100
# **Notes**
#
# - epochs can be any number
# - as we do more epochs, our final parameters will more accurately approximate the values that we want
# - This may not necessarily hold if we train for too many epochs.
# - we're incrementing epoch first as we want our first epoch to be 1
# - We're converting our numpy arrays to torch tensors with `torch.from_numpy`
# - We call `.requires_grad_()` on the inputs so that gradients are tracked through them
# - zero_grad clears old gradients from the last step (otherwise you’d just accumulate the gradients from all loss.backward() calls).
#   See https://stackoverflow.com/a/48009142/9292995
# - loss.backward() computes the derivative of the loss w.r.t. the parameters (or anything requiring gradients) using backpropagation.
for epoch in range(epochs):
epoch += 1
# Convert numpy array to torch Variable
inputs = torch.from_numpy(x_train).requires_grad_()
labels = torch.from_numpy(y_train)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward to get output
outputs = model(inputs)
# Calculate Loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
print('epoch {}, loss {}'.format(epoch, loss.item()))
# **Compare Data**
# Purely inference
predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()
predicted
# y = 2x + 1
y_train
# **Plot Graph**
# +
# Clear figure
plt.clf()
# Get predictions
predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()
# Plot true data
plt.plot(x_train, y_train, 'go', label='True data', alpha=0.5)
# Plot predictions
plt.plot(x_train, predicted, '--', label='Predictions', alpha=0.5)
# Legend and plot
plt.legend(loc='best')
plt.show()
# -
# **Save Model**
save_model = False
if save_model is True:
# Saves only parameters
# alpha & beta
torch.save(model.state_dict(), 'awesome_model.pkl')
# **Load Model**
load_model = False
if load_model is True:
model.load_state_dict(torch.load('awesome_model.pkl'))
# ## 3. Building a Linear Regression Model with PyTorch (GPU)
#
#
# **CPU Summary**
# +
import torch
import torch.nn as nn
'''
STEP 1: CREATE MODEL CLASS
'''
class LinearRegressionModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearRegressionModel, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = self.linear(x)
return out
'''
STEP 2: INSTANTIATE MODEL CLASS
'''
input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
'''
STEP 3: INSTANTIATE LOSS CLASS
'''
criterion = nn.MSELoss()
'''
STEP 4: INSTANTIATE OPTIMIZER CLASS
'''
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
'''
STEP 5: TRAIN THE MODEL
'''
epochs = 100
for epoch in range(epochs):
epoch += 1
# Convert numpy array to torch Variable
inputs = torch.from_numpy(x_train).requires_grad_()
labels = torch.from_numpy(y_train)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward to get output
outputs = model(inputs)
# Calculate Loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
# -
# GPU: 2 things must be on GPU
# - `model`
# - `tensors with gradients`
# +
import torch
import torch.nn as nn
import numpy as np
'''
STEP 1: CREATE MODEL CLASS
'''
class LinearRegressionModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearRegressionModel, self).__init__()
self.linear = nn.Linear(input_dim, output_dim)
def forward(self, x):
out = self.linear(x)
return out
'''
STEP 2: INSTANTIATE MODEL CLASS
'''
input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
#######################
# USE GPU FOR MODEL #
#######################
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
'''
STEP 3: INSTANTIATE LOSS CLASS
'''
criterion = nn.MSELoss()
'''
STEP 4: INSTANTIATE OPTIMIZER CLASS
'''
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
'''
STEP 5: TRAIN THE MODEL
'''
epochs = 100
for epoch in range(epochs):
epoch += 1
# Convert numpy array to torch Variable
#######################
# USE GPU FOR MODEL #
#######################
inputs = torch.from_numpy(x_train).to(device)
labels = torch.from_numpy(y_train).to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward to get output
outputs = model(inputs)
# Calculate Loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
# Logging
print('epoch {}, loss {}'.format(epoch, loss.item()))
# -
# # Summary
# - Simple **linear regression basics**
# - $y = Ax + B$
# - $y = 2x + 1$
# - **Example** of simple linear regression
# - **Aim** of linear regression
# - Minimizing distance between the points and the line
# - Calculate "distance" through `MSE`
# - Calculate `gradients`
# - Update parameters with `parameters = parameters - learning_rate * gradients`
# - Slowly update parameters $A$ and $B$ to model the linear relationship between $y$ and $x$ of the form $y = 2x + 1$
# - Built a linear regression **model** in **CPU and GPU**
# - Step 1: Create Model Class
# - Step 2: Instantiate Model Class
# - Step 3: Instantiate Loss Class
# - Step 4: Instantiate Optimizer Class
# - Step 5: Train Model
# - Important things to be on **GPU**
# - `model`
# - `tensors with gradients`
# - How to bring to **GPU**?
# - `model.to(device)`
# - `tensor.to(device)`
| deep_learning/practical_pytorch/.ipynb_checkpoints/pytorch_linear_regression-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Exercise 1
# Write a game_setup function that does the following:
#
# 1. Creates a numpy array representing the 3 doors
# 2. Creates a numpy array representing the prizes (0 for goat and 1 for car). You should randomly choose one of the elements of the prize array to be a car and the others should be goats.
# 3. Your function should return the arrays of doors and prizes
import numpy as np
def game_setup():
doors = np.array([1,2,3])
prizes = np.array([0,0,1])
rand_prizes = np.random.choice((prizes),3,replace=False)
assigned_prizes = np.array(list(zip(doors,rand_prizes)))
return assigned_prizes
game_setup()
# ## Exercise 2
# Write a function choose_door that asks the user to input an integer 1,2,3 to choose a door and returns the door they chose. You should catch any errors that the user might make and continue asking for input until the user inputs a valid choice.
def choose_door():
valid_input = False
while valid_input == False:
door_choice = input("Which door would you like to choose? ")
if door_choice != ('1') and door_choice != ('2') and door_choice != ('3'):
print("Error: Must choose doors 1,2, or 3")
else:
choice = int(door_choice)
break
    print("You have chosen door number", choice, "!")
    return choice
choose_door()
# ## Exercise 3
# Write a function switch_stay that asks the user if they want to switch to the remaining door or stay with their original choice. Catch any errors and continue asking for input until the user inputs a valid choice.
# +
def game_setup_new():
doors = np.array([1,2,3])
prizes = np.array([0,0,1])
rand_prizes = np.random.choice((prizes),3,replace=False)
assigned_prizes = np.array(list(zip(doors,rand_prizes)))
return assigned_prizes
def switch_stay():
choice = None
assigned_prizes = game_setup_new()
valid_input = False
while valid_input == False:
door_choice = input("Which door would you like to choose? ")
if door_choice != ('1') and door_choice != ('2') and door_choice != ('3'):
print("Error: Must choose doors 1,2, or 3")
else:
            valid_input = True
break
index = int(door_choice)
possible_reveal_index = index - 1
if index == 1:
choice = assigned_prizes[0]
elif index == 2:
choice = assigned_prizes[1]
elif index == 3:
choice = assigned_prizes[2]
print("You have chosen door number",index,"!")
## revealing a door
possible_reveals = np.delete(assigned_prizes,possible_reveal_index,axis=0)
##print(possible_reveals)
bad_doors = np.empty((0, 2), int)
for i in range(len(possible_reveals)):
if possible_reveals[i][1] == 0:
bad_doors = np.append(bad_doors, np.array([possible_reveals[i]]),axis = 0)
revealed_door = bad_doors[np.random.choice(bad_doors.shape[0], 1 ,replace = False), :]
print('Door Number',revealed_door[0][0],'has been revealed to have a goat!')
revealed_door_index = revealed_door[0][0]-1
#print(revealed_door[0])
choice_index = choice[0]-1
#print(choice)
possible_switches = np.delete(assigned_prizes,[choice_index,revealed_door_index],axis = 0)
#print(possible_switches)
print("You can now switch to door number",possible_switches[0][0],'!')
## Switch doors
valid_decision = False
while valid_decision == False:
switch_decision = input('Would you like to switch or stay? ')
if switch_decision != ('switch') and switch_decision != ('stay'):
print('Error: Must either choose switch or stay.')
elif switch_decision == ('switch'):
choice = possible_switches[0]
print('Your door is now number',choice[0],'!')
break
elif switch_decision == ('stay'):
print('Your door is still number',index,'!')
break
# -
switch_stay()
# ## Exercise 4
# Write a Monty Hall game simulator that introduces the game and proceeds in the steps given in the introduction. After step 4, your function (the host) should tell the player if they won the car! or got the goat :-(
#
# 1. Be sure to add print statements with appropriate messages to update the player on the status of the game
# 2. In the step where the host reveals a door with a goat behind it, your host should randomly choose from the remaining doors that have goats behind them
# 3. Test your function 3 times in the cells below
#
#
#
#
def monte_hall_sim():
choice = None
assigned_prizes = game_setup_new()
    while True:
        door_choice = input("Which door would you like to choose? ")
        if door_choice not in ('1', '2', '3'):
            print("Error: Must choose door 1, 2, or 3.")
        else:
            break
index = int(door_choice)
possible_reveal_index = index - 1
    choice = assigned_prizes[possible_reveal_index]
print("You have chosen door number",index,"!")
## revealing a door
possible_reveals = np.delete(assigned_prizes,possible_reveal_index,axis=0)
##print(possible_reveals)
bad_doors = np.empty((0, 2), int)
for i in range(len(possible_reveals)):
if possible_reveals[i][1] == 0:
bad_doors = np.append(bad_doors, np.array([possible_reveals[i]]),axis = 0)
revealed_door = bad_doors[np.random.choice(bad_doors.shape[0], 1 ,replace = False), :]
print('Door Number',revealed_door[0][0],'has been revealed to have a goat!')
revealed_door_index = revealed_door[0][0]-1
#print(revealed_door[0])
choice_index = choice[0]-1
#print(choice)
possible_switches = np.delete(assigned_prizes,[choice_index,revealed_door_index],axis = 0)
#print(possible_switches)
print("You can now switch to door number",possible_switches[0][0],'!')
## Switch doors
    while True:
        switch_decision = input('Would you like to switch or stay? ')
        if switch_decision not in ('switch', 'stay'):
            print('Error: Must either choose switch or stay.')
        elif switch_decision == 'switch':
            choice = possible_switches[0]
            print('Your door is now number', choice[0], '!')
            break
        else:
            print('Your door is still number', index, '!')
            break
#print(choice)
if choice[1] == 1:
print('Congratulations! You won a brand new Car!')
elif choice[1] == 0:
print('Sorry, you chose the goat. Try again next time!')
monte_hall_sim()
monte_hall_sim()
monte_hall_sim()
# ## Exercise 5
# Modify your function from Exercise 4 to run a Monte Hall game automatically without any user input.
#
# 1. Your function should have a Boolean variable switch whose default value is True, and indicates that the player will choose to switch (if True) or stay (if False)
# 2. The player's door should be randomly chosen for step 1 of the game
# 3. Your function should output 1 if the player wins the car and 0 if the player gets the goat
# 4. Your function should suppress any print statements from that in Exercise 4
def auto_mhall_sim(switch=True):
    assigned_prizes = game_setup_new()
    # step 1: randomly choose the player's door
    door_choice = assigned_prizes[np.random.choice(assigned_prizes.shape[0], 1, replace=False), :]
    index = int(door_choice[0][0])
    possible_reveal_index = index - 1
    choice = assigned_prizes[possible_reveal_index]
    # reveal a goat door among the remaining doors
    possible_reveals = np.delete(assigned_prizes, possible_reveal_index, axis=0)
    bad_doors = np.empty((0, 2), int)
    for i in range(len(possible_reveals)):
        if possible_reveals[i][1] == 0:
            bad_doors = np.append(bad_doors, np.array([possible_reveals[i]]), axis=0)
    revealed_door = bad_doors[np.random.choice(bad_doors.shape[0], 1, replace=False), :]
    revealed_door_index = revealed_door[0][0] - 1
    choice_index = choice[0] - 1
    possible_switches = np.delete(assigned_prizes, [choice_index, revealed_door_index], axis=0)
    # switch (or stay) according to the switch flag
    if switch:
        choice = possible_switches[0]
    # 1 = car, 0 = goat
    return 1 if choice[1] == 1 else 0
auto_mhall_sim()
# ## Exercise 6
# 1. Write a script that specifies a number of trials num_trials=100, runs your automatic Monte Hall simulator from Exercise 5 with switch=True, num_trials times and stores the output as an ndarray
# 2. Repeat the process from step 1 num_trials times. Note you can do these 2 steps in a nested for loop -- create a numpy array of size (num_trials,num_trials) with each entry initialized to 0, and for each [i,j] entry, capture the output of your Monte Hall simulator
# 3. Sum your numpy array from step 2 along the row axis (meaning add all elements of a given column together) to obtain an array where each entry captures how many times the player won out of num_trials games. Call this array winners.
# 4. Using pyplot's hist command (see Probability lecture), plot a histogram of the winners array from the previous step with 15 bins and range spanning the minimum to the maximum of winners
# 5. Also report the min, max, mean, median, and standard deviation of winners
# 6. Repeat Steps 1--5 for switch=False (i.e., the player choosing to stay)
def auto_mhall_sim(switch=True):
    assigned_prizes = game_setup_new()
    # step 1: randomly choose the player's door
    door_choice = assigned_prizes[np.random.choice(assigned_prizes.shape[0], 1, replace=False), :]
    index = int(door_choice[0][0])
    possible_reveal_index = index - 1
    choice = assigned_prizes[possible_reveal_index]
    # reveal a goat door among the remaining doors
    possible_reveals = np.delete(assigned_prizes, possible_reveal_index, axis=0)
    bad_doors = np.empty((0, 2), int)
    for i in range(len(possible_reveals)):
        if possible_reveals[i][1] == 0:
            bad_doors = np.append(bad_doors, np.array([possible_reveals[i]]), axis=0)
    revealed_door = bad_doors[np.random.choice(bad_doors.shape[0], 1, replace=False), :]
    revealed_door_index = revealed_door[0][0] - 1
    choice_index = choice[0] - 1
    possible_switches = np.delete(assigned_prizes, [choice_index, revealed_door_index], axis=0)
    # switch (or stay) according to the switch flag
    if switch:
        choice = possible_switches[0]
    # 1 = car, 0 = goat
    return 1 if choice[1] == 1 else 0
import matplotlib.pyplot as plt
num_trials = 1000
# switch=True: num_trials runs of num_trials games each
results = np.zeros(shape=(num_trials, num_trials))
for i in range(num_trials):
    for j in range(num_trials):
        results[i][j] = auto_mhall_sim(switch=True)
print(results)
# win proportion per run of num_trials games (the per-row sum divided by num_trials)
winners = np.mean(results, axis=1)
print(winners)
plt.hist(winners, bins=15, range=(winners.min(), winners.max()))
print(min(winners))
print(max(winners))
print(np.mean(winners))
print(np.median(winners))
print(np.std(winners))
# switch=False: the player stays with the initial guess
results_false = np.zeros(shape=(num_trials, num_trials))
for i in range(num_trials):
    for j in range(num_trials):
        results_false[i][j] = auto_mhall_sim(switch=False)
print(results_false)
winners_false = np.mean(results_false, axis=1)
print(winners_false)
plt.hist(winners_false, bins=15, range=(winners_false.min(), winners_false.max()))
print(min(winners_false))
print(max(winners_false))
print(np.mean(winners_false))
print(np.median(winners_false))
print(np.std(winners_false))
# ## Exercise 7 -- Conclusion
# 1. Based on your observations from Exercise 6, what do you estimate the probability of winning to be if the player chooses to switch vs. choosing to stay?
# 2. Should the player switch doors or stay with their initial guess?
# 1. The probability of winning when switching is on average approximately 0.665, and when staying it is approximately 0.331.
# 2. The player should always switch, since switching gives an approximately 2/3 chance of winning the prize.
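# The simulated rates above can be cross-checked analytically. The sketch below is standalone (not part of the assignment code): it enumerates every equally likely combination of car placement and first pick and counts wins for each strategy.

```python
from itertools import product

def exact_win_rates():
    """Enumerate every (car door, first pick) pair and count wins for each strategy."""
    switch_wins = stay_wins = total = 0
    for car, pick in product(range(3), range(3)):
        total += 1
        stay_wins += (pick == car)
        # after the host reveals a goat, switching wins exactly
        # when the first pick was a goat
        switch_wins += (pick != car)
    return switch_wins / total, stay_wins / total

switch_p, stay_p = exact_win_rates()
print(switch_p, stay_p)  # 2/3 and 1/3
```

# Switching wins whenever the first pick was a goat, which happens 2 times out of 3, matching the simulated averages.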
| Labs/Lab6/Lab6-MonteHallSim.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Wi-Fi Localization: Classification Models
#
# <NAME>, _group 10LF383_
#
# <i>Dataset source:</i> http://archive.ics.uci.edu/ml/datasets/Wireless+Indoor+Localization
#
# <i>Dataset description:</i> [DOI 10.1007/978-981-10-3322-3_27 via ResearchGate](Docs/chp_10.1007_978-981-10-3322-3_27.pdf)
#
# <i>Synopsis:</i> The _Wireless Indoor Localization_ dataset contains 2000 measurements of the signal strength (in dBm) received from the routers of an office in Pittsburgh. The office has seven routers and four rooms; once per second, a user records with a smartphone the strength of the signals coming from the seven routers, and each record is labeled with the room the user was in at the time of the measurement (1, 2, 3, or 4).
#
# The figure below illustrates a sample from the dataset: <br><br>
# 
#
# In what follows, the Class column (the room) is represented by y, and the columns WS1 - WS7 (features: the signal strength from each router) by X.
import numpy as np
import pandas as pd
from IPython.display import display, HTML
header = ['WS1', 'WS2', 'WS3', 'WS4', 'WS5', 'WS6', 'WS7', 'Class']
data_wifi = pd.read_csv("./Datasets/wifi_localization.txt", names=header, sep='\t')
display(HTML("<i>Dataset overview:</i>"))
display(data_wifi)
X = data_wifi.values[:, :7]
y = data_wifi.values[:, -1]
folds = 5
# # <u>The classification models _in a nutshell_</u>
# To classify the data in the _Wi-Fi localization_ dataset we will use the following classification models: kNN, Decision Tree, MLP, Gaussian NB, and Random Forest. The sections below explain how each algorithm works and highlight the hyperparameters and formulas behind it.
# <div style="text-align:center"><img src="./Images/xkcd_machine_learning.png"><br>"the hyperparameters and formulas behind it"<br>source: <a href="https://xkcd.com/1838/">xkcd 1838: Machine Learning</a></div>
# ### [1. <i>k</i>-nearest neighbors classifier](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier)
# class `sklearn.neighbors.KNeighborsClassifier`<i>(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)</i>
# In a classification problem, the kNN (_k_-nearest neighbors) algorithm identifies, within the training set, the _k_ nearest neighbors of each unclassified item, regardless of their labels. The class of an unclassified item is then determined by voting: the class that the majority of its neighbors belong to is taken as the item's class.
# <div style="text-align:center"><img style="width: 500px" src="./Images/knn_example.png"><br>Example: classifying item c with 3NN. Voting determines the class of c: <b>o</b>.<br>source: <a href="http://youtu.be/UqYde-LULfs">YouTube (<i>How kNN algorithm works</i> by <NAME>)</a></div><br>
#
# Several metrics can be used to measure the distance between items. Scikit-learn accepts any Python function as a metric, but by default it uses the _Minkowski_ metric. A few metrics commonly used with kNN:
#
# - _Minkowski distance_: $d_{st} = \sqrt[p]{\sum_{j=1}^n |x_{sj} - y_{tj}|^p}$ (_Note_: p is a hyperparameter exposed by Scikit-learn)
# - _Euclidean distance_: $d(\textbf{x},\textbf{y}) = \sqrt{\sum_{i=1}^n (y_i - x_i)^2}$
# - _Manhattan (city block) distance_: $d_{st} = \sum_{j=1}^n |x_{sj} - y_{tj}|$
# - _standardized Euclidean distance_ (the special case of the Mahalanobis distance with a diagonal covariance matrix): $d(\textbf{x},\textbf{y}) = \sqrt{\sum_{i=1}^n \frac{(x_i - y_i)^2}{s_i^2}}$, where $s_i$ is the standard deviation of $x_i$ and $y_i$ over the sample
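# As an illustrative sketch (not part of the original notebook), the distance formulas above can be evaluated directly with NumPy; `p=1` recovers the Manhattan distance and `p=2` the Euclidean one.

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance between two feature vectors."""
    return np.sum(np.abs(x - y) ** p) ** (1 / p)

# two hypothetical rows of router signal strengths (dBm)
x = np.array([-64.0, -56.0, -61.0])
y = np.array([-68.0, -57.0, -61.0])

print(minkowski(x, y, 1))             # Manhattan: 5.0
print(minkowski(x, y, 2))             # Euclidean: sqrt(17)
print(np.sqrt(np.sum((x - y) ** 2)))  # same as p=2
```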
# ### [2. Decision tree classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier)
# class `sklearn.tree.DecisionTreeClassifier`<i>(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort='deprecated', ccp_alpha=0.0)</i>
# A decision tree is a flowchart-like tree structure in which an internal node represents a feature, a branch is a decision rule, and each leaf is an outcome, a classification. The decision tree algorithm selects the best feature using an attribute selection measure (ASM), converts that feature node into a decision node, and splits the dataset into subsets. The process runs recursively until the tree contains only decision nodes and outcome leaf nodes. The deeper the tree, the more complex the decision rules and the more closely the model fits the training data (at the risk of overfitting).
#
# <div style="text-align:center"><img style="width: 600px" src="./Images/dt_diagram.png"><br>The structure of a decision tree.<br>source: <a href="https://www.kdnuggets.com/2020/01/decision-tree-algorithm-explained.html">KDnuggets (Decision Tree Algorithm, Explained)</a></div>
#
# <br>To measure the quality of a split, Scikit-learn provides two ASM criteria:
#
# - _Gini impurity_ (how often a randomly chosen element would be mislabeled if it were labeled according to the distribution of labels in a subset): <br>$Gini(p) = 1 - \sum_{j=1}^c p_j^2$ <br>
# - _entropy_ (similar to Gini impurity, but more computationally intensive because of the logarithm): <br>$H(p) = - \sum_{j=1}^c p_j \log p_j$
#
# (where c is the number of classes (labels) and $p_j$ is the proportion of samples labeled with class $j$, $j \in \{1, 2, ..., c\}$).
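# As a quick standalone sketch of the two criteria (the helper names are assumptions for illustration, not Scikit-learn internals), both can be computed from a vector of class proportions.

```python
import numpy as np

def gini(p):
    """Gini impurity: 1 - sum_j p_j^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    """Entropy: -sum_j p_j log(p_j), skipping empty classes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

print(gini([0.25, 0.25, 0.25, 0.25]))  # 0.75: the most impure 4-class node
print(gini([1.0, 0.0, 0.0, 0.0]))      # 0.0: a pure node
print(entropy([0.5, 0.5]))             # log(2), about 0.693
```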
# ### [3. Multilayer perceptron (MLP) classifier](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier)
# class `sklearn.neural_network.MLPClassifier`<i>(hidden_layer_sizes=(100, ), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)</i>
# _Perceptrons_ are a class of classifiers used in supervised learning, modeled mathematically on the biological neuron. In particular, _multilayer perceptrons (MLPs)_ form neural networks with several layers of perceptrons: an input layer, one or more intermediate (hidden) layers, and an output layer.
#
# <div style="text-align:center"><img style="width: 500px" src="./Images/mlp_diagram.png"><br>A multilayer perceptron, illustrated.<br>source: <a href="https://github.com/ledell/sldm4-h2o/blob/master/sldm4_h2o_oct2016.pdf">GitHub (ledell/sldm4-h2o)</a></div>
#
# <br>In a neural network, an _activation function_ defines the output of a perceptron given a set of inputs. In its simplest form the function returns a binary result (a linear threshold, output 0 or 1): by analogy with the biological neuron, whether an electrical impulse travels through its axon or not. Modern neural networks with several layers of perceptrons also use non-binary (non-linear) activation functions. Scikit-learn's MLP classifier supports activation functions of both kinds:
# - _identity function_: $f(x) = x$
# - _logistic sigmoid_: $f(x) = \frac{1}{1 + \exp(-x)}$
# - _hyperbolic tangent_: $f(x) = \tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{e^x - e^{-x}}{e^x + e^{-x}}$
# - _Rectified Linear Unit (ReLU)_: $f(x) = \max(0, x) = \begin{cases} 0 & \text{if } x \leq 0 \\ x & \text{if } x > 0 \end{cases}$
#
# Scikit-learn's MLP classifier also offers several weight-optimization algorithms (solvers): _LBFGS_ (a quasi-Newton method), _SGD_ (stochastic gradient descent), and _Adam_ (an algorithm derived from SGD, created by <NAME> and <NAME>).
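# The four activation functions can be sketched directly in NumPy (illustrative only; `MLPClassifier` applies them internally to the hidden layers).

```python
import numpy as np

def identity(x):
    # f(x) = x
    return x

def logistic(x):
    # f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # f(x) = tanh(x)
    return np.tanh(x)

def relu(x):
    # f(x) = max(0, x), applied elementwise
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
for f in (identity, logistic, tanh, relu):
    print(f.__name__, f(x))
# relu maps [-2, 0, 2] to [0, 0, 2]; logistic(0) = 0.5
```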
# ### [4. Gaussian Naïve Bayes classifier](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB)
# class `sklearn.naive_bayes.GaussianNB`<i>(priors=None, var_smoothing=1e-09)</i>
# The _Gaussian Naïve Bayes_ classification algorithm belongs to the _Naïve Bayes_ family of classifiers, which assume that the presence of a feature in a class is unaffected by the presence of other features; in short, the features contribute independently to the probability of belonging to a class. In particular, the _Gaussian Naïve Bayes_ algorithm assumes the feature likelihood follows the probability density function (PDF) of a normal (Gaussian) distribution:
# $$\large P(x_i | y) = \frac{1}{\sqrt{2 \pi \sigma_y^2}}\exp\bigg(-\frac{(x_i - \mu_y)^2}{2 \sigma_y^2}\bigg),$$
# where the parameters $\sigma_y$ and $\mu_y$, the standard deviation and the mean, are estimated using maximum likelihood estimation (MLE), a method of estimating the parameters of a PDF by maximizing a likelihood function (a measure of how well a sample fits a statistical model).
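# A minimal standalone sketch of this estimation step (the function names below are assumptions for illustration, not the notebook's code): the MLE of the per-class mean and standard deviation are simply the sample statistics of each feature within that class, and the PDF above then scores a new value.

```python
import numpy as np

def fit_gaussian_params(x, y):
    """MLE per class: sample mean and (population) std of feature x within each class."""
    return {c: (x[y == c].mean(), x[y == c].std()) for c in np.unique(y)}

def gaussian_pdf(x, mu, sigma):
    """Normal PDF used as the per-feature likelihood P(x_i | y)."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# hypothetical signal strengths (dBm) for two rooms
x = np.array([-60.0, -62.0, -58.0, -40.0, -42.0, -38.0])
y = np.array([1, 1, 1, 2, 2, 2])
params = fit_gaussian_params(x, y)
print(params)
# a -59 dBm reading is far more likely under class 1 than class 2
print(gaussian_pdf(-59.0, *params[1]) > gaussian_pdf(-59.0, *params[2]))
```

# Note that `numpy`'s default `std` (ddof=0) is exactly the MLE of the standard deviation.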
# ### [5. Random Forest classifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier)
# class `sklearn.ensemble.RandomForestClassifier`<i>(n_estimators=100, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)</i>
# A _Random forest_ classifier combines the hypotheses of several randomized decision trees (_random trees_), each obtained through a random split. A random forest is built by constructing a random tree for each training sample set. The trees operate as an ensemble: every input is run through each model in the ensemble, and the final result is obtained by aggregating the individual results through voting. A random forest is therefore a _meta-estimator_: one prediction obtained from many predictions.
# <div style="text-align:center"><img style="width: 400px" src="./Images/rf_diagram.png"><br>A random forest model making a prediction; voting yields the result 1.<br>source: <a href="https://towardsdatascience.com/understanding-random-forest-58381e0602d2">Medium (Towards Data Science: Understanding Random Forest)</a></div>
#
# <br>As with the _Decision Tree classifier_, Scikit-learn provides two criteria for measuring the quality of a split:
#
# - _Gini impurity_: $Gini(p) = 1 - \sum_{j=1}^c p_j^2$ <br>
# - _entropy_: $H(p) = - \sum_{j=1}^c p_j \log p_j$
#
# (where c is the number of classes (labels) and $p_j$ is the proportion of samples labeled with class $j$, $j \in \{1, 2, ..., c\}$).
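# The voting step can be sketched on its own (the per-tree votes below are hypothetical, not Scikit-learn internals): each tree predicts a class and the majority wins.

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate per-tree class predictions into one ensemble prediction."""
    return Counter(predictions).most_common(1)[0][0]

# hypothetical votes from a 7-tree forest for a single input sample
tree_votes = [1, 0, 1, 1, 0, 1, 1]
print(majority_vote(tree_votes))  # 1
```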
# # <u>Testing the classification algorithms on the dataset with Scikit-learn</u>
# just a pretty printing function, don't mind me...
def print_stats_cv(model_cv_stats):
print(f"Test accuracy for each fold: {model_cv_stats['test_accuracy']} \n=> Average test accuracy: {round(model_cv_stats['test_accuracy'].mean() * 100, 3)}%")
print(f"Train accuracy for each fold: {model_cv_stats['train_accuracy']} \n=> Average train accuracy: {round(model_cv_stats['train_accuracy'].mean() * 100, 3)}%")
print(f"Test F1 score for each fold: {model_cv_stats['test_f1_macro']} \n=> Average test F1 score: {round(model_cv_stats['test_f1_macro'].mean() * 100, 3)}%")
print(f"Train F1 score for each fold: {model_cv_stats['train_f1_macro']} \n=> Average train F1 score: {round(model_cv_stats['train_f1_macro'].mean() * 100, 3)}%")
# ### 1. <i>k</i>-nearest neighbors classifier
# +
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate
from sklearn.preprocessing import MinMaxScaler
# hyperparameters
knn_neighbors = 4
knn_minkowski_p = 3
# scale the data
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
# kNN implementation
model = KNeighborsClassifier(n_neighbors=knn_neighbors, p=knn_minkowski_p)
model_cv_stats = cross_validate(model, X_scaled, y, cv=folds, scoring=('accuracy', 'f1_macro'), return_train_score=True)
# statistics
display(HTML(f"<h4>{folds}-fold cross validation for {knn_neighbors}-nearest neighbors classification:</h4>"))
print_stats_cv(model_cv_stats)
# -
# ### 2. Decision Tree classifier
# +
from sklearn.tree import DecisionTreeClassifier
# hyperparameters
dt_criterion = 'gini'
dt_splitter = 'best'
# Decision Tree implementation
model = DecisionTreeClassifier(criterion=dt_criterion, splitter=dt_splitter, random_state=42)
model_cv_stats = cross_validate(model, X, y, cv=folds, scoring=('accuracy', 'f1_macro'), return_train_score=True)
# statistics
display(HTML(f"<h4>{folds}-fold cross validation for Decision Trees classification:</h4>"))
print_stats_cv(model_cv_stats)
# -
# ### 3. Multilayer Perceptron (MLP) classifier
# +
from sklearn.neural_network import MLPClassifier
# hyperparameters
mlp_solver = 'adam'
mlp_activation = 'relu'
mlp_alpha = 1e-3
mlp_hidden_layer_sizes = (50, 50)
# MLP implementation
model = MLPClassifier(solver=mlp_solver, activation=mlp_activation, alpha=mlp_alpha, hidden_layer_sizes=mlp_hidden_layer_sizes, random_state=42)
model_cv_stats = cross_validate(model, X, y, cv=folds, scoring=('accuracy', 'f1_macro'), return_train_score=True)
# statistics
display(HTML(f"<h4>{folds}-fold cross validation for MLP classification</h4>"))
display(HTML(f"using hardcoded hyperparameters - Solver: <b>{mlp_solver}</b>, Activation function: <b>{mlp_activation}</b>, Parameter for regularization (α): <b>{mlp_alpha}</b>, Hidden layer sizes: <b>{mlp_hidden_layer_sizes}</b>"))
print_stats_cv(model_cv_stats)
# -
# ### 4. Gaussian Naïve Bayes classifier
# +
from sklearn.naive_bayes import GaussianNB
# Gaussian NB implementation
model = GaussianNB()
model_cv_stats = cross_validate(model, X, y, cv=folds, scoring=('accuracy', 'f1_macro'), return_train_score=True)
# statistics
display(HTML(f"<h4>{folds}-fold cross validation for Gaussian NB classification</h4>"))
print_stats_cv(model_cv_stats)
# -
# ### 5. Random Forest classifier
# +
from sklearn.ensemble import RandomForestClassifier
# hyperparameters
rfc_n_estimators = 150
rfc_criterion = 'gini'
# Random Forest implementation
model = RandomForestClassifier(n_estimators=rfc_n_estimators, criterion=rfc_criterion)
model_cv_stats = cross_validate(model, X, y, cv=folds, scoring=('accuracy', 'f1_macro'), return_train_score=True)
display(HTML(f"<h4>{folds}-fold cross validation for Random Forest classification</h4>"))
print_stats_cv(model_cv_stats)
# -
# # <u>Nested cross-validation for hyperparameter optimization</u>
# +
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import accuracy_score
# CVs configuration
inner_cv = KFold(n_splits=4, shuffle=True)
outer_cv = KFold(n_splits=5, shuffle=True)
# outer CV folds:
print("5-fold cross validation: split overview")
splits = outer_cv.split(range(data_wifi.index.size))
subsets = pd.DataFrame(columns=['Fold', 'Train row indices', 'Test row indices'])
for i, split_data in enumerate(splits):
subsets.loc[i]=[i + 1, split_data[0], split_data[1]]
display(subsets)
# -
# ### 1. <i>k</i>-nearest neighbors classifier
# +
outer_cv_acc = []
param_candidates = {'n_neighbors': np.linspace(start=1, stop=30, num=30, dtype=int),
                    'p': np.linspace(start=1, stop=5, num=5, dtype=int)}
param_search = RandomizedSearchCV(estimator=KNeighborsClassifier(), param_distributions=param_candidates, scoring='accuracy', cv=inner_cv, random_state=42)
for fold in range(5):
X_train = X_scaled[subsets.loc[subsets.index[fold],'Train row indices']]
y_train = y[subsets.loc[subsets.index[fold],'Train row indices']]
X_test = X_scaled[subsets.loc[subsets.index[fold],'Test row indices']]
y_test = y[subsets.loc[subsets.index[fold],'Test row indices']]
param_search.fit(X_train, y_train)
y_estimated = param_search.predict(X_test)
accuracy = accuracy_score(y_test, y_estimated)
outer_cv_acc.append(accuracy)
print(f'Outer fold {fold+1}, optimal hyperparameters after inner 4-fold CV: {param_search.best_params_}')
print(f'kNN model accuracy with optimal hyperparameters: {accuracy}')
print(f'\nAverage model accuracy: {round(np.mean(outer_cv_acc) * 100, 2)}%')
# -
# ### 2. Decision Tree classifier
# +
outer_cv_acc = []
param_candidates = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random']}
param_search = GridSearchCV(estimator=DecisionTreeClassifier(), param_grid=param_candidates, scoring='accuracy', cv=inner_cv)
for fold in range(5):
X_train = X[subsets.loc[subsets.index[fold],'Train row indices']]
y_train = y[subsets.loc[subsets.index[fold],'Train row indices']]
X_test = X[subsets.loc[subsets.index[fold],'Test row indices']]
y_test = y[subsets.loc[subsets.index[fold],'Test row indices']]
param_search.fit(X_train, y_train)
y_estimated = param_search.predict(X_test)
accuracy = accuracy_score(y_test, y_estimated)
outer_cv_acc.append(accuracy)
print(f'Outer fold {fold+1}, optimal hyperparameters after inner 4-fold CV: {param_search.best_params_}')
print(f'Decision Tree model accuracy with optimal hyperparameters: {accuracy}')
print(f'\nAverage model accuracy: {round(np.mean(outer_cv_acc) * 100, 2)}%')
# -
# ### 3. Multilayer Perceptron (MLP) classifier
# +
outer_cv_acc = []
param_candidates = {'alpha': np.linspace(start=0, stop=1e-1, num=500),
'activation': ['identity', 'logistic', 'tanh', 'relu']}
param_search = RandomizedSearchCV(estimator=MLPClassifier(max_iter=1000), param_distributions=param_candidates, scoring='accuracy', cv=inner_cv)
for fold in range(5):
X_train = X[subsets.loc[subsets.index[fold],'Train row indices']]
y_train = y[subsets.loc[subsets.index[fold],'Train row indices']]
X_test = X[subsets.loc[subsets.index[fold],'Test row indices']]
y_test = y[subsets.loc[subsets.index[fold],'Test row indices']]
param_search.fit(X_train, y_train)
y_estimated = param_search.predict(X_test)
accuracy = accuracy_score(y_test, y_estimated)
outer_cv_acc.append(accuracy)
print(f'Outer fold {fold+1}, optimal hyperparameters after inner 4-fold CV: {param_search.best_params_}')
print(f'MLP model accuracy with optimal hyperparameters: {accuracy}')
print(f'\nAverage model accuracy: {round(np.mean(outer_cv_acc) * 100, 2)}%')
# -
# ### 4. Gaussian Naïve Bayes classifier
# +
outer_cv_acc = []
param_candidates = {'var_smoothing': np.linspace(start=1e-9, stop=1e-2, num=500)}
param_search = RandomizedSearchCV(estimator=GaussianNB(), param_distributions=param_candidates, scoring='accuracy', cv=inner_cv)
for fold in range(5):
X_train = X[subsets.loc[subsets.index[fold],'Train row indices']]
y_train = y[subsets.loc[subsets.index[fold],'Train row indices']]
X_test = X[subsets.loc[subsets.index[fold],'Test row indices']]
y_test = y[subsets.loc[subsets.index[fold],'Test row indices']]
param_search.fit(X_train, y_train)
y_estimated = param_search.predict(X_test)
accuracy = accuracy_score(y_test, y_estimated)
outer_cv_acc.append(accuracy)
print(f'Outer fold {fold+1}, optimal hyperparameters after inner 4-fold CV: {param_search.best_params_}')
print(f'Gaussian Naïve Bayes model accuracy with optimal hyperparameters: {accuracy}')
print(f'\nAverage model accuracy: {round(np.mean(outer_cv_acc) * 100, 2)}%')
# -
# ### 5. Random Forest classifier
# +
outer_cv_acc = []
param_candidates = {'criterion': ['gini', 'entropy'],
'n_estimators': np.linspace(start=1, stop=500, num=500, dtype=int)}
param_search = RandomizedSearchCV(estimator=RandomForestClassifier(), param_distributions=param_candidates, scoring='accuracy', cv=inner_cv)
for fold in range(5):
X_train = X[subsets.loc[subsets.index[fold],'Train row indices']]
y_train = y[subsets.loc[subsets.index[fold],'Train row indices']]
X_test = X[subsets.loc[subsets.index[fold],'Test row indices']]
y_test = y[subsets.loc[subsets.index[fold],'Test row indices']]
param_search.fit(X_train, y_train)
y_estimated = param_search.predict(X_test)
accuracy = accuracy_score(y_test, y_estimated)
outer_cv_acc.append(accuracy)
print(f'Outer fold {fold+1}, optimal hyperparameters after inner 4-fold CV: {param_search.best_params_}')
print(f'Random Forest Classifier model accuracy with optimal hyperparameters: {accuracy}')
print(f'\nAverage model accuracy: {round(np.mean(outer_cv_acc) * 100, 2)}%')
# -
| Laborator6/Wi-Fi localization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Global imports
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
from matplotlib import rc
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import analysis as an
from pathlib import Path
project_folder = Path(an.__file__).parent.parent.resolve()
from bokeh.io import push_notebook, show, output_notebook
output_notebook()
# -
data_folder = project_folder / 'data'
print('Available data paths:')
[x for x in sorted(data_folder.iterdir()) if x.is_dir()]
df_list = [
an.get_iperf_folder(data_folder / '2018-01-17-192523', recursive=True),
an.get_iperf_folder(data_folder / '2018-01-17-144113', recursive=True),
]
df = pd.concat(df_list)
# df.groupby(['Kernel', 'Access Point', 'Client'])['Throughput [Mbps]'].describe()
# +
import networkx as nx
import twistmap
G = nx.DiGraph()
G.add_nodes_from(twistmap.node_positions.keys())
means = df.groupby(['Access Point', 'Client'])['Throughput [Mbps]'].mean().to_dict()
for key in means:
    G.add_edge(key[0], key[1], weight=means[key])
# -
plot = twistmap.create_map()
twistmap.draw_graph(plot, G)
show(plot, notebook_handle=True);
| analysis/Network Map.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
import numpy as np
import pandas as pd
data = pd.Series([0.25, 0.5, 0.75, 1.0], index = [2, 5, 3, 7])
data
data.loc[2]
data.iloc[2]
df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], columns = ['foo', 'bar'], index = ["a", "b", "c"])
df
df.loc[:, "foo"]
df.iloc[:, 0]
df.sum(0)
x = [1, 2, 3]
y = [10, 20, 30]
z = [i+j for i, j in zip(x, y)]
z
x + y
list(np.array(x) + np.array(y))
# +
n = 7
S = 10
x = np.random.randint(size = (S, n), low = 1, high = 7)
x
# -
x[0]
x.mean(axis=0)
x = {4, 1, 2, 3, 3}
print(x)
y = ["item1", "item2"]
x = [1,2]
"item" + str(x)
["item" + str(one) for one in x]
x = [1, 2, 3, 4, 5]
print(x[-1])
x = np.array([1, 2, 3, 4, 5])
x<=3
df = pd.DataFrame([[1, 2], [3, np.nan], [5,6]], columns = ["foo", "bar"], index = ["a", "b", "c"])
df
df.bar.isnull()
pd.cut(df["foo"], bins = 3)
# ! wget -qO- https://www.myipaddress.com/show-my-ip-address/ | grep "IP Address"
np.arange(3) + 5
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import os
from datetime import date
import requests
import pandas as pd
import altair as alt
# -
API_KEY = os.environ.get('BLS_API_KEY')
BASE_URL = 'https://api.bls.gov/publicAPI/v2/timeseries/data/'
# This series ID denotes all industries, all U.S. regions, seasonally adjusted quit rate
# REF: https://www.bls.gov/help/hlpforma.htm#jt
JOLTS_SERIES = 'JTS000000000000000QUR'
# +
# the API only allows you to grab 10 years of data at a time,
# so breaking this into chunks for future resiliency
first_year = 2001
this_year = date.today().year
start_years = list(range(first_year, this_year+1, 10))
year_ranges = [(x, min(x + 9, this_year)) for x in start_years]  # clamp the final chunk to the current year
print(year_ranges)
# -
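# To make the chunking logic concrete, here is a self-contained version with fixed example years so the output is deterministic (illustrative only; it clamps the final chunk to the current year):

```python
ex_first, ex_last = 2001, 2023                                  # fixed example years
ex_starts = list(range(ex_first, ex_last + 1, 10))              # [2001, 2011, 2021]
ex_ranges = [(x, min(x + 9, ex_last)) for x in ex_starts]       # at most 10 years per request
print(ex_ranges)
```

# Each tuple covers at most ten years, which is the per-request limit of the BLS API.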
jolts_data = []
for years in year_ranges:
data = {
'registrationkey': API_KEY,
'seriesid': JOLTS_SERIES,
'startyear': years[0],
'endyear': years[1]
}
r = requests.post(BASE_URL, data=data)
r.raise_for_status()
this_data = r.json()['Results']['series'][0]['data']
jolts_data += this_data
jolts_data.sort(key=lambda x: (x['year'], x['period']))
df = pd.DataFrame(data=jolts_data)[['year', 'period', 'value']]
df.head()
# a few integrity checks
df.year.value_counts().sort_index()
# a few integrity checks
df.period.value_counts().sort_index()
# a few integrity checks
print(df.value.min())
print(df.value.max())
# create a date column for the chart
df['date'] = df.apply(lambda row: row['year'] + '-' + row['period'].lstrip('M') + '-01', axis=1)
df['date'] = pd.to_datetime(df['date'])
df.head()
# adjust the numbers for percent display -- divide by 100
# to reverse the multiplication by 100 that happened upstream
df['value'] = df['value'].astype(float) / 100.0
alt.Chart(df).mark_area(
color='lightblue',
interpolate='step-before',
line=True
).encode(
x=alt.X('date:T', axis=alt.Axis(title='')),
y=alt.Y('value:Q', axis=alt.Axis(title='Quit rate', format='%')),
).properties(
width=800
)
| Quit rate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### Training RL Policies using L5Kit Closed-Loop Environment
#
# This notebook describes how to train RL policies for self-driving using our gym-compatible closed-loop environment.
#
# We will be using the [Proximal Policy Optimization (PPO)](https://arxiv.org/abs/1707.06347) algorithm as our reinforcement learning algorithm, since it not only demonstrates strong performance but is also empirically easy to tune.
#
# The PPO implementation in this notebook is based on the [Stable Baselines3](https://github.com/DLR-RM/stable-baselines3) framework, a popular framework for training RL policies. Note that our environment is also compatible with [RLlib](https://docs.ray.io/en/latest/rllib.html), another popular framework for the same purpose.
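# The core of PPO is the clipped surrogate objective, roughly L = E[min(r·A, clip(r, 1−ε, 1+ε)·A)], where r is the new/old policy probability ratio and A the advantage estimate. A small numpy illustration with made-up numbers (not from this notebook):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.1):
    # L = mean over samples of min(r * A, clip(r, 1-eps, 1+eps) * A)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * advantage, clipped * advantage).mean()

r = np.array([0.8, 1.0, 1.3])   # hypothetical probability ratios
A = np.array([1.0, -1.0, 2.0])  # hypothetical advantage estimates
objective = ppo_clip_objective(r, A)
print(objective)
```

# The clip caps how much any single update can move the policy, which is what makes PPO stable and easy to tune.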
#@title Download L5 Sample Dataset and install L5Kit
import os
RunningInCOLAB = 'google.colab' in str(get_ipython())
if RunningInCOLAB:
# !wget https://raw.githubusercontent.com/lyft/l5kit/master/examples/setup_notebook_colab.sh -q
# !sh ./setup_notebook_colab.sh
os.environ["L5KIT_DATA_FOLDER"] = open("./dataset_dir.txt", "r").read().strip()
else:
os.environ["L5KIT_DATA_FOLDER"] = "/tmp/level5_data"
print("Not running in Google Colab.")
# +
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.utils import get_linear_fn
from stable_baselines3.common.vec_env import SubprocVecEnv
from l5kit.configs import load_config_data
from l5kit.environment.feature_extractor import CustomFeatureExtractor
from l5kit.environment.callbacks import L5KitEvalCallback
from l5kit.environment.envs.l5_env import SimulationConfigGym
from l5kit.visualization.visualizer.zarr_utils import episode_out_to_visualizer_scene_gym_cle
from l5kit.visualization.visualizer.visualizer import visualize
from bokeh.io import output_notebook, show
# +
# Dataset is assumed to be on the folder specified
# in the L5KIT_DATA_FOLDER environment variable
# get environment config
env_config_path = '../gym_config.yaml'
cfg = load_config_data(env_config_path)
# -
# ### Define Training and Evaluation Environments
#
# **Training**: We will train the PPO policy on episodes of length 32 time-steps, using 4 sub-processes (training environments) to parallelize and speed up episode rollouts. The *SimConfig* dataclass defines the parameters of the episode rollout: the rollout length, whether to use log-replayed or simulated agents, etc.
#
# **Evaluation**: We will evaluate the performance of the PPO policy on the *entire* scene (~248 time-steps).
# +
# Train on episodes of length 32 time steps
train_eps_length = 32
train_envs = 4
# Evaluate on entire scene (~248 time steps)
eval_eps_length = None
eval_envs = 1
# make train env
train_sim_cfg = SimulationConfigGym()
train_sim_cfg.num_simulation_steps = train_eps_length + 1
env_kwargs = {'env_config_path': env_config_path, 'use_kinematic': True, 'sim_cfg': train_sim_cfg}
env = make_vec_env("L5-CLE-v0", env_kwargs=env_kwargs, n_envs=train_envs,
vec_env_cls=SubprocVecEnv, vec_env_kwargs={"start_method": "fork"})
# make eval env
validation_sim_cfg = SimulationConfigGym()
validation_sim_cfg.num_simulation_steps = None
eval_env_kwargs = {'env_config_path': env_config_path, 'use_kinematic': True, \
'return_info': True, 'train': False, 'sim_cfg': validation_sim_cfg}
eval_env = make_vec_env("L5-CLE-v0", env_kwargs=eval_env_kwargs, n_envs=eval_envs,
vec_env_cls=SubprocVecEnv, vec_env_kwargs={"start_method": "fork"})
# -
# ### Define backbone feature extractor
#
# The backbone feature extractor is shared between the policy and the value networks. The feature extractor *simple_gn* is composed of two convolutional networks followed by a fully connected layer, with ReLU activation. The feature extractor output is passed to both the policy and value networks composed of two fully connected layers with tanh activation (SB3 default).
#
# We perform **group normalization** after every convolutional layer. Empirically, we found that group normalization performs far better than batch normalization. This can be attributed to the fact that activation statistics change quickly in on-policy algorithms (PPO is on-policy), while the learnable batch-norm parameters can be slow to update, causing training issues.
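# The exact `simple_gn` backbone is defined inside L5Kit; to make the group-normalization step itself concrete, here is a framework-free numpy sketch (illustrative only — the shapes and group count are made up):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    # Normalize each group of C//num_groups channels to zero mean / unit variance,
    # computing statistics per sample (no batch statistics, unlike batch norm).
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

x = np.random.default_rng(0).normal(size=(2, 4, 5, 5))  # (N, C, H, W)
out = group_norm(x, num_groups=2)
```

# Because the statistics are computed per sample, the normalization is unaffected by how quickly the activation distribution drifts across rollouts.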
# +
# A simple 2 Layer CNN architecture with group normalization
model_arch = 'simple_gn'
features_dim = 128
# Custom Feature Extractor backbone
policy_kwargs = {
"features_extractor_class": CustomFeatureExtractor,
"features_extractor_kwargs": {"features_dim": features_dim, "model_arch": model_arch},
"normalize_images": False
}
# -
# ### Clipping Schedule
#
# We linearly decrease the value of the clipping parameter $\epsilon$ as PPO training progresses, since this improves training stability.
# Clipping schedule of PPO epsilon parameter
start_val = 0.1
end_val = 0.01
training_progress_ratio = 1.0
clip_schedule = get_linear_fn(start_val, end_val, training_progress_ratio)
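# For intuition, a hand-rolled schedule with (to our understanding) the same semantics as SB3's `get_linear_fn`: the returned function takes the remaining training progress (going from 1 to 0) and moves the value linearly from `start` to `end`:

```python
def linear_schedule(start, end, end_fraction=1.0):
    # value(progress_remaining): linear from `start` down to `end`,
    # reaching `end` once a fraction `end_fraction` of training has elapsed
    def func(progress_remaining):
        progress = 1.0 - progress_remaining
        if progress > end_fraction:
            return end
        return start + progress * (end - start) / end_fraction
    return func

clip_fn = linear_schedule(0.1, 0.01, 1.0)
print(clip_fn(1.0), clip_fn(0.5), clip_fn(0.0))  # 0.1 at the start, 0.01 at the end
```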
# ### Hyperparameters for PPO.
#
# For detailed description, refer https://stable-baselines3.readthedocs.io/en/master/_modules/stable_baselines3/ppo/ppo.html#PPO
lr = 3e-4
num_rollout_steps = 256
gamma = 0.8
gae_lambda = 0.9
n_epochs = 10
seed = 42
batch_size = 64
tensorboard_log = 'tb_log'
# ### Define the PPO Policy.
#
# SB3 provides an easy interface to define the PPO policy. Note that we pass in the hyperparameters tweaked above; the custom policy backbone has also been defined above.
#
# define model
model = PPO("CnnPolicy", env, policy_kwargs=policy_kwargs, verbose=1, n_steps=num_rollout_steps,
learning_rate=lr, gamma=gamma, tensorboard_log=tensorboard_log, n_epochs=n_epochs,
clip_range=clip_schedule, batch_size=batch_size, seed=seed, gae_lambda=gae_lambda)
# ### Defining Callbacks
#
# We can additionally define callbacks to save model checkpoints and evaluate models during training.
# +
callback_list = []
# Save Model Periodically
save_freq = 100000
save_path = './logs/'
output = 'PPO'
checkpoint_callback = CheckpointCallback(save_freq=(save_freq // train_envs), save_path=save_path, \
name_prefix=output)
callback_list.append(checkpoint_callback)
# Eval Model Periodically
eval_freq = 100000
n_eval_episodes = 1
val_eval_callback = L5KitEvalCallback(eval_env, eval_freq=(eval_freq // train_envs), \
n_eval_episodes=n_eval_episodes, n_eval_envs=eval_envs)
callback_list.append(val_eval_callback)
# -
# ### Train
n_steps = 3000
model.learn(n_steps, callback=callback_list)
# **Voila!** We have a trained PPO policy! Train for a larger number of steps for better accuracy; typical RL algorithms require at least 1M steps of training for good convergence. You can visualize the quantitative evaluation using tensorboard.
# Visualize Tensorboard logs (!! run on local terminal !!)
# !tensorboard --logdir tb_log
# ### Visualize the episode from the environment
#
# We can easily visualize the outputs obtained by rolling out episodes in the L5Kit using the Bokeh visualizer.
# +
rollout_sim_cfg = SimulationConfigGym()
rollout_sim_cfg.num_simulation_steps = None
rollout_env = gym.make("L5-CLE-v0", env_config_path=env_config_path, sim_cfg=rollout_sim_cfg, \
use_kinematic=True, train=False, return_info=True)
def rollout_episode(model, env, idx = 0):
"""Rollout a particular scene index and return the simulation output.
:param model: the RL policy
:param env: the gym environment
:param idx: the scene index to be rolled out
:return: the episode output of the rolled out scene
"""
# Set the reset_scene_id to 'idx'
env.reset_scene_id = idx
# Rollout step-by-step
obs = env.reset()
done = False
while True:
action, _ = model.predict(obs, deterministic=True)
obs, _, done, info = env.step(action)
if done:
break
# The episode outputs are present in the key "sim_outs"
sim_out = info["sim_outs"][0]
return sim_out
# Rollout one episode
sim_out = rollout_episode(model, rollout_env)
# +
# might change with different rasterizer
map_API = rollout_env.dataset.rasterizer.sem_rast.mapAPI
def visualize_outputs(sim_outs, map_API):
for sim_out in sim_outs: # for each scene
vis_in = episode_out_to_visualizer_scene_gym_cle(sim_out, map_API)
show(visualize(sim_out.scene_id, vis_in))
output_notebook()
visualize_outputs([sim_out], map_API)
| examples/RL/notebooks/ppo_policy_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="pq0E-mOI4TFW" colab_type="text"
# # **Linear Regression**
# + id="Nr0fDXuuueFg" colab_type="code" colab={}
# Importing some packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.datasets import california_housing  # on recent scikit-learn, use `from sklearn.datasets import fetch_california_housing` instead
from scipy import stats
import seaborn as sns
from sklearn import linear_model
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn import svm
# + [markdown] id="x1Z9upT65-gG" colab_type="text"
# # **california_housing Dataset**
#
# ---
#
#
# **The following function returns:**
# dataset : dict-like object with the following attributes:
#
# **dataset.data :** ndarray, shape [20640, 8]
#
# Each row corresponding to the 8 feature values in order.
# **dataset.target :** numpy array of shape (20640,)
#
# Each value corresponds to the average house value in units of 100,000.
# **dataset.feature_names :** array of length 8
#
# Array of ordered feature names used in the dataset.
# **dataset.DESCR :** string
#
# Description of the California housing dataset.
#
#
# ---
#
#
# + id="-ZFm6xMNvkAz" colab_type="code" outputId="b3ed8e05-2095-47fe-ff09-231298c81c69" colab={"base_uri": "https://localhost:8080/", "height": 436}
housing_data = california_housing.fetch_california_housing()
housing_data
# + [markdown] id="CCJ2l9VW7QJn" colab_type="text"
#
#
# ---
# The data contains 20,640 observations on 9 variables.
#
# This dataset contains the **average house value as target variable**
# and the following input variables (features): **average income,
# housing average age, average rooms, average bedrooms, population,
# average occupation, latitude, and longitude** in that order.
#
# **Let us now extract the features and the target from the dataset and combine them in one data frame.**
#
# ---
#
#
# + id="aAkhvRiFvtiC" colab_type="code" outputId="44ac75ac-934c-4b44-b1de-2590ffb8239d" colab={"base_uri": "https://localhost:8080/", "height": 423}
Features = pd.DataFrame(housing_data.data, columns=housing_data.feature_names)
Target = pd.DataFrame(housing_data.target, columns=['Target'])
df = Features.join(Target)
df
# + id="sTmFY8qMCHZt" colab_type="code" colab={}
##to check for Nan values
##df['MedInc'].isnull().values.any()
# + id="tYT6r00PAFXJ" colab_type="code" outputId="e80f8aa0-fcfc-42fd-b245-5935337cc9d1" colab={"base_uri": "https://localhost:8080/", "height": 320}
df.describe()
# + [markdown] id="XKu5TUmoBl2C" colab_type="text"
#
#
# ---
#
# Let us use the function **df.corr()** to compute pairwise correlation of columns, excluding NA/null values.
#
# ---
#
#
# + id="VXPWIN9pwN_h" colab_type="code" outputId="56653ad5-aafa-4684-abdc-1f4a52df125e" colab={"base_uri": "https://localhost:8080/", "height": 331}
df.corr()
# + [markdown] id="0o-siG-LCsY5" colab_type="text"
#
#
# ---
#
# Let us consider only one feature say **MedInc**
#
# ---
#
#
# + id="qdYINFDDwkca" colab_type="code" outputId="381d17d2-6ca7-40a0-e1f7-63ba0e836f25" colab={"base_uri": "https://localhost:8080/", "height": 300}
df[['MedInc', 'Target']].describe()
# + [markdown] id="6R5uw9ufEyPa" colab_type="text"
# **Pre-Processing**
#
# Notice that 75% of the data has a price below 2.65, but the maximum price goes as high as 5. Thus we should remove the extremely expensive houses, which might introduce noise.
# + id="fmfjygeawtFc" colab_type="code" colab={}
df = df[df.Target < 5 ]
# + id="PgBhzCOYFeTC" colab_type="code" outputId="0fd363bc-066b-450f-c323-0eb61ae92138" colab={"base_uri": "https://localhost:8080/", "height": 52}
# Normalization of the MedInc and Target
def Norm(x):
minx = x.min()
maxx = x.max()
return pd.Series([(i - minx)/(maxx-minx) for i in x])
x = Norm(df.MedInc)
y = Norm(df.Target)
print("maximum value of MedInc = {}".format(x.max()))
print("maximum value of Target = {}".format(y.max()))
# + id="NNq6QIXUw959" colab_type="code" outputId="2f5bcc68-e17d-438b-b969-2933d0376d9d" colab={"base_uri": "https://localhost:8080/", "height": 354}
plt.figure(figsize=(10,5))
plt.scatter(x, y, label='Data', c='#e377c2', s=6)
plt.title('Correlation Between Income and House Price', fontSize=14)
plt.xlabel('Income', fontSize=12)
plt.ylabel('House Price', fontSize=12)
plt.legend(loc=1, fontsize=10, borderpad=.6)
plt.show()
# + [markdown] id="NsrZSo9lr4fa" colab_type="text"
# #**Linear Regression With scikit-learn**
#
#
#
# ---
#
#
# There are five basic steps when you’re implementing linear regression:
#
# 1. Import the packages and classes you need.
# 2. Provide data to work with and eventually do appropriate transformations.
# 3. Create a regression model and fit it with existing data.
# 4. Check the results of model fitting to know whether the model is satisfactory.
# 5. Apply the model for predictions.
#
#
# ---
#
#
# + id="d-5ZyAKDkKP6" colab_type="code" outputId="8f9fabaf-77d6-4875-a622-472cf0e207cc" colab={"base_uri": "https://localhost:8080/", "height": 86}
# Note X need to have one column and as many rows as necessary
X= np.array(x).reshape((-1, 1))
y=np.array(y)
print(x.ndim)
print(x.shape)
print(X.ndim)
print(X.shape)
# + [markdown] id="ZYdH_KNAxM-W" colab_type="text"
#
#
# ---
#
#
# This statement creates the variable model as the instance of LinearRegression. You can provide several optional parameters to LinearRegression:
#
# 1. fit_intercept is a Boolean (True by default) that decides whether to calculate the intercept 𝑏 (True) or consider it equal to zero (False).
# 2. normalize is a Boolean (False by default) that decides whether to normalize the input variables (True) or not (False); note this keyword has been removed in recent scikit-learn versions.
# 3. n_jobs is an integer or None (default) and represents the number of jobs used in parallel computation. None usually means one job and -1 to use all processors.
#
#
# ---
#
#
#
# + id="Pe_-Pf_1noHj" colab_type="code" colab={}
#create a linear regression model and fit it using the existing data
model = LinearRegression()  # the 'normalize' kwarg (default False) was removed in scikit-learn 1.2
# + id="5IAKe4oRoB-Z" colab_type="code" outputId="29e13ba8-efb0-4f46-f30a-8ff4b5e0b6cc" colab={"base_uri": "https://localhost:8080/", "height": 34}
#fit(), you calculate the optimal values of the weights m and 𝑏, using the existing input and output (X and y) as the argument
model.fit(X, y)
# + id="3dFX23i4oefw" colab_type="code" outputId="8123c6b8-20fc-48c8-cda8-894b2da9fc12" colab={"base_uri": "https://localhost:8080/", "height": 52}
print('intercept:', model.intercept_)
print('slope:', model.coef_)
# + id="9wJi1-RyrBvJ" colab_type="code" outputId="947f55d4-5393-47e1-f354-a064f3477a87" colab={"base_uri": "https://localhost:8080/", "height": 52}
y_pred = model.predict(X)
print(y_pred)
print(y_pred.ndim)
# + id="Wr41nrKcrX7B" colab_type="code" outputId="c164ce61-e19c-4f49-9c67-874369bbde97" colab={"base_uri": "https://localhost:8080/", "height": 191}
y_pred = model.intercept_ + model.coef_ * X
print('predicted response:', y_pred, sep='\n')
print(type(y_pred))
print(y_pred.ndim)
# + id="QubAeT3XrsZ3" colab_type="code" outputId="0b31737b-5c39-481e-f02b-1956c5212c64" colab={"base_uri": "https://localhost:8080/", "height": 351}
plt.figure(figsize=(10,5))
plt.scatter(X, y, label='Data', c='#388fd8', s=6)
plt.plot(X, y_pred, c='#ff7702', lw=3, label='Regression')
plt.title('Linear Regression', fontSize=14)
plt.xlabel('Income', fontSize=11)
plt.ylabel('Price', fontSize=11)
plt.legend(frameon=True, loc=0, fontsize=10)
plt.show()
# + [markdown] id="YnuNYDvlydk5" colab_type="text"
# # **Linear Regression from Scratch**
# + [markdown] id="uwdaYxdsQjMj" colab_type="text"
#
#
# ---
#
# We can represent the linear regression by the following equation:
#
# **y = mx+b**
#
# where m is the slope, b is the intercept, and x is the median income.
#
# ---
#
#
# + id="DNutMte-xEmm" colab_type="code" colab={}
class LinearRegression:
def fit(self, X, y):
self.X = X
self.y = y
self.m = ((np.mean(X) * np.mean(y) - np.mean(X*y)) / ((np.mean(X)**2) - np.mean(X**2)))
self.b = np.mean(y) - self.m * np.mean(X)
def coeffs(self):
return self.m, self.b
def predict(self):
self.y_pred = self.m * self.X + self.b
return self.y_pred
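# As a quick sanity check (an illustrative snippet, not part of the original lecture), the closed-form slope and intercept used above should agree with `np.polyfit` on data with a known line:

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0  # known slope 2, intercept 1

# same least-squares formulas as the class above
m_hat = (np.mean(xs) * np.mean(ys) - np.mean(xs * ys)) / (np.mean(xs) ** 2 - np.mean(xs ** 2))
b_hat = np.mean(ys) - m_hat * np.mean(xs)
print(m_hat, b_hat)           # recovers the true coefficients
print(np.polyfit(xs, ys, 1))  # least-squares fit agrees
```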
# + id="fA7zgh_0xOl-" colab_type="code" outputId="d51f1c3a-f72a-4a49-a0cc-beff187c367d" colab={"base_uri": "https://localhost:8080/", "height": 52}
# Normalization of the MedInc and Target
def Norm(x):
minx = x.min()
maxx = x.max()
return pd.Series([(i - minx)/(maxx-minx) for i in x])
X = Norm(df.MedInc)
y = Norm(df.Target)
print("maximum value of MedInc = {}".format(X.max()))
print("maximum value of Target = {}".format(y.max()))
# + id="I8O0-eOLxS_f" colab_type="code" colab={}
lr = LinearRegression()
# + id="EA8Vu2V3xWqn" colab_type="code" colab={}
lr.fit(X, y)
# + id="fdzGHN_xxaTD" colab_type="code" colab={}
y_pred = lr.predict()
# + id="ZeD_Y35_znMb" colab_type="code" colab={}
m,b = lr.coeffs()
# + id="eFdT7gXZxePT" colab_type="code" outputId="34b89444-8b7c-43ee-dc21-c90c97394b78" colab={"base_uri": "https://localhost:8080/", "height": 69}
print("MSE:{}".format(mean_squared_error(y, y_pred)))
print("slope:{}".format(m))
print("intercept:{}".format(b))
# + id="o3mrtQcwmNuq" colab_type="code" outputId="efbf37f3-5b69-4978-b72b-1024f29500b6" colab={"base_uri": "https://localhost:8080/", "height": 351}
plt.figure(figsize=(10,5))
plt.scatter(X, y, label='Data', c='#388fd8', s=6)
plt.plot(X, y_pred, c='#ff7702', lw=3, label='Regression')
plt.title('Linear Regression', fontSize=14)
plt.xlabel('Income', fontSize=11)
plt.ylabel('Price', fontSize=11)
plt.legend(frameon=True, loc=1, fontsize=10, borderpad=.6)
plt.show()
# + [markdown] id="_9s68fgntc0v" colab_type="text"
# #**Gradient Descent**
# + id="aLNYBzNDxsuz" colab_type="code" colab={}
def gradient_descent(X, y, lr, epoch):
m, b = 0.1, 0.1 # parameters
mse = []
N = len(X) # number of samples
for _ in range(epoch):
f = y - (m*X + b)
# Updating m and b
m -= lr * (-2 * X.dot(f).sum() / N)
b -= lr * (-2 * f.sum() / N)
mse.append(mean_squared_error(y, (m*X + b)))
return m, b, mse
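# A quick way to sanity-check the update rule is to run it on data with a known line. A self-contained toy snippet (its own learning rate and epoch count, not the notebook's settings):

```python
import numpy as np

def gd_fit(X_toy, y_toy, lr=0.5, epochs=2000):
    m, b = 0.1, 0.1
    n = len(X_toy)
    for _ in range(epochs):
        f = y_toy - (m * X_toy + b)              # residuals
        m -= lr * (-2 * (X_toy * f).sum() / n)   # same update rule as gradient_descent above
        b -= lr * (-2 * f.sum() / n)
    return m, b

X_toy = np.linspace(0.0, 1.0, 50)
y_toy = 3.0 * X_toy + 2.0        # known slope 3, intercept 2
m_toy, b_toy = gd_fit(X_toy, y_toy)
print(round(m_toy, 3), round(b_toy, 3))
```

# With enough epochs and a stable learning rate, the iterates converge to the same line the closed-form solution would give.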
# + id="msBlM5Sqx-df" colab_type="code" outputId="0178c656-5eaa-4ceb-e8b2-5f728becbb87" colab={"base_uri": "https://localhost:8080/", "height": 702}
# X = Norm(df.MedInc)
# y = Norm(df.Target)
X = df.MedInc
y = df.Target
m, b, mse = gradient_descent(X, y, lr=0.01, epoch=100)
y_pred = m*X + b
print("MSE:",mean_squared_error(y, y_pred))
plt.figure(figsize=(10,5))
plt.scatter(X, y, label='Data', c='#388fd8', s=6)
plt.plot(X, y_pred, c='#ff7702', lw=3, label='Regression')
plt.title('Linear Regression', fontSize=14)
plt.xlabel('Income', fontSize=11)
plt.ylabel('Price', fontSize=11)
plt.legend( loc=0, fontsize=10, borderpad=.6)
plt.show()
plt.figure(figsize=(10,5))
plt.plot(range(len(mse)), mse)
plt.title('Gradient Descent Optimization', fontSize=14)
plt.xlabel('Epochs')
plt.ylabel('MSE')
plt.show()
# + [markdown] id="a5Eukw3EjECe" colab_type="text"
# # **Ridge Regression**
# + id="JGV2itchh-8H" colab_type="code" colab={}
# Let us use the same dataset california_housing
housing_data = california_housing.fetch_california_housing()
Features = pd.DataFrame(housing_data.data, columns=housing_data.feature_names)
Target = pd.DataFrame(housing_data.target, columns=['Target'])
df = Features.join(Target)
housing_data.data = preprocessing.scale(housing_data.data)
X_train, X_test, y_train, y_test = train_test_split(
housing_data.data, housing_data.target, test_size=0.3, random_state=10)
# + id="ACqDRxGzk2-N" colab_type="code" outputId="55a8b2be-5668-4c9b-ac5c-413a2b80709a" colab={"base_uri": "https://localhost:8080/", "height": 455}
# initialize
ridge_reg = Ridge(alpha=0)
ridge_reg.fit(X_train, y_train)
ridge_df = pd.DataFrame({'variable': housing_data.feature_names, 'estimate': ridge_reg.coef_})
ridge_train_pred = []
ridge_test_pred = []
# iterate lambdas
for alpha in np.arange(0, 200, 1):
# training
ridge_reg = Ridge(alpha=alpha)
ridge_reg.fit(X_train, y_train)
var_name = 'estimate' + str(alpha)
ridge_df[var_name] = ridge_reg.coef_
# prediction
ridge_train_pred.append(ridge_reg.predict(X_train))
ridge_test_pred.append(ridge_reg.predict(X_test))
# organize dataframe
ridge_df = ridge_df.set_index('variable').T.rename_axis('estimate')
ridge_df
# + id="xnRpEn0aupA-" colab_type="code" outputId="6bea24d7-5cd9-453b-d01b-3fc1f9504da3" colab={"base_uri": "https://localhost:8080/", "height": 352}
# plot betas by lambda
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(ridge_df.MedInc, 'r', ridge_df.HouseAge, 'g', ridge_df.AveRooms, 'b', ridge_df.AveBedrms, 'c', ridge_df.Population, 'y')
ax.axhline(y=0, color='black', linestyle='--')
ax.set_xlabel("Lambda")
ax.set_ylabel("Beta Estimate")
ax.set_title("Ridge Regression Trace", fontsize=16)
ax.legend(labels=['MedInc','HouseAge','AveRooms','AveBedrms','Population'])
ax.grid(True)
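# The shrinkage traced above follows from the ridge closed-form solution, beta(alpha) = (X'X + alpha*I)^-1 X'y. A small numpy sketch on synthetic data (illustrative, not the housing data):

```python
import numpy as np

def ridge_beta(X_syn, y_syn, alpha):
    # closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    p = X_syn.shape[1]
    return np.linalg.solve(X_syn.T @ X_syn + alpha * np.eye(p), X_syn.T @ y_syn)

rng = np.random.default_rng(0)
X_syn = rng.normal(size=(200, 3))
y_syn = X_syn @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

norms = [np.linalg.norm(ridge_beta(X_syn, y_syn, a)) for a in (0.0, 10.0, 100.0)]
print(norms)  # the coefficient norm shrinks as alpha grows
```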
# + [markdown] id="ZFak48JfNyrl" colab_type="text"
# # **Logistic Regression**
#
#
# ---
#
#
# You can download the dataset from:
# https://www.kaggle.com/uciml/pima-indians-diabetes-database
#
#
# ---
#
#
# + id="ozmQ05hvOGWG" colab_type="code" outputId="a18a9c1c-3385-4cb5-e00f-dda931824f57" colab={"base_uri": "https://localhost:8080/", "height": 423}
df = pd.read_csv('/content/drive/My Drive/diabetes.csv')
df
# + id="zxWUVg6ZQ6BL" colab_type="code" colab={}
#split dataset in features and target variable
feature_cols = ['Pregnancies', 'Insulin', 'BMI', 'Age','Glucose','BloodPressure','DiabetesPedigreeFunction']
X = df[feature_cols] # Features
y = df.Outcome # Target variable
# + id="uNpF-d-rRnFK" colab_type="code" colab={}
# split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=0)
# + id="dEUXSh9rSGpQ" colab_type="code" outputId="fef5096f-6bae-4ff9-8dfc-52c471c29c24" colab={"base_uri": "https://localhost:8080/", "height": 176}
logreg = LogisticRegression(max_iter=1000)  # raise max_iter so the default lbfgs solver can converge
# fit the model with data
logreg.fit(X_train,y_train)
y_pred=logreg.predict(X_test)
# + id="_pZ01V0KSXJU" colab_type="code" outputId="926ecc13-1bfa-4dac-8f7a-46c95d505154" colab={"base_uri": "https://localhost:8080/", "height": 52}
cnf_matrix = metrics.confusion_matrix(y_test, y_pred)
cnf_matrix
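# For reference, accuracy, precision and recall can be read straight off a 2x2 confusion matrix. A small numpy sketch with made-up counts (not the notebook's actual matrix):

```python
import numpy as np

# cm[i, j] = count of samples with actual class i predicted as class j
cm = np.array([[118, 12],
               [ 26, 36]])
tn, fp, fn, tp = cm.ravel()

accuracy = (tp + tn) / cm.sum()
precision = tp / (tp + fp)   # of predicted positives, how many are correct
recall = tp / (tp + fn)      # of actual positives, how many are found
print(accuracy, precision, recall)
```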
# + id="ukuAHLI8Sj1M" colab_type="code" outputId="04423189-139f-4d09-8101-d8a4644f9aad" colab={"base_uri": "https://localhost:8080/", "height": 310}
class_names=[0,1] # name of classes
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
# create heatmap
sns.heatmap(pd.DataFrame(cnf_matrix), cmap="YlGnBu")
ax.xaxis.set_label_position("top")
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# + id="SY2y6411TAtu" colab_type="code" outputId="894b7a87-5b82-4693-fac7-3730d7eac515" colab={"base_uri": "https://localhost:8080/", "height": 69}
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred))
print("Recall:",metrics.recall_score(y_test, y_pred))
# + id="-JBK2j0qTKdo" colab_type="code" outputId="af3a8025-2fdc-40eb-d850-c9a7798b7d91" colab={"base_uri": "https://localhost:8080/", "height": 265}
y_pred_proba = logreg.predict_proba(X_test)[:,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="auc="+str(auc))
plt.legend(loc=4)
plt.show()
# + [markdown] id="fl1Q2M6BE-Pg" colab_type="text"
# #**Support Vector Machine**
# + [markdown] id="lIJM_g_SFJam" colab_type="text"
#
#
# ---
# Let us create a **linearly separable** dataset.
#
#
# ---
#
#
# + id="T8y_KiVRFab2" colab_type="code" colab={}
# from sklearn.datasets.samples_generator import make_blobs
# X, y = make_blobs(n_samples=100, centers=2,
# random_state=0, cluster_std=0.60)
# plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap=plt.cm.Paired);
# # print(X)
# # print(y)
# + id="C2g7zradFG9M" colab_type="code" colab={}
# from sklearn import svm
# clf = svm.SVC(kernel='linear', C=1000)
# #clf = svm.SVC()
# clf.fit(X, y)
# + id="blocBTb2glMv" colab_type="code" colab={}
# plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
# # plot the decision function
# ax = plt.gca()
# xlim = ax.get_xlim()
# ylim = ax.get_ylim()
# # create grid to evaluate model
# xx = np.linspace(xlim[0], xlim[1], 10)
# yy = np.linspace(ylim[0], ylim[1], 10)
# YY, XX = np.meshgrid(yy, xx)
# xy = np.vstack([XX.ravel(), YY.ravel()]).T
# #decision_function(self, X) evaluates the decision function for the samples in X.
# Z = clf.decision_function(xy).reshape(XX.shape)
# # plot decision boundary and margins
# ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
# linestyles=['--', '-', '--'])
# # plot support vectors
# ax.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=100,
# linewidth=1, facecolors='none', edgecolors='k')
# plt.show()
# + id="MnOGdrDUnuu3" colab_type="code" colab={}
# # the support vectors are:
# clf.support_vectors_
| Lecture/Google_Colab_Notebooks/MachineLearning/L4/Linear_Logistic_SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# !pip3 install torch
# # Model Creation
# +
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
from sklearn import datasets
# +
# Get linearly separable dataset, clustered around two points
n_pts = 100
centers = [[-0.5, 0.5],[0.5, -0.5]]
X, y = datasets.make_blobs(n_samples=n_pts, random_state=123,centers=centers, cluster_std=0.4)
x_data = torch.Tensor(X)
y_data = torch.Tensor(y.reshape(100,1))
print(X)
print(y)
# -
def scatter_plot():
'''
Print a scatterplot
'''
plt.scatter(X[y==0,0], X[y==0, 1]) # Where labels are 0
plt.scatter(X[y==1,0], X[y==1, 1]) # Where labels are 1
# X and y are numpy arrays; they were converted to tensors (x_data, y_data) above.
# Plot Raw Data
scatter_plot()
class Model(nn.Module):
'''
Class of a perceptron for linear classifier (logistic regression)
'''
def __init__(self,input_size, output_size):
super().__init__() # inherit from parent class
self.linear = nn.Linear(input_size, output_size)
def forward(self, x):
'''
pass forward through linear model and apply sigmoid (to convert output to probability)
'''
pred = torch.sigmoid(self.linear(x))
return pred
def predict(self, x):
pred = self.forward(x)
if pred >= 0.5:
return 1
else:
return 0
# +
# Set seed, initialize model and print model parameters (random)
torch.manual_seed(2)
model = Model(2,1)
print(list(model.parameters()))
# +
[w,b] = model.parameters()
w1, w2 = w.view(2) # unpack Weights
def get_params():
return(w1.item(), w2.item(), b[0].item())
# -
def plot_fit(title):
    plt.title(title)  # plt.title is a function; assigning to it would shadow it
# 0 = w1x1 + w2x2 + b # Equation Line
w1, w2, b1 = get_params()
x1 = np.array([-2.0,2.0]) # numbers here is range of data in the plot above
x2 = (w1*x1 + b1)/-w2
plt.plot(x1,x2, 'r')
scatter_plot()
plot_fit('Initial Model')
# # Model Training
# 1. Compute the error (loss) of the model using the binary cross-entropy criterion.
#
# 2. Take the gradient of the loss function (the derivative of the error) and subtract it from the weights, updating them in the direction of least error.
#
# 3. Train for n epochs.
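# The same three steps can also be sketched framework-free. A minimal numpy version of the loop on toy data (illustrative — no autograd, so the BCE gradient is written out by hand):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t, eps=1e-12):
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
X_np = rng.normal(size=(100, 2))
y_np = (X_np[:, 0] - X_np[:, 1] > 0).astype(float)  # toy separable labels

weights, bias, step_size = np.zeros(2), 0.0, 0.5
np_losses = []
for _ in range(200):                                 # step 3: repeat for n epochs
    p = sigmoid(X_np @ weights + bias)
    np_losses.append(bce(p, y_np))                   # step 1: compute the loss
    grad = p - y_np                                  # step 2: d(BCE)/d(logit) for a sigmoid unit
    weights -= step_size * X_np.T @ grad / len(y_np) # gradient step (what optimizer.step() does)
    bias -= step_size * grad.mean()
print(np_losses[0], np_losses[-1])                   # the loss should decrease
```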
# +
# Step 1 - Cross Entropy
criterion = nn.BCELoss()
# Step 2 - Stochastic Gradient Descent
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)
# +
# Step 3 - Train
epochs = 2000
losses = []
for i in range(epochs):
y_pred = model.forward(x_data)
loss = criterion(y_pred, y_data)
print("epoch:",i,"loss",loss.item())
losses.append(loss.item())
    optimizer.zero_grad() # Set gradients to zero because they accumulate between steps
loss.backward() # Compute the gradient
optimizer.step() # Update the parameters
# -
# plot the loss
plt.plot(range(epochs), losses)
plt.ylabel('Loss')
plt.xlabel('epoch')
# Plot trained model on data
plot_fit('Trained Model')
# # Testing
# +
# Define some testing points
point1 = torch.Tensor([1.0, -1.0])
point2 = torch.tensor([-1.0, 1.0])
# Plot the test points
plt.plot(point1.numpy()[0],point1.numpy()[1],'ro')
plt.plot(point2.numpy()[0],point2.numpy()[1],'ko')
print('Red point positive probability = {}'.format(model.forward(point1).item()))
print('Black point positive probability = {}'.format(model.forward(point2).item()))
plot_fit('Trained Model')
| 2-pytorch-perceptron-training-testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# ### Recurrent memory intro
#
# In the seminar you'll deploy recurrent neural network inside SARSA agent.
#
# The environment it plays is a simple POMDP of rock-paper-scissors game with exploitable opponent.
#
# #### Instructions
#
# First, read through the code and __run it as you read__. The code will create a feedforward neural network and train it with SARSA.
#
# Since the game is partially observable, the default algorithm won't reach the optimal score. In fact, it's unstable and may even end up worse than random.
#
# After you run the code, __find the two ```#YOUR CODE HERE``` chunks__ (Ctrl+F may help) and implement a recurrent memory.
#
# Re-run the experiment and compare the performance of the feedforward vs. recurrent agent.
# RNN should be _much_ better, session __reward > 50__.
#
# After you're done with that, proceed to the next part, for it is going to be much more interesting.
# +
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# number of parallel agents and batch sequence length (frames)
N_AGENTS = 10
SEQ_LENGTH = 25
# -
# The environment we're going to use now is not a default gym env.
#
# It was instead written from scratch in `rockpaperscissors.py`.
#
# Moral: you can easily make your own gym environments out of anything you want (including the OS and the web, e.g. via selenium)
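# As a sketch of that idea: a gym-style environment only needs `reset` and `step` with the
# right signatures. Here is a minimal toy example with no gym dependency (all names hypothetical):

```python
import random

class CoinGuessEnv:
    """Toy environment following the classic gym interface:
    reset() -> observation; step(action) -> (observation, reward, done, info)."""

    def __init__(self):
        self.n_actions = 2
        self._state = random.randint(0, 1)

    def reset(self):
        self._state = random.randint(0, 1)
        return self._state

    def step(self, action):
        # reward the agent for guessing the current hidden state
        reward = 1.0 if action == self._state else -1.0
        self._state = random.randint(0, 1)
        return self._state, reward, False, {}

env = CoinGuessEnv()
obs = env.reset()
obs, reward, done, info = env.step(0)
```

# Wrapping such a class in `gym.wrappers.TimeLimit`, as done below, adds episode truncation for free.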
# +
import gym
from rockpaperscissors import RockPaperScissors
def make_env():
    env = RockPaperScissors()
    return gym.wrappers.TimeLimit(env, max_episode_steps=100)
# spawn game instance
env = make_env()
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
env.reset()
obs = env.step(env.action_space.sample())[0]
print(obs)
# -
# # Basic agent setup
# Here we define a simple agent that maps game images into policy with a minimalistic neural net
#
# +
# setup theano/lasagne. Prefer CPU
# %env THEANO_FLAGS=device=cpu,floatX=float32
import theano
import lasagne
import theano.tensor as T
from lasagne.layers import *
# +
# observation
obs = InputLayer((None,)+observation_shape,)
nn = DenseLayer(obs, 32, nonlinearity=T.nnet.elu)
# +
from agentnet.memory import RNNCell, GRUCell, LSTMCell
# <YOUR CODE HERE>
# Implement a recurrent agent memory by un-commenting the code below and defining h_new
#h_prev = InputLayer((None,50),name="previous memory state with 50 units")
# h_new = RNNCell(<what's prev state>,<what's input>,nonlinearity=T.nnet.elu)
# (IMPORTANT!) use new cell to compute q-values instead of dense layer
#nn = h_new
# -
from agentnet.resolver import EpsilonGreedyResolver
l_qvalues = DenseLayer(nn, n_actions)
l_actions = EpsilonGreedyResolver(l_qvalues)
# ##### Agent, as usual
# +
from agentnet.agent import Agent
# <YOUR CODE HERE>
# uncomment agent_states and define what layers should be used
agent = Agent(observation_layers=obs,
policy_estimators=(l_qvalues),
# agent_states={<new rnn state>:<what layer should it become at next time-step>},
action_layers=l_actions)
# -
# ### Pool, as usual
# +
from agentnet.experiments.openai_gym.pool import EnvPool
pool = EnvPool(agent, make_env, n_games=16) # may need to adjust
pool.update(SEQ_LENGTH)
# -
# ### Learning
#
# For the N+1'st time, we use vanilla SARSA.
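# The core of the vanilla SARSA update can be sketched in a few lines of NumPy. Unlike Q-learning,
# the TD target uses the Q-value of the next action actually taken (on-policy). Names here are
# illustrative, not `agentnet` API:

```python
import numpy as np

def sarsa_td_error(q_t, a_t, r_t, q_next, a_next, gamma=0.99):
    # on-policy TD target: r + gamma * Q(s', a') for the action actually taken
    target = r_t + gamma * q_next[a_next]
    return target - q_t[a_t]

q_t = np.array([0.1, 0.5, 0.2])
q_next = np.array([0.0, 0.3, 0.4])
err = sarsa_td_error(q_t, a_t=1, r_t=1.0, q_next=q_next, a_next=2, gamma=0.9)
print(err)  # 1.0 + 0.9 * 0.4 - 0.5 = 0.86
```

# `agentnet`'s `sarsa.get_elementwise_objective` below computes the squared version of this error for every step of the replayed sessions.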
# +
replay = pool.experience_replay
qvalues_seq = agent.get_sessions(
replay,
session_length=SEQ_LENGTH,
experience_replay=True,
unroll_scan=False, # this new guy makes compilation 100x faster for a bit slower runtime
)[-1]
auto_updates = agent.get_automatic_updates() # required if unroll_scan=False
# -
# get SARSA mse loss
from agentnet.learning import sarsa
elemwise_mse = sarsa.get_elementwise_objective(qvalues_seq,
actions=replay.actions[0],
rewards=replay.rewards,
is_alive=replay.is_alive)
loss = elemwise_mse.mean()
# +
# Compute weights and updates
weights = lasagne.layers.get_all_params([l_actions], trainable=True)
updates = lasagne.updates.adam(loss, weights)
# compile train function
train_step = theano.function([], loss, updates=auto_updates+updates)
# -
# # Demo run
untrained_reward = np.mean(pool.evaluate(save_path="./records", n_games=10,
                                         record_video=False, use_monitor=False))
# # Training loop
# +
# starting epoch
epoch_counter = 1
# full game rewards
rewards = {0: untrained_reward}
loss, reward = 0, untrained_reward
# +
from tqdm import trange
from IPython.display import clear_output
for i in trange(10000):
    # play
    pool.update(SEQ_LENGTH)
    # train
    loss = train_step()
    # update epsilon
    new_epsilon = max(0.01, 1 - 2e-4 * epoch_counter)
    l_actions.epsilon.set_value(np.float32(new_epsilon))
    # record current learning progress and show learning curves
    if epoch_counter % 100 == 0:
        clear_output(True)
        print("iter=%i,loss=%.3f,epsilon=%.3f" %
              (epoch_counter, loss, new_epsilon))
        reward = 0.9 * reward + 0.1 * np.mean(pool.evaluate(save_path="./records", n_games=10,
                                                            record_video=False, use_monitor=False))
        rewards[epoch_counter] = reward
        plt.plot(*zip(*sorted(rewards.items(), key=lambda kv: kv[0])))
        plt.grid()
        plt.show()
    epoch_counter += 1
# -
# # Evaluating results
# * Here we plot learning curves and sample testimonials
plt.plot(*zip(*sorted(rewards.items(), key=lambda k: k[0])))
plt.grid()
# ## Bonus (1++ points)
#
# Compare two types of nonlinearities for the RNN:
# - `T.nnet.elu`
# - `T.nnet.sigmoid`
#
# Re-train the agent at least 10 times. It's probably a good idea to automate the process.
#
# Notice something weird? Any clue why this happens and how to fix it?
#
# _Running the experiment and reporting results gets you 1 point. The reward will get much higher as you go down the rabbit hole! Don't forget to send this notebook to Anytask and mention that you went for this bonus._
# +
# results, ideas, solutions...
# -
| week08_pomdp/theano_optional_recurrence_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework
# Q1. Write a function that computes the volume of a sphere given its radius.
#
# The volume of a sphere is given as
#
# (4/3) × π × r**3
def volume(rad):
    pass
# Check
volume(2)
# Q2. Write a function that checks whether a number is in a given range (inclusive of high and low)
def range_chk(num, low, high):
    pass
# Q3. Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.
#
# Sample String : 'Hello Mr. Rogers, how are you this fine Tuesday?'
#
# Expected Output :
#
# No. of Upper case characters : 4
#
# No. of Lower case Characters : 33
#
def up_low(s):
    pass
# Q4. Write a Python function to check whether a string is a pangram or not.
#
# Note : Pangrams are words or sentences containing every letter of the alphabet at least once.
# For example : "The quick brown fox jumps over the lazy dog"
# +
def ispangram(str1):
    pass
# -
# Q5. MASTER YODA: Given a sentence, return a sentence with the words reversed
#
# master_yoda('I am home') --> 'home am I'
#
# master_yoda('We are ready') --> 'ready are We'
#
#
def master_yoda(text):
    pass
| Homework - ITM FCS .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#The-DAG" data-toc-modified-id="The-DAG-1"><span class="toc-item-num">1 </span>The DAG</a></span></li><li><span><a href="#UI-needs" data-toc-modified-id="UI-needs-2"><span class="toc-item-num">2 </span>UI needs</a></span></li><li><span><a href="#Scrap" data-toc-modified-id="Scrap-3"><span class="toc-item-num">3 </span>Scrap</a></span></li></ul></div>
# -
# We'd like to develop a web app that will help a user to make a domain-specific metric (DSM) to evaluate binary classifiers.
#
# Often, off-the-shelf metrics are used, such as accuracy, precision, recall, false-positive-rate, etc.
#
# The numbers these produce may not be directly interpretable in the domain they're being applied to.
#
# It can be helpful to see what false negative, false positive, etc. mean in the given context,
# and associate a value (penalty or reward) to these
# that reflects how much they contribute towards or contravene the actual objective of the classifier.
#
# If a COVID test turns out positive, what actions will be taken and what value do their consequences have depending on whether the positive test
# was true or false? Same question for true/false negative test results. Doing this valuation then allows us to produce a formula that
# evaluates the worth of a test with more meaning than flat false positive and false negative rates.
#
# See this notebook if you want a tutorial on how we came to the DAG we'll use here.
# # The DAG
# Below are some functions provided to a user
# to get from where they are (e.g. they have a model and some test data)
# to where they want to be (e.g. computing a classifier_score).
#
# Making it clear how to use these functions to get from A to B is also nice.
# This can be done through documentation, examples, and (Uncle Bob style) through careful function and variable naming.
# +
from collections import Counter
import numpy as np
def _aligned_items(a, b):
    """Yield (k, a_value, b_value) triples for all k that are both a key of a and of b."""
    # reason for casting to dict is to make sure things like pd.Series use the right keys.
    # could also use k in a.keys() etc. to solve this.
    a = dict(a)
    b = dict(b)
    for k in a:
        if k in b:
            yield k, a[k], b[k]
def _dot_product(a, b):
    """
    >>> _dot_product({'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': -1, 'd': 'whatever'})
    5
    """
    return sum(ak * bk for _, ak, bk in _aligned_items(a, b))
def classifier_score(confusion_count, confusion_value):
    """Compute a score for a classifier that produced the `confusion_count`, based on the given `confusion_value`.

    Meant to be curried by fixing the confusion_value dict.

    The function is purposely general -- it is not specific to binary classifier outcomes, or even any classifier outcomes.
    It simply computes a normalized dot product, depending on the input's keys to align values to multiply,
    considering a missing key as an expression of a null value.
    """
    return _dot_product(confusion_count, confusion_value) / sum(confusion_count.values())
def confusion_count(prediction, truth):
    """Get a dict containing the counts of all combinations of prediction and corresponding truth values.

    >>> confusion_count(
    ...     [0, 0, 1, 0, 1, 1, 1],
    ...     [0, 0, 0, 1, 1, 1, 1]
    ... )
    Counter({(1, 1): 3, (0, 0): 2, (1, 0): 1, (0, 1): 1})
    """
    return Counter(zip(prediction, truth))
def prediction(predict_proba, threshold):
    """Get an array of predictions from thresholding the scores of a predict_proba array.

    >>> prediction([0.3, 0.4, 0.5, 0.6, 0.7, 0.8], threshold=0.5)
    array([False, False,  True,  True,  True,  True])
    """
    return np.array(predict_proba) >= threshold
def predict_proba(model, test_X):
    """Get the predict_proba scores of a model given some test data."""
    return model.predict_proba(test_X)
# -
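# The `classifier_score` docstring suggests currying by fixing `confusion_value`. A minimal sketch
# of that usage (with the score function re-inlined so the snippet is self-contained, and a purely
# hypothetical valuation dict):

```python
from collections import Counter
from functools import partial

def classifier_score(confusion_count, confusion_value):
    # normalized dot product over shared keys, mirroring the notebook's version
    dot = sum(n * confusion_value.get(k, 0) for k, n in confusion_count.items())
    return dot / sum(confusion_count.values())

# hypothetical domain valuation: reward true results, penalize errors
confusion_value = {(1, 1): 10, (0, 0): 1, (1, 0): -5, (0, 1): -20}
covid_test_score = partial(classifier_score, confusion_value=confusion_value)

counts = Counter({(0, 0): 2, (1, 0): 1, (0, 1): 1, (1, 1): 3})
print(covid_test_score(counts))  # (2*1 - 5 - 20 + 3*10) / 7 = 1.0
```

# The curried `covid_test_score` is then a drop-in, domain-specific replacement for accuracy-style metrics.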
# Conveniently, if we use names of functions and arguments as we did above, these can be used to indicate how they all relate to each other.
#
# That is to say, we need nothing further to make a DAG.
# +
from meshed import DAG
dag = DAG([classifier_score, confusion_count, prediction, predict_proba])
dag.dot_digraph()
# -
# # UI needs
# So we can use `dagapp` to make an app for this dag, but we'll run into a problem: How do we acquire inputs of complex objects from the user?
# Let's start with the `confusion_value` and `confusion_count`. These need to be mapping between some confusion key and a float.
# So we need to allow the user to enter multiple keys and values.
# - Restricting ourselves to binary cases, this means exactly four (though unspecified values can be interpreted as 0).
# - We can find a more general mechanism to allow an arbitrary mapping to be entered.
# - The values are floats, but the keys could be complicated. In the case of classification, the keys are actually all possible pairs of categories.
# - There could be a UI component specialized for getting pairs. A matrix makes sense.
# To make the dag even more useful, we'll want to vectorize some of the inputs. Namely, we'd like to not enter only one threshold, but possibly a range of them.
#
# A UI mechanism to get arithmetic sequences would be in order here. Could be:
# - min, max, ?all_integers (-> [min, min+1, min+2, ..., max])
# - min, max, num_of_elements
# - min, max, step
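# As a sketch, the three parameterizations above map directly onto NumPy range constructors
# (function names here are hypothetical, not part of `dagapp`):

```python
import numpy as np

def thresholds_min_max_integers(lo, hi):
    # min, max, all_integers -> [lo, lo+1, ..., hi]
    return np.arange(lo, hi + 1)

def thresholds_min_max_count(lo, hi, n):
    # min, max, num_of_elements -> n evenly spaced values, endpoints included
    return np.linspace(lo, hi, n)

def thresholds_min_max_step(lo, hi, step):
    # min, max, step -> arange excludes the stop, so nudge it just past hi
    return np.arange(lo, hi + step / 2, step)

print(thresholds_min_max_count(0.0, 1.0, 5))  # [0.   0.25 0.5  0.75 1.  ]
```

# Any of these arrays could then be fed to the vectorized `threshold` input of the dag.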
# # Scrap
| misc/Developing a binary classification metric factory app.ipynb |