# ExtraTrees Regression with Normalize
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.preprocessing import Normalizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path = ""
```
List of features required for model training.
```
#x_values
features = []
```
Target feature for prediction.
```
#y_value
target = ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selection
Feature selection is the process of reducing the number of input variables when developing a predictive model, both to lower the computational cost of modelling and, in some cases, to improve model performance.
We will assign the required input features to X and the target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the scikit-learn library do not handle string categorical data or null values, we must explicitly remove or replace them. The snippet below defines functions that fill null values (if any exist) and one-hot encode string columns.
```
def NullClearner(df):
    if isinstance(df, pd.Series) and df.dtype in ["float64", "int64"]:
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
```
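To illustrate what these helpers do (toy data, not the actual dataset): numeric nulls become the column mean, categorical nulls become the mode, and string columns are one-hot encoded by `pd.get_dummies`.

```
import pandas as pd

demo = pd.DataFrame({"age": [20.0, None, 40.0], "city": ["a", "b", None]})
demo["age"] = demo["age"].fillna(demo["age"].mean())        # numeric gap -> column mean (30.0)
demo["city"] = demo["city"].fillna(demo["city"].mode()[0])  # categorical gap -> mode ("a")
encoded = pd.get_dummies(demo)                              # one-hot encode the string column
print(encoded.columns.tolist())  # ['age', 'city_a', 'city_b']
```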
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)
```
### Data Rescaling
The Normalizer normalizes samples (rows) individually to unit norm.
Each sample with at least one non zero component is rescaled independently of other samples so that its norm (l1, l2 or inf) equals one.
We fit a Normalizer on the training data and transform it via the fit_transform(X_train) method, then transform the test data with the same fitted object via transform(X_test).
```
normalizer = Normalizer()
X_train = normalizer.fit_transform(X_train)
X_test = normalizer.transform(X_test)
```
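Equivalently, in plain NumPy (synthetic rows for illustration), the Normalizer with norm='l2' divides each row by its L2 norm, so every row ends up with unit length:

```
import numpy as np

rows = np.array([[3.0, 4.0], [1.0, 1.0]])
# Dividing each row by its L2 norm is what Normalizer(norm="l2") computes:
scaled = rows / np.linalg.norm(rows, axis=1, keepdims=True)
print(scaled[0])                       # [0.6 0.8]
print(np.linalg.norm(scaled, axis=1))  # every row norm is 1.0
```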
### Model
ExtraTrees Regressor model implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
#### Model Tuning Parameters
1. n_estimators:int, default=100
>The number of trees in the forest.
2. criterion: {"mse", "mae"}, default="mse"
>The function to measure the quality of a split. Supported criteria are "mse" for the mean squared error (equal to variance reduction as a feature-selection criterion) and "mae" for the mean absolute error. (Newer scikit-learn versions rename these to "squared_error" and "absolute_error".)
3. max_depth: int, default=None
>The maximum depth of the tree. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
4. max_features: {"auto", "sqrt", "log2"}, int or float, default="auto"
>The number of features to consider when looking for the best split.
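A hypothetical tuned configuration using the parameters listed above (the values are illustrative, not recommendations):

```
from sklearn.ensemble import ExtraTreesRegressor

tuned = ExtraTreesRegressor(
    n_estimators=200,     # more trees, more stable averages
    max_depth=10,         # cap tree depth to limit overfitting
    max_features="sqrt",  # consider sqrt(n_features) candidates per split
    n_jobs=-1,
    random_state=123,
)
print(tuned.get_params()["n_estimators"])  # 200
```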
```
# Build Model here
model=ExtraTreesRegressor(n_jobs = -1,random_state = 123)
model.fit(X_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of variance in the target that is explained by our model.
> **mae**: The **mean absolute error** is the average absolute difference between the actual and predicted values.
> **mse**: The **mean squared error** averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
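The three metrics can also be computed by hand; here is a toy sketch on made-up numbers (not from this dataset):

```
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_hat = np.array([2.5, 5.0, 8.0])
mae = np.mean(np.abs(y_true - y_hat))       # 0.5
mse = np.mean((y_true - y_hat) ** 2)        # ~0.4167
ss_res = np.sum((y_true - y_hat) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                    # 0.84375
print(mae, mse, r2)
```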
#### Feature Importances
Feature importance refers to techniques that assign each feature a score based on how useful it is for making predictions.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Prediction Plot
First, we plot the actual observations for the first 20 test records, with the record index on the x-axis and y_test on the y-axis.
Then we overlay the model's predictions for the same test records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Vikas Mishra , Github: [Profile](https://github.com/Vikaas08)
# Recommender Systems (RS):
- We can use deep learning to predict the ratings users give to items
- We use the MovieLens-100k dataset for illustration. There are 943 users and 1682 movies, with 100k ratings in total.
```
import pandas as pd
import numpy as np
u_cols = ['user_id', 'sex', 'age', 'occupation', 'zip_code']
users = pd.read_csv('./Movie_Lens/users.dat', sep='::', names=u_cols, engine='python', encoding='latin-1')
print(users.head())
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings = pd.read_csv('./Movie_Lens/ratings.dat', sep='::', names=r_cols, engine='python', encoding='latin-1')
print(ratings.head())
m_cols = ['movie_id', 'title', 'Genre']
movies = pd.read_csv('./Movie_Lens/movies.dat', sep='::', names=m_cols, usecols=range(3), engine='python', encoding='latin-1')
print(movies.head())
movie_ratings = pd.merge(movies, ratings)
lens = pd.merge(movie_ratings, users)
dataset = lens[['user_id', 'movie_id', 'rating']]
print(dataset.head())
print(dataset.shape)
print(dataset['user_id'].nunique())
print(dataset['movie_id'].nunique())
```
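As a small sketch of how the merges above line up rows (toy frames, not the real MovieLens files): `pd.merge` joins on the columns the frames share, so each rating row gains the matching movie's attributes.

```
import pandas as pd

movies = pd.DataFrame({"movie_id": [1, 2], "title": ["A", "B"]})
ratings = pd.DataFrame({"movie_id": [1, 1, 2], "rating": [4, 5, 3]})
merged = pd.merge(movies, ratings)  # joins on the shared 'movie_id' column
print(len(merged))  # 3 rows: one per rating, with titles attached
```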
## Dataset
```
dataset = pd.read_csv("./ml-100k/u.data",sep='\t',names="user_id,item_id,rating,timestamp".split(","))
print(dataset.head())
print(dataset['user_id'].nunique())
print(dataset['item_id'].nunique())
import keras
from keras.models import Sequential
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
train, test = train_test_split(dataset, test_size=0.2)
n_users, n_movies = len(dataset.user_id.unique()), len(dataset.item_id.unique())
n_latent_factors = 3
movie_input = keras.layers.Input(shape=[1],name='Item')
movie_embedding = keras.layers.Embedding(n_movies + 1, n_latent_factors, name='Movie-Embedding')(movie_input)
movie_vec = keras.layers.Flatten(name='FlattenMovies')(movie_embedding)
user_input = keras.layers.Input(shape=[1],name='User')
user_vec = keras.layers.Flatten(name='FlattenUsers')(keras.layers.Embedding(n_users + 1, n_latent_factors,name='User-Embedding')(user_input))
prod = keras.layers.dot([movie_vec, user_vec], axes = -1, name='DotProduct', normalize=False)
model = keras.models.Model([user_input, movie_input], prod)
model.compile(optimizer='adam', loss='mean_squared_error')  # accuracy is not meaningful for regression, so track only the loss
history = model.fit([train.user_id, train.item_id], train.rating, epochs=100, verbose=1)
y_hat = model.predict([test.user_id, test.item_id])
y_true = test.rating
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_true, y_hat)
from sklearn.metrics import mean_squared_error
mean_squared_error(y_true, y_hat)
from sklearn.metrics import r2_score
r2_score(y_true, y_hat)
import matplotlib.pyplot as plt
print(len(y_hat))
plt.plot(range(len(y_hat)), y_hat)
print([(i, j) for (i, j) in zip(y_hat.ravel()[:20], y_true.ravel()[:20])])
train.user_id.values[0]
movie_embedding_learnt = model.get_layer(name='Movie-Embedding').get_weights()[0]
pd.DataFrame(movie_embedding_learnt).describe()
user_embedding_learnt = model.get_layer(name='User-Embedding').get_weights()[0]
pd.DataFrame(user_embedding_learnt).describe()
```
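Conceptually, the trained model scores a (user, movie) pair as the dot product of their embeddings. A toy NumPy sketch with made-up 3-dimensional embedding vectors:

```
import numpy as np

user_vec = np.array([0.5, 1.0, -0.2])   # illustrative learned user embedding
movie_vec = np.array([1.0, 0.8, 0.1])   # illustrative learned movie embedding
pred_rating = float(np.dot(user_vec, movie_vec))
print(pred_rating)  # 0.5 + 0.8 - 0.02 = 1.28
```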
## Next Item Prediction
- In online or grocery transactions, people **do not** purchase 5 similar items
- Instead, they buy items that are somehow related to each other
- For example, a shopper who wants to make spaghetti at home might buy: Pasta -> Tomato Sauce -> Mushroom -> Parsley
## MLP for next item prediction
```
# MLP with Variable Length Input Sequences to One Character Output
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = numpy.random.randint(len(alphabet)-2)
    end = numpy.random.randint(start, min(start+max_len, len(alphabet)-1))
    sequence_in = alphabet[start:end+1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print(sequence_in, '->', sequence_out)
# convert list of lists to array and pad sequences if needed
X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
# reshape X to be [samples, time steps, features]
# X = numpy.reshape(X, (X.shape[0], max_len, 1))
# normalize
X = X / float(len(alphabet))
print(X)
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
print(X.shape)
print(y.shape[1])
# print(y)
# create and fit the model
batch_size = 10
model = Sequential()
model.add(Dense(32, input_shape=(max_len, )))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=batch_size, verbose=1)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for i in range(20):
    pattern_index = numpy.random.randint(len(dataX))
    pattern = dataX[pattern_index]
    x = pad_sequences([pattern], maxlen=max_len, dtype='float32')
    # x = numpy.reshape(x, (1, max_len, 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
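The data preparation above can be summarized in a tiny sketch: each window of consecutive letters is mapped to the letter that follows it, with characters converted to integers via `char_to_int`.

```
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
char_to_int = {c: i for i, c in enumerate(alphabet)}
seq_in, seq_out = "ABC", "D"           # a window of letters predicts the next letter
x = [char_to_int[c] for c in seq_in]   # [0, 1, 2]
y = char_to_int[seq_out]               # 3
print(x, '->', y)
```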
#### MLP accuracy is 88.40%
## LSTM for next item prediction
```
# LSTM with Variable Length Input Sequences to One Character Output
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras.preprocessing.sequence import pad_sequences
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = numpy.random.randint(len(alphabet)-2)
    end = numpy.random.randint(start, min(start+max_len, len(alphabet)-1))
    sequence_in = alphabet[start:end+1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print(sequence_in, '->', sequence_out)
# convert list of lists to array and pad sequences if needed
X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
# reshape X to be [samples, time steps, features]
X = numpy.reshape(X, (X.shape[0], max_len, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
print(X.shape)
print(y.shape[1])
# print(y)
# create and fit the model
batch_size = 10
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], 1)))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=batch_size, verbose=1)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for i in range(20):
    pattern_index = numpy.random.randint(len(dataX))
    pattern = dataX[pattern_index]
    x = pad_sequences([pattern], maxlen=max_len, dtype='float32')
    x = numpy.reshape(x, (1, max_len, 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)
```
#### LSTM accuracy is 95.10%
## Deep Learning for Temporal Recommendation
- A novel deep neural network based architecture that models the combination of long-term static and short-term temporal user preferences to improve the recommendation performance
- https://github.com/sonyisme/keras-recommendation
## Deep Learning for Content-based Recommendation based on images
- https://nycdatascience.com/blog/student-works/deep-learning-meets-recommendation-systems/
## Exponential Smoothing Real Data
```
# install and load necessary packages
!pip install seaborn
!pip install --upgrade --no-deps statsmodels
import pyspark
from datetime import datetime
import seaborn as sns
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
print('Python version ' + sys.version)
print('Spark version: ' + pyspark.__version__)
%env DH_CEPH_KEY = DTG5R3EEWN9JBYJZH0DF
%env DH_CEPH_SECRET = pdcEGFERILlkRDGrCSxdIMaZVtNCOKvYP4Gf2b2x
%env DH_CEPH_HOST = http://storage-016.infra.prod.upshift.eng.rdu2.redhat.com:8080
%env METRIC_NAME = kubelet_docker_operations_latency_microseconds
%env LABEL =
label = os.getenv("LABEL")
where_labels = {}#"metric.group=route.openshift.io"}
metric_name = str(os.getenv("METRIC_NAME"))
print(metric_name)
```
### Establish Connection to Spark Cluster
Set the configuration so that the Spark cluster communicates with Ceph and reads a chunk of data.
```
import string
import random
# Set the configuration
# random string for instance name
inst = ''.join(random.choices(string.ascii_uppercase + string.digits, k=4))
AppName = inst + ' - Ceph S3 Prometheus JSON Reader'
conf = pyspark.SparkConf().setAppName(AppName).setMaster('spark://spark-cluster.dh-prod-analytics-factory.svc:7077')
print("Application Name: ", AppName)
# specify number of nodes need (1-5)
conf.set("spark.cores.max", "88")
# specify Spark executor memory (default is 1gB)
conf.set("spark.executor.memory", "400g")
# Set the Spark cluster connection
sc = pyspark.SparkContext.getOrCreate(conf)
# Set the Hadoop configurations to access Ceph S3
import os
(ceph_key, ceph_secret, ceph_host) = (os.getenv('DH_CEPH_KEY'), os.getenv('DH_CEPH_SECRET'), os.getenv('DH_CEPH_HOST'))
ceph_key = 'DTG5R3EEWN9JBYJZH0DF'
ceph_secret = 'pdcEGFERILlkRDGrCSxdIMaZVtNCOKvYP4Gf2b2x'
ceph_host = 'http://storage-016.infra.prod.upshift.eng.rdu2.redhat.com:8080'
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", ceph_key)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", ceph_secret)
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", ceph_host)
#Get the SQL context
sqlContext = pyspark.SQLContext(sc)
#Read the Prometheus JSON BZip data
jsonUrl = "s3a://DH-DEV-PROMETHEUS-BACKUP/prometheus-openshift-devops-monitor.1b7d.free-stg.openshiftapps.com/"+metric_name+"/"
jsonFile = sqlContext.read.option("multiline", True).option("mode", "PERMISSIVE").json(jsonUrl)
import pyspark.sql.functions as F
from pyspark.sql.types import StringType
from pyspark.sql.types import IntegerType
from pyspark.sql.types import TimestampType
# create function to convert POSIX timestamp to local date
def convert_timestamp(t):
    return datetime.fromtimestamp(float(t))

def format_df(df):
    # reformat data by timestamp and values
    df = df.withColumn("values", F.explode(df.values))
    df = df.withColumn("timestamp", F.col("values").getItem(0))
    df = df.withColumn("values", F.col("values").getItem(1))
    # drop null values
    df = df.na.drop(subset=["values"])
    # cast values to int
    df = df.withColumn("values", df.values.cast("int"))
    #df = df.withColumn("timestamp", df.values.cast("int"))
    # define function to be applied to DF column
    udf_convert_timestamp = F.udf(lambda z: convert_timestamp(z), TimestampType())
    df = df.na.drop(subset=["timestamp"])
    # convert timestamp values to datetime timestamp
    df = df.withColumn("timestamp", udf_convert_timestamp("timestamp"))
    # drop null values
    df = df.na.drop(subset=["values"])
    # calculate log(values) for each row
    #df = df.withColumn("log_values", F.log(df.values))
    return df

def extract_from_json(json, name, select_labels, where_labels):
    # Register the created SchemaRDD as a temporary variable
    json.registerTempTable(name)
    # Filter the results into a data frame
    query = "SELECT values"
    # check if select labels are specified and add query condition if appropriate
    if len(select_labels) > 0:
        query = query + ", " + ", ".join(select_labels)
    query = query + " FROM " + name
    # check if where labels are specified and add query condition if appropriate
    if len(where_labels) > 0:
        query = query + " WHERE " + " AND ".join(where_labels)
    print("SQL QUERY: ", query)
    df = sqlContext.sql(query)
    # sample data to make it more manageable
    #data = data.sample(False, fraction = 0.05, seed = 0)
    # TODO: get rid of this hack
    #df = sqlContext.createDataFrame(df.head(1000), df.schema)
    return format_df(df)

if label != "":
    select_labels = ['metric.' + label]
else:
    select_labels = []
where_labels = {"metric.quantile='0.9'","metric.hostname='free-stg-master-03fb6'"}
# get data and format
df = extract_from_json(jsonFile, metric_name, select_labels, where_labels)
select_labels = []
df_pd = df.toPandas()
df_pd = df_pd[["values","timestamp"]]
df_pd
df_pd.dtypes
df_pd.sort_values(by='timestamp')
#df_pd = df_pd.set_index("timestamp")
#df_pd.set_index("timestamp")
train_frame = df_pd[0 : int(0.7*len(df_pd))]
test_frame = df_pd[int(0.7*len(df_pd)) : ]
df_pd
sc.stop()
df_pd.dtypes
df_pd_trimmed = df_pd[df_pd.timestamp > datetime(2018,6,17,3,14)]
df_pd_trimmed = df_pd_trimmed[df_pd_trimmed.timestamp < datetime(2018,6,21,3,14)]
#df_pd_trimmed = df_pd_trimmed[["values"]]
train_frame = df_pd_trimmed[0 : int(0.7*len(df_pd_trimmed))]
test_frame = df_pd_trimmed[int(0.7*len(df_pd_trimmed)) : ]
train_frame
```
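Note that the split above is chronological rather than random, which is what you want for time series. A minimal sketch of the idea on a stand-in sequence:

```
data = list(range(10))                 # stand-in for an ordered time series
cut = int(0.7 * len(data))
train, test = data[:cut], data[cut:]   # no shuffling: test is strictly later
print(train, test)  # [0..6] and [7, 8, 9]
```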
### Exponential Smoothing
inspiration: https://github.com/statsmodels/statsmodels/blob/master/examples/notebooks/exponential_smoothing.ipynb
```
!pip install patsy
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt
import matplotlib.pyplot as plt
import pandas as pd
train_frame = train_frame.reset_index()
df_series = pd.Series(train_frame["values"])
#df_series.index = pd.DatetimeIndex(train_frame.index, freq="N")
df_series
ax=df_series.plot(title="Original Data")
ax.set_ylabel("value")
plt.show()
```
### Simple Exponential Smoothing
```
fit1 = SimpleExpSmoothing(df_series).fit()
fcast1 = fit1.forecast(600)
ax = df_series.plot(marker='o', color='black',legend=True, label="Prom data", figsize=(12,8))
fcast1.plot(marker='o', markersize=0.2, ax=ax, color='blue', legend=True, label="Predicted")
fit1.fittedvalues.plot(marker='o', markersize=0.2, ax=ax, color='green',legend=True, label="ES fitted values")
plt.title("Simple Exponential Smoothing on Real Data")
plt.ylabel("value")
plt.xlabel("timestamp")
```
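Under the hood, simple exponential smoothing follows the recursion s_t = α·x_t + (1 − α)·s_{t−1}. A hand-rolled sketch on synthetic values:

```
def ses(values, alpha):
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}, seeded with the first value
    s = values[0]
    out = []
    for x in values:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

print(ses([10.0, 12.0, 11.0], alpha=0.5))  # [10.0, 11.0, 11.0]
```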
### Holt Winters Method (Additive)
The additive method gives a prediction by adding the seasonality, trend, and value components of the model together. This method is ideal for time series when the seasonality is relatively constant.
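In standard Holt-Winters notation (a textbook formulation, not taken from this notebook), the additive forecast and its component updates are:

```
\hat{y}_{t+h|t} = \ell_t + h b_t + s_{t+h-m(k+1)}
\ell_t = \alpha (y_t - s_{t-m}) + (1 - \alpha)(\ell_{t-1} + b_{t-1})
b_t = \beta (\ell_t - \ell_{t-1}) + (1 - \beta) b_{t-1}
s_t = \gamma (y_t - \ell_{t-1} - b_{t-1}) + (1 - \gamma) s_{t-m}
```

where m is the season length and k is the integer part of (h − 1)/m, so the forecast reuses the most recent seasonal estimates.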
```
fit2 = ExponentialSmoothing(df_series, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=True)
results = pd.DataFrame(index=[r"$\alpha$", r"$\beta$", r"$\phi$", r"$\gamma$", r"$l_0$", "$b_0$", "SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"] = [fit2.params[p] for p in params] + [fit2.sse]
ax = df_series.plot(figsize=(10,6), marker='o', color='black',legend=True, label="Prom data")
fit2.fittedvalues.plot(ax=ax, style='--', color='green', legend=True, label="HW fitted values")
fit2.forecast(5000).plot(ax=ax, style='--', marker='o', color='blue', legend=True, label="Predicted")
plt.title("Holt Winters Additive method on Real Data")
plt.ylabel("value")
plt.xlabel("timestamp")
plt.show()
```
### Holt Winter's Method (Multiplicative)
This method is used when the trend and seasonality vary with time. It is effective when the seasonal variation is non-stationary (i.e., changes over time).
```
fit2 = ExponentialSmoothing(df_series, seasonal_periods=4, trend='add', seasonal='mult').fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
ax = df_series.plot(figsize=(10,6), marker='o', color='black',legend=True, label="Prom data" )
fit2.fittedvalues.plot(ax=ax, style='--', color='green', legend=True, label="HW fitted values")
fit2.forecast(5000).plot(ax=ax, style='--', marker='o', color='blue', legend=True, label="Predicted")
plt.title(" Holt-Winters' Multiplicative method")
plt.ylabel("value")
plt.xlabel("timestamp")
plt.show()
from __future__ import print_function
import statsmodels.api as sm
from statsmodels.tsa.arima_process import arma_generate_sample
np.random.seed(12345)
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
#dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)
y = df_series#pd.Series(y, index=dates)
arma_mod = sm.tsa.ARMA(y, order=(2,2))
arma_res = arma_mod.fit(trend='nc', disp=-1)
print(arma_res.summary())
y.tail()
fig, ax = plt.subplots(figsize=(10,8))
fig = arma_res.plot_predict(start=8000, end=16000, ax=ax)
legend = ax.legend(loc='upper left')
plt.title("Time Series Forecasting using the ARIMA model")
fig, ax = plt.subplots(figsize=(10,8))
fig = arma_res.plot_predict(start=0, end=25000, ax=ax)
legend = ax.legend(loc='upper left')
plt.title("Time Series Forecasting using the ARIMA model")
!pip install fbprophet
from fbprophet import Prophet
```
#### Forecasting __list_images__
```
temp_frame = get_filtered_op_frame(OP_TYPE)
temp_frame = temp_frame.set_index(temp_frame.timestamp)
temp_frame = temp_frame[['timestamp','value']]
```
#### Separating train and test frame
We split the data so we can check forecast quality and examine the residual plot and its distribution.
```
train_frame = temp_frame[0 : int(0.7*len(temp_frame))]
test_frame = temp_frame[int(0.7*len(temp_frame)) : ]
print(len(train_frame), len(test_frame), len(temp_frame))
train_p = train_frame
train_p['y'] = train_frame['values']
train_p['ds'] = train_frame['timestamp']
test_p = test_frame
test_p['y'] = test_frame['values']
test_p['ds'] = test_frame['timestamp']
```
#### Initialisation of model
We can add seasonality when initialising the model. It can be a little hard to tune, but even the initial approach will show some seasonality in the forecast plot. Seasonality can be daily, weekly, monthly, or yearly.
* We can also add additional regressors for better forecasting
```
m = Prophet()
```
#### Model fitting
The model is fitted on the training data.
```
m.fit(train_p)
```
#### Make a future frame holding the timestamps to forecast
For example, if we run the model every 12 hours, we can build the future frame for the next 11 hours 59 minutes at a one-minute frequency, since observations are captured once per minute.
```
future = m.make_future_dataframe(periods= int(len(test_p) * 1.1),freq= '1MIN')
forecast = m.predict(future)
forecast.head()
```
#### Useful features in forecasted frame
```
forecasted_features = ['ds','yhat','yhat_lower','yhat_upper']
```
#### Plotting the future with history
The confidence interval is also plotted. After the historical portion of the plot, the forecast appears on the right-hand side of the graph.
```
m.plot(forecast,xlabel="Timestamp",ylabel="Value");
```
# Pix2Pix
### Goals
In this notebook, you will write a generative model based on the paper [*Image-to-Image Translation with Conditional Adversarial Networks*](https://arxiv.org/abs/1611.07004) by Isola et al. 2017, also known as Pix2Pix.
You will be training a model that can convert aerial satellite imagery ("input") into map routes ("output"), as was done in the original paper. Since the architecture for the generator is a U-Net, which you've already implemented (with minor changes), the emphasis of the assignment will be on the loss function. So that you can see outputs more quickly, you'll be able to see your model train starting from a pre-trained checkpoint - but feel free to train it from scratch on your own too.

<!-- You will take the segmentations that you generated in the previous assignment and produce photorealistic images. -->
### Learning Objectives
1. Implement the loss of a Pix2Pix model that differentiates it from a supervised U-Net.
2. Observe the change in generator priorities as the Pix2Pix generator trains, changing its emphasis from reconstruction to realism.
<!-- When you're done with this assignment, you'll be able to understand much of [*Image-to-Image Translation with Conditional Adversarial Networks*](https://arxiv.org/abs/1611.07004), which introduced Pix2Pix.
You'll be using the same U-Net as in the previous assignment, but you'll write another discriminator and change the loss to make it a GAN. -->
## Getting Started
You will start by importing libraries, defining a visualization function, and getting the pre-trained Pix2Pix checkpoint. You will also be provided with the U-Net code for the Pix2Pix generator.
```
import torch
from torch import nn
from tqdm.auto import tqdm
from torchvision import transforms
from torchvision.utils import make_grid
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
torch.manual_seed(0)
def show_tensor_images(image_tensor, num_images=25, size=(1, 28, 28)):
    '''
    Function for visualizing images: Given a tensor of images, number of images, and
    size per image, plots and prints the images in a uniform grid.
    '''
    image_shifted = image_tensor
    image_unflat = image_shifted.detach().cpu().view(-1, *size)
    image_grid = make_grid(image_unflat[:num_images], nrow=5)
    plt.imshow(image_grid.permute(1, 2, 0).squeeze())
    plt.show()
```
#### U-Net Code
The U-Net code will be much like the code you wrote for the last assignment, but with optional dropout and batchnorm. The structure is changed slightly for Pix2Pix, so that the final image is closer in size to the input image. Feel free to investigate the code if you're interested!
```
def crop(image, new_shape):
    '''
    Function for cropping an image tensor: Given an image tensor and the new shape,
    crops to the center pixels.
    Parameters:
        image: image tensor of shape (batch size, channels, height, width)
        new_shape: a torch.Size object with the shape you want x to have
    '''
    middle_height = image.shape[2] // 2
    middle_width = image.shape[3] // 2
    starting_height = middle_height - round(new_shape[2] / 2)
    final_height = starting_height + new_shape[2]
    starting_width = middle_width - round(new_shape[3] / 2)
    final_width = starting_width + new_shape[3]
    cropped_image = image[:, :, starting_height:final_height, starting_width:final_width]
    return cropped_image
class ContractingBlock(nn.Module):
    '''
    ContractingBlock Class
    Performs two convolutions followed by a max pool operation.
    Values:
        input_channels: the number of channels to expect from a given input
    '''
    def __init__(self, input_channels, use_dropout=False, use_bn=True):
        super(ContractingBlock, self).__init__()
        self.conv1 = nn.Conv2d(input_channels, input_channels * 2, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(input_channels * 2, input_channels * 2, kernel_size=3, padding=1)
        self.activation = nn.LeakyReLU(0.2)
        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
        if use_bn:
            self.batchnorm = nn.BatchNorm2d(input_channels * 2)
        self.use_bn = use_bn
        if use_dropout:
            self.dropout = nn.Dropout()
        self.use_dropout = use_dropout

    def forward(self, x):
        '''
        Function for completing a forward pass of ContractingBlock:
        Given an image tensor, completes a contracting block and returns the transformed tensor.
        Parameters:
            x: image tensor of shape (batch size, channels, height, width)
        '''
        x = self.conv1(x)
        if self.use_bn:
            x = self.batchnorm(x)
        if self.use_dropout:
            x = self.dropout(x)
        x = self.activation(x)
        x = self.conv2(x)
        if self.use_bn:
            x = self.batchnorm(x)
        if self.use_dropout:
            x = self.dropout(x)
        x = self.activation(x)
        x = self.maxpool(x)
        return x
class ExpandingBlock(nn.Module):
    '''
    ExpandingBlock Class:
    Performs an upsampling, a convolution, a concatenation of its two inputs,
    followed by two more convolutions with optional dropout
    Values:
        input_channels: the number of channels to expect from a given input
    '''
    def __init__(self, input_channels, use_dropout=False, use_bn=True):
        super(ExpandingBlock, self).__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
        self.conv1 = nn.Conv2d(input_channels, input_channels // 2, kernel_size=2)
        self.conv2 = nn.Conv2d(input_channels, input_channels // 2, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(input_channels // 2, input_channels // 2, kernel_size=2, padding=1)
        if use_bn:
            self.batchnorm = nn.BatchNorm2d(input_channels // 2)
        self.use_bn = use_bn
        self.activation = nn.ReLU()
        if use_dropout:
            self.dropout = nn.Dropout()
        self.use_dropout = use_dropout

    def forward(self, x, skip_con_x):
        '''
        Function for completing a forward pass of ExpandingBlock:
        Given an image tensor, completes an expanding block and returns the transformed tensor.
        Parameters:
            x: image tensor of shape (batch size, channels, height, width)
            skip_con_x: the image tensor from the contracting path (from the opposing block of x)
                for the skip connection
        '''
        x = self.upsample(x)
        x = self.conv1(x)
        skip_con_x = crop(skip_con_x, x.shape)
        x = torch.cat([x, skip_con_x], axis=1)
        x = self.conv2(x)
        if self.use_bn:
            x = self.batchnorm(x)
        if self.use_dropout:
            x = self.dropout(x)
        x = self.activation(x)
        x = self.conv3(x)
        if self.use_bn:
            x = self.batchnorm(x)
        if self.use_dropout:
            x = self.dropout(x)
        x = self.activation(x)
        return x
class FeatureMapBlock(nn.Module):
'''
FeatureMapBlock Class
The final layer of a U-Net -
maps each pixel to a pixel with the correct number of output dimensions
using a 1x1 convolution.
Values:
input_channels: the number of channels to expect from a given input
output_channels: the number of channels to expect for a given output
'''
def __init__(self, input_channels, output_channels):
super(FeatureMapBlock, self).__init__()
self.conv = nn.Conv2d(input_channels, output_channels, kernel_size=1)
def forward(self, x):
'''
Function for completing a forward pass of FeatureMapBlock:
Given an image tensor, returns it mapped to the desired number of channels.
Parameters:
x: image tensor of shape (batch size, channels, height, width)
'''
x = self.conv(x)
return x
class UNet(nn.Module):
'''
UNet Class
A series of 4 contracting blocks followed by 4 expanding blocks to
transform an input image into the corresponding paired image, with an upfeature
layer at the start and a downfeature layer at the end.
Values:
input_channels: the number of channels to expect from a given input
output_channels: the number of channels to expect for a given output
'''
def __init__(self, input_channels, output_channels, hidden_channels=32):
super(UNet, self).__init__()
self.upfeature = FeatureMapBlock(input_channels, hidden_channels)
self.contract1 = ContractingBlock(hidden_channels, use_dropout=True)
self.contract2 = ContractingBlock(hidden_channels * 2, use_dropout=True)
self.contract3 = ContractingBlock(hidden_channels * 4, use_dropout=True)
self.contract4 = ContractingBlock(hidden_channels * 8)
self.contract5 = ContractingBlock(hidden_channels * 16)
self.contract6 = ContractingBlock(hidden_channels * 32)
self.expand0 = ExpandingBlock(hidden_channels * 64)
self.expand1 = ExpandingBlock(hidden_channels * 32)
self.expand2 = ExpandingBlock(hidden_channels * 16)
self.expand3 = ExpandingBlock(hidden_channels * 8)
self.expand4 = ExpandingBlock(hidden_channels * 4)
self.expand5 = ExpandingBlock(hidden_channels * 2)
self.downfeature = FeatureMapBlock(hidden_channels, output_channels)
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
'''
Function for completing a forward pass of UNet:
Given an image tensor, passes it through U-Net and returns the output.
Parameters:
x: image tensor of shape (batch size, channels, height, width)
'''
x0 = self.upfeature(x)
x1 = self.contract1(x0)
x2 = self.contract2(x1)
x3 = self.contract3(x2)
x4 = self.contract4(x3)
x5 = self.contract5(x4)
x6 = self.contract6(x5)
x7 = self.expand0(x6, x5)
x8 = self.expand1(x7, x4)
x9 = self.expand2(x8, x3)
x10 = self.expand3(x9, x2)
x11 = self.expand4(x10, x1)
x12 = self.expand5(x11, x0)
xn = self.downfeature(x12)
return self.sigmoid(xn)
```
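The `crop` helper used inside `ExpandingBlock` is defined earlier in the notebook; for reference, a minimal center-crop sketch (an assumption on my part, written with NumPy-style `(batch, channels, height, width)` indexing so it works on `torch.Tensor` as well) might look like:

```python
import numpy as np

def crop(image, new_shape):
    # Center-crop a (batch, channels, height, width) array to the
    # spatial size given by new_shape; batch and channels are untouched.
    start_h = image.shape[2] // 2 - new_shape[2] // 2
    start_w = image.shape[3] // 2 - new_shape[3] // 2
    return image[:, :, start_h:start_h + new_shape[2],
                 start_w:start_w + new_shape[3]]

skip = np.arange(36).reshape(1, 1, 6, 6)
cropped = crop(skip, (1, 1, 4, 4))  # matches the shape of the upsampled x
```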
## PatchGAN Discriminator
Next, you will define a discriminator based on the contracting path of the U-Net to allow you to evaluate the realism of the generated images. Remember that the discriminator outputs a one-channel matrix of classifications instead of a single value. Your discriminator's final layer will simply map from the final number of hidden channels to a single prediction for every pixel of the layer before it.
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CLASS: Discriminator
class Discriminator(nn.Module):
'''
Discriminator Class
Structured like the contracting path of the U-Net, the discriminator will
output a matrix of values classifying corresponding portions of the image as real or fake.
Parameters:
input_channels: the number of image input channels
hidden_channels: the initial number of discriminator convolutional filters
'''
def __init__(self, input_channels, hidden_channels=8):
super(Discriminator, self).__init__()
self.upfeature = FeatureMapBlock(input_channels, hidden_channels)
self.contract1 = ContractingBlock(hidden_channels, use_bn=False)
self.contract2 = ContractingBlock(hidden_channels * 2)
self.contract3 = ContractingBlock(hidden_channels * 4)
self.contract4 = ContractingBlock(hidden_channels * 8)
#### START CODE HERE ####
self.final = nn.Conv2d(hidden_channels * 16, 1, kernel_size=1)
#### END CODE HERE ####
def forward(self, x, y):
x = torch.cat([x, y], axis=1)
x0 = self.upfeature(x)
x1 = self.contract1(x0)
x2 = self.contract2(x1)
x3 = self.contract3(x2)
x4 = self.contract4(x3)
xn = self.final(x4)
return xn
# UNIT TEST
test_discriminator = Discriminator(10, 1)
assert tuple(test_discriminator(
torch.randn(1, 5, 256, 256),
torch.randn(1, 5, 256, 256)
).shape) == (1, 1, 16, 16)
print("Success!")
```
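The expected `(1, 1, 16, 16)` shape in the unit test follows from the architecture: the 1x1 `FeatureMapBlock` and the padded 3x3 convolutions preserve height and width, while each of the four `ContractingBlock`s ends in a stride-2 max pool. A quick sanity check of that arithmetic:

```python
def patch_grid_size(input_size, n_contracting_blocks=4):
    # Each ContractingBlock halves the spatial dimensions via its
    # stride-2 max pool; the kernel-3/padding-1 convolutions and the
    # final 1x1 convolution leave height and width unchanged.
    size = input_size
    for _ in range(n_contracting_blocks):
        size //= 2
    return size

grid = patch_grid_size(256)  # 256 / 2**4 = 16, one prediction per patch
```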
## Training Preparation
<!-- You'll be using the same U-Net as in the previous assignment, but you'll write another discriminator and change the loss to make it a GAN. -->
Now you can begin putting everything together for training. You start by defining some new parameters as well as the ones you are familiar with:
* **real_dim**: the number of channels of the real image and the number expected in the output image
* **adv_criterion**: an adversarial loss function to keep track of how well the GAN is fooling the discriminator and how well the discriminator is catching the GAN
* **recon_criterion**: a loss function that rewards generated images for being similar to the ground truth, encouraging the generator to "reconstruct" the image
* **lambda_recon**: a parameter for how heavily the reconstruction loss should be weighed
* **n_epochs**: the number of times you iterate through the entire dataset when training
* **input_dim**: the number of channels of the input image
* **display_step**: how often to display/visualize the images
* **batch_size**: the number of images per forward/backward pass
* **lr**: the learning rate
* **target_shape**: the size of the output image (in pixels)
* **device**: the device type
```
import torch.nn.functional as F
# New parameters
adv_criterion = nn.BCEWithLogitsLoss()
recon_criterion = nn.L1Loss()
lambda_recon = 200
n_epochs = 20
input_dim = 3
real_dim = 3
display_step = 200
batch_size = 4
lr = 0.0002
target_shape = 256
device = 'cuda'
```
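A rough, back-of-the-envelope illustration (with hypothetical loss values, not taken from any actual training run) of why `lambda_recon = 200` makes the L1 reconstruction term dominate the generator objective until the pixel error gets small:

```python
import math

lambda_recon = 200
adv_loss = math.log(2)   # BCEWithLogits of a zero logit against a "real" label
recon_loss = 0.05        # hypothetical mean absolute pixel error
gen_loss = adv_loss + lambda_recon * recon_loss  # recon term contributes 10.0
```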
You will then pre-process the images of the dataset to make sure they're all the same size and that the size change due to U-Net layers is accounted for.
```
transform = transforms.Compose([
transforms.ToTensor(),
])
import torchvision
dataset = torchvision.datasets.ImageFolder("maps", transform=transform)
```
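Each image in the `maps` dataset stores the paired images side by side; the training loop later splits every tensor down the middle into a condition half and a real half. A standalone sketch of that split (NumPy stands in for the image tensor here):

```python
import numpy as np

def split_paired_image(image):
    # Split a (channels, height, 2 * width) paired image into its
    # left (condition) and right (real/target) halves.
    image_width = image.shape[-1]
    condition = image[..., :image_width // 2]
    real = image[..., image_width // 2:]
    return condition, real

paired = np.zeros((3, 256, 512))
condition, real = split_paired_image(paired)
```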
Next, you can initialize your generator (U-Net) and discriminator, as well as their optimizers. Finally, you will also load your pre-trained model.
```
gen = UNet(input_dim, real_dim).to(device)
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr)
disc = Discriminator(input_dim + real_dim).to(device)
disc_opt = torch.optim.Adam(disc.parameters(), lr=lr)
def weights_init(m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
if isinstance(m, nn.BatchNorm2d):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
torch.nn.init.constant_(m.bias, 0)
# Feel free to change pretrained to False if you're training the model from scratch
pretrained = True
if pretrained:
loaded_state = torch.load("pix2pix_15000.pth")
gen.load_state_dict(loaded_state["gen"])
gen_opt.load_state_dict(loaded_state["gen_opt"])
disc.load_state_dict(loaded_state["disc"])
disc_opt.load_state_dict(loaded_state["disc_opt"])
else:
gen = gen.apply(weights_init)
disc = disc.apply(weights_init)
```
While there are some changes to the U-Net architecture for Pix2Pix, the most important distinguishing feature of Pix2Pix is its adversarial loss. You will be implementing that here!
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CLASS: get_gen_loss
def get_gen_loss(gen, disc, real, condition, adv_criterion, recon_criterion, lambda_recon):
'''
Return the loss of the generator given inputs.
Parameters:
gen: the generator; takes the condition and returns potential images
disc: the discriminator; takes images and the condition and
returns real/fake prediction matrices
real: the real images (e.g. maps) to be used to evaluate the reconstruction
condition: the source images (e.g. satellite imagery) which are used to produce the real images
adv_criterion: the adversarial loss function; takes the discriminator
predictions and the true labels and returns an adversarial
loss (which you aim to minimize)
recon_criterion: the reconstruction loss function; takes the generator
outputs and the real images and returns a reconstruction
loss (which you aim to minimize)
lambda_recon: the degree to which the reconstruction loss should be weighted in the sum
'''
# Steps: 1) Generate the fake images, based on the conditions.
# 2) Evaluate the fake images and the condition with the discriminator.
# 3) Calculate the adversarial and reconstruction losses.
# 4) Add the two losses, weighting the reconstruction loss appropriately.
#### START CODE HERE ####
fake_images = gen(condition)
disc_res = disc(fake_images, condition)
adv_loss = adv_criterion(disc_res, torch.ones_like(disc_res))
recon_loss = recon_criterion(fake_images, real)
gen_loss = adv_loss + recon_loss * lambda_recon
#### END CODE HERE ####
return gen_loss
# UNIT TEST
def test_gen_reasonable(num_images=10):
gen = torch.zeros_like
disc = lambda x, y: torch.ones(len(x), 1)
real = None
condition = torch.ones(num_images, 3, 10, 10)
adv_criterion = torch.mul
recon_criterion = lambda x, y: torch.tensor(0)
lambda_recon = 0
assert get_gen_loss(gen, disc, real, condition, adv_criterion, recon_criterion, lambda_recon).sum() == num_images
disc = lambda x, y: torch.zeros(len(x), 1)
assert torch.abs(get_gen_loss(gen, disc, real, condition, adv_criterion, recon_criterion, lambda_recon)).sum() == 0
adv_criterion = lambda x, y: torch.tensor(0)
recon_criterion = lambda x, y: torch.abs(x - y).max()
real = torch.randn(num_images, 3, 10, 10)
lambda_recon = 2
gen = lambda x: real + 1
assert torch.abs(get_gen_loss(gen, disc, real, condition, adv_criterion, recon_criterion, lambda_recon) - 2) < 1e-4
adv_criterion = lambda x, y: (x + y).max() + x.max()
assert torch.abs(get_gen_loss(gen, disc, real, condition, adv_criterion, recon_criterion, lambda_recon) - 3) < 1e-4
test_gen_reasonable()
print("Success!")
```
## Pix2Pix Training
Finally, you can train the model and see some of your maps!
```
from skimage import color
import numpy as np
def train(save_model=False):
mean_generator_loss = 0
mean_discriminator_loss = 0
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
cur_step = 0
for epoch in range(n_epochs):
# Dataloader returns the batches
for image, _ in tqdm(dataloader):
image_width = image.shape[3]
condition = image[:, :, :, :image_width // 2]
condition = nn.functional.interpolate(condition, size=target_shape)
real = image[:, :, :, image_width // 2:]
real = nn.functional.interpolate(real, size=target_shape)
cur_batch_size = len(condition)
condition = condition.to(device)
real = real.to(device)
### Update discriminator ###
disc_opt.zero_grad() # Zero out the gradient before backpropagation
with torch.no_grad():
fake = gen(condition)
disc_fake_hat = disc(fake.detach(), condition) # Detach generator
disc_fake_loss = adv_criterion(disc_fake_hat, torch.zeros_like(disc_fake_hat))
disc_real_hat = disc(real, condition)
disc_real_loss = adv_criterion(disc_real_hat, torch.ones_like(disc_real_hat))
disc_loss = (disc_fake_loss + disc_real_loss) / 2
disc_loss.backward(retain_graph=True) # Update gradients
disc_opt.step() # Update optimizer
### Update generator ###
gen_opt.zero_grad()
gen_loss = get_gen_loss(gen, disc, real, condition, adv_criterion, recon_criterion, lambda_recon)
gen_loss.backward() # Update gradients
gen_opt.step() # Update optimizer
# Keep track of the average discriminator loss
mean_discriminator_loss += disc_loss.item() / display_step
# Keep track of the average generator loss
mean_generator_loss += gen_loss.item() / display_step
### Visualization code ###
if cur_step % display_step == 0:
if cur_step > 0:
print(f"Epoch {epoch}: Step {cur_step}: Generator (U-Net) loss: {mean_generator_loss}, Discriminator loss: {mean_discriminator_loss}")
else:
print("Pretrained initial state")
show_tensor_images(condition, size=(input_dim, target_shape, target_shape))
show_tensor_images(real, size=(real_dim, target_shape, target_shape))
show_tensor_images(fake, size=(real_dim, target_shape, target_shape))
mean_generator_loss = 0
mean_discriminator_loss = 0
# You can change save_model to True if you'd like to save the model
if save_model:
torch.save({'gen': gen.state_dict(),
'gen_opt': gen_opt.state_dict(),
'disc': disc.state_dict(),
'disc_opt': disc_opt.state_dict()
}, f"pix2pix_{cur_step}.pth")
cur_step += 1
train()
```
```
import os
import glob
import random
import operator
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
import tensorflow as tf
import keras.backend as K
from keras.models import Sequential, load_model
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import RMSprop
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.layers.normalization import BatchNormalization
from keras import regularizers, initializers, optimizers
from keras.callbacks import CSVLogger
#from livelossplot import PlotLossesKeras
#from imgaug import augmenters as iaa
#import seaborn as sns
val_all_path = '../input/validation-data/val/all'
val_hem_path = '../input/validation-data/val/hem'
val_all_list = os.listdir(val_all_path)
val_all_list.sort()
val_hem_list = os.listdir(val_hem_path)
val_hem_list.sort()
print('val/all: ', len(val_all_list))
print('val/hem :', len(val_hem_list))
val_all_batch = np.zeros((len(val_all_list), 210, 210, 3), dtype=np.uint8)
val_hem_batch = np.zeros((len(val_hem_list), 210, 210, 3), dtype=np.uint8)
print(val_all_batch.shape, val_hem_batch.shape)
def Read_n_Crop(list_data, batch, path):
    for i, x in enumerate(list_data):
        image = cv2.imread(os.path.join(path, x))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = crop_center(image, (210, 210, 3))
        batch[i] = image
    print(type(batch), batch.shape, batch.dtype, batch[0].shape, batch[0].dtype)
    return batch
def crop_center(img, bounding):
start = tuple(map(lambda a, da: a//2-da//2, img.shape, bounding))
end = tuple(map(operator.add, start, bounding))
slices = tuple(map(slice, start, end))
return img[slices]
parasite_images=Read_n_Crop(val_all_list, val_all_batch, val_all_path)
uninf_images=Read_n_Crop(val_hem_list, val_hem_batch, val_hem_path)
para_label = np.array([0 for _ in range(len(parasite_images))])
uninf_label = np.array([1 for _ in range(len(uninf_images))])
para_label.shape, uninf_label.shape
x_all = np.concatenate((parasite_images, uninf_images), axis=0)
y_all = np.concatenate((para_label, uninf_label), axis=0)
print(x_all.shape, y_all.shape)
model = None
model = load_model("../input/modelsforleukemia/Models for 5x Aug/adam_baseline_vgg.h5", compile = False)
model.summary()
x_all=x_all/255.0
# Make predictions using trained model
y_pred = model.predict(x_all, verbose=1)
print("Predictions: ", y_pred.shape)
y_pred_flat = (y_pred.ravel() > 0.5).astype(int)
from sklearn.metrics import confusion_matrix, classification_report
# Classification report
confusion_mtx = confusion_matrix(y_all, y_pred_flat)
print(confusion_mtx)
target_names = ['0', '1']
print(classification_report(y_all, y_pred_flat, target_names=target_names, digits=4))
```
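The `crop_center` helper above builds one symmetric `slice` per axis; restated standalone, its behaviour on a small array is easy to check:

```python
import operator
import numpy as np

def crop_center(img, bounding):
    # One slice per axis, each centered: start = axis_len//2 - crop_len//2.
    start = tuple(map(lambda a, da: a // 2 - da // 2, img.shape, bounding))
    end = tuple(map(operator.add, start, bounding))
    slices = tuple(map(slice, start, end))
    return img[slices]

img = np.arange(6 * 6 * 3).reshape(6, 6, 3)
patch = crop_center(img, (4, 4, 3))  # central 4x4 window, all channels
```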
# Equivalent layer technique for estimating total magnetization direction : Iteration process and L-curve application
Notebook to perform the inversion process. The L-curve is used to choose the regularization parameter for the estimated magnetic-moment distribution.
## Importing libraries
```
% matplotlib inline
import sys
import numpy as np
import matplotlib.pyplot as plt
import cPickle as pickle
import datetime
import timeit
import string as st
from fatiando.gridder import regular
notebook_name = 'airborne_EQL_magdirection_RM_calculation.ipynb'
```
## Plot style
```
plt.style.use('ggplot')
```
## Importing my package
```
dir_modules = '../../../mypackage'
sys.path.append(dir_modules)
import auxiliary_functions as fc
```
## Loading the model
```
with open('data/model_multi.pickle') as f:
model_multi = pickle.load(f)
```
## Loading observation points
```
with open('data/airborne_survey.pickle') as f:
airborne = pickle.load(f)
```
## Loading data set
```
with open('data/data_set.pickle') as f:
data = pickle.load(f)
```
## Open a dictionary
```
result_RM_airb = dict()
```
## List of saved files
```
saved_files = []
```
## Observation area
```
print 'Area limits: \n x_max = %.1f m \n x_min = %.1f m \n y_max = %.1f m \n y_min = %.1f m' % (airborne['area'][1],
airborne['area'][0],
airborne['area'][3],
airborne['area'][2])
```
## Airborne survey information
```
print 'Shape : (%.0f,%.0f)'% airborne['shape']
print 'Number of data: %.1f' % airborne['N']
print 'dx: %.1f m' % airborne['dx']
print 'dy: %.1f m ' % airborne['dy']
```
## Properties of the model
### Main field
```
inc_gf,dec_gf = model_multi['main_field']
print 'Main field inclination: %.1f degree' % inc_gf
print 'Main field declination: %.1f degree' % dec_gf
```
### Magnetization direction
```
print 'Inclination: %.1f degree' % model_multi['inc_R']
print 'Declination: %.1f degree' % model_multi['dec_R']
inc_R,dec_R = model_multi['inc_R'],model_multi['dec_R']
```
## Generating the layer
### Layer depth
```
h = 1150.
```
### Generating the equivalent sources coordinates
```
shape_layer = (airborne['shape'][0],airborne['shape'][1])
xs,ys,zs = regular(airborne['area'],shape_layer,h)
```
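`fatiando.gridder.regular` returns the flattened coordinates of a regular grid at constant height; a minimal NumPy stand-in (a sketch assuming fatiando's `(x1, x2, y1, y2)` area convention, with x varying along the first axis) could look like:

```python
import numpy as np

def regular_grid(area, shape, z=None):
    # Regular grid of points over area = (x1, x2, y1, y2) with
    # shape = (n_x, n_y); coordinates are returned flattened.
    x1, x2, y1, y2 = area
    nx, ny = shape
    xs, ys = np.meshgrid(np.linspace(x1, x2, nx),
                         np.linspace(y1, y2, ny), indexing='ij')
    if z is not None:
        return xs.ravel(), ys.ravel(), z * np.ones(nx * ny)
    return xs.ravel(), ys.ravel()

xs, ys, zs = regular_grid((0., 1000., 0., 2000.), (3, 5), z=1150.)
```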
## Iteration process : LM-NNLS for positive magnetic-moment distribution
```
i_pos = 1250
it_max = 30
it_marq = 15
lamb = 10.
dlamb = 100.
eps_e = 1e-4
eps_i = 1e-4
mu_list = [1e2,1e3,1e4,1e5,3.5*1e5,5*1e5,1e6,2*1e6]
mu_norm = []
norm_r = []
norm_m = []
m_est = []
incl_est = []
decl_est = []
phi_list = []
for i in mu_list:
m_LM,inc_est,dec_est,phi,imax,pest,incs,decs = fc.LM_NNLS(
data['tfa_obs_RM_airb'],airborne['x'],airborne['y'],
airborne['z'],xs,ys,zs,inc_gf,dec_gf,-10.,-10.,lamb,dlamb,i_pos,it_max,
it_marq,eps_e,eps_i,i)
G = fc.sensitivity_mag(airborne['x'],airborne['y'],airborne['z'],
xs,ys,zs,inc_gf,dec_gf,inc_est,dec_est)
tfpred = np.dot(G,m_LM)
r = data['tfa_obs_RM_airb'] - tfpred
norm_r.append(np.sqrt(np.sum(r*r)))
norm_m.append(np.sqrt(np.sum(m_LM*m_LM)))
m_est.append(m_LM)
incl_est.append(inc_est)
decl_est.append(dec_est)
phi_list.append(phi)
```
## L-curve visualization
```
title_font = 20
bottom_font = 18
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(10, 10), tight_layout=True)
plt.loglog(norm_r,norm_m, 'b-')
plt.title('L-curve', fontsize=title_font)
plt.xlabel('r_norm', fontsize = title_font)
plt.ylabel('m_norm', fontsize = title_font)
plt.tick_params(axis='both', which='major', labelsize=15)
file_name = 'figs/airborne/Lcurve_RM'
plt.savefig(file_name+'.png',dpi=300)
saved_files.append(file_name+'.png')
plt.show()
```
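A common way to pick the regularization parameter from an L-curve like this one is the point of maximum curvature (the "corner") in log-log space; a hedged finite-difference sketch, assuming `norm_r` and `norm_m` are the lists built above:

```python
import numpy as np

def lcurve_corner(norm_r, norm_m):
    # Index of maximum curvature of the (log ||r||, log ||m||) curve,
    # estimated with centered finite differences via np.gradient.
    x = np.log10(np.asarray(norm_r, dtype=float))
    y = np.log10(np.asarray(norm_m, dtype=float))
    dx, dy = np.gradient(x), np.gradient(y)
    d2x, d2y = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * d2y - dy * d2x) / (dx**2 + dy**2)**1.5
    return int(np.argmax(curvature))

# Synthetic L-shaped curve: the corner sits at the middle point.
idx = lcurve_corner([1., 1., 1., 10., 100.], [100., 10., 1., 1., 1.])
```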
### Results
```
result_RM_airb['magnetic_moment'] = m_est
result_RM_airb['inc_est'] = incl_est
result_RM_airb['dec_est'] = decl_est
result_RM_airb['layer_depth'] = h
result_RM_airb['reg_parameter'] = mu_list
result_RM_airb['phi'] = phi_list
```
### Generating .pickle file
```
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
result_RM_airb['metadata'] = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
file_name = 'data/result_RM_airb.pickle'
with open(file_name, 'w') as f:
pickle.dump(result_RM_airb, f)
saved_files.append(file_name)
```
### Saved files
```
with open('reports/report_%s.md' % notebook_name[:st.index(notebook_name, '.')], 'w') as q:
q.write('# Saved files \n')
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
header = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
q.write('\n\n'+header+'\n\n')
for i, sf in enumerate(saved_files):
print '%d %s' % (i+1,sf)
q.write('* `%s` \n' % (sf))
```
```
from google.colab import drive
drive.mount('/content/drive/')
%cd '/content/drive/My Drive/thesis'
import config_16x15_seq
%cd '/content/drive/My Drive/thesis/config_16x15_seq'
import os
import pprint
import tensorflow as tf
if 'COLAB_TPU_ADDR' not in os.environ:
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
print('ERROR!')
else:
print('Found GPU at: {}'.format(device_name))
else:
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print('TPU address is', tpu_address)
with tf.Session(tpu_address) as session:
devices = session.list_devices()
print('TPU devices:')
pprint.pprint(devices)
from config_16x15_seq.builder import *
from tqdm import tqdm
from keras.callbacks import BaseLogger, History, CallbackList
from scipy.ndimage import gaussian_filter
import copy
MU_training = MU['training']
MU_norm_training = MU_norm['training']
PHI_meas_training = PHI_meas['training']
MUa_training = MUa['training']
MUsp_training = MUsp['training']
freq_training = freq['training']
d_training = d['training']
lr = 0.0002
beta_1 = 0.5
# clip_value = 0.01
optimizer = Adam(lr=lr, beta_1=beta_1)
# optimizer = RMSprop(lr=lr)
generator = primary_net()
print('Generator model summary:')
generator.summary()
discriminator = secondary_net()
discriminator.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[custom_binary_accuracy])
print('Discriminator model summary:')
discriminator.summary()
discriminator.trainable = False
# inputs = Input((PHI_meas_training.shape[1],))
outputs = generator.outputs
outputs.append(discriminator(outputs[:2]))
gan = Model(inputs=generator.inputs, outputs=outputs)
loss = ['mse', 'mse', 'mse', 'mse', 'mse', 'mse', 'binary_crossentropy']
loss_weights = [0, 0, 5, 5, 1e4, 1, 1]
metrics = ['mae', 'mse', 'mape', 'msle', 'logcosh', 'cosine', custom_binary_accuracy]
gan.compile(loss=loss, loss_weights=loss_weights, optimizer=optimizer, metrics=metrics)
print('Generative adversarial network model summary:')
gan.summary()
history = History()
logger = BaseLogger(stateful_metrics=['discriminator_' + s for s in discriminator.stateful_metric_names] +
['gan_' + s for s in gan.stateful_metric_names])
# checkpoint = callbacks.ModelCheckpoint('model_test{epoch:02d}.h5', verbose=1)
time_history = TimeHistory()
callbacks = [logger, time_history, history]
out_labels = ['discriminator_' + s for s in discriminator.metrics_names] + ['gan_' + s for s in gan.metrics_names]
callback_metrics = copy.copy(out_labels) + ['val_' + n for n in out_labels]
callbacks = CallbackList(callbacks)
# callbacks = [TimeHistory(), EarlyStopping(monitor='loss', min_delta=0.1, patience=100,
# restore_best_weights=True, verbose=1)]
# early_stop = EarlyStopping(patience=200, verbose=1)
choice = np.zeros(data_size, dtype='uint8')
choice[np.random.choice(data_size, val_size, replace=False)] = 1
MU_training_train = MU_training[choice == 0]
MU_training_val = MU_training[choice == 1]
MU_norm_training_train = MU_norm_training[choice == 0]
MU_norm_training_val = MU_norm_training[choice == 1]
PHI_meas_training_train = PHI_meas_training[choice == 0]
PHI_meas_training_val = PHI_meas_training[choice == 1]
freq_training_train = freq_training[choice == 0]
freq_training_val = freq_training[choice == 1]
d_training_train = d_training[choice == 0]
d_training_val = d_training[choice == 1]
MUa_train = MUa_training[choice == 0]
MUa_val = MUa_training[choice == 1]
MUsp_train = MUsp_training[choice == 0]
MUsp_val = MUsp_training[choice == 1]
# choice = data_cat
batch_size = 32
prob_param = 20.0
prob_param2 = 1.0
# label_softness = 0.1
# metrics_eval_n = 50
# metrics_eval_step = 20
hard_valid = np.ones(val_size)
hard_fake = np.zeros(val_size)
# soft_label = False
# augment = True
# min_img_loss_avg = np.inf
# min_dis_loss_avg = np.inf
# img_loss_baseline_factor = 1.1
# img_loss_threshold = 1.2
# img_loss_threshold2 = 0.6
# prob_augment = 0.5
# gan_d_acc = [0.5, 0.5]
# init_train = 100
N_count_train = np.zeros_like(N_count)
n = 0
for i in range(len(N_count)):
ni = N_count[i]
N_count_train[i] = np.where(choice[n:(n + ni)] == 0)[0].size
n += ni
index_array = np.arange(train_size)
index_array2 = np.arange(train_size)
ia = np.arange(train_size)
n = N_count_train[0]
np.random.shuffle(index_array[:n])
np.random.shuffle(index_array2[:n])
for i in range(1, len(N_count_train) - 1):
ni = N_count_train[i]
np.random.shuffle(index_array[n:(n + ni)])
np.random.shuffle(index_array2[n:(n + ni)])
n += ni
np.random.shuffle(index_array[n:])
np.random.shuffle(index_array2[n:])
def run_epoch(epochs):
callbacks.set_params({
'batch_size': batch_size,
'epochs': epochs,
'steps': None,
'samples': train_size,
'verbose': 2,
'do_validation': True,
'metrics': callback_metrics,
})
callbacks.on_train_begin()
for epoch in range(epochs):
for m in discriminator.stateful_metric_functions:
m.reset_states()
for m in gan.stateful_metric_functions:
m.reset_states()
callbacks.on_epoch_begin(epoch)
epoch_logs = {}
progress_bar = None
# np.random.shuffle(ia)
n = N_count_train[0]
np.random.shuffle(ia[:n])
for i in range(1, len(N_count_train) - 1):
ni = N_count_train[i]
np.random.shuffle(ia[n:(n + ni)])
n += ni
np.random.shuffle(ia[n:])
num_batches = (train_size + batch_size - 1) // batch_size
# if epoch > 0 and epoch % metrics_eval_step == 0:
# hist_avg = {}
# for k, v in history.history.items():
# if k.startswith('val_'):
# l = k[4:]
# else:
# l = k
# if l in logger.stateful_metrics:
# hist_avg[k] = v[-1]
# else:
# hist_avg[k] = np.average(v[-metrics_eval_n:])
# img_loss_avg = hist_avg['val_gan_' + gan.metrics_names[0]] - hist_avg['val_gan_' + gan.metrics_names[7]]
# dis_loss_avg = hist_avg['val_discriminator_' + discriminator.metrics_names[0]]
# min_img_loss = np.min(history.history['val_gan_' + gan.metrics_names[0]]) \
# - np.min(history.history['val_gan_' + gan.metrics_names[7]])
# if img_loss_avg > min_img_loss_avg * img_loss_baseline_factor:
# if not augment:
# augment = True
# elif img_loss_avg >= img_loss_threshold and min_img_loss >= img_loss_threshold2:
# soft_label = True
# elif dis_loss_avg > min_dis_loss_avg:
# if augment:
# augment = False
# elif img_loss_avg < img_loss_threshold:
# soft_label = False
# img_loss_avg = np.average(history.history['val_gan_' + gan.metrics_names[0]][-metrics_eval_n:])\
# - np.average(history.history['val_gan_' + gan.metrics_names[7]][-metrics_eval_n:])
# if soft_label and img_loss_avg < img_loss_threshold:
# soft_label = False
# if np.random.random() > 0.5:
# augment = not augment
# min_img_loss_avg = min(img_loss_avg, min_img_loss_avg)
# min_dis_loss_avg = min(dis_loss_avg, min_dis_loss_avg)
# if epoch >= init_train:
# if gan_d_acc[0] < 0.5 or gan_d_acc[1] < 0.5:
# prob_augment = 0.8
# else:
# prob_augment = 0.5
# if augment == (np.random.random() < prob_augment):
# augment = not augment
batch_end = 0
# prob_augment = np.random.random()
for batch_index in range(num_batches):
augment = np.random.random() < 0.5
batch_start = batch_end
batch_end = min(train_size, batch_end + batch_size)
batch_ids = index_array[ia[batch_start:batch_end]]
batch_logs = {'batch': batch_index, 'size': len(batch_ids)}
callbacks.on_batch_begin(batch_index, batch_logs)
MU_gt = [MU_training_train[batch_ids, 0], MU_training_train[batch_ids, 1]]
MU_gt = [im.reshape(im.shape[:1] + (1,) + im.shape[1:]) for im in MU_gt]
in_gen = [PHI_meas_training_train[batch_ids], freq_training_train[batch_ids], d_training_train[batch_ids]]
if augment:
noise_param = np.random.random(len(batch_ids))
blur_param = np.random.random(len(batch_ids))
noise_param *= blur_param
MU_gt_aug = [np.copy(im) for im in MU_gt]
for i, bi in enumerate(batch_ids):
temp = mask_image(MU_gt_aug[0][i, 0], MUa_train[bi])
temp = temp + temp * np.random.normal(scale=noise_param[i] * 0.5, size=temp.shape)
temp = gaussian_filter(temp, blur_param[i] * 5.0)
MU_gt_aug[0][i, 0] = mask_image(temp, 0.0)
temp = mask_image(MU_gt_aug[1][i, 0], MUsp_train[bi])
temp = temp + temp * np.random.normal(scale=noise_param[i] * 0.5, size=temp.shape)
temp = gaussian_filter(temp, blur_param[i] * 5.0)
MU_gt_aug[1][i, 0] = mask_image(temp, 0.0)
# temp = np.clip(blur_param, prob_param2 / prob_param, 1.0 - prob_param2 / prob_param)
valid = 0.95 - 0.05 * np.random.random(len(batch_ids))
# valid = 1.0 - label_softness * np.random.beta(temp * prob_param, (1.0 - temp) * prob_param)
# if soft_label:
# fake = label_softness * np.random.random(len(batch_ids))
# valid = 1.0 - label_softness * np.random.beta(temp * prob_param, (1.0 - temp) * prob_param)
# else:
# fake = hard_fake[:len(batch_ids)]
# valid = hard_valid[:len(batch_ids)] - label_softness
else:
MU_gt_aug = MU_gt
valid = hard_valid[:len(batch_ids)] - 0.05
# valid = 1.0 - 0.05 * np.random.random(len(batch_ids))
# valid = 1.0 - label_softness * np.random.random(len(batch_ids))
# valid = 1.0 - label_softness * np.random.beta(prob_param2,
# prob_param - prob_param2, size=len(batch_ids))
# if soft_label:
# fake = label_softness * np.random.random(len(batch_ids))
# valid = 1.0 - label_softness * np.random.beta(
# prob_param2, prob_param - prob_param2, size=len(batch_ids))
# else:
# fake = hard_fake[:len(batch_ids)]
# valid = hard_valid[:len(batch_ids)] - label_softness
# fake = hard_fake[:len(batch_ids)]
fake = 0.05 * np.random.random(len(batch_ids))
# if epoch < init_train:
# valid -= label_softness * np.random.random(len(batch_ids)) * (init_train - epoch) / init_train
# fake += label_softness * np.random.random(len(batch_ids)) * (init_train - epoch) / init_train
d_loss1 = discriminator.train_on_batch(MU_gt_aug, valid)
# d_loss1 = discriminator.train_on_batch([np.concatenate([im1, im2], axis=0)
# for im1, im2 in zip(MU_gt_aug, MU_gt)],
# np.concatenate([valid, hard_valid[:len(batch_ids)]], axis=0),
# sample_weight=np.ones(2 * len(batch_ids)) * 0.5)
d_loss2 = discriminator.train_on_batch(generator.predict(in_gen)[:2], fake)
# d_loss = discriminator.train_on_batch([np.concatenate([im1, im2], axis=0)
# for im1, im2 in zip(MU_gt_aug, generator.predict(in_gen)[:2])],
# np.concatenate([valid, fake], axis=0))
d_loss = np.zeros(len(discriminator.metrics_names))
for i, l in enumerate(discriminator.metrics_names):
if l in discriminator.stateful_metric_names:
d_loss[i] = d_loss2[i]
else:
d_loss[i] = 0.5 * (d_loss1[i] + d_loss2[i])
# for l in discriminator.layers:
# weights = l.get_weights()
# weights = [np.clip(w, -clip_value, clip_value) for w in weights]
# l.set_weights(weights)
batch_ids = index_array2[ia[batch_start:batch_end]]
MU_gt = [MU_training_train[batch_ids, 0], MU_training_train[batch_ids, 1]]
MU_gt = [im.reshape(im.shape[:1] + (1,) + im.shape[1:]) for im in MU_gt]
in_gen = [PHI_meas_training_train[batch_ids], freq_training_train[batch_ids], d_training_train[batch_ids]]
MU_norm_gt = [MU_norm_training_train[batch_ids, 0], MU_norm_training_train[batch_ids, 1]]
MU_norm_gt = [im.reshape(im.shape[:1] + (1,) + im.shape[1:]) for im in MU_norm_gt]
g_loss = gan.train_on_batch(in_gen, [MU_gt[0], MU_gt[1], MU_norm_gt[0], MU_norm_gt[1],
MUa_train[batch_ids], MUsp_train[batch_ids],
hard_valid[:len(batch_ids)]])
for l, o in zip(out_labels, np.concatenate([d_loss, g_loss])):
batch_logs[l] = o
callbacks.on_batch_end(batch_index, batch_logs)
if progress_bar is None:
progress_bar = tqdm(total=train_size, desc='Epoch {}/{}'.format(epoch + 1, epochs), unit='samples')
progress_bar.set_postfix(train_on_batch=('[D loss: [%g %g], acc.: [%.2f%% %.2f%%]] [G total_loss: %g, ' +
'img_loss: %g, d_loss: %g, d_acc.: %.2f%%]') %
(d_loss1[0], d_loss2[0], 100 * d_loss1[1], 100 * d_loss2[1],
g_loss[0],
g_loss[0] - g_loss[7], g_loss[7], 100 * g_loss[-1]),
augment=augment)
progress_bar.update(len(batch_ids))
MU_gt = [MU_training_val[:, np.array([0])], MU_training_val[:, np.array([1])]]
d_val_loss1 = discriminator.evaluate(MU_gt, hard_valid, batch_size=batch_size, verbose=0)
d_val_loss2 = discriminator.evaluate(generator.predict([PHI_meas_training_val, freq_training_val,
d_training_val])[:2], hard_fake, batch_size=batch_size,
verbose=0)
# d_val_loss = discriminator.evaluate([np.concatenate([im1, im2], axis=0)
# for im1, im2 in zip(MU_gt,
# generator.predict(PHI_meas_training_val)[:2])],
# np.concatenate([hard_valid, hard_fake], axis=0), verbose=0)
d_val_loss = np.zeros(len(discriminator.metrics_names))
for i, l in enumerate(discriminator.metrics_names):
if l in discriminator.stateful_metric_names:
d_val_loss[i] = d_val_loss2[i]
else:
d_val_loss[i] = 0.5 * (d_val_loss1[i] + d_val_loss2[i])
MU_norm_gt = [MU_norm_training_val[:, np.array([0])], MU_norm_training_val[:, np.array([1])]]
progress_bar.set_postfix(validate_on_epoch='[D loss: [%g %g], acc.: [%.2f%% %.2f%%]] validating generator..' %
(d_val_loss1[0], d_val_loss2[0], 100 * d_val_loss1[1],
100 * d_val_loss2[1]))
g_val_loss = gan.evaluate([PHI_meas_training_val, freq_training_val, d_training_val],
[MU_gt[0], MU_gt[1], MU_norm_gt[0], MU_norm_gt[1], MUa_val, MUsp_val, hard_valid],
batch_size=batch_size, verbose=0)
for l, o in zip(out_labels, np.concatenate([d_val_loss, g_val_loss])):
epoch_logs['val_' + l] = o
# num_batches = (val_size + batch_size - 1) // batch_size
# for batch_index in range(num_batches):
# batch_start = batch_index * batch_size
# batch_end = min(val_size, (batch_index + 1) * batch_size)
# MU_gt = [MU_training_val[batch_start:batch_end, 0], MU_training_val[batch_start:batch_end, 0]]
# MU_gt = [im.reshape(im.shape[:1] + (1,) + im.shape[1:]) for im in MU_gt]
# d_loss1 = discriminator.test_on_batch(MU_gt, hard_valid[:(batch_end - batch_start)])
# in_gen = PHI_meas_training_val[batch_start:batch_end]
# d_loss2 = discriminator.test_on_batch(generator.predict(in_gen)[:2], hard_fake[:(batch_end - batch_start)])
# d_loss = np.zeros(len(discriminator.metrics_names))
# for i, l in enumerate(discriminator.metrics_names):
# if l in discriminator.stateful_metric_names:
# d_loss[i] = d_loss2[i]
# else:
# d_loss[i] = 0.5 * (d_loss1[i] + d_loss2[i])
# MU_norm_gt = [MU_norm_training_val[batch_start:batch_end, 0],
# MU_norm_training_val[batch_start:batch_end, 1]]
# MU_norm_gt = [im.reshape(im.shape[:1] + (1,) + im.shape[1:]) for im in MU_norm_gt]
# g_loss = gan.test_on_batch(in_gen, [MU_gt[0], MU_gt[1], MU_norm_gt[0], MU_norm_gt[1],
# MUa_val[batch_start:batch_end], MUsp_val[batch_start:batch_end],
# hard_valid[:(batch_end - batch_start)]])
# for l, o in zip(out_labels, np.concatenate([d_loss, g_loss])):
# if l in logger.stateful_metrics:
# epoch_logs['val_' + l] = o
# else:
# if 'val_' + l in epoch_logs:
# epoch_logs['val_' + l] += o * (batch_end - batch_start)
# else:
# epoch_logs['val_' + l] = o * (batch_end - batch_start)
# pbar.set_postfix(validate_on_batch=('[D loss: %g, acc.: %.2f%%] [G total_loss: %g, img_loss: %g, ' +
# 'd_loss: %g, d_acc.: %.2f%%]') %
# (d_loss[0], 100 * d_loss[1], g_loss[0], g_loss[0] - g_loss[7],
# g_loss[7], 100 * g_loss[-1]), soft_label=soft_label, augment=augment)
# pbar.update(batch_end - batch_start)
# for l in out_labels:
# if l not in logger.stateful_metrics:
# epoch_logs['val_' + l] /= val_size
callbacks.on_epoch_end(epoch, epoch_logs)
# gan_d_acc[0] = epoch_logs[out_labels[-1]]
# gan_d_acc[1] = epoch_logs['val_' + out_labels[-1]]
progress_bar.set_postfix(train=('[D loss: %g, acc.: %.2f%%] [G total_loss: %g, img_loss: %g, d_loss: %g, ' +
'd_acc.: %.2f%%]') %
(epoch_logs[out_labels[0]], 100 * epoch_logs[out_labels[1]],
epoch_logs[out_labels[2]],
epoch_logs[out_labels[2]] - epoch_logs[out_labels[9]],
epoch_logs[out_labels[9]],
100 * epoch_logs[out_labels[-1]]),
validation=('[D loss: %g, acc.: %.2f%%] [G total_loss: %g, img_loss: %g, ' +
'd_loss: %g, d_acc.: %.2f%%]') %
(epoch_logs['val_' + out_labels[0]],
100 * epoch_logs['val_' + out_labels[1]],
epoch_logs['val_' + out_labels[2]],
epoch_logs['val_' + out_labels[2]] - epoch_logs['val_' + out_labels[9]],
epoch_logs['val_' + out_labels[9]],
100 * epoch_logs['val_' + out_labels[-1]]))
progress_bar.close()
callbacks.on_train_end()
hist_data = {
'train_start': time_history.train_time_start,
'train_time': time_history.train_time,
'epoch_time': time_history.times,
'history': history.history,
'choice': np.where(choice == 1)[0].tolist()
}
return hist_data
hist_data = run_epoch(100)
gan.save('model46log_100.h5')
generator.save('model46log_100_gen.h5')
discriminator.save('model46log_100_dis.h5')
with open('model_hist_data46.0log.json', 'w') as f:
json.dump(hist_data, f)
```
# Dunder Data Challenge 004 - Finding the Date of the Largest Percentage Stock Price Drop
In this challenge, you are given a table of closing stock prices for 10 different stocks with data going back as far as 1999. For each stock, find the date where it had its largest one-day percentage loss. The data is found in the `stocks10.csv` file with each stock's ticker symbol as a column name.
```
import pandas as pd
stocks = pd.read_csv('../data/stocks10.csv')
stocks.head()
```
### Challenge
There is a nice, fast solution that uses just a minimal amount of code without any loops. Can you return a Series that has the ticker symbols in the index and the date where the largest percentage price drop happened as the values?
#### Extra challenge
Can you return a DataFrame with the ticker symbol as the columns and a row for the date and another row for the percentage price drop?
## Solution
To begin, we need to find the percentage drop for each stock for each day. pandas has a built-in method for this called `pct_change`. By default, it finds the percentage change between the current value and the one immediately above it. Like most DataFrame methods, it treats each column independently from the others.
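As a quick illustration on toy numbers (not from the stocks file), each change is divided by the *previous* value, and the first row has nothing to compare against:

```python
import pandas as pd

# Toy price series, purely illustrative
s = pd.Series([100.0, 110.0, 99.0])
changes = s.pct_change()
# First element is NaN; the others are (110-100)/100 and (99-110)/110
print(changes.tolist())
```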
If we call it on our current DataFrame, we'll get an error as it will not work on our date column. Let's re-read in the data, converting the date column to a datetime and place it in the index.
```
stocks = pd.read_csv('../data/stocks10.csv', parse_dates=['date'], index_col='date')
stocks.head()
```
Placing the date column in the index is a key part of this challenge that makes our solution quite a bit nicer. Let's now call the `pct_change` method to get the percentage change for each trading day.
```
stocks.pct_change().head()
```
Let's verify that one of the calculated values is what we desire. MSFT dropped 2 cents from 29.84 to 29.82 on its second trading day in this dataset. Since `pct_change` divides the change by the *previous* value, the percentage calculated below equals the one from the method above.
```
(29.82 - 29.84) / 29.84
```
Most pandas users know how to get the maximum and minimum value of each column with the methods `max`/`min`. Let's find the largest drop by calling the `min` method.
```
stocks.pct_change().min()
```
For the first part of this challenge, we aren't interested in the value of the largest percentage one-day drop, but the date that it happened. Since the date is in the index, we can use the lesser-known method called `idxmin` which returns the index of the minimum. An analogous `idxmax` method also exists.
```
stocks.pct_change().idxmin()
```
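On a toy Series (hypothetical dates), `idxmin` returns the index *label* of the minimum rather than its position, which is why parking the dates in the index pays off:

```python
import pandas as pd

returns = pd.Series([0.02, -0.05, 0.01],
                    index=pd.to_datetime(['2020-01-02', '2020-01-03', '2020-01-06']))
print(returns.idxmin())  # the date label of the smallest value
```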
In general mathematical speak, this calculation is known as the [arg min or arg max][1].
[1]: https://en.m.wikipedia.org/wiki/Arg_max
### Extra challenge
Knowing the date of the largest drop is great, but it doesn't tell us what the value of the drop was. We need to return both the minimum and the date of that minimum. This is possible with help from the `agg` method which allows us to return any number of aggregations from our DataFrame.
An aggregation is any function that returns a single value. Both `min` and `idxmin` return a single value and therefore are considered aggregations. The `agg` method works by accepting a list of aggregating functions where the functions are written as strings.
```
stocks.pct_change().agg(['idxmin', 'min'])
```
# Become a pandas expert
If you are looking to completely master the pandas library and become a trusted expert for doing data science work, check out my book [Master Data Analysis with Python][1]. It comes with over 300 exercises with detailed solutions covering the pandas library in-depth.
[1]: https://www.dunderdata.com/master-data-analysis-with-python
- load logs
- normalisation
- feature engineering
- UMAP
```
from welly import Project
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data_df = pd.read_csv("./LASDF_ss.csv").drop(['UWI'], axis=1)
# data_df = pd.read_csv("./big_df.csv")
data_df['W'] = data_df['W'].apply(lambda n: n[:-4])
data_df['LITHOLOGY_GEOLINK'] = data_df.apply(lambda row: row['LITHOLOGY_GEOLINK'] if not np.isnan(row['LITHOLOGY_GEOLINK']) else -1, axis=1)
data_df.head()
valid_df = data_df.loc[~data_df.isna().any(axis=1)]
```
#### Features
```
well_names = valid_df['W'].unique()
print(well_names)
st_wells = ['25_2-13 T4', '25_8-5 S', '31_2-2 R', '31_5-2 R', '31_5-4 S', '34_10-16 R', '34_2-2 R', '34_11-2 S']
v_well_names = [n for n in well_names if n not in st_wells]
print(len(v_well_names), "prob vertical")
import random
# well_names_to_use = random.sample(v_well_names, 6)
well_names_to_use = ['31_2-1', '31_2-10', '31_2-3', '31_2-7', '31_2-8', '31_2-9']
valid_df = valid_df[valid_df['W'].isin(well_names_to_use)]
print(well_names_to_use)
meta_df = valid_df[['W','LITHOLOGY_GEOLINK']].copy()
feature_df= valid_df.drop(['W','Unnamed: 0','LITHOLOGY_GEOLINK'], axis=1)
dfs = [meta_df, feature_df]
feature_df = pd.concat(dfs, axis='columns')
feature_df.columns.values
feature_df = feature_df.dropna()
# feature_df.to_pickle('./noqc_features_sp.pickle')
# take a smaller random sample
# feature_df = feature_df.head(40000)
# feature_df = feature_df.iloc[::9]
print(len(feature_df))
X = feature_df.drop(['W'], axis=1).values.tolist()
print(len(X), len(X[0]))
```
#### Scaling
UMAP builds its embedding from distances between samples, so the features should be brought onto comparable scales before embedding.
```
from sklearn.preprocessing import RobustScaler
scaler = RobustScaler()
scaler.fit(X)
Xs = scaler.transform(X)
```
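As a sanity check on toy data (values chosen for illustration), `RobustScaler` centers each feature on its median and divides by its interquartile range, so a single outlier does not squash the rest of the scale:

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

X_toy = np.array([[1.0], [2.0], [3.0], [100.0]])  # 100.0 is an outlier
Xs_toy = RobustScaler().fit_transform(X_toy)
# The median maps to 0; the inliers stay close to 0 while the outlier stands apart
print(Xs_toy.ravel())
```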
#### Run UMAP
```
n_neighbors = 8  # default 15
min_dist = 0.99  # default 0.1
n_components = 2
metric = 'minkowski'  # default 'euclidean'
import umap
reducer = umap.UMAP(
n_neighbors=n_neighbors,
min_dist=min_dist,
n_components=n_components,
metric=metric)
import time
start = time.process_time()
embedding = reducer.fit_transform(Xs)
end = time.process_time()
embedding.shape
print("Time {} secs".format(end-start))
import matplotlib.pyplot as plt
NUM_LITH = 37
LITH_KEY_MAP = [1,1,1,1,7,7,8,8,3,1,3,3,3,8,3,3,4,4,6,6,2,2,5,5,8,2,2,6,4,1,4,2,2,2,1,4]
LITH_NAME_MAP = ["Sand","Evaporite","Calc","Lithic","Intrusive","Silt","Shale"]
valid_df["LITH"] = valid_df['LITHOLOGY_GEOLINK'].apply(lambda n: LITH_KEY_MAP[int(n)] )
import matplotlib as mpl
cmap = mpl.cm.get_cmap('viridis', NUM_LITH)
plt.figure(figsize=(12,12))
plt.scatter(embedding[:,0], embedding[:,1], c=feature_df['LITHOLOGY_GEOLINK'], cmap='tab20')
plt.colorbar()
```
# Metadata
## Overview
At its core, metadata is data about data. In day-to-day GIS data management workflows, data is created, updated,
archived and used for various decision support systems. Part of the information management lifecycle of data includes maintenance, protection and preservation, as well as facilitating discovery. Metadata serves to meet these requirements.
## Core concepts
Documentation is critical in order to describe:
- who is responsible and who to contact for the data
- what the data represents (features, grids, etc.)
- where the data is located
- when the data was created, updated and what time span is the data based on
- why the data exists
- how the data was generated
## Standards
There are numerous standards that exist in support of documenting data. The [Dublin Core](https://dublincore.org) standard provides 16 core elements to describe any resource. The [OGC Catalogue Service for the Web](https://opengeospatial.org/standards/cat) leverages Dublin Core in providing a core metadata model for geospatial catalogues and search.
The geospatial community has had long standing efforts around developing metadata standards for geospatial data, including (but not limited to) [FGDC CSDGM](https://www.fgdc.gov/metadata/csdgm-standard), [DIF](https://earthdata.nasa.gov/esdis/eso/standards-and-references/directory-interchange-format-dif-standard), and [ISO 19115](https://www.iso.org/standard/26020.html).
Recently, [JSON](https://json.org) and [GeoJSON](https://geojson.org) have proliferated the geospatial ecosystem for lightweight data exchange over the web. Metadata is no exception here; the [OGC API](https://ogcapi.org) and [STAC](https://stacspec.org) efforts have focused on JSON as a core representation of geospatial metadata.
Whichever standard you require or choose, using these standards to generate geospatial metadata provides value for easy integration into geospatial search catalogues and desktop GIS tools to help organize, categorize and find geospatial data. The challenge of geospatial metadata remains in its complexity. Tools are needed to easily create and manage geospatial metadata.
## Easy metadata workflows with pygeometa
[pygeometa](https://geopython.github.io/pygeometa) provides a lightweight toolkit allowing users to easily create geospatial metadata in standards-based formats using simple configuration files (affectionately called metadata control files [MCF]). Leveraging the simple but powerful YAML format, pygeometa can generate metadata in numerous standards. Users can also create their own custom metadata formats which can be plugged into pygeometa for custom metadata format output.
For developers, pygeometa provides an intuitive Python API that allows Python developers to tightly couple metadata generation within their systems.
## Creating metadata
Let's walk through examples of using pygeometa on the command line as well as through the API.
Let's start with the CLI below.
```
!pygeometa
!cat ../data/countries.yml
!pygeometa metadata generate ../data/countries.yml --schema iso19139 --output /tmp/countries.xml
!cat /tmp/countries.xml
```
Now let's try to output the metadata as an OGC API - Records metadata record. Note the record JSON representation, which is key to the emerging OGC API standards, and baselined by GeoJSON, enabling broad interoperability.
```
!pygeometa metadata generate ../data/countries.yml --schema oarec-record
```
Now let's use the API to make some updates
```
from pygeometa.core import read_mcf
mdata = read_mcf('../data/countries.yml')
mdata
mdata['identification']['title']
```
Let's change the dataset title
```
mdata['identification']['title'] = 'Countries of the world'
```
Now let's select ISO 19139 as the output schema
```
from pygeometa.schemas.iso19139 import ISO19139OutputSchema
iso_os = ISO19139OutputSchema()
xml_string = iso_os.write(mdata)
```
Now let's inspect the `/gmd:MD_Metadata/gmd:identificationInfo/gmd:MD_DataIdentification/gmd:citation/gmd:CI_Citation/gmd:title` to see the updated title
```
print(xml_string)
```
Now try updating the `mdata` variable (`dict`) with updated values and use the pygeometa API to generate a new ISO XML.
---
[<- Visualization](07-visualization.ipynb) | [Publishing ->](09-publishing.ipynb)
# A Line-up of Tips for Better SQL Writing
**SQL** stands for **structured query language**.
The three most common SQL RDBMS are:
* SQLite
* MySQL (from Oracle)
* PostgreSQL
**SELECT** indicates which column(s) you want from the table.
**FROM** specifies from which table(s) you want to select the columns. Notice the columns need to exist in this table.
If you want to be provided with the data from all columns in the table, you use "*", like so:
```
SELECT * FROM orders
```
Note that using SELECT does not create a new table with these columns in the database, it just provides the data to you as the results, or output, of this command.
*******************************************************************************************************************************
<h2>Formatting Best Practices</h2>
1. **Using Upper and Lower Case in SQL:**<br>
SQL queries can be run successfully whether characters are written in upper- or lower-case. In other words, SQL queries are not case-sensitive.
2. **Capitalizing SQL Clauses:**<br>It is common and best practice to capitalize all SQL commands, like `SELECT` and `FROM`, and keep everything else in your query lower case. Capitalizing command words makes queries easier to read, which will matter more as you write more complex queries.
3. **One other note:**<BR> The text data stored in SQL tables can be either upper or lower case, and SQL is case-sensitive in regard to this text data.
4. **Avoid Spaces in Table and Variable Names:**<br>
It is common to use underscores and avoid spaces in column names. It is a bit annoying to work with spaces in SQL. In Postgres if you have spaces in column or table names, you need to refer to these columns/tables with double quotes around them (Ex: `FROM "Table Name"` as opposed to `FROM table_name`). In other environments, you might see this as square brackets instead (Ex: `FROM [Table Name]`).
5. **Use White Space in Queries:**<br>
SQL queries ignore spaces, so you can add as many spaces and blank lines between code as you want, and the queries are the same. But use this freedom with decorum: format for readability.
6. **Semicolons:**<br>
Depending on your SQL environment, your query may need a semicolon at the end to execute. Other environments are more flexible in terms of this being a "requirement." It is considered best practice to put a semicolon at the end of each statement, which also allows you to run multiple queries at once if your environment allows this.
*******************************************************************************************************************************
## The LIMIT clause
* The `LIMIT` command is always the very last part of a query.
*******************************************************************************************************************************
## The ORDER BY Clause
* The `ORDER BY` statement allows us to sort our results using the data in any column.
* Using `ORDER BY` in a SQL query only has temporary effects, for the results of that query, unlike sorting a sheet by column in Excel or Sheets.
* The `ORDER BY` statement always comes in a query after the `SELECT` and `FROM` statements, but before the `LIMIT` statement. If you are using the `LIMIT` statement, it will always appear last.
* Remember **DESC** can be added after the column in your `ORDER BY` statement to sort in descending order, as the default is to sort in ascending order.
**ORDER BY Part 2**
* We can `ORDER BY` more than one column at a time
* When you provide a list of columns in an `ORDER BY` command, the sorting occurs using the leftmost column in your list first, then the next column from the left, and so on.
* We still have the ability to flip the way we order using `DESC`.
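A minimal sketch using Python's built-in `sqlite3` (toy table and values, hypothetical column names): rows are sorted by the leftmost column first, with ties broken by the next column, here descending:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (account_id INTEGER, total INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 50), (2, 10), (1, 80), (2, 40)])
rows = con.execute(
    "SELECT account_id, total FROM orders ORDER BY account_id, total DESC"
).fetchall()
print(rows)  # account_id ascending, then total descending within each account
```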
*******************************************************************************************************************************
## The WHERE Clause:
* Using the `WHERE` statement, we can display subsets of tables based on conditions that must be met. You can also think of the `WHERE` command as filtering the data.
* The `WHERE` clause goes after `FROM`, but before `ORDER BY` or `LIMIT`
* Common symbols used in `WHERE` statements include:
  * `>` (greater than)
  * `<` (less than)
  * `>=` (greater than or equal to)
  * `<=` (less than or equal to)
  * `=` (equal to)
  * `!=` (not equal to)
**WHERE Clause contd...**
The `WHERE` statement can also be used with non-numeric data. We can use the = and != operators here. You need to be sure to use single quotes (just be careful if you have quotes in the original text) with the text data, not double quotes.
Commonly when we are using `WHERE` with non-numeric data fields, we use the `LIKE`, `NOT`, or `IN` operators.
***************************************************************************************************************************
## Derived Columns:
* Creating a new column that is a combination of existing columns is known as a derived column (or "calculated" or "computed" column). Usually you want to give a name, or "alias," to your new column using the `AS` keyword.
* This derived column, and its alias, are generally only temporary, existing just for the duration of your query. The next time you run a query and access this table, the new column will not be there.
* **Order of Operations**<br>
Remember **PEMDAS** from math class to help remember the order of operations? The same order of operations applies when using arithmetic operators in SQL.
The following two statements have very different end results:
* `standard_qty / standard_qty + gloss_qty + poster_qty`
* `standard_qty / (standard_qty + gloss_qty + poster_qty)`
***************************************************************************************************************************
<h2>Introduction to Logical Operators</h2>
In the next concepts, you will be learning about Logical Operators. Logical Operators include:
* **LIKE**<br>
This allows you to perform operations similar to using `WHERE` and `=`, but for cases when you might not know exactly what you are looking for.
* **IN**<br>
This allows you to perform operations similar to using `WHERE` and `=`, but for more than one condition.
* **NOT**<br>
This is used with `IN` and `LIKE` to select all of the rows `NOT LIKE` or `NOT IN` a certain condition.
* **AND & BETWEEN**<br>
These allow you to combine operations where all combined conditions must be true.
* **OR**<br>
This allow you to combine operations where at least one of the combined conditions must be true.
**The LIKE Operator:**
The `LIKE` operator is extremely useful for working with text. You will use `LIKE` within a `WHERE` clause. The `LIKE` operator is frequently used with `%`. The `%` tells us that we might want any number of characters leading up to a particular set of characters or following a certain set of characters.
Remember to use single quotes for the text you pass to the `LIKE` operator, because of this lower and uppercase letters are not the same within the string. Searching for 'T' is not the same as searching for 't'.
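A sketch of the `%` wildcard with `sqlite3` (toy names). One caveat if you experiment there: SQLite's `LIKE` is case-insensitive for ASCII by default, unlike PostgreSQL, where the case-sensitivity described above applies:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT)")
con.executemany("INSERT INTO accounts VALUES (?)",
                [("Walmart",), ("Target",), ("Apple",)])
# 'T%' matches names starting with T, followed by any number of characters
rows = con.execute("SELECT name FROM accounts WHERE name LIKE 'T%'").fetchall()
print(rows)
```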
**The IN Operator:**
The `IN` operator is useful for working with both numeric and text columns. This operator allows you to use an `=`, but for more than one item of that particular column. We can check one, two or many column values for which we want to pull data, but all within the same query.
**The NOT Operator:**
The `NOT` operator is an extremely useful operator for working with the previous two operators we introduced: `IN` and `LIKE`. By specifying `NOT` `LIKE` or `NOT` `IN`, we can grab all of the rows that do not meet a particular criteria.
**Expert Tip**
In most SQL environments, although not in the Udacity classroom, you can use single or double quotation marks, and you may NEED to use double quotation marks if you have an apostrophe within the text you are attempting to pull.
**********************************************************************************************************************
## The AND Operator:
* The `AND` operator is used within a `WHERE` statement to consider more than one logical clause at a time
* Each time you link a new statement with an `AND`, you will need to specify the column you are interested in looking at. You may link as many statements as you would like to consider at the same time.
* This operator works with all of the operations we have seen so far including arithmetic operators `(+, *, -, /)`. `LIKE, IN`, and `NOT` logic can also be linked together using the `AND` operator.
## The BETWEEN Operator:
* Sometimes we can make a cleaner statement using `BETWEEN` than we can using `AND`. Particularly this is true when we are using the same column for different parts of our `AND` statement.
* Note that the endpoints of a `BETWEEN` operator query are inclusive: both the start and end limits are included in the output.
For example, statement 1 below is much better written as statement 2 below.
1. `SELECT * FROM table WHERE column >= 6 AND column <= 10`
2. `SELECT * FROM table WHERE column BETWEEN 6 AND 10`
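A quick `sqlite3` check on toy values confirms that both endpoints are included:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(5,), (6,), (8,), (10,), (11,)])
rows = con.execute("SELECT col FROM t WHERE col BETWEEN 6 AND 10").fetchall()
print(rows)  # 6 and 10 themselves make the cut; 5 and 11 do not
```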
## The OR Operator
* Similar to the `AND` operator, the `OR` operator can combine multiple statements.
* Each time you link a new statement with an `OR`, you will need to specify the column you are interested in looking at, just like with `AND`.
* You may link as many statements as you would like to consider at the same time.
* This operator works with all of the operations we have seen so far including arithmetic operators `(+, *, -, /)`, `LIKE`, `IN`, `NOT`, `AND`, and `BETWEEN` logic can all be linked together using the `OR` operator.
* When combining multiple of these operations, we frequently might need to use parentheses to ensure that logic we want to perform is being executed correctly.
*********************************************************************************************************************
<h2><b>Joins</b></h2>
The whole purpose of `JOIN` statements is to allow us to pull data from more than one table at a time.
Again - `JOINs` are useful for allowing us to pull data from multiple tables. This is both simple and powerful all at the same time.
With the addition of the `JOIN` statement to our toolkit, we will also be adding the `ON` statement.
We use `ON` clause to specify a `JOIN` condition which is a logical statement to combine the table in `FROM` and `JOIN` statements.
The table name is always before the period.<br>
The column you want from that table is always after the period.
For example, if we want to pull only the account name and the dates in which that account placed an order, but none of the other columns, we can do this with the following query:
```
SELECT accounts.name, orders.occurred_at
FROM orders
JOIN accounts
ON orders.account_id = accounts.id;
```
Additionally, which side of the = a column is listed doesn't matter.
Personally, I think it makes sense to keep it uniform:
make the table on the left side of the = the first table selected, the one on the right side the second table, and so on.
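The accounts/orders join above can be run end to end with `sqlite3` (toy rows standing in for the Parch & Posey data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER, account_id INTEGER, occurred_at TEXT)")
con.execute("INSERT INTO accounts VALUES (1, 'Walmart'), (2, 'Target')")
con.execute("INSERT INTO orders VALUES (10, 1, '2016-10-06'), (11, 2, '2016-11-01')")
rows = con.execute("""
    SELECT accounts.name, orders.occurred_at
    FROM orders
    JOIN accounts
    ON orders.account_id = accounts.id
""").fetchall()
print(rows)  # each order paired with its account name
```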
<h2>Keys</h2>
**Primary Key (PK):**
A primary key is a unique column in a particular table. This is the first column in each of our tables. Here, those columns are all called id, but that doesn't necessarily have to be the name. It is common that the primary key is the first column in our tables in most databases.
**Foreign Key (FK):**
A foreign key is a column in one table that is a primary key in a different table. We can see in the Parch & Posey ERD that the foreign keys are:
* region_id
* account_id
* sales_rep_id
Each of these is linked to the primary key of another table. An example is shown in the image below:<br>**Note that a table can have multiple foreign-keys, but one primary-key**
<img src='https://video.udacity-data.com/topher/2017/August/598d2378_screen-shot-2017-08-10-at-8.23.48-pm/screen-shot-2017-08-10-at-8.23.48-pm.png'>
**Notice**
Notice our SQL query has the two tables we would like to join - one in the `FROM` and the other in the `JOIN`. Then in the `ON`, we will ALWAYs have the PK equal to the FK:
The way we join any two tables is in this way: linking the PK and FK (generally in an `ON` statement).
<h2>Alias:</h2>
When we `JOIN` tables together, it is nice to give each table an alias. Frequently an alias is just the first letter of the table name. You actually saw something similar for column names in the Arithmetic Operators concept.
Example:
```
FROM tablename AS t1
JOIN tablename2 AS t2
```
Frequently, you might also see these statements without the `AS` statement. Each of the above could be written in the following way instead, and they would still produce the exact same results:
```
FROM tablename t1
JOIN tablename2 t2
```
and
```
SELECT col1 + col2 total, col3
```
<h2>Aliases for Columns in Resulting Table</h2>
While aliasing tables is the most common use case, aliasing can also be applied to the selected columns so the resulting table reflects more readable names.
Example:
```
Select t1.column1 aliasname, t2.column2 aliasname2
FROM tablename AS t1
JOIN tablename2 AS t2
```
The alias name fields will be what shows up in the returned table instead of t1.column1 and t2.column2
```
aliasname aliasname2
example row example row
example row example row
```
<h2>Inner, Left, Right, Outer Joins</h2>
**JOINs**
The `INNER JOIN`, which we saw by just using `JOIN`, only returns rows that have matching information in both tables.
For `RIGHT` and `LEFT` joins, if there is no matching information in the JOINed table, you will have columns with empty cells. These empty cells introduce a new data type called NULL. You will learn about NULLs in detail in the next lesson, but for now you can consider any cell without data as NULL.
<h3><b>Facts</b></h3>
1. A `LEFT JOIN` and `RIGHT JOIN` do the same thing if we change the tables that are in the `FROM` and `JOIN` statements.
2. A `LEFT JOIN` will at least return all the rows that are in an `INNER JOIN`.
3. `JOIN` and `INNER JOIN` are the same.
4. A `LEFT OUTER JOIN` is the same as `LEFT JOIN`.
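Fact 2 in a `sqlite3` sketch (toy data): an account with no orders disappears from the `INNER JOIN` but survives the `LEFT JOIN`, with `NULL` (Python's `None`) filling the unmatched columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER, account_id INTEGER)")
con.execute("INSERT INTO accounts VALUES (1, 'Walmart'), (2, 'Target')")
con.execute("INSERT INTO orders VALUES (10, 1)")  # Target has no orders
inner = con.execute(
    "SELECT a.name, o.id FROM accounts a JOIN orders o ON o.account_id = a.id"
).fetchall()
left = con.execute(
    "SELECT a.name, o.id FROM accounts a LEFT JOIN orders o ON o.account_id = a.id"
).fetchall()
print(inner)  # Target is missing entirely
print(left)   # Target appears, with None where the order id would be
```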
<b><h3>Tip:</h3></b>
If you have two or more columns in your SELECT that have the same name after the table name, such as accounts.name and sales_reps.name, you will need to alias them. Otherwise it will only show one of the columns. You can alias them like accounts.name AS AccountName, sales_reps.name AS SalesRepName.
## GROUP BY:
* `GROUP BY` can be used to aggregate data within subsets of the data. For example, grouping for different accounts, different regions, or different sales representatives.
* Any column in the `SELECT` statement that is not within an aggregator must be in the `GROUP BY` clause.
* The `GROUP BY` always goes between `WHERE` and `ORDER BY`.
* `ORDER BY` works like SORT in spreadsheet software.
### GROUP BY - Expert Tip:
SQL evaluates the aggregations before the `LIMIT` clause. If you don't `GROUP BY` any columns, you'll get a 1-row result, no problem there. If you `GROUP BY` a column with enough unique values that it exceeds the `LIMIT` number, the aggregates will be calculated, and then some rows will simply be omitted from the results.
This is actually a nice way to do things because you know you're going to get the correct aggregates. If SQL cut the table down to 100 rows first and then performed the aggregations, your results would be substantially different. So the default order of `GROUP BY` before `LIMIT` (which usually comes last) is fine.
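A minimal `GROUP BY` sketch with `sqlite3` (toy data): note that `account_id`, the one non-aggregated column in the `SELECT`, also appears in the `GROUP BY`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (account_id INTEGER, total INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 50), (1, 80), (2, 10)])
rows = con.execute("""
    SELECT account_id, SUM(total)
    FROM orders
    GROUP BY account_id
    ORDER BY account_id
""").fetchall()
print(rows)  # one row per account with its summed total
```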
### **GROUP BY PART 2**
* We can `GROUP BY` multiple columns at once. This is often useful to aggregate across a number of different segments.
* The order of columns listed in the `ORDER BY` clause does make a difference: you are ordering the columns from left to right. But it makes no difference in the `GROUP BY` clause.
**GROUP BY - Expert Tips**
* The order of column names in your `GROUP BY` clause doesn’t matter—the results will be the same regardless. If we run the same query and reverse the order in the `GROUP BY` clause, you can see we get the same results.
* As with `ORDER BY`, we can substitute numbers for column names in the `GROUP BY` clause. It’s generally recommended to do this only when you’re grouping many columns, or if something else is causing the text in the `GROUP BY` clause to be excessively long.
* A reminder here that any column that is not within an aggregation must show up in your `GROUP BY` statement. If you forget, you will likely get an error. However, in the off chance that your query does work, you might not like the results!
### **Distinct**
* `DISTINCT` is always used in `SELECT` statements, and it provides the unique rows for all columns written in the `SELECT` statement. Therefore, you only use `DISTINCT` once in any particular `SELECT` statement.
* You could write:
```
SELECT DISTINCT column1, column2, column3
FROM table1;
```
which would return the unique (or DISTINCT) rows across all three columns.
* You could not write:
```
SELECT DISTINCT column1, DISTINCT column2, DISTINCT column3
FROM table1;
```
* You can think of DISTINCT the same way you might think of the statement "unique".
**DISTINCT - Expert Tip**
It’s worth noting that using `DISTINCT`, particularly in aggregations, can slow your queries down quite a bit.
## **Having**
**HAVING - Expert Tip**
HAVING is the “clean” way to filter a query that has been aggregated, but this is also commonly done using a subquery. Essentially, any time you want to perform a `WHERE` on an element of your query that was created by an aggregate, you need to use `HAVING` instead.
## **WHERE vs. HAVING**
1. `WHERE` subsets the returned data based on a logical condition
2. `WHERE` appears after the `FROM`, `JOIN` and `ON` clauses but before the `GROUP BY`
3. `HAVING` appears after the `GROUP BY` clause but before the `ORDER BY`.
4. `HAVING` is like `WHERE` but it works on logical statements involving aggregations.
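The contrast between the two can be sketched with Python's built-in `sqlite3` module (the table and data are hypothetical, not part of these notes):

```python
import sqlite3

# hypothetical toy data to contrast WHERE and HAVING
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (account_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 100.0), (1, 300.0), (2, 50.0), (3, 400.0)])

# WHERE filters individual rows *before* grouping,
# HAVING filters whole groups *after* aggregation.
rows = conn.execute("""
    SELECT account_id, SUM(total) AS account_total
    FROM orders
    WHERE total > 60              -- drops the 50.0 row before grouping
    GROUP BY account_id
    HAVING SUM(total) > 350       -- keeps only groups with large totals
    ORDER BY account_id
""").fetchall()
print(rows)  # [(1, 400.0), (3, 400.0)]
```

Account 2 never reaches the aggregation (filtered by `WHERE`), while the `HAVING` condition is applied to the per-account sums.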
## Case Statements
Case statements are SQL's way of handling If-Then logic.
We can create derived columns using `CASE` statements to answer interesting questions about the data.
* The `CASE` statement is followed by at least one pair of `WHEN` and `THEN` statements, SQL's equivalent of if-then logic.
* The `CASE` statement must finish with the word `END`.
* We can define a `CASE` statement with as many `WHEN`/`THEN` pairs as we like.
* Each `WHEN` condition is evaluated in the order it is written, one after another.
* It's really best to create `WHEN` conditions that don't overlap.
* We can add `AND` and `OR` to create finer conditions in the `WHEN` statements.
* The `CASE` clause allows us to count several different conditions at once, unlike the `WHERE` clause, which lets us filter on only one condition at a time. For example...
```
SELECT CASE WHEN total > 500 THEN 'Over-500'
            ELSE '500-or-under' END AS total_group,
       COUNT(*) AS order_count
FROM orders
GROUP BY 1
```
* Finally we can combine `CASE` statements with aggregations to produce enhanced results.
### CASE - Expert Tip
* The `CASE` statement always goes in the `SELECT` clause.
* `CASE` must include the following components: `WHEN`, `THEN`, and `END`. `ELSE` is an optional component to catch cases that didn’t meet any of the other previous CASE conditions.
* You can make any conditional statement using any conditional operator (like in a `WHERE` clause) between `WHEN` and `THEN`. This includes stringing together multiple conditional statements using `AND` and `OR`.
* You can include multiple `WHEN` statements, as well as an `ELSE` statement again, to deal with any unaddressed conditions.
### k Nearest Neighbors (kNN)
As the name suggests, the algorithm classifies a data point by a majority vote among the classes of its k nearest neighbors. In Figure 14, the 5 (= k) nearest neighbors of the unknown data point are identified based on the chosen distance measure, and the unknown point is assigned the majority class among those neighbors. The key drawback of kNN is the cost of searching for the nearest neighbors of each sample.
Things to remember:
* Choose an odd k value for a 2 class problem
* k must not be a multiple of the number of classes
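A quick sketch of why an odd k helps in a two-class problem (the neighbor labels below are made up for illustration): with k = 4 the vote can tie, while k = 5 always yields a majority.

```python
from collections import Counter

# hypothetical class labels of the nearest neighbors, ordered by distance
neighbor_labels = [0, 1, 0, 1, 1]

def majority_vote(labels, k):
    """Count the classes among the k nearest labels, most common first."""
    return Counter(labels[:k]).most_common()

print(majority_vote(neighbor_labels, 4))  # both classes get 2 votes: a tie
print(majority_vote(neighbor_labels, 5))  # class 1 wins with 3 votes
```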
```
from IPython.display import Image
Image(filename='../Chapter 3 Figures/kNN.png', width=800)
```
### Load Data
Loading the Iris dataset from scikit-learn. Here, the third column represents the petal length, and the fourth column the petal width of the flower samples. The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
```
import warnings
warnings.filterwarnings('ignore')
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import datasets
import numpy as np
import pandas as pd
from sklearn import tree
from sklearn import metrics
iris = datasets.load_iris()
X = iris.data
y = iris.target
print('Class labels:', np.unique(y))
```
Scale the data: the units of measurement might differ, so let's standardize the features (zero mean, unit variance, via `StandardScaler`) before building the model.
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X)
X = sc.transform(X)
```
Split the data into train and test sets. Whenever we use a random function, it is advisable to set a seed to ensure reproducibility of the results.
```
# split data into train and test
from sklearn.model_selection import train_test_split  # 'cross_validation' was removed in modern scikit-learn
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
def plot_decision_regions(X, y, classifier):
h = .02 # step size in the mesh
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, h),
np.arange(x2_min, x2_max, h))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
clf.fit(X_train, y_train)
# generate evaluation metrics
print("Train - Accuracy :", metrics.accuracy_score(y_train, clf.predict(X_train)))
print("Train - Confusion matrix :\n", metrics.confusion_matrix(y_train, clf.predict(X_train)))
print("Train - Classification report :\n", metrics.classification_report(y_train, clf.predict(X_train)))
print("Test - Accuracy :", metrics.accuracy_score(y_test, clf.predict(X_test)))
print("Test - Confusion matrix :\n", metrics.confusion_matrix(y_test, clf.predict(X_test)))
print("Test - Classification report :\n", metrics.classification_report(y_test, clf.predict(X_test)))
```
### Plot Decision Boundary
Let's consider a two class example to keep things simple
```
# Let's use sklearn make_classification function to create some test data.
from sklearn.datasets import make_classification
X, y = make_classification(100, 2, 2, 0, weights=[.5, .5], random_state=0)
# build a simple kNN classification model
clf = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
clf.fit(X, y)
# Plot the decision boundary
plot_decision_regions(X, y, classifier=clf)
plt.xlabel('X1')
plt.ylabel('X2')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
```
```
import geopandas as gpd
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM, Flatten
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.callbacks import EarlyStopping
from keras.layers import ConvLSTM2D
adm_name = gpd.read_file('data/TSDM/Mandal_Boundary.shp')
df = pd.read_csv('data/cropfire_taluk.csv', index_col=False)
import time #to calculate time taken to run the model
start_time = time.time() #start time of the program
place = []
deviance = []
for i in df.place.unique():
dataframe = pd.DataFrame()
dataframe = df.loc[df['place'] == i]
del dataframe['place']
dataframe = dataframe.reset_index()
del dataframe['index']
#Convert pandas dataframe to numpy array
dataset = dataframe.fireCount
    dataset = dataset.astype('float32') # convert values to float
#LSTM uses sigmoid and tanh that are sensitive to magnitude so values need to be normalized
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1)) #Also try QuantileTransformer
dataset = np.array(dataset).reshape(-1,1)
dataset = scaler.fit_transform(dataset)
train_size = int(len(dataset)-365)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
print(test_size)
def to_sequences(dataset, seq_size=1):
x = []
y = []
for i in range(len(dataset)-seq_size-1):
#print(i)
window = dataset[i:(i+seq_size), 0]
x.append(window)
y.append(dataset[i+seq_size, 0])
return np.array(x),np.array(y)
seq_size = 10 # Number of time steps to look back
#Larger sequences (look further back) may improve forecasting.
trainX, trainY = to_sequences(train, seq_size)
testX, testY = to_sequences(test, seq_size)
print("Shape of training set: {}".format(trainX.shape))
print("Shape of test set: {}".format(testX.shape))
trainX = trainX.reshape((trainX.shape[0], 1, 1, 1, seq_size))
testX = testX.reshape((testX.shape[0], 1, 1, 1, seq_size))
model = Sequential()
model.add(ConvLSTM2D(filters=64, kernel_size=(1,1), activation='relu', input_shape=(1, 1, 1, seq_size)))
model.add(Flatten())
model.add(Dense(32))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=20,verbose=1, mode='auto', restore_best_weights=True)
model.summary()
print('Train...')
    model.fit(trainX, trainY, validation_data=(testX, testY),
              callbacks=[monitor],  # use the EarlyStopping monitor defined above
              verbose=2, epochs=100)
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
y_true = scaler.inverse_transform(dataset)[-360:]
y_pred = testPredict
y_pred = np.round(y_pred,0)
if np.max(y_pred) > np.max(y_true):
deviance.append('Positive')
else:
deviance.append('Negative')
place.append(i)
plt.plot(y_true, label='Actual')
plt.plot(y_pred, label='Predicted')
plt.legend()
plt.show()
result = pd.DataFrame()
result['Place'], result['Deviance'] = place, deviance
print("--- %s seconds ---" % (time.time() - start_time)) #print total time taken to run code
result1 = result.loc[result['Deviance'] == 'Positive']
result = result.sort_values(by='Place')
result1.rename(columns={'Place': 'N_Revenue'}, inplace=True)  # note: with inplace=True the method returns None, so don't assign the result
dev = []
ar = []
for i in adm_name.N_Revenue:
if i in list(result1.N_Revenue):
dev.append(1)
ar.append(i)
else:
dev.append(0)
ar.append(i)
adm_name = adm_name.sort_values(by='N_Revenue')
adm_name['deviance']=dev
ax = adm_name.plot(column='deviance', legend=True, legend_kwds={'label': "Taluks having Positive Deviance",
'orientation': "vertical"}, figsize=(12, 12));
ax.set_title('Telangana Data Powered Positive Deviance for Crop Fire')
fig = ax.get_figure()
fig.savefig("output/LSTM_taluk.png")
```
# Lecture 02: Primitives
[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2022)
[<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2022/master?urlpath=lab/tree/02/Primitives.ipynb)
1. [Your first notebook session](#Your-first-notebook-session)
2. [Fundamentals](#Fundamentals)
3. [Containers](#Containers)
4. [Conditionals and loops](#Conditionals-and-loops)
5. [Functions](#Functions)
6. [Floating point numbers](#Floating-point-numbers)
7. [Classes (user-defined types)](#Classes-(user-defined-types))
8. [Summary](#Summary)
9. [Extra: Iterators](#Extra:-Iterators)
10. [Extra: More on functions](#Extra:-More-on-functions)
You will be given an in-depth introduction to the **fundamentals of Python** (objects, variables, operators, classes, methods, functions, conditionals, loops). You learn to discriminate between different **types** such as integers, floats, strings, lists, tuples and dictionaries, and determine whether they are **subscriptable** (slicable) and/or **mutable**. You will learn about **referencing** and **scope**. You will learn a tiny bit about **floating point arithmetics**.
**Take-away:** This lecture is rather abstract compared to the rest of the course. The central take-away is **a language** to speak about programming in. An overview of the map, later we will study the terrain in detail. It is not about **memorizing**. Almost no code projects begin from scratch, you start by copying in similar code you have written for another project.
Hopefully, this notebook can later be used as a **reference sheet**. When you are done with the DataCamp courses, read through this notebook, play around with the code, and ask questions if there is stuff you do not understand.
**Links:**
* **Tutorial:** A more detailed tutorial is provided [here](https://www.python-course.eu/python3_course.php).
* **Markdown:** All text cells are written in *Markdown*. A guide is provided [here](https://www.markdownguide.org/basic-syntax/).
<a id="Your-first-notebook-session"></a>
# 1. Your first notebook session
**Optimally:** You have this notebook open as well on your own computer.
**Download guide:**
1. Follow the [installation guide](https://numeconcopenhagen.netlify.com/guides/python-setup/) in detail
2. Open VScode
3. Pres <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>
4. Write `git: clone` + <kbd>Enter</kbd>
5. Write `https://github.com/NumEconCopenhagen/lectures-2021` + <kbd>Enter</kbd>
6. You can always update to the newest version of the code with `git: sync` + <kbd>Enter</kbd>
7. Create a copy of the cloned folder, where you work with the code (otherwise you can not sync with updates)
**PROBLEMS?** Ask your teaching assistant ASAP.
**Execution:**
* **Movements**: Arrows and scrolling
* **Run cell and advance:** <kbd>Shift</kbd>+<kbd>Enter</kbd>
* **Run cell**: <kbd>Ctrl</kbd>+<kbd>Enter</kbd>
* **Edit:** <kbd>Enter</kbd>
* **Toggle sidebar:** <kbd>Ctrl</kbd>+<kbd>B</kbd>
* **Change to markdown cell:** <kbd>M</kbd>
* **Change to code cell:** <kbd>Y</kbd>
<a id="Fundamentals"></a>
# 2. Fundamentals
Every **variable** in Python is a **reference** to an **object** of some **type**.
## 2.1 Atomic types
The most simple types are called **atomic**. Atomic indicates that they cannot be changed - only overwritten.
**Integers (int):** -3, -2, -1, 0, 1, 2, 3, etc.
```
x = 1
# variable x references an integer type object with a value of 1
print(type(x)) # prints the type of x
print(x) # prints the value of x
```
**Decimal numbers (float)**: 3.14, 2.72, 1.0, etc.
```
x = 1.2
# variable x references a floating point (decimal number) type object
# with a value of 1.2
print(type(x))
print(x)
```
**Strings (str)**: 'abc', '123', 'this is a full sentence', etc.
```
x = 'abc'
# variable x references a string type object
# with a value of 'abc'
print(type(x))
print(x)
```
**Note:** Alternatively, use double quotes instead of single quotes.
```
x = "abc"
# variable x references a string type object
# with a value of 'abc'
print(type(x))
print(x)
```
**Booleans (bool)**: True and False
```
x = True
# variable x references a boolean type object
# with a value of True
print(type(x))
print(x)
```
**Atomic types:**
1. Integers, *int*
2. Floating point numbers, *float*
3. Strings, *str*
4. Booleans, *bool*
## 2.2 Type conversion
Objects of one type can (sometimes) be **converted** into another type.<br>For example, from float to string:
```
x = 1.2
# variable x references a floating point (decimal number) type object
# with a value of 1.2
y = str(x)
# variable y now references a string type object
# with a value created based on x
print(y,type(y))
```
or from float to integer:
```
x = 2.9
y = int(x)
# variable y now references an integer type object
# with a value created based on x (here rounded down)
print(y,type(y))
```
**Limitation:** You cannot, however, convert e.g. the string `'222a'` to an integer.
```
try: # try to run this block
x = int('222a')
print('can be done')
print(x)
except: # if any error found run this block instead
print('canNOT be done')
```
**Note**: The indentation is required (typically 4 spaces).
**Question**: Can you convert a boolean variable `x = False` to an integer?
- **A:** No
- **B:** Yes, and the result is 0
- **C:** Yes, and the result is 1
- **D:** Yes, and the result is -1
- **E:** Don't know
## 2.3 Operators
Variables can be combined using **operators** (e.g. +, -, /, **).<br>For numbers we have:
```
x = 3
y = 2
print(x+y)
print(x-y)
print(x/y)
print(x*y)
```
For strings we can use an overloaded '+' for concatenation:
```
x = 'abc'
y = 'def'
print(x+y)
```
A string can also be multiplied by an integer:
```
x = 'abc'
y = 2
print(x*y)
```
**Question**: What is the result of `x = 3**2`?
- **A:** `x = 3`
- **B:** `x = 6`
- **C:** `x = 9`
- **D:** `x = 12`
- **E:** Don't know
**Socrative room:** *NUMECON*
**Note:** Standard division converts integers to floating point numbers.
```
x = 8
y = x/2 # standard division
z = x//3 # integer division
print(y,type(y))
print(z,type(z))
```
## 2.4 Augmentation
Variables can be changed using **augmentation operators** (e.g. +=, -=, *=, /=)
```
x = 3
print(x)
x += 1 # same result as x = x+1
print(x)
x *= 2 # same result as x = x*2
print(x)
x /= 2 # same result as x = x/2
print(x)
```
## 2.5 Comparision
Variables can be compared using **boolean operators** (e.g. ==, !=, <, <=, >, >=).
```
x = 3
y = 2
z = 10
print(x < y) # less than
print(x <= y) # less than or equal
print(x != y) # not equal
print(x == y) # equal
```
The comparison returns a boolean variable:
```
z = x < y # z is now a boolean variable
print(z)
type(z)
```
## 2.6 Summary
The new central concepts are:
1. Variable
2. Reference
3. Object
4. Type (int, float, str, bool)
5. Value
6. Operator (+, -, *, **, /, //, % etc.)
7. Augmentation (+=, -=, *=, /= etc.)
8. Comparison (==, !=, <, <= etc.)
<a id="Containers"></a>
# 3. Containers
A more complicated type of object is a **container**. This is an object which consists of several objects of e.g. an atomic type. They are also called **collection types**.
## 3.1 Lists
A first example is a **list**. A list contains **variables** each **referencing** some **object**.
```
x = [1,'abc']
# variable x references a list type object with elements
# referencing 1 and 'abc'
print(x,type(x))
```
The **length** of a list can be found with the **len** function.
```
print(f'the number of elements in x is {len(x)}')
```
A list is **subscriptable** and starts, like everything in Python, from **index 0**. Beware!
```
print(x[0]) # 1st element
print(x[1]) # 2nd element
```
A list is **mutable**, i.e. you can change its elements on the fly. That is, you can change the **references** it holds to objects.
```
x[0] = 'def'
x[1] = 2
print(x)
```
and add more elements
```
x.append('new_element') # add new element to end of list
print(x)
```
**Link:** [Why is 0 the first index?](http://python-history.blogspot.com/2013/10/why-python-uses-0-based-indexing.html)
### Slicing
A list is **slicable**, i.e. you can extract a list from a list.
```
x = [0,1,2,3,4,5]
print(x[0:3]) # x[0] included, x[3] not included
print(x[1:3])
print(x[:3])
print(x[1:])
print(x[:99]) # This is very particular to Python. Normally you'd get an error.
print(x[:-1]) # x[-1] is the last element
print(type(x[:-1])) # Slicing yields a list
print(type(x[-1])) # Unless only 1 element
```
**Explanation:** Slices are half-open intervals. I.e. ``x[i:i+n]`` means starting from element ``x[i]`` and creating a list of (up to) ``n`` elements.
```
# splitting a list at x[3] and x[5] is:
print(x[0:3])
print(x[3:5])
print(x[5:])
```
**Question**: Consider the following code:
```
x = [0,1,2,3,4,5]
```
What is the result of `print(x[-4:-2])`?
- **A:** [1,2,3]
- **B:** [2,3,4]
- **C:** [2,3]
- **D:** [3,4]
- **E:** Don't know
### Referencing
**Important**: Multiple variables can refer to the **same** list.
```
x = [1,2,3]
y = x # y now references the same list as x
y[0] = 2 # change the first element in the list y
print(x) # x is also changed because it references the same list as y
```
If you want to know if two variables contain the same reference, use the **is** operator.
```
print(y is x)
z = [1,2]
w = [1,2]
print(z is w) # z and w have the same numerical content, but do not reference the same object.
```
**Conclusion:** The `=` sign copies the reference, not the content! What about the atomic types?
```
z = 10
w = z
print(z is w) # w is now the same reference as z
z += 5
print(z, w)
print(z is w) # z was overwritten in the augmentation statement.
```
If one variable is deleted, the other one still references the list.
```
del x # delete the variable x
print(y)
```
Instead, lists can by **copied** by using the copy-module:
```
from copy import copy
x = [1,2,3]
y = copy(x) # y now a copy of x
y[0] = 2
print(y)
print(x) # x is not changed when y is changed
print(x is y) # as they are not the same reference
```
or by slicing:
```
x = [1,2,3]
y = x[:] # y now a copy of x
y[0] = 2
print(y)
print(x) # x is not changed when y is changed
```
**Advanced**: A **deepcopy** is necessary, when the list contains mutable objects:
```
from copy import deepcopy
a = [1,2,3]
x = [a,2,3] # x is a list of a list and two integers
y1 = copy(x) # y1 is now a shallow copy of x
y2 = deepcopy(x) # y2 is a deep copy
a[0] = 10 # change1
x[-1] = 1 # change2
print(x) # Both changes happened
print(y1) # y1[0] reference the same list as x[0]. Only change1 happened
print(y2) # y2[0] is a copy of the original list referenced by x[0]
```
**Question**: Consider the following code:
```
x = [1,2,3]
y = [x,x]
z = x
z[0] = 3
z[2] = 1
```
What is the result of `print(y[0])`?
- **A:** 1
- **B:** 3
- **C:** [3,2,1]
- **D:** [1,2,3]
- **E:** Don't know
## 3.2 Tuples
A **tuple** is an **immutable list**.<br>It is similar when extracting information:
```
x = (1,2,3) # note: parentheses instead of square brackets
print(x,type(x))
print(x[2])
print(x[:2])
```
But it **cannot be changed** (it is immutable):
```
try: # try to run this block
x[0] = 2
print('did succeed in setting x[0]=2')
except: # if any error found run this block instead
print('did NOT succeed in setting x[0]=2')
print(x)
```
## 3.3 Dictionaries
A **dictionary** is a **key-based** (instead of index-based) container.
* **Keys:** All immutable objects are valid keys.
* **Values:** Fully unrestricted.
```
x = {} # create x as an empty dictionary
x['abc'] = '1' # key='abc', value = '1'
print(x['abc'])
x[('abc',1)] = 2 # key=('abc',1), value = 2
```
Elements of a dictionary are **extracted** using their keyword:
```
key = 'abc'
value = x[key]
print(value)
key = ('abc',1)
value = x[key]
print(value)
```
Dictionaries can also be **created with content**:
```
y = {'abc': '1', 'a': 1, 'b': 2, 'c': 3}
print(y['c'])
```
**Content is deleted** using its key:
```
print(y)
del y['abc']
print(y)
```
**Task:** Create a dictionary called `capitals` with the capital names of Denmark, Sweden and Norway as values and country names as keys.
**Answer:**
```
capitals = {}
capitals['denmark'] = 'copenhagen'
capitals['sweden'] = 'stockholm'
capitals['norway'] = 'oslo'
capital_of_sweden = capitals['sweden']
print(capital_of_sweden)
```
## 3.4 Summary
The new central concepts are:
1. Containers (lists, tuples, dictionaries)
2. Mutable/immutable
3. Slicing of lists and tuples
4. Referencing (copy and deepcopy)
5. Key-value pairs for dictionaries
**Note:** All atomic types are immutable, and among them only strings are subscriptable.
```
x = 'abcdef'
print(x[:3])
print(x[3:5])
print(x[5:])
try:
x[0] = 'f'
except:
print('strings are immutable')
```
**Advanced:** Other interesting containers are e.g. **namedtuple** and **OrderedDict** (see [collections](https://docs.python.org/2/library/collections.html)), and [**sets**](https://docs.python.org/2/library/sets.html).
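A small taste of two of these (a sketch; the `Point` type and the values are made up for illustration):

```python
from collections import namedtuple

# namedtuple: an immutable tuple with named fields
Point = namedtuple('Point', ['x', 'y'])
p = Point(x=1, y=2)
print(p.x, p.y)  # 1 2

# set: an unordered container of unique elements
s = {1, 2, 2, 3}  # the duplicate 2 is dropped
print(s)
print(2 in s)  # fast membership test, like dictionary keys
```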
<a id="Conditionals-and-loops"></a>
# 4. Conditionals and loops
## 4.1 Conditionals
You typically want your program to do one thing if some condition is met, and another thing if another condition is met.
In Python this is done with **conditional statements**:
```
x = 3
if x < 2:
# happens if x is smaller than 2
print('first possibility')
elif x > 4: # elif = else if
# happens if x is not smaller than 2 and x is larger than 4
print('second possibility')
elif x < 0:
# happens if x is not smaller than 2, x is not larger than 4
# and x is smaller than 0
print('third possibility') # note: this can never happen
else:
# happens if x is not smaller than 2, x is not larger than 4
# and x is not smaller than 0
print('fourth possibility')
```
**Note:**
1. "elif" is short for "else if"
2. the **indentation** after if, elif and else is required (typically 4 spaces)
An **equivalent formulation** of the above if-elif-else statement is:
```
x = -1
cond_1 = x < 2 # a boolean (True or False)
cond_2 = x > 4 # a boolean (True or False)
cond_3 = x < 0 # a boolean (True or False)
if cond_1:
print('first possibility')
elif cond_2:
print('second possibility')
elif cond_3:
print('third possibility')
else:
print('fourth possibility')
y = [1, 2]
if y:
print('y is not empty')
```
The above can also be written purely in terms of if-statements:
```
if cond_1:
print('first possibility')
if not cond_1 and cond_2:
print('second possibility')
if not (cond_1 or cond_2) and cond_3:
print('third possibility')
if not (cond_1 or cond_2 or cond_3):
print('fourth possibility')
```
## 4.2 Simple loops
You typically also want to **repeat a task multiple times**. But it is time-consuming and **error prone** to write:
```
x_list = [0,1,2,3,4]
y_list = [] # empty list
y_list.append(x_list[0]**2)
y_list.append(x_list[1]**2)
y_list.append(x_list[2]**2)
y_list.append(x_list[3]**2)
y_list.append(x_list[4]**2)
print(y_list)
```
You should at **all costs** avoid repeating code. Therefore use a **for loop** instead:
```
y_list = [] # empty list
for x in x_list:
y_list.append(x**2)
print(y_list)
```
Use a **while loop**:
```
y_list = [] # empty list
i = 0
while i <= 4:
y_list.append(x_list[i]**2)
i += 1
print(y_list)
```
Use a **for loop** with **range** instead:
```
y_list = [] # empty list
for x in range(5):
print(x)
y_list.append(x**2)
print(y_list)
```
Use a **list comprehension**:
```
y_list = [x**2 for x in x_list]
print(y_list)
```
**Note:** List comprehension is the shortest (and fastest) code, but can become messy in more complicated situations.
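List comprehensions can also include conditions, which is where they start to become harder to read (a sketch):

```python
x_list = [0, 1, 2, 3, 4]

# a trailing if acts as a filter: square only the even numbers
y_list = [x**2 for x in x_list if x % 2 == 0]
print(y_list)  # [0, 4, 16]

# an if-else *before* the for chooses between two transformations
z_list = [x**2 if x % 2 == 0 else -x for x in x_list]
print(z_list)  # [0, -1, 4, -3, 16]
```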
## 4.3 More complex loops
For loops can also be **enumerated**.
```
y_list = []
for i,x in enumerate(x_list):
print(i)
y_list.append(x**2)
print(y_list)
```
Loops can be fine-tuned with **continue** and **break**.
```
y_list = []
x_list = [*range(10)]
for i,x in enumerate(x_list):
if i == 1:
continue # go to next iteration
elif i == 4:
break # stop loop prematurely
y_list.append(x**2)
print(y_list)
```
**Task:** Create a list with the first 10 positive odd numbers.
```
# write your code here
```
**Answer:**
```
my_list = []
for i in range(10):
my_list.append((i+1)*2-1)
print(my_list)
```
**Zip:** We can loop over **2 lists at the same time**:
```
x = ['I', 'II', 'III']
y = ['a', 'b', 'c']
for i,j in zip(x,y):
print(i+j)
```
**Itertools:** The `itertools` module enables us to do complicated loops in a smart way. We can e.g. loop through **all combinations of elements in two lists**:
```
for i in x:
for j in y:
print(i+j)
import itertools as it
for i,j in it.product(x,y):
print(i,j)
```
## 4.4 Dictionaries
We can loop through the keys, values or key-value pairs of a dictionary.
```
my_dict = {'a': '-', 'b': '--', 'c': '---'}
for key in my_dict.keys():
print(key)
for val in my_dict.values():
print(val)
for key,val in my_dict.items():
print(key,val)
```
We can also **check whether a key exists**:
```
if 'a' in my_dict:
print('a is in my_dict with the value ' + my_dict['a'])
else:
print('a is not in my_dict')
if 'd' in my_dict:
print('d is in my_dict with the value ' + my_dict['d'])
else:
print('d is not in my_dict')
```
**Note:** dictionaries can do this operation very quickly without looping through all elements. So use a dictionary when lookups are relevant.
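A common shortcut here is the dictionary's `get` method, which combines the membership check and the lookup (a sketch reusing the dictionary from above):

```python
my_dict = {'a': '-', 'b': '--', 'c': '---'}

# get returns the value if the key exists, else a default (None if not given)
print(my_dict.get('a', 'not found'))  # -
print(my_dict.get('d', 'not found'))  # not found
```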
## 4.5 Summary
The new central concepts are:
1. Conditionals (if, elif, else)
2. Loops (for, while, range, enumerate, continue, break, zip)
3. List comprehensions
4. Itertools (product)
<a id="Functions"></a>
# 5. Functions
The most simple function takes **one argument** and returns **one output**:
```
def f(x):
return x**2
print(f(2))
```
**Note:** The indentation after `def` is again required (typically 4 spaces).
Alternatively, you can use a single-line **lambda formulation**:
```
g = lambda x: x**2
print(g(2))
```
Introducing **multiple arguments** is straightforward:
```
def f(x,y):
return x**2 + y**2
print(f(2,2))
```
So are **multiple outputs**:
```
def f(x,y):
z = x**2
q = y**2
return z,q
full_output = f(2,2) # returns a tuple
print(full_output)
```
The output tuple can be unpacked:
```
z,q = full_output # unpacking
print(z)
print(q)
```
## 5.1 No outputs...
Functions without *any* output can be useful when arguments are mutable:
```
def f(x): # assume x is a list
new_element = x[-1]+1
x.append(new_element)
x = [1,2,3] # original list
f(x) # update list (appending the element 4)
f(x) # update list (appending the element 5)
f(x)
print(x)
```
Note: this is called a side-effect, which is often best avoided.
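A side-effect-free alternative returns a new list instead of mutating its argument (a sketch; `f_pure` is a made-up name):

```python
def f_pure(x):
    # build and return a *new* list, leaving the argument untouched
    return x + [x[-1] + 1]

x = [1, 2, 3]
y = f_pure(x)
print(x)  # [1, 2, 3] - unchanged
print(y)  # [1, 2, 3, 4]
```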
## 5.2 Keyword arguments
We can also have **keyword arguments** with default values (instead of **positional** arguments):
```
def f(x,y,a=2,b=2):
return x**a + y**b
print(f(2,4)) # 2**2 + 2**2
print(f(2,2,b=3)) # 2**3 + 2**2
print(f(2,2,a=3,b=3)) # 2**3 + 2**3
```
**Note:** Keyword arguments must come after positional arguments.
**Advanced:** We can also use undefined keyword arguments:
```
def f(**kwargs):
# kwargs (= "keyword arguments") is a dictionary
for key,value in kwargs.items():
print(key,value)
f(a='abc',b='2',c=[1,2,3])
```
and these keywords can come from *unpacking a dictionary*:
```
my_dict = {'a': 'abc', 'b': '2', 'c': [1,2,3]}
f(**my_dict)
```
## 5.3 A function is an object
A function is an object and can be given to another functions as an argument.
```
def f(x):
return x**2
def g(x,h):
temp = h(x) # call function h with argument x
return temp+1
print(g(2,f))
```
## 5.4 Scope
**Important:** Variables in functions can be either **local** or **global** in scope.
```
a = 2 # a global variable
def f(x):
return x**a # a is global
def g(x,a=2):
# a's default value is fixed when the function is defined
return x**a
def h(x):
a = 2 # a is local
return x**a
print(f(2), g(2), h(2))
print('incrementing the global variable:')
a += 1
print(f(2), g(2), h(2)) # output is only changed for f
```
**Recommendation:** Never rely on global variables, they make it hard to understand what your code is doing.
## 5.5 Summary
**Functions:**
1. are **objects**
2. can have multiple (or no) **arguments** and **outputs**
3. can have **positional** and **keyword** arguments
4. can use **local** or **global** variables (**scope**)
**Task:** Create a function returning a person's full name from her first name and family name with middle name as an optional keyword argument with empty as a default.
```
# write your code here
```
**Answer:**
```
def full_name(first_name,family_name,middle_name=''):
name = first_name
if middle_name != '':
name += ' '
name += middle_name
name += ' '
name += family_name
return name
print(full_name('Jeppe','Druedahl','"Economist"'))
```
**Alternative answer** (more advanced, using a built-in list function):
```
def full_name(first_name,family_name,middle_name=''):
name = [first_name]
if middle_name != '':
name.append(middle_name)
name.append(family_name)
return ' '.join(name)
print(full_name('Jeppe','Druedahl','"Economist"'))
```
<a id="Floating-point-numbers"></a>
# 6. Floating point numbers
There are uncountably many real numbers. On a computer the real line is approximated with numbers of the form:
$$\text{number} = \text{significand} \times \text{base}^{\text{exponent}}$$
* **sign**: 1 bit (positive or negative)
* **significand**: 52 bits
* **exponent**: 11 bits
with the **base** fixed at 2 (this is the 64-bit floating point format). Not all numbers can therefore be represented exactly; a *close* neighboring number is used instead.
```
x = 0.1
print(f'{x:.100f}') # printing x with 100 decimals
x = 17.2
print(f'{x:.100f}') # printing x with 100 decimals
```
Simple sums might, consequently, not be exactly what you expect.
```
print(0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1)
```
And just as surprising:
```
print(0.1 == 0.10000000000000001)
```
**Comparisons of floating point numbers** are therefore always problematic.<br>
We know that
$$\frac{a \cdot c}{b \cdot c} = \frac{a}{b}$$
but:
```
a = 0.001
b = 11.11
c = 1000
test = (a*c)/(b*c) == a/b
print(test)
```
However, rounding off the numbers to a close neighbor may help:
```
test = round((a*c)/(b*c), 10) == round(a/b, 10)
print(test)
```
You may also use the `np.isclose` function to test whether two floats are numerically very close, i.e. practically the same:
```
import numpy as np
print(np.isclose((a*c)/(b*c), a/b))
```
**Underflow**: Multiplying many small numbers can result in an exact zero:
```
x = 1e-60
y = 1
for _ in range(6):
y *= x
print(y)
```
**Overflow**: If intermediate results are too large to be represented, the final result may be wrong or not possible to calculate:
```
x = 1.0
y = 2.7
for i in range(200):
x *= (i+1)
y *= (i+1)
print(y/x) # should be 2.7
print(x,y)
```
**Note:** `nan` means not-a-number; `inf` means infinity.
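A minimal sketch of how these special values arise (using only built-in floats and the standard library):

```python
import math

x = 1e308 * 10        # too large for a 64-bit float -> overflows to inf
print(x)              # inf

y = x - x             # inf - inf is undefined -> nan
print(y)              # nan

# nan is not equal to anything, not even itself
print(y == y)         # False
print(math.isnan(y), math.isinf(x))  # True True
```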
**Note:** The order of additions matters, but not by much:
```
sum1 = 10001234.0 + 0.12012 + 0.12312 + 1e-5
sum2 = 1e-5 + 0.12312 + 0.12012 + 10001234.0
print(sum1-sum2)
```
## 6.1 Summary
The take-aways are:
1. Decimal numbers are **approximate** on a computer!
2. **Never compare floats with equality** (only use strict inequalities)
3. Underflow and overflow can create problems (not very important in practice)
For further details see [here](https://docs.python.org/3/tutorial/floatingpoint.html).
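As a side note (the `decimal` module is not covered in this lecture): when exact base-10 arithmetic is needed, the standard library provides it:

```python
from decimal import Decimal

# ten times 0.1 as floats is not exactly 1.0 ...
print(sum([0.1]*10) == 1.0)          # False

# ... but Decimal stores numbers in base 10 and is exact here
total = sum([Decimal('0.1')]*10)
print(total, total == 1)             # 1.0 True
```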
**Videos:**
* [Why computers are bad at algebra - Infinite Series](https://www.youtube.com/watch?v=pQs_wx8eoQ8)
* [Floating point numbers - Computerphile](https://www.youtube.com/watch?v=PZRI1IfStY0)
<a id="Classes-(user-defined-types)"></a>
# 7. Classes (user-defined types)
**Advanced:** New types of objects can be defined using **classes**.
```
class human():
def __init__(self,name,height,weight): # called when created
# save the inputs as attributes
self.name = name # an attribute
self.height = height # an attribute
self.weight = weight # an attribute
def bmi(self): # a method
bmi = self.weight/(self.height/100)**2 # calculate bmi
return bmi # output bmi
def print_bmi(self):
print(self.bmi())
```
A class is used as follows:
```
# a. create an instance of the human object called "jeppe"
jeppe = human('jeppe',182,80) # height=182, weight=80
print(type(jeppe))
# b. print an attribute
print(jeppe.height)
# c. print the result of calling a method
print(jeppe.bmi())
```
**Methods** are like functions, but can automatically use all the attributes of the class (saved in *self.*) without getting them as arguments.
**Attributes** can be changed and extracted with **.-notation**
```
jeppe.height = 160
print(jeppe.height)
print(jeppe.bmi())
```
Or with **setattr- and getattr-notation**
```
setattr(jeppe,'height',182) # jeppe.height = 182
height = getattr(jeppe,'height') # height = jeppe.height
print(height)
print(jeppe.bmi())
```
## 7.1 Operator methods
If the **appropriate methods** are defined, standard operators, e.g. +, and general functions such as print can be used.
Define a new type of object called a **fraction**:
```
class fraction:
def __init__(self,numerator,denominator): # called when created
self.num = numerator
self.denom = denominator
def __str__(self): # called when using print
        return f'{self.num}/{self.denom}' # string = self.num/self.denom
def __add__(self,other): # called when using +
new_num = self.num*other.denom + other.num*self.denom
new_denom = self.denom*other.denom
return fraction(new_num,new_denom)
```
**Note:** We use that
$$\frac{a}{b}+\frac{c}{d}=\frac{a \cdot d+c \cdot b}{b \cdot d}$$
We can now **add fractions**:
```
x = fraction(1,3)
print(x)
x = fraction(1,3) # 1/3 = 5/15
y = fraction(2,5) # 2/5 = 6/15
z = x+y # 5/15 + 6/15 = 11/15
print(z,type(z))
```
Equivalent to:
```
z_alt = x.__add__(y)
print(z_alt,type(z_alt))
```
But we **cannot multiply** fractions (yet):
```
try:
z = x*y
print(z)
except:
print('multiplication is not defined for the fraction type')
```
**Extra task:** Implement multiplication for fractions.
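One possible answer to the extra task (a sketch; the class is repeated here so the cell is self-contained), using
$$\frac{a}{b} \cdot \frac{c}{d} = \frac{a \cdot c}{b \cdot d}$$

```python
class fraction:
    def __init__(self, numerator, denominator):  # called when created
        self.num = numerator
        self.denom = denominator

    def __str__(self):  # called when using print
        return f'{self.num}/{self.denom}'

    def __add__(self, other):  # called when using +
        new_num = self.num*other.denom + other.num*self.denom
        new_denom = self.denom*other.denom
        return fraction(new_num, new_denom)

    def __mul__(self, other):  # called when using *
        # multiply numerators and denominators separately
        return fraction(self.num*other.num, self.denom*other.denom)

x = fraction(1,3)
y = fraction(2,5)
z = x*y
print(z)  # 2/15
```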
## 7.2 Summary
The take-aways are:
1. **A class is a user-defined type**
2. **Attributes** are like **variables** encapsulated in the class
3. **Methods** are like **functions** encapsulated in the class
4. Operators are fundamentally defined in terms of methods
<a id="Summary"></a>
# 8. Summary
**This lecture:** We have talked about:
1. Types (int, str, float, bool, list, tuple, dict)
2. Operators (+, *, /, +=, *=, /=, ==, !=, <)
3. Referencing (=) vs. copying (copy, deepcopy)
4. Conditionals (if-elif-else) and loops (for, while, range, enumerate, zip, product)
5. Functions (positional and keyword arguments) and scope
6. Floating points
7. Classes (attributes, methods)
**You work:** When you are done with the DataCamp courses read through this notebook, play around with the code and ask questions if there is stuff you don't understand.
**Next lecture:** We will solve the consumer problem from microeconomics numerically.
**Your to-do list:** You should be running JupyterLab on your own computer.
<a id="Extra:-Iterators"></a>
# 9. Extra: Iterators
Consider the following loop, where my_list is said to be **iterable**.
```
my_list = [0,2,4,6,8]
for i in my_list:
print(i)
```
Consider the same loop generated with an **iterator**.
```
for i in range(0,10,2):
print(i)
```
This can also be written as:
```
x = iter(range(0,10,2))
print(x)
print(next(x))
print(next(x))
print(next(x))
```
The main benefit is that the potentially long list (like my_list above) is never created in memory; the values are produced one at a time.
We can also write **our own iterator class**:
```
class range_two_step:
def __init__(self, N):
self.i = 0
self.N = N
def __iter__(self):
return self
def __next__(self):
if self.i >= self.N:
raise StopIteration
temp = self.i
self.i = self.i + 2
return temp
```
Can then be used as follows:
```
x = iter(range_two_step(10))
print(next(x))
print(next(x))
print(next(x))
```
Or in a loop:
```
for i in range_two_step(10):
print(i)
```
<a id="Extra:-More-on-functions"></a>
# 10. Extra: More on functions
We can have an **undefined number of input arguments**:
```
def f(*args):
out = 0
for x in args:
out += x**2
return out
print(f(2,2))
print(f(2,2,2,2))
```
We can have **recursive functions** to calculate the Fibonacci sequence:
$$
\begin{aligned}
F_0 &= 0 \\
F_1 &= 1 \\
F_n &= F_{n-1} + F_{n-2} \\
\end{aligned}
$$
```
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
y = fibonacci(7)
print(y)
```
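The recursion above recomputes the same Fibonacci numbers many times. As a sketch (caching is not covered in this lecture), the standard library's `functools.lru_cache` can memoize the results:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every previously computed value
def fibonacci_fast(n):
    if n < 2:
        return n
    return fibonacci_fast(n-1) + fibonacci_fast(n-2)

print(fibonacci_fast(7))   # 13, as before
print(fibonacci_fast(50))  # now feasible; the naive version would take very long
```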
# Using Python, requests and Pandas
[Python](https://www.python.org) is a popular programming language which is heavily used in the data science domains. Python provides high level functionality supporting rapid application development with a large ecosystem of packages to work with weather/climate/water data.
Let's use the [Python requests](https://docs.python-requests.org) package to further interact with the wis2box API, and [Pandas](https://pandas.pydata.org) to run some simple summary statistics.
```
import json
import requests
def pretty_print(input):
print(json.dumps(input, indent=2))
# define the endpoint of the OGC API
api = 'http://localhost:8999/oapi'
```
## Stations
Let's find all the stations in our wis2box:
```
url = f'{api}/collections/stations/items?limit=50'
response = requests.get(url).json()
print(f"Number of stations: {response['numberMatched']}")
print('Stations:\n')
for station in response['features']:
print(station['properties']['name'])
```
## Discovery Metadata
Now, let's find all the datasets that are provided by the above stations. Each dataset is identified by a WIS 2.0 discovery metadata record.
```
url = f'{api}/collections/discovery-metadata/items'
response = requests.get(url).json()
print('Datasets:\n')
for dataset in response['features']:
print(f"id: {dataset['properties']['id']}, title: {dataset['properties']['title']}")
```
Let's find all the data access links associated with the Surface weather observations (hourly) dataset:
```
dataset_id = 'data.core.observations-surface-land.mw.FWCL.landFixed'
url = f"{api}/collections/discovery-metadata/items/{dataset_id}"
response = requests.get(url).json()
print('Data access links:\n')
for link in response['associations']:
print(f"{link['href']} ({link['type']})")
[link['href'] for link in response['associations']]
```
Let's use the OGC API - Features (OAFeat) link to drill into the observations for Chidoole station
```
dataset_api_link = [link['href'] for link in response['associations'] if link['type'] == 'OAFeat'][0]
dataset_api_link
```
## Observations
Let's inspect some of the data in the API's raw GeoJSON format:
```
url = f'{dataset_api_link}/items'
query_parameters = {
'wigos_station_identifier': '0-454-2-AWSCHIDOOLE',
'limit': 10000
}
response = requests.get(url, params=query_parameters).json()
pretty_print(response['features'][0])
```
Let's inspect what's measured at Chidoole:
```
print('Observed properties:\n')
for key, value in response['features'][0]['properties']['observations'].items():
print(f'{key} ({value["units"]})')
```
## Pandas
Let's use the GeoJSON to build a more user-friendly table
```
import pandas as pd
datestamp = [obs['properties']['phenomenonTime'] for obs in response['features']]
air_temperature = [obs['properties']['observations']['air_temperature']['value'] for obs in response['features']]
d = {
'Date/Time': datestamp,
'Air temperature (°C)': air_temperature
}
df = pd.DataFrame(data=d)
df
print("Time extent\n")
print(f'Begin: {df["Date/Time"].min()}')
print(f'End: {df["Date/Time"].max()}')
print("Summary statistics:\n")
df[['Air temperature (°C)']].describe()
```
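As a next step, the timestamps can be parsed into real datetimes for time-series operations. The sketch below uses a small hypothetical inline sample in the same shape as `df`, in case the wis2box API is not running:

```python
import pandas as pd

# hypothetical sample rows, shaped like the df built above
sample = pd.DataFrame({
    'Date/Time': ['2022-01-01T00:00:00Z', '2022-01-01T06:00:00Z', '2022-01-02T00:00:00Z'],
    'Air temperature (°C)': [21.5, 22.5, 19.0],
})

# parse the ISO-8601 strings and compute daily mean temperatures
sample['Date/Time'] = pd.to_datetime(sample['Date/Time'])
daily = sample.set_index('Date/Time')['Air temperature (°C)'].resample('D').mean()
print(daily)
```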
To open this notebook in Google Colab and start coding, click on the Colab icon below.
<table style="border:2px solid orange" align="left">
<td style="border:2px solid orange ">
<a target="_blank" href="https://colab.research.google.com/github/neuefische/ds-meetups/blob/main/01_Python_Workshop_Revisiting_Some_Fundamentals/Copying.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Revisiting... how to make copies in Python!
**or: The Aquarium Incident**
## Learning goals for this Notebook
At the end of this notebook you should:
- know the three different types of copying in Python
- have a better understanding when / why to use them
- know that buying just one aquarium for your kids is a bad idea.
## How to use this
This notebook is supposed to be a *follow-along*. Feel free to change stuff and experiment as much as you want, though.
Ideally, you should look at each cell and try to predict the result. Afterwards you can run it and see if you were right.
## Recap: immutable and mutable objects in Python
All objects in Python are either immutable or mutable.
- Immutable objects CANNOT BE CHANGED after creation! (when you try to, Python creates a new object instead)
- Mutable objects can be changed after creation.
| Immutable | Mutable
|---|---|
|numbers | lists|
|strings | dicts|
|tuple | set|
## Importing stuff
We barely have to import anything here, most of it is just Python.
```
import copy
```
## First way: Copy an Object with `=` operator
Ok, here comes the story...
Once upon a time, my Grandma asked my sister and me what our favourite animals are.
My sister said: "My favourite animal is a rainbow fish".
And as I was the biggest fan of my sister, I said: "Mine too!"
Later, my sister apparently changed her mind, and her favourite animal became a shark.
```
my_sis_fav_animal = "Rainbow fish"
my_fav_animal = my_sis_fav_animal
my_sis_fav_animal = "Shark"
print(my_sis_fav_animal)
print(my_fav_animal)
```
### What happened in Python?
Remember that strings are immutable objects; they cannot be changed after creation!
We created my_sis_fav_animal, and then mine (my_fav_animal).
When we "changed" my_sis_fav_animal, the original string object was NOT changed; the name was rebound to a new object ("Shark").
**Ok back to the story.**
My grandma asked us about our favourite animal because she wanted to give us a present. She bought us an aquarium. It was ours, but my sister and I both referred to it as "mine".
<img src="images/Python-copying-chapter1-1.png" alt="copying with '=' operator example" style="width:450px;"/>
```
aquarium = ['blackbox', 'big fish', 'small fish', 'second big fish']
my_aquarium = my_sis_aquarium = aquarium
```
My grandma told us that she will buy us both our favourite animal.
As we only had one aquarium, when my sister got a fish as present, my aquarium had a new fish as well. And vice versa.
<img src="images/Python-copying-chapter1-2.png" alt="copying with '=' operator example" style="width:450px;"/>
```
my_sis_aquarium.append('Shark')
my_aquarium.append("Rainbow fish")
```
Ahh! Surely you see where this will lead us... But first let's see how my sister's aquarium and mine look after the presents:
```
print(my_sis_aquarium)
print(my_aquarium)
```
<img src="images/Python-copying-chapter1-3.png" alt="'=' operator example" style="width:450px;"/>
Until one day... the shark ate the cute little rainbow fish :(
And of course this was my sister's fault! Obviously!
```
my_sis_aquarium.pop()
```
Oh wow... You see, that's when the love for my sister dropped a bit...
We can understand why this happened in our aquarium, but
## What happened in Python?
The `=` operator does not actually create a new object; it only creates a new variable that shares the reference to the original object. Just as both my sister and I referred to the aquarium as "mine", it was always the exact same one!
Let's check it out in Python. If we just created two variables sharing the same reference to the original object, the `id` of both should be the same.
```
print(id(my_sis_aquarium))
print(id(my_aquarium))
id(my_aquarium) == id(my_sis_aquarium)
```
### Conclusion
You can copy **immutable** objects with the `=` operator. But **mutable** objects will **NOT** be copied like this!
If you make any changes in either of those lists, the changes are done in both!
How to solve this situation?
## Second Way: Shallow copy
My grandma felt bad, as her gift caused this heartache. So she thought she could help out here...
She just got both of us the exact same aquarium. But now we both had our own aquarium in our own room.
<img src="images/Python-copying-chapter2-1.png" alt="shallow copy example" style="width:450px;"/>
```
blackbox = ['20 % O₂', 'Off']
my_sis_aquarium_2 = [blackbox, 'big fish', 'small fish', 'second big fish']
my_aquarium_2 = copy.copy(my_sis_aquarium_2)
```
So now, things that happen in my aquarium aren't happening in my sisters aquarium and vice versa.
<img src="images/Python-copying-chapter2-2.png" alt="shallow copy example" style="width:450px;"/>
```
my_sis_aquarium_2.append('Shark')
my_aquarium_2.append('Rainbow fish')
```
You trust me and my story right?
```
print(my_sis_aquarium_2)
print(my_aquarium_2)
```
Everything was fine, until one day a fish died in my sister's aquarium.
<img src="images/Python-copying-chapter2-3.png" alt="shallow copy example" style="width:450px;"/>
```
my_sis_aquarium_2.pop(-2)
```
That's when this little black box plays a role in our story.
It's a magic box.
It was installed in both our aquariums and had **one** remote control to change O₂ concentration in both aquariums at once.
How comfortable!
Maybe the fish died because there was too little O₂ in the water. So let's increase it, also to prevent this from happening in my aquarium!
<img src="images/Python-copying-chapter2-4.png" alt="shallow copy example" style="width:450px;"/>
```
blackbox[0] = "50 % O₂"
print(my_sis_aquarium_2)
print(my_aquarium_2)
```
You see: the O₂ concentration was changed for both aquariums!
But what about that second part of the remote control?
It's the party switch!!!
Don't be afraid to press it :D
```
my_aquarium_2[0][1] = "On"
```
Now it's party time 🎉 🎉 🎉.
Everywhere!
<img src="images/Python-copying-chapter2-5.png" alt="shallow copy example" style="width:450px;"/>
Really?
```
print(my_sis_aquarium_2)
print(my_aquarium_2)
print(blackbox)
```
Yess!!!
This is cool!
Until... it's not anymore.
Maybe my sister and me don't want to have a party always at the same time...
It's time for revenge!
<img src="images/Python-copying-chapter2-6.png" alt="shallow copy example" style="width:450px;"/>
That's definitely what's happening if someone else can control this switch for both rooms at once.
Hopefully the aquarium example was clear. Back to python.
## What happened in Python?
The shallow copy creates a new object which stores the references of the original elements! So it stores the references to each item in the list. If one item is a list (when we have a list in a list, we call it "nested list"), also the reference to this list is copied! Copying only the reference of a list was exactly what we have seen in chapter 1 (`=` operator); this is not a copy of the list! If items in this nested list are changed, those will be changed in both the original and the shallow copy.
The two variables `my_aquarium_2` and `my_sis_aquarium_2` don't have the same id. But the id of the `blackbox` will be both times the same.
So if anything is changed in the list "blackbox", it will be changed in both aquariums as well.
```
print(id(my_sis_aquarium_2))
print(id(my_aquarium_2))
# that's the shallow copy!
#checking if id of objects are identical
my_aquarium_2 is my_sis_aquarium_2
# no copies are created of nested objects!
my_aquarium_2[0] is my_sis_aquarium_2[0]
my_aquarium_2[0] is blackbox
```
**Remember:** If you create a shallow copy, you will create a copy of the object. But it won't create a copy of all nested mutable objects recursively! It will just copy the reference.
How to solve this situation?
## Third Way: Deep Copy
Here we are... revenge is over. So we should be able to get to a happy ending.
And here it comes. After my sister destroyed her and my aquarium in her rage, we got new ones. This time even with our own remote control!
Nothing that's changed for her aquarium should affect mine and vice versa.
<img src="images/Python-copying-chapter3-1.png" alt="deep copy example" style="width:450px;"/>
```
blackbox_2 = ['20 % O₂', 'Off']
aquarium = [blackbox_2, 'big fish', 'small fish', 'second big fish']
my_sis_aquarium_3 = copy.deepcopy(aquarium)
my_aquarium_3 = copy.deepcopy(aquarium)
my_aquarium_3.append('Rainbow fish')
print(my_aquarium_3)
print(my_sis_aquarium_3)
blackbox_2[1] = "On"
print(my_aquarium_3)
print(my_sis_aquarium_3)
my_sis_aquarium_3[0][1] ="Party"
print(my_aquarium_3)
print(my_sis_aquarium_3)
# copies are created
my_aquarium_3 is my_sis_aquarium_3
# copies are created of nested objects!
my_aquarium_3[0] is my_sis_aquarium_3[0]
```
## What happened in Python?
The deep copy method constructs a new object for every nested compound object.
Deep copies do not share any data with each other (also not in nested lists)!
## The End... What's the moral of the story?
- you can copy immutable object with the `=` operator!
- you CANNOT copy mutable object with the `=` operator!
- Shallow copy: copies all elements except the contents of nested mutable objects (those contents become shared between the original and all copies)
- Deep copy: copies entire object (without any exceptions)
There are still open questions...
### Why not always use deepcopy?
In our aquarium example it was pretty neat that one remote control changed the O₂ settings in both aquariums at once. Also, buying two aquariums and two remote controls would be the most expensive option.
Translated to Python. Why not use deepcopy?
- if some information should be present in several copies and a change to it should show up everywhere at once, sharing a reference keeps one true source (updating each copy separately is prone to errors).
<br>
<br>
- shallow copies are much faster. If you don't need deepcopy, don't use it!
### What's the fastest?
Let's check it out...
```
import timeit
import numpy as np
nested_list = list(range(11))
test_list = [nested_list]*10
# try out a not nested list:
# test_list = nested_list*10
print(test_list)
repeat = 10000
def equal_operator():
list2 = test_list
return list2
def shallow_by_slicing():
return test_list[:]
def shallow_by_copy():
return copy.copy(test_list)
def shallow_list_method():
return list(test_list)
def deep_copy():
return copy.deepcopy(test_list)
print(f"Duration of copying with = operator: {'{0:.5f}'.format(round(np.mean(timeit.repeat(equal_operator,number=repeat)),8))}s")
print(f"Duration of shallow copying by slicing: {'{0:.5f}'.format(round(np.mean(timeit.repeat(shallow_by_slicing,number=repeat)),8))}s")
print(f"Duration of shallow copying with 'list' method: {'{0:.5f}'.format(round(np.mean(timeit.repeat(shallow_list_method,number=repeat)),8))}s")
print(f"Duration of shallow copying with 'copy': {'{0:.5f}'.format(round(np.mean(timeit.repeat(shallow_by_copy,number=repeat)),8))}s")
print(f"Duration of deep copying with 'deepcopy': {'{0:.5f}'.format(round(np.mean(timeit.repeat(deep_copy,number=repeat)),8))}s")
```
#### The whole story
<img src="images/Copying_Aquarium_Example.png" alt="copying in python, whole aquarium example" style="width:1050px;"/>
<img src="../../images/banners/python-oop.png" width="600"/>
# <img src="../../images/logos/python.png" width="23"/> OOP (Part 4: Composition)
## <img src="../../images/logos/toc.png" width="20"/> Table of Contents
* [Implementation Inheritance vs Interface Inheritance](#implementation_inheritance_vs_interface_inheritance)
* [Composition](#composition)
* [Flexible Designs With Composition](#flexible_designs_with_composition)
* [Choosing Between Inheritance and Composition in Python](#choosing_between_inheritance_and_composition_in_python)
---
<a class="anchor" id="implementation_inheritance_vs_interface_inheritance"></a>
## Implementation Inheritance vs Interface Inheritance
When you derive one class from another, the derived class inherits both:
1. **The base class interface**: The derived class inherits all the methods, properties, and attributes of the base class.
2. **The base class implementation**: The derived class inherits the code that implements the class interface.
Most of the time, you’ll want to inherit the implementation of a class, but you will want to implement multiple interfaces, so your objects can be used in different situations. Modern programming languages are designed with this basic concept in mind. They allow you to inherit from a single class, but you can implement multiple interfaces.
In Python, you don’t have to explicitly declare an interface. Any object that implements the desired interface can be used in place of another object. This is known as [duck typing](https://realpython.com/python-type-checking/#duck-typing). Duck typing is usually explained as “if it behaves like a duck, then it’s a duck.”
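A minimal toy illustration of duck typing (the classes here are made up for this example only):

```python
class Duck:
    def speak(self):
        return "Quack"

class Robot:
    def speak(self):
        return "Beep"

def make_it_speak(obj):
    # no isinstance check: anything with a .speak() method is accepted
    return obj.speak()

print(make_it_speak(Duck()))   # Quack
print(make_it_speak(Robot()))  # Beep
```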
```
from abc import ABC, abstractmethod
class Shape(ABC):
shape_id = 0
def __init__(self, color='Black'):
print("Shape constructor called!")
self.color = color
def __str__(self, ):
return f"Shape is {self.color}"
@abstractmethod
def area(self):
pass
@abstractmethod
def perimeter(self):
pass
class Rectangle(Shape):
def __init__(self, width, height, color='Black'):
# You can also type `super(Rectangle, self)`
super().__init__(color)
print("Rectangle constructor called!")
self.width = width
self.height = height
def area(self):
return self.width * self.height
def perimeter(self,):
return 2 * self.width + 2 * self.height
def calculate_areas(self, rectangles_list):
areas = []
for r in rectangles_list:
            areas.append(r.area())
return areas
def __str__(self,):
return f"Rectangle is {self.color}"
r = Rectangle(3, 4)
```
To illustrate this, you will now add a `Human` class to the example above which doesn’t derive from `Shape`:
```
class Human:
def __init__(self, width, height):
self.width = width
self.height = height
def area(self):
return 999
```
The `Human` class doesn’t derive from `Shape` or `Rectangle`, but it exposes the same interface required by `Rectangle.area()` and `.calculate_areas()`, which expect objects that implement the following interface:
- A width property.
- A height property.
All these requirements are met by the `Human` class, so `Rectangle.area()` can compute an area for a `Human` instance as well.
```
Rectangle.area(Human(3, 4))
Rectangle.area(Rectangle(3, 4))
```
Since you don’t have to derive from a specific class for your objects to be reusable by the program, you may be asking why you should use inheritance instead of just implementing the desired interface. The following rules may help you:
- **Use inheritance to reuse an implementation**: Your derived classes should leverage most of their base class implementation. They must also model an **is a** relationship. A `Human` class might also have a width and a height, but a `Human` **is not** a `Shape`, so you should not use inheritance.
- **Implement an interface to be reused**: When you want your class to be reused by a specific part of your application, you implement the required interface in your class, but you don’t need to provide a base class, or inherit from another class.
<a class="anchor" id="composition"></a>
## Composition
Composition is an object oriented design concept that models a **has a** relationship. In composition, a class known as the **composite** contains an object of another class known as the **component**. In other words, a composite class **has a** component of another class.
Composition allows composite classes to reuse the implementation of the components it contains. The composite class doesn’t inherit the component class interface, but it can leverage its implementation.
The composition relation between two classes is considered **loosely coupled**. That means that changes to the component class rarely affect the composite class, and changes to the composite class never affect the component class. This provides better adaptability to change and allows applications to introduce new requirements without affecting existing code.
For example, an attribute for a shape can be color.
```
class Color:
def __init__(self, name, hex_code, rgb):
self.name = name
self.hex_code = hex_code
self.rgb = rgb
def __str__(self):
return f"{self.name} | (#{self.hex_code}) | RGB: {self.rgb}"
red_color = Color("red", "D33817", (211, 56, 23))
```
We implemented `__str__()` to provide a pretty representation of a `Color`. When you `print()` the `red_color` variable, the special method `__str__()` is invoked. Since you overloaded the method to return a formatted string, you get a nice, readable representation:
```
print(red_color)
```
You can now add the `Color` to the `Rectangle` class through composition:
```
r = Rectangle(3, 4, color=Color("Red", "D33817", (211, 56, 23)))
print(r.color)
```
Composition is a loosely coupled relationship that often doesn’t require the composite class to have knowledge of the component.
The `Rectangle` class leverages the implementation of the `Color` class without any knowledge of what an `Color` object is or how it’s represented. This type of design is so flexible that you can change the `Color` class without any impact to the `Rectangle` class.
<a class="anchor" id="flexible_designs_with_composition"></a>
## Flexible Designs With Composition
Composition is more flexible than inheritance because it models a loosely coupled relationship. Changes to a component class have minimal or no effects on the composite class. Designs based on composition are more suitable to change.
You change behavior by providing new components that implement those behaviors instead of adding new classes to your hierarchy.
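A small sketch of that idea with hypothetical classes (not part of the shape example): the composite's behavior changes by swapping the component it holds, with no new subclasses:

```python
class FlatDiscount:
    def apply(self, price):
        return price - 10

class PercentDiscount:
    def apply(self, price):
        return price * 0.9

class Order:
    # the composite holds a discount *component* and delegates to it
    def __init__(self, price, discount_policy):
        self.price = price
        self.discount_policy = discount_policy

    def total(self):
        return self.discount_policy.apply(self.price)

print(Order(100, FlatDiscount()).total())     # 90
print(Order(100, PercentDiscount()).total())  # 90.0
```

Changing the pricing behavior means passing a different component, so no `Order` subclass hierarchy is needed.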
<a class="anchor" id="choosing_between_inheritance_and_composition_in_python"></a>
## Choosing Between Inheritance and Composition in Python
Python, as an object oriented programming language, supports both inheritance and composition. You saw that inheritance is best used to model an **is a** relationship, whereas composition models a **has a** relationship.
Sometimes, it’s hard to see what the relationship between two classes should be, but you can follow these guidelines:
- **Use inheritance over composition in Python** to model a clear **is a** relationship. First, justify the relationship between the derived class and its base. Then, reverse the relationship and try to justify it. If you can justify the relationship in both directions, then you should not use inheritance between them.
- **Use inheritance over composition in Python** to leverage both the interface and implementation of the base class.
- **Use composition over inheritance in Python** to model a **has a** relationship that leverages the implementation of the component class.
- **Use composition over inheritance in Python** to create components that can be reused by multiple classes in your Python applications.
- **Use composition over inheritance in Python** to implement groups of behaviors and policies that can be applied interchangeably to other classes to customize their behavior.
- **Use composition over inheritance in Python** to enable run-time behavior changes without affecting existing classes.
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
<font style="font-size:28px;" align="left"><b>Matrices: Two Dimensional Lists </b></font>
<br>
_prepared by Abuzer Yakaryilmaz_
<br><br>
A matrix is a list of vectors where each vector has the same dimension.
Here is an example matrix formed by 4 row vectors with dimension 5:
$$
M = \mymatrix{rrrrr}{8 & 0 & -1 & 0 & 2 \\ -2 & -3 & 1 & 1 & 4 \\ 0 & 0 & 1 & -7 & 1 \\ 1 & 4 & -2 & 5 & 9}.
$$
We can also say that $M$ is formed by 5 column vectors with dimension 4.
$M$ is called a $ (4 \times 5) $-dimensional matrix. ($4 \times 5$: "four times five")
We can represent $M$ as a two dimensional list in Python.
```
# we may break lines when defining our list
M = [
[8 , 0 , -1 , 0 , 2],
[-2 , -3 , 1 , 1 , 4],
[0 , 0 , 1 , -7 , 1],
[1 , 4 , -2 , 5 , 9]
]
# let's print matrix M
print(M)
# let's print M in matrix form, row by row
for i in range(4): # there are 4 rows
print(M[i])
```
Remark that, by definition, the rows and columns of matrices are indexed starting from 1.
The $ (i,j) $-th entry of $ M $ refers to the entry in $ i $-th row and $ j $-th column.
(It is also denoted as $ M[i,j] $, $ M(i,j) $, or $ M_{ij} $.)
On the other hand, in Python, the indices start from zero.
So, when we define a list for a matrix or vector in Python, the value of an index in Python is one less than the value of the original index.
Let's see this with the following example.
```
M = [
[8 , 0 , -1 , 0 , 2],
[-2 , -3 , 1 , 1 , 4],
[0 , 0 , 1 , -7 , 1],
[1 , 4 , -2 , 5 , 9]
]
# print the element of M in the 1st row and the 1st column.
print(M[0][0])
# print the element of M in the 3rd row and the 4th column.
print(M[2][3])
# print the element of M in the 4th row and the 5th column.
print(M[3][4])
```
<h3> Multiplying a matrix with a number </h3>
When matrix $ M $ is multiplied by $ -2 $, each entry is multiplied by $ -2 $.
```
# we use double nested for-loops
N =[] # the result matrix
for i in range(4): # for each row
N.append([]) # create an empty sub-list for each row in the result matrix
for j in range(5): # in row (i+1), we do the following for each column
N[i].append(M[i][j]*-2) # we add new elements into the i-th sub-list
# print M and N, and see the results
print("I am M:")
for i in range(4):
print(M[i])
print()
print("I am N:")
for i in range(4):
print(N[i])
```
We write down the matrix $ N= -2 M $:
$$
N= -2 M = \mymatrix{rrrrr}{-16 & 0 & 2 & 0 & -4 \\ 4 & 6 & -2 & -2 & -8 \\ 0 & 0 & -2 & 14 & -2 \\ -2 & -8 & 4 & -10 & -18}.
$$
<h3> The summation of matrices</h3>
If $ M $ and $ N $ are matrices with the same dimensions, then $ M+N $ is also a matrix with the same dimensions.
The summation of two matrices is similar to the summation of two vectors.
If $ K = M +N $, then $ K[i,j] = M[i,j] + N[i,j] $ for every pair of $ (i,j) $.
Let's find $ K $ by using python.
```
# create an empty list for the result matrix
K=[]
for i in range(len(M)): # len(M) returns the number of rows in M
    K.append([]) # we create a new row for K
    for j in range(len(M[0])): # len(M[0]) returns the number of columns in M
        K[i].append(M[i][j]+N[i][j]) # we add new elements into the i-th sublist (row)
# print each matrix in a single line
print("M=",M)
print("N=",N)
print("K=",K)
```
<b> Observation:</b>
$ K = N +M $. We defined $ N $ as $ -2 M $.
Thus, $ K = N+M = -2M + M = -M $.
We can see that $ K = -M $ by looking at the outcomes of our program.
<h3> Task 1 </h3>
Randomly create $ (3 \times 4) $-dimensional matrices $ A $ and $ B $.
The entries can be picked from the list $ \{-5,\ldots,5\} $.
Print the entries of both matrices.
Find matrix $ C = 3A - 2B $, and print its entries. (<i>Note that $ 3A - 2B = 3A + (-2B) $</i>.)
Verify the correctness of your outcomes.
```
from random import randrange
#
# your solution is here
#
```
<a href="Math28_Matrices_Solutions.ipynb#task1">click for our solution</a>
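For reference, one possible sketch of this task (the linked notebook contains our official solution):

```python
from random import randrange

# randomly create (3x4)-dimensional matrices A and B with entries in {-5,...,5}
A = [[randrange(-5, 6) for _ in range(4)] for _ in range(3)]
B = [[randrange(-5, 6) for _ in range(4)] for _ in range(3)]

# C = 3A - 2B = 3A + (-2B), computed entry by entry
C = [[3 * A[i][j] - 2 * B[i][j] for j in range(4)] for i in range(3)]

for row in A: print(row)
print()
for row in B: print(row)
print()
for row in C: print(row)
```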
<h3> Transpose of a matrix</h3>
The transpose of a matrix is obtained by interchanging rows and columns.
For example, the second row becomes the new second column, and the third column becomes the new third row.
The transpose of a matrix $ M $ is denoted by $ M^T $.
Here we give two examples.
$$
M = \mymatrix{rrrr}{-2 & 3 & 0 & 4\\ -1 & 1 & 5 & 9} ~~~~~ \Rightarrow ~~~~~ M^T=\mymatrix{rr}{-2 & -1 \\ 3 & 1 \\ 0 & 5 \\ 4 & 9} ~~~~~~~~ \mbox{ and } ~~~~~~~~
N = \mymatrix{ccc}{1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9} ~~~~~ \Rightarrow ~~~~~ N^T = \mymatrix{ccc}{1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9}.
$$
Shortly, $ M[i,j] = M^T[j,i] $ and $ N[i,j] = N^T[j,i] $. (The indices are interchanged.)
<h3> Task 2 </h3>
Find $ M^T $ and $ N^T $ by using python.
Print all matrices and verify the correctness of your outcome.
```
M = [
[-2,3,0,4],
[-1,1,5,9]
]
N =[
[1,2,3],
[4,5,6],
[7,8,9]
]
#
# your solution is here
#
```
<a href="Math28_Matrices_Solutions.ipynb#task2">click for our solution</a>
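For reference, one way to compute the transpose with a nested list comprehension (a sketch; see the linked notebook for our official solution):

```python
M = [
    [-2, 3, 0, 4],
    [-1, 1, 5, 9]
]

# MT[j][i] = M[i][j]: the roles of the row and column indices are swapped
MT = [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

for row in MT:
    print(row)
```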
<h3> Multiplication of a matrix with a vector </h3>
We define a matrix $ M $ and a column vector $ v $:
$$
M = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -3 & 4 \\ 1 & 5 & 6} ~~~~~~\mbox{and}~~~~~~ v = \myrvector{1 \\ -3 \\ 2}.
$$
The multiplication of $ M v $ is a new vector $ u $ shown as $ u = M v $:
<ul>
<li> The first entry of $u $ is the dot product of the first row of $ M $ and $ v $.</li>
<li> The second entry of $ u $ is the dot product of the second row of $M$ and $ v $.</li>
<li> The third entry of $ u $ is the dot product of the third row of $M$ and $v$. </li>
</ul>
We do the calculations by using python.
```
# matrix M
M = [
[-1,0,1],
[-2,-3,4],
[1,5,6]
]
# vector v
v = [1,-3,2]
# the result vector u
u = []
# for each row, we do an inner product
for i in range(3):
# inner product for one row is initiated
inner_result = 0 # this variable keeps the summation of the pairwise multiplications
for j in range(3): # the elements in the i-th row
inner_result = inner_result + M[i][j] * v[j]
# inner product for one row is completed
u.append(inner_result)
print("M is")
for i in range(len(M)):
print(M[i])
print()
print("v=",v)
print()
print("u=",u)
```
We check the calculations:
$$
\mbox{First row:}~~~~ \myrvector{-1 \\ 0 \\ 1} \cdot \myrvector{1 \\ -3 \\ 2} = (-1)\cdot 1 + 0 \cdot (-3) + 1 \cdot 2 = -1 + 0 + 2 = 1.
$$
$$
\mbox{Second row:}~~~~ \myrvector{-2 \\ -3 \\ 4} \cdot\myrvector{1 \\ -3 \\ 2} = (-2)\cdot 1 + (-3) \cdot (-3) + 4 \cdot 2 = -2 + 9 + 8 = 15.
$$
$$
\mbox{Third row:}~~~~ \myrvector{1 \\ 5 \\ 6} \cdot \myrvector{1 \\ -3 \\ 2} = 1\cdot 1 + 5 \cdot (-3) + 6 \cdot 2 = 1 - 15 + 12 = -2.
$$
Then,
$$
u = \myrvector{1 \\ 15 \\ -2 }.
$$
<b>Observations:</b>
<ul>
<li> The dimension of each row of $ M $ is the same as the dimension of $ v $. Otherwise, the inner product is not defined.</li>
<li> The dimension of the result vector is the number of rows in $ M $, because we have the dot product for each row of $ M $.</li>
</ul>
<h3> Task 3 </h3>
Find $ u' = N u $ by using python for the following matrix $ N $ and column vector $ u $:
$$
N = \mymatrix{rrr}{-1 & 1 & 2 \\ 0 & -2 & -3 \\ 3 & 2 & 5 \\ 0 & 2 & -2} ~~~~~~\mbox{and}~~~~~~ u = \myrvector{2 \\ -1 \\ 3}.
$$
```
#
# your solution is here
#
```
<a href="Math28_Matrices_Solutions.ipynb#task3">click for our solution</a>
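The row-by-row pattern above generalizes to a small reusable helper. This sketch (the name `matvec` is ours) also checks that the dimensions match before multiplying:

```python
def matvec(M, v):
    # each row of M must have the same dimension as v,
    # otherwise the inner product is not defined
    assert all(len(row) == len(v) for row in M)
    u = []
    for row in M:
        u.append(sum(row[j] * v[j] for j in range(len(v))))
    return u

# the example from above: we expect u = [1, 15, -2]
M = [
    [-1, 0, 1],
    [-2, -3, 4],
    [1, 5, 6]
]
v = [1, -3, 2]
print(matvec(M, v))
```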
<h3> Multiplication of two matrices </h3>
This is just the generalization of the procedure given above.
We find matrix $ K = M N $ for given matrices
$
M = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2 \\ 1 & 2 & -2} ~~\mbox{and}~~
N = \mymatrix{rrr}{0 & 2 & 1 \\ 3 & -1 & -2 \\ -1 & 1 & 0}.
$
Remark that the matrix $ N $ has three columns: $ v_1 = \myrvector{0 \\ 3 \\ -1} $, $ v_2 = \myrvector{2 \\ -1 \\ 1} $, and $ v_3 = \myrvector{1 \\ -2 \\ 0} $.
We know how to calculate $ v_1' = M v_1 $.
Similarly, we can calculate $ v_2' = M v_2 $ and $ v_3' = M v_3 $.
As you may have guessed, these new column vectors ($v_1'$, $v_2'$, and $v_3'$) are the columns of the result matrix $ K $.
The dot product of the i-th row of $ M $ and $ j $-th column of $ N $ gives the $(i,j)$-th entry of $ K $.
<h3> Task 4 </h3>
Find matrix $ K $.
This is a challenging task. You may use triple nested for-loops.
You may also consider writing a function that takes two lists and returns their dot product.
```
# matrix M
M = [
[-1,0,1],
[-2,-1,2],
[1,2,-2]
]
# matrix N
N = [
[0,2,1],
[3,-1,-2],
[-1,1,0]
]
# matrix K
K = []
#
# your solution is here
#
```
<a href="Math28_Matrices_Solutions.ipynb#task4">click for our solution</a>
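For reference, a sketch of the dot-product-based multiplication described above (the helper name `matmul` is ours; the linked notebook has our official solution):

```python
def matmul(M, N):
    # K[i][j] is the dot product of the i-th row of M and the j-th column of N
    K = []
    for i in range(len(M)):
        K.append([])
        for j in range(len(N[0])):
            K[i].append(sum(M[i][k] * N[k][j] for k in range(len(N))))
    return K

M = [
    [-1, 0, 1],
    [-2, -1, 2],
    [1, 2, -2]
]
N = [
    [0, 2, 1],
    [3, -1, -2],
    [-1, 1, 0]
]
for row in matmul(M, N):
    print(row)
```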
<h3> Is $ A B = B A $? </h3>
It is a well-known fact that the order of numbers does not matter in multiplication.
For example, $ (-3) \cdot 4 = 4 \cdot (-3) $.
Is it also true for matrices? For any given two matrices $ A $ and $ B $, is $ A B = B A $?
There are some examples of $A$ and $B$ such that $ A B = B A $.
But this is not true in general, and so this statement is false.
We can falsify this statement by finding a counter-example.
We write a program using a probabilistic strategy.
The idea is as follows: Randomly find two example matrices $ A $ and $ B $ such that $ AB \neq BA $.
Remark that if $ AB = BA $, then $ AB - BA $ is a zero matrix.
<h3> Task 5 </h3>
Randomly define two $ (2 \times 2) $-dimensional matrices $A$ and $ B $.
Then, find $ C= AB-BA $. If $ C $ is not a zero matrix, then we are done.
<i>Remark: With small probability, we may find a pair of $ (A,B) $ such that $ AB = BA $.
In this case, repeat your experiment.</i>
```
#
# your solution is here
#
```
<a href="Math28_Matrices_Solutions.ipynb#task5">click for our solution</a>
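The task above asks for a random search; as a deterministic complement, here is a classic fixed counterexample showing that $ AB \neq BA $ in general:

```python
# a fixed (non-random) counterexample: for these A and B, AB != BA
A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]

def mult2x2(P, Q):
    # (i,j)-th entry is the dot product of the i-th row of P and j-th column of Q
    return [[P[i][0] * Q[0][j] + P[i][1] * Q[1][j] for j in range(2)] for i in range(2)]

AB = mult2x2(A, B)
BA = mult2x2(B, A)
C = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]
print("AB =", AB)
print("BA =", BA)
print("C = AB - BA =", C)  # C is not the zero matrix, so AB != BA
```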
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.stats import kurtosis,skew
from sklearn.linear_model import LinearRegression,Ridge
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.tree import DecisionTreeRegressor,ExtraTreeRegressor
from sklearn.ensemble import RandomForestRegressor,BaggingRegressor,GradientBoostingRegressor,ExtraTreesRegressor
#import xgboost,lightgbm
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error,mean_squared_error
from sklearn import linear_model
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in modern scikit-learn
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler,MinMaxScaler,StandardScaler
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
d1=pd.read_csv('Train_UWu5bXk.csv')
# d2=pd.read_csv('test1.csv')
# data=pd.concat([d1,d2],axis=0)
data=d1
d1.shape
data['Item_Fat_Content'].value_counts()
data['Item_Fat_Content']=data['Item_Fat_Content'].map({'LF':0,'Low Fat':0,'Regular':1,'low fat':0,'reg':1})
data.isnull().sum()
yy=data.groupby('Item_Identifier')['Item_Weight'].mean()
yy=dict(yy)
data['Item_Weight'].fillna(-1.0,inplace=True)
def ff(xx,d):
if(d==-1.0):
return yy[xx]
else:
return d
data['Item_Weight']=data.apply(lambda x : ff(x['Item_Identifier'],x['Item_Weight']),axis=1)
data.isnull().sum()
data.groupby('Item_Type')['Item_Weight'].median()
data['Item_Weight'].fillna(10.0,inplace=True)
data.groupby('Outlet_Type')['Outlet_Size'].value_counts()
data.groupby(['Outlet_Type','Outlet_Size'])['Item_Outlet_Sales'].mean()
data['Outlet_Size'].fillna('Small',inplace=True)
# data.Outlet_Size
data['Item_MRP'].hist(bins=200,figsize=(10,6))
np.where(data.Item_Visibility<=0)
yy=data[data.Item_Visibility>0.0].groupby('Item_Identifier')['Item_Visibility'].mean()
yy=dict(yy)
def ff(x,y):
if(x==0.0):
return yy[y]
else:
return x
data['Item_Visibility']=data.apply(lambda x : ff(x['Item_Visibility'],x['Item_Identifier']),axis=1)
ya=data.groupby('Item_Identifier')['Item_Visibility'].mean()
ya=dict(ya)
def ff(x,y):
return (x/ya[y])
data['itemimportance']=data.apply(lambda x : ff(x['Item_Visibility'],x['Item_Identifier']),axis=1)
data['itemimportance'].describe()
data['years']=2018-data['Outlet_Establishment_Year']
del data['Outlet_Establishment_Year']
lb=LabelEncoder()
lb.fit(data['Item_Type'])
data['Item_Type']=lb.transform(data['Item_Type'])
lb=LabelEncoder()
lb.fit(data['Outlet_Identifier'])
data['Outlet_Identifier']=lb.transform(data['Outlet_Identifier'])
lb=LabelEncoder()
lb.fit(data['Outlet_Size'])
data['Outlet_Size']=lb.transform(data['Outlet_Size'])
lb=LabelEncoder()
lb.fit(data['Outlet_Location_Type'])
data['Outlet_Location_Type']=lb.transform(data['Outlet_Location_Type'])
lb=LabelEncoder()
lb.fit(data['Outlet_Type'])
data['Outlet_Type']=lb.transform(data['Outlet_Type'])
data.isnull().sum()
data.describe()
data.boxplot(by='Outlet_Type',column='Item_Outlet_Sales')
del data['Item_Identifier']
test=data[8500:]
data=data[0:8500]
yd=data['Item_Outlet_Sales']
del data['Item_Outlet_Sales']
del test['Item_Outlet_Sales']
data.shape,yd.shape,test.shape
yy=yd
np.where(np.isnan(data))
data.isnull().sum()
#simple linear regression with cross validation of 5 fold
avg=0.0
skf=KFold(n_splits=5)
skf.get_n_splits(data)
for ti,tj in skf.split(data):
dx,tx=data.iloc[ti],data.iloc[tj]
dy,ty=yy[ti],yy[tj]
lm=make_pipeline(MinMaxScaler(), linear_model.LinearRegression(n_jobs=-1))
lm.fit(dx,dy)
yu=np.sqrt(mean_squared_error(y_true=ty,y_pred=lm.predict(tx)))
avg=avg+yu
print(yu)
print("AVG RMSE::",avg/5)
#simple elasticnet regression with cross validation of 5 fold
avg=0.0
skf=KFold(n_splits=5)
skf.get_n_splits(data)
for ti,tj in skf.split(data):
dx,tx=data.iloc[ti],data.iloc[tj]
dy,ty=yy[ti],yy[tj]
lm=make_pipeline(StandardScaler(), linear_model.ElasticNet(l1_ratio=0.6,alpha=0.001))
lm.fit(dx,dy)
yu=np.sqrt(mean_squared_error(y_true=ty,y_pred=lm.predict(tx)))
avg=avg+yu
print(yu)
print("AVG RMSE::",avg/5)
#simple Decision Tree regression with cross validation of 5 fold
avg=0.0
skf=KFold(n_splits=5)
skf.get_n_splits(data)
for ti,tj in skf.split(data):
dx,tx=data.iloc[ti],data.iloc[tj]
dy,ty=yy[ti],yy[tj]
lm=DecisionTreeRegressor(max_depth=5)
lm.fit(dx,dy)
yu=np.sqrt(mean_squared_error(y_true=ty,y_pred=lm.predict(tx)))
avg=avg+yu
print(yu)
print("AVG RMSE::",avg/5)
#simple Random Forest Tree regression with cross validation of 5 fold
avg=0.0
skf=KFold(n_splits=5)
skf.get_n_splits(data)
for ti,tj in skf.split(data):
dx,tx=data.iloc[ti],data.iloc[tj]
dy,ty=yy[ti],yy[tj]
lm=RandomForestRegressor(max_depth=5,n_jobs=-1,n_estimators=100)
lm.fit(dx,dy)
yu=np.sqrt(mean_squared_error(y_true=ty,y_pred=lm.predict(tx)))
avg=avg+yu
print(yu)
print("AVG RMSE::",avg/5)
```
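Although `ExtraTreesRegressor` is imported above, it is never used in this excerpt. The same five-fold pattern extends to it directly; below is a sketch on synthetic stand-in data (with the real `data` and `yy` from above, the loop body would be unchanged):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

# synthetic stand-in for `data`/`yy` so the sketch runs on its own
rng = np.random.RandomState(0)
data = pd.DataFrame(rng.rand(200, 5), columns=list('abcde'))
yy = pd.Series(data.sum(axis=1) + 0.1 * rng.randn(200))

# simple ExtraTrees regression with cross validation of 5 fold
avg = 0.0
skf = KFold(n_splits=5)
for ti, tj in skf.split(data):
    dx, tx = data.iloc[ti], data.iloc[tj]
    dy, ty = yy[ti], yy[tj]
    lm = ExtraTreesRegressor(max_depth=5, n_jobs=-1, n_estimators=100)
    lm.fit(dx, dy)
    yu = np.sqrt(mean_squared_error(y_true=ty, y_pred=lm.predict(tx)))
    avg = avg + yu
    print(yu)
print("AVG RMSE::", avg / 5)
```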
1. #### Matlab SPM pipeline for making brain regions and tumor regions in uncorrected, corrected and ground truth DSC space
```
# Run once to store stdout
import sys
nb_stdout = sys.stdout
# Redirect stdout to console, to not get too much text output in the notebook
# This means that the notebook will not output any text. Text will be redirected
# to the terminal where the notebook was started.
sys.stdout = open(1, 'w')
output_directory_suffix = "2019_07_02_native"
corrections_base_directory = "../epi_corrections_out_" + output_directory_suffix
corrections_base_directory
from os import getcwd
from os.path import abspath
epi_corrections_root_directory = abspath(getcwd() + "/..") # Go one directory up from current working directory
epi_corrections_root_directory
%%bash -s "$epi_corrections_root_directory" "$corrections_base_directory" "$output_directory_suffix"
epi_corrections_root_directory=$1
corrections_base_directory=../$2
output_directory_suffix=$3
spm_path=/media/loek/HDD3TB1/apps/spm12 # This needs to be changed
cd $epi_corrections_root_directory
pipeline_report_file=$2/pipeline_report_spm_$output_directory_suffix.txt
run_command='matlab
-nodisplay
-nosplash
-nodesktop
-r "cd('"'spm_pipeline'"');
addpath('"'$spm_path'"');
myCluster = parcluster;
myCluster.NumWorkers = 4;
saveProfile(myCluster);
parpool(4);
make_dsc_rois('"'$corrections_base_directory'"');
exit;"
2>&1 | tee $pipeline_report_file'
eval $run_command
```
To see the log, run `tail -f pipeline_report_spm*.txt` from `corrections_base_directory`.
2. #### Copy over EPI_raw_DSC, EPI_applytopup and EPI_applyepic to a separate directory
```
corrections_base_new_directory = \
"../epi_corrections_out_2019_07_02_native_tumor_excluded_from_rcbv"
```
3. #### Run remove-tumor-from-cbv.ipynb on this separate directory to remove rCBV based on tumor GT-ROIs
4. #### compute-roi-medians.sh using brain and tumor GT ROIs
```
%%bash -s "$epi_corrections_root_directory" "$corrections_base_new_directory"
epi_corrections_root_directory=$1
corrections_base_new_directory=$2
cd $epi_corrections_root_directory
command="bash scripts/compute-roi-medians.sh \
$corrections_base_new_directory \
2>&1 | tee $corrections_base_new_directory/computegtroismedianslog.txt"
eval $command
```
5. #### compute-dice-between-rois.sh using brain and tumor GT and uncorrected and corrected ROIs
```
%%bash -s "$epi_corrections_root_directory" "$corrections_base_new_directory"
epi_corrections_root_directory=$1
corrections_base_new_directory=$2
cd $epi_corrections_root_directory
command="bash scripts/compute-dice-between-rois.sh \
$corrections_base_new_directory \
2>&1 | tee $corrections_base_new_directory/computedicelog.txt"
eval $command
```
6. #### analyze-medians-and-dice-scores.sh
7. #### analyze-dice-gt-cor.sh
8. #### Additional analysis of GT tumor roi rCBV change from correction (2020-08-31)
See {epi_corrections_root_directory}/notes.txt
```
corrections_base_new_directory = \
"../epi_corrections_out_2019_07_02_native_wtumor"
print(epi_corrections_root_directory + corrections_base_new_directory)
```
8. #### a) compute-roi-medians.sh using brain and tumor GT ROIs
```
%%bash -s "$epi_corrections_root_directory" "$corrections_base_new_directory"
epi_corrections_root_directory=$1
corrections_base_new_directory=$2
cd $epi_corrections_root_directory
command="bash scripts/compute-roi-medians.sh \
$corrections_base_new_directory \
2>&1 | tee $corrections_base_new_directory/computegtroismedianslog.txt"
eval $command
```
8. #### b) analyze tumor gt median change
```
corrections_base_old_directory = \
"../epi_corrections_out_2019_07_02_native_tumor_excluded_from_rcbv"
%%bash -s "$epi_corrections_root_directory" "$corrections_base_new_directory"
epi_corrections_root_directory=$1
corrections_base_new_directory=$2
#echo $epi_corrections_root_directory
#echo $corrections_base_new_directory
cd $epi_corrections_root_directory
# Gradient echo (e1) and Spin echo (e2)
for (( i = 1 ; i < 3 ; i++ )) ; do
# MNI ROI median rCBV files
readarray epic_tumor_median_files_arr_e${i} < <(find $corrections_base_new_directory/EPI_applyepic -type d -name *e${i}_applyepic_perf | xargs -I {} echo {}/tumorroismedians.txt)
readarray topup_tumor_median_files_arr_e${i} < <(find $corrections_base_new_directory/EPI_applytopup -type d -name *e${i}_prep_topup_applytopup_postp_perf | xargs -I {} echo {}/tumorroismedians.txt)
readarray raw_tumor_median_files_arr_e${i} < <(find $corrections_base_new_directory/EPI_raw_DSC -type d -name *e${i}_perf | xargs -I {} echo {}/tumorroismedians.txt)
done
num1=${#epic_tumor_median_files_arr_e1[*]}
#num2=${#topup_tumor_median_files_arr_e1[*]}
#num3=${#raw_tumor_median_files_arr_e1[*]}
#echo $num1
#echo $num2
#echo $num3
script=$epi_corrections_root_directory/scripts/wilcoxon-tumor-gt-rois-analysis.py
command="python $script --rawmedians ${raw_tumor_median_files_arr_e2[@]} --cormedians ${topup_tumor_median_files_arr_e2[@]}"
#echo $command
eval $command
```
# Distilling a Neural Network into a Soft Decision Tree
* Implementation based on [[Frosst & Hinton, 2017](http://arxiv.org/abs/1711.09784)]
## Imports
```
import os
import keras
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Conv1D, Flatten, Lambda
from tensorflow.keras.models import Model
from models import ConvNet, SoftBinaryDecisionTree
from models.tree import SoftDecisionTree
from models.utils import brand_new_tfsession, draw_tree
from tensorflow.keras.callbacks import EarlyStopping, Callback
sess = brand_new_tfsession()
```
## Dataset
```
# load MNIST data
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# hold out last 10000 training samples for validation
x_valid, y_valid = x_train[-10000:], y_train[-10000:]
x_train, y_train = x_train[:-10000], y_train[:-10000]
print(x_train.shape, y_train.shape, x_valid.shape, y_valid.shape, x_test.shape, y_test.shape)
# retrieve image and label shapes from training data
img_rows, img_cols = x_train.shape[1:]
n_classes = np.unique(y_train).shape[0]
print(img_rows, img_cols, n_classes)
# convert labels to 1-hot vectors
y_train = tf.keras.utils.to_categorical(y_train, n_classes)
y_valid = tf.keras.utils.to_categorical(y_valid, n_classes)
y_test = tf.keras.utils.to_categorical(y_test, n_classes)
print(y_train.shape, y_valid.shape, y_test.shape)
# normalize inputs and cast to float
x_train = (x_train / np.max(x_train)).astype(np.float32)
x_valid = (x_valid / np.max(x_valid)).astype(np.float32)
x_test = (x_test / np.max(x_test)).astype(np.float32)
```
## Neural Network
```
nn = ConvNet(img_rows, img_cols, n_classes)
nn.maybe_train(data_train=(x_train, y_train),
data_valid=(x_valid, y_valid),
batch_size=16, epochs=12)
nn.evaluate(x_train, y_train)
nn.evaluate(x_valid, y_valid)
nn.evaluate(x_test, y_test)
```
### Extraction of soft labels for distillation
```
y_train_soft = nn.predict(x_train)
y_train_soft.shape
```
## Binary Soft Decision Tree
Flatten dataset in advance
```
x_train_flat = x_train.reshape((x_train.shape[0], -1))
x_valid_flat = x_valid.reshape((x_valid.shape[0], -1))
x_test_flat = x_test.reshape((x_test.shape[0], -1))
# import matplotlib.pyplot as plt
# %matplotlib inline
# plt.imshow(x_test_flat.reshape((x_test_flat.shape[0], img_rows, img_cols))[1])
x_train_flat.shape, x_valid_flat.shape, x_test_flat.shape
```
### Hyperparameters
* `tree_depth`: as denoted in the [[paper](https://arxiv.org/pdf/1711.09784.pdf)], depth is in terms of inner nodes (excluding leaves / indexing depth from `0`)
* `penalty_strength`: regularization penalty strength
* `penalty_decay`: regularization penalty decay: paper authors found 0.5 optimal (note that $2^{-d} = 0.5^d$ as we use it)
* `ema_win_size`: scaling factor to the "default size of the window" used to calculate moving averages (growing exponentially with depth) of node and path probabilities
* `inv_temp`: scale logits of inner nodes to "avoid very soft decisions" [[paper](https://arxiv.org/pdf/1711.09784.pdf)]
* pass `0` to indicate that this should be a learned parameter (single scalar learned to apply to all nodes in the tree)
* `learning_rate`: hopefully no need to explain, but let's be cool and use [Karpathy constant](https://www.urbandictionary.com/define.php?term=Karpathy%20Constant) ([source](https://twitter.com/karpathy/status/801621764144971776)) :D as default in `tree.__init__()`
* `batch_size`: we use a small one, because with increasing depth and thus amount of leaf bigots, larger batch sizes cause their loss terms to be scaled down too much by averaging, which results in poor optimization properties
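As a quick numerical illustration — assuming, as the decay bullet above suggests, that the penalty weight at inner-node depth $d$ is `penalty_strength * penalty_decay**d` — the regularization falls off rapidly with depth:

```python
penalty_strength = 1e+1
penalty_decay = 0.25

# assumed schedule: weight at inner-node depth d is penalty_strength * penalty_decay**d
weights = [penalty_strength * penalty_decay ** d for d in range(5)]
for d, w in enumerate(weights):
    print("depth", d, "-> penalty weight", w)
```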
```
n_features = img_rows * img_cols
tree_depth = 4
penalty_strength = 1e+1
penalty_decay = 0.25
ema_win_size = 1000
inv_temp = 0.01
learning_rate = 5e-03
batch_size = 4
```
### Regular training with hard labels
```
sess = brand_new_tfsession()
input_layer = Input(shape=(n_features,))
lolz1 = Lambda(lambda x: K.expand_dims(x, 2), name='lolz1')(input_layer)
lolz2 = Conv1D(64, 3, name='lolz2')(lolz1)
lolz3 = Flatten(name='lolz3')(lolz2)
lolz4 = Dense(784, name='lolz4')(lolz3)
tree_layer = SoftDecisionTree(tree_depth,
n_features,
n_classes,
penalty_strength=penalty_strength,
penalty_decay=penalty_decay,
inv_temp=inv_temp,
ema_win_size=ema_win_size,
learning_rate=learning_rate)
output_layer = tree_layer(lolz4)
loss = tree_layer.get_tree_loss()
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='sgd', loss=loss)
model.summary()
tree_layer.initialize_variables(sess, x_train_flat, batch_size)
model.fit(x_train_flat, y_train, batch_size=batch_size, epochs=40)
```
### Distillation: training with soft labels
```
sess = brand_new_tfsession()
input_layer = Input(shape=(n_features,))
lolz1 = Lambda(lambda x: K.expand_dims(x, 2), name='lolz1')(input_layer)
lolz2 = Conv1D(64, 3, name='lolz2')(lolz1)
lolz3 = Flatten(name='lolz3')(lolz2)
lolz4 = Dense(784, name='lolz4')(lolz3)
tree_layer = SoftDecisionTree(tree_depth,
n_features,
n_classes,
penalty_strength=penalty_strength,
penalty_decay=penalty_decay,
inv_temp=inv_temp,
ema_win_size=ema_win_size,
learning_rate=learning_rate)
output_layer = tree_layer(lolz4)
loss = tree_layer.get_tree_loss()
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='sgd', loss=loss)
tree_layer.initialize_variables(sess, x_train_flat, batch_size)
epochs = 40
model.fit(x_train_flat, y_train, batch_size=batch_size, epochs=40)
```
### Visualizing learned parameters
```
draw_tree(sess, tree_layer.tree, img_rows, img_cols)
```
#### How to read the visual
Exactly as in the [[paper](https://arxiv.org/pdf/1711.09784.pdf)]:
* Number **below** any **leaf** denotes `argmax()` of learned distribution, thus final static **prediction** of the (bigot, not expert!) leaf.
* Numbers **above** any **inner node** denote the **set of possible predictions** in the sub-tree of the given node.
### Visualizing decision path
```
digit = 9
# get (reproducibly) pseudo-random example of chosen digit
np.random.seed(0)
sample_index = np.random.choice(np.where(np.argmax(y_test, axis=1)==digit)[0])
input_img = x_test[sample_index]
draw_tree(sess, tree_layer.tree, img_rows, img_cols, input_img=input_img)
```
#### How to read the visual
* The <span style="color:green">**maximum probability path**</span> leading **to final prediction** is now denoted by <span style="color:green"> **green arrows**</span>
* Number **below** any given **inner node** on this <span style="color:green">**path**</span> denotes the **pre-activation logit** $ = (\beta (\mathbf{xw}_i + b_i))$.
* This is basically just a **biased** ($b_i$) and **scaled** ($\beta$) **correlation** of **input** ($\mathbf{x}$) with the given **mask** ($\mathbf{w}_i$).
* From the definition of $\sigma$ activation function, the choice of branch breaks around `0`.
* From the definition of **branching** in the [[paper](https://arxiv.org/pdf/1711.09784.pdf)], **negative** correlations branch **to the left**, while **positive** correlations branch **to the right**.
<img src="assets/img/branching.png" width="35%"/>
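Numerically, the branching rule can be sketched as below. All values here are made-up placeholders (not the trained model's parameters); the point is only how the sign of the scaled correlation drives the soft left/right split:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical inner node: mask w, bias b, inverse temperature beta
rng = np.random.default_rng(0)
x = rng.random(784)           # flattened input image (placeholder)
w = rng.standard_normal(784)  # the node's learned filter/mask (placeholder)
b = 0.0
beta = 0.01

# pre-activation logit: biased, scaled correlation of input with the mask
logit = beta * (x @ w + b)
p_right = sigmoid(logit)   # probability of branching right (positive correlation)
p_left = 1.0 - p_right     # probability of branching left (negative correlation)
print(logit, p_left, p_right)
```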
```
draw_tree(sess, tree_layer.tree, img_rows, img_cols, input_img=input_img, show_correlation=True)
```
#### How to read the visual
* On the <span style="color:green">**maximum probability path**</span> there are now **correlations** of the **input image** with the **node masks**.
* The **homogeneous area** gives a frame of reference for color of `0`s.
* It always corresponds to the **black area in the input image**, but due to lack of normalization (yes, I'm the lazy one here), it ends up as a different shade of gray in each subplot.
* All **lighter pixels** from this correspond to **positive correlation coefficients**.
* All **darker pixels** correspond to **negative correlation coefficients**.
_Note: In the last input-masked kernel on the path to prediction, notice how model recognizes `9`s from `7`s._
To save the inference example as an animation, run the cell below.
```
if not os.path.isdir('assets/img/infer'):
os.mkdir('assets/img/infer')
draw_tree(sess, tree_layer.tree, img_rows, img_cols, input_img=input_img,
savepath='assets/img/infer/0.png')
draw_tree(sess, tree_layer.tree, img_rows, img_cols, input_img=input_img, show_correlation=True,
savepath='assets/img/infer/1.png')
!convert -delay 100 -loop 0 assets/img/infer/*.png assets/img/infer.gif
```
### Capturing the progress of learning
```
if not os.path.isdir('assets/img/epoch'):
os.mkdir('assets/img/epoch')
if not os.path.isdir('assets/img/sample'):
os.mkdir('assets/img/sample')
sess = brand_new_tfsession()
input_layer = Input(shape=(n_features,))
tree_layer = SoftDecisionTree(tree_depth,
n_features,
n_classes,
penalty_strength=penalty_strength,
penalty_decay=penalty_decay,
inv_temp=inv_temp,
ema_win_size=ema_win_size,
learning_rate=learning_rate)
output_layer = tree_layer(input_layer)
loss = tree_layer.get_tree_loss()
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='sgd', loss=loss)
tree_layer.initialize_variables(sess, x_train_flat, batch_size)
class ModelImageSaver(Callback):
def __init__(self, display, limit):
self.seen = 0
self.display = display
self.limit = limit
def on_train_begin(self, logs={}):
        draw_tree(sess, tree_layer.tree, img_rows, img_cols, savepath='assets/img/epoch/{:04}.png'.format(0))
        draw_tree(sess, tree_layer.tree, img_rows, img_cols, savepath='assets/img/sample/{:07}.png'.format(0))
def on_epoch_end(self, epoch, logs={}):
        draw_tree(sess, tree_layer.tree, img_rows, img_cols, savepath='assets/img/epoch/{:04}.png'.format(epoch+1))
def on_batch_end(self, batch, logs={}):
self.seen += logs.get('size', 0)
if self.seen % self.display == 0 and self.seen <= self.limit:
            draw_tree(sess, tree_layer.tree, img_rows, img_cols, savepath='assets/img/sample/{:07}.png'.format(self.seen))
image_saver = ModelImageSaver(1000, 250000)
# save image after each 1000th training example
# save max 250 images (corresponds to first 5 training epochs)
model.fit(x=x_train_flat, y=y_train_soft, validation_data=(x_valid_flat, y_valid),
batch_size=batch_size, epochs=40, callbacks=[image_saver]);
```
#### Compiling snapshots into animation
**Note**: converting captured series of PNG images into a GIF animation with `makegif.sh` requires `bash` environment with `convert` CLI tool available.
##### Epoch-wise compilation
```
!./makegif.sh epoch
```

##### Sample-wise compilation
```
!./makegif.sh sample
```

## Elaborating

By now, you should know what's coming...
```
tree_depth = 5
sess = brand_new_tfsession()
input_layer = Input(shape=(n_features,))
tree_layer = SoftDecisionTree(tree_depth,
n_features,
n_classes,
penalty_strength=penalty_strength,
penalty_decay=penalty_decay,
inv_temp=inv_temp,
ema_win_size=ema_win_size,
learning_rate=learning_rate)
output_layer = tree_layer(input_layer)
loss = tree_layer.get_tree_loss()
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='sgd', loss=loss)
tree_layer.initialize_variables(sess, x_train_flat, batch_size)
model.fit(x=x_train_flat, y=y_train_soft, validation_data=(x_valid_flat, y_valid),
          batch_size=batch_size, epochs=3);
# os.mkdir('assets/depth-{}'.format(tree_depth))
# tree.save_variables(sess, 'assets/depth-{}/tree-model'.format(tree_depth))
draw_tree(sess, tree_layer.tree, img_rows, img_cols)
```
Sorry, but anything deeper than this was not as visually appealing, and it would take much longer to train to a reasonable performance that would motivate examination.
# Final word
If you're reading this, I believe you are interested in this implementation, so please don't hesitate to **try it yourself** :)
* tune hyperparameters of the tree model
* try out different depths and penalty parameters (strength, decay)
* implement dynamic inverse temperature ($\beta$), scheduled as a function of training step / epoch
* try out different dataset, the approach is generic enough!
If you get any interesting results with this implementation, feel free to share them as an [issue](https://github.com/lmartak/distill-nn-tree/issues). Also feel free to improve this repo by submitting a [PR](https://github.com/lmartak/distill-nn-tree/pulls) or just making your own [fork](https://github.com/lmartak/distill-nn-tree/network/members).
If you feel adventurous, you could try:
* improve `draw_tree`'s correlation mode by normalizing the shade of gray around fixed-`0` color shade
* add similar notebook with whole training, distillation & evaluation lifecycle on different dataset (e.g. `cifar-10.ipynb` for [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset)
* This would probably require colorful masks and some experimenting with their normalization for the purposes of visualization, but could be fun!
# ProjectQ First Program
This exercise is based on the ProjectQ compiler tutorial. See https://github.com/ProjectQ-Framework/ProjectQ/blob/develop/examples/compiler_tutorial.ipynb for the original version.
Please check out [ProjectQ paper](http://arxiv.org/abs/1612.08091) for an introduction to the basic concepts behind this compiler.
This exercise will create the program for [superdense coding](https://en.wikipedia.org/wiki/Superdense_coding).
## Load the modules
* projectq, includes the main functionalities
* projectq.backends, includes the backends for the execution of the program. Here you will load the Simulator, which simulates the final gate sequence generated by the compilers.
* projectq.ops, includes the main defined operations, such as common quantum gates (H, X, etc.) or full quantum subroutines, such as QFT.
```
import projectq
from projectq.backends import Simulator
from projectq.ops import CNOT, X, Y, Z, H, Measure, All
```
**The first step is to create an Engine. The syntax to create this object using [MainEngine](http://projectq.readthedocs.io/en/latest/projectq.cengines.html#projectq.cengines.MainEngine) is:**
***
MainEngine(backend, engine_list, setups, verbose)
***
**In this case, the selected backend is [Simulator](http://projectq.readthedocs.io/en/latest/projectq.backends.html#projectq.backends.Simulator) which will simulate the final sequence of gates.**
```
# create the compiler and specify the backend:
eng = projectq.MainEngine(backend=Simulator())
```
**On this Engine, you must first allocate space for the qubits. You will allocate a register with two qubits.**
```
qureg = eng.allocate_qureg(2)
```
**First, you (Bob) must create the [Bell state](https://en.wikipedia.org/wiki/Bell_state):**
$$|\psi\rangle = \frac{1}{\sqrt{2}} (|00\rangle+|11\rangle)$$
**To do it, apply a Hadamard gate (H) to qubit 1 (`qureg[1]`) and, afterwards, a CNOT gate on qubit 0 (`qureg[0]`) using qubit 1 as the control.**
**To apply operations, ProjectQ uses the syntax:**
***
Operation | registers
***
**In the case of CNOT, the first qubit is the control qubit and the second is the target qubit.**
```
H | qureg[1]
CNOT | (qureg[1],qureg[0])
```
In ProjectQ, nothing is computed until you flush the set of gates. At any time, because you are using the simulator, you can inspect the state of the quantum register using the backend's `cheat` operation. The first part shows how the qubits have been mapped, and the second the current quantum state.
```
eng.flush()
eng.backend.cheat()
```
**Now, after you (Bob) send qubit 1 to Alice, she applies one gate to encode her two-bit message. The agreed protocol is:**
* 00, I
* 01, X
* 10, Z
* 11, Y
**Select one option for Alice!**
```
X | qureg[1]
```
**Now, Alice sends her qubit to Bob, who uncomputes the entanglement (applying the inverse gates in reverse order; CNOT and H are their own inverses).**
```
CNOT | (qureg[1],qureg[0])
H | qureg[1]
```
**And now, measure the results. In ProjectQ, to get the results you must first flush the program content so the compilers and backends do their work; in this case, the Simulator.**
```
All(Measure) | qureg
eng.flush()
print("Message from Alice: {}{}".format(int(qureg[1]),int(qureg[0])))
```
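As a sanity check independent of ProjectQ, the whole protocol can be simulated with plain NumPy state vectors (qubit 0 is taken as the least-significant bit here; this is an illustrative sketch, not ProjectQ internals):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
# CNOT with qubit 1 as control and qubit 0 as target, in the |b1 b0> basis
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def superdense(alice_gate):
    psi = np.array([1, 0, 0, 0], dtype=complex)   # |00>
    psi = CNOT @ (np.kron(H, I2) @ psi)           # Bob prepares the Bell state
    psi = np.kron(alice_gate, I2) @ psi           # Alice encodes on qubit 1
    psi = np.kron(H, I2) @ (CNOT @ psi)           # Bob uncomputes the entanglement
    outcome = int(np.argmax(np.abs(psi) ** 2))    # final state is a basis state
    return outcome >> 1, outcome & 1              # (qubit 1, qubit 0)

for gate, bits in [(I2, (0, 0)), (X, (0, 1)), (Z, (1, 0)), (Y, (1, 1))]:
    assert superdense(gate) == bits               # matches the agreed protocol
```

All four messages decode correctly, mirroring what the ProjectQ program prints for each of Alice's choices.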
**You can explore the sequence of engines that have been applied before the Simulator.**
```
engines=eng
while engines.next_engine is not None:
print("Engine {}".format(engines.next_engine.__class__.__name__))
engines=engines.next_engine
```
# Congratulations!!! You have made your first Quantum Program with ProjectQ
# Core Pressures and Mass Flux
We can additionally find higher level pressure drops across the system. We will start with a specific steam generator with inputs given below.
```
import NuclearTools.MassFlux as mf
import pint
U = pint.UnitRegistry()
obj = mf.steam_generator(
m = 36*10**6 * U.lb/U.hr,
T_hl = (620 + 459.67) * U.degR,
T_cl = (560 + 459.67) * U.degR,
A_ht = 79800 * U.foot**2,
n_tubes = 6633,
D = .6875 * U.inch,
wall_th = .04 * U.inch,
L = 30.64 * U.foot,
radius_max = 53.25 * U.inch,
radius_min = 2.25 * U.inch,
plate_th = 21.2 * U.inch,
inlet_k = 1.5,
exit_k = 1.0,
eq_long = 55,
eq_short = 90,
U = U)
print('The total pressure loss is:', obj.total_dp)
print('')
print('The friction pressure loss is:', obj.dP_loss.to(U.psi))
print('The exit pressure loss is:', obj.dP_exit.to(U.psi))
print('The entrance pressure loss is:', obj.dP_plate.to(U.psi))
```
Above we have all the pressure drops across the system. Since this is a U-Tube SG, we can also see the difference in the short-leg and long-leg calculations.
```
print('The average length velocity is:', obj.v_avg.to(U.foot/U.s))
print('The long-leg length velocity is:', obj.v_long.to(U.foot/U.s))
print('The short-leg length velocity is:', obj.v_short.to(U.foot/U.s))
```
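For orientation, friction and form losses of the kind printed above are conventionally combined in the Darcy-Weisbach form ΔP = (f·L/D + Σk)·ρv²/2. The sketch below is generic hydraulics with hypothetical numbers, not NuclearTools' internals:

```python
def pressure_drop(f, L, D, k_sum, rho, v):
    """Friction plus form-loss pressure drop: (f*L/D + sum(k)) * rho * v**2 / 2."""
    return (f * L / D + k_sum) * rho * v ** 2 / 2.0

# hypothetical SI-unit inputs: 30 m of 0.02 m tubing, f = 0.02, total k = 2.5,
# water (rho = 1000 kg/m^3) at 3 m/s
dp = pressure_drop(f=0.02, L=30.0, D=0.02, k_sum=2.5, rho=1000.0, v=3.0)
print(dp, 'Pa')  # 146250.0 Pa
```

The entrance, exit, and plate losses above correspond to individual `k` terms, while the friction term scales with tube length over diameter.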
Now we will switch over to the full core calculations
```
obj2 = mf.core_pressure(
pitch = .496 * U.inch,
D_clad = .374 * U.inch,
n_rods = 55777,
height = 144 * U.inch,
pressure = 2250 * U.psi,
n_grids = 8,
k_grid = 0.5,
core_height = 150 * U.inch,
k_core_in = 1.5,
k_core_out = 1.5,
v_ID = 173 * U.inch,
b_OD = 157.6 * U.inch,
L_d = 21 * U.foot,
k_d = 4.5,
L_hl = 20 * U.foot,
D_hl = 2.42 * U.foot,
HL_LD = 10,
k_hl_in = 1.5,
k_hl_out = 1.0,
k_sg_in = 1.5,
k_sg_out = 1.0,
SG_LD = 90,
D_sg = .6875 * U.inch,
SG_th = .04 * U.inch,
n_tubes = 6633,
A_total = 79800 * U.foot**2,
L_cl = 40 * U.foot,
D_cl = 2.29 * U.foot,
k_cl_in = 1.5,
k_cl_out = 1.0,
CL_LD = 50,
T_in = (560+459.67) * U.degR,
T_out = (620+459.67) * U.degR,
m = 144*10**6 * U.lb/U.hour,
U = U,
loops = 4)
print('The pressure change in the core:', obj2.P_core(obj2.m).to(U.psi))
print('The pressure change in the downcomer:', obj2.P_downcomer(obj2.m).to(U.psi))
print('The pressure change in the hot leg:', obj2.P_hot_leg(obj2.m).to(U.psi))
print('The pressure change in the steam generator:', obj2.P_sg(obj2.m).to(U.psi))
print('The pressure change in the cold leg:', obj2.P_cold_leg(obj2.m).to(U.psi))
print('The total pressure drop is:', obj2.P_total.to(U.psi))
print('The needed pump horsepower is:', obj2.work.to(U.hp))
```
# Generate trajectories
```
import os
from datetime import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy.polynomial import legendre
from scipy.linalg import block_diag
from pyrotor.constraints import is_in_constraints
from pyrotor.projection import trajectory_to_coef, coef_to_trajectory
from pyrotor.data_analysis import compute_covariance
from pyrotor.linear_conditions import get_endpoints_matrix
```
## Define functions
```
def order_of_magnitude(x):
"""
Find order of magnitude of each component of an array.
"""
alpha = np.floor(np.log10(np.abs(x)))
return np.nan_to_num(alpha)
def projection_kernel(basis_dimension, endpoints):
"""
Compute projector onto the kernel of the matrix phi describing endpoints conditions.
"""
# Build endpoints conditions matrix
phi = get_endpoints_matrix(basis_dimension, endpoints)
# Compute SVD
_, S, V = np.linalg.svd(phi, full_matrices=True)
# Find singular vectors in kernel
indices_kernel = np.where(S == 0)
if len(indices_kernel[0]) > 0:
first_index = indices_kernel[0][0]
else:
first_index = len(S)
# Compute projector
V = V.T
P_kerphi = np.dot(V[:,first_index:], V[:,first_index:].T)
return P_kerphi
```
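To see why projecting the noise with `P_kerphi` preserves the endpoint conditions, here is a self-contained check of the same SVD construction on an arbitrary matrix (independent of pyrotor; the tolerance on the singular values is an addition for floating-point safety):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=(2, 6))  # stand-in for the endpoints conditions matrix

# Same construction as projection_kernel above: the right-singular vectors
# associated with (numerically) zero singular values span ker(phi).
_, S, Vt = np.linalg.svd(phi, full_matrices=True)
first_index = int(np.sum(S > 1e-12))
V = Vt.T
P_kerphi = V[:, first_index:] @ V[:, first_index:].T

assert np.allclose(phi @ P_kerphi, 0)              # projected vectors satisfy phi v = 0
assert np.allclose(P_kerphi @ P_kerphi, P_kerphi)  # orthogonal projector: P^2 = P
```

Any noise vector multiplied by `P_kerphi` therefore lies in the kernel of `phi`, so adding it to the reference coefficients cannot move the trajectory endpoints.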
## Initialise generation
```
example = 1
```
### Define constraints
```
# x1 > 0
def f1(data):
x1 = data["x1"].values
return x1
# x1 < 1
def f2(data):
x1 = data["x1"].values
return 1 - x1
# x2 > 0
def f3(data):
x2 = data["x2"].values
return x2
# x2 < 1
def f4(data):
x2 = data["x2"].values
return 1 - x2
# x2 > f(x1)
def f5(data):
x1 = data["x1"].values
x2 = data["x2"].values
return x2 - 150/19 * (1-x1)**3 + 225/19 * (1-x1)**2 - 100/19 * (1-x1) + 79/190 - .05
constraints = [f1, f2, f3, f4]
if example == 2:
constraints.append(f5)
```
### Define initial and final states
```
if example == 1:
endpoints = {'x1': {'start': - .75 * np.sqrt(3)/2 + 1,
'end': - .75 * 1/2 + 1,
'delta': .01},
'x2': {'start': - .75 * 1/2 + 1,
'end': - .75 * np.sqrt(3)/2 + 1,
'delta': .01}}
elif example == 2:
endpoints = {'x1': {'start': .1,
'end': .9,
'delta': 0},
'x2': {'start': .9,
'end': .2,
'delta': 0}}
```
### Define independent variable (time)
```
if example == 1:
independent_variable = {'start': -5 * np.pi / 6,
'end': -4 * np.pi / 6,
'frequency': .005}
elif example == 2:
independent_variable = {'start': .1,
'end': .9,
'frequency': .01}
# Compute number of evaluation points
delta_time = independent_variable['end'] - independent_variable['start']
delta_time /= independent_variable['frequency']
independent_variable['points_nb'] = int(delta_time) + 1
```
### Define reference trajectory
```
# First component
def y1(t):
if example == 1:
return .75 * np.cos(t) + 1
elif example == 2:
if .1 <= t < .5:
y1 = .75 * (t - .1) + .1
elif .5 <= t <= .9:
y1 = 1.25 * (t - .9) + .9
return y1
# Second component
def y2(t):
if example == 1:
return .75 * np.sin(t) + 1
elif example == 2:
if .1 <= t < .3:
y2 = -3 * t + 1.2
elif .3 <= t < .7:
y2 = .25 * t + .225
elif .7 <= t <= .9:
y2 = -t + 1.1
return y2
# Create dataframe
y = pd.DataFrame()
time = np.linspace(independent_variable['start'],
independent_variable['end'],
independent_variable['points_nb'])
y['x1'] = np.array([y1(t) for t in time])
y['x2'] = np.array([y2(t) for t in time])
```
### Plot to visualise reference trajectory and constraints
```
X = np.linspace(0, 1, 101)
constraint_f5 = np.array([150/19 * (1-x)**3 - 225/19 * (1-x)**2 + 100/19 * (1-x) - 79/190 for x in X])
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(y['x1'], y['x2'], label='Initial trajectory', color='b')
if example == 2:
ax.fill_between(X, 0, constraint_f5, color='r', alpha=.5, label='Forbidden area')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_xlim(left=0, right=1)
ax.set_ylim(bottom=0, top=1)
ax.legend()
plt.tight_layout()
```
### Define functional basis and project reference trajectory
```
# Basis name
basis = 'legendre'
# Dimension for each variable
if example == 1:
basis_dimension = {'x1': 5,
'x2': 5}
elif example == 2:
basis_dimension = {'x1': 4,
'x2': 6}
# Project
c = trajectory_to_coef(y, basis, basis_dimension)
```
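Under the hood, projecting a sampled trajectory onto a Legendre basis amounts to a least-squares fit, which NumPy can do directly (an illustrative sketch, not pyrotor's exact `trajectory_to_coef`):

```python
import numpy as np
from numpy.polynomial import legendre

t = np.linspace(-1, 1, 201)
signal = 0.5 + 0.3 * t - 0.2 * t ** 2     # a toy trajectory component

coef = legendre.legfit(t, signal, deg=4)  # least-squares projection onto P_0..P_4
reconstructed = legendre.legval(t, coef)  # evaluate the truncated expansion

# a degree-2 polynomial is represented exactly by 5 Legendre coefficients
assert np.allclose(reconstructed, signal, atol=1e-10)
```

For the circle arc of example 1, a dimension of 5 per variable gives a very accurate (though no longer exact) representation.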
### Compute magnitude of each coefficient and add up small perturbations
```
# Compute magnitude of each coefficient
magnitude = order_of_magnitude(c)
# Add Gaussian noise
noise = np.random.normal(0, 1, len(c))
magnitude += noise.astype(int)
```
## Generate trajectories
### Compute projector over phi kernel to preserve endpoints conditions
```
P_kerphi = projection_kernel(basis_dimension, endpoints)
```
### Generate new trajectories via perturbation
```
# Choose number of flights to generate
I = 1000
# Strength of noise
alpha = .2
# Generate
coefs_reference = []
for i in range(I):
# Generate Gaussian noise depending on order of magnitude of coefficients
noise = np.random.normal(0, alpha, len(c)) * np.float_power(10, magnitude)
# Project noise onto kernel of phi
noise = np.dot(P_kerphi, noise)
# Perturb the reference coefficients
coef_reference = c + noise
coefs_reference.append(coef_reference)
```
### Build trajectories
```
trajs_reference = []
points_nb = len(y)
for i in range(I):
yi = coef_to_trajectory(coefs_reference[i], points_nb, 'legendre', basis_dimension)
trajs_reference.append(yi)
```
### Check constraints and keep acceptable generated trajectories
```
trajs_acceptable = []
cost_by_time = np.zeros(points_nb)
for i in range(I):
boolean = is_in_constraints(trajs_reference[i], constraints, cost_by_time)
if boolean:
trajs_acceptable.append(trajs_reference[i].drop(columns='cost'))
trajs_acceptable_nb = len(trajs_acceptable)
print('Number of acceptable trajectories = ', trajs_acceptable_nb)
```
### Plot
```
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(y['x1'], y['x2'], label='Initial trajectory', color='b')
for i in range(trajs_acceptable_nb):
ax.plot(trajs_acceptable[i]['x1'], trajs_acceptable[i]['x2'], label='_nolegend_', linestyle='--')
if example == 2:
ax.fill_between(X, 0, constraint_f5, color='r', alpha=.5, label='Forbidden area')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_xlim(left=0, right=1)
ax.set_ylim(bottom=0, top=1)
ax.legend()
plt.tight_layout()
plt.savefig('fig.svg')
```
### Export to csv files
```
# Create folder
now = datetime.now()
dt_string = now.strftime("%d_%m_%Y_%H_%M_%S")
path = 'generated_trajectories_' + dt_string
os.mkdir(path)
# Save generated trajectories
for i in range(trajs_acceptable_nb):
trajs_acceptable[i].to_csv(path + '/trajectory_' + str(i) + '.csv')
```
This notebook is part of the `deepcell-tf` documentation: https://deepcell.readthedocs.io/.
# Training a segmentation model
`deepcell-tf` leverages [Jupyter Notebooks](https://jupyter.org) in order to train models. Example notebooks are available for most model architectures in the [notebooks folder](https://github.com/vanvalenlab/deepcell-tf/tree/master/notebooks). Most notebooks are structured similarly to this example and thus this notebook serves as a core reference for the deepcell approach to model training.
```
import os
import errno
import numpy as np
import deepcell
```
## Load the data
### Download the data from `deepcell.datasets`
`deepcell.datasets` provides access to a set of annotated live-cell imaging datasets which can be used for training cell segmentation and tracking models.
All dataset objects share the `load_data()` method, which allows the user to specify the name of the file (`path`), the fraction of data reserved for testing (`test_size`) and a `seed` which is used to generate the random train-test split.
Metadata associated with the dataset can be accessed through the `metadata` attribute.
```
# Download the data (saves to ~/.keras/datasets)
filename = 'HeLa_S3.npz'
test_size = 0.1 # % of data saved as test
seed = 0 # seed for random train-test split
(X_train, y_train), (X_test, y_test) = deepcell.datasets.hela_s3.load_data(
filename, test_size=test_size, seed=seed)
```
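The seeded train/test split performed by `load_data` can be sketched with NumPy alone (a simplified stand-in, not deepcell's exact implementation):

```python
import numpy as np

def seeded_split(X, y, test_size=0.1, seed=0):
    """Reproducible random train/test split (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
(X_tr, y_tr), (X_te, y_te) = seeded_split(X, y, test_size=0.2, seed=0)
assert X_tr.shape == (8, 2) and X_te.shape == (2, 2)
assert set(y_tr) | set(y_te) == set(range(10))   # no sample lost or duplicated
```

Fixing the `seed` guarantees the same partition on every run, which keeps the test set untouched across training sessions.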
The `PanopticNet` models require square inputs. Reshape the data to meet the model requirements.
```
from deepcell.utils.data_utils import reshape_matrix
size = 128
X_train, y_train = reshape_matrix(X_train, y_train, reshape_size=size)
X_test, y_test = reshape_matrix(X_test, y_test, reshape_size=size)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
```
## Set up filepath constants
```
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
```
## Create the PanopticNet Model
Here we instantiate a `PanopticNet` model from `deepcell.model_zoo` using 3 semantic heads:
inner distance (1 class),
outer distance (1 class),
foreground/background distance (2 classes)
```
from deepcell.model_zoo.panopticnet import PanopticNet
model = PanopticNet(
backbone='resnet50',
input_shape=X_train.shape[1:],
norm_method='std',
num_semantic_heads=3,
num_semantic_classes=[1, 1, 2], # inner distance, outer distance, fgbg
location=True, # should always be true
include_top=True)
```
## Prepare for training
### Set up training parameters.
There are a number of tunable hyperparameters necessary for training deep learning models:
**model_name**: Incorporated into any files generated during the training process.
**backbone**: The majority of DeepCell models support a variety of backbone choices specified in the "backbone" parameter. Backbones are provided through [keras_applications](https://github.com/keras-team/keras-applications) and can be instantiated with weights that are pretrained on ImageNet.
**n_epoch**: The number of complete passes through the training dataset.
**lr**: The learning rate determines the speed at which the model learns. Specifically, it controls the relative size of the updates to model values after each batch.
**optimizer**: The TensorFlow module [tf.keras.optimizers](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers) offers optimizers with a variety of algorithm implementations. DeepCell typically uses the Adam or the SGD optimizers.
**lr_sched**: A learning rate scheduler allows the learning rate to adapt over the course of model training. Typically a larger learning rate is preferred during the start of the training process, while a small learning rate allows for fine-tuning during the end of training.
**batch_size**: The batch size determines the number of samples that are processed before the model is updated. The value must be greater than one and less than or equal to the number of samples in the training dataset.
```
from tensorflow.keras.optimizers import SGD, Adam
from deepcell.utils.train_utils import rate_scheduler
model_name = 'watershed_centroid_nuclear_general_std'
n_epoch = 5 # Number of training epochs
test_size = .20 # % of data saved as test
norm_method = 'whole_image' # data normalization
optimizer = Adam(lr=1e-5, clipnorm=0.001)
lr_sched = rate_scheduler(lr=1e-5, decay=0.99)
batch_size = 1
min_objects = 3 # throw out images with fewer than this many objects
```
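For reference, the `rate_scheduler` helper used above implements an epoch-indexed decay. A minimal sketch, assuming the common exponential form lr · decay^epoch (the real deepcell helper may differ in details):

```python
import math

def rate_scheduler(lr=1e-5, decay=0.99):
    """Return an epoch -> learning-rate schedule (assumed exponential decay)."""
    def schedule(epoch):
        return lr * math.pow(decay, epoch)
    return schedule

lr_sched = rate_scheduler(lr=1e-5, decay=0.99)
assert math.isclose(lr_sched(0), 1e-5)
assert lr_sched(10) < lr_sched(0)   # learning rate shrinks over epochs
```

The schedule function is later handed to a Keras `LearningRateScheduler` callback, which calls it once per epoch.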
### Create the DataGenerators
The `SemanticDataGenerator` can output any number of transformations for each image. These transformations are passed to `generator.flow()` as a list of transform names.
Here we use `"inner-distance"`, `"outer-distance"` and `"fgbg"` to correspond to the inner distance, outer distance, and foreground background semantic heads, respectively. Keyword arguments may also be passed to each transform as a `dict` of transform names to `kwargs`.
```
from deepcell import image_generators
from deepcell.utils import train_utils
transforms = ['inner-distance', 'outer-distance', 'fgbg']
transforms_kwargs = {'outer-distance': {'erosion_width': 0}}
# use augmentation for training but not validation
datagen = image_generators.SemanticDataGenerator(
rotation_range=180,
shear_range=0,
zoom_range=(0.75, 1.25),
horizontal_flip=True,
vertical_flip=True)
datagen_val = image_generators.SemanticDataGenerator(
rotation_range=0,
shear_range=0,
zoom_range=0,
horizontal_flip=False,
vertical_flip=False)
train_data = datagen.flow(
{'X': X_train, 'y': y_train},
seed=seed,
transforms=transforms,
transforms_kwargs=transforms_kwargs,
min_objects=min_objects,
batch_size=batch_size)
val_data = datagen_val.flow(
{'X': X_test, 'y': y_test},
seed=seed,
transforms=transforms,
transforms_kwargs=transforms_kwargs,
min_objects=min_objects,
batch_size=batch_size)
```
Visualize the data generator output.
```
from matplotlib import pyplot as plt
inputs, outputs = train_data.next()
img = inputs[0]
inner_distance = outputs[0]
outer_distance = outputs[1]
fgbg = outputs[2]
fig, axes = plt.subplots(1, 4, figsize=(15, 15))
axes[0].imshow(img[..., 0])
axes[0].set_title('Source Image')
axes[1].imshow(inner_distance[0, ..., 0])
axes[1].set_title('Inner Distance')
axes[2].imshow(outer_distance[0, ..., 0])
axes[2].set_title('Outer Distance')
axes[3].imshow(fgbg[0, ..., 0])
axes[3].set_title('Foreground/Background')
plt.show()
```
### Create a loss function for each semantic head
Each semantic head is trained with its own loss function: Mean Squared Error is used for regression-based heads, whereas `weighted_categorical_crossentropy` is used for classification heads.
The losses are saved as a dictionary and passed to `model.compile`.
```
# Create a dictionary of losses for each semantic head
from tensorflow.python.keras.losses import MSE
from deepcell import losses
def semantic_loss(n_classes):
def _semantic_loss(y_pred, y_true):
if n_classes > 1:
return 0.01 * losses.weighted_categorical_crossentropy(
y_pred, y_true, n_classes=n_classes)
return MSE(y_pred, y_true)
return _semantic_loss
loss = {}
# Give losses for all of the semantic heads
for layer in model.layers:
if layer.name.startswith('semantic_'):
n_classes = layer.output_shape[-1]
loss[layer.name] = semantic_loss(n_classes)
model.compile(loss=loss, optimizer=optimizer)
```
## Train the model
Call `fit_generator` on the compiled model, along with a default set of callbacks.
```
from deepcell.utils.train_utils import get_callbacks
from deepcell.utils.train_utils import count_gpus
model_path = os.path.join(MODEL_DIR, '{}.h5'.format(model_name))
loss_path = os.path.join(MODEL_DIR, '{}.npz'.format(model_name))
num_gpus = count_gpus()
print('Training on', num_gpus, 'GPUs.')
train_callbacks = get_callbacks(
model_path,
lr_sched=lr_sched,
tensorboard_log_dir=LOG_DIR,
save_weights_only=num_gpus >= 2,
monitor='val_loss',
verbose=1)
loss_history = model.fit_generator(
train_data,
steps_per_epoch=train_data.y.shape[0] // batch_size,
epochs=n_epoch,
validation_data=val_data,
validation_steps=val_data.y.shape[0] // batch_size,
callbacks=train_callbacks)
```
## Predict on test data
Use the trained model to predict on new data. First, create a new prediction model without the foreground background semantic head. While this head is very useful during training, the output is unused during prediction. By using `model.load_weights(path, by_name=True)`, the semantic head can be removed.
```
from deepcell.model_zoo.panopticnet import PanopticNet
prediction_model = PanopticNet(
backbone='resnet50',
input_shape=X_train.shape[1:],
norm_method='std',
num_semantic_heads=2,
num_semantic_classes=[1, 1], # inner distance, outer distance
location=True, # should always be true
include_top=True)
prediction_model.load_weights(model_path, by_name=True)
# make predictions on testing data
from timeit import default_timer
start = default_timer()
test_images = prediction_model.predict(X_test)
watershed_time = default_timer() - start
print('Watershed segmentation of shape', test_images[0].shape, 'in', watershed_time, 'seconds.')
import time
from matplotlib import pyplot as plt
import numpy as np
from skimage.feature import peak_local_max
from deepcell_toolbox.deep_watershed import deep_watershed
index = np.random.choice(X_test.shape[0])
print(index)
fig, axes = plt.subplots(1, 4, figsize=(20, 20))
masks = deep_watershed(
test_images,
min_distance=10,
detection_threshold=0.1,
distance_threshold=0.01,
exclude_border=False,
small_objects_threshold=0)
# calculated in the postprocessing above, but useful for visualizing
inner_distance = test_images[0]
outer_distance = test_images[1]
coords = peak_local_max(
inner_distance[index],
min_distance=10,
threshold_abs=0.1,
exclude_border=False)
# raw image with centroid
axes[0].imshow(X_test[index, ..., 0])
axes[0].scatter(coords[..., 1], coords[..., 0],
color='r', marker='.', s=10)
axes[1].imshow(inner_distance[index, ..., 0], cmap='jet')
axes[2].imshow(outer_distance[index, ..., 0], cmap='jet')
axes[3].imshow(masks[index, ...], cmap='jet')
plt.show()
```
## Evaluate results
The `deepcell.metrics` package is used to measure advanced metrics for instance segmentation predictions.
```
from deepcell_toolbox.metrics import Metrics
from skimage.morphology import watershed, remove_small_objects
from skimage.segmentation import clear_border
outputs = model.predict(X_test)
y_pred = []
for i in range(outputs[0].shape[0]):
mask = deep_watershed(
[t[[i]] for t in outputs],
min_distance=10,
detection_threshold=0.1,
distance_threshold=0.01,
exclude_border=False,
small_objects_threshold=0)
y_pred.append(mask[0])
y_pred = np.stack(y_pred, axis=0)
y_pred = np.expand_dims(y_pred, axis=-1)
y_true = y_test.copy()
print('DeepWatershed - Remove no pixels')
m = Metrics('DeepWatershed - Remove no pixels', seg=False)
m.calc_object_stats(y_true, y_pred)
print('\n')
for i in range(y_pred.shape[0]):
y_pred[i] = remove_small_objects(y_pred[i].astype(int), min_size=100)
y_true[i] = remove_small_objects(y_true[i].astype(int), min_size=100)
print('DeepWatershed - Remove objects < 100 pixels')
m = Metrics('DeepWatershed - Remove 100 pixels', seg=False)
m.calc_object_stats(y_true, y_pred)
print('\n')
```
# 1. `LightningModule`
A LightningModule organizes your PyTorch code into 6 sections:
- Computations (init).
- Train Loop (training_step)
- Validation Loop (validation_step)
- Test Loop (test_step)
- Prediction Loop (predict_step)
- Optimizers and LR Schedulers (configure_optimizers)
The LightningModule has many convenience methods, but the core ones you need to know about are:
|Name|Description|
|--|--|
|init|Define computations here|
|forward|Use for inference only (separate from training_step)|
|training_step|the complete training loop|
|validation_step|the complete validation loop|
|test_step|the complete test loop|
|predict_step|the complete prediction loop|
|configure_optimizers|define optimizers and LR schedulers|
```
import torch
from torch.nn import functional as F
from torch import nn
from pytorch_lightning.core.lightning import LightningModule
```
## 1.1 Define the basic model
```
class LitMNIST(LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = nn.Linear(28 * 28, 128)
self.layer_2 = nn.Linear(128, 256)
self.layer_3 = nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, height, width = x.size()
x = x.view(batch_size, -1)
x = F.relu(self.layer_1(x))
x = F.relu(self.layer_2(x))
x = F.log_softmax(self.layer_3(x), dim=1)
return x
net = LitMNIST()
x = torch.randn(1, 1, 28, 28)
out = net(x)
print(out.shape)
```
## 1.2 Add `training_step`
```
class LitMNIST(LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = nn.Linear(28 * 28, 128)
self.layer_2 = nn.Linear(128, 256)
self.layer_3 = nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, height, width = x.size()
x = x.view(batch_size, -1)
x = F.relu(self.layer_1(x))
x = F.relu(self.layer_2(x))
x = F.log_softmax(self.layer_3(x), dim=1)
return x
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return loss
```
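Note that the `log_softmax` in `forward` combined with `F.nll_loss` in `training_step` is just cross-entropy. The identity can be checked with NumPy (an illustrative sketch, independent of PyTorch):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()                   # subtract max for numerical stability
    return z - np.log(np.exp(z).sum())

def nll(log_probs, target):
    return -log_probs[target]         # negative log-likelihood of the true class

logits = np.array([2.0, 0.5, -1.0])
loss = nll(log_softmax(logits), target=0)

# identical to cross-entropy computed directly from probabilities
expected = -np.log(np.exp(logits[0]) / np.exp(logits).sum())
assert np.isclose(loss, expected)
```

Splitting the computation this way (log-probabilities in `forward`, NLL in the loss) keeps `forward` usable for inference on its own.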
## 1.3 Add `configure_optimizers`
```
from torch.optim import Adam
class LitMNIST(LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = nn.Linear(28 * 28, 128)
self.layer_2 = nn.Linear(128, 256)
self.layer_3 = nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, height, width = x.size()
x = x.view(batch_size, -1)
x = F.relu(self.layer_1(x))
x = F.relu(self.layer_2(x))
x = F.log_softmax(self.layer_3(x), dim=1)
return x
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return loss
def configure_optimizers(self):
# Because LightningModule is a subclass of nn.Module,
# the trainable parameters are directly accessible via self.parameters()
return Adam(self.parameters(), lr=1e-3)
```
### Several cases:
#### 1⃣️. most cases. no learning rate scheduler
```python
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3)
```
#### 2⃣️. multiple optimizer case (e.g.: GAN)
```python
def configure_optimizers(self):
gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
return gen_opt, dis_opt
```
#### 3⃣️. example with learning rate schedulers
```python
def configure_optimizers(self):
gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
dis_sch = CosineAnnealing(dis_opt, T_max=10)
return [gen_opt, dis_opt], [dis_sch]
```
#### 4⃣️. example with step-based learning rate schedulers and each optimizer has its own scheduler
```python
def configure_optimizers(self):
gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
gen_sch = {
'scheduler': ExponentialLR(gen_opt, 0.99),
'interval': 'step' # called after each training step
}
dis_sch = CosineAnnealing(dis_opt, T_max=10) # called every epoch
return [gen_opt, dis_opt], [gen_sch, dis_sch]
```
#### 5⃣️. example with optimizer frequencies. see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1 https://arxiv.org/abs/1704.00028
```python
def configure_optimizers(self):
gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
n_critic = 5
return (
{'optimizer': dis_opt, 'frequency': n_critic},
{'optimizer': gen_opt, 'frequency': 1}
)
```
# 2. Data
## 2.1 The PyTorch DataLoader way
```
import os
import sys
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
from torchvision import datasets, transforms
from pytorch_lightning import Trainer
# transforms
# prepare transforms standard to MNIST
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
# data
from pathlib import Path
path_root = os.getcwd()
mnist_train = MNIST(os.path.join(str(path_root),'dataset'), train=True, download=True, transform=transform)
mnist_train = DataLoader(mnist_train, batch_size=64)
```
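The `transforms.Normalize((0.1307,), (0.3081,))` call above standardizes each pixel with MNIST's dataset mean and standard deviation; in plain NumPy the same operation is simply (an illustrative sketch):

```python
import numpy as np

def normalize(x, mean=0.1307, std=0.3081):
    """Per-channel standardization, as done by transforms.Normalize."""
    return (x - mean) / std

x = np.array([0.0, 0.1307, 1.0])
z = normalize(x)
assert abs(z[1]) < 1e-12                          # the dataset mean maps to 0
assert np.isclose(z[2], (1.0 - 0.1307) / 0.3081)
```

Standardized inputs keep activations in a well-scaled range, which generally speeds up training.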
Pass in the dataloaders to the .fit() function directly
```
model = LitMNIST()
trainer = Trainer()
trainer.fit(model, mnist_train)
```
## 2.2
```
import warnings
warnings.filterwarnings("ignore")
import os
import jieba
import torch
import pickle
import torch.nn as nn
import torch.optim as optim
import pandas as pd
from ark_nlp.model.tm.bert import Bert
from ark_nlp.model.tm.bert import BertConfig
from ark_nlp.model.tm.bert import Dataset
from ark_nlp.model.tm.bert import Task
from ark_nlp.model.tm.bert import get_default_model_optimizer
from ark_nlp.model.tm.bert import Tokenizer
```
### 1. Data loading and preprocessing
#### 1.1 Load the data
```
train_data_df = pd.read_json('../data/source_datasets/KUAKE-QTR/KUAKE-QTR_train.json')
train_data_df = (train_data_df
.rename(columns={'query': 'text_a', 'title': 'text_b'})
.loc[:,['text_a', 'text_b', 'label']])
dev_data_df = pd.read_json('../data/source_datasets/KUAKE-QTR/KUAKE-QTR_dev.json')
dev_data_df = (dev_data_df
.rename(columns={'query': 'text_a', 'title': 'text_b'})
.loc[:,['text_a', 'text_b', 'label']])
tm_train_dataset = Dataset(train_data_df)
tm_dev_dataset = Dataset(dev_data_df)
```
#### 1.2 Build the vocabulary and tokenizer
```
tokenizer = Tokenizer(vocab='nghuyong/ernie-1.0', max_seq_len=30)
```
#### 1.3 Convert tokens to IDs
```
tm_train_dataset.convert_to_ids(tokenizer)
tm_dev_dataset.convert_to_ids(tokenizer)
```
<br>
### 2. Model construction
#### 2.1 Model configuration
```
config = BertConfig.from_pretrained('nghuyong/ernie-1.0',
num_labels=len(tm_train_dataset.cat2id))
```
#### 2.2 Create the model
```
torch.cuda.empty_cache()
dl_module = Bert.from_pretrained('nghuyong/ernie-1.0',
config=config)
```
<br>
### 3. Task construction
#### 3.1 Task parameters and required components
```
# set the number of training epochs
num_epoches = 10
batch_size = 32
optimizer = get_default_model_optimizer(dl_module)
```
#### 3.2 Create the task
```
model = Task(dl_module, optimizer, 'ce', cuda_device=0)
```
#### 3.3 Training
```
model.fit(tm_train_dataset,
tm_dev_dataset,
lr=2e-5,
epochs=num_epoches,
batch_size=batch_size
)
```
<br>
### 4. Model validation and saving
```
import json
from ark_nlp.model.tm.bert import Predictor
tm_predictor_instance = Predictor(model.module, tokenizer, tm_train_dataset.cat2id)
test_df = pd.read_json('../data/source_datasets/KUAKE-QTR/KUAKE-QTR_test.json')
submit = []
for _id, _text_a, _text_b in zip(test_df['id'], test_df['query'], test_df['title']):
_predict = tm_predictor_instance.predict_one_sample([_text_a, _text_b])[0]
submit.append({
'id': _id,
'query': _text_a,
'title': _text_b,
'label': _predict
})
output_path = '../data/output_datasets/KUAKE-QTR_test.json'
with open(output_path,'w', encoding='utf-8') as f:
    f.write(json.dumps(submit, ensure_ascii=False))
```
```
%matplotlib inline
# import statements
import numpy as np
import matplotlib.pyplot as plt #for figures
from mpl_toolkits.basemap import Basemap #to render maps
import math
import json #to write dict with parameters
#import GrowYourIC
from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data
plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures
cm = plt.cm.get_cmap('viridis')
cm2 = plt.cm.get_cmap('winter')
print("==== Models ====")
age_ic_dim = 1e9 #in years
rICB_dim = 1221. #in km
velocity_center = [0., 100.]#center of the eastern hemisphere
center = [0,-80] #center of the western hemisphere
units = None #we give them already dimensionless parameters.
rICB = 1.
age_ic = 1.
#Slow translation
v_slow = 0.8
omega_slow = 1.57
exponent_slow = 1.
velocity_slow = geodyn_trg.translation_velocity(velocity_center, v_slow)
proxy_type = "growth rate"
proxy_name = "growth rate (km/Myears)"
proxy_lim = None
print("=== Model 1 : slow translation, no rotation ===")
SlowTranslation = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity_slow,
'exponent_growth': exponent_slow,
'omega': 0.,
'proxy_type': proxy_type,
'proxy_name': proxy_name,
'proxy_lim': proxy_lim})
SlowTranslation.set_parameters(parameters)
SlowTranslation.name = "Slow translation"
SlowTranslation.define_units()
print("=== Model 2 : slow translation, rotation ===")
SlowTranslation2 = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity_slow,
'exponent_growth': exponent_slow,
'omega': omega_slow,
'proxy_type': proxy_type,
'proxy_name': proxy_name,
'proxy_lim': proxy_lim})
SlowTranslation2.set_parameters(parameters)
SlowTranslation2.name = "Slow translation + rotation"
SlowTranslation2.define_units()
#Fast translation
v_fast = 10.3
omega_fast = 7.85
time_translation = rICB_dim*1e3/4e-10/(np.pi*1e7)
maxAge = 2.*time_translation/1e6
velocity_fast = geodyn_trg.translation_velocity(velocity_center, v_fast)
exponent_fast = 0.1
proxy_type = "age"
proxy_name = "age (Myears)"
proxy_lim = [0, maxAge]
print("=== Model 3 : fast translation, no rotation ===")
FastTranslation = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity_fast,
'exponent_growth': exponent_fast,
'omega': 0.,
'proxy_type': proxy_type,
'proxy_name': proxy_name,
'proxy_lim': proxy_lim})
FastTranslation.set_parameters(parameters)
FastTranslation.name = "Fast translation"
FastTranslation.define_units()
print("=== Model 4 : fast translation, rotation ===")
FastTranslation2 = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity_fast,
'exponent_growth': exponent_fast,
'omega': omega_fast,
'proxy_type': proxy_type,
'proxy_name': proxy_name,
'proxy_lim': proxy_lim})
FastTranslation2.set_parameters(parameters)
FastTranslation2.name = "Fast translation + rotation"
FastTranslation2.define_units()
npoints = 60 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingEquator(npoints, rICB = 1.)
data_set.method = "bt_point"
proxy = geodyn.evaluate_proxy(data_set, FastTranslation, proxy_type="age", verbose = False)
data_set.plot_c_vec(FastTranslation, proxy=proxy, cm=cm, nameproxy="age (Myears)")
proxy = geodyn.evaluate_proxy(data_set, FastTranslation2, proxy_type="age", verbose = False)
data_set.plot_c_vec(FastTranslation2, proxy=proxy, cm=cm, nameproxy="age (Myears)")
proxy = geodyn.evaluate_proxy(data_set, SlowTranslation, proxy_type="age", verbose = False)
data_set.plot_c_vec(SlowTranslation, proxy=proxy, cm=cm, nameproxy="age (Myears)")
proxy = geodyn.evaluate_proxy(data_set, SlowTranslation2, proxy_type="age", verbose = False)
data_set.plot_c_vec(SlowTranslation2, proxy=proxy, cm=cm, nameproxy="age (Myears)")
```
# Series Methods More
In this chapter, we cover several less common but still useful and important Series methods that you need to know in order to be fully capable of analyzing data with pandas.
* `agg` - Compute multiple aggregations at once
* `idxmax` and `idxmin` - Return the index of the max/min
* `diff` and `pct_change` - Find the difference/percent change from one value to the next
* `sample` - Randomly sample values in a Series
* `nsmallest`/`nlargest` - Return the top/bottom `n` values in a Series
* `replace` - Replace one or more values in a variety of ways
Let's begin by reading in the movie dataset and selecting the `imdb_score` Series.
```
import pandas as pd
movie = pd.read_csv('../data/movie.csv', index_col='title')
score = movie['imdb_score']
score.head()
```
## The `agg` method
The `agg` method allows you to compute several aggregations simultaneously. Provide it a list with the Series aggregation methods as strings. For instance, the following computes the minimum and maximum of a Series and returns the result as a Series as well.
```
score.agg(['min', 'max'])
```
You may provide any number of aggregation methods to the `agg` method. The `agg` method is similar to `describe`, but allows you to calculate just the aggregations you desire.
```
score.agg(['min', 'max', 'count', 'nunique'])
```
## Index of maximum and minimum
The `max` and `min` methods return the maximum and minimum values of a Series. Occasionally, we will want to know the index label for these maximum or minimum values, and we can find it with the `idxmax` and `idxmin` methods. Let's first find the maximum and minimum values.
```
score.max()
score.min()
```
To see the movies associated with these max and min values, call the `idxmin` and `idxmax` methods.
```
score.idxmax()
score.idxmin()
```
Let's verify these results by dropping any missing values and sorting the Series.
```
score_sorted = score.dropna().sort_values(ascending=False)
```
We can now output the first and last few values to verify.
```
score_sorted.head(3)
score_sorted.tail(3)
```
Both `idxmax` and `idxmin` always return a single index label. If two or more values share the maximum/minimum, then pandas returns the index label that appears first in the Series. Since one value is returned, `idxmax` and `idxmin` are considered aggregation methods.
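As a quick check of this tie-breaking behavior, here is a small, self-contained sketch (the values and labels are made up for illustration):

```python
import pandas as pd

# A small Series with a tie for the maximum value at labels 'a' and 'c'
s = pd.Series([9.1, 5.0, 9.1, 3.2], index=['a', 'b', 'c', 'd'])

print(s.idxmax())  # 'a' -- the first label with the maximum value
print(s.idxmin())  # 'd' -- the minimum is unique here
```

Even though 'c' also holds 9.1, only 'a' is returned because it comes first.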
## Differencing methods `diff` and `pct_change`
The `diff` method takes the difference between the current value and some other value. By default, the other value is the immediate preceding one. Since the first value has no previous value, its difference will be missing in the result. Let's read a small sample of Microsoft's stock dataset found in the stocks folder containing 10 trading days worth of information.
```
msft = pd.read_csv('../data/stocks/msft_sample.csv')
msft
```
Let's select the `adjusted_close` column as a Series and call the `diff` method on it. The difference between the second and first values is 2.57 and is now the new second value in the returned Series.
```
ac = msft['adjusted_close']
ac.diff()
```
It's possible to control which two values are subtracted. By default, the `periods` parameter is set to 1. Here, we change it to 3. The first possible difference happens between the fourth (139.68) and first (135.67) values resulting in 4.01. The first three values have no value three positions ahead of them, so they are now missing.
```
ac.diff(periods=3)
```
We can take the difference between the current value and a value further ahead by using negative integers. Here, we take the current value and subtract the second value following it.
```
ac.diff(-2)
```
The `pct_change` method works analogously but returns the percentage difference instead.
```
ac.pct_change()
ac.pct_change(-2)
```
## The `nlargest` and `nsmallest` methods
The `nlargest` and `nsmallest` methods are convenience methods to quickly return the largest or smallest `n` values in the Series in order. By default, they return 5 values. Use the parameter `n` to choose how many values to return. Here, we select the top 4 movies by score.
```
score.nlargest(n=4)
```
By default, `nlargest` and `nsmallest` return exactly `n` values even if there are ties. Let's produce a similar result by calling `sort_values` and returning the first five values. You'll notice that two movies are tied for the fourth highest score. By default, `nlargest` returns the first one.
```
score.sort_values(ascending=False).head()
```
If you'd like to keep the top `n` values and ties, set the `keep` parameter to 'all'. There is only one other movie with a value of 9.1, but if there were more, all of them would be returned here.
```
score.nlargest(n=4, keep='all')
```
The `nsmallest` method works analogously and returns the smallest `n` values.
```
score.nsmallest(n=3)
```
By default, the first tie is kept, but setting `keep` to 'last' returns the last occurrence of the nth ranked value.
```
score.nsmallest(n=3, keep='last')
```
## Randomly sample a Series
The `sample` method is great for randomly sampling the values in your Series. Set the `n` parameter of the `sample` method to an integer to return that many randomly selected values.
```
score.sample(n=5)
```
By default, the sampling is done without replacement, so there is no possibility of selecting the same piece of data twice. If you attempt to choose a sample larger than the number of values in the Series, you'll get an error.
```
score.sample(n=5000)
```
However, you can sample with replacement, meaning that you can get duplicate pieces of data, by setting the `replace` parameter to `True`.
```
score.sample(n=5000, replace=True).head()
```
You can also sample a fraction of the dataset by using the `frac` parameter. Here we take a random sample of 15% of the data.
```
score_sample = score.sample(frac=.15)
score_sample.head()
```
Let's verify that the sample is indeed 15% of the total length of the original.
```
len(score_sample)
len(score) * .15
```
## The `replace` method
The `replace` method helps you replace particular values in your Series with other values. There are a lot of options with the `replace` method to handle many different kinds of replacement. This section will only discuss basic replacements. Let's select the color column from the movie dataset as a Series.
```
color = movie['color']
color.head()
```
The simplest way to replace a value in the Series is to pass the `replace` method two arguments. The first is the value you'd like to replace and the second is the replacement value. Here, we replace the exact string 'Color' with 'Colour'.
```
color.replace('Color', 'Colour').head()
```
The `replace` method works with columns of all data types. Here we use the `score` Series to replace the value 7.1 with 999.
```
score.head()
score.replace(7.1, 999).head()
```
You might think you can replace specific words within strings, and you would be correct, but the way you do so takes more effort. Let's take a look at the `genres` column as a Series.
```
genres = movie['genres']
genres.head()
```
Let's say we are interested in replacing the string 'Adventure' with 'Adv' to shorten the length of each string in this column. The following won't work.
```
genres.replace('Adventure', 'Adv').head()
```
By default, the `replace` method works by matching the entire value in the Series. The genre must be exactly 'Adventure', without any other text surrounding it, for it to be replaced. It is possible to do this within-string replacement, but you'll need a lesson in regular expressions first. Setting the `regex` parameter to `True` will do the trick. The following is presented with some caution: you should not use the `regex` parameter until you understand the fundamentals of regular expressions, which are thoroughly covered in their own part of the course.
```
genres.replace('Adventure', 'Adv', regex=True).head()
```
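As an aside, the `str` accessor's own `replace` method performs within-string substitution directly, treating the pattern as a literal string when `regex=False`. A minimal sketch with made-up genre strings:

```python
import pandas as pd

genres = pd.Series(['Action|Adventure|Fantasy', 'Adventure|Drama'])

# str.replace substitutes inside each string rather than matching whole values
print(genres.str.replace('Adventure', 'Adv', regex=False).tolist())
# ['Action|Adv|Fantasy', 'Adv|Drama']
```

This avoids regular expressions entirely, at the cost of only supporting literal substring replacement.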
## Exercises
Read in the employee dataset by executing the cell below and use it for the following exercises.
```
emp = pd.read_csv('../data/employee.csv')
emp.head(3)
```
### Exercise 1
<span style="color:green; font-size:16px">Find the minimum, maximum, mean, median, and standard deviation of the salary column. Return the result as a Series.</span>
### Exercise 2
<span style="color:green; font-size:16px">Use the `idxmax` and `idxmin` methods to find the index of where the maximum and minimum salaries are located in the DataFrame. Then use the `loc` indexer to select both of those rows as a DataFrame.</span>
### Exercise 3
<span style="color:green; font-size:16px">Repeat exercise 2, but do so on the `imdb_score` column from the movie dataset.</span>
### Exercise 4
<span style="color:green; font-size:16px">The `idxmax` and `idxmin` methods are aggregations as they return a single value. Use the `agg` method to return the min/max `imdb_score` and the label for each score.</span>
### Exercise 5
<span style="color:green; font-size:16px">Read in 20 years of Microsoft stock data, setting the 'timestamp' column as the index. Find the top 5 largest one-day percentage gains in the `adjusted_close`.</span>
### Exercise 6
<span style="color:green; font-size:16px">Randomly sample the `actor1` column as a Series with replacement to select three values. Use random state 12345. Setting a random state ensures that the same random sample is chosen regardless of which machine or version of numpy is being used.</span>
### Exercise 7
<span style="color:green; font-size:16px">Select the title column from the employee dataset as a Series. Replace all occurrences of 'POLICE OFFICER' and 'SENIOR POLICE OFFICER' with 'POLICE'. You can use a list as the first argument passed to the `replace` method.</span>
```
# Load Data
import glob
import pandas as pd
def Carga_All_Files( ):
regexp='../data/covi*'
df = pd.DataFrame()
    # Iterate through the files matching the pattern
for my_file in glob.glob(regexp):
this_df = pd.read_csv(my_file)
for columna in [ 'PCR' , 'Antic.' ] :
if columna in this_df.columns : del this_df[columna]
this_df = this_df.rename(columns = {'Muertos':'Fallecidos','Hospit.' : 'Hospitalizados'})
this_df['Fecha'] = my_file
df = pd.concat([df,this_df])
return df
#df = Carga_All_Files( )
#nombre_comunidad = 'Madrid'
#df = df[(df['CCAA'] == nombre_comunidad)].sort_values(by='Fecha')
#df
! head -3 ../data/covi2504.csv ../data/covi2404.csv
def Get_Comunidades_List( ):
return Carga_All_Files( )['CCAA'].unique()
#Get_Comunidades_List()
def Preprocesado():
df = Carga_All_Files( )
    # Format the date
df['Fecha'].replace({
'../data/covi': '2020-',
'.csv' : ''}, inplace=True, regex=True)
df['Fecha'] = pd.to_datetime(df['Fecha'], format='%Y-%d%m')
#
return df.sort_values(by='Fecha')
import numpy as np
def Enrich_Columns(comunidad):
del comunidad['ID']
del comunidad['IA']
del comunidad['Nuevos']
if 'Fecha' in comunidad.columns :
comunidad.set_index('Fecha', inplace=True)
    # Daily death figures, as totals and as a fraction
comunidad['Fallecidos hoy absoluto'] = comunidad['Fallecidos'] - comunidad['Fallecidos'].shift(1)
comunidad['Fallecidos hoy porcentaje'] = comunidad['Fallecidos hoy absoluto'] / comunidad['Fallecidos']
comunidad['Fallecidos hoy variacion respecto ayer'] = comunidad['Fallecidos hoy absoluto'] - comunidad['Fallecidos hoy absoluto'].shift(1)
    # Daily case figures, as totals and as a fraction
comunidad['Casos hoy absoluto'] = comunidad['Casos'] - comunidad['Casos'].shift(1)
comunidad['Casos hoy porcentaje'] = comunidad['Casos hoy absoluto'] / comunidad['Casos']
comunidad['Casos hoy variacion respecto ayer'] = comunidad['Casos hoy absoluto'] - comunidad['Casos hoy absoluto'].shift(1)
    # Convert to integer to drop the decimals
CONVERT_INT_COLUMNS = ['Fallecidos hoy absoluto',
'Fallecidos hoy variacion respecto ayer',
'Casos hoy variacion respecto ayer',
'Casos hoy absoluto',
'Hospitalizados',
'Curados']
for column in CONVERT_INT_COLUMNS :
comunidad[column] = comunidad[column].fillna(0)
comunidad[column] = comunidad[column].astype(np.int64)
comunidad['Curados hoy absoluto'] = comunidad['Curados'] - comunidad['Curados'].shift(1)
try :
comunidad['Proporcion Curados hoy absoluto / Casos hoy absoluto'] = comunidad['Curados hoy absoluto'] / comunidad['Casos hoy absoluto']
except:
pass
comunidad['Casos excluidos curados'] = comunidad['Casos'] - comunidad['Curados']
comunidad['Tasa Mortalidad'] = comunidad['Fallecidos'] / comunidad['Casos']
    # Order the rows and columns
columnsTitles = ['CCAA',
'Casos' , 'Casos hoy absoluto' , 'Casos hoy variacion respecto ayer', 'Casos hoy porcentaje' ,
'Fallecidos', 'Fallecidos hoy absoluto', 'Fallecidos hoy variacion respecto ayer', 'Fallecidos hoy porcentaje' ,
'Tasa Mortalidad',
'Curados', 'Curados hoy absoluto', 'Casos excluidos curados', 'Proporcion Curados hoy absoluto / Casos hoy absoluto',
'UCI',
'Hospitalizados']
comunidad = comunidad.reindex(columns=columnsTitles)
comunidad = comunidad.sort_values(by=['Fecha'], ascending=False)
comunidad = comunidad.rename(columns = {'CCAA':'Lugar'})
return comunidad
def Get_Comunidad(nombre_comunidad):
    # Work with a single region only
df = Preprocesado()
df = df[(df['CCAA'] == nombre_comunidad)].sort_values(by='Fecha')
df = Enrich_Columns(df)
return df
def Get_Nacion():
df = Preprocesado()
df = df.sort_values(by='Fecha')
df = df.groupby(['Fecha']).sum()
df['CCAA'] = 'España'
df = Enrich_Columns(df)
return df
# Just for debug purposes
def Debug_Get_Comunidad():
comunidad = Get_Comunidad('Madrid')
return comunidad
Debug_Get_Comunidad()
nombre_comunidad='MADRID'
# Work with a single region only
df = Preprocesado()
df
# df = df[(df['CCAA'] == nombre_comunidad)].sort_values(by='Fecha')
# df = Enrich_Columns(df)
# return df
# Just for debug purposes
def Debug_Get_Nacion():
return Get_Nacion()
Debug_Get_Nacion()
```
# Data Science with Python : Markov's Chain #377
## What is Markov's Chain ?
Markov chains, named after <a href = "https://en.wikipedia.org/wiki/Andrey_Markov">Andrey Markov</a>, are stochastic models that describe a sequence of possible events in which the probability of the next state depends solely on the current state, not the states before it. In simple words, the probability that the (n+1)th step will be x depends only on the nth step, not on the complete sequence of steps that came before n. This property is known as the <i><b>Markov Property</b></i> or <i><b>Memorylessness</b></i>.
Let us explore our Markov chain with the help of a diagram,
<img src = "https://upload.wikimedia.org/wikipedia/commons/thumb/2/2b/Markovkate_01.svg/800px-Markovkate_01.svg.png" width="200"/>
A diagram representing a two-state (here, E and A) Markov process. The arrows originate from the current state and point to the future state, and the number associated with each arrow indicates the probability of the Markov process changing from one state to another. For instance, if the process is in state E, the probability that it changes to state A is 0.7, while the probability that it remains in the same state is 0.3. Similarly, for a process in state A, the probability of changing to state E is 0.4 and the probability of remaining in the same state is 0.6.
## How to Represent Markov Chain ?
From the diagram of the two-state Markov process, we can see that a Markov chain is a directed graph, so we can represent it with the help of an adjacency matrix.
+-------+-------+
| A | E | --- Each element denotes the probability weight of the edge
+-------+-------+-------+ connecting the two corresponding vertices
| A | 0.6 | 0.4 | --- 0.4 is the probability for state A to go to state E and 0.6 is the probability
+-------+-------+-------+ to remain at the same state
| E | 0.7 | 0.3 | --- 0.7 is the probability for state E to go to state A and 0.3 is the probability
+-------+-------+-------+ to remain at the same state
This matrix is also called <i><b>Transition Matrix</b></i>. If the Markov chain has N possible states, the matrix will be an NxN matrix. Each row of this matrix should sum to 1.
In addition to this, a Markov chain also has an <i><b>Initial State Vector</b></i> of order Nx1.
These two entities are required to represent a Markov chain.
<i><b>N-step Transition Matrix :</b></i> Now let us learn about higher-order transition matrices. They help us find the chance of a transition occurring over multiple steps. To put it in simple words: what is the probability of moving from state <b>A</b> to state <b>E</b> over <b>N</b> steps? There is actually a very simple way to calculate it: the answer is the value of entry <b>(A,E)</b> of the matrix obtained by raising the transition matrix to the power of <b>N</b>.
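The N-step computation described above can be sketched with NumPy's `matrix_power`, using the transition matrix from the diagram (the choice of N = 3 here is just for illustration):

```python
import numpy as np

# Transition matrix for the two-state chain (rows: from A, from E)
P = np.array([[0.6, 0.4],
              [0.7, 0.3]])

# Sanity check: each row of a transition matrix must sum to 1
assert np.allclose(P.sum(axis=1), 1.0)

# Probability of going from A to E over N steps = entry (A, E) of P^N
N = 3
P_n = np.linalg.matrix_power(P, N)
print(P_n[0, 1])  # ≈ 0.364
```

Note that `P^N` is itself a valid transition matrix, so its rows also sum to 1.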
## Markov Chain in Python :
Now we are going to code our Markov chain example above in Python, although for computational efficiency we would generally use a library implementation of a Markov chain.
```
#let's import our library
import numpy as np
#Encoding the states as numbers, since it is easier to deal with numbers than with words.
state = {
0 : "A",
1 : "E",
}
state
#Assigning the transition matrix to a variable i.e a numpy 2d matrix.
MyMatrix = np.array([[0.6, 0.4], [0.7, 0.3]])
MyMatrix
#Simulating a random walk on our Markov chain with 20 steps.
#Random walk simply means that we start with an arbitrary state and then move along our Markov chain.
n = 20
StartingState = 0 #decide which state to start with
CurrentState = StartingState
print(state[CurrentState], "--->", end=" ") #printing the starting state using the state dictionary
while n-1:
#Deciding the next state using a random.choice() function,that takes list of states
#and the probability to go to the next states from our current state
CurrentState = np.random.choice([0, 1], p=MyMatrix[CurrentState])
print(state[CurrentState], "--->", end=" ") #printing the path of random walk
n-=1
print("stop")
#Let us find the stationary distribution of our Markov chain using repeated matrix multiplication
NumberOfSteps = 10**3
MyMatrix_n = MyMatrix
i=0
while i<NumberOfSteps:
MyMatrix_n = np.matmul(MyMatrix_n, MyMatrix) #Multiplying our matrix with itself and storing it into MyMatrix_n
i+=1
print("MyMatrix^n = \n", MyMatrix_n, "\n")
print("π = ", MyMatrix_n[0]) #Printing the probability distribution
#Let us find the stationary distribution of our Markov chain by Finding Left Eigen Vectors
#Importing our library
import scipy.linalg
MyValues, left = scipy.linalg.eig(MyMatrix, right = False, left = True) #We only need the left eigen vectors
print("left eigen vectors = \n", left, "\n")
print("eigen values = \n", MyValues)
#Pi is a probability distribution so the sum of the probabilities should be 1
#To get that from the above negative values we just have to normalize
pi = left[:,0]
pi_normalized = [(x/np.sum(pi)).real for x in pi]
pi_normalized
```
### Computing the Probability Corresponding to a Particular Sequence:
```
#How about finding P(A-->E-->E-->A)
def calculate_probability(sequence, MyMatrix, pi):
StartingState = sequence[0]
prob = pi[StartingState] #initializing prob with the prob of the start state
PreviousState, CurrentState = StartingState, StartingState
for i in range(1, len(sequence)):
CurrentState = sequence[i]
#Multiplying the transition prob from previous to current state with the current value of prob
prob *= MyMatrix[PreviousState][CurrentState]
PreviousState = CurrentState
return prob
print(calculate_probability([0, 1, 1, 0], MyMatrix, pi_normalized))
```
## Applications of Markov Chains :
1. Markov chains make the study of many real-world processes much simpler and easier to understand. Using Markov chains we can derive useful results, such as the stationary distribution, and many more.
2. MCMC (Markov Chain Monte Carlo), which gives solutions to problems arising from the normalization factor, is based on Markov chains.
3. Markov chains are used in information theory, search engines, speech recognition, etc.
Markov chains have huge possibilities and importance in the field of Data Science, and interested readers are encouraged to learn this material properly to become competent in the field.
```
import pandas as pd
import requests
import json
MUC_compare_df = pd.read_csv('MUC_compare_df.csv')
PA_compare_df = pd.read_csv('PA_compare_df.csv')
MUC_df = pd.read_csv('MUC_poi_df.csv')
PA_df = pd.read_csv('PA_poi_df.csv')
charging_df = pd.read_csv('../PA_charging/PA_charging_data.csv')
MUC_compare_df.head()
MUC_compare_df.h3_area.unique()
MUC_high_area_df = MUC_compare_df.where(MUC_compare_df.h3_area =='High-area').sort_values(['total_poi', 'station_count', 'max_category'], ascending=[False, True, False]).dropna()
MUC_low_area_df = MUC_compare_df.where(MUC_compare_df.h3_area =='Low-area').sort_values(['total_poi', 'station_count', 'max_category'], ascending=[False, True, False]).dropna()
MUC_high_area_df_no_station = MUC_compare_df.query('h3_area =="High-area" and station_count == 0').sort_values(['total_poi', 'station_count', 'max_category'], ascending=[False, True, False]).dropna()
#MUC_low_area_df_no_station = MUC_compare_df.where((MUC_compare_df.h3_area =='Low-area') and (MUC_compare_df.station_count >= 0)).sort_values(['total_poi', 'station_count', 'max_category'], ascending=[False, True, False]).dropna()
MUC_low_area_df.info()
MUC_is_station = MUC_compare_df.where(MUC_compare_df.station_count >= 1).dropna()
MUC_is_station.info()
#MUC_high_area_df.info() # 1125 non null
#MUC_low_area_df.info() # 1355 non null
MUC_low_area_df.h3_code.count()
MUC_high_area_df.h3_code.count()
# next_charging_station_MUC = next_charging_station_MUC[[ 'h3_code',
# 'total_poi',
# 'station_count',
# 'h3_area',
# 'max_category'
# ]]
# Split the high-area hexagons without a charging station into consecutive
# chunks of 50 rows and save each chunk as a numbered build file
for i in range(12):
    chunk = MUC_high_area_df_no_station.iloc[i * 50:(i + 1) * 50, :]
    chunk.to_csv(f'./build/{i + 1}_build_MUC.csv', sep=',')
MUC_high_area_df.head()
MUC_is_station.to_csv('./build/MUC_is_station.csv', sep=',')
```
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
def build_dataset(words, n_words, atleast=1):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
with open('english-train', 'r') as fopen:
text_from = fopen.read().lower().split('\n')[:-1]
with open('vietnam-train', 'r') as fopen:
text_to = fopen.read().lower().split('\n')[:-1]
print('len from: %d, len to: %d'%(len(text_from), len(text_to)))
concat_from = ' '.join(text_from).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
concat_to = ' '.join(text_to).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab to size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(text_to)):
text_to[i] += ' EOS'
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
X = str_idx(text_from, dictionary_from)
Y = str_idx(text_to, dictionary_to)
emb_size = 256
n_hidden = 256
n_layers = 4
n_attn_heads = 16
learning_rate = 1e-4
batch_size = 16
epoch = 20
def encoder_block(inp, n_hidden, filter_size):
    # Pad symmetrically so the convolution output keeps the input length
    inp = tf.expand_dims(inp, 2)
    inp = tf.pad(inp, [[0, 0], [(filter_size[0]-1)//2, (filter_size[0]-1)//2], [0, 0], [0, 0]])
    conv = tf.layers.conv2d(inp, n_hidden, filter_size, padding="VALID", activation=None)
    conv = tf.squeeze(conv, 2)
    return conv
def decoder_block(inp, n_hidden, filter_size):
    # Pad only on the left (causal padding) so the decoder cannot see future tokens
    inp = tf.expand_dims(inp, 2)
    inp = tf.pad(inp, [[0, 0], [filter_size[0]-1, 0], [0, 0], [0, 0]])
    conv = tf.layers.conv2d(inp, n_hidden, filter_size, padding="VALID", activation=None)
    conv = tf.squeeze(conv, 2)
    return conv
def glu(x):
    # Gated linear unit: the first half of the channels, gated by a sigmoid of the second half
    return tf.multiply(x[:, :, :tf.shape(x)[2]//2], tf.sigmoid(x[:, :, tf.shape(x)[2]//2:]))
def layer(inp, conv_block, kernel_width, n_hidden, residual=None):
    # One convolutional block followed by a GLU, with an optional residual connection
    z = conv_block(inp, n_hidden, (kernel_width, 1))
    return glu(z) + (residual if residual is not None else 0)
class Chatbot:
def __init__(self):
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype = tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype = tf.int32)
batch_size = tf.shape(self.X)[0]
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
encoder_embedding = tf.Variable(tf.random_uniform([len(dictionary_from), emb_size], -1, 1))
decoder_embedding = tf.Variable(tf.random_uniform([len(dictionary_to), emb_size], -1, 1))
def forward(x, y,reuse=False):
with tf.variable_scope('forward',reuse=reuse):
encoder_embedded = tf.nn.embedding_lookup(encoder_embedding, x)
decoder_embedded = tf.nn.embedding_lookup(decoder_embedding, y)
e = tf.identity(encoder_embedded)
for i in range(n_layers):
z = layer(encoder_embedded, encoder_block, 3, n_hidden * 2, encoder_embedded)
encoder_embedded = z
encoder_output, output_memory = z, z + e
g = tf.identity(decoder_embedded)
for i in range(n_layers):
attn_res = h = layer(decoder_embedded, decoder_block, 3, n_hidden * 2,
residual=tf.zeros_like(decoder_embedded))
C = []
for j in range(n_attn_heads):
h_ = tf.layers.dense(h, n_hidden//n_attn_heads)
g_ = tf.layers.dense(g, n_hidden//n_attn_heads)
zu_ = tf.layers.dense(encoder_output, n_hidden//n_attn_heads)
ze_ = tf.layers.dense(output_memory, n_hidden//n_attn_heads)
d = tf.layers.dense(h_, n_hidden//n_attn_heads) + g_
dz = tf.matmul(d, tf.transpose(zu_, [0, 2, 1]))
a = tf.nn.softmax(dz)
c_ = tf.matmul(a, ze_)
C.append(c_)
c = tf.concat(C, 2)
h = tf.layers.dense(attn_res + c, n_hidden)
decoder_embedded = h
decoder_output = tf.sigmoid(h)
return tf.layers.dense(decoder_output, len(dictionary_to))
self.training_logits = forward(self.X, decoder_input)
self.logits = forward(self.X, self.Y, reuse=True)
self.k = tf.placeholder(dtype = tf.int32)
p = tf.nn.softmax(self.logits)
self.topk_logprobs, self.topk_ids = tf.nn.top_k(tf.log(p), self.k)
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot()
sess.run(tf.global_variables_initializer())
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, len(text_to), batch_size):
index = min(k+batch_size, len(text_to))
batch_x, seq_x = pad_sentence_batch(X[k: index], PAD)
batch_y, seq_y = pad_sentence_batch(Y[k: index ], PAD)
accuracy,loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y})
total_loss += loss
total_accuracy += accuracy
total_loss /= (len(text_to) / batch_size)
total_accuracy /= (len(text_to) / batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
class Hypothesis:
def __init__(self, log_prob, seq):
self.log_prob = log_prob
self.seq = seq
@property
def step(self):
return len(self.seq) - 1
def beam_search(
batch_x,
beam_size,
num_ans = 5,
normalize_by_len = 1.0,
):
assert 0 <= normalize_by_len <= 1
batch_size = len(batch_x)
max_len = len(batch_x[0]) * 2
dec_inputs = np.ones((batch_size, 2), dtype=np.int32)
answers = [[] for i in range(batch_size)]
H = [[] for i in range(batch_size)]
tkl, tkid = sess.run([model.topk_logprobs,
model.topk_ids],
feed_dict = {model.X: batch_x,
model.Y: dec_inputs,
model.k: beam_size})
for i in range(batch_size):
for j, log_prob in enumerate(tkl[i, 0]):
if tkid[i, 0, j] != EOS:
h = Hypothesis(log_prob, [1, tkid[i, 0, j]])
H[i].append(h)
H[i].sort(key=lambda h: h.log_prob)
done = [False] * batch_size
while not all(done):
tkl_beam = []
tkid_beam = []
dec_inputs_beam = []
steps_beam = []
for i in range(beam_size):
steps = [1] * batch_size
prev_log_probs = np.zeros(batch_size, dtype=np.float32)
dec_inputs = np.ones((batch_size, max_len), dtype=np.int32)
for j, h in enumerate(H):
while h:
hi = h.pop()
lp, step, candidate_seq = hi.log_prob, hi.step, hi.seq
if candidate_seq[-1] != EOS:
dec_inputs[j, :len(candidate_seq)] = candidate_seq
steps[j] = step
prev_log_probs[j] = lp
break
else:
answers[j].append((lp, candidate_seq))
max_step = max(steps)
dec_inputs = dec_inputs[:, :max_step + 2]
tkl, tkid = sess.run([model.topk_logprobs,
model.topk_ids],
feed_dict = {model.X: batch_x,
model.Y: dec_inputs,
model.k: beam_size})
tkl_beam.append(tkl + prev_log_probs[:, None, None])
tkid_beam.append(tkid)
dec_inputs_beam.append(dec_inputs.copy())
steps_beam.append(steps)
for i in range(beam_size):
tkl = tkl_beam[i]
tkid = tkid_beam[i]
dec_inputs = dec_inputs_beam[i]
steps = steps_beam[i]
for j in range(batch_size):
step = steps[j]
for k in range(tkid.shape[2]):
extended_seq = np.hstack((dec_inputs[j, :step+1], [tkid[j, step, k]]))
log_prob = tkl[j, step, k]
if len(extended_seq) <= max_len and log_prob > -10:
h = Hypothesis(log_prob, extended_seq)
H[j].append(h)
H[j].sort(key=lambda h: h.log_prob / (h.step**normalize_by_len))
for i in range(batch_size):
done[i] = (len(answers[i]) >= num_ans) or (not H[i]) or (len(H[i]) > 100)
return answers
beamed = beam_search(batch_x, 5)
beamed = [i for i in beamed if len(i)]
predicted = [max(b, key = lambda t: t[0])[1] for b in beamed]
for i in range(len(predicted)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
# 1. My Stock Rose Yesterday; What About Tomorrow?
**Learn the ARIMA time series analysis method and try predicting stock prices yourself.**
## 11-1. Introduction
## 11-2. What Is Time Series Forecasting? (1) Is it possible to predict the future?
## 11-3. What Is Time Series Forecasting? (2) Stationary time series data
## 11-4. What Is Time Series Forecasting? (3) Time series case studies
```bash
$ mkdir -p ~/aiffel/stock_prediction/data
$ ln -s ~/data/* ~/aiffel/stock_prediction/data
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import warnings
warnings.filterwarnings('ignore')
print('슝=3')
dataset_filepath = os.getenv('HOME')+'/aiffel/stock_prediction/data/daily-min-temperatures.csv'
df = pd.read_csv(dataset_filepath)
print(type(df))
df.head()
# This time, Date is specified as the index_col.
df = pd.read_csv(dataset_filepath, index_col='Date', parse_dates=True)
print(type(df))
df.head()
ts1 = df['Temp'] # just for data inspection for now, so let's call it 'ts' after the initials of "time series"!
print(type(ts1))
ts1.head()
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 13, 6 # set the default matplotlib figure size to 13, 6
# Let's plot the time series data. It plots nicely without any further processing.
plt.plot(ts1)
ts1[ts1.isna()] # print only the missing entries of the time series as a Series
# If there are missing values, interpolate them; 'time' is chosen as the interpolation method.
ts1=ts1.interpolate(method='time')
# Check again for missing values (NaN) after interpolation.
print(ts1[ts1.isna()])
# Let's check the graph again!
plt.plot(ts1)
def plot_rolling_statistics(timeseries, window=12):
    rolmean = timeseries.rolling(window=window).mean() # rolling mean series
    rolstd = timeseries.rolling(window=window).std() # rolling standard deviation series
    # Visualize the original series, rolling mean, and rolling std in one plot.
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label='Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
print('슝=3')
plot_rolling_statistics(ts1, window=12)
dataset_filepath = os.getenv('HOME')+'/aiffel/stock_prediction/data/airline-passengers.csv'
df = pd.read_csv(dataset_filepath, index_col='Month', parse_dates=True).fillna(0)
print(type(df))
df.head()
ts2 = df['Passengers']
plt.plot(ts2)
plot_rolling_statistics(ts2, window=12)
```
## 11-5. What Is Time Series Forecasting? (4) Statistical methods for checking stationarity
```
from statsmodels.tsa.stattools import adfuller
def augmented_dickey_fuller_test(timeseries):
    # Call the adfuller method provided by the statsmodels package.
    dftest = adfuller(timeseries, autolag='AIC')
    # Organize and print the results returned by adfuller.
print('Results of Dickey-Fuller Test:')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)' % key] = value
print(dfoutput)
print('슝=3')
augmented_dickey_fuller_test(ts1)
augmented_dickey_fuller_test(ts2)
```
## 11-6. The Basic Idea of Time Series Forecasting: Can we make the series stationary?
```
ts_log = np.log(ts2)
plt.plot(ts_log)
augmented_dickey_fuller_test(ts_log)
moving_avg = ts_log.rolling(window=12).mean() # compute the moving average
plt.plot(ts_log)
plt.plot(moving_avg, color='red')
ts_log_moving_avg = ts_log - moving_avg # remove the trend component
ts_log_moving_avg.head(15)
ts_log_moving_avg.dropna(inplace=True)
ts_log_moving_avg.head(15)
plot_rolling_statistics(ts_log_moving_avg)
augmented_dickey_fuller_test(ts_log_moving_avg)
moving_avg_6 = ts_log.rolling(window=6).mean()
ts_log_moving_avg_6 = ts_log - moving_avg_6
ts_log_moving_avg_6.dropna(inplace=True)
print('슝=3')
plot_rolling_statistics(ts_log_moving_avg_6)
augmented_dickey_fuller_test(ts_log_moving_avg_6)
ts_log_moving_avg_shift = ts_log_moving_avg.shift()
plt.plot(ts_log_moving_avg, color='blue')
plt.plot(ts_log_moving_avg_shift, color='green')
ts_log_moving_avg_diff = ts_log_moving_avg - ts_log_moving_avg_shift
ts_log_moving_avg_diff.dropna(inplace=True)
plt.plot(ts_log_moving_avg_diff)
plot_rolling_statistics(ts_log_moving_avg_diff)
augmented_dickey_fuller_test(ts_log_moving_avg_diff)
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts_log)
trend = decomposition.trend # trend (change in the mean level over time)
seasonal = decomposition.seasonal # seasonality (periodic variation)
residual = decomposition.resid # residual = (log-transformed) original - trend - seasonality
plt.rcParams["figure.figsize"] = (11,6)
plt.subplot(411)
plt.plot(ts_log, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
plt.rcParams["figure.figsize"] = (13,6)
plot_rolling_statistics(residual)
residual.dropna(inplace=True)
augmented_dickey_fuller_test(residual)
```
## 11-7. The Concept of the ARIMA Model
```
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(ts_log)  # ACF: plot the autocorrelation function
plot_pacf(ts_log) # PACF: plot the partial autocorrelation function
plt.show()
# compute the first difference
diff_1 = ts_log.diff(periods=1).iloc[1:]
diff_1.plot(title='Difference 1st')
augmented_dickey_fuller_test(diff_1)
# compute the second difference
diff_2 = diff_1.diff(periods=1).iloc[1:]
diff_2.plot(title='Difference 2nd')
augmented_dickey_fuller_test(diff_2)
train_data, test_data = ts_log[:int(len(ts_log)*0.9)], ts_log[int(len(ts_log)*0.9):]
plt.figure(figsize=(10,6))
plt.grid(True)
plt.plot(ts_log, c='r', label='training dataset') # plotting train_data would make the curve look broken, so ts_log is plotted for a smoother presentation
plt.plot(test_data, c='b', label='test dataset')
plt.legend()
print(ts_log[:2])
print(train_data.shape)
print(test_data.shape)
```
## 11-8. ARIMA Model Training and Inference
```
import warnings
warnings.filterwarnings('ignore') # ignore warnings
from statsmodels.tsa.arima.model import ARIMA
# Build Model
model = ARIMA(train_data, order=(14, 1, 0)) # order chosen with reference to the earlier plots
fitted_m = model.fit()
print(fitted_m.summary())
fitted_m = fitted_m.predict()
fitted_m = fitted_m.drop(fitted_m.index[0])
plt.plot(fitted_m, label='predict')
plt.plot(train_data, label='train_data')
plt.legend()
model = ARIMA(train_data, order=(14, 1, 0)) # testing with p = 14
fitted_m = model.fit()
fc= fitted_m.forecast(len(test_data), alpha=0.05) # 95% conf
# Make as pandas series
fc_series = pd.Series(fc, index=test_data.index) # forecast results
# Plot
plt.figure(figsize=(9,5), dpi=100)
plt.plot(train_data, label='training')
plt.plot(test_data, c='b', label='actual price')
plt.plot(fc_series, c='r',label='predicted price')
plt.legend()
plt.show()
from sklearn.metrics import mean_squared_error, mean_absolute_error
import math
mse = mean_squared_error(np.exp(test_data), np.exp(fc))
print('MSE: ', mse)
mae = mean_absolute_error(np.exp(test_data), np.exp(fc))
print('MAE: ', mae)
rmse = math.sqrt(mean_squared_error(np.exp(test_data), np.exp(fc)))
print('RMSE: ', rmse)
mape = np.mean(np.abs(np.exp(fc) - np.exp(test_data))/np.abs(np.exp(test_data)))
print('MAPE: {:.2f}%'.format(mape*100))
```
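Note that every metric above is computed on np.exp(test_data) and np.exp(fc), i.e. after transforming back from the log scale to the original price scale. A tiny hand-worked sketch of that back-transform for MAPE (the log-scale values below are made up):

```python
import numpy as np

# Made-up log-scale values, standing in for test_data and fc above
actual_log = np.array([4.60, 4.70, 4.80])
pred_log   = np.array([4.65, 4.68, 4.85])

# Back-transform to the price scale before taking percentage errors
actual = np.exp(actual_log)
pred = np.exp(pred_log)
mape = np.mean(np.abs(pred - actual) / np.abs(actual))
print('MAPE: {:.2f}%'.format(mape * 100))  # -> MAPE: 4.08%
```

Skipping the back-transform would report errors in log units, which are not comparable to the 15% MAPE target used later.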
## 11-9. Project: Try Predicting Stock Prices
```
import pandas
import sklearn
import statsmodels
print(pandas.__version__)
print(sklearn.__version__)
print(statsmodels.__version__)
```
```bash
$ cd ~/aiffel/stock_prediction/data
$ ls
```
```bash
$ mkdir -p ~/aiffel/stock_prediction/data
$ ln -s ~/data/* ~/aiffel/stock_prediction/data
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
dataset_filepath = os.getenv('HOME') + '/aiffel/stock_prediction/data/005930.KS.csv'
df = pd.read_csv(dataset_filepath, index_col='Date', parse_dates=True)
ts = df['Close']
ts.head()
# handle missing values
ts = ts.interpolate(method='time')
ts[ts.isna()] # print only the missing entries of the time series as a Series
# try a log transform
ts_log = np.log(ts)
# qualitative graphical analysis
plot_rolling_statistics(ts_log, window=12)
# quantitative Augmented Dickey-Fuller test
augmented_dickey_fuller_test(ts_log)
# time series decomposition
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts_log, model='multiplicative', period = 30)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.subplot(411)
plt.plot(ts_log, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
residual.dropna(inplace=True)
augmented_dickey_fuller_test(residual)
train_data, test_data = ts_log[:int(len(ts_log)*0.9)], ts_log[int(len(ts_log)*0.9):]
plt.figure(figsize=(10,6))
plt.grid(True)
plt.plot(ts_log, c='r', label='training dataset') # plotting train_data would make the curve look broken, so ts_log is plotted for a smoother presentation
plt.plot(test_data, c='b', label='test dataset')
plt.legend()
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(ts_log)  # ACF: plot the autocorrelation function
plot_pacf(ts_log) # PACF: plot the partial autocorrelation function
plt.show()
# compute the first difference
diff_1 = ts_log.diff(periods=1).iloc[1:]
diff_1.plot(title='Difference 1st')
augmented_dickey_fuller_test(diff_1)
# compute the second difference, in case it is needed
diff_2 = diff_1.diff(periods=1).iloc[1:]
diff_2.plot(title='Difference 2nd')
augmented_dickey_fuller_test(diff_2)
from statsmodels.tsa.arima.model import ARIMA
# Build Model
model = ARIMA(train_data, order=(2, 0, 1))
fitted_m = model.fit()
print(fitted_m.summary())
# Forecast: the results are stored in fc.
fc = fitted_m.forecast(len(test_data), alpha=0.05) # 95% conf
fc = np.array(fc)
# Make as pandas series
fc_series = pd.Series(fc, index=test_data.index) # forecast results
# Plot
plt.figure(figsize=(10,5), dpi=100)
plt.plot(train_data, label='training')
plt.plot(test_data, c='b', label='actual price')
plt.plot(fc_series, c='r',label='predicted price')
plt.legend()
plt.show()
from sklearn.metrics import mean_squared_error, mean_absolute_error
import math
mse = mean_squared_error(np.exp(test_data), np.exp(fc))
print('MSE: ', mse)
mae = mean_absolute_error(np.exp(test_data), np.exp(fc))
print('MAE: ', mae)
rmse = math.sqrt(mean_squared_error(np.exp(test_data), np.exp(fc)))
print('RMSE: ', rmse)
mape = np.mean(np.abs(np.exp(fc) - np.exp(test_data))/np.abs(np.exp(test_data)))
print('MAPE: {:.2f}%'.format(mape*100))
```
>## **Rubric**
>|No.|Criterion|Details|
>|:---:|---|---|
>|1|Was the stationarity of the time series sufficiently verified?|Both plotting and the adfuller method were used appropriately|
>|2|Is the rationale for the ARIMA parameter choice presented systematically?|The use of ACF/PACF for p and q and the differencing process for d are clearly presented|
>|3|Did the prediction error come in below the threshold?|At least three stocks were predicted with MAPE below 15%|
<a href="https://colab.research.google.com/github/john-s-butler-dit/Numerical-Analysis-Python/blob/master/Chapter%2004%20-%20Multistep%20Methods/4_Problem_Sheet/406b_Problem_Sheet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Problem Sheet Question 2a
Consider the first-order differential equation
\begin{equation} y^{'}=ty^3-y, \ \ (0 \leq t \leq 2) \end{equation}
with the initial condition
\begin{equation}y(0)=1.\end{equation}
For $N=4$, the additional starting value is
\begin{equation} y(x_1)= 0.5.\end{equation}
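For reference, this ODE is a Bernoulli equation, so an exact solution is available to check the numerical schemes against (this derivation is my addition, not part of the sheet): substituting $v = y^{-2}$ turns $y'=ty^3-y$ into the linear equation $v' = 2v - 2t$, whose solution with $v(0)=1$ is $v(t) = \tfrac{1}{2}e^{2t} + t + \tfrac{1}{2}$, hence $y(t) = v(t)^{-1/2}$.

```python
import numpy as np

def exact(t):
    # y(t) = v(t)**(-1/2) with v(t) = 0.5*exp(2t) + t + 0.5 (from v' = 2v - 2t, v(0) = 1)
    return 1.0 / np.sqrt(0.5*np.exp(2*t) + t + 0.5)

t = np.linspace(0, 2, 5)
print(exact(t))  # reference values on the grid with h = 0.5
```

These reference values can be compared against the predictor and corrector arrays printed by the code below.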
### 2-step Adams Bashforth
The 2-step Adams Bashforth difference equation is
\begin{equation}w^{0}_{i+1} = w_{i} + \frac{h}{2}(3f(t_i,w_i)-f(t_{i-1},w_{i-1})) \end{equation}
\begin{equation}w^{0}_{i+1} = w_{i} + \frac{h}{2}(3(t_iw_i^3-w_i)-(t_{i-1}w_{i-1}^3-w_{i-1})) \end{equation}
### 3-step Adams Moulton
\begin{equation}w^{1}_{i+1} = w_{i} + \frac{h}{12}(5f(t_{i+1},w^{0}_{i+1})+8f(t_{i},w_{i})-f(t_{i-1},w_{i-1})) \end{equation}
\begin{equation} w^{1}_{i+1} = w_{i} + \frac{h}{12}(5(t_{i+1}(w^0_{i+1})^3-w^0_{i+1})+8(t_{i}w_{i}^3-w_{i})-(t_{i-1}w_{i-1}^3-w_{i-1})). \end{equation}
```
import numpy as np
import math
%matplotlib inline
import matplotlib.pyplot as plt # side-stepping mpl backend
import matplotlib.gridspec as gridspec # subplots
import warnings
warnings.filterwarnings("ignore")
def myfun_ty(t,y):
return y*y*y*t-y
#PLOTS
def Adams_Bashforth_Predictor_Corrector(N,IC):
x_end=2
x_start=0
    INITIAL_CONDITION=IC
    h=x_end/N
    N=N+2
    t=np.zeros(N)
    w_predictor=np.zeros(N)
    w_corrector=np.zeros(N)
    Analytic_Solution=np.zeros(N)
    k=0
    w_predictor[0]=INITIAL_CONDITION
    w_corrector[0]=INITIAL_CONDITION
    Analytic_Solution[0]=INITIAL_CONDITION
t[0]=x_start
t[1]=x_start+1*h
t[2]=x_start+2*h
w_predictor[1]=0.5
w_corrector[1]=0.5
for k in range (2,N-1):
w_predictor[k+1]=w_corrector[k]+h/2.0*(3*myfun_ty(t[k],w_corrector[k])-myfun_ty(t[k-1],w_corrector[k-1]))
w_corrector[k+1]=w_corrector[k]+h/12.0*(5*myfun_ty(t[k+1],w_predictor[k+1])+8*myfun_ty(t[k],w_corrector[k])-myfun_ty(t[k-1],w_corrector[k-1]))
t[k+1]=t[k]+h
fig = plt.figure(figsize=(10,4))
# --- left hand plot
ax = fig.add_subplot(1,2,1)
plt.plot(t,w_predictor,color='red')
#ax.legend(loc='best')
plt.title('Predictor h=%s'%(h))
# --- right hand plot
ax = fig.add_subplot(1,2,2)
plt.plot(t,w_corrector,color='blue')
plt.title('Corrector')
# --- titled , explanatory text and save
fig.suptitle(r"$y'=ty^3-y$", fontsize=20)
plt.tight_layout()
plt.subplots_adjust(top=0.85)
print('time')
print(t)
print('Predictor')
print(w_predictor)
print('Corrector')
print(w_corrector)
Adams_Bashforth_Predictor_Corrector(10,1)
```
# Author : Kritika Srivastava
## Task 1 : Prediction using Supervised Machine Learning
## GRIP @ The Sparks Foundation
In this regression task I tried to predict the percentage of marks that a student is expected to score based upon the number of hours they studied.
This is a simple linear regression task as it involves just two variables.
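Under the hood, simple linear regression fits a line $y = mx + c$ by least squares: the slope is the covariance of the two variables divided by the variance of the input. A minimal sketch of that computation (the hours/scores values here are made up, not the dataset used below):

```python
import numpy as np

hours = np.array([1.5, 3.2, 5.1, 7.4, 9.2])   # made-up study hours
scores = np.array([20., 27., 47., 69., 88.])  # made-up percentage scores

# Least-squares slope and intercept, computed directly from moments
slope = np.cov(hours, scores, bias=True)[0, 1] / np.var(hours)
intercept = scores.mean() - slope * hours.mean()
print('score ~ {:.2f} * hours + {:.2f}'.format(slope, intercept))
```

sklearn's LinearRegression computes the same coefficients, plus conveniences such as multi-feature inputs and a `predict` method.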
## Technical Stack : Sikit Learn, Numpy Array, Pandas, Matplotlib
```
# Importing the required libraries
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```
## Step 1 - Reading the data from source
```
# Reading data from remote link
url = r"https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv"
s_data = pd.read_csv(url)
print("Data import successful")
s_data.head(10)
```
## Step 2 - Input data Visualization
```
# Plotting the distribution of scores
s_data.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
```
From the graph we can safely assume a positive linear relation between the number of hours studied and the percentage score.
## Step 3 - Data Preprocessing
This step involves dividing the data into "attributes" (inputs) and "labels" (outputs).
```
X = s_data.iloc[:, :-1].values
y = s_data.iloc[:, 1].values
```
## Step 4 - Model Training
Splitting the data into training and testing sets, and training the algorithm.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
regressor = LinearRegression()
regressor.fit(X_train.reshape(-1,1), y_train)
print("Training complete.")
```
## Step 5 - Plotting the Line of regression
Now that our model is trained, it is time to visualize the best-fit regression line.
```
# Plotting the regression line
line = regressor.coef_*X+regressor.intercept_
# Plotting for the test data
plt.scatter(X, y)
plt.plot(X, line,color='red');
plt.show()
```
## Step 6 - Making Predictions
Now that we have trained our algorithm, it's time to test the model by making some predictions.
For this, we will use our test-set data.
```
# Testing data
print(X_test)
# Model Prediction
y_pred = regressor.predict(X_test)
```
## Step 7 - Comparing Actual result to the Predicted Model result
```
# Comparing Actual vs Predicted
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
#Estimating training and test score
print("Training Score:",regressor.score(X_train,y_train))
print("Test Score:",regressor.score(X_test,y_test))
# Plotting the Bar graph to depict the difference between the actual and predicted value
df.plot(kind='bar',figsize=(5,5))
plt.grid(which='major', linewidth='0.5', color='red')
plt.grid(which='minor', linewidth='0.5', color='blue')
plt.show()
# Testing the model with our own data
hours = 9.25
test = np.array([hours])
test = test.reshape(-1, 1)
own_pred = regressor.predict(test)
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(own_pred[0]))
```
## Step 8 - Evaluating the model
The final step is to evaluate the performance of the algorithm. This step is particularly important for comparing how well different algorithms perform on a particular dataset. Here, several error metrics have been calculated to compare model performance and assess accuracy.
```
from sklearn import metrics
print('Mean Absolute Error:',metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
print('R-2:', metrics.r2_score(y_test, y_pred))
```
R-2 gives the goodness of fit of the model, and in this case we have R-2 ≈ 0.945, which is a very good score for this model.
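For intuition, R-2 can also be computed by hand as $1 - SS_{res}/SS_{tot}$. The numbers below are illustrative, not the notebook's actual test split:

```python
import numpy as np

y_true = np.array([20., 27., 69., 30., 62.])  # illustrative actual scores
y_hat  = np.array([17., 34., 75., 27., 60.])  # illustrative predictions

ss_res = np.sum((y_true - y_hat)**2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean())**2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print('R-2: {:.3f}'.format(r2))  # -> R-2: 0.946
```

An R-2 of 1 means the predictions explain all the variance in the targets; 0 means the model does no better than predicting the mean.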
## Conclusion
### I was successfully able to carry-out Prediction using Supervised ML task and was able to evaluate the model's performance on various parameters.
# Thank You
# Thermal equilibrium
An ensemble of trajectories obtained from simulating Langevin dynamics will tend to a stable distribution: the Boltzmann distribution.
#### Problem setup
Two identical magnetic nanoparticles, aligned along their anisotropy axes. The system has 6 degrees of freedom (the x, y, z components of magnetisation for each particle), but the energy is determined by the angles $\theta_1,\theta_2$ and the azimuthal difference $\phi_1-\phi_2$ alone.

#### Boltzmann distribution
The Boltzmann distribution represents the probability that the system will be found within certain sets (i.e. ranges of the magnetisation angles). The distribution is parameterised by the temperature of the system and the energy landscape of the problem.
Note the sine terms appear because the distribution is over the **solid angles**. In other words, the distribution is over the surface of a unit sphere. The sine terms project these solid angles onto a simple elevation angle between the magnetisation and the anisotropy axis ($\theta$).
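Concretely (a standard identity, added here for clarity), an integral over the unit sphere picks up the Jacobian of the spherical parameterisation:
$$\int_{S^2} f \,\mathrm{d}\Omega = \int_0^{2\pi}\!\!\int_0^{\pi} f(\theta,\phi)\,\sin\theta \,\mathrm{d}\theta\, \mathrm{d}\phi$$
which is the origin of the $\sin\theta_1\sin\theta_2$ factors in the density.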
$$p\left(\theta_1,\theta_2,\phi_1,\phi_2\right) = \frac{\sin(\theta_1)\sin(\theta_2)e^{-E\left(\theta_1,\theta_2,\phi_1,\phi_2\right)/\left(K_BT\right)}}{Z}$$
#### Stoner-Wohlfarth model
The energy function for a single domain magnetic nanoparticle is given by the Stoner-Wohlfarth equation:
$$\frac{E\left(\theta_1,\theta_2,\phi_1,\phi_2\right)}{K_BT}=\sigma\left(\sin^2\theta_1+\sin^2\theta_2\right)
-\nu\left[2\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2\cos\left(\phi_1-\phi_2\right)\right]$$
$$\sigma=\frac{KV}{K_BT}$$
$$\nu=\frac{\mu_0V^2M_s^2}{2\pi R^3K_BT}$$
$\sigma,\nu$ are the normalised anisotropy and interaction strength respectively.
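As a back-of-envelope check (my own sketch, using standard physical constants and evaluating the formulas exactly as displayed above; note that the magpy computations later in the notebook use a slightly different convention, as discussed in the fudge-factor section), the dimensionless parameters for the values used below ($K=10^5$, $r=7\,$nm, $T=330\,$K, $M_s=400\,$kA/m, $R=9\,$nm) come out as:

```python
import math

kB = 1.380649e-23     # Boltzmann constant, J/K
mu0 = 4*math.pi*1e-7  # vacuum permeability, H/m

K, r, T, Ms, R = 1e5, 7e-9, 330.0, 400e3, 9e-9  # values used later in the notebook

V = 4.0/3.0 * math.pi * r**3
sigma = K*V / (kB*T)                                  # normalised anisotropy
nu = mu0 * V**2 * Ms**2 / (2*math.pi * R**3 * kB*T)   # per the displayed formula
print('sigma = {:.2f}, nu = {:.2f}'.format(sigma, nu))  # -> sigma = 31.53, nu = 19.89
```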
## Functions for analytic solution
```
import numpy as np
# dipole interaction energy
def dd(t1, t2, p1, p2, nu):
return -nu*(2*np.cos(t1)*np.cos(t2) - np.sin(t1)*np.sin(t2)*np.cos(p1-p2))
# anisotropy energy
def anis(t1, t2, sigma):
return sigma*(np.sin(t1)**2 + np.sin(t2)**2)
# total energy
def tot(t1, t2, p1, p2, nu, sigma):
return dd(t1, t2, p1, p2, nu) + anis(t1, t2, sigma)
# numerator of the Boltzmann distribution (i.e. ignoring the partition function Z)
def p_unorm(t1,t2,p1,p2,nu,sigma):
return np.sin(t1)*np.sin(t2)*np.exp(-tot(t1, t2, p1, p2, nu, sigma))
# non interacting
from scipy.integrate import nquad
sigma, nu = 1.0, 0.0
Z = nquad(
lambda t1, t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma),
ranges=[(0, np.pi), (0, np.pi), (0, 2*np.pi), (0, 2*np.pi)]
)
print(Z[0])
Z = nquad(
lambda t1, t2: p_unorm(t1, t2, 0, 0, nu, sigma),
ranges=[(0, np.pi), (0, np.pi)]
)
print(Z[0] * 4 * np.pi**2)
```
## Magpy non-interacting case
Using magpy we initialise the dimers with both magnetisation vectors aligned along their anisotropy axes. We allow the system to relax.
```
import magpy as mp
```
### System properties
Set up the parameters of the dimers. They are identical and aligned along their anisotropy axes
```
K = 1e5
r = 7e-9
T = 330
Ms=400e3
R=9e-9
kdir = [0, 0, 1]
location1 = np.array([0, 0, 0], dtype=np.float)
location2 = np.array([0, 0, R], dtype=np.float)
direction = np.array([0, 0, 1], dtype=np.float)
alpha = 1.0
```
### Magpy model and simulation
Build a magpy model of the dimer
```
base_model = mp.Model(
anisotropy=np.array([K, K], dtype=np.float),
anisotropy_axis=np.array([kdir, kdir], dtype=np.float),
damping=alpha,
location=np.array([location1, location2], dtype=np.float),
magnetisation=Ms,
magnetisation_direction=np.array([direction, direction], dtype=np.float),
radius=np.array([r, r], dtype=np.float),
temperature=T
)
ensemble = mp.EnsembleModel(50000, base_model)
```
Simulate an ensemble of 10,000 dimers without interactions.
```
res = ensemble.simulate(end_time=1e-9, time_step=1e-12,
max_samples=500, random_state=1002,
n_jobs=8, implicit_solve=True,
interactions=False)
m_z0 = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0 = np.arccos(m_z0)
theta1 = np.arccos(m_z1)
```
System magnetisation shows that the system has relaxed into the local minimum (we could relax the system globally, but it would take much longer to run since the energy barrier must be overcome).
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(res.results[0].time, res.ensemble_magnetisation())
plt.title('Non-interacting dimer ensemble magnetisation');
```
### Compare to analytic thermal equilibrium
Compute the expected boltzmann distribution
```
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.core.get_KB()/T
nu = 0
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
```
The joint distribution for both angles is computed analytically and compared with the numerical result.
The resulting distribution is symmetric because both particles are independent and identically distributed.
```
Z = nquad(
lambda t1, t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma),
ranges=[(0, np.pi/2), (0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)]
)
print(Z[0])
Z=Z[0]
ts = np.linspace(min(theta0), max(theta0), 100)
bdist = [[
nquad(lambda p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z, ranges=[(0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
for t2 in ts]
plt.hist2d(theta0, theta1, bins=30, normed=True);
plt.contour(ts, ts, bdist, cmap='Greys')
plt.title('Joint distribution')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
```
We can also compute the marginal distribution (i.e. the equilibrium of just 1 particle). It is easier to see the alignment of the two distributions.
```
b_marginal = [nquad(
lambda t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z,
ranges=[(0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
plt.hist(theta0, bins=50, normed=True)
plt.plot(ts, np.array(b_marginal))
```
## Magpy interacting case
We now simulate the exact same ensemble of dimers but with the interactions enabled. We can compute the dimensionless parameters to understand the strength of the interactions (vs. the anisotropy strength).
```
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.core.get_KB()/T
nu = mp.core.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi**2 / R**3 / mp.core.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
```
The interaction strength is very strong (in fact the particles are impossibly close). The following command is identical to the one above except that `interactions=True`
```
res = ensemble.simulate(end_time=1e-9, time_step=1e-13,
max_samples=500, random_state=1001,
n_jobs=8, implicit_solve=False,
interactions=True, renorm=True)
m_z0i = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1i = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0i = np.arccos(m_z0i)
theta1i = np.arccos(m_z1i)
```
### System relaxation
The system quickly relaxes into the first minima again, as before.
```
plt.plot(res.results[0].time, res.ensemble_magnetisation())
plt.title('Interacting dimer ensemble magnetisation');
```
### Thermal equilibrium
The stationary distributions align, BUT:
we **introduced a fudge factor** of $\pi$ into the denominator of the interaction constant $\nu$. In other words we use $$\nu_\textrm{fudge}=\frac{\nu}{\pi}$$
This factor of $1/\pi$ could come from an integration somewhere. I *think* this is an error in my analytic calculations and not in the code, because the code certainly uses the correct term for the interaction strength, whereas I derived this test myself.
```
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.core.get_KB()/T
nu = 1.0 * mp.core.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi / np.pi / R**3 / mp.core.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
Z = nquad(
lambda t1, t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma),
ranges=[(0, np.pi/2), (0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)]
)[0]
ts = np.linspace(min(theta0), max(theta0), 100)
bdist = [[
nquad(lambda p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z, ranges=[(0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
for t2 in ts]
plt.hist2d(theta0i, theta1i, bins=30, normed=True);
# ts = np.linspace(min(theta0), max(theta0), 100)
# b = boltz_2d(ts, nu, sigma)
plt.contour(ts, ts, bdist, cmap='Greys')
plt.title('Joint distribution')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
```
We use the marginal distribution again to check the convergence, and also compare against the non-interacting case.
```
b_marginal = [nquad(
lambda t2, p1, p2: p_unorm(t1, t2, p1, p2, nu, sigma)/Z,
ranges=[(0, np.pi/2), (0, 2*np.pi), (0, 2*np.pi)])[0]
for t1 in ts]
plt.hist(theta0i, bins=50, normed=True, alpha=0.6, label='Magpy + inter.')
plt.hist(theta0, bins=50, normed=True, alpha=0.6, label='Magpy + no inter.')
plt.plot(ts, b_marginal, label='Analytic')
plt.legend();
```
# Possible sources of error
- Implementation of interaction strength in magpy (but there are many tests against the true equations)
- Analytic calculations for equilibrium
One way to test this could be to simulate a 3D system. If another factor of $\pi$ appears then it is definitely something missing from my analytic calculations.
```
import pymc3 as pm
with pm.Model() as model:
z1 = pm.Uniform('z1', 0, 1)
theta1 = pm.Deterministic('theta1', np.arccos(z1))
z2 = pm.Uniform('z2', 0, 1)
theta2 = pm.Deterministic('theta2', np.arccos(z2))
phi1 = pm.Uniform('phi1', 0, 2*np.pi)
phi2 = pm.Uniform('phi2', 0, 2*np.pi)
energy = tot(theta1, theta2, phi1, phi2, nu, sigma)
like = pm.Potential('energy', -energy)
with model:
step = pm.NUTS()
trace = pm.sample(500000, step=step)
pm.traceplot(trace)
plt.hist(trace['theta1'], bins=200, density=True);
plt.plot(ts, b_marginal, label='Analytic')
plt.hist(trace['theta1'], bins=200, density=True, alpha=0.6);
plt.hist(theta0i, bins=50, density=True, alpha=0.6, label='Magpy + inter.');
```
### Cartesian
```
def cart_energy(mx1, my1, mz1, mx2, my2, mz2, nu, sigma):
    t1 = np.arccos(mz1)
    t2 = np.arccos(mz2)
    anis = sigma*np.sin(t1)**2 + sigma*np.sin(t2)**2
    # The interaction term was left unfinished here; an assumed dipolar form
    # for two moments separated along the z-axis:
    inter = -nu*(3*mz1*mz2 - (mx1*mx2 + my1*my2 + mz1*mz2))
    return anis + inter
import pymc3 as pm
with pm.Model() as model:
z1 = pm.Uniform('z1', 0, 1)
theta1 = pm.Deterministic('theta1', np.arccos(z1))
z2 = pm.Uniform('z2', 0, 1)
theta2 = pm.Deterministic('theta2', np.arccos(z2))
phi1 = pm.Uniform('phi1', 0, 2*np.pi)
phi2 = pm.Uniform('phi2', 0, 2*np.pi)
energy = tot(theta1, theta2, phi1, phi2, nu, sigma)
like = pm.Potential('energy', -energy)
```
# chat bot api
```
from chatbot import Chat,reflections,multiFunctionCall
import wikipedia
import os
```
# Wikipedia API connection
```
def whoIs(query,sessionID="general"):
print(query)
try:
return wikipedia.summary(query)
except:
for newquery in wikipedia.search(query):
try:
return wikipedia.summary(newquery)
except:
pass
return "I don't know about "+query
```
# Emotion Detector Connection
```
from keras.preprocessing.image import img_to_array
import imutils
import cv2
from keras.models import load_model
import numpy as np
import playsound
# parameters for loading data and images
detection_model_path = 'Emotion/haarcascade_files/haarcascade_frontalface_default.xml'
emotion_model_path = 'Emotion/models/_mini_XCEPTION.102-0.66.hdf5'
# hyper-parameters for bounding boxes shape
# loading models
face_detection = cv2.CascadeClassifier(detection_model_path)
emotion_classifier = load_model(emotion_model_path, compile=False)
EMOTIONS = ["angry" ,"disgust","scared", "happy", "sad", "surprised",
"neutral"]
def emo(query,sessionID="general"):
cv2.namedWindow('your_face')
camera = cv2.VideoCapture(0)
while True:
frame = camera.read()[1]
frame = imutils.resize(frame,width=300)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_detection.detectMultiScale(gray,scaleFactor=1.1,minNeighbors=5,minSize=(30,30),flags=cv2.CASCADE_SCALE_IMAGE)
canvas = np.zeros((250, 300, 3), dtype="uint8")
frameClone = frame.copy()
if len(faces) > 0:
faces = sorted(faces, reverse=True,
key=lambda x: (x[2] - x[0]) * (x[3] - x[1]))[0]
(fX, fY, fW, fH) = faces
roi = gray[fY:fY + fH, fX:fX + fW]
roi = cv2.resize(roi, (64, 64))
roi = roi.astype("float") / 255.0
roi = img_to_array(roi)
roi = np.expand_dims(roi, axis=0)
preds = emotion_classifier.predict(roi)[0]
emotion_probability = np.max(preds)
label = EMOTIONS[preds.argmax()]
ee = []
percent = []
for (i, (emotion, prob)) in enumerate(zip(EMOTIONS, preds)):
ee.append(emotion)
percent.append(prob)
mp = percent.index(max(percent))
cv2.imshow('your_face', frameClone)
break
camera.release()
cv2.destroyAllWindows()
try:
return ee[mp]
except:
return "I cannot see your face."
```
# Face Identification Connection
```
%matplotlib inline
import cv2
import matplotlib.pyplot as plt
from IPython import display
import face_recognition
import glob
users = glob.glob("Users\*.jpg")
# Load a sample picture and learn how to recognize it.
known_face_encodings = []
known_face_names = []
for user in users:
user_image = face_recognition.load_image_file(user)
known_face_encodings.append(face_recognition.face_encodings(user_image)[0])
known_face_names.append(user.split("\\")[1].split(".")[0])
print(known_face_names)
import face_recognition
def identifyu(query=0,sessionID="general"):
video_capture = cv2.VideoCapture(0)
# Load a sample picture and learn how to recognize it.
# while True:
# Grab a single frame of video
ret, frame = video_capture.read()
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_frame = frame[:, :, ::-1]
# Find all the faces and face enqcodings in the frame of video
face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
name = "Unknown"
# Loop through each face in this frame of video
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
# See if the face is a match for the known face(s)
matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
name = "Unknown"
# If a match was found in known_face_encodings, just use the first one.
if True in matches:
first_match_index = matches.index(True)
name = known_face_names[first_match_index]
# Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
#webcam_preview = plt.imshow(frame)
# # Hit 'q' on the keyboard to quit!
# if cv2.waitKey(1) & 0xFF == ord('q'):
# break
# break
# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
return name
```
# Save Mood and Load The User template file
```
def whathappen(query,sessionID="general"):
aa = input()
nam = identifyu()
with open(nam+".txt", "a") as myfile:
myfile.write(aa)
return "Would you like to tell me more about it?"
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
def sas(sentence):
if analyser.polarity_scores(sentence)['pos']>analyser.polarity_scores(sentence)['neg']:
return "happy"
else:
return "sad"
def chat():
nam = identifyu()
call = multiFunctionCall({"whoIs":whoIs,"emo":emo, "identifyu":identifyu, "whathappen":whathappen})
if nam == "Unknown":
firstQuestion="Hi, I am chatbot."
template = "Example.template"
else :
firstQuestion="Hi "+nam+" , nice to see you again."
template = nam+".template"
Chat(template, reflections,call=call).converse(firstQuestion)
from os import path
if path.exists(nam+".txt"):
with open(nam+".txt", "r") as myfile:
daa = myfile.read()
        with open(nam+".template", "r") as myfile:
try:
mood = (myfile.readlines()[-2][2]) == "m"
except:
mood = False
if mood:
with open(nam+".template", "r+") as f:
d = f.readlines()
f.seek(0)
for i in d[:-2]:
f.write(i)
f.truncate()
with open(nam+".template", "a") as myf:
myf.write("\n{ mood : "+sas(daa)+" }")
myf.write("\n{ reason : "+daa+" }")
os.remove(nam+".txt")
chat()
with open("pratik.template", "r") as myfile:
try:
mood = (myfile.readlines()[-2][2]) == "m"
except:
mood = False
if mood:
with open("pratik.template", "r") as myfile:
for line in myfile.readlines()[-2:]:
print(line)
```
# Testing below this point
```
Chat("Example.template", reflections,call=call).converse(firstQuestion)
```
# Text to Voice Code
```
import pyttsx3
engine = pyttsx3.init()
voices = engine.getProperty('voices') #getting details of current voice
engine.setProperty('voice', voices[1].id)
engine.say("I will speak this text")
engine.runAndWait()
import pyttsx3
engine = pyttsx3.init() # object creation
""" RATE"""
rate = engine.getProperty('rate') # getting details of current speaking rate
print (rate) #printing current voice rate
engine.setProperty('rate', 125) # setting up new voice rate
"""VOLUME"""
volume = engine.getProperty('volume') #getting to know current volume level (min=0 and max=1)
print (volume) #printing current volume level
engine.setProperty('volume',1.0) # setting up volume level between 0 and 1
"""VOICE"""
voices = engine.getProperty('voices') #getting details of current voice
#engine.setProperty('voice', voices[0].id) #changing index, changes voices. o for male
engine.setProperty('voice', voices[1].id) #changing index, changes voices. 1 for female
engine.say("Hello World!")
engine.say('My current speaking rate is ' + str(rate))
engine.runAndWait()
engine.stop()
import glob
a = glob.glob("Users\*.jpg")
a
import face_recognition
obama_image = face_recognition.load_image_file(a[1])
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
plt.imshow(obama_image)
import glob
users = glob.glob("Users\*.jpg")
# Load a sample picture and learn how to recognize it.
users_face_encoding = []
users_names = []
for user in users:
user_image = face_recognition.load_image_file(user)
users_face_encoding.append(face_recognition.face_encodings(user_image))
users_names.append(user.split("\\")[1])
plt.imshow(user_image)
#print(users_names)
# obama_image = face_recognition.load_image_file("obama.jpg")
# obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
# # Load a second sample picture and learn how to recognize it.
# biden_image = face_recognition.load_image_file("biden.jpg")
# biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
# # Create arrays of known face encodings and their names
# known_face_encodings = [
# obama_face_encoding,
# biden_face_encoding
# ]
# known_face_names = [
# "Barack Obama",
# "Pratik Savla"
# ]
%matplotlib inline
import cv2
import matplotlib.pyplot as plt
from IPython import display
vc = cv2.VideoCapture(0)
if vc.isOpened(): # try to get the first frame
is_capturing, frame = vc.read()
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # makes the blues image look real colored
webcam_preview = plt.imshow(frame)
else:
is_capturing = False
while is_capturing:
try: # Lookout for a keyboardInterrupt to stop the script
is_capturing, frame = vc.read()
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) # makes the blues image look real colored
webcam_preview.set_data(frame)
plt.draw()
display.clear_output(wait=True)
display.display(plt.gcf())
plt.pause(0.1) # the pause time is = 1 / framerate
except KeyboardInterrupt:
vc.release()
if cv2.waitKey(1) & 0xFF == ord('q'):
break
break
# Release handle to the webcam
vc.release()
cv2.destroyAllWindows()
video_capture.release()
with open("test.txt", "a") as myfile:
myfile.write(" appended text aaa ")
with open("pratik.template", "r") as myfile:
try:
mood = (myfile.readlines()[-2][2]) == "m"
except:
mood = False
if mood:
with open("pratik.template", "r") as myfile:
print(myfile.readlines()[-2:])
from os import path
path.exists("pratik.txt")
def chat():
nam = identifyu()
call = multiFunctionCall({"whoIs":whoIs,"emo":emo, "identifyu":identifyu, "whathappen":whathappen})
if nam == "Unknown":
firstQuestion="Hi, I am chatbot."
template = "Example.template"
else :
firstQuestion="Hi "+nam+" , nice to see you again."
template = nam+".template"
Chat(template, reflections,call=call).converse(firstQuestion)
from os import path
if path.exists(nam+".txt"):
with open(nam+".txt", "r") as myfile:
daa = myfile.read()
with open(nam+".template", "a") as myf:
myf.write("\n\n{ mood : "+daa+" }")
os.remove(nam+".txt")
def whathappen(query,sessionID="general"):
aa = input()
with open("pratik"+".txt", "a") as myfile:
myfile.write(aa)
return "Would you like to tell me more about it?"
chat()
from datetime import date
today = date.today()
print("Today's date:", today)
d1 = today.strftime("%d/%m/%Y")
print("d1 =", d1)
today.strftime("%B")
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
def sentiment_analyzer_scores(sentence):
score = analyser.polarity_scores(sentence)
print(score)
sentiment_analyzer_scores("It is a good day")
```
# GatedGCNs with DGL
From [Bresson & Laurent (2018) Residual Gated Graph ConvNets](https://arxiv.org/abs/1711.07553), adapted from [Xavier's notebook](https://drive.google.com/file/d/1WG5t6X12Z70JPtvA2-2PzdK3TMTQMsvm).
```
# Import libs
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import os
os.environ['DGLBACKEND'] = 'pytorch' # tell DGL what backend to use
import dgl
from dgl import DGLGraph
from dgl.data import MiniGCDataset
import time
import numpy as np
import networkx as nx
from res.plot_lib import set_default
import matplotlib.pyplot as plt
set_default(figsize=(3, 3), dpi=150)
def draw(g, title):
plt.figure()
nx.draw(g.to_networkx(), with_labels=True, node_color='skyblue', edge_color='white')
plt.gcf().set_facecolor('k')
plt.title(title)
```
## Mini graph classification dataset
```python
class dgl.data.MiniGCDataset(num_graphs, min_num_v, max_num_v)
```
- `num_graphs`: number of graphs in this dataset
- `min_num_v`: minimum number of nodes for graphs
- `max_num_v`: maximum number of nodes for graphs
```
# The dataset contains 8 different types of graphs:
graph_type = (
'cycle',
'star',
'wheel',
'lollipop',
'hypercube',
'grid',
'clique',
'circular ladder',
)
# visualise the 8 classes of graphs
for graph, label in MiniGCDataset(8, 10, 20):
draw(graph, f'Class: {label}, {graph_type[label]} graph')
```
## Let's add some signal to the domain
We can assign features to nodes and edges of a `DGLGraph`. The features are represented as a dictionary mapping names (strings) to tensors, called **fields**. `ndata` and `edata` are syntactic sugar for accessing the feature data of all nodes and edges.
```
# create artificial data feature (= in degree) for each node
def create_artificial_features(dataset):
for (graph, _) in dataset:
graph.ndata['feat'] = graph.in_degrees().view(-1, 1).float()
graph.edata['feat'] = torch.ones(graph.number_of_edges(), 1)
return dataset
# Generate artificial graph dataset with DGL
trainset = MiniGCDataset(350, 10, 20)
testset = MiniGCDataset(100, 10, 20)
trainset = create_artificial_features(trainset)
testset = create_artificial_features(testset)
print(trainset[0])
```
## GatedGCNs equations
$$
\def \vx {\boldsymbol{\color{Plum}{x}}}
\def \vh {\boldsymbol{\color{YellowGreen}{h}}}
\def \ve {\boldsymbol{\color{purple}{e}}}
\def \aqua#1{\color{Aquamarine}{#1}}
\def \red#1{\color{OrangeRed}{#1}}
$$
\begin{aligned}
\vh &= \vx + \Big( A \vx + \sum_{\aqua{v}_j \to \red{v}} \eta(\ve_{j}) \odot B \vx_j \Big)^+\\
\eta(\ve_{j}) &= \sigma(\ve_{j})\Big(\sum_{\aqua{v}_k \to \red{v}} \sigma(\ve_{k})\Big)^{-1} \\
\ve_{j} &= C \ve_{j}^{\vx} + D \vx_j + E\vx\\
\ve_{j}^{\vh} &= \ve_j^{\vx} + \Big( \ve_{j} \Big)^+
\end{aligned}
In DGL, the *message functions* are expressed as **Edge UDF**s (User Defined Functions). Edge UDFs take in a single argument `edges`. It has three members `src`, `dst`, and `data` for accessing source node features, destination node features, and edge features.
The *reduce functions* are **Node UDF**s. Node UDFs have a single argument `nodes`, which has two members `data` and `mailbox`. `data` contains the node features and `mailbox` contains all incoming message features, stacked along the second dimension (hence the `dim=1` argument).
`update_all(message_func, reduce_func)` sends messages through all edges and updates all nodes.
Optionally, a function can be applied to update the node features after receiving.
This is a convenient combination of `send(g.edges(), message_func)` and `recv(g.nodes(), reduce_func)`.
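As a minimal, framework-free sketch of the gated aggregation performed by the reduce function below (names and shapes are illustrative, not DGL API):

```python
import numpy as np

def gated_aggregate(Ax, Bx_j, e_j):
    """h = Ax + sum_j eta_j * Bx_j with eta_j = sigma(e_j) / sum_k sigma(e_k).

    Ax   : (d,)   transformed feature of the receiving node
    Bx_j : (n, d) transformed features of the n incoming neighbours
    e_j  : (n, d) edge features on the incoming edges
    """
    sigma_j = 1.0 / (1.0 + np.exp(-e_j))   # sigmoid gate per incoming edge
    return Ax + (sigma_j * Bx_j).sum(axis=0) / sigma_j.sum(axis=0)

# With identical edge features every gate is equal, so the weighted sum
# collapses to a plain mean of the neighbour features.
h = gated_aggregate(np.zeros(2), np.array([[1.0, 2.0], [3.0, 4.0]]), np.zeros((2, 2)))
print(h)  # -> [2. 3.]
```

This mirrors the `reduce_func` of the layer definition, where the same normalised sum runs over the mailbox dimension.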
```
class GatedGCN_layer(nn.Module):
def __init__(self, input_dim, output_dim):
super().__init__()
self.A = nn.Linear(input_dim, output_dim)
self.B = nn.Linear(input_dim, output_dim)
self.C = nn.Linear(input_dim, output_dim)
self.D = nn.Linear(input_dim, output_dim)
self.E = nn.Linear(input_dim, output_dim)
self.bn_node_h = nn.BatchNorm1d(output_dim)
self.bn_node_e = nn.BatchNorm1d(output_dim)
def message_func(self, edges):
Bx_j = edges.src['BX']
# e_j = Ce_j + Dxj + Ex
e_j = edges.data['CE'] + edges.src['DX'] + edges.dst['EX']
edges.data['E'] = e_j
return {'Bx_j' : Bx_j, 'e_j' : e_j}
def reduce_func(self, nodes):
Ax = nodes.data['AX']
Bx_j = nodes.mailbox['Bx_j']
e_j = nodes.mailbox['e_j']
# sigma_j = σ(e_j)
σ_j = torch.sigmoid(e_j)
# h = Ax + Σ_j η_j * Bxj
h = Ax + torch.sum(σ_j * Bx_j, dim=1) / torch.sum(σ_j, dim=1)
return {'H' : h}
def forward(self, g, X, E_X, snorm_n, snorm_e):
g.ndata['H'] = X
g.ndata['AX'] = self.A(X)
g.ndata['BX'] = self.B(X)
g.ndata['DX'] = self.D(X)
g.ndata['EX'] = self.E(X)
g.edata['E'] = E_X
g.edata['CE'] = self.C(E_X)
g.update_all(self.message_func, self.reduce_func)
H = g.ndata['H'] # result of graph convolution
E = g.edata['E'] # result of graph convolution
H *= snorm_n # normalize activation w.r.t. graph node size
E *= snorm_e # normalize activation w.r.t. graph edge size
H = self.bn_node_h(H) # batch normalization
E = self.bn_node_e(E) # batch normalization
H = torch.relu(H) # non-linear activation
E = torch.relu(E) # non-linear activation
H = X + H # residual connection
E = E_X + E # residual connection
return H, E
class MLP_layer(nn.Module):
def __init__(self, input_dim, output_dim, L=2): # L = nb of hidden layers
super().__init__()
list_FC_layers = [
nn.Linear(input_dim, input_dim) for l in range(L)
]
list_FC_layers.append(nn.Linear(input_dim, output_dim))
self.FC_layers = nn.ModuleList(list_FC_layers)
self.L = L
def forward(self, x):
y = x
for l in range(self.L):
y = self.FC_layers[l](y)
y = torch.relu(y)
y = self.FC_layers[self.L](y)
return y
class GatedGCN(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim, L):
super().__init__()
self.embedding_h = nn.Linear(input_dim, hidden_dim)
self.embedding_e = nn.Linear(1, hidden_dim)
self.GatedGCN_layers = nn.ModuleList([
GatedGCN_layer(hidden_dim, hidden_dim) for _ in range(L)
])
self.MLP_layer = MLP_layer(hidden_dim, output_dim)
def forward(self, g, X, E, snorm_n, snorm_e):
# input embedding
H = self.embedding_h(X)
E = self.embedding_e(E)
# graph convnet layers
for GGCN_layer in self.GatedGCN_layers:
H, E = GGCN_layer(g, H, E, snorm_n, snorm_e)
# MLP classifier
g.ndata['H'] = H
y = dgl.mean_nodes(g, 'H')
y = self.MLP_layer(y)
return y
# instantiate network
model = GatedGCN(input_dim=1, hidden_dim=100, output_dim=8, L=2)
print(model)
```
## Define a few helper functions
```
# Collate function to prepare graphs
def collate(samples):
graphs, labels = map(list, zip(*samples)) # samples is a list of pairs (graph, label)
labels = torch.tensor(labels)
sizes_n = [graph.number_of_nodes() for graph in graphs] # graph sizes
snorm_n = [torch.FloatTensor(size, 1).fill_(1 / size) for size in sizes_n]
snorm_n = torch.cat(snorm_n).sqrt() # graph size normalization
sizes_e = [graph.number_of_edges() for graph in graphs] # nb of edges
snorm_e = [torch.FloatTensor(size, 1).fill_(1 / size) for size in sizes_e]
snorm_e = torch.cat(snorm_e).sqrt() # graph size normalization
batched_graph = dgl.batch(graphs) # batch graphs
return batched_graph, labels, snorm_n, snorm_e
# Compute accuracy
def accuracy(logits, targets):
preds = logits.detach().argmax(dim=1)
acc = (preds==targets).sum().item()
return acc
```
## Test forward pass
```
# Define DataLoader and get first graph batch
train_loader = DataLoader(trainset, batch_size=10, shuffle=True, collate_fn=collate)
batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e = next(iter(train_loader))
batch_X = batch_graphs.ndata['feat']
batch_E = batch_graphs.edata['feat']
# Checking some sizes
print('batch_graphs:', batch_graphs)
print('batch_labels:', batch_labels)
print('batch_X size:', batch_X.size())
print('batch_E size:', batch_E.size())
batch_scores = model(batch_graphs, batch_X, batch_E, batch_snorm_n, batch_snorm_e)
print(batch_scores.size())
print(f'accuracy: {accuracy(batch_scores, batch_labels)}')
```
## Test backward pass
```
# Loss
J = nn.CrossEntropyLoss()(batch_scores, batch_labels)
# Backward pass
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer.zero_grad()
J.backward()
optimizer.step()
```
## Train one epoch
```
def train(model, data_loader, loss):
model.train()
epoch_loss = 0
epoch_train_acc = 0
nb_data = 0
gpu_mem = 0
for iter, (batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):
batch_X = batch_graphs.ndata['feat']
batch_E = batch_graphs.edata['feat']
batch_scores = model(batch_graphs, batch_X, batch_E, batch_snorm_n, batch_snorm_e)
J = loss(batch_scores, batch_labels)
optimizer.zero_grad()
J.backward()
optimizer.step()
epoch_loss += J.detach().item()
epoch_train_acc += accuracy(batch_scores, batch_labels)
nb_data += batch_labels.size(0)
epoch_loss /= (iter + 1)
epoch_train_acc /= nb_data
return epoch_loss, epoch_train_acc
```
## Evaluation
```
def evaluate(model, data_loader, loss):
model.eval()
epoch_test_loss = 0
epoch_test_acc = 0
nb_data = 0
with torch.no_grad():
for iter, (batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):
batch_X = batch_graphs.ndata['feat']
batch_E = batch_graphs.edata['feat']
batch_scores = model(batch_graphs, batch_X, batch_E, batch_snorm_n, batch_snorm_e)
J = loss(batch_scores, batch_labels)
epoch_test_loss += J.detach().item()
epoch_test_acc += accuracy(batch_scores, batch_labels)
nb_data += batch_labels.size(0)
epoch_test_loss /= (iter + 1)
epoch_test_acc /= nb_data
return epoch_test_loss, epoch_test_acc
```
# Train GNN
```
# datasets
train_loader = DataLoader(trainset, batch_size=50, shuffle=True, collate_fn=collate)
test_loader = DataLoader(testset, batch_size=50, shuffle=False, collate_fn=collate)
# Create model
model = GatedGCN(input_dim=1, hidden_dim=100, output_dim=8, L=4)
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
epoch_train_losses = []
epoch_test_losses = []
epoch_train_accs = []
epoch_test_accs = []
for epoch in range(40):
start = time.time()
train_loss, train_acc = train(model, train_loader, loss)
test_loss, test_acc = evaluate(model, test_loader, loss)
print(f'Epoch {epoch}, train_loss: {train_loss:.4f}, test_loss: {test_loss:.4f}')
print(f'train_acc: {train_acc:.4f}, test_acc: {test_acc:.4f}')
```
<a href="https://colab.research.google.com/github/smlra-kjsce/DL-in-NLP-101/blob/master/RNNs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# RNN Implementation
## Data Preprocessing
```
!wget http://cmshare.eea.europa.eu/s/6WZZ8dBECmER2EF/download
!unzip download
import pandas as pd
date_vars = ['DatetimeBegin','DatetimeEnd']
df = pd.read_csv('BE_10_2013-2015_timeseries.csv', sep='\t', parse_dates=date_vars, date_parser=pd.to_datetime)
df.head()
import matplotlib.pyplot as plt
plt.plot(df['DatetimeBegin'][:100],df['Concentration'][:100])
plt.show()
cdf = pd.Series(df['Concentration'].values)
width = 25
lag1 = cdf.shift(1)
lag3 = cdf.shift(width - 1)
window = lag3.rolling(window=width)
means = window.mean()
dataframe = pd.concat([means, lag1, cdf], axis=1)
dataframe.columns = ['mean', 't-1', 't+1']
#dataframe.columns
dataframe.head()
plt.plot([i for i in range(len(dataframe['mean']))][0:100],dataframe['mean'][0:100])
import numpy as np
from sklearn import preprocessing
data = np.array(dataframe['mean'][48:])
data = data[:10000]
data = preprocessing.scale(data)
n_inputs = 3
# stack three lagged copies of the series (t+2, t+1, t) as input features
data_final = np.column_stack([data[2:], data[1:-1], data[:-2]])
data.shape
x = data_final[:-1]
y = data_final.T[0].T[1:]
x.shape, y.shape
```
## Modeling
```
import torch
from torch import nn
import torch.nn.functional as F
import os
import torch.optim as optim
from torch.autograd import Variable
import numpy as np
from numpy.random import randn
from sklearn.model_selection import train_test_split
features_train, features_test, targets_train, targets_test = train_test_split(x,
y,
test_size = 0.2,
random_state = 42)
# Create RNN Model
class RNNModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super(RNNModel, self).__init__()
# Number of hidden dimensions
self.hidden_dim = hidden_dim
# Number of hidden layers
self.layer_dim = layer_dim
# RNN
self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim, batch_first=True,
nonlinearity='relu')
# Final layer
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
# Initialize hidden state with zeros
self.h0 = Variable(torch.zeros(self.layer_dim, x.size(0), self.hidden_dim))
        # Forward pass: run the RNN over the whole sequence
self.out, self.hn = self.rnn(x, self.h0)
self.out = self.fc(self.out[:, -1, :])
return self.out
featuresTrain = torch.from_numpy(np.expand_dims(features_train,axis=0))
targetsTrain = torch.from_numpy(np.expand_dims(targets_train,axis=0))
# create feature and targets tensor for test set.
featuresTest = torch.from_numpy(np.expand_dims(features_test,axis=0))
targetsTest = torch.from_numpy(np.expand_dims(targets_test,axis=0))
# batch_size, epoch and iteration
batch_size = 100
num_epochs = 10
# Pytorch train and test sets
train = torch.utils.data.TensorDataset(featuresTrain,targetsTrain)
test = torch.utils.data.TensorDataset(featuresTest,targetsTest)
# data loader
train_loader = torch.utils.data.DataLoader(train, batch_size = batch_size, shuffle = False)
test_loader = torch.utils.data.DataLoader(test, batch_size = batch_size, shuffle = False)
# Create RNN
input_dim = 3 # input dimension
hidden_dim = 5 # hidden layer dimension
layer_dim = 2 # number of hidden layers
output_dim = 1 # output dimension
model = RNNModel(input_dim, hidden_dim, layer_dim, output_dim).float()
# Loss
error = nn.MSELoss()
# SGD Optimizer
learning_rate = 0.05
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
Now, with the help of gradient descent, we compute the gradients of the loss with respect to the weights and update the weights accordingly.
This is done with the help of a loss function. Since this is a regression task, the model uses the mean squared error, L = mean((y_pred - y)^2), rather than a cross-entropy loss.
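Note that the loop below relies on `nn.MSELoss` (defined in the model cell above); a quick sanity check that it equals the mean of squared differences, with made-up numbers:

```python
import torch
from torch import nn

outputs = torch.tensor([0.5, 1.0, 2.0])
targets = torch.tensor([1.0, 1.0, 1.0])

loss = nn.MSELoss()(outputs, targets)       # mean reduction by default
manual = ((outputs - targets) ** 2).mean()  # (0.25 + 0 + 1) / 3
print(loss.item(), manual.item())  # both ~0.4167
```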
```
loss_list = []
iteration_list = []
count = 0
for epoch in range(num_epochs):
for i, (data, targets) in enumerate(train_loader):
train = Variable(data)
#print(train.size())
labels = Variable(targets)
# Clear gradients
optimizer.zero_grad()
# Forward propagation
train = train.float()
outputs = model(train)
targets=targets.float()
        # Calculate the MSE loss
loss = error(outputs, targets)
# Calculating gradients
loss.backward()
# Update parameters
optimizer.step()
count += 1
if count % 1 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for x, y in test_loader:
x = Variable(x)
# Forward propagation
x = x.float()
outputs = model(x.float())
# Get predictions from the maximum value
predicted = torch.max(outputs.data, 1)[1]
# Total number of labels
y = y.float()
total += y.size(0)
# store loss and iteration
loss_list.append(loss.data)
iteration_list.append(count)
if count % 1 == 0:
# Print Loss
print('Iteration: {} Loss: {} '.format(count, loss.data.item()))
```
# Deep Learning 101
This notebook presents the basic concepts of Deep Learning.
1. Linear Regression
2. Logistic Regression
3. Artificial Neural Networks
4. Deep Neural Networks
5. **Convolutional Neural Networks**
## 4. Convolutional Neural Networks
Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
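As a rough sketch of that idea, here is a "valid" 2-D cross-correlation in plain numpy, which is the core per-channel operation of a convolutional layer (bias, channels and strides omitted; values are illustrative):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image (no padding, stride 1),
    summing elementwise products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0           # 3x3 averaging filter
print(conv2d_valid(image, kernel))       # 2x2 map of local means: [[5, 6], [9, 10]]
```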
---
## Convolutional Neural Networks with Keras and TensorFlow
## 1. Load data
#### Load libraries
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Conv2D, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import plot_model, to_categorical
from tensorflow.keras import backend as K
```
#### Getting the data
```
# load dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
```
#### Explore visual data
```
fig = plt.figure()
for i in range(10):
plt.subplot(2, 5, i+1)
x_y = X_train[y_train == i]
plt.imshow(x_y[0], cmap='gray', interpolation='none')
plt.title("Class %d" % (i))
plt.xticks([])
plt.yticks([])
plt.tight_layout()
print('X_train.shape', X_train.shape)
print('y_train.shape', y_train.shape)
print('X_test.shape', X_test.shape)
print('y_train.shape', y_test.shape)
```
#### Reshaping and normalizing the inputs
```
# reshaping the inputs
if K.image_data_format() == 'channels_first':
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)
input_shape = (1, 28, 28)
else:
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
input_shape = (28, 28, 1)
# normalizing the inputs
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
print('X_train reshape:', X_train.shape)
print('X_test reshape:', X_test.shape)
```
#### Convert class vectors to binary class matrices
```
# 10 classes
y_train_cat = to_categorical(y_train, 10)
y_test_cat = to_categorical(y_test, 10)
print('y_train_cat shape:', y_train_cat.shape)
print('y_test_cat shape:', y_test_cat.shape)
```
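For intuition, what `to_categorical` does can be sketched in plain numpy (illustrative, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Turn integer class labels into one-hot rows."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```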
## 2. Define model
#### Add the input-, hidden- and output-layers
```
# building a linear stack of layers with the sequential model
model = Sequential()
# Add the input layer and hidden layer 1
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
# Add the input layer and hidden layer 2
model.add(Conv2D(64, (3, 3), activation='relu'))
# Flatten convolutional output
model.add(Flatten())
# Add the input layer and hidden layer 3
model.add(Dense(128, activation='relu'))
# Add the output layer
model.add(Dense(10, activation='softmax'))
```
#### Model visualization
```
# plot a Keras model
plot_model(model, to_file='img/model05_cnn.png',
show_shapes=True, show_layer_names=True)
# prints a summary representation of your model
model.summary()
```

## 3. Compile model
```
# compiling the sequential model
model.compile('rmsprop', loss='categorical_crossentropy',
metrics=['categorical_accuracy'])
```
## 4. Fit model
```
# training the model and saving metrics in history
history = model.fit(X_train, y_train_cat,
batch_size=256, epochs=50,
verbose=2,
validation_data=(X_test, y_test_cat))
```
## 5. Evaluate model
```
# plotting the metrics
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(history.history['categorical_accuracy'])
plt.plot(history.history['val_categorical_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.tight_layout()
# evaluate model on test data
[test_loss, test_acc] = model.evaluate(X_test, y_test_cat)
print("Evaluation result on Test Data:\nLoss = {}\nAccuracy = {}".format(test_loss, test_acc))
```
## References
* [Deep Learning Book](http://www.deeplearningbook.org)
* [Zero to Deep Learning™ Udemy Video Course](https://github.com/dataweekends/zero_to_deep_learning_udemy)
* [THE MNIST DATABASE](http://yann.lecun.com/exdb/mnist/)
# Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
```
## Reading and plotting the data
```
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```
## TODO: Implementing the basic functions
Here is your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
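The update rule can be checked on a tiny numeric example (made-up values, with a learning rate of 0.5):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.array([1.0, 2.0])
w = np.array([0.0, 0.0])
b, y, alpha = 0.0, 1.0, 0.5

y_hat = sigmoid(w @ x + b)        # zero weights -> y_hat = 0.5
w = w + alpha * (y - y_hat) * x   # [0, 0] + 0.25 * [1, 2] = [0.25, 0.5]
b = b + alpha * (y - y_hat)       # 0.25
print(w, b)
```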
```
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.e ** -x)
# Output (prediction) formula
def output_formula(features, weights, bias):
return sigmoid(np.dot(features, weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
return -y * np.log(output) - (1 - y) * np.log(1 - output)
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
y_pred = output_formula(x, weights, bias)
weights += learnrate * (y - y_pred) * x
bias += learnrate * (y - y_pred)
return weights, bias
```
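Before moving on, it helps to sanity-check these implementations against values we can verify by hand. A minimal, self-contained check that restates the same formulas with NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)

def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)

# sigmoid(0) must be exactly 0.5
assert sigmoid(0) == 0.5

# With zero weights and zero bias, the prediction is sigmoid(0) = 0.5
x = np.array([0.3, 0.7])
w = np.zeros(2)
assert output_formula(x, w, 0.0) == 0.5

# Log-loss of a 0.5 prediction is -log(0.5) = log(2) for either label
assert np.isclose(error_formula(1, 0.5), np.log(2))
assert np.isclose(error_formula(0, 0.5), np.log(2))
```

If any of these assertions fail, revisit the corresponding formula before training.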
## Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
```
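The slope and intercept passed to `display` inside `train` come from solving the boundary equation `w1*x1 + w2*x2 + b = 0` for `x2`, which gives slope `-w1/w2` and intercept `-b/w2`. A quick check with illustrative weights (the numbers here are made up):

```python
import numpy as np

# The boundary is where w1*x1 + w2*x2 + b = 0, i.e. the activation is 0.5.
# Solving for x2 gives x2 = -(w1/w2)*x1 - b/w2, the (m, b) passed to display().
w = np.array([2.0, 4.0])
bias = 1.0
m = -w[0] / w[1]   # slope = -0.5
c = -bias / w[1]   # intercept = -0.25

# Any point on that line sits exactly on the boundary (activation 0.5)
x1 = 0.8
x2 = m * x1 + c
z = w[0] * x1 + w[1] * x2 + bias
assert np.isclose(z, 0.0)
assert np.isclose(1 / (1 + np.exp(-z)), 0.5)
```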
## Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
```
train(X, y, epochs, learnrate, True)
```
| github_jupyter |
# Advanced Feature Engineering in Keras
**Learning Objectives**
1. Process temporal feature columns in Keras
2. Use Lambda layers to perform feature engineering on geolocation features
3. Create bucketized and crossed feature columns
## Introduction
In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/feature_engineering/labs/4_keras_adv_feat_eng-lab.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Set up environment variables and load necessary libraries
We will start by importing the necessary libraries for this lab.
```
# Run the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
import datetime
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from tensorflow.keras import models
# set TF error log verbosity
logging.getLogger("tensorflow").setLevel(logging.ERROR)
print(tf.version.VERSION)
```
## Load taxifare dataset
The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict.
First, let's download the .csv data by copying the data from a cloud storage bucket.
```
# `os.makedirs()` method will create all unavailable/missing directory in the specified path.
if not os.path.isdir("../data"):
os.makedirs("../data")
# The `gsutil cp` command allows you to copy data between the bucket and current directory.
!gsutil cp gs://cloud-training-demos/feat_eng/data/*.csv ../data
```
Let's check that the files were copied correctly and look like we expect them to.
```
# `ls` shows the working directory's contents.
# The `l` flag list the all files with permissions and details.
!ls -l ../data/*.csv
# By default `head` returns the first ten lines of each file.
!head ../data/*.csv
```
## Create an input pipeline
Typically, you will use a two-step process to build the pipeline. Step 1 is to define the columns of data, i.e. which column we're predicting for, and their default values. Step 2 is to define two functions: one that selects the features and label you want to use, and one that loads the training data. Also note that pickup_datetime is a string, and we will need to handle this in our feature-engineered model.
```
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
# A function to define features and labels
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility method to create a tf.data dataset from a Pandas Dataframe
def load_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(pattern,
batch_size,
CSV_COLUMNS,
DEFAULTS)
dataset = dataset.map(features_and_labels) # features, label
if mode == 'train':
dataset = dataset.shuffle(1000).repeat()
    # prefetch the next batch to overlap input processing with training
dataset = dataset.prefetch(1)
return dataset
```
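The pop-based logic of `features_and_labels` can be checked on a plain Python dict; in the real pipeline the values are batched tensors, but the control flow is identical:

```python
# features_and_labels drops the unwanted 'key' column and pops the label out
# of the feature mapping. The same logic, checked on a plain dict of scalars:
def features_and_labels(row_data):
    for unwanted_col in ['key']:
        row_data.pop(unwanted_col)
    label = row_data.pop('fare_amount')
    return row_data, label

row = {
    'fare_amount': 12.5, 'pickup_datetime': '2010-02-08 09:17:00 UTC',
    'pickup_longitude': -73.98, 'pickup_latitude': 40.74,
    'dropoff_longitude': -73.98, 'dropoff_latitude': 40.76,
    'passenger_count': 1.0, 'key': 'na',
}
features, label = features_and_labels(row)
assert label == 12.5
assert 'key' not in features and 'fare_amount' not in features
assert len(features) == 6
```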
## Create a Baseline DNN Model in Keras
Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike with the sequential API, we will need to specify the input and hidden layers explicitly. Note that we are creating a regression baseline model with no feature engineering. Recall that a baseline model is a simple reference solution, built without the techniques we want to evaluate, against which we can measure later improvements.
```
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred): # Root mean square error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [32, 8] just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
```
We'll build our DNN model and inspect the model architecture.
```
model = build_dnn_model()
# We can visualize the DNN using the Keras `plot_model` utility.
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
```
## Train the model
To train the model, simply call [model.fit()](https://keras.io/models/model/#fit). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model.
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 59621 * 5
NUM_EVALS = 5
NUM_EVAL_EXAMPLES = 14906
# `load_dataset` method is used to load the dataset.
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
'train')
evalds = load_dataset('../data/taxi-valid*',
1000,
'eval').take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# `Fit` trains the model for a fixed number of epochs
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
```
### Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
```
# A function to define plot_curves.
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
```
### Predict with the model locally
To predict with Keras, you simply call [model.predict()](https://keras.io/models/model/#predict) and pass in the cab ride whose fare amount you want to predict. Note the predicted fare for this geolocation and pickup_datetime.
```
# Use the model to do prediction with `model.predict()`.
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
```
## Improve Model Performance Using Feature Engineering
We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation.
### Temporal Feature Columns
We incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature.
```
# TODO 1a
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
# TODO 1b
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
# TODO 1c
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in)
```
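A quick check of the day-of-week helpers using only the standard library (no TensorFlow needed). Note a subtlety: `datetime.weekday()` returns 0 for Monday, while the `DAYS` list above starts with 'Sun', so the returned label is shifted by one relative to the calendar day name; it is still a consistent categorical encoding:

```python
import datetime

DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']

def get_dayofweek(s):
    # Same parsing as parse_datetime above, stdlib only
    ts = datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
    return DAYS[ts.weekday()]

# 2010-02-08 was a Monday; weekday() returns 0, which indexes 'Sun'
# in the Sun-first DAYS list (a consistent, if shifted, encoding).
assert get_dayofweek('2010-02-08 09:17:00 UTC') == 'Sun'
```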
### Geolocation/Coordinate Feature Columns
The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allow us to specify any location on Earth using a pair of coordinates. In our training data set, we restricted our data points to pickups and drop-offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
#### Computing Euclidean distance
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
```
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
```
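A quick numeric check of this distance on the example trip used later in this notebook (pure Python; note the result is in raw coordinate degrees, not miles, which is fine as a relative distance signal):

```python
import math

def euclidean(lon1, lat1, lon2, lat2):
    # Straight-line distance in raw coordinate units, mirroring the
    # tf.sqrt version above
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
    return math.sqrt(londiff * londiff + latdiff * latdiff)

# Distance between the example pickup and dropoff used in this notebook
d = euclidean(-73.982683, 40.742104, -73.983766, 40.755174)
assert 0.0130 < d < 0.0132
```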
#### Scaling latitude and longitude
It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.
First, we create a function named 'scale_longitude' that adds 78 to each longitudinal value and divides the result by 8. We scale assuming longitudes in the range -78 to -70, a span of 8 degrees, so adding 78 shifts the values into [0, 8] and dividing by 8 maps them to [0, 1].
```
def scale_longitude(lon_column):
return (lon_column + 78)/8.
```
Next, we create a function named 'scale_latitude' that subtracts 37 from each latitudinal value and divides the result by 8. We scale assuming latitudes in the range 37 to 45, again a span of 8 degrees, so subtracting 37 shifts the values into [0, 8] and dividing by 8 maps them to [0, 1].
```
def scale_latitude(lat_column):
return (lat_column - 37)/8.
```
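A quick numeric check of the two scalers, restated in pure Python:

```python
def scale_longitude(lon_column):
    return (lon_column + 78) / 8.

def scale_latitude(lat_column):
    return (lat_column - 37) / 8.

# The chosen ranges map exactly to [0, 1]:
assert scale_longitude(-78.0) == 0.0 and scale_longitude(-70.0) == 1.0
assert scale_latitude(37.0) == 0.0 and scale_latitude(45.0) == 1.0

# NYC coordinates land comfortably inside that interval
assert 0.0 < scale_longitude(-73.98) < 1.0
assert 0.0 < scale_latitude(40.74) < 1.0
```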
### Putting it all together
We now put these pieces together for our model in a function called `transform`. The transform function takes our numeric and string column features as inputs to the model, scales the geolocation features, adds the Euclidean distance between pickup and drop-off as a new engineered feature, and finally bucketizes and crosses the latitude and longitude features.
```
def transform(inputs, numeric_cols, string_cols, nbuckets):
print("Inputs before features transformation: {}".format(inputs.keys()))
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
# TODO 2a
    # Scaling longitude from range [-78, -70] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
scale_longitude,
name="scale_{}".format(lon_col))(inputs[lon_col])
# TODO 2b
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
scale_latitude,
name='scale_{}'.format(lat_col))(inputs[lat_col])
# add Euclidean distance
transformed['euclidean'] = layers.Lambda(
euclidean,
name='euclidean')([inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# TODO 3a
# create bucketized features
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
# TODO 3b
# create crossed columns
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
# create embedding columns
feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100)
print("Transformed features: {}".format(transformed.keys()))
print("Feature columns: {}".format(feature_columns.keys()))
return transformed, feature_columns
```
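Conceptually, bucketizing turns a scaled coordinate into a bin index, and crossing combines two bin indices into a single categorical id for a grid cell. A NumPy sketch of that idea; note the real `fc.crossed_column` additionally hashes ids into `hash_bucket_size` buckets, so the actual ids differ:

```python
import numpy as np

nbuckets = 10
# Same boundaries as in transform(): nbuckets points between 0 and 1,
# which yields nbuckets + 1 bins
buckets = np.linspace(0, 1, nbuckets)

def bucketize(value, boundaries):
    # index of the first boundary greater than value,
    # like fc.bucketized_column
    return int(np.searchsorted(boundaries, value, side='right'))

lat_bin = bucketize(0.4675, buckets)   # a scaled pickup latitude
lon_bin = bucketize(0.5025, buckets)   # a scaled pickup longitude
assert lat_bin == 5 and lon_bin == 5

# Crossing: one id per (lat_bin, lon_bin) cell of the grid
ploc = lat_bin * (nbuckets + 1) + lon_bin
assert ploc == 60
assert 0 <= ploc < (nbuckets + 1) ** 2
```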
Next, we'll create our DNN model now with the engineered features. We'll set `NBUCKETS = 10` to specify 10 buckets when bucketizing the latitude and longitude.
```
NBUCKETS = 10
# DNN MODEL
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(inputs,
numeric_cols=NUMERIC_COLS,
string_cols=STRING_COLS,
nbuckets=NBUCKETS)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
    # two hidden layers of [32, 8] just like in the BQML DNN
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation='linear', name='fare')(h2)
model = models.Model(inputs, output)
# Compile model
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
```
Let's see how our model architecture has changed now.
```
# We can visualize the DNN using the Keras `plot_model` utility.
tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')
# `load_dataset` method is used to load the dataset.
trainds = load_dataset('../data/taxi-train*',
TRAIN_BATCH_SIZE,
'train')
evalds = load_dataset('../data/taxi-valid*',
1000,
'eval').take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# `Fit` trains the model for a fixed number of epochs
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS+3,
steps_per_epoch=steps_per_epoch)
```
As before, let's visualize the training and validation loss curves.
```
plot_curves(history, ['loss', 'mse'])
```
Let's make a prediction with this new model with engineered features on the example we had above.
```
# Use the model to do prediction with `model.predict()`.
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),
}, steps=1)
```
Below we summarize our training results comparing our baseline model with our model with engineered features.
| Model | Taxi Fare | Description |
|--------------------|-----------|-------------------------------------------|
| Baseline | 12.29 | Baseline model - no feature engineering |
| Feature Engineered | 7.28      | Feature Engineered Model                  |
Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| github_jupyter |
## Outlier Engineering
An outlier is a data point which is significantly different from the remaining data. “An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.” [D. Hawkins. Identification of Outliers, Chapman and Hall, 1980].
Statistics such as the mean and variance are very susceptible to outliers. In addition, **some Machine Learning models are sensitive to outliers** which may decrease their performance. Thus, depending on which algorithm we wish to train, we often remove outliers from our variables.
We discussed in section 3 of this course how to identify outliers. In this section, we discuss how to process them when training our machine learning models.
## How can we pre-process outliers?
- Trimming: remove the outliers from our dataset
- Treat outliers as missing data, and proceed with any missing data imputation technique
- Discretisation: outliers are placed in border bins together with higher or lower values of the distribution
- Censoring: capping the variable distribution at a max and / or minimum value
**Censoring** is also known as:
- top and bottom coding
- winsorisation
- capping
## Censoring or Capping.
**Censoring**, or **capping**, means capping the maximum and/or minimum of a distribution at an arbitrary value. In other words, values bigger or smaller than the arbitrarily determined ones are **censored**.
Capping can be done at both tails, or just one of the tails, depending on the variable and the user.
Check my talk in [pydata](https://www.youtube.com/watch?v=KHGGlozsRtA) for an example of capping used in a finance company.
The numbers at which to cap the distribution can be determined:
- arbitrarily
- using the inter-quartile range proximity rule
- using the gaussian approximation
- using quantiles
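A NumPy sketch of how the three data-driven rules would determine the caps on a skewed sample; the fold factors 1.5 (IQR rule) and 3 (gaussian rule) are the conventional choices, and the data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1000)  # right-skewed sample

# quantiles: cap at chosen percentiles (5th and 95th here)
q_low, q_high = np.quantile(x, [0.05, 0.95])

# inter-quartile range proximity rule: Q3 + 1.5 * IQR (upper cap)
q1, q3 = np.quantile(x, [0.25, 0.75])
iqr_high = q3 + 1.5 * (q3 - q1)

# gaussian approximation: mean + 3 standard deviations (upper cap)
g_high = x.mean() + 3 * x.std()

assert q_low < q_high
assert iqr_high > q3
assert g_high > x.mean()
```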
### Advantages
- does not remove data
### Limitations
- distorts the distributions of the variables
- distorts the relationships among variables
## In this Demo
We will see how to perform capping with quantiles using the Boston House dataset.
## Important
When capping, we tend to cap values in both the train and test sets. It is important to remember that the capping values MUST be derived from the train set, and then those same values used to cap the variables in the test set.
I will not do that in this demo, but please keep it in mind when setting up your pipelines.
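That discipline can be sketched as follows; the split here is illustrative, but note how the caps are computed once on the train portion and then reused unchanged on the test portion:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=10, scale=3, size=500)
train, test = data[:400], data[400:]

# Caps MUST come from the train set only...
lower, upper = np.quantile(train, [0.05, 0.95])

# ...and are then applied unchanged to both splits
train_capped = np.clip(train, lower, upper)
test_capped = np.clip(test, lower, upper)

assert train_capped.min() >= lower and train_capped.max() <= upper
assert test_capped.min() >= lower and test_capped.max() <= upper
```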
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# for Q-Q plots
import scipy.stats as stats
# boston house dataset for the demo
from sklearn.datasets import load_boston
from feature_engine.outliers import Winsorizer
# load the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
# add the target
boston['MEDV'] = boston_dataset.target
boston.head()
# function to create histogram, Q-Q plot and
# boxplot. We learned this in section 3 of the course
def diagnostic_plots(df, variable):
# function takes a dataframe (df) and
# the variable of interest as arguments
# define figure size
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.histplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.ylabel('RM quantiles')
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
# let's find outliers in RM
diagnostic_plots(boston, 'RM')
# visualise outliers in LSTAT
diagnostic_plots(boston, 'LSTAT')
# outliers in CRIM
diagnostic_plots(boston, 'CRIM')
```
There are outliers in all of the above variables. RM shows outliers in both tails, whereas LSTAT and CRIM only on the right tail.
To find the outliers, let's re-utilise the function we learned in section 3:
```
def find_boundaries(df, variable):
# the boundaries are the quantiles
lower_boundary = df[variable].quantile(0.05)
upper_boundary = df[variable].quantile(0.95)
return upper_boundary, lower_boundary
# find limits for RM
RM_upper_limit, RM_lower_limit = find_boundaries(boston, 'RM')
RM_upper_limit, RM_lower_limit
# limits for LSTAT
LSTAT_upper_limit, LSTAT_lower_limit = find_boundaries(boston, 'LSTAT')
LSTAT_upper_limit, LSTAT_lower_limit
# limits for CRIM
CRIM_upper_limit, CRIM_lower_limit = find_boundaries(boston, 'CRIM')
CRIM_upper_limit, CRIM_lower_limit
# Now let's replace the outliers by the maximum and minimum limit
boston['RM']= np.where(boston['RM'] > RM_upper_limit, RM_upper_limit,
np.where(boston['RM'] < RM_lower_limit, RM_lower_limit, boston['RM']))
# Now let's replace the outliers by the maximum and minimum limit
boston['LSTAT']= np.where(boston['LSTAT'] > LSTAT_upper_limit, LSTAT_upper_limit,
np.where(boston['LSTAT'] < LSTAT_lower_limit, LSTAT_lower_limit, boston['LSTAT']))
# Now let's replace the outliers by the maximum and minimum limit
boston['CRIM']= np.where(boston['CRIM'] > CRIM_upper_limit, CRIM_upper_limit,
np.where(boston['CRIM'] < CRIM_lower_limit, CRIM_lower_limit, boston['CRIM']))
# let's explore outliers in the trimmed dataset
# for RM we see far fewer outliers than in the original dataset
diagnostic_plots(boston, 'RM')
diagnostic_plots(boston, 'LSTAT')
diagnostic_plots(boston, 'CRIM')
```
We can see that the outliers are gone, but the variable distribution was distorted quite a bit.
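As an aside, the nested `np.where` pattern above is equivalent to `np.clip`, which reads more directly as "censor the variable at these boundaries":

```python
import numpy as np

values = np.array([-5.0, 2.0, 7.0, 40.0])
lower, upper = 0.0, 10.0

# nested np.where, as used above
where_version = np.where(values > upper, upper,
                         np.where(values < lower, lower, values))
# single np.clip call
clip_version = np.clip(values, lower, upper)

assert np.array_equal(where_version, clip_version)
assert list(clip_version) == [0.0, 2.0, 7.0, 10.0]
```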
## Censoring with feature-engine
```
# load the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
# add the target
boston['MEDV'] = boston_dataset.target
boston.head()
# create the capper
windsoriser = Winsorizer(capping_method='quantiles', # choose from iqr, gaussian or quantiles
tail='both', # cap left, right or both tails
fold=0.05,
variables=['RM', 'LSTAT', 'CRIM'])
windsoriser.fit(boston)
boston_t = windsoriser.transform(boston)
diagnostic_plots(boston, 'RM')
diagnostic_plots(boston_t, 'RM')
# we can inspect the minimum caps for each variable
windsoriser.left_tail_caps_
# we can inspect the maximum caps for each variable
windsoriser.right_tail_caps_
```
| github_jupyter |
# Challenge 3 - Employment and Skills
This notebook demonstrates the use of the Python recipe wrapper to create a basic data pack that you can use to get you started with the GLA challenge of Employment and Skills. If you want to know more on the Challenge you can visit our [Tombolo website](http://www.tombolo.org.uk/greater-london-authority/). Don't forget that you can use the [City Data Explorer](https://tombolo-staging.emu-analytics.net) web app to visualise and style your results.
## Some Background
**The Tombolo project** is a Future Cities Catapult project funded by InnovateUK. It is a research and development project focused on understanding the value of data to unlock the potential of our cities. A big part of the Tombolo project is the [Digital Connector](http://www.tombolo.org.uk/products/), an open source piece of software for Data Scientists to import and combine datasets into a standard format and model. You can visit the project on [Github](https://github.com/FutureCitiesCatapult/TomboloDigitalConnector) to learn some background as well as instructions on how to use it.
## The goal
We will use the Python recipe implementation to tell the Digital Connector to fetch some employment and skills data for Greater London.
The geographical unit of measurement for our exports (the ***Subject*** in DC language) will be the local authority.
The data that we will be fetching are:
* ONS data on employment/unemployment
* ONS data on Business Demography
* Data on employment seekers allowance
* Data on Gross Annual Income
**Please note that the above datasources are only indicative! You should think more holistically in order to tackle this challenge!**
Our output will be a GeoJSON file of GLA's local authorities along with the attributes of interest. Feel free to play around with the code, explore the DC, and download more resources that will help you tackle the Challenge!
### Lets get started
First, we import some libraries that we will be using as well as the recipe.py file
that contains all the classes necessary to build our recipes
```
import os
from pathlib import Path
home_dir = str(Path.home())
tdc = os.path.join(home_dir, 'Desktop/python_library_dc/digital-connector-python')
digital_connector = os.path.join(home_dir, 'Desktop/UptodateProject/TomboloDigitalConnector')
os.chdir(tdc)
from recipe import Recipe, Subject, Dataset, Geo_Match_Rule, Match_Rule, Datasource, GeographicAggregationField, FixedValueField, AttributeMatcherField, AttributeMatcher, LatestValueField, MapToContainingSubjectField, BackOffField, PercentilesField, LinearCombinationField
```
The first thing we need to do is to create a **Subject**. This represents the core geometry on which all our operations will be based. It also specifies the export geometry of our final GeoJSON file. We are using *localAuthority* and a **match_rule** to filter out all local authorities not belonging to the Greater London Area.
```
subject_geometry = Subject(subject_type_label='localAuthority', provider_label='uk.gov.ons',
match_rule=Match_Rule(attribute_to_match_on='label', pattern='E0900%'))
```
Next, we need to define our **Datasource**. This will tell DC what data to download. For more information on DC importers and datasource_id's consult the [catalogue.json](https://github.com/FutureCitiesCatapult/TomboloDigitalConnector/blob/master/src/main/resources/catalogue.json) or use the terminal
**gradle info -Pi= *name_of_the_class***
```
localAuthority = Datasource(importer_class='uk.org.tombolo.importer.ons.OaImporter',
datasource_id='localAuthority')
englandGeneralisedBoundaries = Datasource(importer_class='uk.org.tombolo.importer.ons.OaImporter' ,
datasource_id='englandBoundaries')
NOMISIncome = Datasource(datasource_id='ONSGrossAnnualIncome',
importer_class='uk.org.tombolo.importer.ons.ONSEmploymentImporter')
ONSBusiness = Datasource(datasource_id='ONSBusiness',
importer_class='uk.org.tombolo.importer.ons.ONSBusinessDemographyImporter')
NOMISJobs = Datasource(datasource_id='ONSJobsDensity',
importer_class='uk.org.tombolo.importer.ons.ONSEmploymentImporter')
NOMISEmployment = Datasource(datasource_id='APSEmploymentRate',
importer_class='uk.org.tombolo.importer.ons.ONSEmploymentImporter')
NOMISUnEmployment = Datasource(datasource_id='APSUnemploymentRate',
importer_class='uk.org.tombolo.importer.ons.ONSEmploymentImporter')
NOMISBenefits = Datasource(datasource_id='ESAclaimants',
importer_class='uk.org.tombolo.importer.ons.ONSEmploymentImporter')
PopulationDensity = Datasource(datasource_id='qs102ew',
importer_class='uk.org.tombolo.importer.ons.CensusImporter')
importers_list = [localAuthority,englandGeneralisedBoundaries, NOMISIncome, ONSBusiness, NOMISJobs,
NOMISEmployment, NOMISUnEmployment,NOMISBenefits, PopulationDensity]
```
Now that we defined the datasources we need to tell the DC which attributes to fetch from the database. To do that we create **AttributeMatcher** fields for all the attributes of interest. Having specified the attributes that we will be using, we now need to use them within DC's **Fields**. There are numerous fields each one with its own unique properties. Please consult [DC's github repo](https://github.com/FutureCitiesCatapult/TomboloDigitalConnector/blob/master/documentation/fields-and-models.md) for more information on fields.
```
### Fields ###
### Defining our attributes and passing them to fields ###
### Unemployment
unemployment_attribute = AttributeMatcher(label='APSUnemploymentRate',
provider='uk.gov.ons')
unemployment = LatestValueField(attribute_matcher=unemployment_attribute,
label='APSUnemploymentRate')
### Employment
employment_attribute = AttributeMatcher(label='APSEmploymentRate',
provider='uk.gov.ons')
employment = LatestValueField(attribute_matcher=employment_attribute,
label='APSEmploymentRate')
### Claiming allowance
claimants_attribute = AttributeMatcher(label='ESAclaimants',
provider='uk.gov.ons')
claimants = LatestValueField(attribute_matcher=claimants_attribute,
label='ESAclaimants')
### Transforming them to percentiles after taking care of the missing values ###
base_fields = {'unemployment': unemployment, 'employment': employment, 'claimants': claimants}
f = {}
for i, base_field in base_fields.items():
f['geo_{0}'.format(i)] = GeographicAggregationField(subject=subject_geometry,
field=base_field,
function='mean',
label='geo_{0}'.format(i))
f['map_{0}'.format(i)] = MapToContainingSubjectField(field=f['geo_{0}'.format(i)],
subject=Subject(subject_type_label='englandBoundaries',
provider_label='uk.gov.ons'),
label='map_{0}'.format(i))
f['backoff_{0}'.format(i)] = BackOffField(fields=[base_field,
f['map_{0}'.format(i)]],
label='backoff_{0}'.format(i))
if i == 'employment':
f['percentile_{0}'.format(i)] = PercentilesField(field=f['backoff_{0}'.format(i)],
inverse=False,
percentile_count=10,
normalization_subjects=[subject_geometry],
label='percentile_{0}'.format(i))
else:
f['percentile_{0}'.format(i)] = PercentilesField(field=f['backoff_{0}'.format(i)],
inverse=True,
percentile_count=10,
normalization_subjects=[subject_geometry],
label='percentile_{0}'.format(i))
### Combining the resulting fields with a LinearCombinationField and converting the result to percentiles ###
combined_employment = LinearCombinationField(fields=[f['percentile_claimants'],
f['percentile_employment'],
f['percentile_unemployment']],
scalars = [1.,1.,1.],
label='Unemployment lower than the East London average')
percentile_combined_employment = PercentilesField(field=combined_employment,
inverse=False,
label='unemployment',
percentile_count=10,
normalization_subjects=[subject_geometry])
```
Now we are in good shape to run our recipe!
```
### Run the exporter and plot the result ###
importers = [localAuthority,englandGeneralisedBoundaries,
NOMISEmployment,NOMISUnEmployment,NOMISBenefits]
dataset = Dataset(subjects=[subject_geometry], fields=[f['percentile_claimants'],
f['percentile_employment'], f['percentile_unemployment']],
datasources=importers)
recipe = Recipe(dataset,timestamp=False)
recipe.build_recipe(console_print=False)
recipe.run_recipe(tombolo_path=digital_connector,
output_path = 'Desktop/employment_and_skills.json', console_print=False)
```
Now let's view the results using GeoPandas.
```
import geopandas as gpd
data = gpd.read_file(home_dir + '/Desktop/employment_and_skills.json')
data.head()
```
# Get Passer Rating data by Play from DB
```
import mysql.connector
import pandas as pd
import numpy as np
from pandas import DataFrame
import matplotlib.mlab as mlab
from mysql.connector import errorcode
import matplotlib.pyplot as plt
%matplotlib inline
config = {
'user': 'db_gtown_2018',
'password': '****',
'port': '3306',
'host': 'nflnumbers.czuayagz62va.us-east-1.rds.amazonaws.com',
'database': 'db_nfl',
'raise_on_warnings': True,
}
try:
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
#Let's read all the rows in the table
readContactPerson = """SELECT
PASSER.PNAME AS PASSER
, CONCAT(PASSER.PNAME, ', ', GAME.SEAS) AS PASSER_SEAS
, CASE WHEN PASSER_PRO_BOWL.PLAYER_ID IS NOT NULL
THEN 1
ELSE 0
END AS PASSER_PRO_BOWL
, CASE WHEN PASSER.DPOS > 0
THEN PASSER.DPOS
ELSE 256
END AS PASSER_DRAFT_SPOT
, CASE WHEN PASSER.DPOS > 0
THEN 1
ELSE 0
END AS PASSER_DRAFTED
,TARGET.PNAME AS TARGET
,CASE WHEN PASS_FULL.LOC IN ('DL', 'DM', 'DR')
THEN 1
ELSE 0
END AS DEEP_PASS
,CASE WHEN PASS_FULL.LOC IN ('L', 'M', 'R', 'NL')
THEN 1
ELSE 0
END AS MED_PASS
,CASE WHEN PASS_FULL.LOC IN ('SL', 'SM', 'SR')
THEN 1
ELSE 0
END AS SHORT_PASS
, PASS_FULL.YDS
, 1 AS PASS_ATTEMPT
, PASS_FULL.COMP
, PASS_FULL.TD
, PASS_FULL.INTRCPT
, PASSER_RATING.PASS_RAT
, PASSER.HEIGHT AS PASSER_HGHT
, GAME.SEAS - RIGHT(PASSER.DOB,4) AS PASSER_AGE
, PASSER.START AS PASSER_CAREER_STRT
/*, GAME.SEAS - */
, TARGET.HEIGHT AS TGT_HGHT
, TARGET.WEIGHT AS TGT_WGHT
, CASE WHEN TARGET_PRO_BOWL.PLAYER_ID IS NOT NULL
THEN 1
ELSE 0
END AS TARGET_PRO_BOWL
, GAME.SEAS - RIGHT(TARGET.DOB,4) AS TGT_AGE
, CASE WHEN TARGET.DPOS IS NULL
THEN 256
WHEN TARGET.DPOS = 0
THEN 256
ELSE TARGET.DPOS
END AS TGT_DRAFT_SPOT
, CASE WHEN TARGET.DPOS > 0
THEN 1
ELSE 0
END AS TGT_DRAFTED
, TARGET.START AS TGT_CAREER_STRT /*'USE TO FIGURE OUT YEARS IN LEAGUE'*/
, CASE WHEN TARGET.FORTY = 0
THEN ROUND(AVERAGE_FORTY.AVG_FORTY,3)
ELSE TARGET.FORTY
END AS TGT_FORTY
, CASE WHEN TARGET.VERTICAL = 0
THEN ROUND(AVERAGE_VERTICAL.AVG_VERT,3)
ELSE TARGET.VERTICAL
END AS TGT_VERT
, CASE WHEN TARGET.POS1 = 'RB'
THEN 1
ELSE 0
END AS RB_TGT
, CASE WHEN TARGET.POS1 = 'WR'
THEN 1
ELSE 0
END AS WR_TGT
, CASE WHEN TARGET.POS1 = 'TE'
THEN 1
ELSE 0
END AS TE_TGT
, CASE WHEN TARGET.POS1 <> 'RB'
AND TARGET.POS1 <> 'TE'
AND TARGET.POS1 <> 'WR'
THEN 1
ELSE 0
END AS TRICK_PLAY
/*,QTR
,MIN *//*'CAN BE USED FOR TWO MIN DRILL'*/
, YTG
, Case when PASS_FULL.YDS >= YTG
THEN 1
ELSE 0
END AS FIRST_DOWN_CONVERSION
, CASE WHEN ZONE = 5
THEN 1
ELSE 0
END AS RED_ZONE_ATTMPT /*'DEFINITION AVAILABLE'*/
, CASE WHEN ZONE = 5
THEN PASSER_RATING.PASS_RAT
ELSE NULL
END AS RED_ZONE_QB_RAT /*'DEFINITION AVAILABLE'*/
, CASE WHEN DWN = 4
AND PASS_FULL.YDS >= YTG
THEN 1
when DWN = 4
AND PASS_FULL.YDS < YTG
THEN 0
ELSE NULL
END
AS FOURTH_DOWN_SUCCESS
, CASE WHEN DWN = 3
AND PASS_FULL.YDS >= YTG
THEN 1
when DWN = 3
AND PASS_FULL.YDS < YTG
THEN 0
ELSE NULL
END
AS THIRD_DOWN_SUCCESS
, CASE WHEN QTR = 4
THEN PASSER_RATING.PASS_RAT
ELSE null
END AS FOURTH_QTR_PASS_RAT
, CASE WHEN SG = 'Y'
THEN 1
ELSE 0
END as SHOTGUN
, CASE WHEN NH = 'Y'
THEN 1
ELSE 0
END AS NO_HUDDLE
, CASE WHEN PENALTY.ACT IS NOT NULL
THEN 1
ELSE 0
END AS DEF_PENALTY_DCLND
,CASE WHEN PENALTY.ACT IS NOT NULL
THEN PASSER_RATING.PASS_RAT
ELSE NULL
END AS FREE_PLAY_PASS_RAT
, PENALTY.DESC as PENALTY_DESC
,GAME.SEAS
/* ,GAME.WK
,GAME.DAY
,GAME.V 'FIGURE OUT HOW TO DO HOME OR AWAY'
,GAME.H*/
/*HOW DO WE CALCULATE WHETHER THE PASSER IS AT HOME OR AWAY?*/
,GAME.TEMP
,GAME.HUMD
,GAME.WSPD
,case when COND IN ('Rain', 'Showers', 'Snow', 'Thunderstorms', 'Cold'
,'Flurries'
,'Light Rain'
,'Light Showers'
,'Light Snow'
,'Windy'
)
THEN 1
ELSE 0
END AS BAD_WEATH
, case when COND IN ('Chance Rain'
,'Clear'
,'Closed Roof'
,'Cloudy'
,'Covered Roof'
,'Dome'
,'Fair'
,'Foggy'
,'Hazy'
,'Mostly Cloudy'
,'Mostly Sunny'
,'Overcast'
,'Partly Cloudy'
,'Partly Sunny'
,'Sunny' )
THEN 1
WHEN COND IS NULL
THEN 1
ELSE 0
END AS GOOD_WEATH
,GAME.COND
, CASE WHEN GAME.COND = 'DOME'
THEN 1
ELSE 0
END AS DOME_GAME
, CASE WHEN GAME.SURF <> 'GRASS'
THEN 1
ELSE 0
END AS TURF_FIELD
FROM PBP
INNER JOIN PASS_FULL
ON PBP.PID = PASS_FULL.PID
left outer join GAME
ON PBP.GID = GAME.GID
LEFT OUTER JOIN PENALTY
ON PBP.PID = PENALTY.PID
AND PENALTY.CAT = '4'
AND PENALTY.ACT = 'D'
INNER JOIN PLAYER PASSER
ON PASS_FULL.PSR = PASSER.PLAYER
LEFT OUTER JOIN PLAYER TARGET
ON PASS_FULL.TRG = TARGET.PLAYER
LEFT OUTER JOIN PASSER_RATING
ON PASS_FULL.YDS = PASSER_RATING.YDS
AND PASS_FULL.COMP = PASSER_RATING.COMPL
AND PASS_FULL.TD = PASSER_RATING.TD
AND PASS_FULL.INTRCPT = PASSER_RATING.INTRCPT
LEFT OUTER JOIN (SELECT
POS1
, AVG(FORTY) AS AVG_FORTY
,COUNT(*)
FROM PLAYER
WHERE FORTY > 0
GROUP BY
POS1) AVERAGE_FORTY
ON TARGET.POS1 = AVERAGE_FORTY.POS1
LEFT OUTER JOIN
(SELECT
POS1
, AVG(VERTICAL) AS AVG_VERT
,COUNT(*)
FROM PLAYER
WHERE VERTICAL > 0
GROUP BY
POS1) AVERAGE_VERTICAL
ON TARGET.POS1 = AVERAGE_VERTICAL.POS1
LEFT OUTER JOIN PRO_BOWL PASSER_PRO_BOWL
ON GAME.SEAS = PASSER_PRO_BOWL.ProBowl_Year
AND PASSER.PLAYER = PASSER_PRO_BOWL.PLAYER_ID
LEFT OUTER JOIN PRO_BOWL TARGET_PRO_BOWL
ON GAME.SEAS = TARGET_PRO_BOWL.ProBowl_Year
AND TARGET.PLAYER = TARGET_PRO_BOWL.PLAYER_ID
WHERE PASS_FULL.SPK = 0
AND GAME.SEAS < 2017
"""
cursor.execute(readContactPerson)
#specify the attributes that you want to display
df = DataFrame(cursor.fetchall())
cnx.commit()
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exist")
else:
print(err)
else:
cursor.close()
cnx.close()
df.columns = ['PASSER',
'PASSER_SEAS',
'PASSER_PRO_BOWL',
'PASSER_DRAFT_SPOT',
'PASSER_DRAFTED',
'TARGET',
'DEEP_PASS',
'MED_PASS',
'SHORT_PASS',
'YDS',
'PASS_ATTEMPT',
'COMP',
'TD',
'INTRCPT',
'PASS_RAT',
'PASSER_HGHT',
'PASSER_AGE',
'PASSER_CAREER_STRT',
'TGT_HGHT',
'TGT_WGHT',
'TARGET_PRO_BOWL',
'TGT_AGE',
'TGT_DRAFT_SPOT',
'TGT_DRAFTED',
'TGT_CAREER_STRT',
'TGT_FORTY',
'TGT_VERT',
'RB_TGT',
'WR_TGT',
'TE_TGT',
'TRICK_PLAY',
'YTG',
'FIRST_DOWN_CONVERSION',
'RED_ZONE_ATTMPT',
'RED_ZONE_QB_RAT',
'FOURTH_DOWN_SUCCESS',
'THIRD_DOWN_SUCCESS',
'FOURTH_QTR_PASS_RAT',
'SHOTGUN',
'NO_HUDDLE',
'DEF_PENALTY_DCLND',
'FREE_PLAY_PASS_RAT',
'PENALTY_DESC',
'SEAS',
'TEMP',
'HUMD',
'WSPD',
'BAD_WEATH',
'GOOD_WEATH',
'COND',
'DOME_GAME',
'TURF_FIELD'
]
df.head(5)
```
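The query reads `PASS_RAT` from a precomputed `PASSER_RATING` lookup table keyed on yards, completions, touchdowns, and interceptions. For reference, the standard NFL passer-rating formula that such a table encodes can be sketched as follows (the function below is illustrative and not part of the notebook's codebase):

```python
def passer_rating(comp, att, yds, td, intc):
    """Standard NFL passer rating; each component is clamped to [0, 2.375]."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((comp / att - 0.3) * 5)       # completion percentage component
    b = clamp((yds / att - 3) * 0.25)       # yards-per-attempt component
    c = clamp((td / att) * 20)              # touchdown-rate component
    d = clamp(2.375 - (intc / att) * 25)    # interception-rate component
    return (a + b + c + d) / 6 * 100

# a statistically perfect game maxes out at 158.3
print(round(passer_rating(comp=32, att=40, yds=500, td=5, intc=0), 1))  # → 158.3
```

Since `COMP`, `YDS`, `TD`, and `INTRCPT` are already selected by the query, the join to `PASSER_RATING` is mainly a convenience; the rating could equally be computed in pandas with a function like this.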
## Per Pass passer rating distribution
```
plt.figure(figsize=(10,4))
df["PASSER_PRO_BOWL"].value_counts().plot(kind='bar')
#Too many pass attempts; let's group them into per-season/passer aggregates
#df.groupby(['PASSER', 'SEAS']).size()
df_passer_by_season = df.groupby(['PASSER', 'SEAS']).agg({
'PASSER': np.min,
'PASSER_SEAS': np.min,
'PASSER_PRO_BOWL': np.max,
'PASS_RAT': np.mean,
'PASS_ATTEMPT': np.size,
'COMP': np.sum,
'PASSER_DRAFT_SPOT': np.max,
'PASSER_DRAFTED': np.max,
'DEEP_PASS': np.mean,
'MED_PASS': np.mean,
'SHORT_PASS': np.mean,
'YDS': np.sum,
'TD': np.sum,
'INTRCPT': np.sum,
'PASSER_HGHT': np.max,
'PASSER_AGE': np.max,
'PASSER_CAREER_STRT': np.max,
'TGT_HGHT': np.mean,
'TGT_WGHT': np.mean,
'TARGET_PRO_BOWL': np.mean,
'TGT_AGE': np.mean,
'TGT_DRAFT_SPOT': np.mean,
'TGT_DRAFTED': np.mean,
'TGT_CAREER_STRT': np.mean,
'TGT_FORTY': np.mean,
'TGT_VERT':np.mean,
'RB_TGT': np.mean,
'WR_TGT': np.mean,
'TE_TGT': np.mean,
'TRICK_PLAY': np.mean,
'FIRST_DOWN_CONVERSION': np.mean,
'RED_ZONE_ATTMPT': np.mean,
'RED_ZONE_QB_RAT': np.mean ,
'FOURTH_DOWN_SUCCESS': np.mean,
'THIRD_DOWN_SUCCESS': np.mean,
'FOURTH_QTR_PASS_RAT': np.mean,
'SHOTGUN': np.mean,
'NO_HUDDLE': np.mean,
'DEF_PENALTY_DCLND': np.sum,
'FREE_PLAY_PASS_RAT': np.mean,
'SEAS': np.max,
'GOOD_WEATH': np.mean,
'DOME_GAME': np.mean,
'TURF_FIELD': np.mean,
})
##df_passer_by_season = df.groupby(['PASSER', 'SEAS']).agg({'PASSER': { 'PASSER_NAME': np.min, 'PASS_ATTEMPTS': np.size}, 'PASSER_PRO_BOWL': np.max, 'PASS_RAT': np.mean, 'COMP': np.sum, 'PASSER_DRAFT_SPOT': np.max, 'PASSER_DRAFTED': np.max, 'DEEP_PASS': np.mean, 'MED_PASS': np.mean, 'SHORT_PASS': np.mean, 'YDS': np.sum, 'TD': np.sum, 'INTRCPT': np.sum, 'PASSER_HGHT': np.max, 'PASSER_AGE': np.max, 'PASSER_CAREER_STRT': np.max, 'TGT_HGHT': np.mean, 'TGT_WGHT': np.mean, 'TARGET_PRO_BOWL': np.mean, 'TGT_AGE': np.mean, 'TGT_DRAFT_SPOT': np.mean, 'TGT_DRAFTED': np.mean, 'TGT_CAREER_STRT': np.mean, 'TGT_FORTY': np.mean, 'TGT_VERT':np.mean, 'RB_TGT': np.mean, 'WR_TGT': np.mean, 'TE_TGT': np.mean, 'TRICK_PLAY': np.mean, 'FIRST_DOWN_CONVERSION': np.mean, 'RED_ZONE_ATTMPT': np.mean, 'RED_ZONE_QB_RAT': np.mean , 'FOURTH_DOWN_SUCCESS': np.mean, 'THIRD_DOWN_SUCCESS': np.mean, 'FOURTH_QTR_PASS_RAT': np.mean, 'SHOTGUN': np.mean, 'NO_HUDDLE': np.mean, 'DEF_PENALTY_DCLND': np.sum, 'FREE_PLAY_PASS_RAT': np.mean, 'SEAS': np.max, 'GOOD_WEATH': np.mean, 'DOME_GAME': np.mean, 'TURF_FIELD': np.mean,
## })
df_passer_by_season
#PASS_RAT.mean()
# 'YTG': np.mean,
## YTG is a problem - investigate why
# Temp is a problem too
# 'HUMD': np.mean, 'WSPD': np.mean, 'BAD_WEATH': np.mean,
#df_pass_summ = pd.DataFrame(df_pass_loc)
##df_pass_summ.plot.bar()
import matplotlib.pyplot as plt
pass_limit = 35
#Removing passers with limited pass sample size
df_passer_by_season_qb = df_passer_by_season.loc[df_passer_by_season['PASS_ATTEMPT'] > pass_limit]
plt.scatter(df_passer_by_season_qb.PASS_ATTEMPT, df_passer_by_season_qb.PASS_RAT , c=df_passer_by_season_qb.PASSER_PRO_BOWL)
plt.suptitle('QBs & Pro-Bowl', fontsize=20)
plt.xlabel('Passes Attempted', fontsize=18)
plt.ylabel('Passer Rating', fontsize=16)
#Check how many null values remain for our sample size
df_passer_by_season_qb.isnull().sum()
df_passer_by_season_qb_nums_only = df_passer_by_season_qb.drop(['PASSER', 'PASSER_SEAS', 'FREE_PLAY_PASS_RAT'], axis=1)
print("Removed 3 columns to convert all data to numerical, removing missing 4th down data.")
df_passer_by_season_qb_nums_only.head(2)
#Fill in a few missing records with the average
df_passer_by_season_qb_nums_only = df_passer_by_season_qb_nums_only.fillna(df_passer_by_season_qb_nums_only.mean())
df_passer_by_season_qb_nums_only.isnull().sum()
##Interactive chart attempt 1
import plotly as py
import plotly.graph_objs as go
import ipywidgets as widgets
#lets group passes into season average over time
df_pass_seas_avg = df.groupby('SEAS').PASS_RAT.mean()
df_pass_trend = pd.DataFrame(df_pass_seas_avg)
df_pass_trend.plot()
import seaborn as sns
f, ax = plt.subplots(figsize=(10, 8))
corr = df_passer_by_season_qb_nums_only.corr()
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=bool), cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
import plotly
plotly.tools.set_credentials_file(username='GTown2018', api_key='****')
import plotly.plotly as py
import plotly.graph_objs as go
df_passer_by_season_probowl = df_passer_by_season.loc[(df_passer_by_season['PASS_ATTEMPT'] > 100) & (df_passer_by_season['PASSER_PRO_BOWL'] == 1)]
df_passer_by_season_qb_non_pb = df_passer_by_season.loc[(df_passer_by_season['PASS_ATTEMPT'] > 100) & (df_passer_by_season['PASSER_PRO_BOWL'] == 0)]
# Create a trace
trace0 = go.Scatter(
x = df_passer_by_season_probowl.PASS_ATTEMPT,
y = df_passer_by_season_probowl.PASS_RAT,
mode = 'markers',
name = 'Pro-Bowler',
marker = dict(
size = 10,
color = 'rgba(152, 0, 0, .8)'),
text= (df_passer_by_season_probowl['PASSER_SEAS'])
)
trace1 = go.Scatter(
x = df_passer_by_season_qb_non_pb.PASS_ATTEMPT,
y = df_passer_by_season_qb_non_pb.PASS_RAT,
mode = 'markers',
name = 'Non Pro-Bowler',
marker = dict(
size = 5,
color = 'rgba(0, 152, 200, .8)'),
text= df_passer_by_season_qb_non_pb['PASSER_SEAS']
)
layout = go.Layout(
title='QBs & ProBowls',
xaxis=dict(
title='Passes Attempted',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
),
yaxis=dict(
title='Passer Rating',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
)
)
#data = [trace0, trace1]
scatter_pb = go.Figure(data = [trace0, trace1], layout=layout)
# Plot and embed in ipython notebook!
#py.iplot(data)
py.iplot(scatter_pb)
df_passer_by_season_qb_nums_only.describe()
#df_passer_by_season_qb_nums_only.isnull().sum()
df_passer_by_season_qb_nums_only.groupby(['PASSER_DRAFTED']).agg({'PASSER_DRAFTED': np.size})
import sklearn
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train_data = scaler.fit_transform(df_passer_by_season_qb_nums_only)
# test_data = scaler.transform(df_passer_by_season_min_pass)
train_data
df_standardized = pd.DataFrame(train_data,index=train_data[:,0])
df_standardized.columns = [
'PASSER_PRO_BOWL',
'PASS_RAT',
'PASS_ATTEMPT',
'COMP',
'PASSER_DRAFT_SPOT',
'PASSER_DRAFTED',
'DEEP_PASS',
'MED_PASS',
'SHORT_PASS',
'YDS',
'TD',
'INTRCPT',
'PASSER_HGHT',
'PASSER_AGE',
'PASSER_CAREER_STRT',
'TGT_HGHT',
'TGT_WGHT',
'TARGET_PRO_BOWL',
'TGT_AGE',
'TGT_DRAFT_SPOT',
'TGT_DRAFTED',
'TGT_CAREER_STRT',
'TGT_FORTY',
'TGT_VERT',
'RB_TGT',
'WR_TGT',
'TE_TGT',
'TRICK_PLAY',
'FIRST_DOWN_CONVERSION',
'RED_ZONE_ATTMPT',
'RED_ZONE_QB_RAT',
'FOURTH_DOWN_SUCCESS',
'THIRD_DOWN_SUCCESS',
'FOURTH_QTR_PASS_RAT',
'SHOTGUN',
'NO_HUDDLE',
'DEF_PENALTY_DCLND',
'SEAS',
'GOOD_WEATH',
'DOME_GAME',
'TURF_FIELD']
df_standardized.head(3)
f, ax = plt.subplots(figsize=(10, 8))
corr = df_standardized.corr()
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=bool), cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
features = df_standardized.drop(['PASSER_PRO_BOWL'], axis=1)
labels = df_standardized['PASSER_PRO_BOWL']
```
## Regularization of Features
```
list(features)
df_standardized.plot.box(figsize=(80,10))
df_standardized.groupby(['PASSER_DRAFTED']).agg({'PASSER_DRAFTED': np.size})
from sklearn.linear_model import Ridge, Lasso, ElasticNet
#Regularization
#Lasso L1
model = Lasso()
model.fit(features, labels)
print(list(zip(features, model.coef_.tolist())))
model = Ridge()
model.fit(features, labels)
print(list(zip(features, model.coef_.tolist())))
model = ElasticNet()
model.fit(features, labels)
print(list(zip(features, model.coef_.tolist())))
corr_matrix = df_standardized.corr().abs()
corr_matrix
high_corr_var=np.where(corr_matrix>0.8)
high_corr_var=[(corr_matrix.index[x],corr_matrix.columns[y]) for x,y in zip(*high_corr_var) if x!=y and x<y]
high_corr_var
```
```
!pip install scikit-learn==1.0
!pip install xgboost==1.4.2
!pip install catboost==0.26.1
!pip install pandas==1.3.3
!pip install radiant-mlhub==0.3.0
!pip install rasterio==1.2.8
!pip install numpy==1.21.2
!pip install pathlib==1.0.1
!pip install tqdm==4.62.3
!pip install joblib==1.0.1
!pip install matplotlib==3.4.3
!pip install Pillow==8.3.2
!pip install torch==1.9.1
!pip install plotly==5.3.1
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
import pandas as pd
import numpy as np
import random
import torch
def seed_all(seed_value):
random.seed(seed_value) # Python
np.random.seed(seed_value) # cpu vars
torch.manual_seed(seed_value) # cpu vars
if torch.cuda.is_available():
torch.cuda.manual_seed(seed_value)
torch.cuda.manual_seed_all(seed_value) # gpu vars
torch.backends.cudnn.deterministic = True #needed
torch.backends.cudnn.benchmark = False
seed_all(13)
# from google.colab import drive
# drive.mount('/content/drive')
import warnings
warnings.filterwarnings("ignore")
import gc
import pandas as pd
import numpy as np
from sklearn.metrics import *
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from indices_creation import *
```
## Data Load Step
1. We load the mean aggregations for both train and test. The mean aggregations contain the labels and field IDs.
2. The quantile aggregations contain the field IDs.
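The recurring pattern in this step is left-joining auxiliary per-field tables onto the main feature table by `field_id`. A minimal sketch (all column names except `field_id` are made up for illustration):

```python
import pandas as pd

train = pd.DataFrame({'field_id': [1, 2, 3], 'B04_mean': [0.1, 0.2, 0.3]})
sizes = pd.DataFrame({'field_id': [1, 3], 'size_of_field': [120, 80]})

# a left join keeps every training field, leaving NaN where no size is known
train = train.merge(sizes, on='field_id', how='left')
print(train)
```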
```
import os
os.getcwd()
train_df_mean = pd.read_csv('train_mean.csv')
#### we need to drop 'label' and 'field_id' later in the code
test_df_mean = pd.read_csv('test_mean.csv')
#### we need to drop 'field_id' later in the code
train_df_median = pd.read_csv('train_median.csv')
#### we need to drop 'field_id' later in the code
test_df_median = pd.read_csv('test_median.csv')
#### we need to drop 'field_id' later in the code
train_size = pd.read_csv('size_of_field_train.csv')
test_size = pd.read_csv('size_of_field_test.csv')
train_size = train_size.rename({'Field_id':'field_id'},axis=1)
test_size = test_size.rename({'Field_id':'field_id'},axis=1)
train_df_median = train_df_median.merge(train_size, on =['field_id'],how='left')
test_df_median = test_df_median.merge(test_size, on =['field_id'],how='left')
cluster_df = pd.read_csv('seven_cluster.csv')
cluster_df = cluster_df.rename({'cluster_label':'cluster_label_7'},axis=1)
train_df_median = train_df_median.merge(cluster_df,on=['field_id'],how='left')
test_df_median = test_df_median.merge(cluster_df,on=['field_id'],how='left')
gc.collect()
full_nearest1=pd.read_csv('full_nearest_radius_0.25.csv')
full_nearest2=pd.read_csv('full_nearest_radius_0.4.csv')
colsnearest40 = full_nearest2.columns.tolist()
_ = colsnearest40.remove('field_id')
colsnearest40
train_df_median = train_df_median.merge(full_nearest1,on=['field_id'],how='left')
train_df_median = train_df_median.merge(full_nearest2,on=['field_id'],how='left')
print(train_df_median.shape)
test_df_median = test_df_median.merge(full_nearest1,on=['field_id'],how='left')
test_df_median = test_df_median.merge(full_nearest2,on=['field_id'],how='left')
```
## Removing Erroneous data points
We observed some data points for which the labels were floats. We will remove them (they are few in number) to make sure our model learns from correctly labelled data points.
```
print(f'The shape of train data before outlier removal - {train_df_mean.shape}')
train_df_mean = train_df_mean[train_df_mean.label.isin(list(range(1,10)))]
print(f'The shape of train data after outlier removal - {train_df_mean.shape}')
relevant_fids = train_df_mean['field_id'].values.tolist()
train_df_median = train_df_median[train_df_median['field_id'].isin(relevant_fids)]
print(f'The shape of median train data - {train_df_median.shape} and mean train data {train_df_mean.shape}' )
### two extra columns in train_df_mean being 'label' and 'size_of_field'
```
### Extract date list
We extract the list of all dates where observations were seen for index generation
```
cols = ['B01_','B02_','B03_','B04_','B05_','B06_','B07_','B08_','B09_','B8A_','B11_','B12_']
columns_available = train_df_mean.columns.tolist()
cols2consider = []
for col in cols:
cols2consider.extend( [c for c in columns_available if col in c])
bands_with_dates = [c for c in columns_available if 'B01_' in c]
dates = [c.replace('B01_','') for c in bands_with_dates]
print(f'The first few observation dates are {dates[:10]}')
print(f'The last few observation dates are {dates[-10:]}')
```
### Removal of field ID column
We keep only the columns relevant for the next step.
```
train_df_mean = train_df_mean[cols2consider+['label']]
test_df_mean = test_df_mean[cols2consider]
train_df_median = train_df_median[cols2consider+['size_of_field']+['cluster_label_7']+full_nearest1.columns.tolist()+colsnearest40]
test_df_median = test_df_median[cols2consider+['size_of_field']+['cluster_label_7']+full_nearest1.columns.tolist()+colsnearest40]
```
### Indices Creation
We will create the indices for the train and test mean aggregates using the indices coded in the indices_creation.py module.
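`indices_creation.py` itself is not shown here. As an illustration of what one of these functions computes, NDVI = (NIR − Red) / (NIR + Red) can be built per date from the band columns. The sketch below assumes the `B08_<date>` (NIR) and `B04_<date>` (red) column pattern used above and is not the module's actual code:

```python
import pandas as pd

def get_band_ndvi_sketch(df, dates):
    # one NDVI column per observation date, from the NIR (B08) and red (B04) bands
    for d in dates:
        df[f'NDVI_{d}'] = (df[f'B08_{d}'] - df[f'B04_{d}']) / (df[f'B08_{d}'] + df[f'B04_{d}'])
    return df

toy = pd.DataFrame({'B08_2017-01-01': [0.8, 0.5], 'B04_2017-01-01': [0.2, 0.5]})
toy = get_band_ndvi_sketch(toy, ['2017-01-01'])
print(toy)
```

Note that a zero denominator would produce ±inf; the notebook handles this later by replacing ±inf with NaN before modelling.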
```
# train_df_mean = get_band_ndvi_red(train_df_mean,dates)
# train_df_mean = get_band_afri(train_df_mean,dates)
# train_df_mean = get_band_evi2(train_df_mean,dates)
# train_df_mean = get_band_ndmi(train_df_mean,dates)
# train_df_mean = get_band_ndvi(train_df_mean,dates)
# train_df_mean = get_band_evi(train_df_mean,dates)
# train_df_mean = get_band_bndvi(train_df_mean,dates)
# train_df_mean = get_band_nli(train_df_mean,dates)
# train_df_mean = get_band_lci(train_df_mean,dates)
# test_df_mean = get_band_ndvi_red(test_df_mean,dates)
# test_df_mean = get_band_afri(test_df_mean,dates)
# test_df_mean = get_band_evi2(test_df_mean,dates)
# test_df_mean = get_band_ndmi(test_df_mean,dates)
# test_df_mean = get_band_ndvi(test_df_mean,dates)
# test_df_mean = get_band_evi(test_df_mean,dates)
# test_df_mean = get_band_bndvi(test_df_mean,dates)
# test_df_mean = get_band_nli(test_df_mean,dates)
# test_df_mean = get_band_lci(test_df_mean,dates)
```
We will create the indices for the train and test median aggregates using the indices coded in the indices_creation.py module.
```
train_df_median = get_band_ndvi_red(train_df_median,dates)
train_df_median = get_band_afri(train_df_median,dates)
train_df_median = get_band_evi2(train_df_median,dates)
train_df_median = get_band_ndmi(train_df_median,dates)
train_df_median = get_band_ndvi(train_df_median,dates)
train_df_median = get_band_evi(train_df_median,dates)
train_df_median = get_band_bndvi(train_df_median,dates)
train_df_median = get_band_nli(train_df_median,dates)
# train_df_median = get_band_lci(train_df_median,dates)
test_df_median = get_band_ndvi_red(test_df_median,dates)
test_df_median = get_band_afri(test_df_median,dates)
test_df_median = get_band_evi2(test_df_median,dates)
test_df_median = get_band_ndmi(test_df_median,dates)
test_df_median = get_band_ndvi(test_df_median,dates)
test_df_median = get_band_evi(test_df_median,dates)
test_df_median = get_band_bndvi(test_df_median,dates)
test_df_median = get_band_nli(test_df_median,dates)
# test_df_median = get_band_lci(test_df_median,dates)
# train_df_median = train_df_median.drop(cols2consider,axis=1)
# test_df_median = test_df_median.drop(cols2consider,axis=1)
train_df_mean.shape,train_df_median.shape,test_df_mean.shape,test_df_median.shape
######### Saving the label variable and dropping it from the data
train_y = train_df_mean['label'].values
train_df_mean = train_df_mean.drop(['label'],axis=1)
train_df_mean.replace([np.inf, -np.inf], np.nan, inplace=True)
test_df_mean.replace([np.inf, -np.inf], np.nan, inplace=True)
train_df_median.replace([np.inf, -np.inf], np.nan, inplace=True)
test_df_median.replace([np.inf, -np.inf], np.nan, inplace=True)
# train_df_slope.replace([np.inf, -np.inf], np.nan, inplace=True)
# test_df_slope.replace([np.inf, -np.inf], np.nan, inplace=True)
train = train_df_median.values
test = test_df_median.values
# train = pd.concat([train_df_median,train_df_slope],axis=1).values
# test = pd.concat([test_df_median,test_df_slope],axis=1).values
print(f'The shape of model ready train data is {train.shape} and model ready test data is {test.shape}')
print(f'The shape of target is {train_y.shape}')
train1 = pd.read_csv('train_with_slopes.csv')
test1 = pd.read_csv('test_with_slopes.csv')
train1.replace([np.inf, -np.inf], np.nan, inplace=True)
test1.replace([np.inf, -np.inf], np.nan, inplace=True)
train2=pd.concat([pd.DataFrame(train1.values,columns=train1.columns),train_df_median[['size_of_field','cluster_label_7']+full_nearest1.columns.tolist()+colsnearest40].reset_index(drop=True)],axis=1)
test2=pd.concat([pd.DataFrame(test1.values,columns=test1.columns),test_df_median[['size_of_field','cluster_label_7']+full_nearest1.columns.tolist()+colsnearest40].reset_index(drop=True)],axis=1)
train2.head()
del train2['field_id']
del test2['field_id']
pivot=pd.read_csv('pivottable.csv')
pivot
train2=train2.merge(pivot,how='left',on='cluster_label_7')
test2=test2.merge(pivot,how='left',on='cluster_label_7')
train2
del train_df_mean,train_df_median,train1,train_size,test_df_mean,test_df_median,test1,test_size
import gc
gc.collect()
train = train2[list(set(train2.columns.tolist()))].values
test = test2[list(set(train2.columns.tolist()))].values
train.shape,test.shape
# (1616-1520)/8
# train_slope = train1[[cols for cols in train1.columns if re.findall(r'\b(\w+NDVI_red_slope)\b',cols)]]
# test_slope = test1[[cols for cols in test1.columns if re.findall(r'\b(\w+NDVI_red_slope)\b',cols)]]
# train = pd.concat([train_df_median.reset_index(drop=True),train_slope],axis=1).values
# test = pd.concat([test_df_median.reset_index(drop=True),test_slope],axis=1).values
# train.shape,test.shape
oof_pred = np.zeros((len(train), 9))
y_pred_final = np.zeros((len(test),9 ))
num_models = 3
temperature = 50
n_splits = 15
error = []
kf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=13)
for fold, (tr_ind, val_ind) in enumerate(kf.split(train, train_y)):
wghts = [0]*num_models
logloss = []
X_train, X_val = train[tr_ind], train[val_ind]
# X_train1, X_val1 = train_max[tr_ind], train_max[val_ind]
y_train, y_val = train_y[tr_ind], train_y[val_ind]
model1 = XGBClassifier(n_estimators=2000,random_state=13,learning_rate=0.04,colsample_bytree=0.95,reg_lambda=13,
tree_method='gpu_hist',eval_metric='mlogloss')
model2 = CatBoostClassifier(task_type='GPU',verbose=False,n_estimators=5000,random_state=13,auto_class_weights='SqrtBalanced',max_depth=9,learning_rate=0.05)
model3 = CatBoostClassifier(task_type='GPU',verbose=False,n_estimators=5000,random_state=13,auto_class_weights='SqrtBalanced',max_depth=10,learning_rate=0.04)
# model4 = CatBoostClassifier(task_type='GPU',verbose=False,n_estimators=5000,random_state=13,auto_class_weights='SqrtBalanced',max_depth=11)
model1.fit(X_train,y_train)
val_pred1 = model1.predict_proba(X_val)
logloss.append(log_loss(y_val,val_pred1))
print('validation logloss model1 fold-',fold+1,': ',log_loss(y_val,val_pred1))
model2.fit(X_train,y_train)
val_pred2 = model2.predict_proba(X_val)
logloss.append(log_loss(y_val,val_pred2))
print('validation logloss model2 fold-',fold+1,': ',log_loss(y_val,val_pred2))
model3.fit(X_train,y_train)
val_pred3 = model3.predict_proba(X_val)
logloss.append(log_loss(y_val,val_pred3))
print('validation logloss model3 fold-',fold+1,': ',log_loss(y_val,val_pred3))
# model4.fit(X_train,y_train)
# val_pred4 = model4.predict_proba(X_val)
# logloss.append(log_loss(y_val,val_pred4))
# print('validation logloss model4 fold-',fold+1,': ',log_loss(y_val,val_pred4))
wghts = np.exp(-temperature*np.array(logloss)/sum(logloss))
wghts = wghts/sum(wghts)
print(wghts)
val_pred = wghts[0]*val_pred1+wghts[1]*val_pred2+wghts[2]*val_pred3 #+wghts[3]*val_pred4
print('Validation logloss for fold- ',fold+1,': ',log_loss(y_val,val_pred))
oof_pred[val_ind] = val_pred
y_pred_final += (wghts[0]*model1.predict_proba(test)+
wghts[1]*model2.predict_proba(test)+wghts[2]*model3.predict_proba(test)
)/(n_splits)
print('OOF LogLoss :- ',(log_loss(train_y,oof_pred)))
outputs = y_pred_final.copy()
test_df = pd.read_csv('test_mean.csv')
field_ids_test = test_df['field_id'].values.tolist()
data_test = pd.DataFrame(outputs)
data_test['field_id'] = field_ids_test
data_test = data_test[data_test.field_id != 0]
data_test
data_test = data_test.rename(columns={
0:'Lucerne/Medics',
1:'Planted pastures (perennial)',
2:'Fallow',
3:'Wine grapes',
4:'Weeds',
5:'Small grain grazing',
6:'Wheat',
7:'Canola',
8:'Rooibos'
})
pred_df = data_test[['field_id', 'Lucerne/Medics', 'Planted pastures (perennial)', 'Fallow', 'Wine grapes', 'Weeds', 'Small grain grazing', 'Wheat', 'Canola', 'Rooibos']]
pred_df['field_id'] = pred_df['field_id'].astype(int)
pred_df = pred_df.sort_values(by=['field_id'],ascending=True)
pred_df
pred_df.to_csv('trial1_sep_akash.csv',index=False)
```
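The fold loop above blends the three models with weights derived from their validation log-losses via a temperature-scaled softmax. Pulled out in isolation, the weighting rule looks like this (a sketch of the same computation, with a hypothetical function name):

```python
import numpy as np

def temperature_weights(losses, temperature=50):
    # lower validation loss -> larger weight; a higher temperature sharpens the preference
    losses = np.asarray(losses, dtype=float)
    w = np.exp(-temperature * losses / losses.sum())
    return w / w.sum()

print(temperature_weights([0.30, 0.32, 0.35]))  # weights sum to 1; the best model dominates
```

With `temperature = 50`, even small log-loss gaps translate into strongly unequal weights, which is the intent of the ensemble above.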
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D5_Statistics/W0D5_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 2: Statistical Inference
**Week 0, Day 5: Probability & Statistics**
**By Neuromatch Academy**
__Content creators:__ Ulrik Beierholm
__Content reviewers:__ Natalie Schaworonkow, Keith van Antwerp, Anoop Kulkarni, Pooya Pakarian, Hyosub Kim
__Production editors:__ Ethan Cheng, Ella Batty
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
This tutorial builds on Tutorial 1 by explaining how to do inference through inverting the generative process.
By completing the exercises in this tutorial, you should:
* understand what the likelihood function is, and have some intuition of why it is important
* know how to summarise the Gaussian distribution using mean and variance
* know how to maximise a likelihood function
* be able to do simple inference in both classical and Bayesian ways
* (Optional) understand how Bayes Net can be used to model causal relationships
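As a small numerical preview of the "maximise a likelihood function" objective (this sketch is not part of the tutorial's own code): for Gaussian data with known variance, the log-likelihood over candidate means peaks at the sample mean, which is the closed-form maximum likelihood estimate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=500)

# evaluate the summed Gaussian log-likelihood over a grid of candidate means
mu_grid = np.linspace(0, 4, 401)
log_lik = np.array([norm.logpdf(samples, loc=mu, scale=1.0).sum() for mu in mu_grid])
mu_hat = mu_grid[np.argmax(log_lik)]

print(mu_hat, samples.mean())  # the grid maximiser sits next to the sample mean
```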
---
# Setup
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy.stats import norm
from numpy.random import default_rng # a default random number generator
#@title Figure settings
import ipywidgets as widgets # interactive display
from ipywidgets import interact, fixed, HBox, Layout, VBox, interactive, Label, interact_manual
%config InlineBackend.figure_format = 'retina'
# plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle")
#@title Plotting & Helper functions
def plot_hist(data, xlabel, figtitle = None, num_bins = None):
""" Plot the given data as a histogram.
Args:
data (ndarray): array with data to plot as histogram
xlabel (str): label of x-axis
figtitle (str): title of histogram plot (default is no title)
num_bins (int): number of bins for histogram (default is 10)
Returns:
count (ndarray): number of samples in each histogram bin
    bins (ndarray): edges of the histogram bins
"""
fig, ax = plt.subplots()
ax.set_xlabel(xlabel)
ax.set_ylabel('Count')
  if num_bins is not None:
    count, bins, _ = plt.hist(data, bins=num_bins)
  else:
    count, bins, _ = plt.hist(data)  # matplotlib defaults to 10 bins
if figtitle is not None:
fig.suptitle(figtitle, size=16)
plt.show()
return count, bins
def plot_gaussian_samples_true(samples, xspace, mu, sigma, xlabel, ylabel):
""" Plot a histogram of the data samples on the same plot as the gaussian
  distribution specified by the given mu and sigma values.
Args:
samples (ndarray): data samples for gaussian distribution
xspace (ndarray): x values to sample from normal distribution
mu (scalar): mean parameter of normal distribution
    sigma (scalar): standard deviation parameter of normal distribution
xlabel (str): the label of the x-axis of the histogram
ylabel (str): the label of the y-axis of the histogram
Returns:
Nothing.
"""
fig, ax = plt.subplots()
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
# num_samples = samples.shape[0]
count, bins, _ = plt.hist(samples, density=True) # probability density function
plt.plot(xspace, norm.pdf(xspace, mu, sigma),'r-')
plt.show()
def plot_likelihoods(likelihoods, mean_vals, variance_vals):
""" Plot the likelihood values on a heatmap plot where the x and y axes match
the mean and variance parameter values the likelihoods were computed for.
Args:
likelihoods (ndarray): array of computed likelihood values
mean_vals (ndarray): array of mean parameter values for which the
likelihood was computed
variance_vals (ndarray): array of variance parameter values for which the
likelihood was computed
Returns:
Nothing.
"""
fig, ax = plt.subplots()
im = ax.imshow(likelihoods)
cbar = ax.figure.colorbar(im, ax=ax)
cbar.ax.set_ylabel('log likelihood', rotation=-90, va="bottom")
ax.set_xticks(np.arange(len(mean_vals)))
ax.set_yticks(np.arange(len(variance_vals)))
ax.set_xticklabels(mean_vals)
ax.set_yticklabels(variance_vals)
ax.set_xlabel('Mean')
  ax.set_ylabel('Standard deviation')
def posterior_plot(x, likelihood=None, prior=None, posterior_pointwise=None, ax=None):
"""
Plots normalized Gaussian distributions and posterior.
Args:
x (numpy array of floats): points at which the likelihood has been evaluated
    likelihood (numpy array of floats): normalized probabilities for the likelihood evaluated at each `x`
    prior (numpy array of floats): normalized probabilities for the prior evaluated at each `x`
    posterior_pointwise (numpy array of floats): normalized probabilities for the posterior evaluated at each `x`
ax: Axis in which to plot. If None, create new axis.
Returns:
Nothing.
"""
if likelihood is None:
likelihood = np.zeros_like(x)
if prior is None:
prior = np.zeros_like(x)
if posterior_pointwise is None:
posterior_pointwise = np.zeros_like(x)
if ax is None:
fig, ax = plt.subplots()
  ax.plot(x, likelihood, '-C1', linewidth=2, label='Auditory')
  ax.plot(x, prior, '-C0', linewidth=2, label='Visual')
  ax.plot(x, posterior_pointwise, '-C2', linewidth=2, label='Posterior')
ax.legend()
ax.set_ylabel('Probability')
ax.set_xlabel('Orientation (Degrees)')
plt.show()
return ax
def plot_classical_vs_bayesian_normal(num_points, mu_classic, var_classic,
mu_bayes, var_bayes):
""" Helper function to plot optimal normal distribution parameters for varying
observed sample sizes using both classic and Bayesian inference methods.
Args:
num_points (int): max observed sample size to perform inference with
mu_classic (ndarray): estimated mean parameter for each observed sample size
using classic inference method
var_classic (ndarray): estimated variance parameter for each observed sample size
using classic inference method
mu_bayes (ndarray): estimated mean parameter for each observed sample size
using Bayesian inference method
var_bayes (ndarray): estimated variance parameter for each observed sample size
using Bayesian inference method
Returns:
Nothing.
"""
xspace = np.linspace(0, num_points, num_points)
fig, ax = plt.subplots()
ax.set_xlabel('n data points')
ax.set_ylabel('mu')
plt.plot(xspace, mu_classic,'r-', label = "Classical")
plt.plot(xspace, mu_bayes,'b-', label = "Bayes")
plt.legend()
plt.show()
fig, ax = plt.subplots()
ax.set_xlabel('n data points')
ax.set_ylabel('sigma^2')
plt.plot(xspace, var_classic,'r-', label = "Classical")
plt.plot(xspace, var_bayes,'b-', label = "Bayes")
plt.legend()
plt.show()
```
---
# Section 1: Basic probability
## Section 1.1: Basic probability theory
```
# @title Video 1: Basic Probability
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1bw411o7HR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="SL0_6rw8zrM", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
This video covers basic probability theory, including complementary probability, conditional probability, joint probability, and marginalisation.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
Previously we were only looking at sampling or properties of a single variable, but as we now move on to statistical inference, it is useful to review basic probability theory.
As a reminder, probability has to be in the range 0 to 1
$P(A) \in [0,1] $
and the complementary can always be defined as
$P(\neg A) = 1-P(A)$
When we have two variables, the *conditional probability* of $A$ given $B$ is
$P(A|B) = P(A \cap B)/P(B) = P(A,B)/P(B)$
while the *joint probability* of $A$ and $B$ is
$P(A \cap B)=P(A,B) = P(B|A)P(A) = P(A|B)P(B) $
We can then also define the process of *marginalisation* (for discrete variables) as
$P(A)=\sum P(A,B)=\sum P(A|B)P(B)$
where the summation is over the possible values of $B$.
As an example if $B$ is a binary variable that can take values $B+$ or $B0$ then
$P(A)=\sum P(A,B)=P(A|B+)P(B+)+ P(A|B0)P(B0) $.
For continuous variables, marginalisation is given as
$P(A)=\int P(A,B) dB=\int P(A|B)P(B) dB$
</details>
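The identities above are easy to check numerically. Below is a minimal sketch using a made-up joint distribution over two binary variables (the probability values are illustrative and unrelated to the exercise that follows):

```python
import numpy as np

# Joint distribution over two binary variables A and B
# (rows index A, columns index B); the numbers are made up
P_AB = np.array([[0.30, 0.10],
                 [0.20, 0.40]])

P_B = P_AB.sum(axis=0)    # marginalise over A -> P(B)
P_A = P_AB.sum(axis=1)    # marginalise over B -> P(A)
P_A_given_B = P_AB / P_B  # conditional: P(A|B) = P(A,B) / P(B)

# Probabilities over a variable sum to 1 (complement rule)
assert np.isclose(P_A.sum(), 1.0)

# Joint from conditional: P(A,B) = P(A|B) P(B)
assert np.allclose(P_A_given_B * P_B, P_AB)

# Marginalisation recovers P(A): sum over B of P(A|B) P(B)
assert np.allclose((P_A_given_B * P_B).sum(axis=1), P_A)
print(P_A)
```

Starting from any valid joint table, the conditional and marginal rules are forced to be self-consistent, which is exactly what the assertions verify.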
### Math Exercise 1.1: Probability example
To remind ourselves of how to use basic probability theory we will do a short exercise (no coding needed!), based on measurement of binary probabilistic neural responses.
As shown by Hubel and Wiesel in 1959 there are neurons in primary visual cortex that respond to different orientations of visual stimuli, with different neurons being sensitive to different orientations. The numbers in the following are however purely fictional.
Imagine that your collaborator tells you that they have recorded the activity of visual neurons while presenting either a horizontal or vertical grid as a visual stimulus. The activity of the neurons is measured as binary: they are either active or inactive in response to the stimulus.
After recording from a large number of neurons they find that when presenting a horizontal grid, on average 40% of neurons are active, while 30% respond to vertical grids.
We will use the following notation to indicate the probability that a randomly chosen neuron responds to horizontal grids
$P(h+)=0.4$
and this to show the probability it responds to vertical:
$P(v+)=0.3$
We can find the complementary event, that the neuron does not respond to the horizontal grid, using the fact that these events must add up to 1. We see that the probability the neuron does not respond to the horizontal grid ($h0$) is
$P(h0)=1-P(h+)=0.6$
and that the probability to not respond to vertical is
$P(v0)=1-P(v+)=0.7$
We will practice computing various probabilities in this framework.
### A) Product
Assuming that the horizontal and vertical orientation selectivity are independent, what is the probability that a randomly chosen neuron is sensitive to both horizontal and vertical orientations?
Hint: Two events are independent if the outcome of one does not affect the outcome of the other.
```
# to_remove explanation
"""
Independent here means that P(h+,v+) = P(h+)P(v+)
P(h+,v+) = P(h+)P(v+) = 0.4*0.3 = 0.12
"""
```
### B) Joint probability generally
A collaborator informs you that actually these are not independent. Of those neurons that respond to vertical, only 10 percent also respond to horizontal, i.e. the probability of responding to horizonal *given* that it responds to vertical is $P(h+|v+)=0.1$
Given this new information, what is now the probability that a randomly chosen neuron is sensitive to both horizontal and vertical orientations?
```
# to_remove explanation
"""
Remember that joint probability can generally be expressed as P(a,b) = P(a|b)P(b)
P(h+,v+) = P(h+|v+)P(v+) = 0.1*0.3 = 0.03
"""
```
### C) Conditional probability
You start measuring from a neuron and find that it responds to horizontal orientations. What is now the probability that it also responds to vertical ($𝑃(v+|h+)$)?
```
# to_remove explanation
"""
The conditional probability is given by P(a|b) = P(a,b)/P(b)
P(v+|h+) = P(v+,h+)/P(h+) = P(h+|v+)P(v+)/P(h+) = 0.1*0.3/0.4 = 0.075
"""
```
### D) Marginal probability
Lastly, let's check that everything has been done correctly. Given our knowledge about the conditional probabilities, we should be able to use marginalisation to recover the marginal probability of a random neuron responding to vertical orientations ($P(v+)$). We know from above that this should equal 0.3.
Calculate $P(v+)$ based on the conditional probabilities for $P(v+|h+)$ and $P(v+|h0)$ (the latter which you will need to calculate).
```
# to_remove explanation
"""
The first step is to calculate:
P(v+|h0) = P(h0|v+)P(v+)/P(h0) = (1-0.1)*0.3/(1-0.4) = 0.45
Then use the property of marginalisation (discrete version)
P(a) = sum_i P(a|b=i)P(b=i)
P(v+) = P(v+|h+)P(h+) + P(v+|h0)P(h0) = 0.075*0.4 + 0.45*(1-0.4) = 0.3
Phew, we recovered the correct value!
"""
```
## Section 1.2: Markov chains
```
# @title Video 2: Markov Chains
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Rh41187ZC", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XjQF13xMpss", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
### Coding exercise 1.2 Markov chains
We will practice more probability theory by looking at **Markov chains**. The Markov property specifies that you can fully encapsulate the important properties of a system based on its *current* state; any previous history does not matter. It is memoryless.
As an example imagine that a rat is able to move freely between 3 areas: a dark rest area
($state=1$), a nesting area ($state=2$) and a bright area for collecting food ($state=3$). Every 5 minutes (timepoint $i$) we record the rat's location. We can use a **categorical distribution** to look at the probability that the rat moves to one state from another.
The table below shows the probability of the rat transitioning from one area to another between timepoints ($state_i$ to $state_{i+1}$).
\begin{array}{|l | l | l | l |} \hline
state_{i} & P(state_{i+1}=1|state_i=*) & P(state_{i+1}=2|state_i=*) & P(state_{i+1}=3|state_i=*) \\ \hline
state_{i}=1 & 0.2 & 0.6 & 0.2 \\
state_{i}=2 & 0.6 & 0.3 & 0.1 \\
state_{i}=3 & 0.8 & 0.2 & 0 \\ \hline
\end{array}
We are modeling this as a Markov chain, so the animal is only in one of the states at a time and can transition between the states.
We want to get the probability of each state at time $i+1$. We know from Section 1.1 that we can use marginalisation:
$$P(state_{i+1} = 1) = P(state_{i+1}=1|state_i=1)P(state_i = 1) + P(state_{i+1}=1|state_i=2)P(state_i = 2) + P(state_{i+1}=1|state_i=3)P(state_i = 3) $$
Let's say we had a row vector (a vector defined as a row, not a column so matrix multiplication will work out) of the probabilities of the current state:
$$P_i = [P(state_i = 1), P(state_i = 2), P(state_i = 3) ] $$
If we actually know where the rat is at the current time point, this would be deterministic (e.g. $P_i = [0, 1, 0]$ if the rat is in state 2). Otherwise, this could be probabilistic (e.g. $P_i = [0.1, 0.7, 0.2]$).
To compute the vector of probabilities of the state at the time $i+1$, we can use linear algebra and multiply our vector of the probabilities of the current state with the transition matrix. Recall your matrix multiplication skills from W0D3 to check this!
$$P_{i+1} = P_{i} T$$
where $T$ is our transition matrix.
This is the same formula for every step, which allows us to get the probabilities for a time more than 1 step in advance easily. If we started at $i=0$ and wanted to look at the probabilities at step $i=2$, we could do:
\begin{align*}
P_{1} &= P_{0}T\\
P_{2} &= P_{1}T = P_{0}TT = P_{0}T^2\\
\end{align*}
Every time we take a further step, we simply multiply by the transition matrix again. So the probability vector of states $j$ timepoints after timepoint $i$ equals the probability vector at timepoint $i$ times the transition matrix raised to the $j$th power.
$$P_{i + j} = P_{i}T^j $$
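As a quick sanity check of $P_{i+j} = P_{i}T^j$, here is a toy two-state chain (the transition values are made up for illustration and are not the rat example below): applying the matrix twice step by step matches a single multiplication by $T^2$.

```python
import numpy as np

# Toy 2-state transition matrix (each row sums to 1); values are illustrative
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])  # start deterministically in state 1

# Two single steps...
p2_stepwise = (p0 @ T) @ T
# ...give the same result as one application of T squared
p2_power = p0 @ np.linalg.matrix_power(T, 2)

assert np.allclose(p2_stepwise, p2_power)
print(p2_power)
```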
If the animal starts in area 2, what is the probability the animal will again be in area 2 when we check on it 20 minutes (4 transitions) later?
Fill in the transition matrix in the code below.
```
###################################################################
## TODO for student
## Fill out the following then remove
raise NotImplementedError("Student exercise: compute state probabilities after 4 transitions")
###################################################################
# Transition matrix
transition_matrix = np.array([[ 0.2, 0.6, 0.2],[ .6, 0.3, 0.1], [0.8, 0.2, 0]])
# Initial state, p0
p0 = np.array([0, 1, 0])
# Compute the probabilities 4 transitions later (use np.linalg.matrix_power to raise a matrix a power)
p4 = ...
# The second area is indexed as 1 (Python starts indexing at 0)
print("The probability the rat will be in area 2 after 4 transitions is: " + str(p4[1]))
# to_remove solution
# Transition matrix
transition_matrix = np.array([[ 0.2, 0.6, 0.2],[ .6, 0.3, 0.1], [0.8, 0.2, 0]])
# Initial state, p0
p0 = np.array([0, 1, 0])
# Compute the probabilities 4 transitions later (use np.linalg.matrix_power to raise a matrix a power)
p4 = p0 @ np.linalg.matrix_power(transition_matrix, 4)
# The second area is indexed as 1 (Python starts indexing at 0)
print("The probability the rat will be in area 2 after 4 transitions is: " + str(p4[1]))
```
You should get a probability of 0.4311, i.e. there is a 43.11% chance that you will find the rat in area 2 in 20 minutes.
What is the average amount of time spent by the rat in each of the states?
Implicit in the question is the idea that we can start off with a random initial state and then measure how much relative time is spent in each area. If we make a few assumptions (e.g. an ergodic, 'randomly mixing' system), we can instead start with a random initial distribution and use the probabilities of each state after many time steps (100) to estimate the time spent in each state.
```
# Initialize random initial distribution
p_random = np.ones((1,3))/3
###################################################################
## TODO for student: Fill compute the state matrix after 100 transitions
raise NotImplementedError("Student exercise: need to complete computation below")
###################################################################
# Fill in the missing line to get the state matrix after 100 transitions, like above
p_average_time_spent = ...
print("The proportion of time spend by the rat in each of the three states is: "
+ str(p_average_time_spent[0]))
# to_remove solution
# Initialize random initial distribution
p_random = np.ones((1,3))/3
# Fill in the missing line to get the state matrix after 100 transitions, like above
p_average_time_spent = p_random @ np.linalg.matrix_power(transition_matrix, 100)
print("The proportion of time spend by the rat in each of the three states is: "
+ str(p_average_time_spent[0]))
```
The proportions of time spent in each of the three areas are 0.4473, 0.4211, and 0.1316, respectively.
Imagine now that if the animal is satiated and tired the transitions change to:
\begin{array}{|l | l | l | l |} \hline
state_{i} & P(state_{i+1}=1|state_i=*) & P(state_{i+1}=2|state_i=*) & P(state_{i+1}=3|state_i=*) \\ \hline
state_{i}=1 & 0.2 & 0.7 & 0.1 \\
state_{i}=2 & 0.3 & 0.7 & 0 \\
state_{i}=3 & 0.8 & 0.2 & 0 \\ \hline
\end{array}
Try repeating the questions above for this table of transitions by changing the transition matrix. Based on the probability values, what would you predict? Check how much time the rat spends on average in each area and see if it matches your predictions.
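One way to check your prediction is the same long-run computation as above. A sketch (assuming the chain mixes, which holds for this matrix) that also cross-checks the answer via the left eigenvector of the transition matrix with eigenvalue 1:

```python
import numpy as np

# Transition matrix for the satiated/tired rat (from the table above)
T_tired = np.array([[0.2, 0.7, 0.1],
                    [0.3, 0.7, 0.0],
                    [0.8, 0.2, 0.0]])

# Long-run state probabilities after many transitions
p_long_run = (np.ones(3) / 3) @ np.linalg.matrix_power(T_tired, 100)

# Cross-check: the stationary distribution is the left eigenvector of T
# with eigenvalue 1, i.e. an eigenvector of T transpose
eigvals, eigvecs = np.linalg.eig(T_tired.T)
stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
stationary = stationary / stationary.sum()

assert np.allclose(p_long_run, stationary)
print(stationary)
```

The satiated rat now spends most of its time in the nesting area (state 2) and almost none in the bright food area (state 3), as the larger transition probabilities into state 2 suggest.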
**Main course preview:** The Markov property is extremely important for many models, particularly Hidden Markov Models, discussed on day W3D2, and for methods such as Markov Chain Monte Carlo sampling.
---
# Section 2: Statistical inference and likelihood
## Section 2.1: Likelihoods
```
# @title Video 3: Statistical inference and likelihood
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1LM4y1g7wT", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="7aiKvKlYwR0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
**Correction to video**: The variance estimate that maximizes the likelihood is $\bar{\sigma}^2=\frac{1}{n} \sum_i (x_i-\bar{x})^2 $. This is a biased estimate. Shown in the video is the sample variance, which is an unbiased estimate for variance: $\bar{\sigma}^2=\frac{1}{n-1} \sum_i (x_i-\bar{x})^2 $. See section 2.2.3 for more details.
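The distinction in the correction above is easy to verify numerically; a small sketch (the sample values are arbitrary) using NumPy's `ddof` parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(5, 1, size=10)  # small arbitrary sample
n = x.size

var_mle = np.var(x, ddof=0)       # (1/n) * sum (x_i - xbar)^2: maximises likelihood, biased
var_unbiased = np.var(x, ddof=1)  # (1/(n-1)) * sum (x_i - xbar)^2: the sample variance

# The two estimates differ exactly by a factor of (n-1)/n
assert np.isclose(var_mle, var_unbiased * (n - 1) / n)
print(var_mle, var_unbiased)
```

For large $n$ the factor $(n-1)/n$ approaches 1 and the two estimates become practically indistinguishable.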
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
A generative model (such as the Gaussian distribution from the previous tutorial) allows us to make predictions about outcomes.
However, after we observe $n$ data points, we can also evaluate our model (and any of its associated parameters) by calculating the **likelihood** of our model having generated each of those data points $x_i$.
$$P(x_i|\mu,\sigma)=\mathcal{N}(x_i,\mu,\sigma)$$
For all data points $\mathbf{x}=(x_1, x_2, x_3, ...x_n) $ we can then calculate the likelihood for the whole dataset by computing the product of the likelihood for each single data point.
$$P(\mathbf{x}|\mu,\sigma)=\prod_{i=1}^n \mathcal{N}(x_i,\mu,\sigma)$$
</details>
While the likelihood may be written as a conditional probability ($P(x|\mu,\sigma)$), we refer to it as the **likelihood function**, $L(\mu,\sigma)$. This slight switch in notation is to emphasize our focus: we use likelihood functions when the data points $\mathbf{x}$ are fixed and we are focused on the parameters.
Our new notation makes clear that the likelihood $L(\mu,\sigma)$ is a function of $\mu$ and $\sigma$, not of $\mathbf{x}$.
In the last tutorial we reviewed how the data was generated given the selected parameters of the generative process. If we do not know the parameters $\mu$, $\sigma$ that generated the data, we can try to **infer** which parameter values (given our model) gives the best (highest) likelihood. This is what we call statistical inference: trying to infer what parameters make our observed data the most likely or probable?
### Coding Exercise 2.1: Computing likelihood
Let's start with computing the likelihood of some set of data points being drawn from a Gaussian distribution with a mean and variance we choose.
As multiplying small probabilities together can lead to very small numbers, it is often convenient to report the *logarithm* of the likelihood. This is just a convenient transformation and as logarithm is a monotonically increasing function this does not change what parameters maximise the function.
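To see why logs help, compare multiplying many small probabilities with summing their logs (a sketch; the probability values are arbitrary):

```python
import numpy as np

# 1000 smallish probabilities, e.g. densities of individual data points
p = np.full(1000, 0.4)

product = np.prod(p)         # 0.4**1000 is ~1e-398 and underflows to exactly 0.0
log_sum = np.sum(np.log(p))  # comfortably representable

assert product == 0.0
assert np.isfinite(log_sum)
print(product, log_sum)
```

The raw product underflows below the smallest representable double and collapses to zero, losing all information, while the log-likelihood remains a perfectly ordinary finite number.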
```
def compute_likelihood_normal(x, mean_val, standard_dev_val):
""" Computes the log-likelihood values given a observed data sample x, and
potential mean and variance values for a normal distribution
Args:
x (ndarray): 1-D array with all the observed data
mean_val (scalar): value of mean for which to compute likelihood
standard_dev_val (scalar): value of variance for which to compute likelihood
Returns:
likelihood (scalar): value of likelihood for this combination of means/variances
"""
###################################################################
## TODO for student
raise NotImplementedError("Student exercise: compute likelihood")
###################################################################
# Get probability of each data point (use norm.pdf from scipy stats)
p_data = ...
# Compute likelihood (sum over the log of the probabilities)
likelihood = ...
return likelihood
# Set random seed
np.random.seed(0)
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
x = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
# Compute likelihood for a guessed mean/standard dev
guess_mean = 4
guess_standard_dev = .1
likelihood = compute_likelihood_normal(x, guess_mean, guess_standard_dev)
print(likelihood)
# to_remove solution
def compute_likelihood_normal(x, mean_val, standard_dev_val):
""" Computes the log-likelihood values given a observed data sample x, and
potential mean and variance values for a normal distribution
Args:
x (ndarray): 1-D array with all the observed data
mean_val (scalar): value of mean for which to compute likelihood
standard_dev_val (scalar): value of variance for which to compute likelihood
Returns:
likelihood (scalar): value of likelihood for this combination of means/variances
"""
# Get probability of each data point (use norm.pdf from scipy stats)
p_data = norm.pdf(x, mean_val, standard_dev_val)
# Compute likelihood (sum over the log of the probabilities)
likelihood = np.sum(np.log(p_data))
return likelihood
# Set random seed
np.random.seed(0)
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
x = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
# Compute likelihood for a guessed mean/standard dev
guess_mean = 4
guess_standard_dev = .1
likelihood = compute_likelihood_normal(x, guess_mean, guess_standard_dev)
print(likelihood)
```
You should get a likelihood of -92904.81. This is somewhat meaningless on its own! For it to be useful, we need to compare it to the likelihoods computed using other guesses of the mean or standard deviation. The visualization below shows the likelihood for various values of the mean and the standard deviation: essentially, we are performing a rough grid search over means and standard deviations. What would you guess as the true mean and standard deviation based on this visualization?
```
# @markdown Execute to visualize likelihoods
# Set random seed
np.random.seed(0)
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
x = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
# Compute likelihood for different mean/variance values
mean_vals = np.linspace(1, 10, 10)  # potential mean values to try
standard_dev_vals = np.array([0.7, 0.8, 0.9, 1, 1.2, 1.5, 2, 3, 4, 5])  # potential standard deviation values to try
# Initialise likelihood collection array (rows: standard deviations, columns: means)
likelihood = np.zeros((standard_dev_vals.shape[0], mean_vals.shape[0]))
# Compute the likelihood for observing the given data x assuming
# each combination of mean and standard deviation values
for idxMean in range(mean_vals.shape[0]):
  for idxVar in range(standard_dev_vals.shape[0]):
    likelihood[idxVar, idxMean] = sum(np.log(norm.pdf(x, mean_vals[idxMean],
                                                      standard_dev_vals[idxVar])))
plot_likelihoods(likelihood, mean_vals, standard_dev_vals)
```
## Section 2.2: Maximum likelihood
```
# @title Video 4: Maximum likelihood
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Lo4y1C7xy", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Fuwx_V64nEU", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Implicitly, by looking for the parameters that give the highest likelihood in the last section, we have been searching for the **maximum likelihood** estimate.
$$(\hat{\mu},\hat{\sigma})=\mathop{\mathrm{argmax}}_{\mu,\sigma}\,L(\mu,\sigma)=\mathop{\mathrm{argmax}}_{\mu,\sigma} \prod_{i=1}^n \mathcal{N}(x_i,\mu,\sigma)$$
In the next sections, we will look at other ways of inferring such parameters.
### Section 2.2.1: Searching for best parameters
We want to do inference on this data set, i.e. we want to infer the parameters that most likely gave rise to the data given our model. Intuitively that means that we want as good as possible a fit between the observed data and the probability distribution function with the best inferred parameters. We can search for the best parameters manually by trying out a bunch of possible values of the parameters, computing the likelihoods, and picking the parameters that resulted in the highest likelihood.
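A coarse version of that manual search can be automated with `np.argmax` over a parameter grid. The sketch below uses simulated data; the grid range, step, and random seed are arbitrary choices, and for simplicity only the mean is searched while the standard deviation is held at its true value.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(5, 1, size=1000)  # simulated data: true mean 5, true std 1

mean_grid = np.linspace(0, 10, 101)  # candidate mean values, step 0.1
log_liks = np.array([np.sum(norm.logpdf(x, m, 1)) for m in mean_grid])

best_mean = mean_grid[np.argmax(log_liks)]  # grid point with the highest log-likelihood
print(best_mean)
```

The best grid point lands close to the true mean of 5, limited only by the grid resolution and sampling noise.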
#### Interactive Demo 2.2: Maximum likelihood inference
Try to see how well you can fit the probability distribution to the data by using the demo sliders to control the mean and standard deviation parameters of the distribution. We will visualize the histogram of data points (in blue) and the Gaussian density curve with that mean and standard deviation (in red). Below, we print the log-likelihood.
- What (approximate) values of mu and sigma result in the best fit?
- How does the value below the plot (the log-likelihood) change with the quality of fit?
```
# @markdown Make sure you execute this cell to enable the widget and fit by hand!
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
vals = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
def plotFnc(mu,sigma):
loglikelihood= sum(np.log(norm.pdf(vals,mu,sigma)))
#calculate histogram
#prepare to plot
fig, ax = plt.subplots()
ax.set_xlabel('x')
ax.set_ylabel('probability')
#plot histogram
count, bins, ignored = plt.hist(vals,density=True)
x = np.linspace(0,10,100)
#plot pdf
plt.plot(x, norm.pdf(x,mu,sigma),'r-')
plt.show()
print("The log-likelihood for the selected parameters is: " + str(loglikelihood))
#interact(plotFnc, mu=5.0, sigma=2.1);
#interact(plotFnc, mu=widgets.IntSlider(min=0.0, max=10.0, step=1, value=4.0),sigma=widgets.IntSlider(min=0.1, max=10.0, step=1, value=4.0));
interact(plotFnc, mu=(0.0, 15.0, 0.1),sigma=(0.1, 5.0, 0.1));
# to_remove explanation
"""
- The log-likelihood should be greatest when 𝜇 = 5 and 𝜎 = 1.
- The summed log-likelihood increases (becomes less negative) as the fit improves
"""
```
Doing this was similar to the grid-search heatmap from Section 2.1. Really, we want to do inference on observed data in a more principled way.
### Section 2.2.2: Optimization to find parameters
Let's again assume that we have a data set, $\mathbf{x}$, assumed to be generated by a normal distribution (we actually generate it ourselves in line 1, so we know how it was generated!).
We want to maximise the likelihood of the parameters $\mu$ and $\sigma^2$. We can do so using a couple of tricks:
* Using a log transform will not change the maximum of the function, but will allow us to work with very small numbers that could lead to problems with machine precision.
* Maximising a function is the same as minimising the negative of a function, allowing us to use the minimize optimisation provided by scipy.
The optimisation will be done using `sp.optimize.minimize`, which does a version of gradient descent (there are hundreds of ways to do numerical optimisation, we will not cover these here!).
#### Coding Exercise 2.2: Maximum Likelihood Estimation
In the code below, insert the missing line (see the `compute_likelihood_normal` function from the previous exercise), with the mean as `theta[0]` and the standard deviation as `theta[1]`.
```
# We define the function to optimise, the negative log likelihood
def negLogLike(theta, x):
""" Function for computing the negative log-likelihood given the observed data
and given parameter values stored in theta.
Args:
    theta (ndarray): normal distribution parameters (mean is theta[0],
      standard deviation is theta[1])
x (ndarray): array with observed data points
Returns:
Calculated negative Log Likelihood value!
"""
###################################################################
## TODO for students: Compute the negative log-likelihood value for the
## given observed data values and parameters (theta)
# Fill out the following then remove
raise NotImplementedError("Student exercise: need to compute the negative \
log-likelihood value")
###################################################################
return ...
# Set random seed
np.random.seed(0)
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
x = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
# Define bounds; the standard deviation has to be positive
bnds = ((None, None), (0, None))
# Optimize with scipy!
optimal_parameters = sp.optimize.minimize(negLogLike, (2, 2), args = x, bounds = bnds)
print("The optimal mean estimate is: " + str(optimal_parameters.x[0]))
print("The optimal variance estimate is: " + str(optimal_parameters.x[1]))
# optimal_parameters contains a lot of information about the optimization,
# but we mostly want the mean and variance
# to_remove solution
# We define the function to optimise, the negative log likelihood
def negLogLike(theta, x):
""" Function for computing the negative log-likelihood given the observed data
and given parameter values stored in theta.
Args:
theta (ndarray): normal distribution parameters (mean is theta[0],
variance is theta[1])
x (ndarray): array with observed data points
Returns:
Calculated negative Log Likelihood value!
"""
return -sum(np.log(norm.pdf(x, theta[0], theta[1])))
# Set random seed
np.random.seed(0)
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
x = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
# Define bounds, var has to be positive
bnds = ((None, None), (0, None))
# Optimize with scipy!
optimal_parameters = sp.optimize.minimize(negLogLike, (2, 2), args = x, bounds = bnds)
print("The optimal mean estimate is: " + str(optimal_parameters.x[0]))
print("The optimal variance estimate is: " + str(optimal_parameters.x[1]))
# optimal_parameters contains a lot of information about the optimization,
# but we mostly want the mean and variance
```
These are the approximations of the parameters that maximise the likelihood ($\mu$ ~ 5.280 and $\sigma$ ~ 1.148).
### Section 2.2.3: Analytical solution
Sometimes, things work out well and we can come up with formulas for the maximum likelihood estimates of parameters. We won't get into this further but basically we could set the derivative of the likelihood to 0 (to find a maximum) and solve for the parameters. This won't always work but for the Gaussian distribution, it does.
Specifically, the special thing about the Gaussian is that the mean and standard deviation of a random sample are themselves the maximum likelihood estimates of the two Gaussian parameters, $\mu$ and $\sigma$.
Hence using the mean, $\bar{x}=\frac{1}{n}\sum_i x_i$, and variance, $\bar{\sigma}^2=\frac{1}{n} \sum_i (x_i-\bar{x})^2 $ of the sample should give us the best/maximum likelihood, $L(\bar{x},\bar{\sigma}^2)$.
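To sketch the analytical argument for the mean: write out the log-likelihood of $n$ independent samples and set its derivative with respect to $\mu$ to zero.

$$\log L(\mu,\sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_i (x_i-\mu)^2$$

$$\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2}\sum_i (x_i-\mu) = 0 \quad\Rightarrow\quad \hat{\mu} = \frac{1}{n}\sum_i x_i = \bar{x}$$

A similar computation for $\sigma^2$ gives $\hat{\sigma}^2=\frac{1}{n}\sum_i(x_i-\bar{x})^2$.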
Let's compare these values to those we've been finding using manual search and optimization, and the true values (which we only know because we generated the numbers!).
```
# Set random seed
np.random.seed(0)
# Generate data
true_mean = 5
true_standard_dev = 1
n_samples = 1000
x = np.random.normal(true_mean, true_standard_dev, size = (n_samples,))
# Compute and print sample means and standard deviations
print("This is the sample mean as estimated by numpy: " + str(np.mean(x)))
print("This is the sample standard deviation as estimated by numpy: " + str(np.std(x)))
# to_remove explanation
""" You should notice that the parameters estimated by maximum likelihood
estimation/inference are very close to the true parameters (mu = 5, sigma = 1),
as well as the parameters visualized to be best after Coding Exercise 2.1,
where all likelihood values were calculated explicitly.
"""
```
If you try out different values of the mean and standard deviation in all the previous exercises, you should see that changing the mean and
sigma parameter values (and generating new data from a distribution with these parameters) makes no difference: MLE methods can still recover these parameters.
There is a slight problem: it turns out that the maximum likelihood estimate of the variance is actually biased! This means that the estimator's expected value differs from the true value of the parameter. An unbiased estimator of the variance is $\bar{\sigma}^2=\frac{1}{n-1} \sum_i (x_i-\bar{x})^2$; this is called the sample variance. For more details, see [the wiki page on bias of estimators](https://en.wikipedia.org/wiki/Bias_of_an_estimator).
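A quick numerical illustration of the bias, using NumPy's `ddof` convention: `np.var` divides by $n$ by default, while `ddof=1` divides by $n-1$ and gives the sample variance.

```python
import numpy as np

np.random.seed(0)
x = np.random.normal(5, 1, size=10)

var_mle = np.var(x)               # biased MLE estimate: divides by n
var_unbiased = np.var(x, ddof=1)  # sample variance: divides by n - 1

# The two differ exactly by a factor of n / (n - 1)
print(var_mle, var_unbiased)
```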
---
# Section 3: Bayesian Inference
## Section 3.1: Bayes
```
# @title Video 5: Bayesian inference with Gaussian distribution
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11K4y1u7vH", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="1Q3VqcpfvBk", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
We will start to introduce Bayesian inference here to contrast with our maximum likelihood methods, but you will also revisit Bayesian inference in great detail on W3D1 of the course so we won't dive into all details.
For Bayesian inference we do not focus on the likelihood function $L(y)=P(x|y)$, but instead focus on the posterior distribution:
$$P(y|x)=\frac{P(x|y)P(y)}{P(x)}$$
which is composed of the **likelihood** function $P(x|y)$, the **prior** $P(y)$ and a normalising term $P(x)$ (which we will ignore for now).
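As a minimal discrete sketch of this formula (with hypothetical numbers): suppose $y$ is one of two hypotheses with equal priors, and the observed $x$ has likelihood 0.8 under the first hypothesis and 0.3 under the second.

```python
import numpy as np

prior = np.array([0.5, 0.5])       # P(y)
likelihood = np.array([0.8, 0.3])  # P(x | y) for the observed x

unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()  # dividing by P(x)
print(posterior)  # the first hypothesis now has probability ~0.73
```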
While there are other advantages to using Bayesian inference (such as the ability to derive Bayesian Nets, see optional bonus task below), we will start by focusing on the role of the prior in inference. Does including prior information allow us to infer parameters in a better way?
### Think! 3.1: Bayesian inference with Gaussian distribution
In the above sections we performed inference using maximum likelihood, i.e. finding the parameters that maximised the likelihood of a set of parameters, given the model and data.
We will now repeat the inference process, but with an added Bayesian prior, and compare it to the "classical" inference (maximum likelihood) process we did before (Section 2). When using conjugate priors (more on this below) we can just update the parameter values of the distributions (here Gaussian distributions).
For the prior we start by guessing a mean of 5 (mean of previously observed data points 4 and 6) and variance of 1 (variance of 4 and 6). We use a trick (not detailed here) that is a simplified way of applying a prior, that allows us to just add these 2 values (pseudo-data) to the real data.
See the visualization below that shows the mean and standard deviation inferred by our classical maximum likelihood approach and the Bayesian approach for different numbers of data points.
Remembering that our true values are $\mu = 5$, and $\sigma^2 = 1$, how do the Bayesian inference and classical inference compare?
```
# @markdown Execute to visualize inference
def classic_vs_bayesian_normal(mu, sigma, num_points, prior):
""" Compute both classical and Bayesian inference processes over the range of
data sample sizes (num_points) for a normal distribution with parameters
mu,sigma for comparison.
Args:
mu (scalar): the mean parameter of the normal distribution
sigma (scalar): the standard deviation parameter of the normal distribution
num_points (int): max number of points to use for inference
prior (ndarray): prior data points for Bayesian inference
Returns:
mean_classic (ndarray): estimate mean parameter via classic inference
var_classic (ndarray): estimate variance parameter via classic inference
mean_bayes (ndarray): estimate mean parameter via Bayesian inference
var_bayes (ndarray): estimate variance parameter via Bayesian inference
"""
# Initialize the classical and Bayesian inference arrays that will estimate
# the normal parameters given a certain number of randomly sampled data points
mean_classic = np.zeros(num_points)
var_classic = np.zeros(num_points)
mean_bayes = np.zeros(num_points)
var_bayes = np.zeros(num_points)
for nData in range(num_points):
random_num_generator = default_rng(0)
x = random_num_generator.normal(mu, sigma, nData + 1)
# Compute the mean of those points and set the corresponding array entry to this value
mean_classic[nData] = np.mean(x)
# Compute the variance of those points and set the corresponding array entry to this value
var_classic[nData] = np.var(x)
# Bayesian inference with the given prior is performed below for you
xsupp = np.hstack((x, prior))
mean_bayes[nData] = np.mean(xsupp)
var_bayes[nData] = np.var(xsupp)
return mean_classic, var_classic, mean_bayes, var_bayes
# Set random seed
np.random.seed(0)
# Set normal distribution parameters, mu and sigma
mu = 5
sigma = 1
# Set the prior to be two new data points, 4 and 6, and print the mean and variance
prior = np.array((4, 6))
print("The mean of the data comprising the prior is: " + str(np.mean(prior)))
print("The variance of the data comprising the prior is: " + str(np.var(prior)))
mean_classic, var_classic, mean_bayes, var_bayes = classic_vs_bayesian_normal(mu, sigma, 60, prior)
plot_classical_vs_bayesian_normal(60, mean_classic, var_classic, mean_bayes, var_bayes)
# to_remove explanation
"""
Hopefully you can see that the blue line stays a little closer to the true values ($\mu=5$, $\sigma^2=1$).
Having a simple prior in the Bayesian inference process (blue) helps to regularise
the inference of the mean and variance parameters when you have very little data,
but has little effect with large data sets. You can see that as the number of data points
(x-axis) increases, both inference processes (blue and red lines) get closer and closer
together, i.e. their estimates for the true parameters converge as sample size increases.
"""
```
Note that the prior is only beneficial when it is close to the true value, i.e. 'a good guess' (or at least not a bad guess). As we will see in the next exercise, if you have a prior/bias that is very wrong, your inference will start off very wrong!
## Section 3.2: Conjugate priors
```
# @title Video 6: Conjugate priors
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Hg41137Zr", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="mDEyZHaG5aY", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
### Interactive Demo 3.2: Conjugate priors
Let's return to our example from Tutorial 1 using the binomial distribution - rat in a T-maze.
Bayesian inference can be used for any likelihood distribution, but it is a lot more convenient to work with **conjugate** priors, where multiplying the prior with the likelihood just provides another instance of the prior distribution with updated values.
For the binomial likelihood it is convenient to use the **beta** distribution as a prior
\begin{aligned}f(p;\alpha ,\beta )={\frac {1}{\mathrm {B} (\alpha ,\beta )}}p^{\alpha -1}(1-p)^{\beta -1}\end{aligned}
where $B$ is the beta function, $\alpha$ and $\beta$ are parameters, and $p$ is the probability of the rat turning left or right. The beta distribution is thus a distribution over a probability.
Given a series of Left and Right moves of the rat, we can now estimate the probability that the animal will turn left. Using Bayesian Inference, we use a beta distribution *prior*, which is then multiplied with the *likelihood* to create a *posterior* that is also a beta distribution, but with updated parameters (we will not cover the math here).
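A sketch of the update rule with hypothetical counts: with a Beta$(\alpha, \beta)$ prior and $k$ Left turns out of $n$ trials, the beta-binomial conjugacy means the posterior is simply Beta$(\alpha + k,\ \beta + n - k)$.

```python
import numpy as np
from scipy import stats

num_left, num_right = 7, 3      # hypothetical observed turns
alpha_prior, beta_prior = 2, 2  # weak prior centred on p = 0.5

# Conjugate update: just add the counts to the prior parameters
alpha_post = alpha_prior + num_left
beta_post = beta_prior + num_right

posterior = stats.beta(alpha_post, beta_post)
print("posterior mean:", posterior.mean())  # (alpha + k) / (alpha + beta + n)
```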
Activate the widget below to explore the variables, and follow the instructions below.
```
#@title
#@markdown Make sure you execute this cell to enable the widget
#beta distribution
#and binomial
def plotFnc(p,n,priorL,priorR):
# Set random seed
np.random.seed(1)
#sample from binomial
numL = np.random.binomial(n, p, 1)
numR = n - numL
stepSize=0.001
x = np.arange(0, 1, stepSize)
betaPdf=sp.stats.beta.pdf(x,numL+priorL,numR+priorR)
betaPrior=sp.stats.beta.pdf(x,priorL,priorR)
print("number of left "+str(numL))
print("number of right "+str(numR))
print(" ")
print("max likelihood "+str(numL/(numL+numR)))
print(" ")
print("max posterior " + str(x[np.argmax(betaPdf)]))
print("mean posterior " + str(np.mean(betaPdf*x)))
print(" ")
with plt.xkcd():
#rng.beta()
fig, ax = plt.subplots()
plt.rcParams.update({'font.size': 22})
ax.set_xlabel('p')
ax.set_ylabel('probability density')
plt.plot(x,betaPdf, label = "Posterior")
plt.plot(x,betaPrior, label = "Prior")
#print(int(len(betaPdf)/2))
plt.legend()
interact(plotFnc, p=(0, 1, 0.01),n=(1, 50, 1), priorL=(1, 10, 1),priorR=(1, 10, 1));
```
The plot above shows you the prior distribution (i.e. before any data) and the posterior distribution (after data), with a summary of the data (number of left and right moves) and the maximum likelihood, maximum posterior and mean of the posterior. Depending on the purpose, either the mean or the max of the posterior can be useful as a 'single-number' summary of the posterior.
Once you are familiar with the sliders and what they represent, go through these instructions.
**For $p=0.5$**
- Set $p=0.5$ and start off with a "flat" prior (`priorL=1`, `priorR=1`). Note that the prior distribution (orange) is flat, also known as uninformative. In this case the maximum likelihood and maximum posterior will give you almost identical results as you vary the number of data points ($n$) and the probability of the rat going left. However, the posterior is a full distribution and not just a single point estimate.
- As $n$ gets large you will also notice that the estimate (max likelihood or max posterior) changes less for each change in $n$, i.e. the estimation stabilises.
- How many data points do you think are needed for the probability estimate to stabilise? Note that this depends on how large a fluctuation you are willing to accept.
- Try increasing the strength of the prior, `priorL=10` and `priorR=10`. You will see that the prior distribution becomes more 'peaky'. In short, this prior means that small or large values of $p$ are considered very unlikely. Try playing with the number of data points $n$; you should find that the prior stabilises/regularises the maximum posterior estimate so that it does not move as much.
**For $p=0.2$**
Try the same as you just did, now with $p=0.2$.
Do you notice any differences? Note that the prior (assuming an equal chance of Left and Right) is now badly matched to the data. Do the maximum likelihood and maximum posterior still give similar results, for a weak prior? For a strong prior? Does the prior still have a stabilising effect on the estimate?
**Take-away message:**
Bayesian inference gives you a full distribution over the variables that you are inferring, can help regularise inference when you have limited data, and allows you to build more complex models that better reflect true causality (see bonus below).
### Think! 3.2: Bayesian Brains
Bayesian inference can help you when doing data analysis, especially when you only have little data. But consider whether the brain might be able to benefit from this too. If the brain needs to make inferences about the world, would it be useful to do regularisation on the input? Maybe there are times where having a full probability distribution could be useful?
```
# to_remove explanation
""" You will learn more about "Bayesian brains" and the theory surrounding
these ideas once the course begins. Here is a brief explanation: it may
be ideal for human brains to implement Bayesian inference by integrating "prior"
information the brain has about the world (memories, prior knowledge, etc.) with
new evidence that updates its "beliefs"/prior. This process seems to parallel
the brain's method of learning about its environment, making it a compelling
theory for many neuroscience researchers. One of Bonus exercises below examines a possible
real world model for Bayesian inference: sound localization.
"""
```
---
# Summary
```
# @title Video 7: Summary
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1qB4y1K7WZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="OJN7ri3_FCA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
Having done the different exercises you should now:
* understand what the likelihood function is, and have some intuition of why it is important
* know how to summarise the Gaussian distribution using mean and variance
* know how to maximise a likelihood function
* be able to do simple inference in both classical and Bayesian ways
For more resources see
https://github.com/NeuromatchAcademy/precourse/blob/master/resources.md
---
# Bonus
## Bonus Coding Exercise 1: Finding the posterior computationally
Imagine an experiment where participants estimate the location of a noise-emitting object. To estimate its position, the participants can use two sources of information:
1. new noisy auditory information (the likelihood)
2. prior visual expectations of where the stimulus is likely to come from (visual prior).
The auditory and visual information are both noisy, so participants will combine these sources of information to better estimate the position of the object.
We will use Gaussian distributions to represent the auditory likelihood (in red), and a Gaussian visual prior (expectations - in blue). Using Bayes rule, you will combine them into a posterior distribution that summarizes the probability that the object is in each possible location.
We have provided you with a ready-to-use plotting function, and a code skeleton.
* You can use `my_gaussian` from Tutorial 1 (also included below), to generate an auditory likelihood with parameters $\mu$ = 3 and $\sigma$ = 1.5
* Generate a visual prior with parameters $\mu$ = -1 and $\sigma$ = 1.5
* Calculate the posterior using pointwise multiplication of the likelihood and prior. Don't forget to normalize so the posterior adds up to 1
* Plot the likelihood, prior and posterior using the predefined function `posterior_plot`
```
def my_gaussian(x_points, mu, sigma):
""" Returns normalized Gaussian estimated at points `x_points`, with parameters:
mean `mu` and standard deviation `sigma`
Args:
x_points (ndarray of floats): points at which the gaussian is evaluated
mu (scalar): mean of the Gaussian
sigma (scalar): standard deviation of the gaussian
Returns:
(numpy array of floats) : normalized Gaussian evaluated at `x`
"""
px = 1/(2*np.pi*sigma**2)**0.5 * np.exp(-(x_points-mu)**2/(2*sigma**2))
# as we are doing numerical integration we may have to remember to normalise
# taking into account the stepsize (0.1)
px = px/(0.1*sum(px))
return px
def compute_posterior_pointwise(prior, likelihood):
""" Compute the posterior probability distribution point-by-point using Bayes
Rule.
Args:
prior (ndarray): probability distribution of prior
likelihood (ndarray): probability distribution of likelihood
Returns:
posterior (ndarray): probability distribution of posterior
"""
##############################################################################
# TODO for students: Write code to compute the posterior from the prior and
# likelihood via pointwise multiplication. (You may assume both are defined
# over the same x-axis)
#
# Comment out the line below to test your solution
raise NotImplementedError("Finish the simulation code first")
##############################################################################
posterior = ...
return posterior
def localization_simulation(mu_auditory = 3.0, sigma_auditory = 1.5,
mu_visual = -1.0, sigma_visual = 1.5):
""" Perform a sound localization simulation with an auditory prior.
Args:
mu_auditory (float): mean parameter value for auditory prior
sigma_auditory (float): standard deviation parameter value for auditory
prior
mu_visual (float): mean parameter value for visual likelihood distribution
sigma_visual (float): standard deviation parameter value for visual
likelihood distribution
Returns:
x (ndarray): range of values for which to compute probabilities
auditory (ndarray): probability distribution of the auditory prior
visual (ndarray): probability distribution of the visual likelihood
posterior_pointwise (ndarray): posterior probability distribution
"""
##############################################################################
## Using the x variable below,
## create a gaussian called 'auditory' with mean 3, and std 1.5
## create a gaussian called 'visual' with mean -1, and std 1.5
#
#
## Comment out the line below to test your solution
raise NotImplementedError("Finish the simulation code first")
###############################################################################
x = np.arange(-8, 9, 0.1)
auditory = ...
visual = ...
posterior = compute_posterior_pointwise(auditory, visual)
return x, auditory, visual, posterior
# Uncomment the lines below to plot the results
# x, auditory, visual, posterior_pointwise = localization_simulation()
# _ = posterior_plot(x, auditory, visual, posterior_pointwise)
# to_remove solution
def my_gaussian(x_points, mu, sigma):
""" Returns normalized Gaussian estimated at points `x_points`, with parameters:
mean `mu` and standard deviation `sigma`
Args:
x_points (ndarray of floats): points at which the gaussian is evaluated
mu (scalar): mean of the Gaussian
sigma (scalar): standard deviation of the gaussian
Returns:
(numpy array of floats) : normalized Gaussian evaluated at `x`
"""
px = 1/(2*np.pi*sigma**2)**0.5 * np.exp(-(x_points-mu)**2/(2*sigma**2))
# as we are doing numerical integration we may have to remember to normalise
# taking into account the stepsize (0.1)
px = px/(0.1*sum(px))
return px
def compute_posterior_pointwise(prior, likelihood):
""" Compute the posterior probability distribution point-by-point using Bayes
Rule.
Args:
prior (ndarray): probability distribution of prior
likelihood (ndarray): probability distribution of likelihood
Returns:
posterior (ndarray): probability distribution of posterior
"""
posterior = likelihood * prior
posterior =posterior/ (0.1*posterior.sum())
return posterior
def localization_simulation(mu_auditory = 3.0, sigma_auditory = 1.5,
mu_visual = -1.0, sigma_visual = 1.5):
""" Perform a sound localization simulation with an auditory prior.
Args:
mu_auditory (float): mean parameter value for auditory prior
sigma_auditory (float): standard deviation parameter value for auditory
prior
mu_visual (float): mean parameter value for visual likelihood distribution
sigma_visual (float): standard deviation parameter value for visual
likelihood distribution
Returns:
x (ndarray): range of values for which to compute probabilities
auditory (ndarray): probability distribution of the auditory prior
visual (ndarray): probability distribution of the visual likelihood
posterior_pointwise (ndarray): posterior probability distribution
"""
x = np.arange(-8, 9, 0.1)
auditory = my_gaussian(x, mu_auditory, sigma_auditory)
visual = my_gaussian(x, mu_visual, sigma_visual)
posterior = compute_posterior_pointwise(auditory, visual)
return x, auditory, visual, posterior
# Uncomment the lines below to plot the results
x, auditory, visual, posterior_pointwise = localization_simulation()
with plt.xkcd():
_ = posterior_plot(x, auditory, visual, posterior_pointwise)
```
Combining the visual and auditory information could help the brain get a better estimate of the location of an audio-visual object, with lower variance.
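For the product of two Gaussians there is a known closed form for the posterior variance: precisions (inverse variances) add. A quick check with the parameter values used above:

```python
import numpy as np

sigma_auditory, sigma_visual = 1.5, 1.5
# precision of the posterior = sum of the individual precisions
var_posterior = 1 / (1 / sigma_auditory**2 + 1 / sigma_visual**2)
print(var_posterior)  # 1.125, lower than either individual variance (2.25)
```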
**Main course preview:** On Week 3 Day 1 (W3D1) there will be a whole day devoted to examining whether the brain uses Bayesian inference. Is the brain Bayesian?!
## Bonus Coding Exercise 2: Bayes Net
If you have the time, here is another extra exercise.
Bayes Net, or Bayesian Belief Networks, provide a way to make inferences about multiple levels of information, which would be very difficult to do in a classical frequentist paradigm.
We can encapsulate our knowledge about causal relationships and use this to make inferences about hidden properties.
We will try a simple example of a Bayesian Net (aka belief network). Imagine that you have a house with an unreliable sprinkler system installed for watering the grass. This is set to water the grass independently of whether it has rained that day. We have three variables, rain ($r$), sprinklers ($s$) and wet grass ($w$). Each of these can be true (1) or false (0). See the graphical model representing the relationship between the variables.

There is a table below describing all the relationships between $w$, $r$, and $s$.
Obviously the grass is more likely to be wet if either the sprinklers were on or it was raining. On any given day the sprinklers have probability 0.25 of being on, $P(s = 1) = 0.25$, while there is a probability 0.1 of rain, $P(r = 1) = 0.1$. The table then lists the conditional probabilities of the grass being wet, given the rain and sprinkler conditions for that day.
\begin{array}{|l | l || ll |} \hline
r & s & P(w=0|r,s) & P(w=1|r,s)\\ \hline
0& 0 &0.999 &0.001\\
0& 1 &0.1& 0.9\\
1& 0 &0.01 &0.99\\
1& 1& 0.001 &0.999\\ \hline
\end{array}
You come home and find that the grass is wet. What is the probability the sprinklers were on today (you do not know if it was raining)?
We can start by writing out the joint probability:
$P(r,w,s)=P(w|r,s)P(r)P(s)$
The conditional probability is then:
$
P(s|w)=\frac{\sum_{r} P(w|s,r)P(s) P(r)}{P(w)}=\frac{P(s) \sum_{r} P(w|s,r) P(r)}{P(w)}
$
Note that we are summing over all possible conditions for $r$ as we do not know if it was raining. Specifically, we want to know the probability of sprinklers having been on given the wet grass, $P(s=1|w=1)$:
$
P(s=1|w=1)=\frac{P(s = 1)( P(w = 1|s = 1, r = 1) P(r = 1)+ P(w = 1|s = 1,r = 0) P(r = 0))}{P(w = 1)}
$
where
\begin{eqnarray}
P(w=1)=P(s=1)( P(w=1|s=1,r=1 ) P(r=1) &+ P(w=1|s=1,r=0) P(r=0))\\
+P(s=0)( P(w=1|s=0,r=1 ) P(r=1) &+ P(w=1|s=0,r=0) P(r=0))\\
\end{eqnarray}
This code has been written out below, you just need to insert the right numbers from the table.
```
##############################################################################
# TODO for student: Write code to insert the correct conditional probabilities
# from the table; see the comments to match variable with table entry.
# Comment out the line below to test your solution
raise NotImplementedError("Finish the simulation code first")
##############################################################################
Pw1r1s1 = ... # the probability of wet grass given rain and sprinklers on
Pw1r1s0 = ... # the probability of wet grass given rain and sprinklers off
Pw1r0s1 = ... # the probability of wet grass given no rain and sprinklers on
Pw1r0s0 = ... # the probability of wet grass given no rain and sprinklers off
Ps = ... # the probability of the sprinkler being on
Pr = ... # the probability of rain that day
# Uncomment once variables are assigned above
# A= Ps * (Pw1r1s1 * Pr + (Pw1r0s1) * (1 - Pr))
# B= (1 - Ps) * (Pw1r1s0 *Pr + (Pw1r0s0) * (1 - Pr))
# print("Given that the grass is wet, the probability the sprinkler was on is: " +
# str(A/(A + B)))
# to_remove solution
Pw1r1s1 = 0.999 # the probability of wet grass given rain and sprinklers on
Pw1r1s0 = 0.99 # the probability of wet grass given rain and sprinklers off
Pw1r0s1 = 0.9 # the probability of wet grass given no rain and sprinklers on
Pw1r0s0 = 0.001 # the probability of wet grass given no rain and sprinklers off
Ps = 0.25 # the probability of the sprinkler being on
Pr = 0.1 # the probability of rain that day
# Uncomment once variables are assigned above
A= Ps * (Pw1r1s1 * Pr + (Pw1r0s1) * (1 - Pr))
B= (1 - Ps) * (Pw1r1s0 *Pr + (Pw1r0s0) * (1 - Pr))
print("Given that the grass is wet, the probability the sprinkler was on is: " +
str(A/(A + B)))
```
The probability you should get is about 0.7522.
Your neighbour now tells you that it was indeed
raining today, $P (r = 1) = 1$, so what is now the probability the sprinklers were on? Try changing the numbers above.
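If you'd rather not edit the cell, here is the same calculation with $P(r=1)=1$ substituted (values copied from the table above):

```python
Pw1r1s1 = 0.999  # P(w=1 | r=1, s=1)
Pw1r1s0 = 0.99   # P(w=1 | r=1, s=0)
Pw1r0s1 = 0.9    # P(w=1 | r=0, s=1)
Pw1r0s0 = 0.001  # P(w=1 | r=0, s=0)
Ps = 0.25
Pr = 1.0         # the neighbour says it definitely rained

A = Ps * (Pw1r1s1 * Pr + Pw1r0s1 * (1 - Pr))
B = (1 - Ps) * (Pw1r1s0 * Pr + Pw1r0s0 * (1 - Pr))
print(A / (A + B))  # ~0.252: knowing it rained "explains away" the wet grass
```

Note that the answer drops back to nearly the prior $P(s=1)=0.25$: rain alone already accounts for the wet grass.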
## Bonus Think!: Causality in the Brain
In a causal structure this is the correct way to calculate the probabilities. Do you think this is how the brain solves such problems? Would it be different for tasks involving novel stimuli (e.g. for someone with no previous exposure to sprinklers), as opposed to common stimuli?
**Main course preview:** On W3D5 we will discuss causality further!
**Outline**
Here's the general outline:
Given a square matrix M, we want to calculate its inverse; that is:
Given M we seek M_inverse in the following equation:
(i) M @ M_inverse = I
where
* @ is the matrix multiplication operator
* I is the identity matrix
* the dimensions of M, M_inverse and I are all : n x n
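As a quick numerical illustration of equation (i) (using NumPy here rather than the torch code below), take a 2x2 matrix with determinant 1 and its textbook inverse:

```python
import numpy as np

# for [[a, b], [c, d]] the inverse is [[d, -b], [-c, a]] / (a*d - b*c)
M = np.array([[2., 1.], [1., 1.]])            # determinant = 1
M_inverse = np.array([[1., -1.], [-1., 2.]])
print(M @ M_inverse)  # the 2x2 identity matrix
```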
**Question**
We want to calculate M_inverse using gradient descent. How can this be done? (Allow yourself to think about this for 5-15 secs)
**Answer**
Ok so here is how:
Let us use gradient descent with respect to the approximate_inverse to gradually improve the approximate inverse of a matrix.
0) **Guess**: We *guess* the answer M_inverse, let's call this guess approximate_inverse. The first guess is random.
1) **Improve**: We *improve* approximate_inverse by nudging it in the right direction.
Step 0) Guess is only done once, Step 1) Improve is done many times
How do we know we're moving our approximation in the right direction? We need a way to tell whether one candidate approximate_inverse_1 is better than another candidate approximate_inverse_2. Put differently, given a candidate for the inverse, we must be able to answer: how far off are we? Are we completely lost? The degree of "being lost" is expressed via a **loss function**. For our loss function we choose a very common metric, the mean squared error (mse). Here it measures the element-wise distance between our estimate Y_hat and the target Y_true (the identity matrix).
Further we must decide on what is called the **learning rate**.
The learning rate is one of the most important parameters controlling the gradient descent method. One common way of visualising gradient descent is to compare the loss function to a hilly landscape. The current combination of parameters (in this case, e.g., our random inverse matrix) corresponds to a position in this landscape, and we would like to get to a valley. One way to imagine this is to put a "ball" at the current position and let it roll: the ball moves downhill and can also pick up some momentum.
# Code
```
import torch
torch.set_printoptions(sci_mode=False)
M = torch.rand((4,4))
M
M.shape[0] == M.shape[1]
# define a matrix-elementwise metric, the mean squared error
def mse(y_hat, y_true): return ((y_hat-y_true)**2).mean()
def improve(approximate_inverse, M, learning_rate, momentum, losses):
# identity matrix
y_true = torch.eye(n=len(M))
# estimate of identity matrix
# here we have various choices a) b) c)
# a)
y_hat = approximate_inverse @ M
## b)
#y_hat = M @ approximate_inverse
## c)
#y_hat = (approximate_inverse @ M + M @ approximate_inverse)/2
# loss = "degree of being lost"
loss = mse(y_hat, y_true)
losses.append(loss.detach().numpy())
# backpropagate to compute the gradient of the loss
loss.backward()
with torch.no_grad():
# displace by learning_rate * derivatives
# (i)
approximate_inverse -= learning_rate * approximate_inverse.grad
# remark: instead of (i) we could alternatively use sub_ and write (ii),
# this is computationally more efficient but harder to read.
# (ii)
# approximate_inverse.sub_(learning_rate * approximate_inverse.grad)
# momentum
# (iii)
approximate_inverse.grad *= momentum # reduce speed due to friction
# remark for the case of momentum == 0 we could alternatively write
# (iv)
# approximate_inverse.grad.zero_()
return None
def calculate_inverse(M, learning_rate=0.9, momentum=0.9):
isSquare = M.shape[0] == M.shape[1]
assert isSquare, 'M should be square'
torch.manual_seed(314)
# Step 0) Guess
approximate_inverse = torch.rand(size=M.shape, requires_grad=True)
losses = []
for t in range(10000):
# Step 1) Improve
improve(approximate_inverse, M, learning_rate, momentum, losses)
return approximate_inverse, losses
```
# Calculate inverse of random matrix
```
torch.manual_seed(314)
M1 = torch.rand(size=(4,4)) # torch.diag(torch.tensor([1.,2.,3.,4.]))
approximate_inverse_1, losses_1 = calculate_inverse(M1)
```
# Calculate inverse of diagonal matrix
```
torch.manual_seed(314)
M2 = torch.diag(torch.tensor([1.,2.,3.,4.])) # torch.rand(size=(4,4)) #
approximate_inverse_2, losses_2 = calculate_inverse(M2)
```
# Inspect results
Let's inspect whether approximate_inverse_1 is a good approximation of the inverse of M1
```
approximate_inverse_1 @ M1
```
Let's inspect whether approximate_inverse_2 is a good approximation of the inverse of M2
```
approximate_inverse_2 @ M2
```
Looking good!
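As a sanity check independent of gradient descent, a closed-form inverse should satisfy the same identity; here is a quick NumPy sketch (the matrix here is illustrative, not the notebook's exact M1 tensor):

```python
import numpy as np

rng = np.random.default_rng(314)
M = rng.random((4, 4))                        # a random square matrix
M_inv = np.linalg.inv(M)                      # closed-form inverse
residual = np.abs(M_inv @ M - np.eye(4)).max()
# residual sits at machine precision, far below the gradient-descent estimate
```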
# Compare convergence speed for simple diagonal matrix and general random matrix
```
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame(
{
'losses_1':np.array(losses_1).flatten(),
'losses_2':np.array(losses_2).flatten()
})
fig = px.line(df, y='losses_1', log_y=True)
fig.update_traces(name='Random Matrix, Momentum=0.9', showlegend = True)
fig.add_scatter(y=df['losses_2'], name='Simple Diagonal Matrix, Momentum=0.9')
fig.update_layout(title=\
'<b>How fast does the gradient algorithm converge?</b>' + \
'<br>Answer: It depends, for the Simple Diagonal Matrix it' + \
' converges faster',
xaxis_title='Iterations',
yaxis_title='Losses'
)
fig
```
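Why does the diagonal matrix converge faster? One standard explanation is conditioning: the loss surface of a well-conditioned matrix is closer to a round bowl, so gradient steps point more directly at the minimum. A hedged NumPy sketch (stand-ins for M1 and M2, not the exact tensors above):

```python
import numpy as np

rng = np.random.default_rng(314)
M1 = rng.random((4, 4))                       # stand-in for the random matrix
M2 = np.diag([1.0, 2.0, 3.0, 4.0])            # the simple diagonal matrix

cond_random = np.linalg.cond(M1)
cond_diag = np.linalg.cond(M2)                # 4.0: ratio of max/min diagonal entry
# A larger condition number generally means slower gradient-descent convergence.
```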
```
%matplotlib inline
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torch.autograd import Variable
from torchvision import datasets, transforms, models
import cv2
from google.colab import drive
drive.mount('/content/drive')
import os
os.listdir('/content/drive/My Drive/Accident')
!unzip '/content/drive/My Drive/Accident/Data.zip'
os.listdir('Data')
dataDirectory = 'Data'
trainTransform = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])])
testTransform = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])])
#Data
train_data = datasets.ImageFolder(dataDirectory+'/train',transform = trainTransform)
test_data = datasets.ImageFolder(dataDirectory+'/test',transform = testTransform)
#DataLoaders
trainloader = torch.utils.data.DataLoader(train_data ,batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
model = models.densenet121(pretrained=True)
torch.cuda.is_available()
# Use GPU if it's available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.0005)
model.to(device);
epochs = 2
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
os.listdir('/content/drive/My Drive/Accident/Real_test')
inTransform = transforms.Compose([transforms.ToPILImage(),
transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])])
inference = cv2.imread('/content/drive/My Drive/Accident/Real_test/a.jpg')
inf = inTransform(inference)
inf = inf.unsqueeze(0)
device
inf = inf.to(device)
logps = model.forward(inf)
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
top_class
path = '/content/drive/My Drive/Accident/Model'
torch.save(model.state_dict() , path+'/checkpoint.pth')
model.state_dict()
```
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
foreground_classes = {'plane', 'car', 'bird'}
#foreground_classes = {'bird', 'cat', 'deer'}
background_classes = {'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
#background_classes = {'plane', 'car', 'dog', 'frog', 'horse','ship', 'truck'}
fg1,fg2,fg3 = 0,1,2
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(5000):
    images, labels = next(dataiter)  # .next() was removed in newer PyTorch
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]])#.type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx])#.type("torch.DoubleTensor"))
    label = foreground_label[fg_idx]-fg1 # subtract fg1 so the foreground classes (fg1..fg3) are stored as 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 30000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
np.random.seed(i)
bg_idx = np.random.randint(0,35000,8)
fg_idx = np.random.randint(0,15000)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=False,num_workers=0,)
data,labels,fg_index = next(iter(train_loader))
bg = []
for i in range(120):
torch.manual_seed(i)
betag = torch.ones((250,9))/9 #torch.randn(250,9)
    bg.append(betag.requires_grad_())
class Module2(nn.Module):
def __init__(self):
super(Module2, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.fc4 = nn.Linear(10,3)
def forward(self,y): #z batch of list of 9 images
y1 = self.pool(F.relu(self.conv1(y)))
y1 = self.pool(F.relu(self.conv2(y1)))
y1 = y1.view(-1, 16 * 5 * 5)
y1 = F.relu(self.fc1(y1))
y1 = F.relu(self.fc2(y1))
y1 = F.relu(self.fc3(y1))
y1 = self.fc4(y1)
return y1
torch.manual_seed(1234)
what_net = Module2().double()
#what_net.load_state_dict(torch.load("simultaneous_what.pt"))
what_net = what_net.to("cuda")
def attn_avg(x,beta):
y = torch.zeros([batch,3, 32,32], dtype=torch.float64)
y = y.to("cuda")
alpha = F.softmax(beta,dim=1) # alphas
for i in range(9):
alpha1 = alpha[:,i]
y = y + torch.mul(alpha1[:,None,None,None],x[:,i])
return y,alpha
def calculate_attn_loss(dataloader,what,criter):
what.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
correct = 0
tot = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx= data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
beta = bg[i] # alpha for ith batch
inputs, labels,beta = inputs.to("cuda"),labels.to("cuda"),beta.to("cuda")
avg,alpha = attn_avg(inputs,beta)
alpha = alpha.to("cuda")
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
correct += sum(predicted == labels)
tot += len(predicted)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1),analysis,correct.item(),tot,correct.item()/tot
# for param in what_net.parameters():
# param.requires_grad = False
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
optim1 = []
for i in range(120):
optim1.append(optim.RMSprop([bg[i]], lr=1))
# instantiate optimizer
optimizer_what = optim.RMSprop(what_net.parameters(), lr=0.001)#, momentum=0.9)#,nesterov=True)
criterion = nn.CrossEntropyLoss()
acti = []
analysis_data_tr = []
analysis_data_tst = []
loss_curi_tr = []
loss_curi_tst = []
epochs = 100
# calculate zeroth epoch loss and FTPT values
running_loss,anlys_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion)
print('training epoch: [%d ] loss: %.3f correct: %.3f, total: %.3f, accuracy: %.3f' %(0,running_loss,correct,total,accuracy))
loss_curi_tr.append(running_loss)
analysis_data_tr.append(anlys_data)
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what_net.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
beta = bg[i] # alpha for ith batch
inputs, labels,beta = inputs.to("cuda"),labels.to("cuda"),beta.to("cuda")
# zero the parameter gradients
optimizer_what.zero_grad()
optim1[i].zero_grad()
# forward + backward + optimize
avg,alpha = attn_avg(inputs,beta)
outputs = what_net(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
#alpha.retain_grad()
loss.backward(retain_graph=False)
optimizer_what.step()
optim1[i].step()
running_loss_tr,anls_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion)
analysis_data_tr.append(anls_data)
loss_curi_tr.append(running_loss_tr) #loss per epoch
print('training epoch: [%d ] loss: %.3f correct: %.3f, total: %.3f, accuracy: %.3f' %(epoch+1,running_loss_tr,correct,total,accuracy))
if running_loss_tr<=0.08:
break
print('Finished Training run ')
analysis_data_tr = np.array(analysis_data_tr)
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = np.arange(0,epoch+2)
df_train[columns[1]] = analysis_data_tr[:,-2]/300
df_train[columns[2]] = analysis_data_tr[:,-1]/300
df_train[columns[3]] = analysis_data_tr[:,0]/300
df_train[columns[4]] = analysis_data_tr[:,1]/300
df_train[columns[5]] = analysis_data_tr[:,2]/300
df_train[columns[6]] = analysis_data_tr[:,3]/300
df_train
fig= plt.figure(figsize=(6,6))
plt.plot(df_train[columns[0]],df_train[columns[3]], label ="focus_true_pred_true ")
plt.plot(df_train[columns[0]],df_train[columns[4]], label ="focus_false_pred_true ")
plt.plot(df_train[columns[0]],df_train[columns[5]], label ="focus_true_pred_false ")
plt.plot(df_train[columns[0]],df_train[columns[6]], label ="focus_false_pred_false ")
plt.title("On Train set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xticks([0,2,4])
plt.xlabel("epochs")
plt.ylabel("percentage of data")
#plt.vlines(vline_list,min(min(df_train[columns[3]]/300),min(df_train[columns[4]]/300),min(df_train[columns[5]]/300),min(df_train[columns[6]]/300)), max(max(df_train[columns[3]]/300),max(df_train[columns[4]]/300),max(df_train[columns[5]]/300),max(df_train[columns[6]]/300)),linestyles='dotted')
plt.show()
fig.savefig("train_analysis.pdf")
fig.savefig("train_analysis.png")
aph = []
for i in bg:
aph.append(F.softmax(i,dim=1).detach().numpy())
aph = np.concatenate(aph,axis=0)
torch.save({
'epoch': 500,
'model_state_dict': what_net.state_dict(),
#'optimizer_state_dict': optimizer_what.state_dict(),
"optimizer_alpha":optim1,
"FTPT_analysis":analysis_data_tr,
"alpha":aph
}, "cifar_what_net_500.pt")
aph
running_loss_tr,anls_data,correct,total,accuracy = calculate_attn_loss(train_loader,what_net,criterion)
print("argmax>0.5",anls_data[-2])
```
# Backwards Compatibility Examples with Different Protocols
## Prerequisites
* A Kubernetes cluster with kubectl configured
* curl
* grpcurl
* pygmentize
## Setup Seldon Core
Use the [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) notebook to set up Seldon Core with an ingress - either Ambassador or Istio.
Then port-forward to that ingress on localhost:8003 in a separate terminal either with:
* Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080`
* Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80`
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
import json
import time
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
VERSION=!cat ../version.txt
VERSION=VERSION[0]
VERSION
```
## Model with Old REST Wrapper Upgraded
We will deploy a REST model that uses the Seldon protocol by specifying the attribute `protocol: seldon`
```
%%writetemplate resources/model_seldon.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-seldon
spec:
protocol: seldon
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier_rest:1.4.0
name: classifier
graph:
name: classifier
type: MODEL
name: model
replicas: 1
!kubectl apply -f resources/model_seldon.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
%%writetemplate resources/model_seldon.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-seldon
spec:
protocol: seldon
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
name: classifier
graph:
name: classifier
type: MODEL
name: model
replicas: 1
!kubectl apply -f resources/model_seldon.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \
-rpc-header seldon:example-seldon -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
!kubectl delete -f resources/model_seldon.yaml
```
## Model with Old GRPC Wrapper Upgraded
We will deploy a gRPC model that uses the Seldon protocol by specifying the attribute `protocol: seldon`
```
%%writefile resources/model_seldon_grpc.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: grpc-seldon
spec:
name: grpcseldon
protocol: seldon
transport: grpc
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier_grpc:1.3
name: classifier
graph:
name: classifier
type: MODEL
endpoint:
type: GRPC
name: model
replicas: 1
!kubectl apply -f resources/model_seldon_grpc.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=grpc-seldon -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep grpc-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0,6.0]]}}' \
-rpc-header seldon:grpc-seldon -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
!kubectl delete -f resources/model_seldon_grpc.yaml
```
## Old Operator and Model Upgraded
```
!helm delete seldon \
--namespace seldon-system
!helm install seldon seldon-core-operator \
--repo https://storage.googleapis.com/seldon-charts \
--version 1.4.0 \
--namespace seldon-system \
--wait
%%writetemplate resources/model_seldon.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-seldon
spec:
protocol: seldon
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier_rest:1.4.0
name: classifier
graph:
name: classifier
type: MODEL
name: model
replicas: 1
%%writefile ../servers/sklearnserver/samples/iris.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: sklearn
spec:
name: iris
predictors:
- graph:
children: []
implementation: SKLEARN_SERVER
modelUri: gs://seldon-models/sklearn/iris
name: classifier
name: default
replicas: 1
svcOrchSpec:
env:
- name: SELDON_LOG_LEVEL
value: DEBUG
!kubectl apply -f resources/model_seldon.yaml
!kubectl apply -f ../servers/sklearnserver/samples/iris.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=sklearn -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0, 6.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/sklearn/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
!helm upgrade seldon \
../helm-charts/seldon-core-operator \
--namespace seldon-system \
--wait
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=sklearn -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
```
Only REST calls will be available, as the image still uses the old Python wrapper.
```
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
```
REST and gRPC calls will work with the new server, as the image will have been updated.
```
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0, 6.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/sklearn/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0,6.0]]}}' \
-rpc-header seldon:sklearn -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
%%writetemplate resources/model_seldon.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example-seldon
spec:
protocol: seldon
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/mock_classifier:{VERSION}
name: classifier
graph:
name: classifier
type: MODEL
name: model
replicas: 1
!kubectl apply -f resources/model_seldon.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep example-seldon -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \
-H "Content-Type: application/json"
d=json.loads(X[0])
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \
-rpc-header seldon:example-seldon -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
d=json.loads("".join(X))
print(d)
assert(d["data"]["ndarray"][0][0] > 0.4)
!kubectl delete -f resources/model_seldon.yaml
!kubectl delete -f ../servers/sklearnserver/samples/iris.yaml
```
## Organizing the system by scoring coupling and cohesion
### Intuition
Ordering by group / modules gives us a visual indication of how well the system accomplishes the design goal of loosely coupled and highly cohesive modules. We can quantify this idea.
Clustering is a type of assignment problem seeking the optimal allocation of N components to M clusters. One of the prominent heuristics of system architecting is to choose modules such that they are as independent as possible...low coupling and high cohesion.
We can objectively score these clustering algorithms using an objective function that considers both the size of the clusters ($C_i$) and the number of interactions outside the clusters ($I_0$) according to the following equation, where $\alpha = 10$, $\beta = 100$ or $\alpha = 1$, $\beta = 10$, and $M$ is the number of clusters:
$Obj = \alpha \sum_{i=1}^{M}C_i^2 + \beta I_0$
Clustering objectives work against two competing extremes:
* M=1 => We want to minimize the size of the largest modules...otherwise, we could just take the trivial result of putting everything into one module.
* M=N => We want to minimize the number and/or strength of interactions among components that cross the module boundaries. As we get to more components, more and more interactions will be required to cross module boundaries.
The objective function can be evaluated for any number of potential designs that were manually or automatically created. This provides a real-time feedback loop about the potential quality of a design. The range of the function is immediately bound by the two extremes. Your job as an architect and designer is to minimize this function while preserving semantically meaningful modules.
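A minimal sketch of this objective (a hypothetical helper, not part of the referenced tooling): given the cluster sizes and a count of cross-cluster interactions, the score trades module size against boundary-crossing edges.

```python
# Obj = alpha * sum(C_i^2) + beta * I_0, with alpha=10, beta=100 as in the text.
def clustering_objective(cluster_sizes, external_interactions, alpha=10, beta=100):
    return alpha * sum(c * c for c in cluster_sizes) + beta * external_interactions

# Ten components: one monolith vs. two balanced modules with 3 boundary edges.
monolith = clustering_objective([10], 0)     # 10 * 10^2        = 1000
balanced = clustering_objective([5, 5], 3)   # 10*(25+25) + 300 = 800
```

The balanced partition scores lower (better) because the penalty for three external interactions is smaller than the penalty for one oversized module.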
_For more information, see Eppinger & Browning, Design Structure Matrix Methods and Applications, MIT Press, Cambridge, 2012, p. 25_
### Scoring `lein-topology`
Let's start by loading the included sample network data from `lein-topology`:
```
import sand
network_collection = "lein-topology"
network_name = "57af741"
data_path = "./data/" + network_collection + "-" + network_name
edge_file = data_path + ".csv"
edgelist = sand.csv_to_dicts(edge_file,header=['source', 'target', 'weight'])
g = sand.from_edges(edgelist)
g.summary()
```
Namespaces are the modules of the system and will be used in the modularity score:
```
g.vs['group'] = sand.fqn_to_groups(g.vs['label'])
len(set(g.vs['group']))
```
For us to apply this scoring methodology meaningfully, we'll make a couple of simplifying assumptions:
* `clojure.core` functions aren't moving to a different namespace.
* tests shouldn't factor in the score of how the production code is organized.
With these, we can apply the filtering from above a bit more strictly to get an even smaller subgraph of the function call network:
```
v_to_keep = g.vs(lambda v: 'topology' in v['label'] and not 'test' in v['label'])
tg = g.subgraph(v_to_keep)
# Recompute degree after building the subgraph:
tg.vs['indegree'] = tg.degree(mode="in")
tg.vs['outdegree'] = tg.degree(mode="out")
tg.summary()
```
The baseline modularity score of `lein-topology`'s core function dependency graph is:
```
%load_ext autoreload
%autoreload 2
import sand.modularity as mod
mod.objective(tg, tg.vs['group'])
```
Where is this on the range of possibilities?
Suppose all functions were in the same namespace. We'll simulate this by setting the group membership vector to all 1's:
```
mod.objective(tg, [1 for _ in range(len(tg.vs))])
```
This is the degenerate case of M=1, so the objective function simply returns the square of the number of vertices:
```
len(tg.vs) * len(tg.vs)
```
The other extreme occurs when we have the extreme of M=N, or all functions in their own namespace. We can simulate this by providing a unique group membership id for each vertex:
```
mod.objective(tg, range(len(tg.vs)))
```
Finally, we can compare our actual modularity score to a computational result. We can use Girvan-Newman edge-betweenness community detection to generate a modular design based on the network structure alone:
```
eb_membership = sand.edge_betweenness(tg, directed=True)
len(set(eb_membership))
len(set(tg.vs['group']))
```
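For intuition about what edge-betweenness community detection does, here is a tiny hedged example using NetworkX (a different library than `sand`, used only for illustration): the bridge between two triangles carries all cross-triangle shortest paths, so it has the highest edge betweenness, is removed first, and the triangles emerge as communities.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Two triangles joined by a single bridge edge (2, 3).
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
first_split = next(girvan_newman(G))          # removes highest-betweenness edges
communities = sorted(sorted(c) for c in first_split)
# → [[0, 1, 2], [3, 4, 5]]
```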
So the edge betweenness algorithm comes up with fewer communities, i.e. namespaces in this context. Let's see how the modularity score compares:
```
mod.objective(tg, eb_membership)
```
If this score is lower than our actual baseline, then the computational community structure may represent an improvement over the current structure. Which namespaces have changed groups? We may wish to refactor the code to reflect this structure.
If the edge betweenness modularity score is higher than our baseline, this fact acts as a quantitative defense of our design.
### The novelty here is receiving an algorithmic recommendation about how to improve the organization of the code.
```
import tensorflow as tf
tf.config.experimental.list_physical_devices()
tf.test.is_built_with_cuda()
```
# Importing Libraries
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os.path as op
import pickle
import tensorflow as tf
from tensorflow import keras
from keras.models import Model,Sequential,load_model
from keras.layers import Input, Embedding
from keras.layers import Dense, Bidirectional
from keras.layers import LSTM
import keras.metrics as metrics
import itertools
from tensorflow.keras.utils import Sequence
from decimal import Decimal
from keras import backend as K
from keras.layers import Conv1D,MaxPooling1D,Flatten,Dense
```
# Data Fetching
```
inp=pd.read_csv("../PJ sensor.csv",usecols=[6,7,10,11])
out=pd.read_csv("../PJ sensor.csv",usecols=[2,3,4,5,8,9])
inp.head(5)
out.head(5)
inp=np.array(inp)
out=np.array(out)
```
# Min Max Scaler
```
from sklearn.preprocessing import MinMaxScaler
import warnings
scaler_obj=MinMaxScaler()
X1=scaler_obj.fit_transform(inp)
Y1=scaler_obj.fit_transform(out)
warnings.filterwarnings(action='ignore', category=UserWarning)
X1=X1[:,np.newaxis,:]
Y1=Y1[:,np.newaxis,:]
def rmse(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
def coeff_determination(y_true, y_pred):
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
Y1.shape
```
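One caveat with the cell above: the single `scaler_obj` is re-fitted on `out`, so its stored statistics no longer match `inp`, which makes `inverse_transform` on the inputs unreliable; keeping one scaler per array avoids this. A small illustrative sketch (toy data, not the sensor CSV):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

in_scaler = MinMaxScaler()                    # one scaler per array
data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaled = in_scaler.fit_transform(data)        # each column mapped onto [0, 1]
restored = in_scaler.inverse_transform(scaled)  # recovers the original units
```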
# Model
```
model1 = Sequential()
model1.add(keras.Input(shape=(1,4)))
model1.add(tf.keras.layers.LSTM(6,activation="relu",use_bias=True,kernel_initializer="glorot_uniform",bias_initializer="zeros"))
model1.add(Dense(6))
model1.add(keras.layers.BatchNormalization(axis=-1,momentum=0.99,epsilon=0.001,center=True,scale=True,
beta_initializer="zeros",gamma_initializer="ones",
moving_mean_initializer="zeros",moving_variance_initializer="ones",trainable=True))
model1.add(keras.layers.ReLU())
model1.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='mse',metrics=['accuracy','mse','mae',rmse])
model1.summary()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
model_fit8 = model1.fit(x_train,y_train,batch_size=2048,epochs=300, validation_split=0.1)
model1.evaluate(x_test,y_test)
model1.evaluate(x_train,y_train)
```
# Saving Model as File
```
model_json = model1.to_json()
with open("Model_File/lstmpj.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model1.save_weights("Model_File/lstmpj.h5")
print("Saved model to disk")
from tensorflow.keras.models import model_from_json
json_file = open('Model_File/lstmpj.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("Model_File/lstmpj.h5")
print("Loaded model from disk")
loaded_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='mse',metrics=['accuracy','mse','mae',rmse])
```
# Error Analysis
```
# summarize history for loss
plt.plot(model_fit8.history['loss'])
plt.plot(model_fit8.history['val_loss'])
plt.title('Model Loss',fontweight ='bold',fontsize = 15)
plt.ylabel('Loss',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
# summarize history for accuracy
plt.plot(model_fit8.history['accuracy'])
plt.plot(model_fit8.history['val_accuracy'])
plt.title('Model accuracy',fontweight ='bold',fontsize = 15)
plt.ylabel('Accuracy',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
y_test_pred=loaded_model.predict(x_test)
y_test_pred
```
| github_jupyter |
<a href="https://colab.research.google.com/github/SoumyadeepDebnath/DataEngineering_with_Python_by_SAM/blob/main/QualityCodes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#### **Multiple Assignment**
```
# Instead of this
x = 10
y = 10
z = 10
a = 1
b = 2
# Use this
x = y = z = 10
a,b = 1,2
```
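One caveat worth adding: chained assignment binds every name to the same object. That is harmless for immutable numbers, but can surprise you with mutables such as lists:

```python
# Fine for immutables: rebinding x does not affect y or z
x = y = z = 10
x = 11
print(y, z)  # 10 10

# With mutables, all names point at the same list
a = b = []
a.append(1)
print(b)  # [1] -- b sees the mutation because a and b are one object
```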
#### **Variable Unpacking**
In Python, unpacking is the process of assigning an iterable of values to a tuple (or list) of variables using a single assignment statement.
```
# Instead of this
x = 1
y = 2
x = 1
y = [2, 3, 4, 5]
x = 1
y = [2, 3, 4]
z = 5
# Use this
x, y = [1, 2]
x, *y = [1, 2, 3, 4, 5]
x, *y, z = [1, 2, 3, 4, 5]
```
#### **Swapping Variables**
```
# Instead of this
temp = x
x = y
y = temp
# Use this
x, y = y, x
```
#### **Name Casing**
In Python, snake_case is preferred over camelCase for variable and function names.
```
# Instead of this
def isEven(num):
pass
# Use this
def is_even(num):
pass
```
#### **Conditional Expressions**
```
# Instead of this
def is_even(num):
if num % 2 == 0:
print("Even")
else:
print("Odd")
# Use this
def is_even(num):
print("Even") if num % 2 == 0 else print("Odd")
# Or this
def is_even(num):
print("Even" if num % 2 == 0 else "Odd")
```
#### **String Formatting**
```
name = "Soumyadeep"
item = "Debnath"
# Instead of this
print("%s likes %s." %(name, item))
# Or this
print("{} likes {}.".format(name, item))
# Use this
print(f"{name} likes {item}.")
```
#### **Comparison Operator**
```
# Instead of this
if 0 < x and x < 1000:
print("x is a 3 digit number")
# Use this
if 0 < x < 1000:
print("x is a 3 digit number")
```
#### **Iterating over a list or tuple**
```
names = ['Harry', 'Ron', 'Hermione', 'Ginny', 'Neville']
```
You don't need indices to access list elements.
```
# Instead of this
for i in range(len(names)):
print(names[i])
# Use this
for name in names:
print(name)
```
Using enumerate() - for both indices and values
```
for index, value in enumerate(names):
print(index, value)
```
#### **List comprehension**
```
arr = [1, 2, 3, 4, 5, 6]
res = []
```
A list comprehension is a syntactic construct for building a list from an existing iterable. In contrast to the map and filter functions, it mirrors the mathematical set-builder notation (set comprehension).
```
# Instead of this
for num in arr:
if num % 2 == 0:
res.append(num * 2)
else:
res.append(num)
# Use this
res = [(num * 2 if num % 2 == 0 else num) for num in arr]
```
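Since the text above contrasts comprehensions with map and filter, here is the same transformation written both ways (the list is redefined so the snippet is self-contained):

```python
arr = [1, 2, 3, 4, 5, 6]

# Comprehension form, as above
res_comp = [(num * 2 if num % 2 == 0 else num) for num in arr]

# map-based form: a lambda encodes the same conditional
res_map = list(map(lambda num: num * 2 if num % 2 == 0 else num, arr))

print(res_comp)             # [1, 4, 3, 8, 5, 12]
print(res_comp == res_map)  # True
```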
#### **Using Set for searching**
Searching in a set is O(1) on average, compared to O(n) for a list.
Convert a list to a set when you need fast membership tests.
```
# Instead of this
l = ['a', 'e', 'i', 'o', 'u']
def is_vowel(char):
if char in l:
print("Vowel")
# Use this
s = {'a', 'e', 'i', 'o', 'u'}
def is_vowel(char):
if char in s:
print("Vowel")
```
#### **Iterating Dictionary**
```
roll_name = {
315: "Soumyadeep",
705: "Debnath"
}
```
Use dict.items() to iterate through a dictionary.
```
# Instead of this
for key in roll_name:
print(key, roll_name[key])
# Use this
for key, value in roll_name.items():
print(key, value)
```
| github_jupyter |
# Principal component analysis of ensemble forecast fields (GRIB)
In this example we will perform a principal component analysis (PCA) on ensemble forecast fields stored in GRIB format. We will use a combination of Metview, numpy and scipy to achieve this.
```
import metview as mv
import numpy as np
from scipy import linalg as LA
```
File *z500_ens.grib* contains 500 hPa geopotential ECMWF ensemble forecast (50 perturbed and a control member) for a given timestep (+96h). We read this data into a [Fieldset](https://confluence.ecmwf.int/display/METV/Fieldset+Functions) which is Metview's own class to handle GRIB data.
```
fs = mv.read("./z500_ens.grib")
```
We will compute the principal components (PC) using *numpy* and *scipy*. First we load the fields into a numpy array.
```
v = fs.values()
print(v.shape)
```
For the PCA we center the data, create the covariance matrix and compute the eigenvalues and eigenvectors of it.
```
v -= np.mean(v, axis = 0)
cov = np.cov(v, rowvar = False)
evals , evecs = LA.eigh(cov)
```
The resulting *evecs* array stores the eigenvectors as columns. The eigenvectors are guaranteed to be orthonormal but not yet sorted according to the eigenvalues. So we sort them in descending order.
```
idx = np.argsort(evals)[::-1]
evecs = evecs[:,idx]
evals = evals[idx]
```
This sorted set of eigenvectors forms the principal components of the examined fields. If we print the *explained variance* of the first few PCs we can conclude that the first two PCs are particularly dominant explaining almost 70% of the total variance. We will plot these PCs together with the ensemble mean and spread (i.e. standard deviation) to get a more detailed picture of the main flow patterns in the ENS forecast.
```
evals[:10]*100/evals.sum()
```
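The eigen-decomposition route above can be cross-checked against an SVD-based PCA: on centred data the right singular vectors equal the covariance eigenvectors (up to sign), and the squared singular values divided by n-1 equal the eigenvalues. A self-contained sketch on random data (not the GRIB fields):

```python
import numpy as np
from scipy import linalg as LA

rng = np.random.default_rng(0)
v = rng.standard_normal((100, 5))
v -= np.mean(v, axis=0)

# Eigen-decomposition of the covariance matrix, as in the notebook
cov = np.cov(v, rowvar=False)
evals, evecs = LA.eigh(cov)
idx = np.argsort(evals)[::-1]
evecs, evals = evecs[:, idx], evals[idx]

# SVD route: rows of vt are the principal components
_, s, vt = np.linalg.svd(v, full_matrices=False)
evals_svd = s**2 / (v.shape[0] - 1)

print(np.allclose(evals, evals_svd))                        # True
print(np.allclose(np.abs(vt), np.abs(evecs.T), atol=1e-8))  # True
```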
In order to plot the PCs with Metview we convert them back into fieldsets by creating a copy of the first two fields of our original fieldset and setting the values accordingly. Please note that the metadata of the new fields is not accurate but it is still perfectly fine for plotting purposes.
```
g = fs[0:2]
g[0] = g[0].set_values(evecs[:,0])
g[1] = g[1].set_values(evecs[:,1])
```
To plot the data, we need to tell Metview to send the plot to Jupyter.
```
mv.setoutput('jupyter')
```
Plotting is performed through Metview's interface to the [Magics](https://confluence.ecmwf.int/display/MAGP/Magics) library developed at ECMWF. We will first define the view parameters (by default we will get a global map in cylindrical projection).
```
# shaded land to make the points stand out more
grey_land_shading = mv.mcoast(
map_coastline_land_shade = "on",
map_coastline_land_shade_colour = "grey",
map_grid_latitude_increment = 10,
map_grid_longitude_increment = 10,
map_grid_colour = "charcoal"
)
area_view = mv.geoview(
map_area_definition = 'corners',
area = [30,-40.87,65,20],
coastlines = grey_land_shading
)
```
Then we define a 2x2 layout based on the map we defined above.
```
dw = mv.plot_superpage(pages = mv.mvl_regular_layout(area_view,2,2,1,1))
```
To highlight the details of the PCs we use Magics' powerful contouring routine to assign colours based on the magnitude of the differences.
```
cont_pc = mv.mcont(
legend = "on",
contour_line_colour = "black",
contour_highlight = "off",
contour_max_level = 0.06,
contour_min_level = -0.06,
contour_shade = "on",
contour_shade_colour_method = "palette",
contour_shade_method = "area_fill",
contour_shade_palette_name= "eccharts_red_blue2_10")
```
Finally, we plot each field with a custom title. We compute the ensemble mean and spread on the fly with fieldset functions from Metview.
```
mv.plot(dw[0], fs.mean(), mv.mtext(text_line_1 = "ENS mean"),
dw[1], fs.stdev(), mv.mtext(text_line_1 = "ENS spread"),
dw[2], g[0], cont_pc, mv.mtext(text_line_1 = "PC1"),
dw[3], g[1], cont_pc, mv.mtext(text_line_1 = "PC2"))
```
# Additional resources
- [Introductory Metview training course](https://confluence.ecmwf.int/display/METV/Data+analysis+and+visualisation+using+Metview)
- [Metview's Python interface](https://confluence.ecmwf.int/display/METV/Metview%27s+Python+Interface)
- [Function list](https://confluence.ecmwf.int/display/METV/List+of+Operators+and+Functions)
| github_jupyter |
```
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import NearestCentroid
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
df_train = pd.read_csv('../data/train.csv')
df_test = pd.read_csv('../data/test.csv')
x_train = df_train[['mean_return', 'volatility']].values
y_train = df_train['label'].values
x_test = df_test[['mean_return', 'volatility']].values
y_test = df_test['label'].values
```
### Helper Functions
```
def buy_and_hold(df):
    """
    This function calculates the buy and hold strategy.

    Parameters
    ----------
    df : pandas.DataFrame
        Dataframe containing the data.

    Returns
    -------
    profit : float
        Final cash value of the buy-and-hold strategy.
    """
    ## Buy on the first day of 2020 and sell on the last day of 2021
    start_money = 100
    ## Buy on the first day of 2020
    initial = start_money / df['mean_adj_close'].loc[df['Year'] == 2020].iloc[0]
    ## Sell on the last day of 2021
    cash = df['mean_adj_close'].loc[df['Year'] == 2021].iloc[-1] * initial
    profit = cash
    return profit
def my_trading_strategy(df, **kwargs):
"""
This function calculates the trading strategy.
Parameters
----------
df : pandas.DataFrame
Dataframe containing the data.
Returns
-------
profit : pandas.Series
Series containing the trading strategy profit.
"""
label = kwargs['labels']
eval = kwargs['eval']
df['labels'] = label
    initial: float = 0      # shares currently held
    cash: float = 100       # starting money
    profit: float = 0
    total_profit: float = 0
    sell: float = 0
    total_loss: float = 0
    success: int = 0
    fail: int = 0
    count: list = []
if eval == 'test':
for i in range(len(df)):
"""
Start buying
"""
if(df['labels'].iloc[i] == 1 and initial == 0):
initial = cash / df['mean_adj_close'].iloc[i]
elif(df['labels'].iloc[i] != 1 and initial != 0):
sell = initial * df['mean_adj_close'].iloc[i]
if(sell > cash):
profit = sell - cash
success += 1
total_profit += profit
else:
loss = cash - sell
fail += 1
total_loss += loss
cash = sell
initial = 0
print(f'Success: {success}')
print(f'Fails: {fail}')
return total_profit
elif eval == 'train':
for i in range(len(df)):
"""
Start buying
"""
if(df['label'].iloc[i] == 1 and initial == 0):
initial = cash / df['mean_adj_close'].iloc[i]
elif(df['label'].iloc[i] != 1 and initial != 0):
sell = initial * df['mean_adj_close'].iloc[i]
if(sell > cash):
profit = sell - cash
success += 1
total_profit += profit
else:
loss = cash - sell
fail += 1
total_loss += loss
cash = sell
initial = 0
print(f'Number of successful trades: {success}')
print(f'Number of failed trades: {fail}')
return total_profit
print(f'Buy and hold profit = ${round(buy_and_hold(df_test))}')
print(f'My trading strategy profit = ${round(my_trading_strategy(df_test, labels="none", eval="train"))}')
```
### Preparing data for training
```
scaler = MinMaxScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
```
### KNN
----
Finding the best value of k for KNN Euclidean distance
```
highest_accuracy: list = []
for k in range(3, 21, 2):
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
highest_accuracy.append(accuracy_score(y_test, y_pred))
plt.figure(figsize=(10, 4))
plt.plot(range(3, 21, 2), highest_accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='black', markersize=4)
plt.title('Accuracy vs K')
plt.xlabel('Number of neighbors: k')
plt.ylabel('Accuracy')
plt.show()
```
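Rather than reading k off the plot, the best value can also be picked programmatically from the accuracy list; a small sketch with hypothetical accuracy values (in practice use the list built above):

```python
import numpy as np

ks = list(range(3, 21, 2))
# Hypothetical accuracies, one per candidate k
accs = [0.71, 0.74, 0.78, 0.80, 0.82, 0.83, 0.81, 0.80, 0.79]

best_k = ks[int(np.argmax(accs))]
print(best_k)  # 13
```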
Visually choosing k = 13
```
knn = KNeighborsClassifier(n_neighbors = 13, p=2)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
print(f'Accuracy: {round(accuracy_score(y_test, y_pred), 2)}')
```
y_pred = learned labels for year 2020 and 2021
```
print(f'My trading strategy profit = ${round(my_trading_strategy(df_test, labels=y_pred, eval="test"))}')
```
### KNN Manhattan
----
Finding the best value of k for KNN Manhattan Distance
```
highest_accuracy: list = []
for k in range(3, 21, 2):
knn = KNeighborsClassifier(n_neighbors = k, p = 1)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
highest_accuracy.append(accuracy_score(y_test, y_pred))
plt.figure(figsize=(10, 4))
plt.plot(range(3, 21, 2), highest_accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='black', markersize=4)
plt.title('Accuracy vs K')
plt.xlabel('Number of neighbors: k')
plt.ylabel('Accuracy')
plt.show()
knn = KNeighborsClassifier(n_neighbors = 13, p=1)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
print(f'Accuracy: {round(accuracy_score(y_test, y_pred), 2)}')
print(f'My trading strategy profit = ${round(my_trading_strategy(df_test, labels=y_pred, eval="test"))}')
```
### KNN Minkowski
----
Finding the best value of k for Minkowski distance
```
highest_accuracy: list = []
p = 1.5
def minkowski(a, b, p):
return np.linalg.norm(a-b, ord=p)
for k in range(3, 21, 2):
knn = KNeighborsClassifier(n_neighbors = k, metric= lambda a, b: minkowski(a, b, p))
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
highest_accuracy.append(accuracy_score(y_test, y_pred))
plt.figure(figsize=(10, 4))
plt.plot(range(3, 21, 2), highest_accuracy, color='red', linestyle='dashed', marker='o', markerfacecolor='black', markersize=4)
plt.title('Accuracy vs K')
plt.xlabel('Number of neighbors: k')
plt.ylabel('Accuracy')
plt.show()
knn = KNeighborsClassifier(n_neighbors = 13, metric= lambda a, b: minkowski(a, b, p))
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
print(f'Accuracy: {round(accuracy_score(y_test, y_pred), 2)}')
print(f'My trading strategy profit = ${round(my_trading_strategy(df_test, labels=y_pred, eval="test"))}')
```
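As an aside, the custom lambda above may not be necessary: `KNeighborsClassifier` accepts `metric='minkowski'` with a fractional `p` directly, which is typically much faster than a Python callable. A sketch on tiny synthetic data (not the stock features):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated clusters so the prediction is unambiguous
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
y = np.array([0, 0, 0, 1, 1, 1])

# Fractional p is supported natively by the minkowski metric
knn = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=1.5)
knn.fit(X, y)
print(knn.predict([[9.0, 9.0]]))  # [1]
```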
| github_jupyter |
# Titanic Prediction using Python
### A huge thank you to Jose Portilla and his Udemy course for teaching me https://www.udemy.com/python-for-data-science-and-machine-learning-bootcamp/learn/v4
## Imports and reading in files
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from subprocess import check_output
#print(check_output(["ls", "../input"]).decode("utf8"))
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv("../input/train.csv")
#Check that the file was read in properly and explore the columns
df.head()
```
## Data exploration
```
plt.figure(figsize=(12,8))
sns.heatmap(df.isnull(),cbar=False, yticklabels=False, cmap='viridis')
```
From the heat map, we can see that most of the 'Cabin' column is missing. The 'Age' column is also missing some values, but we can use imputation to fill those in later. The 'Embarked' column is missing so few rows that we can simply drop them.
```
sns.set_style('darkgrid')
sns.countplot(x='Survived', data=df, hue='Pclass')
```
We can see here that those who did not survive were predominantly from the 3rd Passenger Class (Pclass).
```
sns.countplot(x='SibSp', data=df, hue='Survived')
df['Fare'].hist(bins=40)
```
Here, we impute the ages we do not have information on. We use a boxplot to estimate the median age of each passenger class and fill that value into the rows with missing ages.
```
plt.figure(figsize=(12,6))
sns.boxplot(x='Pclass', y='Age', data=df)
def inpute_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else: return 24
else: return Age
df['Age']=df[['Age','Pclass']].apply(inpute_age, axis=1)
plt.figure(figsize=(12,8))
sns.heatmap(df.isnull(),cbar=False, yticklabels=False, cmap='viridis')
```
You can see now the data is cleaner, but we still need to clean the 'Cabin' and 'Embarked' columns. For now, we will simply drop the 'Cabin' column and drop the rows where 'Embarked' is missing.
```
df.drop('Cabin', axis=1, inplace=True)
plt.figure(figsize=(12,6))
sns.heatmap(df.isnull(),cbar=False, yticklabels=False, cmap='viridis')
df.dropna(inplace=True)
plt.figure(figsize=(12,6))
sns.heatmap(df.isnull(),cbar=False, yticklabels=False, cmap='viridis')
```
The data is now clean of null values, but we still need to take care of objects that a machine learning algorithm can't handle, namely strings.
```
df.info()
```
We can see that 'Name', 'Sex', 'Ticket', and 'Embarked' are all objects; in this case, they are all strings. We will use pandas' built-in get_dummies() function to convert them to numbers.
```
#We make a new 'Male' column because get_dummies will drop one of the dummy variables
#to ensure linear independence.
df['Male'] = pd.get_dummies(df['Sex'], drop_first=True)
#The embarked column indicates where the passenger boarded the Titanic.
#It has three values ['S','C','Q']
embarked = pd.get_dummies(df['Embarked'], drop_first=True)
df = pd.concat([df, embarked], axis=1)
#These columns do not provide us any information for the following reasons:
#PassengerID: we consider 'PassengerID' a randomly assigned ID thus not correlated with surviability
#Name: we are not performing any feature extraction from the name, so we must drop this non-numerical column
#Sex: the 'Male' column already captures all information about the sex of the passenger
#Ticket: we are not performing any feature extraction, so we must drop this non-numerical column
#Embarked: we have extracted the dummy values, so those two numerical dummy values encapsulate all the embarked info
df.drop(['PassengerId', 'Name', 'Sex', 'Ticket', 'Embarked'], axis=1, inplace=True)
#Take a look at our new dataframe
df.head()
df.info()
```
## Build and train the model
```
#Seperate the feature columns from the target column
X = df.drop('Survived', axis=1)
y = df['Survived']
#Split the data into two. I don't think this is necessary since there are two files.
#I will keep this here for now
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.3)
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X, y)
#Read in the test data
test_df = pd.read_csv('../input/test.csv')
#Clean the test data the same way we did the training data
test_df['Age']=test_df[['Age','Pclass']].apply(inpute_age, axis=1)
test_df.drop('Cabin', axis=1, inplace=True)
test_df.dropna(inplace=True)
test_df['Male'] = pd.get_dummies(test_df['Sex'], drop_first=True)
embarked = pd.get_dummies(test_df['Embarked'], drop_first=True)
test_df = pd.concat([test_df, embarked], axis=1)
pass_ids = test_df['PassengerId']
test_df.drop(['PassengerId', 'Name', 'Sex', 'Ticket', 'Embarked'], axis=1, inplace=True)
test_df.tail()
predictions = logmodel.predict(test_df)
submission = pd.DataFrame({
"PassengerId": pass_ids,
"Survived": predictions
})
submission.to_csv('titanic.csv', index=False)
```
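The split created above is never actually used for scoring, since the model is fit on the full X and y. Here is a minimal, self-contained sketch (synthetic data, not the Titanic set) of how a held-out accuracy check could look:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic, nearly linearly separable data standing in for the Titanic features
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(round(acc, 2))  # close to 1.0 because the synthetic data is separable
```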
| github_jupyter |
# Zero Pressure Gradient Flat Plate

#### References
http://turbmodels.larc.nasa.gov/flatplate.html
```
DATA_DIR='.'
REF_DATA_DIR='.'
from zutil import analysis
data_dir = DATA_DIR
ref_data_dir = REF_DATA_DIR
analysis.data_init(default_data_dir=DATA_DIR,
default_ref_data_dir=REF_DATA_DIR)
case_dict = {'plate_coarse' : {'label' : '(34x24 cells) SST/MG/PRECON',
'facearea' : 0.142692,
'scale' : 816 },
'plate_medium' : {'label' : '(68x48 cells) SST/MG/PRECON',
'facearea' : 0.075823,
'scale' : 3264 },
'plate_fine' : {'label' : '(136x96 cells) SST/MG/PRECON',
'facearea' : 0.0391421,
'scale' : 13056 },
'plate_finer' : {'label' : '(272x192 cells) SST/MG/PRECON',
'facearea' : 0.0198947,
'scale' : 52224 },
'plate_medium_f100' : {'label' : '(68x48 cells) SST/MG/PRECON/WF',
'facearea' : 0.075823,
'scale' : 3264 }}
```
### Check friction coefficient [0.0024 < c_f < 0.0028] at x = 0.97
```
valid_lower = 0.0024
valid_upper = 0.0028
cf = {}
h = {}
```
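The validity band above can be turned into an explicit pass/fail check once the c_f values have been extracted; a minimal sketch (this helper is our own, not part of zutil):

```python
def check_cf(cf_value, lower=0.0024, upper=0.0028):
    """Return True if the friction coefficient lies in the validity band."""
    return lower <= cf_value <= upper

# Hypothetical values for illustration
print(check_cf(0.0026))  # True
print(check_cf(0.0031))  # False
```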
### Initialise Environment
```
from zutil.plot import *
import math
```
### Get control dictionary
```
from zutil.post import get_case_parameters,print_html_parameters
parameters={}
for case_name in case_dict:
case_param = get_case_parameters(case_name)
parameters[case_name] = case_param
# Analysis constants
import zutil
reference_area = 2.0
bl_position = 0.97
mach = 0.2
kappa = 1.402
R = 287.058
temperature = zutil.to_kelvin(540.0)
pressure = 101325
density = pressure/(R*temperature)
speed_of_sound = math.sqrt(kappa*pressure/density)
u_ref = mach*speed_of_sound
```
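As a quick sanity check on the constants above: since density = p/(R·T), the expression sqrt(kappa·p/rho) reduces to the ideal-gas form sqrt(kappa·R·T), so both routes should give the same speed of sound. A self-contained sketch (with an illustrative temperature, not the notebook's converted value):

```python
import math

kappa = 1.402
R = 287.058
pressure = 101325.0
temperature = 300.0  # illustrative value only

density = pressure / (R * temperature)
a_from_pressure = math.sqrt(kappa * pressure / density)
a_from_temperature = math.sqrt(kappa * R * temperature)

print(abs(a_from_pressure - a_from_temperature) < 1e-9)  # True
```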
### Plotting functions
```
from zutil.post import get_case_root, get_case_report
from zutil.post import get_csv_data,get_fw_csv_data
from zutil.post import for_each
def plot_theory(ax, filename):
df = get_fw_csv_data(filename,widths=[16,16,16,16,16])
ax.plot(df[1], df[0], color='grey', label='$\mathbf{u^{+}=y^{+}}$')
ax.plot(df[2], df[0], color='grey', label='Log law')
def plot_theory_rough(ax, filename, kr_plus):
df = get_fw_csv_data(filename,widths=[16,16,16,16,16])
df[2] -= 5.235 - (8.5 - 1.0/0.41 * math.log(kr_plus))
ax.plot(df[2], df[0], color='grey', label='Log law - kr+='+str(kr_plus))
def plot_comparison(ax,filename):
df = get_fw_csv_data(filename,widths=[16,14],skiprows=2)
ax.plot(df[0], df[1], color='red', label='CFL3D (545x385 points)')
def velocity_plot(data,pts,**kwargs):
ax = kwargs['axis']
chart_label = kwargs['chart_label']
ax.loglog(data.GetPointData()['vprof'], pts.GetPoints()[:,2], label=chart_label)
def eddy_plot(data,pts,**kwargs):
ax = kwargs['axis']
chart_label = kwargs['chart_label']
ax.semilogy(data.GetPointData()['eddy'], pts.GetPoints()[:,2], label=chart_label)
def plot_velocity_profile(ax,file_root,label,face_area):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall_clean = CleantoGrid(Input=wall)
drag = MinMax(Input=wall_clean)
drag.Operation = "SUM"
drag.UpdatePipeline()
drag_client = servermanager.Fetch(drag)
cd = drag_client.GetCellData().GetArray("frictionforce").GetValue(0)
wall_slice = Slice(Input=wall, SliceType="Plane" )
wall_slice.SliceType.Normal = [1.0,0.0,0.0]
wall_slice.SliceType.Origin = [bl_position, 0.0, 0.0]
wall_slice.UpdatePipeline()
wall_slice_client = servermanager.Fetch(wall_slice)
nu = wall_slice_client.GetCellData().GetArray("nu").GetValue(0)
utau = wall_slice_client.GetCellData().GetArray("ut").GetValue(0)
yplus = wall_slice_client.GetCellData().GetArray("yplus").GetValue(0)
cf = wall_slice_client.GetCellData().GetArray("frictionforce").GetValue(0)
wall_vel = wall_slice_client.GetCellData().GetArray("V").GetValue(0)
wall_vel = 0.0
symmetry = PVDReader( FileName=file_root+'_symmetry.pvd' )
symmetry_clean = CleantoGrid(Input=symmetry)
CellDatatoPointData1 = CellDatatoPointData(Input=symmetry_clean)
CellDatatoPointData1.PassCellData = 1
Clip1 = Clip(Input=CellDatatoPointData1, ClipType="Plane" )
Clip1.ClipType.Normal = [0.0,1.0,0.0]
Clip1.ClipType.Origin = [0.0, -0.5, 0.0]
Clip2 = Clip(Input=Clip1, ClipType="Plane" )
Clip2.ClipType.Normal = [0.0,0.0,1.0]
Clip2.ClipType.Origin = [0.0, 0.0, 0.05]
Clip2.Invert = 1
Slice1 = Slice(Input=Clip2, SliceType="Plane" )
Slice1.SliceType.Normal = [1.0,0.0,0.0]
Slice1.SliceType.Origin = [bl_position, 0.0, 0.0]
Calculator2 = Calculator(Input=Slice1)
Calculator2.AttributeMode = 'Point Data'
Calculator2.Function = '(V.iHat - '+ str(wall_vel) +')'
Calculator2.ResultArrayName = 'vprof'
Calculator2.UpdatePipeline()
sorted_line = PlotOnSortedLines(Input=Calculator2)
sorted_line.UpdatePipeline()
extract_client = servermanager.Fetch(sorted_line)
chart_label = (('$\mathbf{y^{+} = %.2f}$')%(yplus) + (' $\mathbf{C_d=%.6f}$'%(cd/reference_area))
+ (' $\mathbf{C_f=%.5f}$'%(cf/face_area)) + ' ' + label)
for_each(extract_client,velocity_plot,axis=ax,chart_label=chart_label)
def plot_eddy_profile(ax,file_root,label,face_area):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall_clean = CleantoGrid(Input=wall)
drag = MinMax(Input=wall_clean)
drag.Operation = "SUM"
drag.UpdatePipeline()
drag_client = servermanager.Fetch(drag)
cd = drag_client.GetCellData().GetArray("frictionforce").GetValue(0)
wall_slice = Slice(Input=wall, SliceType="Plane" )
wall_slice.SliceType.Normal = [1.0,0.0,0.0]
wall_slice.SliceType.Origin = [bl_position, 0.0, 0.0]
wall_slice.UpdatePipeline()
wall_slice_client = servermanager.Fetch(wall_slice)
nu = wall_slice_client.GetCellData().GetArray("nu").GetValue(0)
utau = wall_slice_client.GetCellData().GetArray("ut").GetValue(0)
yplus = wall_slice_client.GetCellData().GetArray("yplus").GetValue(0)
cf = wall_slice_client.GetCellData().GetArray("frictionforce").GetValue(0)
wall_vel = wall_slice_client.GetCellData().GetArray("V").GetValue(0)
wall_vel = 0.0
symmetry = PVDReader( FileName=file_root+'_symmetry.pvd' )
symmetry_clean = CleantoGrid(Input=symmetry)
CellDatatoPointData1 = CellDatatoPointData(Input=symmetry_clean)
CellDatatoPointData1.PassCellData = 1
Clip1 = Clip(Input=CellDatatoPointData1, ClipType="Plane" )
Clip1.ClipType.Normal = [0.0,1.0,0.0]
Clip1.ClipType.Origin = [0.0, -0.5, 0.0]
Clip2 = Clip(Input=Clip1, ClipType="Plane" )
Clip2.ClipType.Normal = [0.0,0.0,1.0]
Clip2.ClipType.Origin = [0.0, 0.0, 0.05]
Clip2.Invert = 1
Slice1 = Slice(Input=Clip2, SliceType="Plane" )
Slice1.SliceType.Normal = [1.0,0.0,0.0]
Slice1.SliceType.Origin = [bl_position, 0.0, 0.0]
Calculator2 = Calculator(Input=Slice1)
Calculator2.AttributeMode = 'Point Data'
Calculator2.Function = '(V.iHat - '+ str(wall_vel) +')'
Calculator2.ResultArrayName = 'vprof'
Calculator2.UpdatePipeline()
sorted_line = PlotOnSortedLines(Input=Calculator2)
sorted_line.UpdatePipeline()
extract_client = servermanager.Fetch(sorted_line)
chart_label = (('$\mathbf{y^{+} = %.2f}$')%(yplus) + (' $\mathbf{C_d=%.6f}$'%(cd/reference_area))
+ (' $\mathbf{C_f=%.5f}$'%(cf/face_area)) + ' ' + label)
for_each(extract_client,eddy_plot,axis=ax,chart_label=chart_label)
def plot_nu(ax,file_root,label,face_area):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall_clean = CleantoGrid(Input=wall)
CellDatatoPointData1 = CellDatatoPointData(Input=wall_clean)
CellDatatoPointData1.PassCellData = 1
wall_slice = Slice(Input=CellDatatoPointData1, SliceType="Plane" )
wall_slice.SliceType.Normal = [0.0,1.0,0.0]
wall_slice.SliceType.Origin = [bl_position, -0.5, 0.0]
wall_slice.UpdatePipeline()
wall_slice_client = servermanager.Fetch(wall_slice)
sorted_line = PlotOnSortedLines(Input=wall_slice)
sorted_line.UpdatePipeline()
extract_client = servermanager.Fetch(sorted_line)
chart_label = label
for_each(extract_client,nu_plot,axis=ax,chart_label=chart_label)
def bl_plot(data,pts,**kwargs):
ax = kwargs['axis']
chart_label = kwargs['chart_label']
ax.plot(data.GetPointData()['yp'][1:], data.GetPointData()['up'][1:], label=chart_label)
def get_case_cf(file_root,label,face_area):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall_clean = CleantoGrid(Input=wall)
wall_slice = Slice(Input=wall_clean, SliceType="Plane" )
wall_slice.SliceType.Normal = [1.0,0.0,0.0]
wall_slice.SliceType.Origin = [bl_position, 0.0, 0.0]
wall_slice.UpdatePipeline()
wall_slice_client = servermanager.Fetch(wall_slice)
cf = wall_slice_client.GetCellData().GetArray("frictionforce").GetValue(0)/face_area
return(cf)
def plot_profile(ax, file_root, label, kr=0.0):
wall = PVDReader( FileName=file_root+'_wall.pvd' )
wall_clean = CleantoGrid(Input=wall)
wall_slice = Slice(Input=wall_clean, SliceType="Plane" )
wall_slice.SliceType.Normal = [1.0,0.0,0.0]
wall_slice.SliceType.Origin = [bl_position, 0.0, 0.0]
wall_slice.UpdatePipeline()
wall_slice_client = servermanager.Fetch(wall_slice)
nu = wall_slice_client.GetCellData().GetArray("nu").GetValue(0)
utau = wall_slice_client.GetCellData().GetArray("ut").GetValue(0)
yplus = wall_slice_client.GetCellData().GetArray("yplus").GetValue(0)
t = wall_slice_client.GetCellData().GetArray("T").GetValue(0)
p = wall_slice_client.GetCellData().GetArray("p").GetValue(0)
rho = wall_slice_client.GetCellData().GetArray("rho").GetValue(0)
wall_vel = wall_slice_client.GetCellData().GetArray("V").GetValue(0)
symmetry = PVDReader( FileName=file_root+'_symmetry.pvd' )
symmetry_clean = CleantoGrid(Input=symmetry)
CellDatatoPointData1 = CellDatatoPointData(Input=symmetry_clean)
CellDatatoPointData1.PassCellData = 1
Clip1 = Clip(Input=CellDatatoPointData1, ClipType="Plane" )
Clip1.ClipType.Normal = [0.0,1.0,0.0]
Clip1.ClipType.Origin = [0.0, -0.5, 0.0]
Clip2 = Clip(Input=Clip1, ClipType="Plane" )
Clip2.ClipType.Normal = [0.0,0.0,1.0]
Clip2.ClipType.Origin = [0.0, 0.0, 0.05]
Clip2.Invert = 1
Slice1 = Slice(Input=Clip2, SliceType="Plane" )
Slice1.SliceType.Normal = [1.0,0.0,0.0]
Slice1.SliceType.Origin = [bl_position, 0.0, 0.0]
Calculator1 = Calculator(Input=Slice1)
Calculator1.AttributeMode = 'Point Data'
Calculator1.Function = 'log10(coords.kHat * '+str(utau)+'/'+str(nu)+')'
Calculator1.ResultArrayName = 'yp'
Calculator2 = Calculator(Input=Calculator1)
Calculator2.AttributeMode = 'Point Data'
Calculator2.Function = '(V.iHat - '+ str(wall_vel) +')/ '+str(utau)
Calculator2.ResultArrayName = 'up'
Calculator2.UpdatePipeline()
sorted_line = PlotOnSortedLines(Input=Calculator2)
sorted_line.UpdatePipeline()
extract_client = servermanager.Fetch(sorted_line)
# Plot theory for rough surface
if kr > 0.0:
plot_theory_rough(ax,os.path.join(analysis.data.ref_data_dir,'data/u+y+theory.csv'),kr*utau/nu);
chart_label = ('$\mathbf{y^{+}}$ = %.2f '%yplus + ('$\mathbf{u_{T}}$ = %.2f ') %utau + label)
for_each(extract_client,bl_plot,axis=ax,chart_label=chart_label)
```
## Plot comparison with Law of the Wall
```
from zutil.post import ProgressBar
from zutil.post import get_status_dict
pbar = ProgressBar()
fig = plt.figure(figsize=(10, 5),dpi=100, facecolor='w', edgecolor='#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('$\mathbf{\log(y^{+})}$', fontsize=ft.axis_font_size, fontweight='bold', color = '#5D5858')
ax.set_ylabel('$\mathbf{u^{+}}$', fontsize=ft.axis_font_size, fontweight='bold', color = '#5D5858')
ax.set_title('Zero Pressure Gradient Flat Plate Law of the wall \n',
fontsize=ft.title_font_size,
fontweight='normal')
ax.axis([0,5,0,30])
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
plot_theory(ax,os.path.join(analysis.data.ref_data_dir,'data/u+y+theory.csv'));
plot_comparison(ax,os.path.join(analysis.data.ref_data_dir,'data/flatplate_u+y+_sst.dat'));
for case in case_dict:
case_name = case
label = case_dict[case]['label']
status=get_status_dict(case_name)
if status:
num_procs = str(status['num processor'])
kr = 0.0
if 'kr' in case_dict[case]:
kr = case_dict[case]['kr']
plot_profile(ax, os.path.join(analysis.data.data_dir,get_case_root(case_name,str(num_procs))),label, kr)
pbar+=5
legend = ax.legend(loc='best', shadow=False, fontsize=ft.legend_font)
legend.get_frame().set_facecolor('white')
pbar.complete()
plt.show()
```
## Plot Velocity Profile
```
pbar = ProgressBar()
fig = plt.figure(figsize=(10, 5), dpi=100, facecolor='w', edgecolor='#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('speed (m/s)', fontsize=ft.axis_font_size, fontweight='normal', color = '#5D5858')
ax.set_ylabel('$\mathbf{z}$ (m)', fontsize=ft.axis_font_size, fontweight='normal', color = '#5D5858')
ax.set_title('Zero Pressure Gradient Flat Plate Velocity profile \n', fontsize=ft.title_font_size,
fontweight='normal')
df = get_fw_csv_data(os.path.join(analysis.data.ref_data_dir,'data/flatplate_u_sst.dat'),widths=[16,14],skiprows=2)
ax.plot(df[0]*u_ref, df[1], color='red', label='CFL3D (545x385 points)')
pbar+=5
for case in case_dict:
case_name = case
try:
label = case_dict[case]['label']
status=get_status_dict(case_name)
if status:
num_procs = str(status['num processor'])
plot_velocity_profile(ax,os.path.join(analysis.data.data_dir,get_case_root(case_name,str(num_procs))),
label,case_dict[case]['facearea'])
except Exception as e:
        print(str(e))
pbar+=5
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
legend = ax.legend(loc='best', shadow=False, fontsize=ft.legend_font)
legend.get_frame().set_facecolor('white')
pbar+=5
pbar.complete()
plt.show()
```
## Eddy Plot
```
pbar = ProgressBar()
fig = plt.figure(figsize=(10, 5), dpi=100, facecolor='w', edgecolor='#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('eddy viscosity ratio', fontsize=ft.axis_font_size, fontweight='normal', color = '#5D5858')
ax.set_ylabel('$\mathbf{z}$ (m)', fontsize=ft.axis_font_size, fontweight='normal', color = '#5D5858')
ax.set_title('Zero Pressure Gradient Flat Plate Eddy profile \n', fontsize=ft.title_font_size,
fontweight='normal')
pbar+=5
for case in case_dict:
case_name = case
try:
label = case_dict[case]['label']
status=get_status_dict(case_name)
if status:
num_procs = str(status['num processor'])
plot_eddy_profile(ax,os.path.join(analysis.data.data_dir,get_case_root(case_name,str(num_procs))),
label,case_dict[case]['facearea'])
except Exception as e:
        print(str(e))
pbar+=5
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
legend = ax.legend(loc='best', shadow=False, fontsize=ft.legend_font)
legend.get_frame().set_facecolor('white')
pbar+=5
pbar.complete()
plt.show()
```
## Mesh Convergence
```
pbar = ProgressBar()
fig = plt.figure(figsize=(10, 5),dpi=100, facecolor='w', edgecolor='#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('$\mathbf{h=(1/N)^{1/2}}$', fontsize=ft.axis_font_size, fontweight='bold', color = '#5D5858')
ax.set_ylabel('$\mathbf{C_f}$ at $\mathbf{x=0.97}$', fontsize=ft.axis_font_size, fontweight='bold', color = '#5D5858')
ax.set_title('Zero Pressure Gradient Flat Plate Mesh Convergence ($\mathbf{N}$ = $\mathbf{n}$ x $\mathbf{m}$) \n',
fontsize=ft.title_font_size,
fontweight='normal')
df = get_fw_csv_data(os.path.join(analysis.data.ref_data_dir,'data/cf_convergence_sst_fun3d.dat'),widths=[8,13,12,21],skiprows=3)
ax.plot(df[2], df[3], color='red', marker='x',label='FUN3D SST')
df = get_fw_csv_data(os.path.join(analysis.data.ref_data_dir,'data/cf_convergence_sst_cfl3d.dat'),widths=[8,13,12,21],skiprows=3)
ax.plot(df[2], df[3], color='blue', marker='x',label='CFL3D SST')
pbar+=5
num = 0
cf_plot = []
h_plot = []
for case in case_dict:
num = num + 1
case_name = case
if case_name.endswith('f100'):
continue
try:
status=get_status_dict(case_name)
if status:
num_procs = str(status['num processor'])
try:
cf_plot.append(get_case_cf(os.path.join(analysis.data.data_dir,get_case_root(case_name,str(num_procs))),
label,case_dict[case]['facearea']))
h_plot.append(1.0/math.sqrt(case_dict[case]['scale']))
except:
pass
except Exception as e:
        print(str(e))
pbar+=5
ax.plot(h_plot, cf_plot, marker='o', color='#E48B25',label='zCFD SST')
legend = ax.legend(loc='best', shadow=False, fontsize=ft.legend_font)
legend.get_frame().set_facecolor('white')
pbar+=5
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(ft.axis_tick_font_size)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
pbar.complete()
plt.show()
```
## Create Eddy Viscosity Images
```
def save_image(file_root, label):
renderView1 = CreateView('RenderView')
renderView1.ViewSize = [1281, 993]
renderView1.InteractionMode = '2D'
renderView1.AxesGrid = 'GridAxes3DActor'
renderView1.OrientationAxesVisibility = 0
renderView1.CenterOfRotation = [1.90, -0.5, 0.0364]
renderView1.StereoType = 0
renderView1.CameraPosition = [0.832, -5.77, 0.388]
renderView1.CameraFocalPoint = [0.832, -0.5, 0.388]
renderView1.CameraViewUp = [0.0, 0.0, 1.0]
renderView1.CameraParallelScale = 0.9
renderView1.Background = [0.0, 0.0, 0.0]
plate_medium_symmetrypvd = PVDReader(FileName=file_root+'_symmetry.pvd')
cellDatatoPointData1 = CellDatatoPointData(Input=plate_medium_symmetrypvd)
eddyLUT = GetColorTransferFunction('eddy')
eddyLUT.RGBPoints = [1e-07, 0.233, 0.300, 0.750,
196.0, 0.865, 0.865, 0.865,
392.0, 0.700, 0.016, 0.150]
eddyLUT.ScalarRangeInitialized = 1.0
cellDatatoPointData1Display = Show(cellDatatoPointData1, renderView1)
cellDatatoPointData1Display.ColorArrayName = ['POINTS', 'eddy']
cellDatatoPointData1Display.LookupTable = eddyLUT
Render()
WriteImage(label+'.png')
from IPython.display import Image, display
for case in case_dict:
case_name = case
label_ = case_dict[case]['label']
label_ = case_name
status=get_status_dict(case_name)
if status:
num_procs = str(status['num processor'])
save_image(os.path.join(analysis.data.data_dir,get_case_root(case_name,str(num_procs))),label_)
display(Image(filename=label_+'.png', width=800, height=500, unconfined=True))
```
| github_jupyter |
*This notebook is part of course materials for CS 345: Machine Learning Foundations and Practice at Colorado State University.
Original versions were created by Asa Ben-Hur.
The content is available [on GitHub](https://github.com/asabenhur/CS345).*
*The text is released under the [CC BY-SA license](https://creativecommons.org/licenses/by-sa/4.0/), and code is released under the [MIT license](https://opensource.org/licenses/MIT).*
<img style="padding: 10px; float:right;" alt="CC-BY-SA icon.svg in public domain" src="https://upload.wikimedia.org/wikipedia/commons/d/d0/CC-BY-SA_icon.svg" width="125">
<a href="https://colab.research.google.com/github//asabenhur/CS345/blob/master/notebooks/module02_04_nearest_neighbors.ipynb">
<img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
```
import numpy as np
from matplotlib import pylab as plt
%matplotlib inline
%autosave 0
```
# Nearest neighbor classification
The nearest neighbor classifier is one of the simplest machine learning methods available.
Here's the simplest version of it:
```
Nearest neighbor classifier
- Find the example in the training data that is closest to
the example that needs to be classified.
- Return its label
```
## Measuring distance
To implement this idea we need to define a concrete notion of closeness.
We'll do that using the **distance** between examples.
First recall that the *norm* of a vector was defined as:
$$
||\mathbf{x}||^2 = \mathbf{x}^\top \mathbf{x}.
$$
Using this notation, we can define the **Euclidean distance** $d_2(\mathbf{x}, \mathbf{x}')$ between vectors $\mathbf{x}$ and $\mathbf{x}'$ as:
$$
d_2(\mathbf{x}, \mathbf{x}')^2 = ||\mathbf{x} - \mathbf{x}'||^2 =
(\mathbf{x} - \mathbf{x}')^\top (\mathbf{x} - \mathbf{x}') =
\sum_{i=1}^d (x_i - x_i')^2.
$$
Here are some Numpy implementations that directly reflect the different ways of expressing this definition:
```
def distance(x1, x2):
return np.linalg.norm(x1-x2)
distance(np.array([2,1]), np.array([1,0]))
def distance2(x1, x2):
return np.sqrt(np.dot(x1-x2, x1-x2))
distance2(np.array([2,1]), np.array([1,0]))
def distance3(x1, x2):
return np.sqrt(np.sum( (x1-x2)**2) )
distance3(np.array([2,1]), np.array([1,0]))
```
Now we are ready to implement the nearest neighbor classifier:
```
class nearest_neighbor:
def __init__(self):
pass
def fit(self, X, y):
self.X = X
self.y = y
def get_nearest(self, x):
distances = [distance(x, self.X[i]) for i in range(len(self.X))]
return np.argmin(distances)
def predict(self, x) :
return self.y[self.get_nearest(x)]
```
Let's apply this classifier to the digit classification dataset bundled with scikit-learn. This dataset originally comes from the [UCI machine learning repository](https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits), and addresses handwritten digit recognition.
```
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target
print(X.shape, y.shape)
```
Each pixel in an image corresponds to a feature, so 8 x 8 images yield a feature matrix which has 64 dimensions. Each element in the feature matrix is an integer between 0 and 16:
```
X.max(),X.min()
```
Let's visualize the first 10 examples in the dataset as images:
```
num_plots = 10
fig, axes = plt.subplots(2, 5);
for i in range(num_plots) :
ax = axes[i // 5][i % 5]
ax.set_axis_off()
ax.imshow(np.resize(X[i], (8,8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.set_title('label: %i' % y[i])
```
A scatter plot of the data will show us that individual features do carry some information about the labels:
```
plt.scatter(X[:, 10], X[:, 20], c=y);
```
Later in the course we will see ways of visualizing high dimensional data in two or three dimensions, which will help us get a better picture of what's going on overall.
Before using the nearest neighbor classifier we need to split the data into training and test sets:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3, shuffle=True, random_state=42)
print(X_train.shape, X_test.shape)
```
Now we are ready to classify our data:
```
nn = nearest_neighbor()
nn.fit(X_train, y_train)
y_pred = np.array([nn.predict(X_test[i]) for i in range(len(X_test))])
```
How accurate is our classifier?
```
np.sum(y_pred == y_test)/len(y_test)
```
Next, let's plot a few test examples alongside their nearest training examples:
```
num_plots = 5
fig, axes = plt.subplots(5, 2);
for i in range(num_plots) :
ax_test = axes[i][0]
ax_nearest = axes[i][1]
ax_test.set_axis_off()
ax_nearest.set_axis_off()
ax_test.imshow(np.resize(X_test[i], (8,8)),
cmap=plt.cm.gray_r, interpolation='nearest')
nearest = X_train[nn.get_nearest(X_test[i])]
ax_nearest.imshow(np.resize(nearest, (8,8)),
cmap=plt.cm.gray_r, interpolation='nearest')
if (i==0):
ax_test.set_title('test example')
ax_nearest.set_title('nearest example')
```
### The decision boundary
To obtain a better understanding of the nearest neighbor classifier, let us consider the shape of its decision boundary. While the decision boundary of a classifier such as the perceptron is linear, the nearest neighbor classifier is more flexible, as we will see next.
First, here's a function for plotting the decision boundary of a classifier:
```
from matplotlib.colors import ListedColormap
def plot_boundary(classifier, X, y, axes = None) :
"""
code based on:
https://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
"""
classifier.fit(X, y)
# color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# create a two dimensional grid of points
h = .02 # grid size
x_min, x_max = X[:, 0].min() - 0.2, X[:, 0].max() + 0.2
y_min, y_max = X[:, 1].min() - 0.2, X[:, 1].max() + 0.2
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# plot the predictions on the grid
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# plot the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, alpha=0.5)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
from sklearn.datasets import make_classification
X,y = make_classification(n_samples=100, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=1, class_sep=0.35, random_state=1)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(1)
plot_boundary(classifier, X, y)
```
To further highlight the way the nearest neighbor classifier works, here's random data in two dimensions:
```
from numpy.random import default_rng
rng = default_rng(1)
X_random = rng.random(size=(100, 2))
y_random = rng.integers(0, 2, len(X_random))
print(X_random.shape, y_random.shape)
classifier = KNeighborsClassifier(1)
plot_boundary(classifier, X_random, y_random)
```
### Voronoi diagrams
<img style="padding: 10px; float:left;" alt="20 points and their Voronoi cells by Balu Ertl CC BY-SA 4.0" src="https://upload.wikimedia.org/wikipedia/commons/5/54/Euclidean_Voronoi_diagram.svg" width="250">
It turns out that the decision boundary of a nearest neighbor classifier is related to the concept of a *Voronoi diagram*.
Given a collection of points $\{\mathbf{x}_1,\ldots,\mathbf{x}_N\}$,
the Voronoi cell associated with point $\mathbf{x}_i$ is the set of points that are closer to $\mathbf{x}_i$ than every other point in the collection.
For nearest neighbor classification we obtain the decision boundary by merging adjacent Voronoi cells that have the same label associated with them.
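As an illustration (a sketch added here, not part of the original notebook; it assumes SciPy and scikit-learn are available), we can draw the Voronoi cells of a small training set directly with SciPy and color each training point by its label. Merging adjacent same-label cells would give the 1-nearest-neighbor decision regions:

```python
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d
from sklearn.datasets import make_classification

# a small two-dimensional training set
X_v, y_v = make_classification(n_samples=30, n_features=2, n_informative=2,
                               n_redundant=0, n_clusters_per_class=1,
                               class_sep=0.5, random_state=1)

# compute and draw the Voronoi cells of the training points
vor = Voronoi(X_v)
voronoi_plot_2d(vor, show_vertices=False, show_points=False)
plt.scatter(X_v[:, 0], X_v[:, 1], c=y_v, cmap='coolwarm', s=40)
plt.title('Voronoi cells of the training points');
```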
### Question:
What accuracy do you expect for a nearest neighbor classifier that is tested on the training set?
### Exercise: Accuracy with increasing levels of noise
The nearest neighbor classifier is not robust to noisy features. To demonstrate this, use the dataset below and compute the accuracy of the classifier as you add an increasing number of noise features.
Noise features can be added using the numpy [normal](https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.normal.html#numpy.random.Generator.normal) method of a random number generator, which samples random numbers from a normal distribution.
For example:
```
from numpy.random import default_rng
rng = default_rng(1)
# parameters of rng.normal:
# mean, standard deviation, and size of the output array
rng.normal(0, 0.5, size=(2,3))
```
To add the noise features to the feature matrix you can use the Numpy [hstack](https://numpy.org/doc/stable/reference/generated/numpy.hstack.html) method. For example to add two noise features:
```
num_noise = 2
X_train_noise = np.hstack((X_train,
rng.normal(0, 0.5, size=(len(X_train),num_noise))))
X_test_noise = np.hstack((X_test,
rng.normal(0, 0.5, size=(len(X_test),num_noise))))
```
For this exercise, use the following dataset in two dimensions for which the nearest neighbor classifier performs well without noise:
```
from sklearn.datasets import make_classification
X,y = make_classification(n_samples=100, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, n_clusters_per_class=1, class_sep=0.35, random_state=1)
plt.scatter(X[:,0], X[:,1], c=y, alpha=0.5, s=50);
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3, shuffle=True, random_state=1)
nn = nearest_neighbor()
nn.fit(X_train, y_train)
y_pred = np.array([nn.predict(X_test[i]) for i in range(len(X_test))])
np.sum(y_pred == y_test)/len(y_test)
```
In your code, add noise features to the dataset, as described above, where the number of noise features increases from 2 to 32, using the values `[2, 4, 8, 16, 32]`. Plot the accuracy of the classifier as a function of the number of noise features.
```
# your code goes here
```
### Exercise: a faster Numpy implementation
Whereas our rudimentary implementation of the nearest neighbor classifier takes a single vector as input, the nearest neighbor implementation in scikit-learn takes a matrix of test examples, which removes the need for the computationally slow Python `for` loop we had to use to compute the output for an entire test set. In this exercise, extend our nearest neighbor classifier and improve its efficiency so that no `for` loops are required to compute the output for a large test set.
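One useful building block for this exercise (a hint rather than a full solution, using hypothetical arrays `A` and `B`) is that NumPy broadcasting can compute all pairwise squared distances at once, replacing the Python loop:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))   # e.g. a batch of test examples
B = rng.normal(size=(8, 3))   # e.g. the training examples

# (5, 1, 3) - (1, 8, 3) broadcasts to (5, 8, 3); summing the squared
# differences over the last axis yields a (5, 8) matrix of squared distances
D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)

# index of the nearest row of B for each row of A
nearest = D2.argmin(axis=1)
print(D2.shape, nearest.shape)  # (5, 8) (5,)
```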
```
class nearest_neighbor:
def __init__(self):
pass
def fit(self, X, y):
self.X = X
self.y = y
def predict(self, X_test) :
"""
make nearest neighbor predictions for a two dimensional array
X_test, representing a test set.
The number of columns of X_test needs to be the same as the
number of columns of the training data.
return: an array of predictions for X_test
"""
return 0
```
| github_jupyter |
# SPAM CLASSIFIER
Before you start, download the spam.csv dataset from: https://www.kaggle.com/uciml/sms-spam-collection-dataset
```
# default_exp train
```
## Input parameters for mlflow project
```
#export
import argparse
parser= argparse.ArgumentParser()
parser.add_argument('--max_features', type=int)
args = parser.parse_args()
input_params = args.__dict__
#hide
input_params = {'max_features':3000}
#export
import pandas as pd
from sklearn.linear_model import SGDClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import classification_report, accuracy_score
from sklearn.multiclass import OneVsRestClassifier
```
## Prepare data
```
#export
# download spam.csv dataset from: https://www.kaggle.com/uciml/sms-spam-collection-dataset
df = pd.read_csv('spam.csv', encoding='latin-1')
df.set_index('v2')
y = df.pop('v1').to_numpy()
X = df.pop('v2').to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3)
```
## Train and load to mlflow
```
#export
import warnings
import sys
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from urllib.parse import urlparse
import mlflow
import mlflow.sklearn
import mlflow.pyfunc
#conda_env=mlflow.pyfunc.get_default_conda_env()
with mlflow.start_run():
svc_tfidf = Pipeline([
("tfidf_vectorizer", TfidfVectorizer(stop_words="english", max_features=input_params['max_features'])),
("linear svc", OneVsRestClassifier(SVC(kernel='linear')))
])
model = svc_tfidf
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
ac_score = accuracy_score(y_test, y_pred)
print(classification_report(y_test, y_pred))
mlflow.log_param("max_features", input_params['max_features'])
mlflow.log_metric("accuracy_score", ac_score)
tracking_url_type_store = urlparse(mlflow.get_tracking_uri()).scheme
if tracking_url_type_store != "file":
mlflow.sklearn.log_model(model, "model", registered_model_name="SMSSpamModel")
else:
mlflow.sklearn.log_model(model, "model")
```
## Export train code
The above code will be exported to a Python file using the nbdev library (the `export`, `hide`, and `default_exp` keywords are required).
```
#hide
from nbdev.export import *
notebook2script()
```
## Train from command using mlflow
```
%env MLFLOW_TRACKING_URI=http://mlflow:5000
!mlflow run . --no-conda --experiment-name="spamclassifier" -P max_features=3000
```
## Load from mlflow repository and test
```
import mlflow.sklearn
#sk_model = mlflow.sklearn.load_model("runs:/96771d893a5e46159d9f3b49bf9013e2/sk_models")
#sk_model = mlflow.sklearn.load_model("/mlflow/mlruns/2/64a89b0a6b7346498316bfae4c298535/artifacts/model")
sk_model = mlflow.sklearn.load_model("models:/SMSSpamModel/2")
#sk_model = mlflow.sklearn.load_model("models:/SMSSpamModel/Staging")
res=sk_model.predict([X_test[17]])
res[0]
X_test[0:50]
```
| github_jupyter |
---
**Universidad de Costa Rica** | Escuela de Ingeniería Eléctrica
*IE0405 - Modelos Probabilísticos de Señales y Sistemas*
### `PyX` - A series of Python tutorials for data analysis
# `Py7` - *Statistical plotting*
> Visualizing results is fundamental in data analysis. Python has complementary libraries such as **Matplotlib** and **Seaborn**, among others, which offer advanced tools for many chart types commonly used in academic and professional contexts.
*Fabián Abarca Calderón*
---
## Chart types
To begin, we review the chart types most commonly used in statistics and data analysis.
[Seaborn](https://seaborn.pydata.org/tutorial/function_overview.html) classifies its plots as follows:
<img src="https://seaborn.pydata.org/_images/function_overview_8_0.png" width="350">
### Relational plots
Mapping of the relationship between two variables on a Cartesian plane.
- **Scatter plot**: a diagram that shows the relationship between two variables as ordered pairs on a Cartesian plane.
<img src="https://seaborn.pydata.org/_images/scatterplot_5_0.png" width="300">
- **Line plot** (*line chart*): a diagram that shows the relationship between two variables as ordered pairs on a Cartesian plane, connected by a line to denote a sequence.
<img src="https://seaborn.pydata.org/_images/lineplot_9_0.png" width="300">
### Distribution plots
Representation of the distribution of a set of values along an axis.
- **Histogram**: a diagram that groups numeric values into small ranges (*bins*) and shows how many values fall within each interval, producing an approximate probability density.
<img src="https://seaborn.pydata.org/_images/histplot_1_0.png" width="300">
- **Kernel density estimate plot** (*KDE plot*): like the histogram, it is useful for estimating the distribution of observed values, but it does so through an estimate $\hat{f}_h(x)$ of the probability density function $f_X(x)$, obtained with a "[kernel](https://en.wikipedia.org/wiki/Kernel_(statistics))".
<img src="https://seaborn.pydata.org/_images/kdeplot_5_0.png" width="300">
- **Empirical cumulative distribution function** (*ECDF plot*): an approximation of the cumulative distribution function (CDF) that represents the proportion or count of observations at or below each value in a dataset.
<img src="https://seaborn.pydata.org/_images/ecdfplot_1_0.png" width="300">
- **Rug plot**: a visualization of the marginal distribution of the data along a real axis, like a one-dimensional scatter plot.
<img src="https://seaborn.pydata.org/_images/rugplot_1_0.png" width="300">
### Categorical plots
Representation of the distribution of a set of values along an axis for different categorical variables.
- **Strip plot**: a scatter plot in which one variable is categorical.
<img src="https://seaborn.pydata.org/_images/seaborn-stripplot-2.png" width="300">
- **Swarm plot**: a scatter plot in which one variable is categorical, similar to a strip plot, except that the points do not overlap.
<img src="https://seaborn.pydata.org/_images/seaborn-swarmplot-2.png" width="300">
- **Box plot** (*box-and-whisker plot*): a visualization of groups of categorical data through their *quartiles*: the minimum and maximum values are the "whiskers", the median is the line inside the box, and the first and third quartiles are the edges of the box. Following certain criteria, outliers are excluded and shown here as individual points.
<img src="https://seaborn.pydata.org/_images/seaborn-boxplot-2.png" width="300">
- **Violin plot**: plays a role similar to the box plot in showing the quartile distribution of numeric data within a category, but combines it with a KDE plot to also represent the distribution of occurrences.
<img src="https://seaborn.pydata.org/_images/seaborn-violinplot-2.png" width="300">
- **Point plot**: the point represents the mean (or another estimator) of the data for a categorical variable, and the line is a confidence interval for the estimate. It conveys less information than the previous plots, but can be useful for showing how a measure of central tendency (the mean, median, or mode) changes from one categorical variable to another.
<img src="https://seaborn.pydata.org/_images/seaborn-pointplot-5.png" width="300">
- **Bar plot**: shows information similar to the point plot, namely the mean (or another measure of central tendency) and the confidence interval of the estimate, but draws the magnitude of the mean as a bar.
<img src="https://seaborn.pydata.org/_images/seaborn-barplot-1.png" width="300">
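As with the distribution plots, each categorical plot is a single function call; a sketch with synthetic data (not part of the original tutorial) comparing a numeric variable across three groups:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# synthetic data: three groups with different means
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'group': np.repeat(['A', 'B', 'C'], 100),
    'value': np.concatenate([rng.normal(m, 1, 100) for m in (0, 1, 2)]),
})

fig, axes = plt.subplots(1, 2, figsize=(10, 3))
sns.boxplot(data=df, x='group', y='value', ax=axes[0])     # quartile summary
sns.violinplot(data=df, x='group', y='value', ax=axes[1])  # quartiles + KDE
plt.tight_layout()
```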
---
## Seaborn
According to its [official page](https://seaborn.pydata.org/),
> Seaborn is a Python data visualization library based on Matplotlib and compatible with Pandas data structures. It provides a high-level interface for drawing attractive and informative statistical graphics.
<img src="https://seaborn.pydata.org/_static/logo-wide-lightbg.svg" width="350">
**Note**: The basics of Matplotlib were already covered in `Py2` and the subsequent tutorials.
With the chart types shown in the previous section, Seaborn can produce a [wide variety](https://seaborn.pydata.org/examples/index.html) of plots. Here we show a few basic examples.
By convention, Seaborn is imported as
```python
import seaborn as sns
```
Below are some examples using datasets bundled with the library.
```
# Import seaborn
import seaborn as sns
# Apply the default theme
sns.set_theme()
# Load an example dataset
tips = sns.load_dataset("tips")
# Create a visualization
sns.relplot(
data=tips,
x="total_bill", y="tip", col="time",
hue="smoker", style="smoker", size="size",
)
```
---
### More information
* [Web page](https://www.google.com/)
* A book or other reference
* [w3schools](https://www.w3schools.com/python/) tutorial
---
**Universidad de Costa Rica** | Facultad de Ingeniería | Escuela de Ingeniería Eléctrica
© 2021
---
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.plotly as py
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
import plotly.graph_objs as go
import os
# print(os.listdir("../Software_Defect"))
data = pd.read_csv('cm1.csv')
defect_true_false = data.groupby('defects')['b'].apply(lambda x: x.count())
print('False: ',defect_true_false[0])
print('True: ',defect_true_false[1])
trace = go.Histogram(
x = data.defects,
opacity = 0.75,
name = "Defects",
marker = dict(color = 'green'))
hist_data = [trace]
hist_layout = go.Layout(barmode='overlay',
title = 'Defects',
xaxis = dict(title = 'True - False'),
yaxis = dict(title = 'Frequency'),
)
fig = go.Figure(data = hist_data, layout = hist_layout)
iplot(fig)
data.corr()
f,ax = plt.subplots(figsize = (15, 15))
sns.heatmap(data.corr(), annot = True, linewidths = .5, fmt = '.2f')
plt.show()
trace = go.Scatter(
x = data.v,
y = data.b,
mode = "markers",
name = "Volume - Bug",
marker = dict(color = 'darkblue'),
text = "Bug (b)")
scatter_data = [trace]
scatter_layout = dict(title = 'Volume - Bug',
xaxis = dict(title = 'Volume', ticklen = 5),
yaxis = dict(title = 'Bug' , ticklen = 5),
)
fig = dict(data = scatter_data, layout = scatter_layout)
iplot(fig)
data.isnull().sum()
trace1 = go.Box(
x = data.uniq_Op,
name = 'Unique Operators',
marker = dict(color = 'blue')
)
box_data = [trace1]
iplot(box_data)
def evaluation_control(data):
evaluation = (data.n < 300) & (data.v < 1000 ) & (data.d < 50) & (data.e < 500000) & (data.t < 5000)
data['complexityEvaluation'] = pd.DataFrame(evaluation)
    data['complexityEvaluation'] = ['Successful' if evaluation == True else 'Redesign' for evaluation in data.complexityEvaluation]
evaluation_control(data)
data
data.info()
data.groupby("complexityEvaluation").size()
# Histogram
trace = go.Histogram(
x = data.complexityEvaluation,
opacity = 0.75,
name = 'Complexity Evaluation',
marker = dict(color = 'darkorange')
)
hist_data = [trace]
hist_layout = go.Layout(barmode='overlay',
title = 'Complexity Evaluation',
                   xaxis = dict(title = 'Successful - Redesign'),
yaxis = dict(title = 'Frequency')
)
fig = go.Figure(data = hist_data, layout = hist_layout)
iplot(fig)
from sklearn import preprocessing
scale_v = data[['v']]
scale_b = data[['b']]
minmax_scaler = preprocessing.MinMaxScaler()
v_scaled = minmax_scaler.fit_transform(scale_v)
b_scaled = minmax_scaler.fit_transform(scale_b)
data['v_ScaledUp'] = pd.DataFrame(v_scaled)
data['b_ScaledUp'] = pd.DataFrame(b_scaled)
data
scaled_data = pd.concat([data.v , data.b , data.v_ScaledUp , data.b_ScaledUp], axis=1)
scaled_data
data.info()
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn import model_selection
X = data.iloc[:, :-10].values  # select the feature attribute values
Y = data.complexityEvaluation.values  # select the classification target values
Y
# Split the data into training and validation datasets
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size = validation_size, random_state = seed)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
#Summary of the predictions made by the classifier
print("Logistic Regression")
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
#Accuracy score
from sklearn.metrics import accuracy_score
print("ACC: ",accuracy_score(y_pred,y_test))
```
| github_jupyter |
# ML with TensorFlow Extended (TFX) -- Part 1
The purpose of this tutorial is to show how to do end-to-end ML with TFX libraries on Google Cloud Platform. This tutorial covers:
1. Data analysis and schema generation with **TF Data Validation**.
2. Data preprocessing with **TF Transform**.
3. Model training with **TF Estimator**.
4. Model evaluation with **TF Model Analysis**.
This notebook has been tested in Jupyter on the Deep Learning VM.
## 0. Setup Python and Cloud environment
Install the libraries we need and set up variables to reference our project and bucket.
```
%pip install -q --upgrade grpcio_tools tensorflow_data_validation
import apache_beam as beam
import platform
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import tornado
print('tornado version: {}'.format(tornado.version))
print('Python version: {}'.format(platform.python_version()))
print('TF version: {}'.format(tf.__version__))
print('TFT version: {}'.format(tft.__version__))
print('TFDV version: {}'.format(tfdv.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
PROJECT = 'cloud-training-demos' # Replace with your PROJECT
BUCKET = 'cloud-training-demos-ml' # Replace with your BUCKET
REGION = 'us-central1' # Choose an available region for Cloud MLE
import os
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
```
<img valign="middle" src="images/tfx.jpeg">
### Flights dataset
We'll use the flights dataset from the book [Data Science on Google Cloud Platform](http://shop.oreilly.com/product/0636920057628.do)
```
DATA_BUCKET = "gs://cloud-training-demos/flights/chapter8/output/"
TRAIN_DATA_PATTERN = DATA_BUCKET + "train*"
EVAL_DATA_PATTERN = DATA_BUCKET + "test*"
!gsutil ls -l $TRAIN_DATA_PATTERN
!gsutil ls -l $EVAL_DATA_PATTERN
!gsutil cat $DATA_BUCKET'trainFlights-00000-of-00007.csv' | head -1
```
## 1. Data Analysis
For data analysis, visualization, and schema generation, we use [TensorFlow Data Validation](https://www.tensorflow.org/tfx/guide/tfdv) to perform the following:
1. **Analyze** the training data and produce **statistics**.
2. Generate data **schema** from the produced statistics.
3. **Configure** the schema.
4. **Validate** the evaluation data against the schema.
5. **Save** the schema for later use.
```
import tensorflow_data_validation as tfdv
print('TFDV version: {}'.format(tfdv.__version__))
```
### 1.1 Compute and visualise statistics
```
CSV_COLUMNS = ('ontime,dep_delay,taxiout,distance,avg_dep_delay,avg_arr_delay' +
',carrier,dep_lat,dep_lon,arr_lat,arr_lon,origin,dest').split(',')
TARGET_FEATURE_NAME = 'ontime'
DEFAULTS = [[0.0],[0.0],[0.0],[0.0],[0.0],[0.0],\
['na'],[0.0],[0.0],[0.0],[0.0],['na'],['na']]
# This is a convenience function for CSV. We can write a Beam pipeline for other formats.
# https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_csv
train_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA_PATTERN,
column_names=CSV_COLUMNS,
stats_options=tfdv.StatsOptions(sample_rate=0.1)
)
tfdv.visualize_statistics(train_stats)
```
### 1.2 Infer Schema
```
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
```
### 1.3 Configure Schema
Specify some tolerance for values.
```
# Relax the minimum fraction of values that must come from the domain for feature occupation.
carrier = tfdv.get_feature(schema, 'carrier')
carrier.distribution_constraints.min_domain_mass = 0.9
# All features are by default in both TRAINING and SERVING environments.
#schema.default_environment.append('TRAINING')
#schema.default_environment.append('EVALUATION')
#schema.default_environment.append('SERVING')
# Specify that weight and class feature is not in SERVING environment.
#tfdv.get_feature(schema, TARGET_FEATURE_NAME).not_in_environment.append('SERVING')
```
### 1.4 Validate evaluation data
```
eval_stats = tfdv.generate_statistics_from_csv(EVAL_DATA_PATTERN, column_names=CSV_COLUMNS)
eval_anomalies = tfdv.validate_statistics(eval_stats, schema) #, environment='EVALUATION')
tfdv.display_anomalies(eval_anomalies)
```
### 1.5 Freeze the schema
```
RAW_SCHEMA_LOCATION = 'raw_schema.pbtxt'
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format
tfdv.write_schema_text(schema, RAW_SCHEMA_LOCATION)
!cat {RAW_SCHEMA_LOCATION}
```
## License
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
**Disclaimer**: This is not an official Google product. The sample code is provided for educational purposes only.
---
| github_jupyter |
Value investing means investing in the 50 stocks that are cheapest relative to a
common measure of business value (such as earnings or returns).
```
import pandas as pd
import numpy as np
import xlsxwriter
import requests
from scipy import stats
stocks = pd.read_csv('sp_500_stocks.csv')
from secrets import IEX_CLOUD_API_TOKEN
symbol = 'AAPL'
api_url = f'https://sandbox.iexapis.com/stable/stock/{symbol}/quote?token={IEX_CLOUD_API_TOKEN}'
data = requests.get(api_url).json()
print(data)
data['peRatio']
```
## Making the Batch API call
```
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
symbol_lists = list(chunks(stocks['Ticker'], 100))
symbol_strings = []
for i in range(0, len(symbol_lists)):
symbol_strings.append(','.join(symbol_lists[i]))
symbol_strings
column_names = ['Ticker', 'Price', 'Price-to-Earning Ratio', 'Number of Shares to Buy']
df = pd.DataFrame(columns=column_names)
df
for symbol_string in symbol_strings:
batch_api = f'https://sandbox.iexapis.com/stable/stock/market/batch?symbols={symbol_string}&types=quote&token={IEX_CLOUD_API_TOKEN}'
data2 = requests.get(batch_api).json()
for symbol in symbol_string.split(','):
df = df.append(
pd.Series(
[
symbol,
data2[symbol]['quote']['latestPrice'],
data2[symbol]['quote']['peRatio'],
'N/A'
],
index=column_names),
ignore_index=True)
df
```
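The `chunks` generator above splits the ticker list into batches of 100 so each batch API call stays within the endpoint's symbol limit. A minimal sketch of the same idea on a toy ticker list (symbols here are just illustrative):

```python
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

tickers = ['AAPL', 'MSFT', 'GOOG', 'AMZN', 'TSLA']
# Join each chunk into the comma-separated string the batch endpoint expects.
batches = [','.join(batch) for batch in chunks(tickers, 2)]
print(batches)  # → ['AAPL,MSFT', 'GOOG,AMZN', 'TSLA']
```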
## Removing poor-performing stocks
```
df.sort_values('Price-to-Earning Ratio', ascending=True, inplace=True)
df = df[df['Price-to-Earning Ratio'] > 0]
df = df[:50]
df.reset_index(inplace=True, drop=True)
df.head()
def portfolio_size():
    global capitals
    while True:
        capitals = input('Please enter how much money you are going to invest: ')
        try:
            capitals = float(capitals)
            break
        except ValueError:
            print("That's not a number.\nPlease try again.")
portfolio_size()
print(capitals)
weight = capitals / len(df.index)
weight
df['Number of Shares to Buy'] = np.floor(weight / df['Price'])
df.head()
```
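The share count above is just the equal per-stock dollar allocation divided by price, rounded down to whole shares. A worked sketch with made-up prices (the capital and price figures are hypothetical):

```python
import numpy as np

capital = 10_000.0                 # hypothetical portfolio size
prices = np.array([50.0, 120.0, 333.0])
weight = capital / len(prices)     # equal dollar allocation per stock
shares = np.floor(weight / prices) # whole shares only, never round up
print(shares)  # → [66. 27. 10.]
```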
## A more realistic model
Rank each stock on multiple valuation metrics:
- P/E ratio
- Price-to-book ratio
- Price-to-sales ratio
- EV/EBITDA
- EV/Gross Profit
```
column_names2 = [
'Ticker',
'Price',
'Price-to-Earning Ratio',
'PE Percentile',
'Price-to-Book Ratio',
'PB Percentile',
'Price-to-Sale Ratio',
'PS Percentile' ,
'EV/EBITDA',
'EV/EBITDA Percentile',
'EV/GP',
'EV/GP Percentile',
'Number of Shares to Buy',
]
df2 = pd.DataFrame(columns=column_names2)
df2
# Safely divide a by b, returning None when either operand is missing (NoneType)
def detactNone(a, b):
try:
c = a / b
except TypeError:
c = None
return c
for symbol_string in symbol_strings:
batch_api2 = f'https://sandbox.iexapis.com/stable/stock/market/batch?symbols={symbol_string}&types=quote,advanced-stats&token={IEX_CLOUD_API_TOKEN}'
data3 = requests.get(batch_api2).json()
for symbol in symbol_string.split(','):
price = data3[symbol]['quote']['latestPrice']
peRatio = data3[symbol]['advanced-stats']['peRatio']
priceToBook = data3[symbol]['advanced-stats']['priceToBook']
priceToSales = data3[symbol]['advanced-stats']['priceToSales']
enterpriseValue = data3[symbol]['advanced-stats']['enterpriseValue']
ebitda = data3[symbol]['advanced-stats']['EBITDA']
gross_profit = data3[symbol]['advanced-stats']['grossProfit']
evToebitda = detactNone(enterpriseValue, ebitda)
evTogp = detactNone(enterpriseValue, gross_profit)
df2 = df2.append(
pd.Series(
[
symbol,
price,
peRatio,
np.nan,
priceToBook,
np.nan,
priceToSales,
np.nan,
evToebitda,
np.nan,
evTogp,
np.nan,
np.nan,
],
index=column_names2),
ignore_index=True)
df2
lookup = [
'Price-to-Earning Ratio',
'Price-to-Book Ratio',
'Price-to-Sale Ratio',
'EV/EBITDA',
'EV/GP',
]
for i in lookup:
    na_df = df2[df2[i].isnull()]
    print(i, len(na_df.index))
```
## Dropping rows with fewer than 7 non-null values
```
df2.dropna(thresh=7, inplace=True)
df2.reset_index(drop=True, inplace=True)
df2
```
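Note that `thresh=7` keeps only rows with at least 7 non-null values; a row missing more of its columns than that is dropped. A small illustration of the same semantics with a 3-column frame and `thresh=2`:

```python
import numpy as np
import pandas as pd

frame = pd.DataFrame({
    'a': [1.0, np.nan, 3.0],
    'b': [np.nan, np.nan, 6.0],
    'c': [7.0, np.nan, 9.0],
})
# Keep rows with at least 2 non-null values; the middle row (0 non-nulls) is dropped.
kept = frame.dropna(thresh=2)
print(len(kept))  # → 2
```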
## Calculating the percentiles
```
metrics = {
'Price-to-Earning Ratio': 'PE Percentile',
'Price-to-Book Ratio': 'PB Percentile',
'Price-to-Sale Ratio': 'PS Percentile' ,
'EV/EBITDA': 'EV/EBITDA Percentile',
'EV/GP': 'EV/GP Percentile',
}
metrics.keys()
df2['RV Score'] = np.nan
for row in df2.index:
for metric in metrics:
df2.loc[row, metrics[metric]] = stats.percentileofscore(df2[metric], df2.loc[row, metric])
    df2.loc[row, 'RV Score'] = np.mean(df2[list(metrics.values())].iloc[row])
df2
df2.sort_values('RV Score', ascending=True, inplace=True)
df2 = df2[:50]
df2.reset_index(drop=True, inplace=True)
df2
portfolio_size()
print(capitals)
equal_weight = float(capitals / len(df2.index))
df2['Number of Shares to Buy'] = np.floor(equal_weight / df2['Price'])
df2
df2.to_excel('Value_investing.xlsx')
```
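The per-row `stats.percentileofscore` loop above can also be computed vectorized with pandas: for a column of unique values, `rank(pct=True) * 100` gives the same percentile ranks (tie handling differs slightly between the two). A sketch with hypothetical ratio values:

```python
import pandas as pd

ratios = pd.Series([8.0, 15.0, 22.0, 40.0], name='Price-to-Earning Ratio')
# Percentile rank of each value within its own column, as a percentage.
percentiles = ratios.rank(pct=True) * 100
print(percentiles.tolist())  # → [25.0, 50.0, 75.0, 100.0]
```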
| github_jupyter |
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex AI: Vertex AI Migration: AutoML Image Classification
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
### Dataset
The dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.
5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
6. Open this notebook in the Jupyter Notebook Dashboard.
## Installation
Install the latest version of Vertex SDK for Python.
```
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
```
### Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
This tutorial does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import google.cloud.aiplatform as aip
```
## Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
```
IMPORT_FILE = (
"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
```
#### Quick peek at your data
This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
## Create a dataset
### [datasets.create-dataset-api](https://cloud.google.com/vertex-ai/docs/datasets/create-dataset-api)
### Create the Dataset
Next, create the `Dataset` resource using the `create` method for the `ImageDataset` class, which takes the following parameters:
- `display_name`: The human readable name for the `Dataset` resource.
- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.
- `import_schema_uri`: The data labeling schema for the data items.
This operation may take several minutes.
```
dataset = aip.ImageDataset.create(
display_name="Flowers" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,
)
print(dataset.resource_name)
```
*Example Output:*
INFO:google.cloud.aiplatform.datasets.dataset:Creating ImageDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create ImageDataset backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/1941426647739662336
INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592
INFO:google.cloud.aiplatform.datasets.dataset:To use this ImageDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.ImageDataset('projects/759209241365/locations/us-central1/datasets/2940964905882222592')
INFO:google.cloud.aiplatform.datasets.dataset:Importing ImageDataset data: projects/759209241365/locations/us-central1/datasets/2940964905882222592
INFO:google.cloud.aiplatform.datasets.dataset:Import ImageDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/8100099138168815616
INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592
projects/759209241365/locations/us-central1/datasets/2940964905882222592
## Train a model
### [training.automl-api](https://cloud.google.com/vertex-ai/docs/training/automl-api)
### Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
#### Create training pipeline
An AutoML training pipeline is created with the `AutoMLImageTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the `TrainingJob` resource.
- `prediction_type`: The type task to train the model for.
- `classification`: An image classification model.
- `object_detection`: An image object detection model.
- `multi_label`: If a classification task, whether single (`False`) or multi-labeled (`True`).
- `model_type`: The type of model for deployment.
- `CLOUD`: Deployment on Google Cloud
- `CLOUD_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on Google Cloud.
  - `CLOUD_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on Google Cloud.
- `MOBILE_TF_VERSATILE_1`: Deployment on an edge device.
  - `MOBILE_TF_HIGH_ACCURACY_1`: Optimized for accuracy over latency for deployment on an edge device.
- `MOBILE_TF_LOW_LATENCY_1`: Optimized for latency over accuracy for deployment on an edge device.
- `base_model`: (optional) Transfer learning from existing `Model` resource -- supported for image classification only.
The instantiated object is the DAG (directed acyclic graph) for the training job.
```
dag = aip.AutoMLImageTrainingJob(
display_name="flowers_" + TIMESTAMP,
prediction_type="classification",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(dag)
```
*Example output:*
<google.cloud.aiplatform.training_jobs.AutoMLImageTrainingJob object at 0x7f806a6116d0>
#### Run the training pipeline
Next, you run the DAG to start the training job by invoking the method `run`, with the following parameters:
- `dataset`: The `Dataset` resource to train the model.
- `model_display_name`: The human readable name for the trained model.
- `training_fraction_split`: The percentage of the dataset to use for training.
- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).
- `validation_fraction_split`: The percentage of the dataset to use for validation.
- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).
- `disable_early_stopping`: If `True`, the entire budget is used; if `False`, training may complete before using the entire budget when the service determines it can no longer improve the model objective measurements.
The `run` method when completed returns the `Model` resource.
The execution of the training pipeline will take up to 20 minutes.
```
model = dag.run(
dataset=dataset,
model_display_name="flowers_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=8000,
disable_early_stopping=False,
)
```
*Example output:*
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/2109316300865011712?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:
PipelineState.PIPELINE_STATE_RUNNING
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/1284590221056278528
## Evaluate the model
### [projects.locations.models.evaluations.list](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations/list)
## Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
```
# Get model resource ID
models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
```
*Example output:*
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
struct_value {
fields {
key: "auPrc"
value {
number_value: 0.9891107
}
}
fields {
key: "confidenceMetrics"
value {
list_value {
values {
struct_value {
fields {
key: "precision"
value {
number_value: 0.2
}
}
fields {
key: "recall"
value {
number_value: 1.0
}
}
}
}
## Make batch predictions
### [predictions.batch-prediction](https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions)
### Get test item(s)
Now do a batch prediction with your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_items = !gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(",")) == 3:
_, test_item_1, test_label_1 = str(test_items[0]).split(",")
_, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
test_item_1, test_label_1 = str(test_items[0]).split(",")
test_item_2, test_label_2 = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
```
### Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
```
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
```
### Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
- `content`: The Cloud Storage path to the image.
- `mime_type`: The content type. In our example, it is a `jpeg` file.
For example:
    {'content': '[your-bucket]/file1.jpg', 'mime_type': 'image/jpeg'}
```
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
```
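The same JSONL payload can be sketched locally with the standard `json` module, without Cloud Storage or TensorFlow (the bucket paths below are hypothetical):

```python
import json

items = [
    {"content": "gs://your-bucket/file1.jpg", "mime_type": "image/jpeg"},
    {"content": "gs://your-bucket/file2.jpg", "mime_type": "image/jpeg"},
]
# One JSON object per line, each terminated by a newline.
jsonl = "\n".join(json.dumps(item) for item in items) + "\n"
print(jsonl)
```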
### Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
- `job_display_name`: The human readable name for the batch prediction job.
- `gcs_source`: A list of one or more batch request input files.
- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.
- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
```
batch_predict_job = model.batch_predict(
job_display_name="flowers_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
```
*Example output:*
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
### Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
```
batch_predict_job.wait()
```
*Example Output:*
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
### Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. Call the `iter_outputs()` method to get a list of the Cloud Storage files generated with the results. Each file contains one or more prediction records in JSON format:
- `content`: The prediction request.
- `prediction`: The prediction response.
- `ids`: The internal assigned unique identifiers for each prediction request.
- `displayNames`: The class names for each class label.
- `confidences`: The predicted confidence, between 0 and 1, per class label.
```
import json
import tensorflow as tf

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            line = json.loads(line)
            print(line)
            break
```
*Example Output:*
{'instance': {'content': 'gs://andy-1234-221921aip-20210802180634/100080576_f52e8ee070_n.jpg', 'mimeType': 'image/jpeg'}, 'prediction': {'ids': ['3195476558944927744', '1636105187967893504', '7400712711002128384', '2789026692574740480', '5501319568158621696'], 'displayNames': ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'], 'confidences': [0.99998736, 8.222247e-06, 3.6782617e-06, 5.3231275e-07, 2.6960555e-07]}}
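For reference, a record of this shape can be picked apart with the standard library alone; the line below uses made-up values, not real job output:

```python
import json

# Hypothetical single line from a batch prediction results file
line = ('{"instance": {"content": "gs://my-bucket/flower.jpg", "mimeType": "image/jpeg"}, '
        '"prediction": {"displayNames": ["daisy", "tulips"], "confidences": [0.98, 0.02]}}')

record = json.loads(line)
top_class = record["prediction"]["displayNames"][0]
top_confidence = record["prediction"]["confidences"][0]
print(top_class, top_confidence)  # daisy 0.98
```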
## Make online predictions
### [predictions.deploy-model-api](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api)
## Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method.
```
endpoint = model.deploy()
```
*Example output:*
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
### [predictions.online-prediction-automl](https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-automl)
### Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_item = !gsutil cat $IMPORT_FILE | head -n1
if len(str(test_item[0]).split(",")) == 3:
    _, test_item, test_label = str(test_item[0]).split(",")
else:
    test_item, test_label = str(test_item[0]).split(",")

print(test_item, test_label)
```
### Make the prediction
Now that your `Model` resource is deployed to an `Endpoint` resource, you can do online predictions by sending prediction requests to the Endpoint resource.
#### Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using `tf.io.gfile.GFile()`. To pass the test data to the prediction service, you encode the bytes in base64, which keeps the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ 'content': { 'b64': base64_encoded_bytes } }
Since the `predict()` method can take multiple items (instances), send your single test item as a list of one test item.
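The base64 round trip itself can be sketched with the standard library alone; the byte string below is just a stand-in for real image bytes:

```python
import base64

# Stand-in for image bytes read from Cloud Storage
content = b"\x89PNG fake image payload"

# Encode to a base64 string as required by the prediction request format
instance = {"content": base64.b64encode(content).decode("utf-8")}

# Decoding recovers the original bytes unchanged
assert base64.b64decode(instance["content"]) == content
```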
#### Response
The response from the `predict()` call is a Python dictionary with the following entries:
- `ids`: The internal assigned unique identifiers for each prediction request.
- `displayNames`: The class names for each class label.
- `confidences`: The predicted confidence, between 0 and 1, per class label.
- `deployed_model_id`: The Vertex AI identifier for the deployed Model resource which did the predictions.
```
import base64
import tensorflow as tf
with tf.io.gfile.GFile(test_item, "rb") as f:
    content = f.read()

# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{"content": base64.b64encode(content).decode("utf-8")}]

prediction = endpoint.predict(instances=instances)
print(prediction)
```
*Example output:*
Prediction(predictions=[{'ids': ['3195476558944927744', '5501319568158621696', '1636105187967893504', '2789026692574740480', '7400712711002128384'], 'displayNames': ['daisy', 'tulips', 'dandelion', 'sunflowers', 'roses'], 'confidences': [0.999987364, 2.69604527e-07, 8.2222e-06, 5.32310196e-07, 3.6782335e-06]}], deployed_model_id='5949545378826158080', explanations=None)
## Undeploy the model
When you are done making predictions, undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model.
```
endpoint.undeploy_all()
```
# Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_all = True

if delete_all:
    # Delete the dataset using the Vertex dataset object
    try:
        if "dataset" in globals():
            dataset.delete()
    except Exception as e:
        print(e)

    # Delete the model using the Vertex model object
    try:
        if "model" in globals():
            model.delete()
    except Exception as e:
        print(e)

    # Delete the endpoint using the Vertex endpoint object
    try:
        if "endpoint" in globals():
            endpoint.delete()
    except Exception as e:
        print(e)

    # Delete the AutoML or Pipeline training job
    try:
        if "dag" in globals():
            dag.delete()
    except Exception as e:
        print(e)

    # Delete the custom training job
    try:
        if "job" in globals():
            job.delete()
    except Exception as e:
        print(e)

    # Delete the batch prediction job using the Vertex batch prediction object
    try:
        if "batch_predict_job" in globals():
            batch_predict_job.delete()
    except Exception as e:
        print(e)

    # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
    try:
        if "hpt_job" in globals():
            hpt_job.delete()
    except Exception as e:
        print(e)

    if "BUCKET_NAME" in globals():
        ! gsutil rm -r $BUCKET_NAME
```
# Factor Model of Portfolio Return
```
import sys
!{sys.executable} -m pip install -r requirements.txt
import numpy as np
import pandas as pd
import time
import os
import quiz_helper
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
```
### data bundle
```
import os
import quiz_helper
from zipline.data import bundles
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod')
ingest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME)
bundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func)
print('Data Registered')
```
### Build pipeline engine
```
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
from zipline.utils.calendars import get_calendar
universe = AverageDollarVolume(window_length=120).top(500)
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME)
engine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar)
```
### View Data¶
With the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for our risk model.
```
universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')
universe_tickers = engine\
.run_pipeline(
Pipeline(screen=universe),
universe_end_date,
universe_end_date)\
.index.get_level_values(1)\
.values.tolist()
universe_tickers
len(universe_tickers)
from zipline.data.data_portal import DataPortal
data_portal = DataPortal(
bundle_data.asset_finder,
trading_calendar=trading_calendar,
first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,
equity_minute_reader=None,
equity_daily_reader=bundle_data.equity_daily_bar_reader,
adjustment_reader=bundle_data.adjustment_reader)
```
## Get pricing data helper function
```
from quiz_helper import get_pricing
```
## get pricing data into a dataframe
```
returns_df = \
get_pricing(
data_portal,
trading_calendar,
universe_tickers,
universe_end_date - pd.DateOffset(years=5),
universe_end_date)\
.pct_change()[1:].fillna(0) #convert prices into returns
returns_df
```
## Let's look at a two stock portfolio
Let's pretend we have a portfolio of two stocks. We'll pick Apple and Microsoft in this example.
```
aapl_col = returns_df.columns[3]
msft_col = returns_df.columns[312]
asset_return_1 = returns_df[aapl_col].rename('asset_return_aapl')
asset_return_2 = returns_df[msft_col].rename('asset_return_msft')
asset_return_df = pd.concat([asset_return_1,asset_return_2],axis=1)
asset_return_df.head(2)
```
## Factor returns
Let's make up a "factor" by taking an average of all stocks in our list. You can think of this as an equal-weighted index of the 490 stocks, kind of like a measure of the "market". We'll also make another factor by calculating the median of all the stocks. These are mainly intended to help us generate some data to work with. We'll go into how some common risk factors are generated later in the lessons.
Also note that we're setting `axis=1` so that we calculate a value for each time period (row) instead of one value for each column (asset).
```
factor_return_1 = returns_df.mean(axis=1)
factor_return_2 = returns_df.median(axis=1)
factor_return_l = [factor_return_1, factor_return_2]
```
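To see what `axis=1` does here, a toy frame with three periods and four assets (made-up numbers) gives one averaged value per row:

```python
import pandas as pd

# Toy returns: 3 time periods (rows) x 4 assets (columns)
toy_returns = pd.DataFrame([[0.01, 0.02, 0.03, 0.04],
                            [0.00, -0.01, 0.01, 0.02],
                            [0.02, 0.02, 0.00, 0.00]])

# axis=1 averages across the assets, producing one value per time period
equal_weight_index = toy_returns.mean(axis=1)
print([round(x, 3) for x in equal_weight_index])  # [0.025, 0.005, 0.01]
```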
## Factor exposures
Factor exposures refer to how "exposed" a stock is to each factor. We'll get into this more later. For now, just think of this as one number for each stock, for each of the factors.
```
from sklearn.linear_model import LinearRegression
"""
For now, just assume that we're calculating a number for each
stock, for each factor, which represents how "exposed" each stock is
to each factor.
We'll discuss how factor exposure is calculated later in the lessons.
"""
def get_factor_exposures(factor_return_l, asset_return):
lr = LinearRegression()
X = np.array(factor_return_l).T
y = np.array(asset_return.values)
lr.fit(X,y)
return lr.coef_
factor_exposure_l = []
for i in range(len(asset_return_df.columns)):
factor_exposure_l.append(
get_factor_exposures(factor_return_l,
asset_return_df[asset_return_df.columns[i]]
))
factor_exposure_a = np.array(factor_exposure_l)
print(f"factor_exposures for asset 1 {factor_exposure_a[0]}")
print(f"factor_exposures for asset 2 {factor_exposure_a[1]}")
```
## Quiz 1 Portfolio's factor exposures
Let's make up some portfolio weights for now; in a later lesson, we'll look at how portfolio optimization combines alpha factors and a risk factor model to choose asset weights.
$\beta_{p,k} = \sum_{i=1}^{N}(x_i \times \beta_{i,k})$
```
weight_1 = 0.60 #let's give AAPL a portfolio weight
weight_2 = 0.40 #give MSFT a portfolio weight
weight_a = np.array([weight_1, weight_2])
```
For the sake of understanding, try saving each of the values
into separate variables to perform the multiplications and additions.
Check that your calculations for portfolio factor exposure match
the output of this dot product:
```
weight_a.dot(factor_exposure_a)
```
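With made-up exposure numbers (deliberately not the ones computed above), the dot product unpacks like this:

```python
import numpy as np

# Hypothetical inputs: 2 assets, 2 factors
weights = np.array([0.60, 0.40])    # x_1, x_2
exposures = np.array([[1.2, 0.8],   # asset 1's exposures to factors 1 and 2
                      [0.9, 1.1]])  # asset 2's exposures

# beta_{p,k} = sum_i x_i * beta_{i,k}, written out for factor 1:
beta_p_1 = weights[0] * exposures[0, 0] + weights[1] * exposures[1, 0]

# The dot product computes every factor's exposure at once
portfolio_exposures = weights.dot(exposures)
print([round(v, 2) for v in portfolio_exposures])  # [1.08, 0.92]
```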
```
# TODO: calculate portfolio's exposure to factor 1
factor_exposure_1_1 = # ...
factor_exposure_2_1 = # ...
factor_exposure_p_1 = # ...
factor_exposure_p_1
# TODO: calculate portfolio's exposure to factor 2
factor_exposure_1_2 = # ...
factor_exposure_2_2 = # ...
factor_exposure_p_2 = # ...
factor_exposure_p_2
```
## Quiz 2 Calculate portfolio return
For clarity, try storing the pieces into their own
named variables and writing out the multiplications and addition.
You can check if your answer matches this output:
```
asset_return_df.values.dot(weight_a)
```
```
# TODO calculate the portfolio return
asset_return_1 = # ...
asset_return_2 = # ...
portfolio_return = # ...
portfolio_return = pd.Series(portfolio_return,index=asset_return_df.index).rename('portfolio_return')
portfolio_return.head(2)
```
## Quiz 3 Contribution of Factors
The sum of the products of factor exposure times factor return is the contribution of the factors. It's also called the "common return." Calculate the common return of the portfolio, given the two factor exposures and the two factor returns.
```
# TODO: Calculate the contribution of the two factors to the return of this example asset
common_return = # ...
common_return = common_return.rename('common_return')
common_return.head(2)
```
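As a generic illustration with toy numbers (deliberately not the quiz's variables), the common return for a single period is the exposure-weighted sum of the factor returns:

```python
import numpy as np

# Hypothetical portfolio exposures to two factors,
# and the factors' returns for one period
exposures = np.array([0.9, 0.1])
factor_returns = np.array([0.02, 0.01])

# common return = sum over factors k of beta_{p,k} * f_k
common = float(exposures.dot(factor_returns))
print(round(common, 3))  # 0.019
```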
## Quiz 4 Specific Return
The specific return is the part of the portfolio return that isn't explained by the factors. So it's the actual return minus the common return.
Calculate the specific return of the stock.
```
# TODO: calculate the specific return of this asset
specific_return = # ...
specific_return = specific_return.rename('specific_return')
```
## Visualize the common return and specific return
```
return_components = pd.concat([common_return,specific_return],axis=1)
return_components.head(2)
return_components.plot(title="asset return = common return + specific return");
pd.DataFrame(portfolio_return).plot(color='purple');
```
## Solution
[Solution notebook](factor_model_portfolio_return_solution.ipynb)
<a href="https://colab.research.google.com/github/dlmacedo/starter-academic/blob/master/content/courses/deeplearning/notebooks/tensorflow/saving_and_serializing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Saving and Serializing Models with TensorFlow Keras
The first part of this guide covers saving and serialization for Sequential models and for models built using the Functional API. The saving and serialization APIs are exactly the same for both of these types of models.
Saving for custom subclasses of `Model` is covered in the section "Saving Subclassed Models". The APIs in this case are slightly different than for Sequential or Functional models.
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print("TensorFlow Version:", tf.__version__)
#print("GPU Available:", tf.test.is_gpu_available())
print("GPU Available:", tf.config.list_physical_devices('GPU'))
tf.keras.backend.clear_session() # For easy reset of notebook state.
```
## Part I: Saving Sequential models or Functional models
Let's consider the following model:
```
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')
model.summary()
```
Optionally, let's train this model, just so it has weight values to save, as well as an optimizer state.
Of course, you can save models you've never trained, too, but obviously that's less interesting.
```
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop())
history = model.fit(x_train, y_train,
batch_size=64,
epochs=1)
# Save predictions for future checks
predictions = model.predict(x_test)
```
### Whole-model saving
You can save a model built with the Functional API into a single file. You can later recreate the same model from this file, even if you no longer have access to the code that created the model.
This file includes:
- The model's architecture
- The model's weight values (which were learned during training)
- The model's training config (what you passed to `compile`), if any
- The optimizer and its state, if any (this enables you to restart training where you left off)
```
# Save the model
model.save('path_to_my_model.h5')
# Recreate the exact same model purely from the file
new_model = keras.models.load_model('path_to_my_model.h5')
import numpy as np
# Check that the state is preserved
new_predictions = new_model.predict(x_test)
np.testing.assert_allclose(predictions, new_predictions, rtol=1e-6, atol=1e-6)
# Note that the optimizer state is preserved as well:
# you can resume training where you left off.
```
### Export to SavedModel
You can also export a whole model to the TensorFlow `SavedModel` format. `SavedModel` is a standalone serialization format for TensorFlow objects, supported by TensorFlow Serving as well as TensorFlow implementations other than Python.
```
# Export the model to a SavedModel
#keras.experimental.export_saved_model(model, 'path_to_saved_model')
model.save('path_to_saved_model')
# Recreate the exact same model
#new_model = keras.experimental.load_from_saved_model('path_to_saved_model')
new_model = keras.models.load_model('path_to_saved_model')
# Check that the state is preserved
new_predictions = new_model.predict(x_test)
np.testing.assert_allclose(predictions, new_predictions, rtol=1e-6, atol=1e-6)
# Note that the optimizer state is preserved as well:
# you can resume training where you left off.
```
The `SavedModel` files that were created contain:
- A TensorFlow checkpoint containing the model weights.
- A `SavedModel` proto containing the underlying TensorFlow graph. Separate
graphs are saved for prediction (serving), training, and evaluation. If
the model wasn't compiled before, then only the inference graph
gets exported.
- The model's architecture config, if available.
### Architecture-only saving
Sometimes, you are only interested in the architecture of the model, and you don't need to save the weight values or the optimizer. In this case, you can retrieve the "config" of the model via the `get_config()` method. The config is a Python dict that enables you to recreate the same model -- initialized from scratch, without any of the information learned previously during training.
```
config = model.get_config()
reinitialized_model = keras.Model.from_config(config)
# Note that the model state is not preserved! We only saved the architecture.
new_predictions = reinitialized_model.predict(x_test)
assert abs(np.sum(predictions - new_predictions)) > 0.
```
You can alternatively use `to_json()` and `from_json()`, which use a JSON string to store the config instead of a Python dict. This is useful for saving the config to disk.
```
json_config = model.to_json()
reinitialized_model = keras.models.model_from_json(json_config)
```
### Weights-only saving
Sometimes, you are only interested in the state of the model -- its weights values -- and not in the architecture. In this case, you can retrieve the weights values as a list of Numpy arrays via `get_weights()`, and set the state of the model via `set_weights`:
```
weights = model.get_weights() # Retrieves the state of the model.
model.set_weights(weights) # Sets the state of the model.
```
You can combine `get_config()`/`from_config()` and `get_weights()`/`set_weights()` to recreate your model in the same state. However, unlike `model.save()`, this will not include the training config and the optimizer. You would have to call `compile()` again before using the model for training.
```
config = model.get_config()
weights = model.get_weights()
new_model = keras.Model.from_config(config)
new_model.set_weights(weights)
# Check that the state is preserved
new_predictions = new_model.predict(x_test)
np.testing.assert_allclose(predictions, new_predictions, rtol=1e-6, atol=1e-6)
# Note that the optimizer was not preserved,
# so the model should be compiled anew before training
# (and the optimizer will start from a blank state).
```
The save-to-disk alternative to `get_weights()` and `set_weights(weights)`
is `save_weights(fpath)` and `load_weights(fpath)`.
Here's an example that saves to disk:
```
# Save JSON config to disk
json_config = model.to_json()
with open('model_config.json', 'w') as json_file:
json_file.write(json_config)
# Save weights to disk
model.save_weights('path_to_my_weights.h5')
# Reload the model from the 2 files we saved
with open('model_config.json') as json_file:
json_config = json_file.read()
new_model = keras.models.model_from_json(json_config)
new_model.load_weights('path_to_my_weights.h5')
# Check that the state is preserved
new_predictions = new_model.predict(x_test)
np.testing.assert_allclose(predictions, new_predictions, rtol=1e-6, atol=1e-6)
# Note that the optimizer was not preserved.
```
But remember that the simplest, recommended way is just this:
```
model.save('path_to_my_model.h5')
del model
model = keras.models.load_model('path_to_my_model.h5')
```
### Weights-only saving in SavedModel format
Note that `save_weights` can create files either in the Keras HDF5 format
or in the TensorFlow SavedModel format. The format is inferred from the file extension
you provide: if it is ".h5" or ".keras", the framework uses the Keras HDF5 format. Anything
else defaults to SavedModel.
```
model.save_weights('path_to_my_tf_savedmodel')
```
For total explicitness, the format can be explicitly passed via the `save_format` argument, which can take the value "tf" or "h5":
```
model.save_weights('path_to_my_tf_savedmodel', save_format='tf')
```
## Saving Subclassed Models
Sequential models and Functional models are data structures that represent a DAG of layers. As such,
they can be safely serialized and deserialized.
A subclassed model differs in that it's not a data structure but a piece of code. The architecture of the model
is defined via the body of the `call` method. This means that the architecture of the model cannot be safely serialized. To load a model, you need access to the code that created it (the code of the model subclass). Alternatively, you could serialize this code as bytecode (e.g. via pickling), but that's unsafe and generally not portable.
For more information about these differences, see the article ["What are Symbolic and Imperative APIs in TensorFlow 2.0?"](https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021).
Let's consider the following subclassed model, which follows the same structure as the model from the first section:
```
class ThreeLayerMLP(keras.Model):
def __init__(self, name=None):
super(ThreeLayerMLP, self).__init__(name=name)
self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
self.pred_layer = layers.Dense(10, activation='softmax', name='predictions')
def call(self, inputs):
x = self.dense_1(inputs)
x = self.dense_2(x)
return self.pred_layer(x)
def get_model():
return ThreeLayerMLP(name='3_layer_mlp')
model = get_model()
```
First of all, *a subclassed model that has never been used cannot be saved*.
That's because a subclassed model needs to be called on some data in order to create its weights.
Until the model has been called, it does not know the shape and dtype of the input data it should be
expecting, and thus cannot create its weight variables. You may remember that in the Functional model from the first section, the shape and dtype of the inputs was specified in advance (via `keras.Input(...)`) -- that's why Functional models have a state as soon as they're instantiated.
Let's train the model, so as to give it a state:
```
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop())
history = model.fit(x_train, y_train,
batch_size=64,
epochs=1)
```
The recommended way to save a subclassed model is to use `save_weights` to create a TensorFlow SavedModel checkpoint, which will contain the value of all variables associated with the model:
- The layers' weights
- The optimizer's state
- Any variables associated with stateful model metrics (if any)
```
model.save_weights('path_to_my_weights', save_format='tf')
# Save predictions for future checks
predictions = model.predict(x_test)
# Also save the loss on the first batch
# to later assert that the optimizer state was preserved
first_batch_loss = model.train_on_batch(x_train[:64], y_train[:64])
```
To restore your model, you will need access to the code that created the model object.
Note that in order to restore the optimizer state and the state of any stateful metric, you should
compile the model (with the exact same arguments as before) and call it on some data before calling `load_weights`:
```
# Recreate the model
new_model = get_model()
new_model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop())
# This initializes the variables used by the optimizers,
# as well as any stateful metric variables
new_model.train_on_batch(x_train[:1], y_train[:1])
# Load the state of the old model
new_model.load_weights('path_to_my_weights')
```
You've reached the end of this guide! This covers everything you need to know about saving and serializing models with tf.keras in TensorFlow 2.0.
### Preparation steps
Install iotfunctions with
`pip install git+https://github.com/ibm-watson-iot/functions@development`
This projects contains the code for the Analytics Service pipeline as well as the anomaly functions and should pull in most of this notebook's dependencies.
The plotting library matplotlib is the exception, so you need to run
`pip install matplotlib`
#### Unpacking the data
Data is stored in `ArmstarkPumps_3408_36B1_096.tar.bz2`
Please run `tar xfj ArmstarkPumps_3408_36B1_096.tar.bz2` to unpack it prior to running this notebook. You should find
```
-rw-rw-r-- 1 markus markus 72361832 Dez 22 17:10 04714B603408.csv
-rw-rw-r-- 1 markus markus 71782702 Dez 22 16:43 04714B6036B1.csv
-rw-rw-r-- 1 markus markus 50373659 Dez 22 16:43 IOT_PUMP_DE_GEN5_202012141519.csv
```
```
# Real life data
import logging
import threading
import itertools
import math
import datetime
import pandas as pd
import numpy as np
import scipy as sp
import json
import ibm_db
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import axes3d
import seaborn as seabornInstance
from sqlalchemy import Column, Integer, String, Float, DateTime, Boolean, func
from iotfunctions import base
from iotfunctions import bif
from iotfunctions import entity
from iotfunctions import metadata
from iotfunctions.metadata import EntityType
from iotfunctions.db import Database
from iotfunctions.dbtables import FileModelStore, DBModelStore
from iotfunctions.enginelog import EngineLogging
from iotfunctions import estimator
from iotfunctions.bif import PythonExpression
from iotfunctions.ui import (UISingle, UIMultiItem, UIFunctionOutSingle,
UISingleItem, UIFunctionOutMulti, UIMulti, UIExpression,
UIText, UIStatusFlag, UIParameters)
from iotfunctions.anomaly import (SaliencybasedGeneralizedAnomalyScore, SpectralAnomalyScore,
FFTbasedGeneralizedAnomalyScore, KMeansAnomalyScore,
SaliencybasedGeneralizedAnomalyScoreV2, FFTbasedGeneralizedAnomalyScoreV2,
KMeansAnomalyScoreV2, BayesRidgeRegressor,)
from mmfunctions.anomaly import (FeatureBuilder, GBMForecaster, KDEAnomalyScore, VIAnomalyScore)
#from poc.functions import State_Timer
import datetime as dt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn import metrics
from pandas.plotting import register_matplotlib_converters
import seaborn as sns
register_matplotlib_converters()
EngineLogging.configure_console_logging(logging.INFO)
# set up a db object with a FileModelStore to support scaling
with open('credentials_as_beta3.json', encoding='utf-8') as F:
    credentials = json.loads(F.read())

db_schema = None
#fm = FileModelStore()
#db = Database(credentials=credentials, model_store=fm)

# 11390 is the entity type id of shadow_pump_de_gen5
db = Database(credentials=credentials, entity_type_id=11930)
if db.model_store is None:
    db.model_store = DBModelStore('beta-3', 11390, 'bluadmin', db.native_connection, db.db_type)

print(db)
db.native_connection
def load_data(filename):
    # load data
    df_input = pd.read_csv(filename, parse_dates=['RCV_TIMESTAMP_UTC'], comment='#')
    #df_input = df_input.asfreq('H')
    df_input = df_input.sort_values(by='RCV_TIMESTAMP_UTC').\
        rename(columns={'RCV_TIMESTAMP_UTC': 'timestamp', 'DEVICEID': 'entity'}).\
        drop(columns=['DEVICETYPE', 'ID', 'FORMAT', 'UPDATED_UTC', 'LOGICALINTERFACE_ID', 'PUMP_MODE',
                      'DEVICE_ID', 'FIRMWARE_VER', 'PTS_COUNT', 'SERIAL_NUMBER', 'HARDWARE_VER',
                      'TAG_NUMBER', 'PWR', 'HW_VER', 'DQ', 'FW_VER', 'PTS', 'PERF_OPTION']).\
        drop(columns=['VIBRATION_N_XAXIS', 'VIBRATION_N_YAXIS', 'VIBRATION_N_ZAXIS', 'REATED_POWER',
                      'RATED_SPEED', 'RATED_CURRENT', 'RMSN_X', 'RMSN_Y', 'RMSN_Z', 'RUN_QTY',
                      'DESIGN_HEAD', 'DESIGN_FLOW', 'RMS_X_AVG'])
    df_input = df_input[df_input['VERSION'] != 0].drop(columns=['VERSION'])
    return df_input
def prep_data(df_input):
entity = df_input['entity'].values[0]
list_ac_vx = []
cnt = 0
for idx,row in df_input[['timestamp','VIBRATIONS_XAXIS','VIBRATIONS_YAXIS','VIBRATIONS_ZAXIS',
'ACCEL_POWER','ACCEL_SPEED']].iterrows():
if isinstance(row['timestamp'], str):
ts = datetime.datetime.strptime(row['timestamp'], '%Y-%m-%d-%H.%M.%S.%f')
else:
ts = row['timestamp']
rvibxs = row['VIBRATIONS_XAXIS']
rvibys = row['VIBRATIONS_YAXIS']
rvibzs = row['VIBRATIONS_ZAXIS']
racc = row['ACCEL_POWER']
rspd = row['ACCEL_SPEED']
if isinstance(racc, str):
racc = eval(racc)
if isinstance(rspd, str):
rspd = eval(rspd)
if isinstance(rvibxs, str):
list_ac = []
for ac in racc:
                # power/speed arrive at 1/3 the vibration sample rate, so each
                # value is repeated 3x to align with the vibration series (assumed)
list_ac.append(eval(ac))
list_ac.append(eval(ac))
list_ac.append(eval(ac))
#print (list_ac)
list_as = []
for as_ in rspd:
list_as.append(eval(as_))
list_as.append(eval(as_))
list_as.append(eval(as_))
list_vx = []
for vx in eval(rvibxs):
list_vx.append(vx)
#print (list_vx)
list_vy = []
for vy in eval(rvibys):
list_vy.append(vy)
list_vz = []
for vz in eval(rvibzs):
list_vz.append(vz)
#print(list_ac, list_vx)
cnt2 = 0
for p in zip(list_ac, list_as, list_vx, list_vy, list_vz):
#print(ts + datetime.timedelta(seconds = cnt2 * 10), p[0],p[1])
list_ac_vx.append([ts + datetime.timedelta(seconds = cnt2 * 10), p[0],p[1],p[2],p[3],p[4]])
cnt2 += 1
cnt += 1
df_clean = pd.DataFrame(list_ac_vx, columns=['timestamp','power','speed','rms_x', 'rms_y', 'rms_z'])
df_clean['entity'] = entity
return df_clean.set_index(['entity','timestamp'])
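# prep_data parses string-encoded lists from the CSV with eval(); for untrusted
# payloads, ast.literal_eval is a safer drop-in (sketch with a made-up payload):
import ast

payload = "[0.12, 0.34, 0.56]"      # shape of a VIBRATIONS_XAXIS cell (assumed)
values = ast.literal_eval(payload)  # parses literals only, executes no code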
#df_clean = prep_data(load_data('./IOT_PUMP_DE_GEN5_202012141519.csv'))
#df_clean.head(2)
filename = "./IOT_SHADOW_PUMP_DE_GEN5_202102231533.csv"
dff = pd.read_csv(filename, parse_dates=['EVT_TIMESTAMP'], comment='#')
dff.columns = dff.columns.str.lower()
df_clean = dff.drop(columns=['devicetype','logicalinterface_id','eventtype','format',
'rcv_timestamp_utc','updated_utc']).set_index(['deviceid','evt_timestamp'])
df_clean.to_csv('PumpTestData.csv')
df_clean.index.levels[0]
#plt.figure(figsize=(20, 6))
df_display = df_clean[df_clean['speed'] > 1000]
sns.set_style('ticks')
#fig, ax = plt.subplots(1,1,figsize=(20,8))
gx = sns.displot(df_display[['speed','rms_x']], x='speed', y='rms_x', kind="kde",
rug=False, height=6, aspect=2)
gx.set_titles('KDE for ' + df_clean.index.levels[0].values[0])
#plt.figure(figsize=(16, 6))
sns.set_style('ticks')
#fig, ax = plt.subplots(1,1,figsize=(20,8))
gy = sns.displot(df_display[['speed','rms_y']], x='speed', y='rms_y', kind="kde",
rug=False, height=6, aspect=2)
gy.set_titles('KDE for ' + df_clean.index.levels[0].values[0])
# prepare
delete_model = True
df_i = df_clean.dropna()
#df_i['speed'] /= 10
df_i.to_csv('PumpTestData.csv')
%%time
print('Train and evaluate model for ' + str(df_i.index.levels[0].values))
# Now run the anomaly functions as if they were executed in a pipeline
EngineLogging.configure_console_logging(logging.DEBUG)
jobsettings = { 'db': db,
'_db_schema': 'public', 'save_trace_to_file' : True}
spsi = VIAnomalyScore(['speed'], ['rms_x'])
#spsi.epochs = 1 # only for testing model storage
spsi.epochs = 300
spsi.auto_train = True
spsi.delete_model = delete_model
et = spsi._build_entity_type(columns = [Column('MinTemp',Float())], **jobsettings)
et.name = 'IOT_SHADOW_PUMP_DE_GEN5'
spsi._entity_type = et
df_i = spsi.execute(df=df_i)
EngineLogging.configure_console_logging(logging.INFO)
df_i.describe()
%%time
print('Train and evaluate model for ' + str(df_i.index.levels[0].values))
# Now run the anomaly functions as if they were executed in a pipeline
EngineLogging.configure_console_logging(logging.DEBUG)
jobsettings = { 'db': db,
'_db_schema': 'public', 'save_trace_to_file' : True}
spsiy = VIAnomalyScore(['speed'], ['rms_y'])
#spsiy.epochs = 1 # only for testing model storage
spsiy.epochs = 300
spsiy.auto_train = True
spsiy.delete_model = delete_model
ety = spsiy._build_entity_type(columns = [Column('MinTemp',Float())], **jobsettings)
ety.name = 'IOT_SHADOW_PUMP_DE_GEN5'
spsiy._entity_type = ety
df_i = spsiy.execute(df=df_i)
EngineLogging.configure_console_logging(logging.INFO)
df_i.describe()
%%time
print('Train and evaluate model for ' + str(df_i.index.levels[0].values))
# Now run the anomaly functions as if they were executed in a pipeline
EngineLogging.configure_console_logging(logging.DEBUG)
jobsettings = { 'db': db,
'_db_schema': 'public', 'save_trace_to_file' : True}
spsiz = VIAnomalyScore(['speed'], ['rms_z'])
#spsiz.epochs = 1 # only for testing model storage
spsiz.epochs = 300
spsiz.auto_train = True
spsiz.delete_model = delete_model
etz = spsiz._build_entity_type(columns = [Column('MinTemp',Float())], **jobsettings)
etz.name = 'IOT_SHADOW_PUMP_DE_GEN5'
spsiz._entity_type = etz
df_i = spsiz.execute(df=df_i)
EngineLogging.configure_console_logging(logging.INFO)
df_i.describe()
et.name
df_i.columns
df_i.describe()
df_i.index.levels[0].values
# no multilevel index for plotting
count = 4
df_1 = df_i.loc[df_i.index.levels[0].values[count]].copy()
# remove speed < 400
df_1[df_1['speed'] < 400] = np.nan
print (df_i.index.levels[0].values[count], df_1.shape)
rmsstr = 'rms_x'
predstr = 'predicted_rms_x'
preddstr = 'pred_dev_rms_x'
anostr = 'AnomalyX'
# X-coordinate
arr1 = np.where(df_1[rmsstr] > df_1[predstr] + df_1[preddstr], df_1[rmsstr], 0) + \
np.where(df_1[rmsstr] < df_1[predstr] - df_1[preddstr], df_1[rmsstr], 0)
arr1[arr1 == 0] = np.nan
df_1[anostr] = arr1
fig, ax = plt.subplots(1, 1, figsize=(10, 8), squeeze=False)
ax[0,0].scatter(df_1['speed'], df_1[rmsstr])
ax[0,0].scatter(df_1['speed'], df_1[predstr], color='green')
ax[0,0].scatter(df_1['speed'], df_1[predstr] - df_1[preddstr], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,0].scatter(df_1['speed'], df_1[predstr] + df_1[preddstr], color='orange', lw=0.1)
ax[0,0].scatter(df_1['speed'], df_1[anostr], color='red', lw=2)
ax[0,0].set_xlabel('Speed')
ax[0,0].set_ylabel('Vibration - X')
ax[0,0].legend()
ax[0,0].set_title('pump ' + df_i.index.levels[0].values[count])
rmsstr = 'rms_y'
predstr = 'predicted_rms_y'
preddstr = 'pred_dev_rms_y'
anostr = 'AnomalyY'
# Y-coordinate
arr1 = np.where(df_1[rmsstr] > df_1[predstr] + df_1[preddstr], df_1[rmsstr], 0) + \
np.where(df_1[rmsstr] < df_1[predstr] - df_1[preddstr], df_1[rmsstr], 0)
arr1[arr1 == 0] = np.nan
df_1[anostr] = arr1
fig, ax = plt.subplots(1, 1, figsize=(10, 8), squeeze=False)
ax[0,0].scatter(df_1['speed'], df_1[rmsstr])
ax[0,0].scatter(df_1['speed'], df_1[predstr], color='green')
ax[0,0].scatter(df_1['speed'], df_1[predstr] - df_1[preddstr], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,0].scatter(df_1['speed'], df_1[predstr] + df_1[preddstr], color='orange', lw=0.1)
ax[0,0].scatter(df_1['speed'], df_1[anostr], color='red', lw=2)
ax[0,0].set_xlabel('Speed')
ax[0,0].set_ylabel('Vibration - Y')
ax[0,0].legend()
ax[0,0].set_title('pump ' + df_i.index.levels[0].values[count])
rmsstr = 'rms_z'
predstr = 'predicted_rms_z'
preddstr = 'pred_dev_rms_z'
anostr = 'AnomalyZ'
# Z-coordinate
arr1 = np.where(df_1[rmsstr] > df_1[predstr] + df_1[preddstr], df_1[rmsstr], 0) + \
np.where(df_1[rmsstr] < df_1[predstr] - df_1[preddstr], df_1[rmsstr], 0)
arr1[arr1 == 0] = np.nan
df_1[anostr] = arr1
fig, ax = plt.subplots(1, 1, figsize=(10, 8), squeeze=False)
ax[0,0].scatter(df_1['speed'], df_1[rmsstr])
ax[0,0].scatter(df_1['speed'], df_1[predstr], color='green')
ax[0,0].scatter(df_1['speed'], df_1[predstr] - df_1[preddstr], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,0].scatter(df_1['speed'], df_1[predstr] + df_1[preddstr], color='orange', lw=0.1)
ax[0,0].scatter(df_1['speed'], df_1[anostr], color='red', lw=2)
ax[0,0].set_xlabel('Speed')
ax[0,0].set_ylabel('Vibration - Z')
ax[0,0].legend()
ax[0,0].set_title('pump ' + df_i.index.levels[0].values[count])
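# The three per-axis cells above repeat the same banding rule; it could be folded
# into one helper like this (sketch; plain numpy, independent of the dataframe):
import numpy as np

def flag_anomalies(actual, predicted, dev):
    """Return actual where it leaves the predicted +/- dev band, else NaN."""
    actual = np.asarray(actual, dtype=float)
    return np.where(np.abs(actual - np.asarray(predicted)) > np.asarray(dev),
                    actual, np.nan)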
#df_1 = df_i.loc[df_i.index.levels[0].values[0]].copy()
fig, ax = plt.subplots(2, 2, figsize=(20, 9), squeeze=False)
ax[0,0].scatter(df_1['speed'], df_1['rms_x'])
ax[0,0].scatter(df_1['speed'], df_1['predicted_rms_x'], color='green')
ax[0,0].scatter(df_1['speed'], df_1['predicted_rms_x'] - df_1['pred_dev_rms_x'], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,0].scatter(df_1['speed'], df_1['predicted_rms_x'] + df_1['pred_dev_rms_x'], color='orange', lw=0.1)
ax[0,0].scatter(df_1['speed'], df_1['AnomalyX'], color='red', lw=0.5)
ax[0,0].set_xlabel('Speed')
ax[0,0].set_ylabel('Vibration - X')
ax[0,0].legend()
ax[0,0].set_title('pump ' + df_i.index.levels[0].values[count])
ax[0,1].scatter(df_1['speed'], df_1['rms_y'])
ax[0,1].scatter(df_1['speed'], df_1['predicted_rms_y'], color='green')
ax[0,1].scatter(df_1['speed'], df_1['predicted_rms_y'] - df_1['pred_dev_rms_y'], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,1].scatter(df_1['speed'], df_1['predicted_rms_y'] + df_1['pred_dev_rms_y'], color='orange', lw=0.1)
ax[0,1].scatter(df_1['speed'], df_1['AnomalyY'], color='red', lw=0.5)
ax[0,1].set_xlabel('Speed')
ax[0,1].set_ylabel('Vibration - Y')
ax[0,1].legend()
ax[0,1].set_title('pump ' + df_i.index.levels[0].values[count])
ax[1,0].scatter(df_1['speed'], df_1['rms_z'])
ax[1,0].scatter(df_1['speed'], df_1['predicted_rms_z'], color='green')
ax[1,0].scatter(df_1['speed'], df_1['predicted_rms_z'] - df_1['pred_dev_rms_z'], color='orange', lw=0.1, label='limit - p<0.05')
ax[1,0].scatter(df_1['speed'], df_1['predicted_rms_z'] + df_1['pred_dev_rms_z'], color='orange', lw=0.1)
ax[1,0].scatter(df_1['speed'], df_1['AnomalyZ'], color='red', lw=0.5)
ax[1,0].set_xlabel('Speed')
ax[1,0].set_ylabel('Vibration - Z')
ax[1,0].legend()
ax[1,0].set_title('pump ' + df_i.index.levels[0].values[count])
#ax[1,1].scatter(df_1['speed'], df_1['rms_y'])
#ax[1,1].set_xlabel('Speed')
#ax[1,1].set_ylabel('Vibration - Y')
#ax[1,1].legend()
ax[1,1].set_title('Hic sunt leones !')
#plt.fill_between(df_1['Ap'], - df_1['pred_dev_Vx'], df_1['pred_dev_Vx'], alpha=0.2)
'''
df_1i = df_1[24000:28000]
cnt = 0
fig, ax = plt.subplots(1, 1, figsize=(15, 6), squeeze=False)
ax[cnt,0].scatter(df_1i.index,df_1i['rms_x'], lw=0.4, color='green', label='vibration')
ax[cnt,0].scatter(df_1i.index, df_1i['AnomalyX'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[cnt,0].legend()
ax[cnt,0].set_title('pump (X)' + df_i.index.levels[0].values[0])
ax[cnt,0].tick_params(axis = "x", which = "both", bottom = False, top = False)
ax[cnt,0].set_xticklabels([])
'''
'''
df_1i = df_1[24000:28000]
cnt = 0
fig, ax = plt.subplots(1, 1, figsize=(15, 6), squeeze=False)
ax[cnt,0].scatter(df_1i.index,df_1i['rms_y'], lw=0.4, color='green', label='vibration')
ax[cnt,0].scatter(df_1i.index, df_1i['AnomalyY'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[cnt,0].legend()
ax[cnt,0].set_title('pump (Y)' + df_i.index.levels[0].values[0])
ax[cnt,0].tick_params(axis = "x", which = "both", bottom = False, top = False)
ax[cnt,0].set_xticklabels([])
'''
# OOM for ~170000 data points
'''
df_1i = df_1[0:4000]
fig, ax = plt.subplots(4, 1, figsize=(15, 12), squeeze=False)
cnt = 0
ax[0,0].scatter(df_1i.index,df_1i['speed'], lw=0.4, color='orange', zorder=5, label='speed')
cnt += 1
ax[cnt,0].scatter(df_1i.index,df_1i['rms_x'], lw=0.4, color='green', label='vibration')
ax[cnt,0].scatter(df_1i.index, df_1i['AnomalyX'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[cnt,0].legend()
ax[cnt,0].set_title('pump (X)' + df_i.index.levels[0].values[0])
cnt += 1
ax[cnt,0].scatter(df_1i.index,df_1i['rms_y'], lw=0.4, color='green', label='vibration')
ax[cnt,0].scatter(df_1i.index, df_1i['AnomalyY'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[cnt,0].legend()
ax[cnt,0].set_title('pump (Y)' + df_i.index.levels[0].values[0])
cnt += 1
ax[cnt,0].scatter(df_1i.index,df_1i['rms_z'], lw=0.4, color='green', label='vibration')
ax[cnt,0].scatter(df_1i.index, df_1i['AnomalyZ'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[cnt,0].legend()
ax[cnt,0].set_title('pump (Z)' + df_i.index.levels[0].values[0])
'''
df_i.loc['04714B601096'].index
```
#### Does it make sense to have both accel values as features?
```
fig, ax = plt.subplots(1, 1, figsize=(6, 5), squeeze=False)
ax[0,0].scatter(df_i['speed'].values, df_i['power'].values)
ax[0,0].set_xlabel('accel speed')
ax[0,0].set_ylabel('accel power')
#ax[0,1].plot(df_i.loc['04714B601096'].index, df_i['speed'].values, color='green')
#ax[0,1].plot(df_i.loc['04714B601096'].index, df_i['power'].values*100 + 400, color='orange')
#df_i[['As','Ap']].plot(figsize=(20,20))
# hack - do not store model in db2
old_model_store = db.model_store
db.model_store = FileModelStore()
# Now run the anomaly functions as if they were executed in a pipeline
EngineLogging.configure_console_logging(logging.DEBUG)
jobsettings = { 'db': db,
'_db_schema': 'public', 'save_trace_to_file' : True}
spsib = BayesRidgeRegressor(['speed'], ['power'])
et = spsib._build_entity_type(columns = [Column('speed',Float())], **jobsettings)
spsib._entity_type = et
df_i = spsib.execute(df=df_i)
db.model_store = old_model_store
df_i.describe()
df_2 = df_i[0:10000]
df_2 = df_2[df_2['speed'] > 400]
plots = 1
fig, ax = plt.subplots(plots, 1, figsize=(20,10), squeeze=False)
cnt = 0
ax[cnt,0].plot(df_2.unstack(level=0).index, df_2['speed'],linewidth=0.5,color='blue',label='speed')
ax[cnt,0].plot(df_2.unstack(level=0).index, df_2['power'],linewidth=0.5,color='green',label='power')
ax[cnt,0].plot(df_2.unstack(level=0).index, df_2['predicted_power'],linewidth=0.5,color='red',label='predicted power')
ax[cnt,0].fill_between(df_2.unstack(level=0).index, df_2['predicted_power'] - df_2['stddev_power'],
df_2['predicted_power'] + df_2['stddev_power'], color="pink", alpha=0.3, label="predict stddev")
ax[cnt,0].legend(bbox_to_anchor=(1.1, 1.05))
ax[cnt,0].set_ylabel('power',fontsize=12,weight="bold")
ax[cnt,0].set_xlabel('time',fontsize=12,weight="bold")
cnt = 1
def load_data_short(filename):
# load data
df_input = pd.read_csv(filename, parse_dates=['RCV_TIMESTAMP_UTC'], comment='#')
#df_input = df_input.asfreq('H')
df_input = df_input.sort_values(by='RCV_TIMESTAMP_UTC').\
rename(columns={'RCV_TIMESTAMP_UTC':'timestamp', 'DEVICEID': 'entity'}).\
drop(columns=['DEVICETYPE', 'FORMAT','UPDATED_UTC','LOGICALINTERFACE_ID','DEVICE_ID']).\
drop(columns=['RMS_X','RMS_Y','RMS_Z'])
#df_input = df_input[df_input['VERSION'] != 0].drop(columns=['VERSION'])
return df_input
def prep_data_short(df_input):
entity = df_input['entity'].values[0]
list_ac_vx = []
cnt = 0
for idx,row in df_input[['timestamp','VIBRATIONS_XAXIS','ACCEL_SPEED']].iterrows():
if isinstance(row['timestamp'], str):
ts = datetime.datetime.strptime(row['timestamp'], '%Y-%m-%d-%H.%M.%S.%f')
else:
ts = row['timestamp']
rvibs = row['VIBRATIONS_XAXIS']
racc = row['ACCEL_SPEED']
if isinstance(racc, str):
racc = eval(racc)
if not isinstance(racc, list) and math.isnan(racc):
continue
if isinstance(rvibs, str):
list_ac = []
for ac in racc:
#print(ac)
list_ac.append(eval(ac))
list_ac.append(eval(ac))
list_ac.append(eval(ac))
#print (list_ac)
list_vx = []
for vx in eval(rvibs):
list_vx.append(vx)
#print (list_vx)
#print(list_ac, list_vx)
cnt2 = 0
for p in zip(list_ac, list_vx):
#print(ts + datetime.timedelta(seconds = cnt2 * 10), p[0],p[1])
list_ac_vx.append([ts + datetime.timedelta(seconds = cnt2 * 10), p[0],p[1]])
cnt2 += 1
cnt += 1
df_clean = pd.DataFrame(list_ac_vx, columns=['timestamp','Ap','Vx'])
df_clean['Ap'] = df_clean['Ap']/1000
df_clean['entity'] = entity
return df_clean.set_index(['entity','timestamp'])
#df_clean2 = prep_data(load_data('./IOT_PUMP_DE_GEN5_202012151526.csv'))
df_clean2 = prep_data_short(load_data_short('./04714B603408.csv'))
#df_clean2.describe()
df_clean2.head(2)
df_clean2.index.levels[0].values[0]
df_clean2.describe()
plt.figure(figsize=(16, 6))
g = sns.displot(df_clean2[['Ap','Vx']], x='Ap', y='Vx', kind="kde", rug=True)
g.set_titles('KDE for ' + df_clean2.index.levels[0].values[0])
df_i2 = df_clean2.copy()
%%time
print('Train and evaluate model for ' + df_i2.index.levels[0].values[0])
# Now run the anomaly functions as if they were executed in a pipeline
EngineLogging.configure_console_logging(logging.DEBUG)
jobsettings = { 'db': db,
'_db_schema': 'public', 'save_trace_to_file' : True}
spsi2 = VIAnomalyScore(['Ap'], ['Vx'])
et = spsi2._build_entity_type(columns = [Column('MinTemp',Float())], **jobsettings)
spsi2._entity_type = et
spsi2.epochs = 300 # 1500 takes ages
df_i2 = spsi2.execute(df=df_i2)
EngineLogging.configure_console_logging(logging.INFO)
df_i2.describe()
entity = df_i2.index.levels[0].values[0]
df_1 = df_i2.loc[entity].copy()
arr1 = np.where(df_1['Vx'] > df_1['predicted_Vx'] + df_1['pred_dev_Vx'], df_1['Vx'], 0) + \
np.where(df_1['Vx'] < df_1['predicted_Vx'] - df_1['pred_dev_Vx'], df_1['Vx'], 0)
arr1[arr1 == 0] = np.nan
df_1['Anomaly'] = arr1
fig, ax = plt.subplots(1, 2, figsize=(20, 6), squeeze=False)
ax[0,0].scatter(df_1['Ap'], df_1['Vx'])
ax[0,0].scatter(df_1['Ap'], df_1['predicted_Vx'], color='green')
ax[0,0].scatter(df_1['Ap'], df_1['predicted_Vx'] - df_1['pred_dev_Vx'], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,0].scatter(df_1['Ap'], df_1['predicted_Vx'] + df_1['pred_dev_Vx'], color='orange', lw=0.1)
ax[0,0].scatter(df_1['Ap'], df_1['Anomaly'], color='red', lw=2)
ax[0,0].set_xlabel('Speed')
ax[0,0].set_ylabel('Vibration')
ax[0,0].legend()
ax[0,0].set_title('pump ' + entity)
ax[0,1].scatter(df_i2['Ap'], df_i2['Vx'])
ax[0,1].set_xlabel('Speed')
ax[0,1].set_ylabel('Vibration')
ax[0,1].legend()
ax[0,1].set_title('pump ' + entity + ' - plain scatterplot')
df_1.loc[df_1['Ap'] < 2.0, 'Anomaly'] = np.nan
df_1.loc[np.abs(df_1['Vx']) < 0.7, 'Anomaly'] = np.nan
fig, ax = plt.subplots(1, 1, figsize=(15, 6), squeeze=False)
#ax[0,0].plot(df_1['max'] + 2.0, lw=0.4, color='blue', label='smoothed max vib + 1')
ax[0,0].scatter(df_1.index,df_1['Vx'], lw=0.4, color='green', label='vibration')
ax[0,0].scatter(df_1.index,df_1['Ap']/3, lw=0.4, color='orange', zorder=5, label='accel speed')
ax[0,0].scatter(df_1.index, df_1['Anomaly'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[0,0].legend()
ax[0,0].set_title('pump ' + df_i2.index.levels[0].values[0])
df_clean3 = prep_data_short(load_data_short('./04714B6036B1.csv'))
df_clean3.head(2)
df_clean3.index.levels[0].values[0]
plt.figure(figsize=(16, 6))
g = sns.displot(df_clean3[['Ap','Vx']], x='Ap', y='Vx', kind="kde", rug=True)
g.set_titles('KDE for ' + df_clean3.index.levels[0].values[0])
df_i3 = df_clean3.copy()
%%time
print('Train and evaluate model for ' + df_i3.index.levels[0].values[0])
# Now run the anomaly functions as if they were executed in a pipeline
EngineLogging.configure_console_logging(logging.DEBUG)
jobsettings = { 'db': db,
'_db_schema': 'public', 'save_trace_to_file' : True}
spsi3 = VIAnomalyScore(['Ap'], ['Vx'])
et = spsi3._build_entity_type(columns = [Column('MinTemp',Float())], **jobsettings)
spsi3._entity_type = et
df_i3 = spsi3.execute(df=df_i3)
EngineLogging.configure_console_logging(logging.INFO)
df_i3.describe()
entity = df_i3.index.levels[0].values[0]
df_1 = df_i3.loc[entity].copy()
arr1 = np.where(df_1['Vx'] > df_1['predicted_Vx'] + df_1['pred_dev_Vx'], df_1['Vx'], 0) + \
np.where(df_1['Vx'] < df_1['predicted_Vx'] - df_1['pred_dev_Vx'], df_1['Vx'], 0)
arr1[arr1 == 0] = np.nan
df_1['Anomaly'] = arr1
fig, ax = plt.subplots(1, 2, figsize=(20, 6), squeeze=False)
ax[0,0].scatter(df_1['Ap'], df_1['Vx'])
ax[0,0].scatter(df_1['Ap'], df_1['predicted_Vx'], color='green')
ax[0,0].scatter(df_1['Ap'], df_1['predicted_Vx'] - df_1['pred_dev_Vx'], color='orange', lw=0.1, label='limit - p<0.05')
ax[0,0].scatter(df_1['Ap'], df_1['predicted_Vx'] + df_1['pred_dev_Vx'], color='orange', lw=0.1)
ax[0,0].scatter(df_1['Ap'], df_1['Anomaly'], color='red', lw=2)
ax[0,0].set_xlabel('Accel Speed')
ax[0,0].set_ylabel('Vibration')
ax[0,0].legend()
ax[0,0].set_title('pump ' + entity)
ax[0,1].scatter(df_i3['Ap'], df_i3['Vx'])
ax[0,1].set_xlabel('Accel Speed')
ax[0,1].set_ylabel('Vibration')
ax[0,1].legend()
ax[0,1].set_title('pump ' + entity + ' - plain scatterplot')
df_1.loc[df_1['Ap'] < 0.5, 'Anomaly'] = np.nan
df_1.loc[np.abs(df_1['Vx']) < 0.5, 'Anomaly'] = np.nan
fig, ax = plt.subplots(1, 1, figsize=(15, 6), squeeze=False)
#ax[0,0].plot(df_1['max'] + 2.0, lw=0.4, color='blue', label='smoothed max vib + 1')
ax[0,0].scatter(df_1.index,df_1['Vx'], lw=0.4, color='green', label='vibration')
ax[0,0].scatter(df_1.index,df_1['Ap']/3, lw=0.4, color='orange', zorder=5, label='accel speed')
ax[0,0].scatter(df_1.index, df_1['Anomaly'], lw=0.4, color='red', zorder=10, label='excessive vibration')
ax[0,0].legend()
ax[0,0].set_title('pump ' + df_i3.index.levels[0].values[0])
```
#### Approximate with Gaussian
Major contribution: a Gaussian along the x/y coordinates; start with a diagonal covariance matrix.
Likelihood:
$e^{-\left(\frac{(x - \mu_0)^2}{2\sigma_0^2} + \frac{(y - \mu_1)^2}{2\sigma_1^2}\right)}$
```
# theta[0] = mu (location), theta[1] = sigma (intrinsic scatter);
# F are the observations and e their measurement errors
import emcee

def log_likelihood(theta, F, e):
    return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))
                         + (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))
def log_prior(theta):
# sigma needs to be positive.
if theta[1] <= 0:
return -np.inf
else:
return 0
def log_posterior(theta, F, e):
return log_prior(theta) + log_likelihood(theta, F, e)
# same setup as above:
ndim, nwalkers = 2, 50
nsteps, nburn = 2000, 1000
starting_guesses = np.random.rand(nwalkers, ndim)
starting_guesses[:, 0] *= 2000 # start mu between 0 and 2000
starting_guesses[:, 1] *= 20 # start sigma between 0 and 20
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].reshape(-1, 2)
```
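The sampler cell assumes `emcee` is installed and that `F` and `e` (the observations and their errors) were defined in an earlier cell. A self-contained sanity check of this Gaussian-with-intrinsic-scatter log-likelihood, a sketch on synthetic data rather than the notebook's actual `F`/`e`:

```
import numpy as np

def log_likelihood(theta, F, e):
    # theta[0] = mu, theta[1] = intrinsic scatter sigma
    return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))
                         + (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))

rng = np.random.default_rng(0)
F = 1000 + 5 * rng.standard_normal(500)  # synthetic data around mu = 1000
e = np.full(500, 5.0)                    # constant measurement error

mus = np.linspace(900, 1100, 201)
best_mu = mus[np.argmax([log_likelihood([m, 1.0], F, e) for m in mus])]
# best_mu lands close to the sample mean of F
```

The maximum sits at the error-weighted mean, which for constant `e` is simply the sample mean.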
# TensorFlow 2.0 Tutorial: Overfitting and Underfitting
```
from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
plt.plot(train_data[0])
```
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters it contains.
Deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization capacity, it cannot learn the mapping as easily. To minimize its loss, it must learn compressed representations with more predictive power. At the same time, if the model is too small, it will struggle to fit the training data. There is a balance between "too much capacity" and "not enough capacity".
To find an appropriate model size, it is best to start with relatively few layers and parameters, then increase the size of the layers or add new layers until you see diminishing returns in the validation loss.
We will build a simple model for the movie-review classification network using Dense layers as a baseline, then create smaller and larger versions and compare them.
## 1. Create a baseline model
```
import tensorflow.keras.layers as layers
baseline_model = keras.Sequential(
[
layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
layers.Dense(16, activation='relu'),
layers.Dense(1, activation='sigmoid')
]
)
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data, train_labels,
epochs=20, batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
```
## 2. Create a smaller model
```
small_model = keras.Sequential(
[
layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),
layers.Dense(4, activation='relu'),
layers.Dense(1, activation='sigmoid')
]
)
small_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
small_model.summary()
small_history = small_model.fit(train_data, train_labels,
epochs=20, batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
```
## 3. Create a bigger model
```
big_model = keras.Sequential(
[
layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),
layers.Dense(512, activation='relu'),
layers.Dense(1, activation='sigmoid')
]
)
big_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
big_model.summary()
big_history = big_model.fit(train_data, train_labels,
epochs=20, batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('small', small_history),
('big', big_history)])
```
Note that the larger network starts overfitting almost immediately, after just one epoch, and overfits far more severely. The more capacity the network has, the faster it can model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large gap between training and validation loss).
## 4. Add L2 regularization
```
l2_model = keras.Sequential(
[
layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation='relu', input_shape=(NUM_WORDS,)),
layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation='relu'),
layers.Dense(1, activation='sigmoid')
]
)
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model.summary()
l2_history = l2_model.fit(train_data, train_labels,
epochs=20, batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('l2', l2_history)])
```
## 5. Add dropout
```
dpt_model = keras.Sequential(
[
layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),
layers.Dropout(0.5),
layers.Dense(16, activation='relu'),
layers.Dropout(0.5),
layers.Dense(1, activation='sigmoid')
]
)
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
dpt_model.summary()
dpt_history = dpt_model.fit(train_data, train_labels,
epochs=20, batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_history)])
```
The most common ways to prevent overfitting in neural networks:
- Get more training data.
- Reduce the capacity of the network.
- Add weight regularization.
- Add dropout.
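Of these options, dropout is the one with a mechanism worth spelling out: at training time activations are randomly zeroed and the survivors rescaled so the expected activation is unchanged ("inverted dropout"). A minimal numpy sketch, not Keras' actual implementation:

```
import numpy as np

def dropout(x, rate, rng, training=True):
    if not training or rate == 0.0:
        return x
    keep = rng.random(x.shape) >= rate
    # scale kept units by 1/(1-rate) so the expected output equals x
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones((4, 16))
y = dropout(x, rate=0.5, rng=rng)
# surviving entries become 2.0, dropped ones 0.0; eval mode is a no-op
```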
```
!nvidia-smi
!pip --quiet install transformers
!pip --quiet install tokenizers
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/'
MODEL_BASE_PATH = COLAB_BASE_PATH + 'Models/Files/198-roBERTa_base/'
import os
os.mkdir(MODEL_BASE_PATH)
```
## Dependencies
```
import json, warnings, shutil
from scripts_step_lr_schedulers import *
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
# Load data
```
# Unzip files
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_1.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_2.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_3.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_4.tar.gz'
!tar -xf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/complete_64_clean/fold_5.tar.gz'
database_base_path = COLAB_BASE_PATH + 'Data/complete_64_clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training samples: {len(k_fold)}')
display(k_fold.head())
```
# Model parameters
```
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
config = {
"MAX_LEN": 64,
"BATCH_SIZE": 32,
"EPOCHS": 7,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 2,
"N_FOLDS": 5,
"question_size": 4,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
## Learning rate schedule
```
lr_min = 1e-6
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
train_size = len(k_fold[k_fold['fold_1'] == 'train'])
step_size = train_size // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
warmup_steps = step_size * 1
decay = .999
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
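`exponential_schedule_with_warmup` comes from the local `scripts_step_lr_schedulers` module, which is not shown here. Judging by the call signature, it is presumably a linear warmup from `lr_start` to `lr_max` followed by exponential decay floored at `lr_min` — a hypothetical pure-Python sketch, not the module's actual code:

```
def exponential_schedule_with_warmup(step, warmup_steps, lr_start, lr_max, lr_min, decay):
    if step < warmup_steps:
        # linear warmup from lr_start to lr_max
        return lr_start + (lr_max - lr_start) * (step / warmup_steps)
    # exponential decay from lr_max, floored at lr_min
    return max(lr_min, lr_max * decay ** (step - warmup_steps))
```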
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h11 = hidden_states[-2]
x = layers.Dropout(.1)(h11)
x_start = layers.Dense(1)(x)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Dense(1)(x)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
return model
```
# Train
```
AUTO = tf.data.experimental.AUTOTUNE
k_fold_best = k_fold.copy()
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay))
model.compile(optimizer, loss={'y_start': losses.CategoricalCrossentropy(label_smoothing=0.2),
'y_end': losses.CategoricalCrossentropy(label_smoothing=0.2)})
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=False, verbose=1)
checkpoint = ModelCheckpoint((MODEL_BASE_PATH + model_path), monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True, verbose=1)
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
validation_steps=valid_step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions
model.load_weights(MODEL_BASE_PATH + model_path)
predict_eval_df(k_fold_best, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
```
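The optimizer above wraps an `exponential_schedule_with_warmup` helper that is defined elsewhere in the notebook. A minimal plain-Python sketch of such a schedule, where the defaults (warmup length, learning-rate bounds, decay factor) are assumptions, not the notebook's actual values:

```python
# Illustrative sketch only -- the notebook defines its own version of this
# helper, so warmup_steps, the lr bounds and decay below are assumptions.
def exponential_schedule_with_warmup(step, warmup_steps=300,
                                     lr_start=1e-6, lr_max=3e-5,
                                     lr_min=1e-6, decay=0.997):
    """Linear warmup from lr_start to lr_max, then exponential decay toward lr_min."""
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    return lr_min + (lr_max - lr_min) * decay ** (step - warmup_steps)
```

The notebook passes `tf.cast(optimizer.iterations, tf.float32)` as `step`, so the real helper operates on tensors inside the training graph; this version only illustrates the shape of the schedule.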
# Model loss graph
```
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
```
# Model evaluation
```
display(evaluate_model_kfold(k_fold_best, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
#@title
display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15))
```
```
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import tensorflow as tf
from keras import backend as K
K.set_image_dim_ordering('th')
import numpy as np
import pandas as pd
import cv2
import zarr
import glob
import matplotlib.pyplot as plt
%matplotlib inline
from keras.models import Sequential, load_model, Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Input
from keras.optimizers import Adam, SGD, RMSprop, Nadam
from keras.layers.convolutional import Convolution3D, MaxPooling3D, UpSampling3D
from keras.layers.advanced_activations import PReLU
from keras.layers import BatchNormalization, GlobalAveragePooling3D, GlobalMaxPooling3D
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers.core import SpatialDropout3D
from keras.utils.np_utils import to_categorical
from preds2d_utils import *
from preds3d_models_exp import *
def experimental_cnn3d(width):
optimizer = Adam(lr = 5e-5)
inputs = Input(shape=(1, 136, 168, 168))
conv1 = Convolution3D(32, 3, 3, 3, activation = 'relu', border_mode='same')(inputs)
conv1 = BatchNormalization(axis = 1)(conv1)
conv1 = Convolution3D(32, 3, 3, 3, activation = 'relu', border_mode='same')(conv1)
conv1 = BatchNormalization(axis = 1)(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), border_mode='same')(conv1)
conv2 = Convolution3D(64, 3, 3, 3, activation = 'relu', border_mode='same')(pool1)
conv2 = BatchNormalization(axis = 1)(conv2)
conv2 = Convolution3D(64, 3, 3, 3, activation = 'relu', border_mode='same')(conv2)
conv2 = BatchNormalization(axis = 1)(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), border_mode='same')(conv2)
conv3 = Convolution3D(128, 3, 3, 3, activation = 'relu', border_mode='same')(pool2)
conv3 = BatchNormalization(axis = 1)(conv3)
pool3 = MaxPooling3D(pool_size=(8, 8, 8), border_mode='same')(conv3)
output = Flatten(name='flatten')(pool3)
output = Dropout(0.2)(output)
output = Dense(128)(output)
output = PReLU()(output)
output = Dropout(0.3)(output)
output = Dense(2, activation='softmax', name = 'predictions')(output)
model3d = Model(inputs, output)
model3d.compile(loss='categorical_crossentropy', optimizer = optimizer, metrics = ['accuracy'])
return model3d
def classifier(kernel_size, pool_size):
model = Sequential()
model.add(Convolution3D(16, kernel_size[0], kernel_size[1], kernel_size[2],
border_mode='valid',
input_shape=(1, 136, 168, 168)))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=pool_size))
model.add(Convolution3D(32, kernel_size[0], kernel_size[1], kernel_size[2]))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=pool_size))
model.add(Convolution3D(64, kernel_size[0], kernel_size[1], kernel_size[2]))
model.add(Activation('relu'))
model.add(MaxPooling3D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer = Adam(lr=5e-5), metrics = ['accuracy'])
return model
start_train = 0
end_train = 1398
start_val = 1400
end_val = 1595
epochs = 20
width = 16
ks1 = (2, 2, 2)
ks2 = ks3 = ks1
cnn3d_genfit('DSB_class2d_exp', classifier(ks1, ks1), epochs, start_train, end_train, start_val, end_val,
end_train - start_train,
end_val - start_val)
```
### 1595 examples, 1398 original
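`cnn3d_genfit` is imported from the helper modules above and not shown in this notebook. As a hypothetical sketch (the batching logic and shapes are assumptions, not the actual implementation), the kind of batch generator such a generator-fit helper might consume could look like:

```python
import numpy as np

# Hypothetical batch generator -- not the actual cnn3d_genfit implementation.
def volume_batch_generator(X, y, batch_size=4, shuffle=True, seed=0):
    """Yield (volumes, one-hot labels) batches forever, reshuffling each epoch."""
    rng = np.random.default_rng(seed)
    n = len(X)
    while True:
        idx = rng.permutation(n) if shuffle else np.arange(n)
        for start in range(0, n, batch_size):
            sel = idx[start:start + batch_size]
            yield X[sel], y[sel]
```

A generator like this could be handed to the generator-based `fit` API of the Keras 1.x version used here, with `samples_per_epoch` matching the train/validation index ranges above.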
# Plotting and Visualization
---
Created on 2019-05-22
Updated on 2019-05-22
Author: Jiacheng
Github: https://github.com/Jiachengciel/Data_Analysis
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook
```
---
## 1. A Brief matplotlib API Primer
```
data = np.arange(10)
data
plt.plot(data)
```
* ### Figures and Subplots
```
fig = plt.figure()
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax3.plot(np.random.randn(50).cumsum(), 'k--')
# alpha sets the fill opacity of the plot (0-1)
ax1.hist(np.random.randn(100), bins=20, color='k', alpha=0.3)
ax2.scatter(np.arange(30), np.arange(30)+3*np.random.randn(30))
fig
fig, axes = plt.subplots(2, 3)
axes
```
* ### Adjusting the Spacing Around Subplots
```
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=None)
fig
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for i in range(2):
for j in range(2):
axes[i, j].hist(np.random.randn(500), bins=50, color='k', alpha=0.5)
plt.subplots_adjust(wspace=0, hspace=0)
# shrink the spacing to zero
```
* ### Colors, Markers, and Line Styles
```
plt.plot(np.random.randn(30).cumsum(), 'k--')
# with point markers
plt.plot(np.random.randn(30).cumsum(), 'ko--')
plt.plot(np.random.randn(30).cumsum(), color='k', linestyle='dashed', marker='o')
data = np.random.randn(30).cumsum()
plt.plot(data, 'k--', label='Default')
plt.plot(data, 'k-', drawstyle='steps-post', label='steps-post')
plt.legend(loc='best')  # add the legend
```
* ### Setting Titles, Axis Labels, Ticks, and Tick Labels
```
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.random.randn(1000).cumsum())
# set the x-axis ticks
ticks = ax.set_xticks([0, 250, 500, 750, 1000])
# set the tick labels and rotate them 30 degrees
labels = ax.set_xticklabels(['one', 'two', 'three', 'four', 'five'],
                            rotation=30, fontsize='small')
# set the plot title
ax.set_title('My matplotlib plot')
ax.set_xlabel('Stages')
fig
```
* ### Adding Legends
```
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.random.randn(1000).cumsum(), 'k', label='one')
ax.plot(np.random.randn(1000).cumsum(), 'g--', label='two')
ax.plot(np.random.randn(1000).cumsum(), 'r.', label='three')
ax.legend(loc='best')
```
* ### Annotations and Drawing on a Subplot
```
from datetime import datetime
fig = plt.figure()
ax = fig.add_subplot(111)
data = pd.read_csv('../examples/spx.csv', index_col=0, parse_dates=True)
data
spx = data['SPX']
spx.plot(ax=ax, style='k--')
crisis_data = [
(datetime(2007, 10, 11), 'Peak of bull market'),
(datetime(2008, 3, 12), 'Bear Stearns Fails'),
(datetime(2008, 9, 15), 'Lehman Bankruptcy')
]
for date, label in crisis_data:
# ax.annotate draws a label at the specified x/y coordinates
ax.annotate(label, xy=(date, spx.asof(date)+75),
xytext=(date, spx.asof(date)+225),
arrowprops=dict(facecolor='black', headwidth=4, width=2,
headlength=4),
horizontalalignment='left', verticalalignment='top')
# manually set the start and end boundaries
ax.set_xlim(['1/1/2007', '1/1/2011'])
ax.set_ylim([600, 1800])
ax.set_title('Important dates in the 2008-2009 financial crisis')
fig
```
* ### Drawing Shapes
```
fig = plt.figure()
ax = fig.add_subplot(111)
rect = plt.Rectangle((0.2, 0.75), 0.4, 0.15, color='k', alpha=0.3)
circ = plt.Circle((0.7, 0.2), 0.15, color='b', alpha=0.3)
pgon = plt.Polygon([[0.15, 0.15], [0.35, 0.4], [0.2, 0.6]], color='g', alpha=0.5)
ax.add_patch(rect)
ax.add_patch(circ)
ax.add_patch(pgon)
```
* ### Saving Plots to File
#### dpi (controls the dots-per-inch resolution)
#### bbox_inches (trims the whitespace around the current figure)
```
plt.savefig('figpath.png', dpi=400, bbox_inches='tight')
```
* ### matplotlib Configuration
#### Set the global default figure size to 10×10
```
plt.rc('figure', figsize=(10, 10))
```
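As a further (hypothetical) example of `plt.rc`, related options can be grouped into a dict and applied in a single call:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Group related rc settings into a dict and apply them in one call
font_options = {'family': 'monospace', 'weight': 'bold', 'size': 9}
plt.rc('font', **font_options)
plt.rc('figure', figsize=(10, 10))
```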
---
## 2. Plotting with pandas and seaborn
* ### Line Plots
```
fig = plt.figure()
ax = fig.add_subplot(111)
s = pd.Series(np.random.randn(10).cumsum(), index=np.arange(0, 100, 10))
plt.plot(s)
# draw one line per column
columns=['A', 'B', 'C', 'D']
df = pd.DataFrame(np.random.randn(10, 4).cumsum(0),
columns=columns,
index=np.arange(0, 100, 10))
df.plot()
# plt.legend(loc='best')
```
* ### Bar Plots
```
fig, axes = plt.subplots(2,1)
data = pd.Series(np.random.rand(16), index=list('abcdefghijklmnop'))
data.plot.bar(ax=axes[0], color='k', alpha=0.7)
data.plot.barh(ax=axes[1], color='b', alpha=0.7)
df = pd.DataFrame(np.random.rand(6, 4),
index=['one', 'two', 'three', 'four', 'five', 'six'],
columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus'))
df
df.plot.bar()
# stacked bar plot
df.plot.barh(stacked=True, alpha=0.5)
tips = pd.read_csv('../examples/tips.csv')
party_counts = pd.crosstab(tips['day'], tips['size'])
party_counts
party_counts = party_counts.loc[:, 2:5]
party_counts
party_counts.plot.bar()
# divide each value by the sum of its row
party_pcts = party_counts.div(party_counts.sum(1), axis=0)
party_pcts
party_pcts.plot.bar()
import seaborn as sns
tips['tip_pct'] = tips['tip'] / (tips['total_bill'] - tips['tip'])
tips.head()
sns.barplot(x='tip_pct', y='day', data=tips, orient='h')
sns.barplot(x='tip_pct', y='day', hue='time', data=tips, orient='h')
```
* ### Histograms and Density Plots
```
tips['tip_pct'].plot.hist(bins=50)
tips['tip_pct'].plot.density()
comp1 = np.random.normal(0,1, size=200)
comp2 = np.random.normal(10,2, size=200)
values = pd.Series(np.concatenate([comp1, comp2]))
# split into 100 bins
sns.distplot(values, bins=100, color='k')
```
* ### Scatter Plots
```
macro = pd.read_csv('../examples/macrodata.csv')
macro[:10]
data=macro[['cpi', 'm1', 'tbilrate', 'unemp']]
trans_data = np.log(data).diff().dropna()
trans_data[-5:]
sns.regplot('m1', 'unemp', data=trans_data)
plt.title('Changes in log %s versus log %s' % ('m1', 'unemp'))
# scatter plot matrix
sns.pairplot(trans_data, diag_kind='kde', plot_kws={'alpha':0.2})
```
* ### Facet Grids and Categorical Data
```
sns.catplot(x='day', y='tip_pct', hue='time', col='smoker',
kind='bar', data=tips[tips.tip_pct < 1])
sns.catplot(x='day', y='tip_pct', row='time',
col='smoker',
kind='bar', data=tips[tips.tip_pct < 1])
# box plot
sns.catplot(x='tip_pct', y='day', kind='box',
data=tips[tips.tip_pct < 0.5])
```
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex client library: AutoML image segmentation model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex client library for Python to create image segmentation models and do online prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users).
### Dataset
The dataset used for this tutorial is the [TODO](https://). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
### Objective
In this tutorial, you create an AutoML image segmentation model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.
The steps performed include:
- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction.
- Undeploy the `Model`.
### Costs
This tutorial uses billable components of Google Cloud (GCP):
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Installation
Install the latest version of Vertex client library.
```
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel
Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations).
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of the resources you will create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
Click **Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click **Create**. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex client library
Import the Vertex client library into our Python environment.
```
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex constants
Setup up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### AutoML constants
Set constants unique to AutoML datasets and training:
- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
```
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_segmentation_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_segmentation_1.0.0.yaml"
```
# Tutorial
Now you are ready to start creating your own AutoML image segmentation model.
## Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Pipeline Service for training.
- Endpoint Service for deployment.
- Prediction Service for serving.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
```
## Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
### Create `Dataset` resource instance
Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:
1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
- `display_name`: The human-readable name you choose to give it.
- `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
- `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources.
- `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.
An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| cancelled() | Returns True/False on whether the operation was cancelled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
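As an illustration of the methods in the table (a sketch, not part of the client library), the blocking `result()` call could be replaced with an explicit polling loop; the `operation` object is assumed to expose `done()`, `cancelled()` and `result()` as described:

```python
import time

def wait_for_operation(operation, poll_secs=10, max_polls=360):
    """Poll a long-running operation instead of blocking on result()."""
    for _ in range(max_polls):
        if operation.done():           # finished, successfully or not
            if operation.cancelled():  # was it cancelled first?
                raise RuntimeError("operation was cancelled")
            return operation.result()  # raises if the operation failed
        time.sleep(poll_secs)
    raise TimeoutError("operation still running after polling window")
```

The tutorial itself simply blocks on `operation.result(timeout=...)`, which is usually all you need.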
```
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("unknown-" + TIMESTAMP, DATA_SCHEMA)
```
Now save the unique dataset identifier for the `Dataset` resource instance you created.
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### Data preparation
The Vertex `Dataset` resource for images has some requirements for your data:
- Images must be stored in a Cloud Storage bucket.
- Each image file must be in an image format (PNG, JPEG, BMP, ...).
- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
- The index file must be either CSV or JSONL.
#### JSONL
For image segmentation, the JSONL index file has the requirements:
- Each data item is a separate JSON object, on a separate line.
- The key/value pair `image_gcs_uri` is the Cloud Storage path to the image.
- The key/value pair `category_mask_uri` is the Cloud Storage path to the mask image in PNG format.
- The key/value pair `annotation_spec_colors` is a list mapping mask colors to labels.
- The key/value pair `display_name` is the label for the pixel color mask.
- The key/value pair `color` contains the RGB normalized pixel values (between 0 and 1) of the mask for the corresponding label.
{'image_gcs_uri': image, 'segmentation_annotations': {'category_mask_uri': mask_image, 'annotation_spec_colors': [{'display_name': label, 'color': {'red': value, 'green': value, 'blue': value}}, ...]}}
*Note*: The dictionary key fields may alternatively be in camelCase. For example, 'image_gcs_uri' can also be 'imageGcsUri'.
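A minimal sketch of building one such JSONL line in Python; the bucket paths and the "road" label are placeholders, not part of the tutorial's dataset:

```python
import json

# Placeholder paths and labels -- adjust to your own Cloud Storage layout.
item = {
    "image_gcs_uri": "gs://my-bucket/images/0001.png",
    "segmentation_annotations": {
        "category_mask_uri": "gs://my-bucket/masks/0001.png",
        "annotation_spec_colors": [
            {
                "display_name": "road",
                "color": {"red": 0.5, "green": 0.5, "blue": 0.5},
            }
        ],
    },
}
line = json.dumps(item)  # write one such line per image to the index file
```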
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the JSONL index file in Cloud Storage.
```
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/isg_data.jsonl"
```
#### Quick peek at your data
You will use a version of the Unknown dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
### Import data
Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:
- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
- `name`: The human readable name you give to the `Dataset` resource (e.g., unknown).
- `import_configs`: The import configuration.
- `import_configs`: A Python list containing a dictionary, with the key/value entries:
- `gcs_sources`: A list of URIs to the paths of the one or more index files.
- `import_schema_uri`: The schema identifying the labeling type.
The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
```
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
```
## Train the model
Now train an AutoML image segmentation model using your Vertex `Dataset` resource. To train the model, do the following steps:
1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.
### Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
1. Being reusable for subsequent training jobs.
2. Can be containerized and run as a batch job.
3. Can be distributed.
4. All the steps are associated with the same pipeline job for tracking progress.
Use this helper function `create_pipeline`, which takes the following parameters:
- `pipeline_name`: A human readable name for the pipeline job.
- `model_name`: A human readable name for the model.
- `dataset`: The Vertex fully qualified dataset identifier.
- `schema`: The dataset labeling (annotation) training schema.
- `task`: A dictionary describing the requirements for the training job.
The helper function calls the `Pipeline` client service's method `create_pipeline`, which takes the following parameters:
- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: the full specification for the pipeline training job.
Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:
- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The dataset labeling (annotation) training schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A human readable name for the model.
- `input_data_config`: The dataset specification.
- `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
- `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
```
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
```
### Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.
The minimal fields you need to specify are:
- `budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
- `model_type`: The type of deployed model:
- `CLOUD_HIGH_ACCURACY_1`: For deploying to Google Cloud and optimizing for accuracy.
- `CLOUD_LOW_LATENCY_1`: For deploying to Google Cloud and optimizing for latency (response time).
Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
```
PIPE_NAME = "unknown_pipe-" + TIMESTAMP
MODEL_NAME = "unknown_model-" + TIMESTAMP
task = json_format.ParseDict(
    {"budget_milli_node_hours": 2000, "model_type": "CLOUD_LOW_LATENCY_1"}, Value()
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
```
Now save the unique identifier of the training pipeline you created.
```
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
```
### Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:
- `name`: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
```
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
```
# Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting `start_time` from `end_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_upload.name`.
```
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
```
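The `start_time` and `end_time` fields are timestamps, so their difference is a duration. A self-contained sketch of the same arithmetic with plain `datetime` objects, using the times from the example pipeline output later in this notebook:

```python
from datetime import datetime

# Start and end times taken from a sample training pipeline response.
start_time = datetime(2021, 2, 26, 2, 23, 54)
end_time = datetime(2021, 2, 26, 6, 8, 6)

# Subtracting two datetimes yields a timedelta (the training duration).
training_time = end_time - start_time
print("Training Time:", training_time)
```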
## Model information
Now that your model is trained, you can get some information on your model.
## Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
### List evaluations for all slices
Use this helper function `list_model_evaluations`, which takes the following parameter:
- `name`: The Vertex fully qualified model identifier for the `Model` resource.
This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation -- you probably have only one -- print all the metric key names, and for one small set (`confidenceMetricsEntries`) print the values as well.
```
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confidenceMetricsEntries", metrics["confidenceMetricsEntries"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
```
## Deploy the `Model` resource
Now deploy the trained Vertex `Model` resource you created with AutoML. This requires two steps:
1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
### Create an `Endpoint` resource
Use this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:
- `display_name`: A human readable name for the `Endpoint` resource.
Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`.
```
ENDPOINT_NAME = "unknown_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
```
Now get the unique identifier for the `Endpoint` resource you created.
```
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
```
### Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
- Single Instance: The online prediction requests are processed on a single compute instance.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
- Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
- Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
 - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed (and to de-provision down to), and set the maximum (`MAX_NODES`) number of compute instances to provision up to, depending on load conditions.
The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
```
MIN_NODES = 1
MAX_NODES = 1
```
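The three scaling choices above differ only in how `min_replica_count` and `max_replica_count` are set. A hypothetical helper (the function name and its interface are ours, not part of the Vertex SDK) that maps each strategy to the pair of values:

```python
def replica_counts(strategy, nodes=1, max_nodes=None):
    """Return (min_replica_count, max_replica_count) for a scaling strategy.

    Hypothetical helper for illustration only; not a Vertex SDK call.
    """
    if strategy == "single":
        # One instance handles all online prediction requests.
        return 1, 1
    if strategy == "manual":
        # Fixed fleet: min and max are the same node count.
        return nodes, nodes
    if strategy == "auto":
        # Scale between a floor and a ceiling based on load.
        return nodes, max_nodes if max_nodes is not None else nodes
    raise ValueError("unknown strategy: " + strategy)

print(replica_counts("single"))
print(replica_counts("manual", nodes=3))
print(replica_counts("auto", nodes=1, max_nodes=5))
```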
### Deploy `Model` resource to the `Endpoint` resource
Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:
- `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
- `deploy_model_display_name`: A human readable name for the deployed model.
- `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:
- `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.
- `deployed_model`: The requirements specification for deploying the model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
- If only one model, then specify as **{ "0": 100 }**, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
- If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as **{ "0": percent, model_id: percent, ... }**, where `model_id` is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the `deployed_model` parameter. This parameter is specified as a Python dictionary with the minimum required fields:
- `model`: The Vertex fully qualified model identifier of the (upload) model to deploy.
- `display_name`: A human readable name for the deployed model.
- `disable_container_logging`: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
- `automatic_resources`: This specifies how many redundant compute instances (replicas) to provision. For this example, we set it to one (no replication).
#### Traffic Split
Let's now dive deeper into the `traffic_split` parameter. This parameter is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint, and then set what percentage of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- call it v1. You got a better model evaluation for v2, but you don't know for certain that it is really better until you deploy it to production. With a traffic split, you can deploy v2 to the same endpoint as v1, but route it only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
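Since the percents in `traffic_split` must add up to 100, a small sanity check before sending the deployment request can catch mistakes early. This validator is our own sketch, not an SDK call:

```python
def validate_traffic_split(traffic_split):
    """Raise ValueError unless the split percentages sum to exactly 100."""
    total = sum(traffic_split.values())
    if total != 100:
        raise ValueError(f"traffic_split sums to {total}, expected 100")
    return traffic_split

# A single model gets all the traffic...
validate_traffic_split({"0": 100})
# ...or a canary v2 gets 10% while an existing model keeps 90%
# (the model ID string here is hypothetical).
validate_traffic_split({"0": 10, "existing-model-id": 90})
```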
#### Response
The method returns a long running operation `response`. We will wait synchronously for the operation to complete by calling `response.result()`, which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
```
DEPLOYED_NAME = "unknown_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
```
## Make an online prediction request
Now make an online prediction request to your deployed model.
### Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
```
import json
test_items = !gsutil cat $IMPORT_FILE | head -n1
test_data = test_items[0].replace("'", '"')
test_data = json.loads(test_data)
try:
test_item = test_data["image_gcs_uri"]
test_label = test_data["segmentation_annotation"]["annotation_spec_colors"]
except KeyError:
test_item = test_data["imageGcsUri"]
test_label = test_data["segmentationAnnotation"]["annotationSpecColors"]
print(test_item, test_label)
```
### Make a prediction
Now you have a test item. Use this helper function `predict_item`, which takes the following parameters:
- `filename`: The Cloud Storage path to the test item.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed.
- `parameters_dict`: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's `predict` method with the following parameters:
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed.
- `instances`: A list of instances (encoded images) to predict.
- `parameters`: Additional filtering parameters for serving prediction results. *Note*, image segmentation models do not support additional parameters.
#### Request
Since in this example your test item is in a Cloud Storage bucket, you will open and read the contents of the image using `tf.io.gfile.GFile()`. To pass the test data to the prediction service, you encode the bytes into base64 -- this makes binary data safe from modification while it is transferred over the Internet.
The format of each instance is:
{ 'content': { 'b64': [base64_encoded_bytes] } }
Since the `predict()` method can take multiple items (instances), you send our single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the `predict()` method.
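A self-contained sketch of the base64 encoding described above, using dummy bytes in place of a real image so no Cloud Storage access is needed:

```python
import base64

# Dummy bytes standing in for an image read from Cloud Storage.
content = b"\x89PNG...fake image bytes"

# Base64-encode the bytes so they survive transport as text.
b64_str = base64.b64encode(content).decode("utf-8")

# Wrap in the instance structure shown above.
instance = {"content": {"b64": b64_str}}

# Round-trip check: decoding recovers the original bytes.
assert base64.b64decode(instance["content"]["b64"]) == content
print(instance)
```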
#### Response
The `response` object returns a list, where each element corresponds to an image in the request. You will see in the output for each prediction -- in our case there is just one:
- `confidenceMask`: Confidence level in the prediction.
- `categoryMask`: The predicted label per pixel.
```
import base64
import tensorflow as tf
def predict_item(filename, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
with tf.io.gfile.GFile(filename, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": base64.b64encode(content).decode("utf-8")}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(test_item, endpoint_id, None)
```
## Undeploy the `Model` resource
Now undeploy your `Model` resource from the serving `Endpoint` resource. Use this helper function `undeploy_model`, which takes the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` is deployed to.
This function calls the endpoint client service's method `undeploy_model`, with the following parameters:
- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.
- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.
Since this is the only deployed model on the `Endpoint` resource, you can simply leave `traffic_split` empty by setting it to {}.
```
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex SDK: AutoML natural language text classification model
## Installation
Install the latest (preview) version of the Vertex SDK.
```
! pip3 install -U google-cloud-aiplatform --user
```
Install the Google *cloud-storage* library as well.
```
! pip3 install google-cloud-storage
```
### Restart the Kernel
Once you've installed the Vertex SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU run-time
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your GCP project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a GCP project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebooks.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see [Region support for Vertex AI services](https://cloud.google.com/vertex-ai/docs/general/locations).
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your GCP account
**If you are using Google Cloud Notebooks**, your environment is already
authenticated. Skip this step.
*Note: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.*
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
```
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION gs://$BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al gs://$BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex SDK
Import the Vertex SDK into our Python environment.
```
import base64
import json
import os
import sys
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex AI constants
Set up the following constants for Vertex AI:
- `API_ENDPOINT`: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex AI location root path for dataset, model and endpoint resources.
```
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### AutoML constants
Next, set up constants unique to AutoML Text Classification datasets and training:
- Dataset Schemas: Tells the managed dataset service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., classification) to train the model for.
```
# Text Dataset type
TEXT_SCHEMA = "google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
IMPORT_SCHEMA_TEXT_CLASSIFICATION = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml"
# Text Training task
TRAINING_TEXT_CLASSIFICATION_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml"
```
## Clients Vertex AI
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
- Dataset Service for managed datasets.
- Model Service for managed models.
- Pipeline Service for training.
- Endpoint Service for deployment.
- Prediction Service for serving. *Note*: Prediction has a different service endpoint.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
! gsutil cat $IMPORT_FILE | head -n 10
```
*Example output*:
```
I went on a successful date with someone I felt sympathy and connection with.,affection
I was happy when my son got 90% marks in his examination,affection
I went to the gym this morning and did yoga.,exercise
We had a serious talk with some friends of ours who have been flaky lately. They understood and we had a good evening hanging out.,bonding
I went with grandchildren to butterfly display at Crohn Conservatory,affection
I meditated last night.,leisure
"I made a new recipe for peasant bread, and it came out spectacular!",achievement
I got gift from my elder brother which was really surprising me,affection
YESTERDAY MY MOMS BIRTHDAY SO I ENJOYED,enjoy_the_moment
Watching cupcake wars with my three teen children,affection
```
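The import file above is plain CSV with one `text,label` pair per line. Note the quoted line, where the text itself contains a comma: a real CSV parser is safer than a naive `str.split(",")`. A minimal sketch over two of the lines shown:

```python
import csv
import io

# Two lines copied from the sample import file output above.
sample = (
    "I went to the gym this morning and did yoga.,exercise\n"
    '"I made a new recipe for peasant bread, and it came out spectacular!",achievement\n'
)

# csv.reader handles the quoted field with the embedded comma correctly.
rows = list(csv.reader(io.StringIO(sample)))
for text, label in rows:
    print(label, "<-", text)
```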
## Create a dataset
### [projects.locations.datasets.create](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.datasets/create)
#### Request
```
DATA_SCHEMA = TEXT_SCHEMA
dataset = {
"display_name": "happiness_" + TIMESTAMP,
"metadata_schema_uri": "gs://" + DATA_SCHEMA,
}
print(
MessageToJson(
aip.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "happiness_20210226015238",
"metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
}
}
```
#### Call
```
request = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/datasets/574578388396670976",
"displayName": "happiness_20210226015238",
"metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml",
"labels": {
"aiplatform.googleapis.com/dataset_metadata_schema": "TEXT"
},
"metadata": {
"dataItemSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/dataitem/text_1.0.0.yaml"
}
}
```
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### [projects.locations.datasets.import](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.datasets/import)
#### Request
```
LABEL_SCHEMA = IMPORT_SCHEMA_TEXT_CLASSIFICATION
import_config = {
"gcs_source": {"uris": [IMPORT_FILE]},
"import_schema_uri": LABEL_SCHEMA,
}
print(
MessageToJson(
aip.ImportDataRequest(
name=dataset_short_id, import_configs=[import_config]
).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"name": "574578388396670976",
"importConfigs": [
{
"gcsSource": {
"uris": [
"gs://cloud-ml-data/NL-classification/happiness.csv"
]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml"
}
]
}
```
#### Call
```
request = clients["dataset"].import_data(
name=dataset_id, import_configs=[import_config]
)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{}
```
## Train a model
### [projects.locations.trainingPipelines.create](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.trainingPipelines/create)
#### Request
```
TRAINING_SCHEMA = TRAINING_TEXT_CLASSIFICATION_SCHEMA
task = json_format.ParseDict(
{
"multi_label": False,
},
Value(),
)
training_pipeline = {
"display_name": "happiness_" + TIMESTAMP,
"input_data_config": {"dataset_id": dataset_short_id},
"model_to_upload": {"display_name": "happiness_" + TIMESTAMP},
"training_task_definition": TRAINING_SCHEMA,
"training_task_inputs": task,
}
print(
MessageToJson(
aip.CreateTrainingPipelineRequest(
parent=PARENT, training_pipeline=training_pipeline
).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"trainingPipeline": {
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {
"multi_label": false
},
"modelToUpload": {
"displayName": "happiness_20210226015238"
}
}
}
```
#### Call
```
request = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/trainingPipelines/2903115317607661568",
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {},
"modelToUpload": {
"displayName": "happiness_20210226015238"
},
"state": "PIPELINE_STATE_PENDING",
"createTime": "2021-02-26T02:23:54.166560Z",
"updateTime": "2021-02-26T02:23:54.166560Z"
}
```
```
# The full unique ID for the training pipeline
training_pipeline_id = request.name
# The short numeric ID for the training pipeline
training_pipeline_short_id = training_pipeline_id.split("/")[-1]
print(training_pipeline_id)
```
### [projects.locations.trainingPipelines.get](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.trainingPipelines/get)
#### Call
```
request = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/trainingPipelines/2903115317607661568",
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {},
"modelToUpload": {
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"displayName": "happiness_20210226015238"
},
"state": "PIPELINE_STATE_SUCCEEDED",
"createTime": "2021-02-26T02:23:54.166560Z",
"startTime": "2021-02-26T02:23:54.396088Z",
"endTime": "2021-02-26T06:08:06.548524Z",
"updateTime": "2021-02-26T06:08:06.548524Z"
}
```
```
while True:
response = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
        model_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
break
else:
model_id = response.model_to_upload.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(20)
print(model_id)
```
## Evaluate the model
### [projects.locations.models.evaluations.list](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.models.evaluations/list)
#### Call
```
request = clients["model"].list_model_evaluations(parent=model_id)
```
#### Response
```
model_evaluations = [json.loads(MessageToJson(mel.__dict__["_pb"])) for mel in request]
print(json.dumps(model_evaluations, indent=2))
# The evaluation slice
evaluation_slice = request.model_evaluations[0].name
```
*Example output*:
```
[
{
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640/evaluations/1541152463304785920",
"metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
"metrics": {
"confusionMatrix": {
"annotationSpecs": [
{
"displayName": "exercise",
"id": "952213353537732608"
},
{
"id": "1528674105841156096",
"displayName": "achievement"
},
{
"id": "3258056362751426560",
"displayName": "leisure"
},
{
"id": "3834517115054850048",
"displayName": "bonding"
},
{
"id": "5563899371965120512",
"displayName": "enjoy_the_moment"
},
{
"id": "6140360124268544000",
"displayName": "nature"
},
{
"id": "8446203133482237952",
"displayName": "affection"
}
],
"rows": [
[
19.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
[
0.0,
342.0,
5.0,
2.0,
13.0,
2.0,
13.0
],
[
2.0,
10.0,
42.0,
1.0,
12.0,
0.0,
2.0
],
[
0.0,
4.0,
0.0,
121.0,
1.0,
0.0,
4.0
],
[
2.0,
29.0,
3.0,
2.0,
98.0,
0.0,
6.0
],
[
0.0,
3.0,
0.0,
1.0,
0.0,
21.0,
1.0
],
[
0.0,
7.0,
0.0,
1.0,
6.0,
0.0,
409.0
]
]
},
"confidenceMetrics": [
{
"f1Score": 0.25,
"recall": 1.0,
"f1ScoreAt1": 0.88776374,
"precisionAt1": 0.88776374,
"precision": 0.14285715,
"recallAt1": 0.88776374
},
{
"confidenceThreshold": 0.05,
"recall": 0.9721519,
"f1Score": 0.8101266,
"recallAt1": 0.88776374,
"f1ScoreAt1": 0.88776374,
"precisionAt1": 0.88776374,
"precision": 0.69439423
},
# REMOVED FOR BREVITY
{
"f1Score": 0.0033698399,
"recall": 0.0016877637,
"confidenceThreshold": 1.0,
"recallAt1": 0.0016877637,
"f1ScoreAt1": 0.0033698399,
"precisionAt1": 1.0,
"precision": 1.0
}
],
"auPrc": 0.95903283,
"logLoss": 0.08260541
},
"createTime": "2021-02-26T06:07:48.967028Z",
"sliceDimensions": [
"annotationSpec"
]
}
]
```
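To sanity-check the reported metrics, you can recompute per-class recall from the confusion matrix above. This sketch assumes (as the example output suggests) that rows are true labels and columns are predicted labels, both in `annotationSpecs` order:

```python
# Confusion matrix copied from the example output above.
# Assumption: rows = true labels, columns = predicted labels,
# both in the same order as "annotationSpecs".
labels = ["exercise", "achievement", "leisure", "bonding",
          "enjoy_the_moment", "nature", "affection"]
rows = [
    [19, 1, 0, 0, 0, 0, 0],
    [0, 342, 5, 2, 13, 2, 13],
    [2, 10, 42, 1, 12, 0, 2],
    [0, 4, 0, 121, 1, 0, 4],
    [2, 29, 3, 2, 98, 0, 6],
    [0, 3, 0, 1, 0, 21, 1],
    [0, 7, 0, 1, 6, 0, 409],
]
for i, (label, row) in enumerate(zip(labels, rows)):
    recall = row[i] / sum(row)  # diagonal entry / row total
    print(f"{label}: recall = {recall:.3f}")
```

The uniformly high per-class recall is consistent with the reported `auPrc` of roughly 0.96.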
### [projects.locations.models.evaluations.get](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.models.evaluations/get)
#### Call
```
request = clients["model"].get_model_evaluation(name=evaluation_slice)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640/evaluations/1541152463304785920",
"metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
"metrics": {
"confusionMatrix": {
"annotationSpecs": [
{
"displayName": "exercise",
"id": "952213353537732608"
},
{
"displayName": "achievement",
"id": "1528674105841156096"
},
{
"id": "3258056362751426560",
"displayName": "leisure"
},
{
"id": "3834517115054850048",
"displayName": "bonding"
},
{
"displayName": "enjoy_the_moment",
"id": "5563899371965120512"
},
{
"displayName": "nature",
"id": "6140360124268544000"
},
{
"id": "8446203133482237952",
"displayName": "affection"
}
],
"rows": [
[
19.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
[
0.0,
342.0,
5.0,
2.0,
13.0,
2.0,
13.0
],
[
2.0,
10.0,
42.0,
1.0,
12.0,
0.0,
2.0
],
[
0.0,
4.0,
0.0,
121.0,
1.0,
0.0,
4.0
],
[
2.0,
29.0,
3.0,
2.0,
98.0,
0.0,
6.0
],
[
0.0,
3.0,
0.0,
1.0,
0.0,
21.0,
1.0
],
[
0.0,
7.0,
0.0,
1.0,
6.0,
0.0,
409.0
]
]
},
"logLoss": 0.08260541,
"confidenceMetrics": [
{
"precision": 0.14285715,
"precisionAt1": 0.88776374,
"recall": 1.0,
"f1ScoreAt1": 0.88776374,
"recallAt1": 0.88776374,
"f1Score": 0.25
},
{
"f1Score": 0.8101266,
"recall": 0.9721519,
"precision": 0.69439423,
"confidenceThreshold": 0.05,
"recallAt1": 0.88776374,
"precisionAt1": 0.88776374,
"f1ScoreAt1": 0.88776374
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 1.0,
"f1Score": 0.0033698399,
"f1ScoreAt1": 0.0033698399,
"precisionAt1": 1.0,
"precision": 1.0,
"recall": 0.0016877637,
"recallAt1": 0.0016877637
}
],
"auPrc": 0.95903283
},
"createTime": "2021-02-26T06:07:48.967028Z",
"sliceDimensions": [
"annotationSpec"
]
}
```
## Make batch predictions
### Prepare files for batch prediction
```
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
```
*Example output*:
```
I went on a successful date with someone I felt sympathy and connection with. affection
```
### Make the batch input file
Let's now make a batch input file and store it in your Cloud Storage bucket. The batch input file can be either CSV or JSONL; you will use JSONL in this tutorial. In a JSONL file, you write one dictionary per line, one for each text file. Each dictionary contains the key/value pairs:
- `content`: The Cloud Storage path to the text file.
- `mimeType`: The content type. In our example, it is `text/plain`.
```
import json
import tensorflow as tf
test_item_uri = "gs://" + BUCKET_NAME + "/test.txt"
with tf.io.gfile.GFile(test_item_uri, "w") as f:
f.write(test_item + "\n")
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_uri, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
! gsutil cat $gcs_input_uri
! gsutil cat $test_item_uri
```
*Example output*:
```
{"content": "gs://migration-ucaip-trainingaip-20210226015238/test.txt", "mime_type": "text/plain"}
I went on a successful date with someone I felt sympathy and connection with.
```
### [projects.locations.batchPredictionJobs.create](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.batchPredictionJobs/create)
#### Request
```
batch_prediction_job = {
"display_name": "happiness_" + TIMESTAMP,
"model": model_id,
"input_config": {
"instances_format": "jsonl",
"gcs_source": {"uris": [gcs_input_uri]},
},
"output_config": {
"predictions_format": "jsonl",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
},
},
"dedicated_resources": {
"machine_spec": {
"machine_type": "n1-standard-2",
"accelerator_count": 0,
},
"starting_replica_count": 1,
"max_replica_count": 1,
},
}
print(
MessageToJson(
aip.CreateBatchPredictionJobRequest(
parent=PARENT, batch_prediction_job=batch_prediction_job
).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"batchPredictionJob": {
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
}
}
}
```
#### Call
```
request = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4770983263059574784",
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"state": "JOB_STATE_PENDING",
"completionStats": {
"incompleteCount": "-1"
},
"createTime": "2021-02-26T09:37:44.471843Z",
"updateTime": "2021-02-26T09:37:44.471843Z"
}
```
```
# The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
```
### [projects.locations.batchPredictionJobs.get](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.batchPredictionJobs/get)
#### Call
```
request = clients["job"].get_batch_prediction_job(name=batch_job_id)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4770983263059574784",
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"state": "JOB_STATE_PENDING",
"completionStats": {
"incompleteCount": "-1"
},
"createTime": "2021-02-26T09:37:44.471843Z",
"updateTime": "2021-02-26T09:37:44.471843Z"
}
```
```
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name."""
    folders = !gsutil ls $gcs_out_dir
    latest_folder = ""
    latest_subfolder = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        # Compare subfolder names (which embed a timestamp), not full paths.
        if subfolder.startswith("prediction-") and subfolder > latest_subfolder:
            latest_subfolder = subfolder
            latest_folder = folder[:-1]
    return latest_folder
while True:
response = clients["job"].get_batch_prediction_job(name=batch_job_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
folder = get_latest_predictions(
response.output_config.gcs_destination.output_uri_prefix
)
! gsutil ls $folder/prediction*.jsonl
! gsutil cat $folder/prediction*.jsonl
break
time.sleep(60)
```
*Example output*:
```
gs://migration-ucaip-trainingaip-20210226015238/batch_output/prediction-happiness_20210226015238-2021-02-26T09:37:44.261133Z/predictions_00001.jsonl
{"instance":{"content":"gs://migration-ucaip-trainingaip-20210226015238/test.txt","mimeType":"text/plain"},"prediction":{"ids":["8446203133482237952","3834517115054850048","1528674105841156096","5563899371965120512","952213353537732608","3258056362751426560","6140360124268544000"],"displayNames":["affection","bonding","achievement","enjoy_the_moment","exercise","leisure","nature"],"confidences":[0.9183423,0.045685068,0.024327256,0.0057157497,0.0040851077,0.0012627868,5.8173126E-4]}}
```
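Each line of the predictions file is a standalone JSON object. As a sketch of how you might post-process it, the snippet below parses a shortened copy of the example line above and picks out the top-scoring label — the truncation to two classes is just for readability:

```python
import json

# A shortened copy of the example prediction line from the batch output above.
line = (
    '{"instance":{"content":"gs://migration-ucaip-trainingaip-20210226015238/test.txt",'
    '"mimeType":"text/plain"},'
    '"prediction":{"ids":["8446203133482237952","3834517115054850048"],'
    '"displayNames":["affection","bonding"],'
    '"confidences":[0.9183423,0.045685068]}}'
)

record = json.loads(line)
prediction = record["prediction"]
# Pair each label with its confidence and take the highest-scoring one.
best_label, best_confidence = max(
    zip(prediction["displayNames"], prediction["confidences"]),
    key=lambda pair: pair[1],
)
print(best_label, best_confidence)  # affection 0.9183423
```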
## Make online predictions
### [projects.locations.endpoints.create](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/create)
#### Request
```
endpoint = {"display_name": "happiness_" + TIMESTAMP}
print(
MessageToJson(
aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"endpoint": {
"displayName": "happiness_20210226015238"
}
}
```
#### Call
```
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{
"name": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296"
}
```
```
# The fully qualified ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
```
### [projects.locations.endpoints.deployModel](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/deployModel)
#### Request
```
deployed_model = {
"model": model_id,
"display_name": "happiness_" + TIMESTAMP,
"automatic_resources": {"min_replica_count": 1, "max_replica_count": 1},
}
traffic_split = {"0": 100}
print(
MessageToJson(
aip.DeployModelRequest(
endpoint=endpoint_id,
deployed_model=deployed_model,
traffic_split=traffic_split,
).__dict__["_pb"]
)
)
```
*Example output*:
```
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296",
"deployedModel": {
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"displayName": "happiness_20210226015238",
"automaticResources": {
"minReplicaCount": 1,
"maxReplicaCount": 1
}
},
"trafficSplit": {
"0": 100
}
}
```
#### Call
```
request = clients["endpoint"].deploy_model(
endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split
)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{
"deployedModel": {
"id": "418518105996656640"
}
}
```
```
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
```
### [projects.locations.endpoints.predict](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/predict)
#### Request
```
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
instances_list = [{"content": test_item}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
request = aip.PredictRequest(
endpoint=endpoint_id,
)
request.instances.append(instances)
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296",
"instances": [
[
{
"content": "I went on a successful date with someone I felt sympathy and connection with."
}
]
]
}
```
#### Call
```
request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances)
```
#### Response
```
print(MessageToJson(request.__dict__["_pb"]))
```
*Example output*:
```
{
"predictions": [
{
"confidences": [
0.8867673277854919,
0.024743923917412758,
0.0034913308918476105,
0.07936617732048035,
0.0013463868526741862,
0.0002393187169218436,
0.0040455833077430725
],
"displayNames": [
"affection",
"achievement",
"enjoy_the_moment",
"bonding",
"leisure",
"nature",
"exercise"
],
"ids": [
"8446203133482237952",
"1528674105841156096",
"5563899371965120512",
"3834517115054850048",
"3258056362751426560",
"6140360124268544000",
"952213353537732608"
]
}
],
"deployedModelId": "418518105996656640"
}
```
### [projects.locations.endpoints.undeployModel](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/undeployModel)
#### Call
```
request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
)
```
#### Response
```
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
```
*Example output*:
```
{}
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
```
delete_dataset = True
delete_model = True
delete_endpoint = True
delete_pipeline = True
delete_batchjob = True
delete_bucket = True
# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
if delete_dataset:
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model:
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline
try:
if delete_pipeline:
clients["pipeline"].delete_training_pipeline(name=training_pipeline_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob:
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
```
# SLU12: Feature Engineering (aka Real World Data): Examples notebook
---
In this notebook we will cover the following:
* Types of statistical data
* Dealing with numerical features
* Dealing with categorical features
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
avengers = pd.read_csv('data/avengers.csv')
```
# 2. Types of data in Pandas
## 2.1. Numerical and object dtypes
```
avengers.dtypes
(avengers.select_dtypes(include='object')
.head(3))
```
## 2.2. Category dtype
```
avengers_cat = avengers.copy()
avengers_cat = avengers_cat.assign(Universe=avengers['Universe'].astype('category'))
avengers_cat['Universe'].cat.categories
avengers_cat['Universe'].cat.ordered
```
### Ordinal data
```
avengers_ord = avengers.copy()
avengers_ord = avengers_ord.assign(Membership=avengers['Membership'].astype('category'))
avengers_ord['Membership'].cat.categories
ordered_cats = ['Honorary', 'Academy', 'Probationary', 'Full']
avengers_ord = avengers_ord.assign(Membership=avengers_ord['Membership'].cat.set_categories(ordered_cats, ordered=True))
avengers_ord['Membership'].min(), avengers_ord['Membership'].max()
```
# 3. Types of statistical data
## 3.1. Dealing with numerical data
### 3.1.1. Introducing sklearn-like transformers
```
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.preprocessing import Binarizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
```
### 3.1.2. Discretization of numerical data
#### Binning
```
# save column as a dataframe, as required by the transformer
X = avengers[['Appearances']]
# initialize transformer with desired options
binner = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='uniform')
# fit transformer to data
binner.fit(X)
# create new feature by transforming the data
avengers['Appearances_bins'] = binner.transform(X)
binner.bin_edges_
```
#### Binarization
```
# initialize transformer with desired options
binarizer = Binarizer(threshold=1000)
# save data to binarize
X = avengers[['Appearances']]
# fit transformer to data
binarizer.fit(X)
# create new feature by transforming the data
avengers['Appearances_binary'] = binarizer.transform(X)
# plot histogram
avengers['Appearances_binary'].plot.hist(figsize=(4, 5));
plt.xlim(0,1);
plt.title('Number of appearances after binarization');
```
### 3.1.3. Scaling of numerical data
#### [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html#sklearn.preprocessing.MinMaxScaler)
```
# initialize transformer with desired options
minmaxscaler = MinMaxScaler(feature_range=(0,1))
# save data to scale
X = avengers[['Appearances']]
# fit transformer to data
minmaxscaler.fit(X)
# create new feature by transforming the data
avengers['Appearances_minmax'] = minmaxscaler.transform(X)
# plot histogram
avengers['Appearances_minmax'].plot.hist(figsize=(8, 5));
plt.xlim(0, 1);
plt.title('Number of appearances after min-max scaling');
```
#### [Normalizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html)
```
# initialize transformer with desired options
normalizer = Normalizer(norm='l2')
# save numerical columns to normalize
X = avengers[['Appearances', 'TotalDeaths', 'TotalReturns']]
# fit transformer to data
normalizer.fit(X)
# create new features by transforming the data
X_normalized = normalizer.transform(X) # recall that output is a numpy array
avengers['Appearances_normalized'] = X_normalized[:, 0]
avengers['TotalDeaths_normalized'] = X_normalized[:, 1]
avengers['TotalReturns_normalized'] = X_normalized[:, 2]
# plot histogram of normalized appearances
avengers['Appearances_normalized'].plot.hist(figsize=(8, 5));
plt.xlim(0.88, 1);
plt.title('Number of appearances after normalization');
```
#### [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler)
```
# initialize transformer with desired options
standardscaler = StandardScaler()
# save data to scale
X = avengers[['Appearances']]
# fit transformer to data
standardscaler.fit(X)
# create new feature by transforming the data
avengers['Appearances_standard_scaled'] = standardscaler.transform(X)
# plot histogram
avengers['Appearances_standard_scaled'].plot.hist(figsize=(8, 5));
plt.title('Number of appearances after standard scaling');
```
#### [RobustScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html#sklearn.preprocessing.RobustScaler)
```
# initialize transformer with desired options
robustscaler = RobustScaler()
# save data to scale
X = avengers[['Appearances']]
# fit transformer to data
robustscaler.fit(X)
# create new feature by transforming the data
avengers['Appearances_robust_scaled'] = robustscaler.transform(X)
# plot histogram
avengers['Appearances_robust_scaled'].plot.hist(figsize=(8, 5));
plt.title('Number of appearances after robust scaling');
```
## 3.2. Dealing with categorical data
### 3.2.1. Binary data
```
avengers = pd.read_csv('data/avengers.csv')
(avengers.assign(Active_mapped = avengers['Active'].map({'YES': 1, 'NO': 0}),
Gender_mapped = avengers['Gender'].map({'MALE': 1, 'FEMALE': 0}))
.sample(5))
```
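One thing to watch with `.map`: any value that is missing from the mapping dictionary becomes `NaN` instead of raising an error, which can silently introduce missing values. A minimal sketch with made-up data:

```python
import pandas as pd

# 'UNKNOWN' is not a key in the mapping, so it silently becomes NaN.
active = pd.Series(["YES", "NO", "UNKNOWN"])
mapped = active.map({"YES": 1, "NO": 0})
print(mapped.tolist())  # [1.0, 0.0, nan]
```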
### 3.2.2. Encoding categorical features
```
import category_encoders as ce
```
#### Ordinal encoding
```
# initialize transformer with desired options
ordinalencoder = ce.ordinal.OrdinalEncoder()
# save data to encode (no need to reshape)
X = avengers[['Universe']]
# fit transformer to data
ordinalencoder.fit(X)
# create new feature by transforming the data
X_encoded = ordinalencoder.transform(X)
X_encoded.sample(5, random_state=9)
ordinalencoder.category_mapping
```
#### One-hot encoding
```
# initialize transformer with desired options
ohe = ce.one_hot.OneHotEncoder(use_cat_names=True, handle_unknown='indicator')
# save data to encode (no need to reshape)
X = avengers[['Universe']]
# fit transformer to data
ohe.fit(X)
# create new features by transforming the data
X_ohe = ohe.transform(X)
X_ohe.sample(5, random_state=9)
```
# Exercise 15 - More plotting
In this exercise, we will meet some more advanced features of Python's plotting capabilities.
In `matplotlib`, a `figure` represents the entire 'page' you can draw on, and can contain multiple `axes`, each of which contains a single plot. This allows you to build up complex, multi-panel figures.
To understand how this works, it's useful to go back to the very basics and manually create a figure and an axis to make a plot in.
For example:
```python
fig = plt.figure() # creates a new figure, with no axes
ax = fig.add_axes([.1, .1, .8, .8]) # creates an axis at the specified coordinates
# the coordinates are in the form [left, bottom, width, height].
# Units are fractions of the total figure size (i.e. 0.5 is the centre of the figure).
ax.plot(x, y) # draws a plot on the axis
```
**➤ Make a new figure containing two axes - one in the upper right quarter, and one in the lower left quarter.**
```
# Try it here!
fig = plt.figure()
ax1 = fig.add_axes([.5, .5, .4, .4])
ax2 = fig.add_axes([.1, .1, .4, .4])
```
This is straightforward, but can get laborious if you're making a plot with lots of panels. Luckily, `matplotlib` has a number of built-in functions that make creating multi-plot figures very easy.
For simple cases where you want an $N \times M$ grid of plots, you can use `fig.add_subplot()`:
```python
fig = plt.figure()
fig.add_subplot(nrows, ncols, index)
```
where `nrows` is the number of 'rows' in the grid, `ncols` is the number of columns, and `index` is a number between 1 and `nrows*ncols` which selects which of the panels we wish to create. Typically, one will call `fig.add_subplot()` several times, with the same `nrows` and `ncols`, but a different `index` in each case. Having switched to a panel, we can then issue plotting commands for that panel.
As a shorthand, if `nrows`, `ncols` and `index` are all single-digit numbers, you can also call `fig.add_subplot(nci)` where `nci` is `nrows`, `ncols` and `index` concatenated into a 3-digit number. In other words, `fig.add_subplot(3,2,5)` is exactly the same as `fig.add_subplot(325)`.
For example:
```python
fig = plt.figure(figsize=(12,3))
xx = np.linspace(0,1,100)
ax1 = fig.add_subplot(1, 2, 1)
ax1.plot(np.sin(5*np.pi*xx),'r')
ax2 = fig.add_subplot(122)
ax2.plot(np.cos(5*np.pi*xx),'b')
```
Also notice here that we have passed `figsize = (xsize, ysize)` to `plt.figure()`, to control the size (and more importantly, shape) of the figure we have created. *Note: figure sizes are in inches.*
**➤ Use `fig.add_subplot()` to make a new figure with the same layout as the one above.**
```
# Try it here!
fig = plt.figure()
fig.add_subplot(2, 2, 2)
fig.add_subplot(2, 2, 3)
```
Notice how `matplotlib` has automatically put some space between the axes here, making for a nicer-looking plot.
Alternatively, you can create all the `axes` you need when you create the figure, using `plt.subplots(nrows, ncols)`. This returns two objects - a `figure`, and an array of `axes` objects. For example:
```python
fig, axs = plt.subplots(2, 2, figsize=[6, 6])
```
will create a figure object (`fig`), and a (2x2) array of `axes` (`axs`).
This can be really useful if you're creating plots programmatically, as it creates all the axes at the start, and you can then iterate through them. For example:
```python
fig, axs = plt.subplots(2, 2, figsize=[6, 6])
x = np.linspace(0, 10, 100)
for i, ax in enumerate(axs.flatten()):
ax.plot(x, x**i)
```
Note the use of `.flatten()`, to convert a 2x2 array into a 1D array, which is more suitable for iteration.
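To see what `.flatten()` does on its own, here's a quick sketch with a plain NumPy array the same shape as the `axs` array returned by `plt.subplots(2, 2)`:

```python
import numpy as np

# A 2x2 array, like the axs array returned by plt.subplots(2, 2)
arr = np.array([[1, 2], [3, 4]])
print(arr.flatten())  # [1 2 3 4]
```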
**➤ Try making this plot.**
```
# Try it here!
fig, axs = plt.subplots(2, 2, figsize=[6, 6])
x = np.linspace(0, 10, 100)
for i, ax in enumerate(axs.flatten()):
ax.plot(x, x**i)
```
You'll probably notice that a lot of the labels are overlapping, which doesn't look great. You can deal with this using the *really* handy function `fig.tight_layout()`. This automatically resizes all the panels so that axis labels and legends don't overlap, and the panels make the best use of all available space in the figure. Using this command at the end of any plotting script will ensure your figures are as beautiful as possible!
**➤ Make the plot again, but run `fig.tight_layout()` at the end.**
```
# Try it here!
fig, axs = plt.subplots(2, 2, figsize=[6, 6])
x = np.linspace(0, 10, 100)
for i, ax in enumerate(axs.flatten()):
ax.plot(x, x**i)
fig.tight_layout()
```
One final thing to notice here - all panels in this plot share the same x-axis, so drawing its tick labels on the top row isn't very space-efficient...
**➤ Try setting the `sharex` argument of `plt.subplots` to `True`.**
```
# Try it here!
fig, axs = plt.subplots(2, 2, figsize=[6, 4.5], sharex=True)
x = np.linspace(0, 10, 100)
for i, ax in enumerate(axs.flatten()):
ax.plot(x, x**i)
fig.tight_layout()
```
At this point, hopefully you're getting the idea that there are both `figure` and `axes` objects, which do quite different things. In general, actions performed on an `axis` will *only* influence that axis, while actions performed on the `figure` can influence *all* `axes` within it, or create new `axes`.
Just one final thing to complete this plot - axis labels! This is done within the `axes` objects, as the labels relate to each panel individually. The specific functions you'll need are `ax.set_xlabel()` and `ax.set_ylabel()`. `axes` objects have a wide range of `ax.set_*()` methods, which can modify their appearance. It's worth having a look at these to be aware of what's possible (type `ax.set_` and press <kbd>Tab</kbd> to see a list of possibilities).
Before you set the axis labels, there's a potential problem here - we've removed the x labels from the top row of plots... How can we deal with this? One of the benefits of using `matplotlib`'s functions for arranging axes on a plot, is that each `axis` object is 'aware' of where it is. For example, the method `ax.is_last_row()` might be useful here.
**➤ Within the loop, label the x axis of the bottom row with $x$, and the y-axis of every panel with $x^i$**
```
# Try it here!
fig, axs = plt.subplots(2, 2, figsize=[6, 4.5], sharex=True)
x = np.linspace(0, 10, 100)
for i, ax in enumerate(axs.flatten()):
ax.plot(x, x**i)
if ax.is_last_row():
ax.set_xlabel('x')
ax.set_ylabel('$x^{:}$'.format(i))
fig.tight_layout()
```
## Review: Multi-Panel Plots
- Make a plot that is 12 inches wide and 3 inches tall containing 2 axes on the left and right, with a shared y-scale.
- Create an `x` variable with 100 points between 0 and 1.
- On the left axis, plot the function $y = \sin(5 \pi x)$ with a red line.
- On the right axis, plot the function $y = \cos(5 \pi x)$ with a blue line.
- Label the y-axis of the left panel 'y'.
- Label the x-axes of both panels 'x'
```
# Try it here!
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=[12, 3], sharey=True)
x = np.linspace(0, 1, 100)
ax1.plot(x, np.sin(5 * np.pi * x), c='r')
ax2.plot(x, np.cos(5 * np.pi * x), c='b')
ax1.set_ylabel('y')
ax1.set_xlabel('x')
ax2.set_xlabel('x')
fig.tight_layout()
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Projections & Coordinate Systems
So far, we've been working in simple Cartesian *(x, y)* coordinates. In science, you might come across data that needs to be represented in other coordinate systems, for example polar *(radius, angle)* coordinates.
`matplotlib` can handle different coordinate systems using the `projection` argument when creating an axis. For example:
```python
fig = plt.figure()
# create a panel with a polar projection
ax = fig.add_subplot(111, projection='polar')
# draw a spiral!
n = 1000
theta = np.linspace(0, 10 * np.pi, n)
radius = np.linspace(0, 1, n)
ax.plot(theta, radius)
# modify plot appearance
ax.set_theta_zero_location('N')
ax.set_rticks([])
```
Notice we're using `ax.set_theta_zero_location()` and `ax.set_rticks([])` to modify the appearance of the plot. Can you work out what these do?
**➤ Create a figure with two panels showing the same data. The right panel should have a polar projection.**
```
# Try it here!
fig = plt.figure(figsize=[6, 3])
# create a panel with a polar projection
ax = fig.add_subplot(122, projection='polar')
# draw a spiral!
n = 1000
theta = np.linspace(0, 10 * np.pi, n)
radius = np.linspace(0, 1, n)
ax.plot(theta, radius)
ax.set_theta_zero_location('N') # set location of 0 degrees ('North' = top)
ax.set_rticks([]) # remove radial labels
ax2 = fig.add_subplot(121)
ax2.plot(theta, radius)
ax2.set_xlabel('theta')
ax2.set_ylabel('radius')
fig.tight_layout()
```
## Annotations
Often, plots need annotations to make sense. `matplotlib` has a number of tools to do this. The most useful ones are `ax.text()`, which you can use to add text to a plot, and `ax.annotate()`, which can be used to add a variety of text, arrows and other symbols.
For example:
```python
fig, ax = plt.subplots(1, 1)
xx = np.linspace(0, 2*np.pi, 1000)
ax.plot(xx,np.sin(5*xx),'b')
ax.text(3, 0, 'some text')
ax.annotate('an annotation', xy=(4, .2))
```
```
# Try it here!
```
These commands have a number of other options, which alter their behaviour. The example above places the text using the same *(x,y)* coordinates as the data. If we want to place an annotation at the same position on each plot, this is impractical, and we should use *fractional coordinates* of the axis itself, instead of the data coordinates.
You can do this in both `annotate` and `text`:
```python
ax.text(.2, .2, 'some text', transform=ax.transAxes)
ax.annotate('an annotation', xy=(.6, .6), xycoords='axes fraction')
```
You might also want to change the position of this text relative to the specified coordinates. For example, you want the text 'anchored' to the *(x, y)* position on its lower left corner. Do this by specifying the `verticalalignment` and `horizontalalignment` parameters (`va` and `ha` for short).
```python
ax.text(.5, .5, 'some text', transform=ax.transAxes, ha='left', va='bottom')
ax.annotate('an annotation', xy=(.5, .5), xycoords='axes fraction', ha='right', va='top')
```
Finally, you can draw more complicated annotations using `annotate`, for example arrows with labels:
```python
ax.annotate('an arrow!', xy=(.5, .5), xytext=(.7,1.15),
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
fontsize=12)
```
**➤ Create a figure with a plot of $y = x^2$, from x=-5 to x=5. Draw an arrow pointing to the minimum, with a label in the middle of the plot.**
```
# Try it here!
```
Another really important type of annotation is a figure legend. `matplotlib` can automatically create legends if you specify the `label='text'` argument when drawing a line. For example:
```python
fig, ax = plt.subplots()
x = np.linspace(0, 4 * np.pi, 100)
ax.plot(x, np.sin(x), label='sin(x)')
ax.plot(x, np.cos(x), label='cos(x)')
ax.legend() # this creates a figure legend
```
**➤ Create a plot showing $A = x$ and $B = 0.8 * x + 10$, with a figure legend.**
```
# Try it here!
```
## Shared Axes
Sometimes, we may wish to plot two different datasets on the same axis, so that they share the same horizontal (or vertical) axis, but have different vertical (or horizontal) axes. To do this, matplotlib provides the `ax.twinx()` and `ax.twiny()` commands, which create a new set of axes that share the `x` or `y` axis with `ax`:
```python
fig, ax_sin = plt.subplots(1, 1)
xx = np.linspace(0, 2*np.pi, 1000)
ax_sin.plot(xx,np.sin(xx))
ax_sin.set_ylabel('sin(x)')
ax_quad = ax_sin.twinx()
ax_quad.plot(xx,xx**2,'r')
ax_quad.set_ylabel('$x^2$')
```
```
# Try it here!
```
This may seem similar to the `sharex` option we passed to `plt.subplots()` earlier. The key difference is that `sharex` allowed us to create *multiple* axes with linked scales; `ax.twinx()` allows us to have *one* (apparent) set of axes featuring *two* different scales.
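To make the distinction concrete, here is a minimal self-contained sketch (the `Agg` backend line is only there so it also runs outside a notebook):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this also runs outside a notebook
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)

# sharex: two *separate* panels whose x-limits are locked together
fig1, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.plot(x, np.sin(x))
bottom.plot(x, np.cos(x))
top.set_xlim(0, np.pi)  # zooming the top panel also zooms the bottom one

# twinx: *one* panel carrying two independent y-scales
fig2, left = plt.subplots(1, 1)
left.plot(x, np.sin(x), 'b')
right = left.twinx()
right.plot(x, x**2, 'r')
```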
## Real-World Data: Canberra Weather
The file `CanberraBOMweather.xls` is an Excel spreadsheet containing one year of daily weather data in Canberra, obtained from the Bureau of Meteorology.
**➤ Load this data into a `pandas` dataframe**
```
# Try it here!
import pandas as pd
dat = pd.read_excel('CanberraBOMweather.xls', skiprows=7)
```
**➤ Produce a plot showing daily mean temperature and pressure as a function of time.**
You should average the 9am and 3pm pressure readings for each day.
Think about the absolute values of these data - should they be on the same, or separate y axes?
```
# Try it here!
temp = (dat.loc[:, '9am Temperature (°C)'] + dat.loc[:, '3pm Temperature (°C)']) / 2
pres = (dat.loc[:, '9am MSL pressure (hPa)'] + dat.loc[:, '3pm MSL pressure (hPa)']) / 2
fig, ax = plt.subplots(1, 1)
ax.plot(dat.Date, temp, c='r')
pax = ax.twinx()
pax.plot(dat.Date, pres)
```
The function `plt.fill_between()` provides a way to shade in a region of your figure. Use this to show the maximum and minimum temperatures each day on your graph. Make sure that the mean temperature and pressure information is not obscured! (Hint: the `alpha` parameter sets the transparency of a line)
```
# Try it here!
temp = (dat.loc[:, '9am Temperature (°C)'] + dat.loc[:, '3pm Temperature (°C)']) / 2
pres = (dat.loc[:, '9am MSL pressure (hPa)'] + dat.loc[:, '3pm MSL pressure (hPa)']) / 2
fig, ax = plt.subplots(1, 1)
ax.plot(dat.Date, temp, c='C1')
ax.fill_between(dat.loc[:, 'Date'].values,
dat.loc[:, '9am Temperature (°C)'].values,
dat.loc[:, '3pm Temperature (°C)'].values, color='C1', alpha=0.5)
pax = ax.twinx()
pax.plot(dat.Date, pres, alpha=0.7)
```
It's clear that temperature and pressure are correlated.
**➤ Produce a scatter-plot showing daily mean temperature vs. pressure.**
```
# Try it here!
temp = (dat.loc[:, '9am Temperature (°C)'] + dat.loc[:, '3pm Temperature (°C)']) / 2
pres = (dat.loc[:, '9am MSL pressure (hPa)'] + dat.loc[:, '3pm MSL pressure (hPa)']) / 2
fig, ax = plt.subplots(1, 1)
ax.scatter(temp, pres)
ax.set_ylabel('Pressure')
ax.set_xlabel('Temperature')
```
You should see a negative correlation between Temperature and Pressure. Wind speed might also be important here... is there more of a relationship between wind and temperature, or wind and pressure? Let's have a look!
Do this by colouring the points by wind speed. (Note: before calculating the average wind speed, you'll need to replace 'Calm' with 0 in the wind speed data).
You can colour points according to a variable using the `c` parameter, for example:
```python
fig, ax = plt.subplots(1, 1)
ax.scatter(x, y, c=z)
```
```
# Try it here!
temp = (dat.loc[:, '9am Temperature (°C)'] + dat.loc[:, '3pm Temperature (°C)']) / 2
pres = (dat.loc[:, '9am MSL pressure (hPa)'] + dat.loc[:, '3pm MSL pressure (hPa)']) / 2
dat.replace('Calm', 0, inplace=True)
wind = (dat.loc[:, '9am wind speed (km/h)'] + dat.loc[:, '3pm wind speed (km/h)']) / 2
fig, ax = plt.subplots(1, 1)
ax.scatter(temp, pres, c=wind)
ax.set_ylabel('Pressure')
ax.set_xlabel('Temperature')
```
Now you need a way of telling someone what the colour means - a colour bar! This is adding a new element to the *figure*, rather than the *axis*, so you need to do this at the *figure* level (`fig.colorbar()`). Because the figure can contain multiple axes, you also need to tell the command what it's drawing the colourbar for. For example:
```python
fig, ax = plt.subplots(1, 1)
cb = ax.scatter(x, y, c=z) # assign your coloured points to a variable
fig.colorbar(cb, label='z')
```
```
# Try it here!
temp = (dat.loc[:, '9am Temperature (°C)'] + dat.loc[:, '3pm Temperature (°C)']) / 2
pres = (dat.loc[:, '9am MSL pressure (hPa)'] + dat.loc[:, '3pm MSL pressure (hPa)']) / 2
dat.replace('Calm', 0, inplace=True)
wind = (dat.loc[:, '9am wind speed (km/h)'] + dat.loc[:, '3pm wind speed (km/h)']) / 2
fig, ax = plt.subplots(1, 1)
cb = ax.scatter(temp, pres, c=wind)
fig.colorbar(cb, label='Wind Speed')
ax.set_ylabel('Pressure')
ax.set_xlabel('Temperature')
```
# HLCM Diagnostic
Arezoo Besharati, Paul Waddell, UrbanSim, July 2018
## Preliminaries
```
import os; os.chdir('../../')
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
import warnings;
warnings.simplefilter('ignore')
%load_ext autoreload
%autoreload 2
from urbansim_templates import modelmanager as mm
from urbansim_templates.models import LargeMultinomialLogitStep
import orca
import seaborn as sns
%matplotlib notebook
pd.set_option('display.float_format', lambda x: '%.2f' % x)
```
### Load data
```
# Load any script-based Orca registrations
from scripts import datasources
from scripts import models
## Initialize Networks
#orca.run(["initialize_network_walk","initialize_network_small"]);
```
### Perform desired variable creations and transformations
```
### Data Cleaning
parcel = orca.get_table('parcels').to_frame()
bld = orca.get_table('buildings').to_frame()
hh = orca.get_table('households').to_frame()
hh['hh_random'] = np.random.uniform(0,1,len(hh))
nodeswalk = orca.get_table('nodeswalk').to_frame()
nodessmall = orca.get_table('nodessmall').to_frame()
nodessmall_upper = nodessmall.quantile(.99)
nodessmall_clipped = nodessmall.clip_upper(nodessmall_upper, axis=1)
nsc=nodessmall_clipped
nodeswalk_upper = nodeswalk.quantile(.99)
nodeswalk_clipped = nodeswalk.clip_upper(nodeswalk_upper, axis=1)
nwc=nodeswalk_clipped
# scale income and create race dummies
hh['income_k'] = hh.income/1000
hh['white'] = (hh.race_of_head == 1).astype(int)
hh['black'] = (hh.race_of_head == 2).astype(int)
hh['asian'] = (hh.race_of_head == 6).astype(int)
hh['hisp'] = (hh.hispanic_head == 'yes').astype(int)
hh['single'] = (hh.persons == 1).astype(int)
hh['elderly'] = (hh.age_of_head > 65).astype(int)
# building_type dummies
bld['single_family'] = (bld.building_type_id == 1).astype(int)
bld['multi_family'] = (bld.building_type_id == 3).astype(int)
bld['mixed_use'] = (bld.building_type_id > 3).astype(int)
# add the columns
orca.add_column('households', 'income_k', hh.income_k)
orca.add_column('households', 'white', hh.white)
orca.add_column('households', 'black', hh.black)
orca.add_column('households', 'asian', hh.asian)
orca.add_column('households', 'hispanic', hh.hisp)
orca.add_column('households', 'elderly', hh.elderly)
orca.add_column('households', 'single', hh.single)
orca.add_column('households', 'hh_random', hh.hh_random)
orca.add_column('buildings', 'single_family', bld.single_family)
orca.add_column('buildings', 'multi_family', bld.multi_family)
orca.add_column('buildings', 'mixed_use', bld.mixed_use);
nwc['prop_children_500_walk'] = (nwc['children_500_walk'] > 0).astype(int) / nwc['hh_500_walk']
nwc['prop_singles_500_walk'] = nwc['singles_500_walk'] / nwc['hh_500_walk']
nwc['prop_elderly_500_walk'] = nwc['elderly_hh_500_walk'] / nwc['hh_500_walk']
nwc['prop_black_500_walk'] = nwc['pop_black_500_walk'] / nwc['pop_500_walk']
nwc['prop_white_500_walk'] = nwc['pop_white_500_walk'] / nwc['pop_500_walk']
nwc['prop_asian_500_walk'] = nwc['pop_asian_500_walk'] / nwc['pop_500_walk']
nwc['prop_hisp_500_walk'] = nwc['pop_hisp_500_walk'] / nwc['pop_500_walk']
nwc['prop_rich_500_walk'] = nwc['rich_500_walk'] / nwc['pop_500_walk']
nwc['prop_poor_500_walk'] = nwc['poor_500_walk'] / nwc['pop_500_walk']
nwc['prop_children_1500_walk'] = (nwc['children_1500_walk'] > 0).astype(int) / nwc['hh_1500_walk']
nwc['prop_singles_1500_walk'] = nwc['singles_1500_walk'] / nwc['hh_1500_walk']
nwc['prop_elderly_1500_walk'] = nwc['elderly_hh_1500_walk'] / nwc['hh_1500_walk']
nwc['prop_black_1500_walk'] = nwc['pop_black_1500_walk'] / nwc['pop_1500_walk']
nwc['prop_white_1500_walk'] = nwc['pop_white_1500_walk'] / nwc['pop_1500_walk']
nwc['prop_asian_1500_walk'] = nwc['pop_asian_1500_walk'] / nwc['pop_1500_walk']
nwc['prop_hisp_1500_walk'] = nwc['pop_hisp_1500_walk'] / nwc['pop_1500_walk']
nwc['prop_rich_1500_walk'] = nwc['rich_1500_walk'] / nwc['pop_1500_walk']
nwc['prop_poor_1500_walk'] = nwc['poor_1500_walk'] / nwc['pop_1500_walk']
orca.add_table('nodessmall', nsc);
orca.add_table('nodeswalk', nwc);
hh_f = hh[(hh['building_type'] > 2) & (hh['hh_random'] < .2) & (hh['recent_mover'] == 1) \
& (hh['income'] > 0) & (hh['income'] < 500000)]
len(hh_f)
mm.initialize()
```
## Model Estimation
### HLCM for Multi-Family
```
orca.broadcast('nodeswalk', 'rentals', cast_index=True, onto_on='node_id_walk')
orca.broadcast('nodeswalk', 'parcels', cast_index=True, onto_on='node_id_walk')
orca.broadcast('nodessmall', 'rentals', cast_index=True, onto_on='node_id_small')
orca.broadcast('nodessmall', 'parcels', cast_index=True, onto_on='node_id_small')
%%time
m_mf = LargeMultinomialLogitStep()
m_mf.choosers = ['households']
m_mf.alternatives = ['buildings','parcels','nodeswalk','nodessmall']
m_mf.choice_column = 'building_id'
m_mf.alt_sample_size = 50
#Filters on choosers
# m_mf2.chooser_filters = ['building_type > 2 &\
# household_id % 1000 < 40 &\
# recent_mover == 1 &\
# 0 <income < 500000']
m_mf.chooser_filters = ['building_type > 2 &\
hh_random < .2 &\
recent_mover == 1 &\
0 <income < 500000']
#Filters on alternatives
m_mf.alt_filters = ['residential_units > 1',
'0 < avg_income_500_walk < 300000',
'pop_1500_walk > 0',
'sqft_per_unit > 0']
m_mf.model_expression = ' np.log(residential_units) + \
np.log1p(res_price_per_sqft) + \
np.log1p(sqft_per_unit) + \
np.log1p(income):np.log1p(sqft_per_unit) + \
np.log1p(jobs_1500_walk) + \
np.log1p(jobs_25000) + \
np.log(income):np.log(avg_income_1500_walk) + \
np.log1p(pop_1500_walk) + \
white:prop_white_500_walk + \
black:prop_black_500_walk + \
asian:prop_asian_500_walk + \
hispanic:prop_hisp_500_walk\
- 1'
m_mf.name = 'hlcm_multi_family1'
m_mf.tags = ['multi_family','test']
m_mf.fit()
# register the model
m_mf.register()
# number of choosers/agents/households/observations
len(m_mf._get_df(tables=m_mf.choosers, filters=m_mf.chooser_filters))
m_mf.fitted_parameters
#or
#mm.get_step('hlcm_multi_family').fitted_parameters
```
## Model Prediction
```
m_mf.out_chooser_filters = ['building_type > 2 &\
hh_random < .2 &\
recent_mover == 1 &\
0 <income < 500000']
m_mf.out_alt_filters = ['residential_units == 1',
'0 < avg_income_500_walk < 500000',
'sqft_per_unit > 0']
%%time
m_mf.run()
print(m_mf.probabilities.shape)
m_mf.probabilities.head()
### number of observations/choosers
print(len(m_mf.probabilities.observation_id.unique()))
### or
#len(m_mf.choices)
### number of unique alternatives
print(len(m_mf.probabilities.building_id.unique()))
### number of alternatives
print(len(m_mf.probabilities.building_id))
# summed probability
predict_df=m_mf.probabilities.groupby('building_id')['probability'].sum().to_frame()
predict_df.head()
plt.hist(predict_df['probability'],bins= 100);
# Check that choices are plausible
choices = pd.DataFrame(m_mf.choices)
df = pd.merge(m_mf.probabilities, choices, left_on='observation_id', right_index=True)
df['chosen'] = 0
df.loc[df.building_id == df.choice, 'chosen'] = 1
print(df.head())
print(np.corrcoef(df.probability, df.chosen))
### join predicted df and df
#hh_f = hh[(hh['building_type'] > 2) & (hh['hh_random'] < .2) & (hh['recent_mover'] == 1)\
# & (hh['income'] > 0) & (hh['income'] < 500000)]
#df = orca.merge_tables(target = 'buildings', tables = ['buildings','parcels','nodeswalk','nodessmall'])
#hh_f_data = hh_f.merge(df, left_on='building_id', right_index=True)
#hh_f_data.columns.tolist()
#predict= pd.merge(predict_df,hh_f_data, left_index=True,right_on='building_id',how='left', sort=False)
#predict[['probability','building_id']].head()
#predict_2= pd.merge(predict_df,df, left_index=True,right_index=True,how='left', sort=False)
#predict_2.head()
```
# Exposure Time Calculator tutorial
```
# Allows interactive plot within this notebook
%matplotlib notebook
# Allows modifications made in the source code to be taken into account without having to restart the notebook
%reload_ext autoreload
%autoreload 2
```
# Telescope configuration Files
First you need to define the characteristics of your telescope in an hjson file. This file must be in pyETC/pyETC/telescope_database/.
This file contains all the characteristics of the telescope and its environment that do not need to be modified between observations (size of the mirrors, number and type of lenses, mirrors, ..., sky background, atmosphere transmission, camera characteristics, ...).
For instance for COLIBRI: [COLIBRI.hjson](../pyETC/telescope_database/COLIBRI.hjson)
# ETC configuration File
The ETC configuration file contains information related to the conditions of an observation: moon age, target elevation, seeing, filter band to use, exposure time, number of exposures...
An example of configuration file can be found in pyETC/pyETC/configFiles: [example.hjson](../pyETC/configFiles/example.hjson)
# Simple user cases
## 1) Illustrative example
In order to compute the limiting magnitudes you need to define a SNR and an exposure time
```
# Load os package, in order to get the environment variable GFT_SIM
import os
# Load ETC package
from pyETC.pyETC import etc
# load ETC with a config file and the COLIBRI characteristics
COLIBRI_ETC=etc(configFile='example.hjson',name_telescope='colibri')
# Execute
COLIBRI_ETC.sim()
```
The plots and results displayed here are also saved in a .txt file in pyETC/results/results_summary.txt
The main idea of the ETC is that all information is stored in a dictionary named "information".
If you want to know the whole content just write:
```
COLIBRI_ETC.information
```
If you want to get the value of one parameter, for instance the computed magnitude, just write:
```
COLIBRI_ETC.information['mag']
```
If you want to modify some parameters, there are 2 possibilities:
- either modify the hjson configFile
- or directly change the parameter values in the "information" dictionary. This is useful when the ETC is used in a python script interacting with other python packages.
For instance, we want to change the exposure time, SNR, seeing, elevation of the target, age of the moon without modifying the input file:
```
# Modify the exposure time
COLIBRI_ETC.information["exptime"] = 60
# Change the number of expositions
COLIBRI_ETC.information['Nexp'] = 3
# Change the seeing at the zenith, in arcseconds
COLIBRI_ETC.information["seeing_zenith"] =1.2
# Change the elevation of the target, in degrees
COLIBRI_ETC.information['elevation'] = 78
#Change the age of the moon
COLIBRI_ETC.information["moon_age"] = 2
# Execute again
COLIBRI_ETC.sim()
```
In the case of COLIBRI, there are 3 channels and one might want to use one NIR band. You just need to specify the channel to use and the filter band, for instance the J band:
```
# Select the J filter band
COLIBRI_ETC.information["filter_band"] = 'J'
# Specify the NIR channel in order to load the NIR channel characteristics (optics transmissions + camera characteristics)
COLIBRI_ETC.information["channel"] = 'CAGIRE'
# Execute again
COLIBRI_ETC.sim()
```
## 2) Compute the SNR for a given magnitude (or spectrum) and an exposure time
### 2.1) For a given magnitude
To compute the SNR for a Vega magnitude of 18 and a single exposure of 5s, we would set in the hjson configFile:
"etc_type": snr
"object_type": magnitude
"object_magnitude": 18
"exptime": 5
"photometry_system": Vega
Here we rather update the dictionary, to avoid using too many configFiles.
```
# Now we want to compute the SNR
COLIBRI_ETC.information['etc_type'] = 'snr'
# Set up the object: either 'magnitude' for a constant magnitude,
# or 'spectrum' for a given spectrum in the database,
# or 'grb_sim' to compute the grb spectrum
COLIBRI_ETC.information['object_type'] = 'magnitude'
# If we select 'magnitude', we need to define the object magnitude
COLIBRI_ETC.information['object_magnitude'] = 18
# Set an exposure time of 5s
COLIBRI_ETC.information['exptime'] = 5
# Use Vega system
COLIBRI_ETC.information['photometry_system'] = 'Vega'
# Execute
COLIBRI_ETC.sim()
```
### 2.2) For a given spectrum
The spectrum is stored in /data/
Wavelengths are in Angstroms and fluxes in erg/s/cm2/A
```
# Specify that the object is a spectrum
COLIBRI_ETC.information['object_type'] = 'spectrum'
# Define the folder. Starting from pyETC/pyETC directory
COLIBRI_ETC.information['object_folder'] = '/data/calspec/'
# Define the file in this folder
COLIBRI_ETC.information['object_file'] = 'bd02d3375_stis_001.txt'
#COLIBRI_ETC.information['object_folder'] = '/data/'
#COLIBRI_ETC.information['object_file']='bohlin2006_Vega.dat'
# Modify the exposure time
COLIBRI_ETC.information["exptime"] = 5
# Change the number of expositions
COLIBRI_ETC.information['Nexp'] = 3
# Change the seeing at the zenith, in arcseconds
COLIBRI_ETC.information["seeing_zenith"] = 0.79
# Change the elevation of the target
COLIBRI_ETC.information['elevation'] = 78
#Change the age of the moon
COLIBRI_ETC.information["moon_age"] = 7
# Specify the channel
COLIBRI_ETC.information["channel"]= 'DDRAGO-R'
# Select the z filter band
COLIBRI_ETC.information["filter_band"] = 'z'
# Use AB system
COLIBRI_ETC.information['photometry_system'] = 'AB'
# Execute
COLIBRI_ETC.sim()
```
### 2.3) For a simulated GRB spectrum
(Required pyGRBaglow python package)
GRB spectrum can be simulated with:
- empirical model: single power law, broken power law
- theoretical model: synchrotron model of Granot & Sari 2002
In the following we use the theoretical model.
In the following we update the dictionary, but one can also load the tuto_grb_sim.hjson configFile.
```
# First we specify that the object is a simulated GRB spectrum
COLIBRI_ETC.information['object_type'] = 'grb_sim'
# Specify the GRB model to use
COLIBRI_ETC.information['grb_model'] = 'gs02'
# Redshift
COLIBRI_ETC.information['grb_redshift'] = 3
# Time (in days)
COLIBRI_ETC.information['t_sinceBurst'] = 0.2
# Equivalent isotropic energy
COLIBRI_ETC.information['E_iso'] = 1e53
# Gamma-ray radiative efficiency
COLIBRI_ETC.information['eta'] = 0.3
# fraction of the internal energy given to the magnetic field in the Forward Shock
COLIBRI_ETC.information['eps_b'] = 1e-4
# fraction of the internal energy given to the electrons accelerated into the Forward Shock
COLIBRI_ETC.information['eps_e'] = 0.1
# index of the energy distribution of the shocked accelerated electrons
COLIBRI_ETC.information['p'] = 2.2
# interstellar medium density (in cm^-3)
COLIBRI_ETC.information['n0'] = 1
# Inverse Compton parameter, to take into account the Inverse Compton effects on the cooling of electrons
COLIBRI_ETC.information['Y'] = 0
# ISM type: 0: constant ISM density / 1: massive star progenitor surrounded by its pre-explosion wind
COLIBRI_ETC.information['ism_type'] = 0
# Host galaxy extinction (either 'mw', 'smc','lmc' or 'none')
COLIBRI_ETC.information['host_extinction_law'] = 'smc'
# Amount of extinction in the V band (in mag)
COLIBRI_ETC.information['Av_Host'] = 0.2
COLIBRI_ETC.information['galactic_extinction_law'] = 'smc'
# IGM extinction model: either 'madau' or 'meiksin' or 'none'
COLIBRI_ETC.information['IGM_extinction_model'] = 'meiksin'
# Galactic extinction, by default a mw extinction law is used
# Amount of galactic extinction in V band (in mag)
COLIBRI_ETC.information['Av_galactic'] = 0.1
#Execute
COLIBRI_ETC.sim()
```
## 3) Compute exposure time for a given SNR and magnitude or spectrum
Here we compute the exposure time to reach a magnitude of 18 (AB system) in z band with a SNR of 10 with the COLIBRI telescope.
```
# Specify that we want to compute the exposure time
COLIBRI_ETC.information['etc_type'] = 'time'
# For a given magnitude
COLIBRI_ETC.information['object_type'] = 'magnitude'
# Define the object magnitude
COLIBRI_ETC.information['object_magnitude'] = 18
# Define the SNR
COLIBRI_ETC.information['SNR'] = 10
# Specify the channel
COLIBRI_ETC.information["channel"]= 'DDRAGO-R'
# Select the z filter band
COLIBRI_ETC.information["filter_band"] = 'z'
# Use AB system
COLIBRI_ETC.information['photometry_system'] = 'AB'
#If you do not want to display the plot:
COLIBRI_ETC.information['plot'] = False
# Execute
COLIBRI_ETC.sim()
```
## 4) Compute limiting magnitudes
Here we want to compute the limiting magnitude for a SNR = 10 and 3 exposures of 10s in r band with the COLIBRI telescope.
```
# Specify that we want to compute the limiting magnitude
COLIBRI_ETC.information['etc_type'] = 'mag'
# Define the SNR
COLIBRI_ETC.information['SNR'] = 10
# Modify the exposure time
COLIBRI_ETC.information["exptime"] = 10
# Change the number of expositions
COLIBRI_ETC.information['Nexp'] = 3
# Change the seeing at the zenith, in arcseconds
COLIBRI_ETC.information["seeing_zenith"] = 0.79
#Change the age of the moon
COLIBRI_ETC.information["moon_age"] = 7
# Specify the channel
COLIBRI_ETC.information["channel"]= 'DDRAGO-B'
# Select the r filter band
COLIBRI_ETC.information["filter_band"] = 'r'
# Use AB system
COLIBRI_ETC.information['photometry_system'] = 'AB'
#If you do not want to display the plot:
COLIBRI_ETC.information['plot'] = False
COLIBRI_ETC.sim()
```
## 5) Other
```
# If you do not want to display verbose:
COLIBRI_ETC.information['verbose'] = False
# If you do not want to create plots:
COLIBRI_ETC.information['plot'] = False
# Write system transmission of the last run in a file named 'sys_trans.txt' with wavelength in nm
COLIBRI_ETC.write_file_trans(COLIBRI_ETC.information['system_response'],'sys_trans',wvl_unit='nm')
# Plot system transmission of last run
trans=COLIBRI_ETC.information['system_response']
COLIBRI_ETC.plot_trans(trans,'system_transmission',title='test system response',ylabel='Transmission',
                       ylim=[0,1],wvl_unit='microns',passband_centered=True)
```
# Time Domain and Gating
## Intro
This notebooks demonstrates how to use [scikit-rf](www.scikit-rf.org) for time-domain analysis and gating. A quick example is given first, followed by a more detailed explanation.
S-parameters are measured in the frequency domain, but can be analyzed in time domain if you like. In many cases, measurements are not made down to DC. This implies that the time-domain transform is not complete, but it can be very useful nonetheless. A major application of time-domain analysis is to use *gating* to isolate a single response in space. For more information about the details of time-domain analysis, see [1].
References
* [1] Keysight - Time Domain Analysis Using a Network Analyzer - Application Note [pdf](https://www.keysight.com/us/en/assets/7018-01451/application-notes/5989-5723.pdf)
## Quick Example
```
import skrf as rf
%matplotlib inline
rf.stylely()
from pylab import *
# load data for the waveguide to CPW probe
probe = rf.Network('../metrology/oneport_tiered_calibration/probe.s2p')
# we will focus on s11
s11 = probe.s11
# time-gate the first largest reflection
s11_gated = s11.time_gate(center=0, span=.2)
s11_gated.name='gated probe'
# plot frequency and time-domain s-parameters
figure(figsize=(8,4))
subplot(121)
s11.plot_s_db()
s11_gated.plot_s_db()
title('Frequency Domain')
subplot(122)
s11.plot_s_db_time()
s11_gated.plot_s_db_time()
title('Time Domain')
tight_layout()
```
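Under the hood, the frequency-to-time conversion is essentially an inverse FFT. The following is a pure-`numpy` illustration of the principle only (it is not how scikit-rf implements the transform, which adds windowing and other refinements): a reflection delayed by $\tau$ appears in the frequency domain as a linearly rotating phase, and the inverse FFT turns it back into a peak near $t = \tau$.

```python
import numpy as np

# Synthetic one-port: a single -6 dB reflection with a 1 ns round-trip delay
f = np.linspace(1e9, 10e9, 201)             # 1-10 GHz sweep (note: no DC point)
tau = 1e-9                                  # round-trip delay, in seconds
s11 = 0.5 * np.exp(-2j * np.pi * f * tau)   # delayed reflection

# Inverse FFT to time domain: the response peaks near t = tau
h = np.fft.ifft(s11)
t = np.fft.fftfreq(len(f), d=f[1] - f[0])   # time axis matching the ifft bins
peak_time = t[np.argmax(np.abs(h))]         # ~1 ns, to within one time bin
```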
## Interpreting Time Domain
Our DUT in this example is a waveguide-to-CPW probe, that was measured in [this other example](../metrology/One%20Port%20Tiered%20Calibration.ipynb).
```
# load data for the waveguide to CPW probe
probe = rf.Network('../metrology/oneport_tiered_calibration/probe.s2p')
probe
```
Note there are two time-domain plotting functions in scikit-rf:
* `Network.plot_s_db_time()`
* `Network.plot_s_time_db()`
The difference is that the former, `plot_s_db_time()`, employs windowing before plotting to enhance impulse resolution. Windowing will be discussed in a bit, but for now we just use `plot_s_db_time()`.
Plotting all four s-parameters of the probe in both frequency and time-domain.
```
# plot frequency and time-domain s-parameters
figure(figsize=(8,4))
subplot(121)
probe.plot_s_db()
title('Frequency Domain')
subplot(122)
probe.plot_s_db_time()
title('Time Domain')
tight_layout()
```
Focusing on the reflection coefficient from the waveguide port (s11), you can see there is an interference pattern present.
```
probe.plot_s_db(0,0)
title('Reflection Coefficient From \nWaveguide Port')
```
This ripple is evidence of several discrete reflections. Plotting s11 in the time-domain allows us to see where, or *when*, these reflections occur.
```
probe_s11 = probe.s11
probe_s11.plot_s_db_time(0,0)
title('Reflection Coefficient From \nWaveguide Port, Time Domain')
ylim(-100,0)
```
From this plot we can see two dominant reflections:
* one at $t=0$ns (the test-port)
* and another at $t=.2$ ns (who knows?).
## Gating The Reflection of Interest
To isolate the reflection from the waveguide port, we can use time-gating. This can be done by using the method `Network.time_gate()`, and providing it an appropriate center and span (in ns). To see the effects of the gate, both the original and gated responses are compared.
```
probe_s11_gated = probe_s11.time_gate(center=0, span=.2)
probe_s11_gated.name='gated probe'
probe_s11.plot_s_db_time()
probe_s11_gated.plot_s_db_time()
```
Next, compare both responses in frequency domain to see the effect of the gate.
```
probe_s11.plot_s_db()
probe_s11_gated.plot_s_db()
```
### Auto-gate
The time-gating method in `skrf` has an auto-gating feature which can also be used to gate the largest reflection. When no gate parameters are provided, `time_gate()` does the following:
1. find the two largest peaks
2. center the gate on the tallest peak
3. set the span to the distance between the two tallest peaks
You may want to plot the gated network in time-domain to see what the determined gate shape looks like.
```
title('Waveguide Interface of Probe')
s11.plot_s_db(label='original')
s11.time_gate().plot_s_db(label='autogated') #autogate on the fly
```
We might see how the autogate does on the other probe interface:
```
title('Other Interface of Probe')
probe.s22.plot_s_db()
probe.s22.time_gate().plot_s_db()
```
## Determining Distance
To make time-domain useful as a diagnostic tool, one would like to convert the x-axis to distance. This requires knowledge of the propagation velocity in the device. **skrf** provides some transmission-line models in the module [skrf.media](http://scikit-rf.readthedocs.org/en/latest/reference/media/index.html), which can be used for this.
**However...**
For dispersive media, such as rectangular waveguide, the phase velocity is a function of frequency, and transforming time to distance is not straightforward. As an approximation, you can normalize the x-axis to the speed of light.
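The speed-of-light approximation is just arithmetic (the helper below is not a scikit-rf function, only a sketch): reflections travel out *and back*, so the one-way distance is half the round-trip time multiplied by the propagation velocity.

```python
import numpy as np

c = 299_792_458.0  # speed of light in vacuum, m/s

def time_to_distance(t_ns, v_factor=1.0):
    """Round-trip reflection time (ns) -> one-way distance (m).

    v_factor is the relative propagation velocity (1.0 = free space).
    For dispersive media such as rectangular waveguide this is only a
    rough approximation, since the velocity changes with frequency.
    """
    t = np.asarray(t_ns, dtype=float) * 1e-9
    return v_factor * c * t / 2  # divide by 2: the pulse travels out and back

# the ~0.2 ns response seen earlier corresponds to roughly 3 cm in free space
d = time_to_distance(0.2)
```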
Alternatively, you can simulate a known device and compare the two time-domain responses. This allows you to attribute quantitative meaning to the axes. For example, you could create an ideal delayed load as shown below. Note: the magnitude of a response *behind* a large impulse does not have meaningful units.
```
from skrf.media import RectangularWaveguide
# create a rectangular waveguide media to generate a theoretical network
wr1p5 = RectangularWaveguide(frequency=probe.frequency,
a=15*rf.mil,z0=1)
# create an ideal delayed load, parameters are adjusted until the
# theoretical response agrees with the measurement
theory = wr1p5.delay_load(Gamma0=rf.db_2_mag(-20),
d=2.4, unit='cm')
probe.plot_s_db_time(0,0, label = 'Measurement')
theory.plot_s_db_time(label='-20dB @ 2.4cm from test-port')
ylim(-100,0)
```
This plot demonstrates a few important points:
* the theoretical delayed load is not a perfect impulse in time. This is due to the dispersion in waveguide.
* the peak of the magnitude in time domain is not identical to that specified, also due to dispersion (and windowing).
## What the hell is Windowing?
The `plot_s_db_time()` function does a few things:
1. windows the s-parameters
2. converts to time domain
3. takes the magnitude, converts to dB
4. calculates the time axis
5. plots
A word about step 1: **windowing**. An FFT represents a signal with a basis of periodic signals (sinusoids). If your frequency response is not periodic, which in general it isn't, taking an FFT will introduce artifacts in the time-domain results. To minimize these effects, the frequency response is *windowed*. This makes the frequency response more periodic by tapering off the band-edges.
Windowing is only applied to improve the plot's appearance; it does not affect the original network.
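The effect can be sketched with plain NumPy (a toy signal, not skrf's implementation): a tone that does not complete a whole number of cycles in the record leaks energy across the spectrum, and a window suppresses much of that leakage.

```python
import numpy as np

# Toy sketch of spectral leakage: a 10.5-cycle tone is not periodic in the
# record, so its raw FFT leaks into every bin; a Hamming window tapers the
# edges and concentrates the energy near the true frequency.
n = 256
t = np.arange(n)
sig = np.sin(2 * np.pi * 10.5 * t / n)
spec_raw = np.abs(np.fft.rfft(sig))
spec_win = np.abs(np.fft.rfft(sig * np.hamming(n)))
# Compare leakage energy well away from the tone (bins 30 and up):
print(spec_raw[30:].sum() > spec_win[30:].sum())
```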
In skrf this can be done explicitly using the `windowed()` function. By default this function uses a Hamming window, but it can be adjusted through arguments. The result of windowing is shown below.
```
probe_w = probe.windowed()
probe.plot_s_db(0,0, label = 'Original')
probe_w.plot_s_db(0,0, label = 'Windowed')
```
Comparing the two time-domain plotting functions, we can see the difference between windowed and not.
```
probe.plot_s_time_db(0,0, label = 'Original')
probe_w.plot_s_time_db(0,0, label = 'Windowed')
```
# 5. Combining arrays
We have already seen how to create arrays and how to modify their dimensions. One last operation we can do is to combine multiple arrays. There are two ways to do that: by assembling arrays of same dimensions (concatenation, stacking etc.) or by combining arrays of different dimensions using *broadcasting*. Like in the previous chapter, we illustrate with small arrays and a real image.
```
import numpy as np
import matplotlib.pyplot as plt
import skimage
plt.gray();
image = skimage.data.chelsea()
```
## 5.1 Arrays of same dimensions
Let's start by creating a few 2D arrays:
```
array1 = np.ones((10,5))
array2 = 2*np.ones((10,3))
array3 = 3*np.ones((10,5))
```
### 5.1.1 Concatenation
The first operation we can perform is concatenation, i.e. assembling the two 2D arrays into a larger 2D array. Of course we have to be careful with the size of each dimension. For example if we try to concatenate ```array1``` and ```array2``` along the first dimension, we get:
```
np.concatenate([array1, array2])
```
Both arrays have 10 rows, but one has 3 and the other 5 columns. We can therefore only concatenate them along the second dimension:
```
array_conc = np.concatenate([array1, array2], axis = 1)
array_conc.shape
plt.imshow(array_conc, cmap = 'gray');
```
If we now use our example of real image, we can for example concatenate the two first channels of our RGB image:
```
plt.imshow(np.concatenate([image[:,:,0], image[:,:,1]]));
plt.imshow(np.concatenate([image[:,:,0], image[:,:,1]], axis=1));
```
### 5.1.2 Stacking
If we have several arrays with exact same sizes, we can also *stack* them, i.e. assemble them along a *new* dimension. For example we can create a 3D stack out of two 2D arrays:
```
array_stack = np.stack([array1, array3])
array_stack.shape
```
We can select the dimension along which to stack, again by using the ```axis``` keyword. For example if we want our new dimensions to be the *third* axis we can write:
```
array_stack = np.stack([array1, array3], axis = 2)
array_stack.shape
```
With our real image, we can for example stack the different channels in a new order (note that one could do that easily with ```np.swapaxis```):
```
image_stack = np.stack([image[:,:,2], image[:,:,0], image[:,:,1]], axis=2)
plt.imshow(image_stack);
```
As we placed the red channel, which has the highest intensity, at the position of the green one (second position) our image now is dominated by green tones.
## 5.2 Arrays of different dimensions
### 5.2.1 Broadcasting
Numpy has a powerful feature called **broadcasting**. This is the feature that for example allows you to write:
```
2 * array1
```
Here we just combined a single number with an array and Numpy *re-used* or *broadcasted* the element with less dimensions (the number 2) across the entire ```array1```. This does not only work with single numbers but also with arrays of different dimensions. Broadcasting can become very complex, so we limit ourselves here to a few common examples.
The general rule is that in an operation with arrays of different dimensions, **missing dimensions** or **dimensions of size 1** get *repeated* to create two arrays of the same size. Note that the comparison of dimension sizes starts from the **last** dimension. For example if we have a 1D array and a 2D array:
```
array1D = np.arange(4)
array1D
array2D = np.ones((6,4))
array2D
array1D * array2D
```
Here ```array1D```, which has a *single line*, got *broadcasted* over *each line* of the 2D array ```array2D```. Note that the size of each dimension is important. If ```array1D``` had, for example, a different number of columns, broadcasting would not work:
```
array1D = np.arange(3)
array1D
array1D * array2D
```
As mentioned above, the comparison of dimension sizes starts from the last dimension, so for example if ```array1D``` had a length of 6, like the first dimension of ```array2D```, broadcasting would fail:
```
array1D = np.arange(6)
array1D.shape
array2D.shape
array1D * array2D
```
### 5.2.2 Higher dimensions
Broadcasting can be done in higher dimensional cases. Imagine for example that you have an RGB image with dimensions $NxMx3$. If you want to modify each channel independently, for example to rescale them, you can use broadcasting. We can use again our real image:
```
image.shape
scale_factor = np.array([0.5, 0.1, 1])
scale_factor
rescaled_image = scale_factor * image
rescaled_image
plt.imshow(rescaled_image.astype(int))
```
Note that if the image has the dimensions $3xNxM$ (RGB planes in the first dimension), we encounter the same problem as before: a mismatch in size for the **last** dimension:
```
image2 = np.rollaxis(image, axis=2)
image2.shape
scale_factor.shape
scale_factor * image2
```
### 5.2.3 Adding axes
As seen above, if we have a mismatch in dimension size, the broadcasting mechanism doesn't work. To salvage such cases, we still have the possibility to *add* empty axes in an array to restore the matching of the non-empty dimension.
In the above example our arrays have the following shapes:
```
image2.shape
scale_factor.shape
```
So we need to add two "empty" axes after the single dimension of ```scale_factor```:
```
scale_factor_corr = scale_factor[:, np.newaxis, np.newaxis]
scale_factor_corr.shape
image2_rescaled = scale_factor_corr * image2
```
# Data Processing and Versioning
```
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.pyplot import figure
import seaborn as sn
from azureml.core import Workspace, Dataset
# import dataset
df = pd.read_csv('Dataset/weather_dataset_raw.csv')
```
# 1. Data quality assessment
```
df.head()
df.describe()
df.shape
df.dtypes
```
#### Check for missing data
```
df.isnull().values.any()
```
# 2. Calibrate missing data
```
df['Weather_conditions'].fillna(method='ffill',inplace=True,axis=0)
df.isnull().values.any()
df.Weather_conditions.value_counts()
df["Weather_conditions"].replace({"snow": "no_rain", "clear": "no_rain"}, inplace=True)
df.Weather_conditions.value_counts()
```
#### Convert Timestamp to Datetime format
```
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
```
#### Convert text data to numeric using Label Encoding
```
y = df['Weather_conditions']
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
y=le.fit_transform(y)
y = pd.DataFrame(data=y, columns=["Current_weather_condition"])
df = pd.concat([df, y], axis=1)
df.Current_weather_condition.value_counts()
df.drop(['Weather_conditions'],axis=1,inplace=True)
```
#### Future Weather_condition
```
df['Future_weather_condition'] = df.Current_weather_condition.shift(4, axis = 0)
df.dropna(inplace=True)
df['Future_weather_condition'] = df['Future_weather_condition'].apply(np.int64)
# Result - rain is 0 and no_rain is 1
df.head()
```
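A minimal sketch of what `shift()` does here (toy series, not the weather data): the shifted copy is offset by four rows, leaving `NaN`s that `dropna()` then removes.

```python
import pandas as pd

# Toy sketch: Series.shift(4) moves values down by 4 rows, leaving NaNs
# at the top; dropna() then discards rows without a paired value.
s = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
shifted = s.shift(4)
paired = pd.DataFrame({'current': s, 'shifted': shifted}).dropna()
print(shifted.isna().sum())   # 4 NaNs introduced at the top
print(len(paired))            # 4 rows survive
```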
## b) Understanding Correlations between data (columns)
```
df.corr(method ='pearson')
corrMatrix = df.corr()
sn.heatmap(corrMatrix, annot=True)
plt.show()
# Filter or drop irrelevant data columns
df.drop(['S_No', 'Apparent_Temperature_C'],axis=1,inplace=True)
from matplotlib.pyplot import figure
figure(num=None, figsize=(12, 10), dpi=80, facecolor='w', edgecolor='w')
df.corr(method ='pearson')['Future_weather_condition'].sort_values(ascending=True).drop(['Future_weather_condition']).plot(kind='bar', width=0.9)
```
## d) Timeseries analysis of Temperature
```
time = df['Timestamp']
temp = df['Temperature_C']
## plot graph
plt.plot(time, temp)
plt.show()
# Save processed dataset
df.to_csv('Dataset/weather_dataset_processed.csv',index=False)
```
## Register dataset to the workspace
```
subscription_id = '6faa9ede-4786-48dc-9c1e-0262e2844ebf'
resource_group = 'Learn_MLOps'
workspace_name = 'MLOps_WS'
workspace = Workspace(subscription_id, resource_group, workspace_name)
# get the datastore to upload prepared data
datastore = workspace.get_default_datastore()
# upload the local file from src_dir to the target_path in datastore
datastore.upload(src_dir='Dataset', target_path='data')
dataset = Dataset.Tabular.from_delimited_files(datastore.path('data/weather_dataset_processed.csv'))
# preview the first 3 rows of the dataset from datastore
dataset.take(3).to_pandas_dataframe()
# Register Dataset to workspace
weather_ds = dataset.register(workspace=workspace,
name='processed_weather_data_portofTurku',
description='processed weather data')
```
#### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Classification on imbalanced data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](https://www.tensorflow.org/guide/keras/overview) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data.
This tutorial contains complete code to:
* Load a CSV file using Pandas.
* Create train, validation, and test sets.
* Define and train a model using Keras (including setting class weights).
* Evaluate the model using various metrics (including precision and recall).
* Try common techniques for dealing with imbalanced data like:
* Class weighting
* Oversampling
## Setup
```
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
```
## Data processing and exploration
### Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data. It can be used to download CSVs into a Pandas [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html#pandas.DataFrame).
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
```
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
```
### Examine the class label imbalance
Let's look at the dataset imbalance:
```
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
```
This shows the small fraction of positive samples.
### Clean, split and normalize the data
The raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
```
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
```
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
```
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
```
Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
```
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
```
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
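Since this notebook uses sklearn's scaler, one hedged sketch of "preserving the preprocessing" is to bundle it with an estimator in a `Pipeline` (toy data invented for illustration; in Keras the analogue would be a preprocessing layer attached to the model):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hedged sketch: a Pipeline ties the fitted scaler to the estimator, so
# exactly the same preprocessing runs at inference time.
X = np.array([[0.0, 10.0], [1.0, 20.0], [2.0, 30.0], [3.0, 40.0]])
y = np.array([0, 0, 1, 1])
pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression())])
pipe.fit(X, y)
print(pipe.predict(X))
```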
### Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
* Do these distributions make sense?
* Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.
* Can you see the difference between the distributions?
* Yes the positive examples contain a much higher rate of extreme values.
```
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(x=pos_df['V5'], y=pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(x=neg_df['V5'], y=neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
```
## Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
```
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
```
### Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
* **False** negatives and **false** positives are samples that were **incorrectly** classified
* **True** negatives and **true** positives are samples that were **correctly** classified
* **Accuracy** is the percentage of examples correctly classified
> $\frac{\text{true samples}}{\text{total samples}}$
* **Precision** is the percentage of **predicted** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false positives}}$
* **Recall** is the percentage of **actual** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
* **AUPRC** refers to Area Under the Curve of the Precision-Recall Curve. This metric computes precision-recall pairs for different probability thresholds.
Note: Accuracy is not a helpful metric for this task. You can have 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
* [Relationship between Precision-Recall and ROC Curves](https://www.biostat.wisc.edu/~page/rocpr.pdf)
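The definitions above can be checked on a toy confusion matrix (counts invented for illustration):

```python
# Toy counts (invented): tp/fp/tn/fn from a hypothetical classifier.
tp, fp, tn, fn = 80, 20, 890, 10

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 0.97
precision = tp / (tp + fp)                    # 0.80
recall    = tp / (tp + fn)                    # ~0.889
print(accuracy, precision, recall)
```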
## Baseline model
### Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048, this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, they would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
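A quick back-of-envelope check of that batch-size choice, using the dataset's overall fraud rate:

```python
# Expected number of fraudulent samples in one 2048-sample batch,
# using the quoted totals (492 positives out of 284,807 transactions).
pos_rate = 492 / 284_807
expected_pos = 2048 * pos_rate
print(round(expected_pos, 1))   # roughly 3-4 positives per batch
```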
```
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_prc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
```
Test run the model:
```
model.predict(train_features[:10])
```
### Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence.
With the default bias initialization the loss should be about `math.log(2) = 0.69314`
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
```
initial_bias = np.log([pos/neg])
initial_bias
```
Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: `pos/total = 0.0018`
```
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
```
With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
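The quoted values can also be checked analytically, using the class counts from earlier in the tutorial:

```python
import numpy as np

# Analytic check: default-init loss vs. the entropy of the label distribution.
pos, neg = 492, 284_315
p0 = pos / (pos + neg)
naive_loss = np.log(2)                                  # ~0.693
tuned_loss = -p0*np.log(p0) - (1 - p0)*np.log(1 - p0)   # ~0.013
print(naive_loss / tuned_loss)                          # roughly a 50x reduction
```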
This initial loss is about 50 times less than it would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
### Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training:
```
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
```
### Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
```
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train ' + label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val ' + label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
```
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
### Train the model
```
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
```
### Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in the [Overfit and underfit](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit) tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
```
def plot_metrics(history):
metrics = ['loss', 'prc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend();
plot_metrics(baseline_history)
```
Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
### Evaluate metrics
You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label:
```
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
```
Evaluate your model on the test dataset and display the results for the metrics you created above:
```
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
```
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
### Plot the ROC
Now plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
```
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right');
```
### Plot the AUPRC
Now plot the [AUPRC](https://developers.google.com/machine-learning/glossary?hl=en#PR_AUC): the area under the interpolated precision-recall curve, obtained by plotting (recall, precision) points for different values of the classification threshold. Depending on how it's calculated, PR AUC may be equivalent to the average precision of the model.
```
def plot_prc(name, labels, predictions, **kwargs):
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)
plt.plot(recall, precision, label=name, linewidth=2, **kwargs)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right');
```
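The average-precision equivalence mentioned above can be sketched with sklearn on toy scores (labels and scores invented for illustration):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy example: average precision summarizes the PR curve as a single number.
labels = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
ap = average_precision_score(labels, scores)
print(round(ap, 4))   # 0.9167
```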
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
## Class weights
### Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
```
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
```
### Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using `class_weight` changes the range of the loss, which may affect the stability of training depending on the optimizer. Optimizers whose step size depends on the magnitude of the gradient, like `tf.keras.optimizers.SGD`, may fail. The optimizer used here, `tf.keras.optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses of the two models are not comparable.
```
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
```
### Check training history
```
plot_metrics(weighted_history)
```
### Evaluate metrics
```
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
```
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right');
```
### Plot the AUPRC
```
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right');
```
## Oversampling
### Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
```
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
```
#### Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
```
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
```
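The balancing property of this resampling can be checked on a tiny synthetic split; the shapes and names here are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
toy_pos = rng.normal(size=(5, 3))    # 5 positive examples (hypothetical)
toy_neg = rng.normal(size=(100, 3))  # 100 negative examples (hypothetical)

# Sample positive indices with replacement until both classes are equal.
idx = rng.integers(0, len(toy_pos), size=len(toy_neg))
toy_res_pos = toy_pos[idx]

toy_resampled = np.concatenate([toy_res_pos, toy_neg], axis=0)
print(toy_resampled.shape)  # (200, 3): both classes now contribute 100 rows
```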
#### Using `tf.data`
If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
```
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
```
Each dataset provides `(feature, label)` pairs:
```
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
```
Merge the two together using `tf.data.Dataset.sample_from_datasets`:
```
resampled_ds = tf.data.Dataset.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
```
To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
```
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
```
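To see where `2.0*neg/BATCH_SIZE` comes from: with 50/50 sampling, each batch holds about `BATCH_SIZE/2` negatives on average, so seeing every negative once takes `neg / (BATCH_SIZE/2)` batches. A sketch with made-up numbers (the real `neg` count depends on your data):

```python
import numpy as np

batch = 2048       # hypothetical batch size
n_neg = 181_966    # hypothetical number of negative examples

# Each balanced batch contains ~batch/2 negatives on average.
steps = np.ceil(n_neg / (batch / 2))  # same as np.ceil(2.0 * n_neg / batch)
print(int(steps))  # 178
```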
### Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
```
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
### Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
```
plot_metrics(resampled_history)
```
### Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the `tf.keras.callbacks.EarlyStopping` finer control over when to stop training.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
```
### Re-check training history
```
plot_metrics(resampled_history)
```
### Evaluate metrics
```
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
```
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right');
```
### Plot the AUPRC
```
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right');
```
## Applying this tutorial to your problem
Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data: do your best to collect as many samples as possible, and give substantial thought to which features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade-offs between different types of errors.
# Keypoint Detectors
```
import os
import csv
import matplotlib.pyplot as plt
data_dir = "data/keypoints"
names = ["SHITOMASI", "HARRIS", "FAST", "BRISK", "ORB", "AKAZE", "SIFT"]
images = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
data = dict()
# read data
class KeypointLineWrapper:
def __init__(self, lst):
self._lst = lst
def num(self):
return int(self._lst[0])
def min_size(self):
return float(self._lst[1])
def max_size(self):
return float(self._lst[2])
def mean_size(self):
return float(self._lst[3])
min_num = 999999999
max_num = 0
min_min_size = 999999999
max_min_size = 0
min_max_size = 999999999
max_max_size = 0
min_mean_size = 999999999
max_mean_size = 0
for name in names:
with open(os.path.join(data_dir, name + '_keypoints_number.txt')) as f:
csv_reader = csv.reader(f, delimiter=' ')
data[name] = { 'num': [], 'min': [], 'max': [], 'mean': [] }
for row in csv_reader:
lw = KeypointLineWrapper(row)
data[name]['num'].append(lw.num())
data[name]['min'].append(lw.min_size())
data[name]['max'].append(lw.max_size())
data[name]['mean'].append(lw.mean_size())
min_num = min(min_num, lw.num())
max_num = max(max_num, lw.num())
min_min_size = min(min_min_size, lw.min_size())
max_min_size = max(max_min_size, lw.min_size())
min_max_size = min(min_max_size, lw.max_size())
max_max_size = max(max_max_size, lw.max_size())
min_mean_size = min(min_mean_size, lw.mean_size())
max_mean_size = max(max_mean_size, lw.mean_size())
fig, axes = plt.subplots(nrows=2, ncols=len(names),
sharex=True, sharey=False,
figsize=(50, 10))
for i, name in enumerate(names):
# number of keypoints
axes[0][i].plot(images, data[name]['num'],
label="min="+str(min(data[name]['num']))+
" mean="+str(sum(data[name]['num'])//len(data[name]['num']))+
" max="+str(max(data[name]['num'])))
axes[0][i].set_ylim(min_num - 1, max_num + 1)
axes[0][i].set_xlim(-1, 10)
axes[0][i].set_title(name)
axes[0][i].legend()
# keypoints' neighborhood sizes
axes[1][i].plot(images, data[name]['min'], color='red',
label="min_min="+str(min(data[name]['min']))+
" max_min="+str(max(data[name]['min'])))
axes[1][i].plot(images, data[name]['mean'], color='blue',
label="min_mean="+str(min(data[name]['mean']))+
" max_mean="+str(max(data[name]['mean'])))
axes[1][i].plot(images, data[name]['max'], color='green',
label="min_max="+str(min(data[name]['max']))+
" max_max="+str(max(data[name]['max'])))
axes[1][i].set_ylim(min_min_size - 5, max_max_size + 5)
axes[1][i].set_xlim(-1, 10)
axes[1][i].legend()
axes[0][0].set_ylabel('# of keypoints')
axes[1][0].set_ylabel('neighborhood sizes')
fig.show()
# print min/max statistics among all images and detectors
string = \
"""\
min_num = {min_num}
max_num = {max_num}
min_min_size = {min_min_size}
max_min_size = {max_min_size}
min_max_size = {min_max_size}
max_max_size = {max_max_size}
min_mean_size = {min_mean_size}
max_mean_size = {max_mean_size}
"""\
.format(min_num=min_num,
max_num=max_num,
min_min_size=min_min_size,
max_min_size=max_min_size,
min_max_size=min_max_size,
max_max_size=max_max_size,
min_mean_size=min_mean_size,
max_mean_size=max_mean_size)
print(string)
```
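Each line of a keypoints-statistics file is just four space-separated numbers, which is what the `KeypointLineWrapper` above unpacks. A hypothetical line can be parsed the same way without touching the data files (the values here are invented for illustration):

```python
import csv
import io

# One made-up line in the `<num> <min_size> <max_size> <mean_size>` format.
sample = "1370 4.0 56.0 21.9\n"
row = next(csv.reader(io.StringIO(sample), delimiter=' '))

# Mirrors KeypointLineWrapper.num()/min_size()/max_size()/mean_size().
num = int(row[0])
min_size = float(row[1])
max_size = float(row[2])
mean_size = float(row[3])
print(num, min_size, max_size, mean_size)
```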
# Keypoint Matches
```
import os
import csv
import numpy as np
from glob import glob
from collections import OrderedDict
import matplotlib.pyplot as plt
data_dir = "data/matches"
data = OrderedDict()
# read data
class MatchesLineWrapper:
def __init__(self, detector, descriptor, lst):
self._lst = lst
self._detector = detector
self._descriptor = descriptor
def detector(self):
return self._detector
def descriptor(self):
return self._descriptor
def prev_img(self):
return int(self._lst[0])
def cur_img(self):
return int(self._lst[1])
def matches(self):
return int(self._lst[2])
files = glob(os.path.join(data_dir, '*_matches.txt'))
counter = 1
for file in files:
detector = file.split('_')[0].split('/')[-1]
descriptor = file.split('_')[1]
key = '(' + str(counter) + ') ' + detector + '+' + descriptor
counter += 1
with open(file) as f:
csv_reader = csv.reader(f, delimiter=' ')
data[key] = OrderedDict()
for row in csv_reader:
lw = MatchesLineWrapper(detector, descriptor, row)
inner_key = str(lw.prev_img()) + '-' + str(lw.cur_img())
data[key][inner_key] = lw.matches()
fig, ax = plt.subplots(1, 1, figsize=(20, 20))
label_helper = dict()
for combination_id, combination_dict in data.items():
x = np.arange(0, len(combination_dict), 1)
y = list(combination_dict.values())
if not label_helper.get(tuple(y), None):
label_helper[tuple(y)] = []
label_helper[tuple(y)].append(combination_id[1:].split(')')[0])
for combination_id, combination_dict in data.items():
x = np.arange(0, len(combination_dict), 1)
y = list(combination_dict.values())
ax.plot(x, y,
label=combination_id + ": min=" + str(min(combination_dict.values()))
+ " mean=" + str(sum(combination_dict.values()) // len(combination_dict.values()))
+ " max=" + str(max(combination_dict.values())))
ax.set_xticks(np.arange(0, len(combination_dict), 1))
ax.set_xticklabels(combination_dict.keys())
ax.set_xlabel('image pairs')
ax.set_ylabel('number of matches')
pos = 0
for line, ids in label_helper.items():
ax.annotate('(' + ','.join(ids) + ')', # this is the text
(pos, line[pos]), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(0,0), # distance from text to points (x,y)
ha='center') # horizontal alignment can be left, right or center
pos = (pos + 1) % 9
ax.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=1, mode="expand", borderaxespad=0.)
plt.show()
```
# Detector / Descriptor Timings
```
import os
import csv
import numpy as np
from glob import glob
from collections import OrderedDict
import matplotlib.pyplot as plt
data_detectors_dir = "data/timings/detectors"
data_descriptors_dir = "data/timings/descriptors"
data_detectors = OrderedDict()
data_descriptors = OrderedDict()
files_detectors = glob(os.path.join(data_detectors_dir, "*_detector_timings.txt"))
files_descriptors = glob(os.path.join(data_descriptors_dir, "*_descriptor_timings.txt"))
fig, axes = plt.subplots(nrows=2, ncols=1,
sharex=False, sharey=False,
figsize=(10, 20))
for fn in files_detectors:
with open(fn) as f:
csv_reader = csv.reader(f, delimiter=' ')
detector = fn.split("_")[0].split('/')[-1]
data_detectors[detector] = []
for row in csv_reader:
data_detectors[detector].append(float(row[0]))
for fn in files_descriptors:
with open(fn) as f:
csv_reader = csv.reader(f, delimiter=' ')
descriptor = fn.split("_")[0].split('/')[-1]
data_descriptors[descriptor] = []
for row in csv_reader:
data_descriptors[descriptor].append(float(row[0]))
axes[0].boxplot(list(data_detectors.values()), notch=True,
labels=list(data_detectors.keys()), showmeans=True)
axes[1].boxplot(list(data_descriptors.values()), notch=True,
labels=list(data_descriptors.keys()), showmeans=True)
axes[0].set_xlabel('detectors')
axes[0].set_ylabel('time (ms)')
axes[1].set_xlabel('descriptors')
axes[1].set_ylabel('time (ms)')
# draw statistics
for i, vals in enumerate(data_detectors.values()):
mean = sum(vals) / len(vals)
axes[0].annotate('%.2f' % mean, # mean
(i+1, mean), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(50,0), # distance from text to points (x,y)
ha='right') # horizontal alignment can be left, right or center
for i, vals in enumerate(data_descriptors.values()):
mean = sum(vals) / len(vals)
axes[1].annotate('%.2f' % mean, # mean
(i+1, mean), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(50,0), # distance from text to points (x,y)
ha='right') # horizontal alignment can be left, right or center
plt.show()
```
# Generate a Vecsigrafo using Swivel
In this notebook we show how to generate a Vecsigrafo based on a subset of the [UMBC corpus](https://ebiquity.umbc.edu/resource/html/id/351/UMBC-webbase-corpus).
We follow the procedure described in [Towards a Vecsigrafo: Portable Semantics in Knowledge-based Text Analytics](https://pdfs.semanticscholar.org/b0d6/197940d8f1a5fa0d7474bd9a94bd9e44a0ee.pdf) and depicted in the following figure:

## Tokenization and Word Sense Disambiguation
The main difference with standard swivel is that:
- we use word-sense disambiguation on the text as a pre-processing step (Swivel simply uses white-space tokenization)
- each 'token' in the resulting sequences is composed of a lemma and an optional concept identifier.
### Disambiguators
If we are going to apply WSD, we will need some disambiguation strategy. Unfortunately, there are not many open-source, high-performance disambiguators available. At [Expert System](https://www.expertsystem.com/), we have a [state-of-the-art disambiguator](https://www.expertsystem.com/products/cogito-cognitive-technology/semantic-technology/disambiguation/) that assigns **syncon**s (our version of synsets) to lemmas in the text.
Since Expert System's disambiguator and semantic KG are proprietary, in this notebook we will be mostly using WordNet (although we may present some results and examples based on Expert System's results). We have implemented a lightweight disambiguation strategy, proposed by [Mancini, M., Camacho-Collados, J., Iacobacci, I., & Navigli, R. (2017). Embedding Words and Senses Together via Joint Knowledge-Enhanced Training. CoNLL.](http://arxiv.org/abs/1612.02703), which has allowed us to produce disambiguated corpora based on WordNet 3.1.
To be able to inspect the disambiguated corpus, let's make sure we have access to WordNet in our environment by executing the following cell.
```
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn
wn.synset('Maya.n.02')
```
### Tokenizations
When applying a disambiguator, the tokens are no longer (groups of) words. Each token can contain different types of information, we generally keep the following token information:
* `t`: text, the original text (possibly normalised, i.e. lower-cased)
* `l`: lemma, the lemma form of the word
* `g`: grammar: the grammar type
* `s`: syncon (or synset) identifier
### Example WordNet
We have included a small sample of our disambiguated UMBC corpus as part of our [GitHub tutorial repo](https://github.com/HybridNLP2018/tutorial). Execute the following cell to clone the repo, unzip the sample corpus and print the first line of the corpus
```
%cd /content/
!git clone https://github.com/HybridNLP2018/tutorial.git
%cd /content/tutorial/datasamples/
!unzip umbc_tlgs_wnscd_5K.zip
toked_corpus = '/content/tutorial/datasamples/umbc_tlgs_wnscd_5K'
!head -n1 {toked_corpus}
%cd /content/
```
You should see, among others, the first line in the corpus, which starts with:
```
the%7CGT_ART mayan%7Clem_Mayan%7CGT_ADJ%7Cwn31_Maya.n.03 image%7Clem_image%7CGT_NOU%7Cwn31_effigy.n.01
```
The file included in the GitHub repo for this tutorial is a subset of a disambiguated tokenization of the UMBC corpus: it contains only the first five thousand lines (the full corpus has about 40 million lines), which is enough to demonstrate the steps needed to generate embeddings.
The last output, from the cell above, shows the format we are using to represent the tokenized corpus. We use white space to separate the tokens, and have URL encoded each token to avoid mixing up tokens. Since this format is hard to read, we provide a library to inspect the lines in an easy manner. Execute the following cell to display the first two lines in the corpus as a table.
```
%cd /content/
import tutorial.scripts.wntoken as wntoken
import pandas
# open the file and produce a list of python dictionaries describing the tokens
corpus_tokens = wntoken.open_as_token_dicts(toked_corpus, max_lines=2)
# convert the tokens into a pandas DataFrame to display in table form
pandas.DataFrame(corpus_tokens, columns=['line', 't', 'l', 'g', 's', 'glossa'])
```
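Decoding a single token by hand shows how the URL-encoded format maps to fields: `%7C` decodes to the `|` separator between text, lemma, grammar type and synset. A minimal sketch using a token copied from the first corpus line shown earlier:

```python
from urllib.parse import unquote

# Token taken from the first line of the sample corpus shown above.
raw = "mayan%7Clem_Mayan%7CGT_ADJ%7Cwn31_Maya.n.03"

fields = unquote(raw).split('|')  # %7C decodes to the '|' field separator
text, lemma, grammar, synset = fields
print(text, lemma, grammar, synset)
```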
### Example Cogito
As a second example, analysing the original sentence:
EXPERIMENTAL STUDY We conducted an empirical evaluation to assess the effectiveness
using Cogito gives us

We filter out some of the words and keep only the lemmas and syncon ids, encoding them into the following sequence of disambiguated tokens:
```
en#86052|experimental en#2686|study en#76710|conduct en#86047|empirical en#3546|evaluation en#68903|assess en#25094|effectiveness
```
## Vocabulary and Co-occurrence matrix
Next, we need to count the co-occurrences in the disambiguated corpus. We can either:
- use **standard swivel prep**: in this case each *<text>|<lemma>|<grammar>|<synset>* tuple will be treated as a separate token. For the example sentence from UMBC, presented above, we would then get that `mayan|lem_Mayan|GT_ADJ|wn31_Maya.n.03` has a co-occurrence count of 1 with `image|lem_image|GT_NOU|wn31_effigy.n.01`. This would result in a very large vocabulary.
- use **joint-subtoken prep**: in this case, you can specify which individual subtoken information you want to take into account. In this notebook we will use **ls** information, hence each synset and each lemma are treated as separate entities in the vocabulary and will be represented with different embeddings. For the example sentence we would get that `lem_Mayan` has a co-occurrence count of 1 with `wn31_Maya.n.03`, `lem_image` and `wn31_effigy.n.01`.
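The joint-subtoken counting idea can be sketched in a few lines: every token contributes each of its subtokens (lemma, and synset when present) to the co-occurrence counts of its neighbours' subtokens. This toy version uses a window of one token on each side and made-up tokens; it only illustrates the bookkeeping, not our actual Java implementation:

```python
from collections import Counter

# Each token is a tuple of its subtokens (lemma, plus synset when available).
sentence = [("lem_Mayan", "wn31_Maya.n.03"),
            ("lem_image", "wn31_effigy.n.01"),
            ("lem_stone",)]

coocs = Counter()
for left, right in zip(sentence, sentence[1:]):  # window = 1, for simplicity
    for a in left:
        for b in right:
            # Unordered pair: co-occurrence is symmetric.
            coocs[frozenset((a, b))] += 1

# The lemma of one token co-occurs with the synset of its neighbour.
print(coocs[frozenset(("lem_Mayan", "wn31_effigy.n.01"))])  # 1
```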
```
import os
import numpy as np
```
### Standard Swivel Prep
For the **standard swivel prep**, we can simply call `prep` using the `!python` command. In this case we have the `toked_corpus`, which contains the disambiguated sequences as shown above. The output will be a set of sharded co-occurrence submatrices, as explained in the notebook for creating word vectors.
We set the `shard_size` to 512 since the corpus is quite small. For larger corpora we could use the standard value of 4096.
```
!mkdir /content/umbc/
!mkdir /content/umbc/coocs
!mkdir /content/umbc/coocs/tlgs_wnscd_5k_standard
coocs_path = '/content/umbc/coocs/tlgs_wnscd_5k_standard/'
!python tutorial/scripts/swivel/prep.py --input={toked_corpus} --output_dir={coocs_path} --shard_size=512
```
Expected output:
```
... tensorflow flags ....
vocabulary contains 8192 tokens
writing shard 256/256
Wrote vocab and sum files to /content/umbc/coocs/tlgs_wnscd_5k_standard/
Wrote vocab and sum files to /content/umbc/coocs/tlgs_wnscd_5k_standard/
done!
```
```
!head -n15 /content/umbc/coocs/tlgs_wnscd_5k_standard/row_vocab.txt
```
As the cells above show, applying standard prep results in a vocabulary of over 8K "tokens"; however, each token is still represented as a URL-encoded combination of the plain text, lemma, grammar type and synset (when available).
### Joint-subtoken Prep
For the **joint-subtoken prep** step, we have a Java implementation that is not open source yet (it is still tied to proprietary code; we are working on refactoring it so that Cogito subtokens are just a special case). However, we ***provide pre-computed co-occurrence files***.
Although not open source, we describe the steps we executed to help you implement a similar pipeline.
First, we ran our implementation of subtoken prep on the corpus. Notice:
* we are only including lemma and synset information (i.e. we are not including plain text and grammar information).
* furthermore, we are filtering the corpus by
1. removing any tokens related to punctuation marks (PNT), auxiliary verbs (AUX) and articles (ART), since we think these do not contribute much to the semantics of words.
2. replacing tokens with grammar types `ENT` (entities) and `NPH` (proper names) with generic variants `grammar#ENT` and `grammar#NPH` respectively. The rationale is that, depending on the input corpus, names of people or organizations may appear a few times, but may be filtered out if they do not appear enough times. This ensures such tokens are kept in the vocabulary and contribute to the embeddings of words nearby. The main disadvantage is that we will not have some proper names in our final vocabulary.
```
java $JAVA_OPTIONS net.expertsystem.word2vec.swivel.SubtokPrep \
--input C:/hybridNLP2018/tutorial/datasamples/umbc_tlgs_wnscd_5K \
--output_dir C:/corpora/umbc/coocs/tlgs_wnscd_5K_ls_f/ \
--expected_seq_encoding TLGS_WN \
--sub_tokens \
--output_subtokens "LEMMA,SYNSET" \
--remove_tokens_with_grammar_types "PNT,AUX,ART" \
--generalise_tokens_with_grammar_types "ENT,NPH" \
--shard_size 512
```
The output log looked as follows:
```
INFO net.expertsystem.word2vec.swivel.SubtokPrep - expected_seq_encoding set to 'TLGS_WN'
INFO net.expertsystem.word2vec.swivel.SubtokPrep - remove_tokens_with_grammar_types set to PNT,AUX,ART
INFO net.expertsystem.word2vec.swivel.SubtokPrep - generalise_tokens_with_grammar_types set to ENT,NPH
INFO net.expertsystem.word2vec.swivel.SubtokPrep - Creating vocab for C:\hybridNLP2018\tutorial\datasamples\umbc_tlgs_wnscd_5K
INFO net.expertsystem.word2vec.swivel.SubtokPrep - read 5000 lines from C:\hybridNLP2018\tutorial\datasamples\umbc_tlgs_wnscd_5K
INFO net.expertsystem.word2vec.swivel.SubtokPrep - filtered 166152 tokens from a total of 427796 (38,839%)
generalised 1899 tokens from a total of 427796 (0,444%)
full vocab size 21321
INFO net.expertsystem.word2vec.swivel.SubtokPrep - Vocabulary contains 5632 tokens (21321 full count, 5913 appear > 5 times)
INFO net.expertsystem.word2vec.swivel.SubtokPrep - Flushing 1279235 co-occ pairs
INFO net.expertsystem.word2vec.swivel.SubtokPrep - Wrote 121 tmpShards to disk
```
We have included the output of this process as part of the GitHub repo for the tutorial. We will unzip this folder to inspect the results:
```
!unzip /content/tutorial/datasamples/precomp-coocs-tlgs_wnscd_5K_ls_f.zip -d /content/umbc/coocs/
precomp_coocs_path = '/content/umbc/coocs/tlgs_wnscd_5K_ls_f'
```
The previous cell extracts the pre-computed co-occurrence shards and defines a variable `precomp_coocs_path` that points to the folder where these shards are stored.
Next, we print the first 10 elements of the vocabulary to see the format that we are using to represent the lemmas and synsets:
```
!head -n10 {precomp_coocs_path}/row_vocab.txt
```
As the output above shows, the vocabulary we get with `subtoken prep` is smaller (5.6K elements instead of over 8K) and it contains individual lemmas and synsets (it also contains *special* elements grammar#ENT and grammar#NPH, as described above).
**More importantly**, the co-occurrence counts take into account the fact that certain lemmas co-occur more frequently with certain other lemmas and synsets, which should be taken into account when learning embedding representations.
## Learn embeddings from co-occurrence matrix
With the sharded co-occurrence matrices created in the previous section it is now possible to learn embeddings by calling the `swivel.py` script. This launches a tensorflow application based on various parameters (most of which are self-explanatory) :
- `input_base_path`: the folder with the co-occurrence matrix (protobuf files with the sparse matrix) generated above.
- `submatrix_` rows and columns need to be the same size as the `shard_size` used in the `prep` step.
- `num_epochs` the number of times to go through the input data (all the co-occurrences in the shards). We have found that for large corpora, the learning algorithm converges after a few epochs, while for smaller corpora you need a larger number of epochs.
Execute the following cell to generate embeddings for the pre-computed co-occurrences.
```
vec_path = '/content/umbc/vec/tlgs_wnscd_5k_ls_f'
!python /content/tutorial/scripts/swivel/swivel.py --input_base_path={precomp_coocs_path} \
--output_base_path={vec_path} \
--num_epochs=40 --dim=150 \
--submatrix_rows=512 --submatrix_cols=512
```
This will take a few minutes, depending on your machine.
The result is a list of files in the specified output folder, including:
- the tensorflow graph, which defines the architecture of the model being trained
- checkpoints of the model (intermediate snapshots of the weights)
- `tsv` files for the final state of the column and row embeddings.
```
%ls {vec_path}
```
### Convert `tsv` files to `bin` file
As we've seen in previous notebooks, the `tsv` files are easy to inspect, but they take too much space and they are slow to load since we need to convert the different values to floats and pack them as vectors. Swivel offers a utility to convert the `tsv` files into a `bin`ary format. At the same time it combines the column and row embeddings into a single space (it simply adds the two vectors for each word in the vocabulary).
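The row/column combination that `text2bin.py` performs is just an element-wise sum. A numpy sketch with tiny made-up embeddings (two words, three dimensions):

```python
import numpy as np

# Hypothetical row and column embeddings for a 2-word vocabulary.
row_emb = np.array([[0.1, 0.2, 0.3],
                    [1.0, 0.0, -1.0]])
col_emb = np.array([[0.4, 0.1, 0.0],
                    [0.5, 0.5, 0.5]])

# text2bin combines the two spaces by adding the vectors word-by-word.
combined = row_emb + col_emb
print(combined[0])  # [0.5 0.3 0.3]
```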
```
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={precomp_coocs_path}/row_vocab.txt --output={vec_path}/vecs.bin \
{vec_path}/row_embedding.tsv \
{vec_path}/col_embedding.tsv
```
This adds the `vocab.txt` and `vecs.bin` to the folder with the vectors:
```
%ls {vec_path}
```
## Inspect the embeddings
As in previous notebooks, we can now use Swivel to inspect the vectors using the `Vecs` class. It accepts a `vocab_file` and a file for the binary serialization of the vectors (`vecs.bin`).
```
from tutorial.scripts.swivel import vecs
```
...and we can load existing vectors. Here we load some pre-computed embeddings, but feel free to use the embeddings you computed by following the steps above (although, due to random initialization of weight during the training step, your results may be different).
```
vectors = vecs.Vecs(precomp_coocs_path + '/row_vocab.txt',
vec_path + '/vecs.bin')
```
Next, let's define a basic method for printing the `k` nearest neighbors for a given word:
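A minimal cosine-similarity sketch of what such a method could look like, with a toy vocabulary and toy vectors (all names and data here are made up for illustration; the `k_neighbors` calls below use the method on Swivel's `Vecs` instance):

```python
import numpy as np

def k_neighbors(word, vocab, vecs, k=5):
    # Find the query word's index, cosine-normalise all vectors,
    # and return the k entries with the highest cosine similarity
    # (the query word itself will rank first).
    idx = vocab.index(word)
    normed = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    top = np.argsort(-sims)[:k]
    return [(vocab[i], float(sims[i])) for i in top]

# Toy vocabulary and 2-d vectors, invented for this sketch.
vocab_toy = ["lem_cat", "lem_dog", "lem_car"]
vecs_toy = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
neighbors = k_neighbors("lem_cat", vocab_toy, vecs_toy, k=2)
print(neighbors)
```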
And let's use the method on a few lemmas and synsets in the vocabulary:
```
import pandas as pd
pd.DataFrame(vectors.k_neighbors('lem_California'))
pd.DataFrame(vectors.k_neighbors('lem_semantic'))
pd.DataFrame(vectors.k_neighbors('lem_conference'))
pd.DataFrame(vectors.k_neighbors('wn31_conference.n.01'))
```
Note that using the Vecsigrafo approach gets us very different results than when using standard swivel (notebook 01):
* the results now include concepts (synsets), besides just words. Without further information, this makes interpreting the results harder since we now only have the concept id, but we can search for these concepts in the underlying KG (WordNet in this case) to explore the semantic network and get further information.
Of course, results may not be very good, since these have been derived from a very small corpus (5K lines from UMBC). In the excercise below, we encourage you to download and inspect pre-computed embeddings based on the full UMBC corpus.
```
pd.DataFrame(vectors.k_neighbors('lem_semantic web'))
pd.DataFrame(vectors.k_neighbors('lem_ontology'))
```
# Conclusion and Exercises
In this notebook we generated a vecsigrafo based on a disambiguated corpus. The resulting embedding space combines concept ids and lemmas.
We have seen that the resulting space:
1. may be harder to inspect due to the potentially opaque concept ids
2. is clearly different from standard Swivel embeddings
The question is: are the resulting embeddings *better*?
To get an answer, in the next notebook, we will look at **evaluation methods for embeddings**.
## Exercise 1: Explore full precomputed embeddings
We have also pre-computed embeddings for the full UMBC corpus. The provided `tar.gz` file is about 1.1GB, hence downloading it may take several minutes.
```
full_precomp_url = 'https://zenodo.org/record/1446214/files/vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz'
full_precomp_targz = '/content/umbc/vec/tlgs_wnscd_ls_f_6e_160d_row_embedding.tar.gz'
!wget {full_precomp_url} -O {full_precomp_targz}
```
Next, we have to unpack the vectors:
```
!tar -xzf {full_precomp_targz} -C /content/umbc/vec/
full_precomp_vec_path = '/content/umbc/vec/vecsi_tlgs_wnscd_ls_f_6e_160d'
%ls /content/umbc/vec/vecsi_tlgs_wnscd_ls_f_6e_160d/
```
The data only includes the `tsv` version of the vectors, so we need to convert these to the binary format that Swivel uses. For that, we also need a `vocab.txt` file, which we can derive from the tsv as follows:
```
with open(full_precomp_vec_path + '/vocab.txt', 'w', encoding='utf_8') as f:
with open(full_precomp_vec_path + '/row_embedding.tsv', 'r', encoding='utf_8') as vec_lines:
vocab = [line.split('\t')[0].strip() for line in vec_lines]
for word in vocab:
print(word, file=f)
```
Let's inspect the vocabulary:
```
!wc -l {full_precomp_vec_path}/vocab.txt
!grep 'wn31_' {full_precomp_vec_path}/vocab.txt | wc -l
!grep 'lem_' {full_precomp_vec_path}/vocab.txt | wc -l
```
As we can see, the embeddings have a vocabulary of just under 1.5M entries, 56K of which are synsets and most of the rest are lemmas.
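The same breakdown can be computed in pure Python instead of `grep` (illustrative, using the same `wn31_` and `lem_` prefixes as above):

```python
from collections import Counter

def prefix_counts(vocab_lines, prefixes=('wn31_', 'lem_')):
    """Count vocabulary entries by prefix; everything else goes to 'other'."""
    counts = Counter()
    for line in vocab_lines:
        word = line.strip()
        for p in prefixes:
            if word.startswith(p):
                counts[p] += 1
                break
        else:
            counts['other'] += 1
    return counts

# Toy vocabulary standing in for vocab.txt
sample = ['lem_California', 'wn31_conference.n.01', 'lem_semantic', 'grammar#NOUN']
print(prefix_counts(sample))
```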
Next, convert the `tsv` into the swivel's binary format. This can take a couple of minutes.
```
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={full_precomp_vec_path}/vocab.txt --output={full_precomp_vec_path}/vecs.bin \
{full_precomp_vec_path}/row_embedding.tsv
```
Now, we are ready to load the vectors.
```
vecsi_wn_umbc = vecs.Vecs(full_precomp_vec_path + '/vocab.txt',
full_precomp_vec_path + '/vecs.bin')
pd.DataFrame(vecsi_wn_umbc.k_neighbors('lem_California'))
pd.DataFrame(vecsi_wn_umbc.k_neighbors('lem_semantic'))
pd.DataFrame(vecsi_wn_umbc.k_neighbors('lem_conference'))
print(wn.synset('conference.n.01').definition())
pd.DataFrame(vecsi_wn_umbc.k_neighbors('wn31_conference.n.01'))
print(wn.synset('conference.n.03').definition())
pd.DataFrame(vecsi_wn_umbc.k_neighbors('wn31_conference.n.03'))
```
```
import numpy as np
import pandas as pd
from tqdm import tqdm
trainData = np.load('../../../dataFinal/npy_files/fin_t2_train.npy')
trainLabels = open('../../../dataFinal/finalTrainLabels.labels', 'r').readlines()
testData = np.load('../../../dataFinal/npy_files/fin_t2_test.npy')
testLabels = open('../../../dataFinal/finalTestLabels.labels', 'r').readlines()
valData = np.load('../../../dataFinal/npy_files/fin_t2_trial.npy')
valLabels = open('../../../dataFinal/finalDevLabels.labels', 'r').readlines()
for i in tqdm(range(len(trainLabels))):
trainLabels[i] = int(trainLabels[i])
for i in tqdm(range(len(testLabels))):
testLabels[i] = int(testLabels[i])
for i in tqdm(range(len(valLabels))):
valLabels[i] = int(valLabels[i])
trainLabels = np.array(trainLabels)
testLabels = np.array(testLabels)
valLabels = np.array(valLabels)
trainLabels = trainLabels.reshape((-1, ))
testLabels = testLabels.reshape((-1, ))
valLabels = valLabels.reshape((-1, ))
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
f1DictTrain = {}
precisionDictTrain = {}
recallDictTrain = {}
accDictTrain = {}
f1DictTest = {}
precisionDictTest = {}
recallDictTest = {}
accDictTest = {}
f1DictVal = {}
precisionDictVal = {}
recallDictVal = {}
accDictVal = {}
lossFn = ['hinge', 'log', 'modified_huber', 'perceptron']
for i in tqdm(range(len(lossFn))):
fn = lossFn[i]
sgd = SGDClassifier(learning_rate='optimal', verbose=0, max_iter=500, loss=fn)
sgd.fit(trainData, trainLabels)
trainPreds = sgd.predict(trainData)
testPreds = sgd.predict(testData)
valPreds = sgd.predict(valData)
key = str(fn)
f1DictTrain[key] = f1_score(trainLabels, trainPreds, average='weighted')
precisionDictTrain[key] = precision_score(trainLabels, trainPreds, average='weighted')
recallDictTrain[key] = recall_score(trainLabels, trainPreds, average='weighted')
accDictTrain[key] = accuracy_score(trainLabels, trainPreds, normalize=True)
f1DictTest[key] = f1_score(testLabels, testPreds, average='weighted')
precisionDictTest[key] = precision_score(testLabels, testPreds, average='weighted')
recallDictTest[key] = recall_score(testLabels, testPreds, average='weighted')
accDictTest[key] = accuracy_score(testLabels, testPreds, normalize=True)
f1DictVal[key] = f1_score(valLabels, valPreds, average='weighted')
precisionDictVal[key] = precision_score(valLabels, valPreds, average='weighted')
recallDictVal[key] = recall_score(valLabels, valPreds, average='weighted')
accDictVal[key] = accuracy_score(valLabels, valPreds, normalize=True)
print(precisionDictTrain)
print(precisionDictTest)
print(precisionDictVal)
print(recallDictTrain)
print(recallDictTest)
print(recallDictVal)
print(f1DictTrain)
print(f1DictTest)
print(f1DictVal)
print(accDictTrain)
print(accDictTest)
print(accDictVal)
import matplotlib.pyplot as plt
plt.style.use('seaborn')
valList = [recallDictVal[i] for i in recallDictVal.keys()]
trainList = [recallDictTrain[i] for i in recallDictTrain.keys()]
testList = [recallDictTest[i] for i in recallDictTest.keys()]
plt.plot(lossFn, valList, label='Validation Set', c='red')
plt.plot(lossFn, trainList, label='Train Set', c='green')
plt.plot(lossFn, testList, label='Test Set')
plt.legend()
plt.ylabel('Weighted Recall Score')
plt.xlabel('Loss Function')
plt.show()
valList = [precisionDictVal[i] for i in precisionDictVal.keys()]
trainList = [precisionDictTrain[i] for i in precisionDictTrain.keys()]
testList = [precisionDictTest[i] for i in precisionDictTest.keys()]
plt.plot(lossFn, valList, label='Validation Set', c='red')
plt.plot(lossFn, trainList, label='Train Set', c='green')
plt.plot(lossFn, testList, label='Test Set')
plt.legend()
plt.ylabel('Weighted Precision Score')
plt.xlabel('Loss Function')
plt.show()
valList = [f1DictVal[i] for i in f1DictVal.keys()]
trainList = [f1DictTrain[i] for i in f1DictTrain.keys()]
testList = [f1DictTest[i] for i in f1DictTest.keys()]
plt.plot(lossFn, valList, label='Validation Set', c='red')
plt.plot(lossFn, trainList, label='Train Set', c='green')
plt.plot(lossFn, testList, label='Test Set')
plt.legend()
plt.ylabel('Weighted F1 Score')
plt.xlabel('Loss Function')
plt.show()
valList = [accDictVal[i] for i in accDictVal.keys()]
trainList = [accDictTrain[i] for i in accDictTrain.keys()]
testList = [accDictTest[i] for i in accDictTest.keys()]
plt.plot(lossFn, valList, label='Validation Set', c='red')
plt.plot(lossFn, trainList, label='Train Set', c='green')
plt.plot(lossFn, testList, label='Test Set')
plt.legend()
plt.ylabel('Accuracy')
plt.xlabel('Loss Function')
plt.show()
model = SGDClassifier(learning_rate='optimal', verbose=0, max_iter=500, loss='modified_huber')
model.fit(trainData, trainLabels)
import pickle as pk
filename = 'WE_SGD_modified_huber'
pk.dump(model,open(filename,'wb'))
sgd = SGDClassifier(learning_rate='optimal', verbose=1, max_iter=500, loss='hinge')
sgd.fit(trainData, trainLabels)
print()
print('train', sgd.score(trainData, trainLabels))
print('test', sgd.score(testData, testLabels))
sgd = SGDClassifier(learning_rate='optimal', verbose=1, max_iter=500, loss='log')
sgd.fit(trainData, trainLabels)
print()
print('train', sgd.score(trainData, trainLabels))
print('test', sgd.score(testData, testLabels))
sgd = SGDClassifier(learning_rate='optimal', verbose=1, max_iter=500, loss='modified_huber')
sgd.fit(trainData, trainLabels)
print()
print('train', sgd.score(trainData, trainLabels))
print('test', sgd.score(testData, testLabels))
sgd = SGDClassifier(learning_rate='optimal', verbose=1, max_iter=500, loss='perceptron')
sgd.fit(trainData, trainLabels)
print()
print('train', sgd.score(trainData, trainLabels))
print('test', sgd.score(testData, testLabels))
```
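The "weighted" averages collected above are per-class scores weighted by class support, which is what sklearn computes for `average='weighted'`. A small self-contained check (illustrative labels, pure Python):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support,
    i.e. what f1_score(..., average='weighted') computes."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        pred_c = sum(1 for p in y_pred if p == c)
        true_c = support[c]
        precision = tp / pred_c if pred_c else 0.0
        recall = tp / true_c if true_c else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += f1 * true_c / total
    return score

print(weighted_f1([0, 0, 0, 1, 1, 2], [0, 0, 1, 1, 1, 2]))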
2D image (width, height) ==> (width * features * resolution, height)
```
import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
import pdb
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import random
from scipy.ndimage import gaussian_filter  # 'scipy.ndimage.filters' is the deprecated path
from skimage.draw import line_aa
%matplotlib inline
from scipy.stats import norm
device = "cuda" if torch.cuda.is_available() else "cpu"
class NormalDistributionCollection(object):
def __init__(self, resolution, var=0.07):
self.gaussians = torch.stack([self.normal_distribution(resolution, index.item() / resolution, var) for index in np.arange(resolution)])
@staticmethod
def normal_distribution(n, mean, var=0.05):
x = norm.pdf(np.arange(0, 1, 1.0 / n), mean, var)
x = x / np.max(x)
return torch.tensor(x).float()
@staticmethod
def to_pdf(frames):
element_count = np.prod(frames.shape)
frames_shape = frames.shape
frames_view = frames.contiguous().view((element_count,))
frames_pdf = torch.stack([NormalDistributionCollection.normal_distribution(resolution, mean.item()) for mean in frames_view])
frames_pdf = frames_pdf.view(frames_shape[:-1] + (frames_shape[-1] * resolution, ))
return frames_pdf
def generate_frames1(width, height):
frames = []
for _ in range(100):
frame = np.zeros((width, height))
rr, cc, val = line_aa(random.randint(0, height-1), random.randint(0, width-1), random.randint(0, height-1), random.randint(0, width-1))
frame[rr, cc] = val
frame=gaussian_filter(frame, 0.5)
frames.append(frame)
return torch.as_tensor(frames).to(device)
def generate_frames(width, height):
return torch.as_tensor([[[0, 0], [1, 1]], [[1, 1], [0, 0]]])
def show_image(image, vmin=None, vmax=None, title=None, print_values=False):
#print("image ", image.shape)
fig, ax1 = plt.subplots(figsize=(6, 2))
if title:
plt.title(title)
#i = image.reshape((height, width))
#print("i ", i.shape)
ax1.imshow(image, vmin=vmin, vmax=vmax)
plt.show()
if print_values:
print(image)
# Assume input (samples, feature maps, height, width) and that
# feature map count is a perfect square, e.g. 9 = a * a with a = 3 in this case
# Output (samples, height * a, width * a)
def flatten_feature_maps(f):
s = f.shape
f = f.permute(0, 2, 3, 1) # move features to the end
s = f.shape
a = int(s[3] ** 0.5) # feature maps are at pos 3 now that we want to first split into a square of size (a X a)
assert a * a == s[3], "Feature map count must be a perfect square"
f = f.view(s[0], s[1], s[2], a, a)
f = f.permute(0, 1, 3, 2, 4).contiguous() # frame count, height, sqr(features), width, sqr(features)
s = f.shape
f = f.view(s[0], s[1] * s[2], s[3] * s[4]) # each point becomes a square of features
return f, a
class Encoder(torch.nn.Module):
def __init__(self, a, resolution, kernel_size=4):
super().__init__()
out_channels = a * a
self.e1 = nn.Conv2d(in_channels=1, out_channels=out_channels, kernel_size=(kernel_size, kernel_size*resolution), stride=(2, 2*resolution)).to(device)
self.bn1 = nn.BatchNorm2d(out_channels).to(device)
#self.pool1 = nn.MaxPool2d(2).to(device)
self.tanh = nn.Tanh()
def forward(self, x):
x = x[:, None, :, :] # assume 1 feature
x = self.e1(x)
x = self.bn1(x)
#x = self.pool1(x)
x = self.tanh(x)
x, _ = flatten_feature_maps(x)
return x
class Decoder(torch.nn.Module):
def __init__(self, a, output_height, output_width, resolution):
super().__init__()
self.a = a
self.output_height = output_height
self.output_width = output_width
self.resolution = resolution
#self.tanh = nn.Tanh()
self.relu = nn.ReLU()
self.tanh = nn.Tanh()
self.d1 = nn.ConvTranspose2d(1, 1, a, stride=a)
self.linear = None
def forward(self, z):
assert z.shape[-2] % self.a == 0, f"input height must be multiple of {self.a}"
assert z.shape[-1] % self.a == 0, f"input width must be multiple of {self.a}"
z = z[:, None, :, :] # assume 1 feature
#z = self.d1(z)
#z = self.tanh(z)
#print(f"z.shape = {z.shape}")
s = z.shape
z = z.view(s[0], s[1] * s[2] * s[3])
if self.linear is None:
self.linear = nn.Linear(z.shape[-1], self.output_height * self.output_width * self.resolution)
z = self.linear(z)
z = z.view(z.shape[0], self.output_height, self.output_width * self.resolution)
z = self.relu(z)
return z
class Network(torch.nn.Module):
def __init__(self, height, width, resolution, out_channels, kernel_size=4):
super().__init__()
self.a = int(out_channels**0.5)
self.encoder = Encoder(a=self.a, resolution=resolution, kernel_size=kernel_size).to(device)
self.decoder = Decoder(a=self.a, output_height=height, output_width=width, resolution=resolution).to(device)
self.hidden_state = None
def forward(self, x):
x = self.hidden_state = self.encoder(x)
x = self.decoder(x)
return x
class Layer(object):
def __init__(self):
self.hidden_state = "uninitialized"
def train(self, frames, resolution, out_channels=10, epochs=1000, kernel_size=4):
print("--------------------------------")
print("Training with frames ", frames.shape)
_, height, width = frames.shape
frames_pdf = NormalDistributionCollection.to_pdf(frames).to(device)
print("frames_pdf ", frames_pdf.shape)
self.model = model = Network(height, width, resolution, out_channels, kernel_size=kernel_size).to(device)
out = model(frames_pdf)
print("out shape = ", out.shape)
for index in range(10, 11):
show_image(frames[index].detach().cpu().numpy(), title=f"frame {index} : {frames[index].shape}", vmin=0, vmax=1)
show_image(frames_pdf[index].detach().cpu().numpy(), title=f"frame pdf {index} : {frames_pdf[index].shape}", vmin=0, vmax=1)
show_image(out[index].detach().cpu().numpy(), title=f"out {index} : {out[index].shape}", vmin=0, vmax=1)
optimizer = optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(epochs):
model.train()
optimizer.zero_grad()
out = model(frames_pdf)
loss = F.mse_loss(out, frames_pdf)
if (epoch + 1) % int(epochs/10) == 0:
print(f"epoch {epoch}:\tloss {loss}")
loss.backward()
optimizer.step()
for index in range(10, 11):
show_image(frames[index].detach().cpu().numpy(), title=f"frame {index} : {frames[index].shape}", vmin=0, vmax=1)
show_image(frames_pdf[index].detach().cpu().numpy(), title=f"frame pdf {index} : {frames_pdf[index].shape}", vmin=0, vmax=1)
show_image(out[index].detach().cpu().numpy(), title=f"out {index} : {out[index].shape}", vmin=0, vmax=1)
self.hidden_state = model.hidden_state
# show info
print("frames_pdf ", frames_pdf.shape)
h = model.hidden_state
print("h ", h.shape)
# Layer 1 autoencoder
resolution = 8
frames1 = generate_frames1(8, 8)
layer1 = Layer()
epochs = 10000 if device == "cuda" else 5000
a = 4
layer1.train(frames1, resolution, out_channels=a**2, epochs=epochs, kernel_size=4)
# Layer 2 autoencoder
layer2 = Layer()
f = layer1.hidden_state
layer2.train(f, resolution, out_channels=(a+1)**2, epochs=epochs, kernel_size=4)
# Layer 3 autoencoder
layer3 = Layer()
f = layer2.hidden_state
layer3.train(f, resolution, out_channels=(a+1)**2, epochs=epochs, kernel_size=4)
from scipy import stats
def sample_from_frame_pdf(mu_bar):
assert len(mu_bar.shape) == 3
# reshape mu_pdf from (frame count, height, width*resolution) into (frame count*height*width, resolution)
s = mu_bar.shape
mu_pdf = mu_bar.view(s[0], s[1], int(s[2] / resolution), resolution)
s = mu_pdf.shape
mu_pdf = mu_pdf.view(s[0] * s[1] * s[2], s[3])
# sample single value from each distributions into (frame count*height*width, 1)
sample = torch.Tensor([sample_from_pdf(item.numpy()) for item in mu_pdf])
# reshape back into (frame count, height, width)
sample = sample.view(s[0], s[1], s[2])
return sample
def sample_from_pdf(mu_pdf):
assert mu_pdf.shape == (resolution, )
pk = mu_pdf.copy()
xk = np.arange(resolution)
pk[pk<0] = 0
sum_pk = sum(pk)
if sum_pk > 0:
pk = pk / sum_pk
custm = stats.rv_discrete(name='custm', values=(xk, pk))
value = custm.rvs(size=1) / resolution
# apply scale (conflates value and confidence!)
value = value * sum_pk
return value
else:
return [0]
image_index = 22
vmin = 0
vmax = 1
mu1 = frames1[image_index].unsqueeze(dim=0)
def forward(mu, layer, layer_index):
print(f"mu{layer_index} shape {mu.shape}")
show_image(mu[0], title=f"mu{layer_index}", vmin=vmin, vmax=vmax)
mu_pdf = NormalDistributionCollection.to_pdf(mu)
print(f"mu_pdf{layer_index} shape {mu_pdf.shape}")
show_image(mu_pdf[0], title=f"mu{layer_index}_pdf", vmin=vmin, vmax=vmax)
mu_pdf_sampled = sample_from_frame_pdf(mu_pdf.detach())[0]
show_image(mu_pdf_sampled, title=f"mu{layer_index}_pdf_sampled ~= mu{layer_index}", vmin=vmin, vmax=vmax)
# Layer
layer.hidden_state = h = layer.model.encoder(mu_pdf)
print(f"h{layer_index} shape {h.shape}")
mu_bar = layer.model.decoder(h)
print(f"mu_bar{layer_index} shape {mu_bar.shape}")
show_image(mu_bar[0].detach().numpy(), title=f"mu{layer_index}_bar", vmin=vmin, vmax=vmax)
mu_bar_sampled = sample_from_frame_pdf(mu_bar.detach())[0]
show_image(mu_bar_sampled, title=f"mu{layer_index}_bar_sampled ~= mu{layer_index}", vmin=vmin, vmax=vmax)
return mu_bar_sampled
# Layer 1
forward(mu1, layer1, 1)
# Layer 2
mu2 = layer1.hidden_state.detach()
print("mu2 shape = ", mu2.shape)
mu2_bar_sampled = forward(mu2, layer2, 2).unsqueeze(dim=0)
print("mu2_bar_sampled shape = ", mu2_bar_sampled.shape)
# mu2_bar_sampled decoded by layer 1 should reproduce mu1
mu_bar = layer1.model.decoder(mu2_bar_sampled).detach()
show_image(mu_bar.squeeze(), vmin=vmin, vmax=vmax)
mu_bar_sampled = sample_from_frame_pdf(mu_bar)[0]
show_image(mu_bar_sampled, title=f"mu_bar_sampled ~= mu", vmin=vmin, vmax=vmax)
mu_bar_sampled = sample_from_frame_pdf(mu_bar)[0]
show_image(mu_bar_sampled, title=f"mu_bar_sampled ~= mu", vmin=vmin, vmax=vmax)
mu_bar_sampled = sample_from_frame_pdf(mu_bar)[0]
show_image(mu_bar_sampled, title=f"mu_bar_sampled ~= mu", vmin=vmin, vmax=vmax)
mu_bar_sampled = sample_from_frame_pdf(mu_bar)[0]
show_image(mu_bar_sampled, title=f"mu_bar_sampled ~= mu", vmin=vmin, vmax=vmax)
mu_bar_sampled = sample_from_frame_pdf(mu_bar)[0]
show_image(mu_bar_sampled, title=f"mu_bar_sampled ~= mu", vmin=vmin, vmax=vmax)
```
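The core trick above is representing a scalar in [0, 1) as a discretized, max-normalized Gaussian bump over `resolution` bins. It can be reproduced with plain NumPy (illustrative, mirroring `normal_distribution`; note that the original passes `var` to `scipy.stats.norm.pdf`, whose third argument is actually the standard deviation):

```python
import numpy as np

def normal_distribution(n, mean, std=0.05):
    """Discretize a Gaussian bump centered at `mean` over n bins in [0, 1),
    scaled so its peak equals 1 (the notebook's PDF encoding)."""
    x = np.arange(0, 1, 1.0 / n)
    pdf = np.exp(-0.5 * ((x - mean) / std) ** 2)  # unnormalized Gaussian
    return pdf / pdf.max()

bump = normal_distribution(8, 0.5)
print(bump.round(3))
```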
```
import keras
keras.__version__
```
# 5.2 - Using convnets with small datasets
This notebook contains the code sample found in Chapter 5, Section 2 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
## Training a convnet from scratch on a small dataset
Having to train an image classification model using only very little data is a common situation, which you likely encounter yourself in
practice if you ever do computer vision in a professional context.
Having "few" samples can mean anywhere from a few hundreds to a few tens of thousands of images. As a practical example, we will focus on
classifying images as "dogs" or "cats", in a dataset containing 4000 pictures of cats and dogs (2000 cats, 2000 dogs). We will use 2000
pictures for training, 1000 for validation, and finally 1000 for testing.
In this section, we will review one basic strategy to tackle this problem: training a new model from scratch on what little data we have. We
will start by naively training a small convnet on our 2000 training samples, without any regularization, to set a baseline for what can be
achieved. This will get us to a classification accuracy of 71%. At that point, our main issue will be overfitting. Then we will introduce
*data augmentation*, a powerful technique for mitigating overfitting in computer vision. By leveraging data augmentation, we will improve
our network to reach an accuracy of 82%.
In the next section, we will review two more essential techniques for applying deep learning to small datasets: *doing feature extraction
with a pre-trained network* (this will get us to an accuracy of 90% to 93%), and *fine-tuning a pre-trained network* (this will get us to
our final accuracy of 95%). Together, these three strategies -- training a small model from scratch, doing feature extracting using a
pre-trained model, and fine-tuning a pre-trained model -- will constitute your future toolbox for tackling the problem of doing computer
vision with small datasets.
## The relevance of deep learning for small-data problems
You will sometimes hear that deep learning only works when lots of data is available. This is in part a valid point: one fundamental
characteristic of deep learning is that it is able to find interesting features in the training data on its own, without any need for manual
feature engineering, and this can only be achieved when lots of training examples are available. This is especially true for problems where
the input samples are very high-dimensional, like images.
However, what constitutes "lots" of samples is relative -- relative to the size and depth of the network you are trying to train, for
starters. It isn't possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundreds can
potentially suffice if the model is small and well-regularized and if the task is simple.
Because convnets learn local, translation-invariant features, they are very
data-efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will still yield reasonable results
despite a relative lack of data, without the need for any custom feature engineering. You will see this in action in this section.
But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or speech-to-text model
trained on a large-scale dataset then reuse it on a significantly different problem with only minor changes. Specifically, in the case of
computer vision, many pre-trained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used
to bootstrap powerful vision models out of very little data. That's what we will do in the next section.
For now, let's get started by getting our hands on the data.
## Downloading the data
The cats vs. dogs dataset that we will use isn't packaged with Keras. It was made available by Kaggle.com as part of a computer vision
competition in late 2013, back when convnets weren't quite mainstream. You can download the original dataset at:
`https://www.kaggle.com/c/dogs-vs-cats/data` (you will need to create a Kaggle account if you don't already have one -- don't worry, the
process is painless).
The pictures are medium-resolution color JPEGs. They look like this:

Unsurprisingly, the cats vs. dogs Kaggle competition in 2013 was won by entrants who used convnets. The best entries could achieve up to
95% accuracy. In our own example, we will get fairly close to this accuracy (in the next section), even though we will be training our
models on less than 10% of the data that was available to the competitors.
This original dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543MB large (compressed). After downloading
and uncompressing it, we will create a new dataset containing three subsets: a training set with 1000 samples of each class, a validation
set with 500 samples of each class, and finally a test set with 500 samples of each class.
Here are a few lines of code to do this:
```
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/Users/fchollet/Downloads/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
os.mkdir(validation_dogs_dir)
# Directory with our test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
os.mkdir(test_cats_dir)
# Directory with our test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
os.mkdir(test_dogs_dir)
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
```
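One caveat: `os.mkdir` raises if a directory already exists, so rerunning the cell above fails. A rerun-safe variant (illustrative, not from the book) builds the same tree with `os.makedirs(..., exist_ok=True)`:

```python
import os
import tempfile

def make_split_dirs(base_dir, splits=('train', 'validation', 'test'),
                    classes=('cats', 'dogs')):
    """Create base_dir/<split>/<class> for every combination; safe to rerun."""
    paths = {}
    for split in splits:
        for cls in classes:
            path = os.path.join(base_dir, split, cls)
            os.makedirs(path, exist_ok=True)
            paths[(split, cls)] = path
    return paths

# Demonstrate on a throwaway directory
demo_base = tempfile.mkdtemp()
paths = make_split_dirs(demo_base)
print(sorted(os.listdir(demo_base)))
```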
As a sanity check, let's count how many pictures we have in each training split (train/validation/test):
```
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
```
So we have indeed 2000 training images, and then 1000 validation images and 1000 test images. In each split, there is the same number of
samples from each class: this is a balanced binary classification problem, which means that classification accuracy will be an appropriate
measure of success.
## Building our network
We've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same
general structure: our convnet will be a stack of alternated `Conv2D` (with `relu` activation) and `MaxPooling2D` layers.
However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one
more `Conv2D` + `MaxPooling2D` stage. This serves both to augment the capacity of the network, and to further reduce the size of the
feature maps, so that they aren't overly large when we reach the `Flatten` layer. Here, since we start from inputs of size 150x150 (a
somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the `Flatten` layer.
Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is
decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets.
Since we are attacking a binary classification problem, we are ending the network with a single unit (a `Dense` layer of size 1) and a
`sigmoid` activation. This unit will encode the probability that the network is looking at one class or the other.
```
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
Let's take a look at how the dimensions of the feature maps change with every successive layer:
```
model.summary()
```
For our compilation step, we'll go with the `RMSprop` optimizer as usual. Since we ended our network with a single sigmoid unit, we will
use binary crossentropy as our loss (as a reminder, check out the table in Chapter 4, section 5 for a cheatsheet on what loss function to
use in various situations).
```
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
```
## Data preprocessing
As you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our
network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into our network are roughly:
* Read the picture files.
* Decode the JPEG content to RGB grids of pixels.
* Convert these into floating point tensors.
* Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
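The conversion and rescaling steps can be sketched by hand with NumPy (illustrative, using a synthetic pixel array in place of a decoded JPEG):

```python
import numpy as np

# A fake 150x150 RGB image with uint8 pixel values, as JPEG decoding yields
raw = np.random.randint(0, 256, size=(150, 150, 3), dtype=np.uint8)

# Convert to float and rescale from [0, 255] to [0, 1]
tensor = raw.astype('float32') / 255.0

print(tensor.shape, tensor.min() >= 0.0, tensor.max() <= 1.0)
```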
It may seem a bit daunting, but thankfully Keras has utilities to take care of these steps automatically. Keras has a module with image
processing helper tools, located at `keras.preprocessing.image`. In particular, it contains the class `ImageDataGenerator` which lets us
quickly set up Python generators that can automatically turn image files on disk into batches of pre-processed tensors. This is what we
will use here.
```
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
```
Let's take a look at the output of one of these generators: it yields batches of 150x150 RGB images (shape `(20, 150, 150, 3)`) and binary
labels (shape `(20,)`). 20 is the number of samples in each batch (the batch size). Note that the generator yields these batches
indefinitely: it just loops endlessly over the images present in the target folder. For this reason, we need to `break` the iteration loop
at some point.
```
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
```
Let's fit our model to the data using the generator. We do it using the `fit_generator` method, the equivalent of `fit` for data generators
like ours. It expects as first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does.
Because the data is being generated endlessly, the fitting process needs to know how many batches to draw from the generator before
declaring an epoch over. This is the role of the `steps_per_epoch` argument: after having drawn `steps_per_epoch` batches from the
generator, i.e. after having run for `steps_per_epoch` gradient descent steps, the fitting process will go to the next epoch. In our case,
batches are 20-sample large, so it will take 100 batches until we see our target of 2000 samples.
When using `fit_generator`, one may pass a `validation_data` argument, much like with the `fit` method. Importantly, this argument is
allowed to be a data generator itself, but it could be a tuple of Numpy arrays as well. If you pass a generator as `validation_data`, then
this generator is expected to yield batches of validation data endlessly, and thus you should also specify the `validation_steps` argument,
which tells the process how many batches to draw from the validation generator for evaluation.
```
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
```
It is good practice to always save your models after training:
```
model.save('cats_and_dogs_small_1.h5')
```
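A saved model can be restored later with `load_model`. As a quick round-trip sketch (the tiny model and filename below are placeholders, not the network trained above):

```python
from keras import models, layers

# Build a throwaway model, save it to HDF5, and load it back to verify the round trip.
tiny = models.Sequential()
tiny.add(layers.Dense(1, activation='sigmoid', input_shape=(4,)))
tiny.compile(loss='binary_crossentropy', optimizer='rmsprop')
tiny.save('tiny_roundtrip.h5')  # placeholder filename

restored = models.load_model('tiny_roundtrip.h5')
print(restored.input_shape)
```

The restored model carries its architecture, weights, and compile configuration, so training or inference can resume without re-running the model-definition cells.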
Let's plot the loss and accuracy of the model over the training and validation data during training:
```
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our
validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs, then stalls, while the training loss
keeps decreasing linearly until it reaches nearly 0.
Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a
number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to
introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models: *data
augmentation*.
## Using data augmentation
Overfitting is caused by having too few samples to learn from, rendering us unable to train a model that can generalize to new data.
Given infinite data, our model would be exposed to every possible aspect of the data distribution at hand: we would never overfit. Data
augmentation takes the approach of generating more training data from existing training samples, by "augmenting" the samples via a number
of random transformations that yield believable-looking images. The goal is that at training time, our model would never see the exact same
picture twice. This helps the model get exposed to more aspects of the data and generalize better.
In Keras, this can be done by configuring a number of random transformations to be performed on the images read by our `ImageDataGenerator`
instance. Let's get started with an example:
```
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
```
These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:
* `rotation_range` is a value in degrees (0-180), a range within which to randomly rotate pictures.
* `width_shift_range` and `height_shift_range` are ranges (as a fraction of total width or height) within which to randomly translate pictures
horizontally or vertically.
* `shear_range` is for randomly applying shearing transformations.
* `zoom_range` is for randomly zooming inside pictures.
* `horizontal_flip` is for randomly flipping half of the images horizontally -- relevant when there are no assumptions of horizontal
asymmetry (e.g. real-world pictures).
* `fill_mode` is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let's take a look at our augmented images:
```
# This module contains image preprocessing utilities
import os
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
```
If we train a new network using this data augmentation configuration, our network will never see the exact same input twice. However, the inputs
that it sees are still heavily intercorrelated, since they come from a small number of original images -- we cannot produce new information,
we can only remix existing information. As such, this might not be quite enough to completely get rid of overfitting. To further fight
overfitting, we will also add a Dropout layer to our model, right before the densely-connected classifier:
```
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
```
Let's train our network using data augmentation and dropout:
```
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
```
Let's save our model -- we will be using it in the section on convnet visualization.
```
model.save('cats_and_dogs_small_2.h5')
```
Let's plot our results again:
```
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Thanks to data augmentation and dropout, we are no longer overfitting: the training curves are rather closely tracking the validation
curves. We are now able to reach an accuracy of 82%, a 15% relative improvement over the non-regularized model.
By leveraging regularization techniques even further and by tuning the network's parameters (such as the number of filters per convolution
layer, or the number of layers in the network), we may be able to get an even better accuracy, likely up to 86-87%. However, it would prove
very difficult to go any higher just by training our own convnet from scratch, simply because we have so little data to work with. As a
next step to improve our accuracy on this problem, we will have to leverage a pre-trained model, which will be the focus of the next two
sections.
# Convolutional Sentiment Classifier
In this notebook, we build a *convolutional* neural net to classify IMDB movie reviews by their sentiment.
```
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn
```
#### Load dependencies
```
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding
from keras.layers import SpatialDropout1D, Conv1D, GlobalMaxPooling1D # new!
from keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Set hyperparameters
```
# output directory name:
output_dir = 'model_output/conv'
# training:
epochs = 4
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 5000
max_review_length = 400
pad_type = trunc_type = 'pre'
drop_embed = 0.2 # new!
# convolutional layer architecture:
n_conv = 256 # filters, a.k.a. kernels
k_conv = 3 # kernel length
# dense layer architecture:
n_dense = 256
dropout = 0.2
```
#### Load data
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words) # removed n_words_to_skip
```
#### Preprocess data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
```
#### Design neural network architecture
```
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
model.add(Conv1D(n_conv, k_conv, activation='relu'))
# model.add(Conv1D(n_conv, k_conv, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
#### Configure model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
```
#### Train!
```
# 89.1% validation accuracy in epoch 2
# ...with a second convolutional layer it is essentially the same, at 89.0%
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+"/weights.01.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
"{:0.2f}".format(roc_auc_score(y_valid, y_hat)*100.0)
```
```
import sys
from pathlib import Path
from addict import Dict
from copy import deepcopy
sys.path.append('../../')
import numpy as np
import pandas as pd
import pylab as plt
import seaborn as sns
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import GroupShuffleSplit
from sklearn.ensemble import RandomForestClassifier
from examples.utils.config import Config
from examples.utils.dataset_adapters import retina_dataset
from pyspikelib import TrainNormalizeTransform
from pyspikelib.utils import simple_undersampling
from pyspikelib.train_encoders import ISIShuffleTransform
from pyspikelib.mpladeq import prettify, beautify_mpl, boxplot
import tensorflow as tf
import tensorflow.keras as keras
tf.random.Generator = None
import sktime_dl
from sktime_dl.deeplearning import CNNClassifier
from viz_utils import PlotLearningCurveCallback
beautify_mpl()
config_dict = {
'seed': 0,
'window': 50,
'step': 50,
'train_subsample_factor': 0.7,
'test_subsample_factor': 0.7,
'delimiter': None,
'dataset': '../../data/retina/mode_paper_data',
'state': 'randomly_moving_bar',
}
config = Config(config_dict)
np.random.seed(config.seed)
retinal_spikes = retina_dataset(config.dataset)[config.state]
shuffler = ISIShuffleTransform()
retinal_spikes_shuffled = shuffler.transform(
deepcopy(retinal_spikes), format='pandas', delimiter=config.delimiter
)
group_split = GroupShuffleSplit(n_splits=1, test_size=0.5)
X = np.hstack([retinal_spikes.series.values, retinal_spikes_shuffled.series.values])
y = np.hstack(
[np.ones(retinal_spikes.shape[0]), np.zeros(retinal_spikes_shuffled.shape[0])]
)
groups = np.hstack(
[retinal_spikes.groups.values, retinal_spikes_shuffled.groups.values]
)
for train_index, test_index in group_split.split(X, y, groups):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    X_train = pd.DataFrame({'series': X_train, 'groups': groups[train_index]})
    X_test = pd.DataFrame({'series': X_test, 'groups': groups[test_index]})
normalizer = TrainNormalizeTransform(
window=config.window,
step=config.step,
n_samples=None
)
X_train, y_train = normalizer.transform(X_train, y_train, delimiter=config.delimiter)
X_test, y_test = normalizer.transform(X_test, y_test, delimiter=config.delimiter)
print('Dataset size: train {}, test {}'.format(X_train.shape, X_test.shape))
print('Average target: train {}, test {}'.format(y_train.mean(), y_test.mean()))
from pyspikelib.utils import simple_undersampling
Xs_train, ys_train = simple_undersampling(
pd.DataFrame(X_train), y_train, subsample_size=0.9
)
Xs_test, ys_test = simple_undersampling(
pd.DataFrame(X_test), y_test, subsample_size=0.9
)
Xs_train.shape, Xs_test.shape, ys_train.mean(), ys_test.mean()
baseline_forest = RandomForestClassifier(n_estimators=100, max_depth=5, n_jobs=-1)
baseline_forest.fit(Xs_train, ys_train)
accuracy_score(ys_test, baseline_forest.predict(Xs_test)), \
roc_auc_score(ys_test, baseline_forest.predict_proba(Xs_test)[:, 1])
ce_loss = tf.keras.losses.BinaryCrossentropy(
from_logits=False, label_smoothing=0, reduction="auto", name="binary_crossentropy"
)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
network = CNNClassifier(nb_epochs=1000,
batch_size=1024,
verbose=False,
loss=ce_loss,
optimizer=optimizer,
callbacks=[PlotLearningCurveCallback(update_freq=50)])
network.fit(Xs_train, ys_train,
validation_X=Xs_test,
validation_y=ys_test)
```
STAT 453: Deep Learning (Spring 2020)
Instructor: Sebastian Raschka (sraschka@wisc.edu)
Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2020/
GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss20
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
# Example Showing How to Get Gradients of an Intermediate Variable in PyTorch
This notebook illustrates how we can fetch the intermediate gradients of a function that is composed of multiple inputs and multiple computation steps in PyTorch. Note that a gradient is simply a vector listing the derivatives of a function with respect
to each argument of the function. So, strictly speaking, we are discussing how to obtain the partial derivatives here.
Assume we have this simple toy graph:

Now, we provide the following values to b, x, and w; the red numbers indicate the intermediate values of the computation and the end result:

Now, the next image shows the partial derivatives of the output node, a, with respect to the input nodes (b, x, and w) as well as all the intermediate partial derivatives:

For instance, if we are interested in obtaining the partial derivative of the output a with respect to each of the input and intermediate nodes, we could do the following in PyTorch, where `d_a_b` denotes "partial derivative of a with respect to b" and so forth:
## Intermediate Gradients in PyTorch via autograd's `grad`
In PyTorch, there are multiple ways to compute partial derivatives or gradients. If the goal is to just compute partial derivatives, the most straightforward way would be using `torch.autograd`'s `grad` function. By default, the `retain_graph` parameter of the `grad` function is set to `False`, which will free the graph after computing the partial derivative. Thus, if we want to obtain multiple partial derivatives, we need to set `retain_graph=True`. Note that this is a very inefficient solution though, as multiple passes over the graph are being made where intermediate results are being recalculated:
```
import torch
import torch.nn.functional as F
from torch.autograd import grad
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
d_a_b = grad(a, b, retain_graph=True)
d_a_u = grad(a, u, retain_graph=True)
d_a_v = grad(a, v, retain_graph=True)
d_a_w = grad(a, w, retain_graph=True)
d_a_x = grad(a, x)
# use a name other than `grad` to avoid shadowing the imported function
for name, value in zip("xwbuv", (d_a_x, d_a_w, d_a_b, d_a_u, d_a_v)):
    print('d_a_%s:' % name, value)
```
As Adam Paszke (PyTorch developer) suggested to me, this can be rewritten in a more efficient manner by passing a tuple to the `grad` function so that it can reuse intermediate results and only require one pass over the graph:
```
import torch
import torch.nn.functional as F
from torch.autograd import grad
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
partial_derivatives = grad(a, (x, w, b, u, v))
for name, value in zip("xwbuv", partial_derivatives):
    print('d_a_%s:' % name, value)
```
## Intermediate Gradients in PyTorch via `retain_grad`
In PyTorch, we most often use the `backward()` method on an output variable to compute its partial derivative (or gradient) with respect to its inputs (typically, the weights and bias units of a neural network). By default, PyTorch only stores the gradients of the leaf variables (e.g., the weights and biases) via their `grad` attribute to save memory. So, if we are interested in the intermediate results in a computational graph, we can use the `retain_grad` method to store gradients of non-leaf variables as follows:
```
import torch
import torch.nn.functional as F
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
u.retain_grad()
v.retain_grad()
a.backward()
for name, var in zip("xwbuv", (x, w, b, u, v)):
    print('d_a_%s:' % name, var.grad)
```
## Intermediate Gradients in PyTorch Using Hooks
Finally, and this is a not-recommended workaround, we can use hooks to obtain intermediate gradients. While the two other approaches explained above should be preferred, this approach highlights the use of hooks, which may come in handy in certain situations.
> The hook will be called every time a gradient with respect to the variable is computed. (http://pytorch.org/docs/master/autograd.html#torch.autograd.Variable.register_hook)
Based on the suggestion by Adam Paszke (https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/7?u=rasbt), we can use these hooks in a combination with a little helper function, `save_grad` and a `hook` closure writing the partial derivatives or gradients to a global variable `grads`. So, if we invoke the `backward` method on the output node `a`, all the intermediate results will be collected in `grads`, as illustrated below:
```
import torch
import torch.nn.functional as F
grads = {}
def save_grad(name):
    def hook(grad):
        grads[name] = grad
    return hook
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
x.register_hook(save_grad('d_a_x'))
w.register_hook(save_grad('d_a_w'))
b.register_hook(save_grad('d_a_b'))
u.register_hook(save_grad('d_a_u'))
v.register_hook(save_grad('d_a_v'))
a = F.relu(v)
a.backward()
grads
```
#### Example 1. Access individual elements of 1-D array
```
import numpy as np
# We create a rank 1 ndarray that contains integers from 1 to 5
x = np.array([1, 2, 3, 4, 5])
# We print x
print()
print('x = ', x)
print()
# Let's access some elements with positive indices
print('This is First Element in x:', x[0])
print('This is Second Element in x:', x[1])
print('This is Fifth (Last) Element in x:', x[4])
print()
# Let's access the same elements with negative indices
print('This is First Element in x:', x[-5])
print('This is Second Element in x:', x[-4])
print('This is Fifth (Last) Element in x:', x[-1])
```
#### Example 2. Modify an element of 1-D array
```
# We create a rank 1 ndarray that contains integers from 1 to 5
x = np.array([1, 2, 3, 4, 5])
# We print the original x
print()
print('Original:\n x = ', x)
print()
# We change the fourth element in x from 4 to 20
x[3] = 20
# We print x after it was modified
print('Modified:\n x = ', x)
```
#### Example 3. Access individual elements of 2-D array
```
# We create a 3 x 3 rank 2 ndarray that contains integers from 1 to 9
X = np.array([[1,2,3],[4,5,6],[7,8,9]])
# We print X
print()
print('X = \n', X)
print()
# Let's access some elements in X
print('This is (0,0) Element in X:', X[0,0])
print('This is (0,1) Element in X:', X[0,1])
print('This is (2,2) Element in X:', X[2,2])
```
#### Example 4. Modify an element of 2-D array
```
# We create a 3 x 3 rank 2 ndarray that contains integers from 1 to 9
X = np.array([[1,2,3],[4,5,6],[7,8,9]])
# We print the original x
print()
print('Original:\n X = \n', X)
print()
# We change the (0,0) element in X from 1 to 20
X[0,0] = 20
# We print X after it was modified
print('Modified:\n X = \n', X)
```
#### Example 5. Delete elements
```
# We create a rank 1 ndarray
x = np.array([1, 2, 3, 4, 5])
# We create a rank 2 ndarray
Y = np.array([[1,2,3],[4,5,6],[7,8,9]])
# We print x
print()
print('Original x = ', x)
# We delete the first and last element of x
x = np.delete(x, [0,4])
# We print x with the first and last element deleted
print()
print('Modified x = ', x)
# We print Y
print()
print('Original Y = \n', Y)
# We delete the first row of y
w = np.delete(Y, 0, axis=0)
# We delete the first and last column of y
v = np.delete(Y, [0,2], axis=1)
# We print w
print()
print('w = \n', w)
# We print v
print()
print('v = \n', v)
```
#### Example 6. Append elements
```
# We create a rank 1 ndarray
x = np.array([1, 2, 3, 4, 5])
# We create a rank 2 ndarray
Y = np.array([[1,2,3],[4,5,6]])
# We print x
print()
print('Original x = ', x)
# We append the integer 6 to x
x = np.append(x, 6)
# We print x
print()
print('x = ', x)
# We append the integer 7 and 8 to x
x = np.append(x, [7,8])
# We print x
print()
print('x = ', x)
# We print Y
print()
print('Original Y = \n', Y)
# We append a new row containing 7,8,9 to y
v = np.append(Y, [[7,8,9]], axis=0)
# We append a new column containing 9 and 10 to y
q = np.append(Y,[[9],[10]], axis=1)
# We print v
print()
print('v = \n', v)
# We print q
print()
print('q = \n', q)
```
#### Example 7. Insert elements
```
# We create a rank 1 ndarray
x = np.array([1, 2, 5, 6, 7])
# We create a rank 2 ndarray
Y = np.array([[1,2,3],[7,8,9]])
# We print x
print()
print('Original x = ', x)
# We insert the integer 3 and 4 between 2 and 5 in x.
x = np.insert(x,2,[3,4])
# We print x with the inserted elements
print()
print('x = ', x)
# We print Y
print()
print('Original Y = \n', Y)
# We insert a row between the first and last row of y
w = np.insert(Y,1,[4,5,6],axis=0)
# We insert a column full of 5s between the first and second column of y
v = np.insert(Y,1,5, axis=1)
# We print w
print()
print('w = \n', w)
# We print v
print()
print('v = \n', v)
```
#### Example 8. Stack arrays
```
# We create a rank 1 ndarray
x = np.array([1,2])
# We create a rank 2 ndarray
Y = np.array([[3,4],[5,6]])
# We print x
print()
print('x = ', x)
# We print Y
print()
print('Y = \n', Y)
# We stack x on top of Y
z = np.vstack((x,Y))
# We stack x on the right of Y. We need to reshape x in order to stack it on the right of Y.
w = np.hstack((Y,x.reshape(2,1)))
# We print z
print()
print('z = \n', z)
# We print w
print()
print('w = \n', w)
```
Lambda School Data Science
*Unit 2, Sprint 1, Module 3*
---
# Ridge Regression
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
- [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). Use the scaler's `fit_transform` method with the train set. Use the scaler's `transform` method with the test set.
- [ ] Fit a ridge regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
## Stretch Goals
Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.
- [ ] Add your own stretch goal(s) !
- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
    !pip install category_encoders==2.*
# If you're working locally:
else:
    DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```
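The assignment checklist can be sketched end to end. Below is a minimal, hypothetical version on synthetic data: the column names and the synthetic frame are placeholders (not the real NYC dataset), and a positional split stands in for the date-based train/test split.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler

# Placeholder data: a numeric feature that drives price, plus one categorical.
rng = np.random.RandomState(0)
df = pd.DataFrame({
    'GROSS_SQUARE_FEET': rng.uniform(500, 4000, 300),
    'YEAR_BUILT': rng.randint(1900, 2019, 300),
    'BOROUGH': rng.choice(['1', '2', '3'], 300),
})
df['SALE_PRICE'] = 200 * df['GROSS_SQUARE_FEET'] + rng.normal(0, 50_000, 300)

X = pd.get_dummies(df.drop(columns='SALE_PRICE'))   # one-hot encoding
y = df['SALE_PRICE']
X_train, X_test = X.iloc[:200], X.iloc[200:]        # stand-in for the date split
y_train, y_test = y.iloc[:200], y.iloc[200:]

# Feature selection with SelectKBest
selector = SelectKBest(f_regression, k=3)
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)

# Feature scaling: fit_transform on train, transform only on test
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_sel)
X_test_scaled = scaler.transform(X_test_sel)

# Ridge regression and test-set mean absolute error
model = Ridge(alpha=1.0)
model.fit(X_train_scaled, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test_scaled))
print(f'Test MAE: {mae:,.0f}')
```

For the real assignment, the synthetic frame would be replaced by the filtered one-family-dwellings subset, and the rows would be split by sale date (January-March 2019 vs. April 2019) rather than by position.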
```
%load_ext autoreload
%autoreload 2
import torch
from UnarySim.sw.kernel.div import UnaryDiv
from UnarySim.sw.stream.gen import RNG, SourceGen, BSGen
from UnarySim.sw.metric.metric import ProgressiveError
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import ticker, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import time
import math
import numpy as np
import seaborn as sns
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def test(mode="unipolar",
depth_abs=4,
depth_kernel=4,
depth_sync=2,
shiftreg=False,
rng="Sobol",
rng_dim=4,
bitwidth=8,
total_cnt=100,
savepdf=False):
stype = torch.float
btype = torch.float
rtype = torch.float
print("========================================================")
print(mode)
print("========================================================")
if mode == "unipolar":
# all values in unipolar are non-negative
# dividend is always non greater than divisor
# divisor is non-zero
low_bound = 0
up_bound = 2**bitwidth
elif mode == "bipolar":
# values in bipolar are arbitrarily positive or negative
# abs of dividend is always non greater than abs of divisor
# abs of divisor is non-zero
low_bound = -2**(bitwidth-1)
up_bound = 2**(bitwidth-1)
divisor_list = []
dividend_list = []
for divisor_val in range(up_bound, low_bound-1, -1):
divisor_list.append([])
dividend_list.append([])
for dividend_val in range(low_bound, up_bound+1, 1):
divisor_list[up_bound-divisor_val].append(divisor_val)
dividend_list[up_bound-divisor_val].append(dividend_val)
dividend = torch.tensor(dividend_list).type(torch.float).div(up_bound).to(device)
divisor = torch.tensor(divisor_list).type(torch.float).div(up_bound).to(device)
quotient = dividend.div(divisor)
# find the invalid postions in quotient
quotient_nan = torch.isnan(quotient)
quotient_inf = torch.isinf(quotient)
quotient_mask = quotient_nan + quotient_inf
quotient[quotient_mask] = 0
quotient = quotient.clamp(-1, 1)
result_pe_total = []
for rand_idx in range(1, total_cnt+1):
quotientPE = ProgressiveError(quotient, mode=mode).to(device)
dividendPE = ProgressiveError(dividend, mode=mode).to(device)
dividendSRC = SourceGen(dividend, bitwidth, mode=mode, rtype=rtype)().to(device)
divisorPE = ProgressiveError(divisor, mode=mode).to(device)
divisorSRC = SourceGen(divisor, bitwidth, mode=mode, rtype=rtype)().to(device)
dut_div = UnaryDiv(depth_abs=depth_abs,
depth_kernel=depth_kernel,
depth_sync=depth_sync,
shiftreg_abs=shiftreg,
mode=mode,
rng=rng,
rng_dim=rng_dim,
stype=stype,
btype=btype).to(device)
# define the bit stream regen for dividend and divisor
regenRNG = RNG(bitwidth, rand_idx+2, rng, rtype)().to(device)
maxCNT = 2**bitwidth - 1
dividendCNT = torch.zeros_like(dividend) + 2**(bitwidth - 1)
dividendBS_regen = BSGen(dividendCNT, regenRNG, stype).to(device)
divisorCNT = torch.zeros_like(dividend) + 2**(bitwidth - 1)
divisorBS_regen = BSGen(divisorCNT, regenRNG, stype).to(device)
dividendRNG = RNG(bitwidth, rand_idx, rng, rtype)().to(device)
dividendBS = BSGen(dividendSRC, dividendRNG, stype).to(device)
divisorRNG = RNG(bitwidth, rand_idx+1, rng, rtype)().to(device)
divisorBS = BSGen(divisorSRC, divisorRNG, stype).to(device)
with torch.no_grad():
start_time = time.time()
for i in range(2**bitwidth):
dividend_bs = dividendBS(torch.tensor([i]))
dividendPE.Monitor(dividend_bs)
divisor_bs = divisorBS(torch.tensor([i]))
divisorPE.Monitor(divisor_bs)
dividendCNT = (dividendCNT + dividend_bs*2 - 1).clamp(0, maxCNT)
dividendBS_regen.source = dividendCNT.clone().detach()
dividend_bs_regen = dividendBS_regen(torch.tensor([i]))
divisorCNT = ( divisorCNT + divisor_bs*2 - 1).clamp(0, maxCNT)
divisorBS_regen.source = divisorCNT.clone().detach()
divisor_bs_regen = divisorBS_regen(torch.tensor([i]))
quotient_bs = dut_div(dividend_bs_regen, divisor_bs_regen)
quotientPE.Monitor(quotient_bs)
# get the result for different rng
result_pe = quotientPE()[1].cpu().numpy()
result_pe[quotient_mask.cpu().numpy()] = np.nan
result_pe_total.append(result_pe)
# get the result for different rng
result_pe_total = np.array(result_pe_total)
#######################################################################
# check the error of all simulation
#######################################################################
result_pe_total_no_nan = result_pe_total[~np.isnan(result_pe_total)]
print("RMSE:{:1.4}".format(math.sqrt(np.mean(result_pe_total_no_nan**2))))
print("MAE: {:1.4}".format(np.mean(np.abs(result_pe_total_no_nan))))
print("bias:{:1.4}".format(np.mean(result_pe_total_no_nan)))
print("max: {:1.4}".format(np.max(result_pe_total_no_nan)))
print("min: {:1.4}".format(np.min(result_pe_total_no_nan)))
#######################################################################
# check the error according to input value
#######################################################################
avg_total = np.mean(result_pe_total, axis=0)
avg_total[quotient_mask.cpu().numpy()] = 0
fig, ax = plt.subplots()
fig.set_size_inches(5.5, 4)
axis_len = quotientPE()[1].size()[0]
divisor_y_axis = []
dividend_x_axis = []
for axis_index in range(axis_len):
divisor_y_axis.append((up_bound-axis_index/(axis_len-1)*(up_bound-low_bound))/up_bound)
dividend_x_axis.append((axis_index/(axis_len-1)*(up_bound-low_bound)+low_bound)/up_bound)
X, Y = np.meshgrid(dividend_x_axis, divisor_y_axis)
Z = avg_total
levels = [-0.09, -0.06, -0.03, 0.00, 0.03, 0.06, 0.09]
cs = plt.contourf(X, Y, Z, levels, cmap=cm.RdBu, extend="both")
cbar = fig.colorbar(cs)
# plt.tight_layout()
plt.xticks(np.arange(low_bound/up_bound, up_bound/up_bound+0.1, step=0.5))
# ax.xaxis.set_ticklabels([])
plt.yticks(np.arange(low_bound/up_bound, up_bound/up_bound+0.1, step=0.5))
# ax.yaxis.set_ticklabels([])
if savepdf:
plt.savefig("div-"+mode+"-bw"+str(bitwidth)+"-cordivkernel-in-stream"+".pdf",
dpi=300,
bbox_inches='tight')
plt.show()
plt.close()
test(mode="unipolar", depth_abs=3, depth_kernel=2, depth_sync=2, shiftreg=False, rng="Sobol", rng_dim=4, bitwidth=8, total_cnt=100, savepdf=False)
test(mode="bipolar", depth_abs=3, depth_kernel=2, depth_sync=2, shiftreg=False, rng="Sobol", rng_dim=4, bitwidth=8, total_cnt=100, savepdf=False)
fig, ax = plt.subplots()
fig.set_size_inches(0.1, 1.6)
cmap = cm.RdBu
bounds = [-0.12, -0.09, -0.06, -0.03, 0.00, 0.03, 0.06, 0.09, 0.12]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
cb = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
norm=norm,
boundaries=bounds,
extend='both',
spacing='uniform',
orientation='vertical')
# plt.tight_layout()
# plt.savefig("colorbar.pdf", dpi=300, bbox_inches='tight')
plt.show()
```
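For context, the unipolar/bipolar bounds above follow the standard stochastic-computing value encodings: a unipolar stream represents a value $v \in [0, 1]$ directly as its bit probability, while a bipolar stream represents $v \in [-1, 1]$ via $p = (v+1)/2$. A minimal plain-Python sketch of these mappings (independent of the library used above):

```python
def unipolar_prob(v):
    # unipolar: a value v in [0, 1] is itself the probability of a 1-bit
    assert 0.0 <= v <= 1.0
    return v

def bipolar_prob(v):
    # bipolar: a value v in [-1, 1] maps to bit probability (v + 1) / 2
    assert -1.0 <= v <= 1.0
    return (v + 1.0) / 2.0

# the bounds used above: bipolar covers negative values, unipolar does not
print(bipolar_prob(-1.0), bipolar_prob(0.0), bipolar_prob(1.0))  # → 0.0 0.5 1.0
```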
**[SQL Home Page](https://www.kaggle.com/learn/intro-to-sql)**
---
# Introduction
Queries with **GROUP BY** can be powerful. There are many small things that can trip you up (like the order of the clauses), but it will start to feel natural once you've done it a few times. Here, you'll write queries using **GROUP BY** to answer questions from the Hacker News dataset.
Before you get started, run the following cell to set everything up:
```
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex3 import *
print("Setup Complete")
```
The code cell below fetches the `comments` table from the `hacker_news` dataset. We also preview the first five rows of the table.
```
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "hacker_news" dataset
dataset_ref = client.dataset("hacker_news", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "comments" table
table_ref = dataset_ref.table("comments")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "comments" table
client.list_rows(table, max_results=5).to_dataframe()
```
# Exercises
### 1) Prolific commenters
Hacker News would like to send awards to everyone who has written more than 10,000 posts. Write a query that returns all authors with more than 10,000 posts as well as their post counts. Call the column with post counts `NumPosts`.
In case a sample query is helpful, here is a query from the tutorial that answers a similar question:
```
query = """
SELECT parent, COUNT(1) AS NumPosts
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
HAVING COUNT(1) > 10
"""
```
```
# Query to select prolific commenters and post counts
prolific_commenters_query = """
SELECT author, COUNT(1) AS NumPosts
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY author
HAVING COUNT(1) > 10000
""" # Your code goes here
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 10 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(prolific_commenters_query, job_config=safe_config)
# API request - run the query, and return a pandas DataFrame
prolific_commenters = query_job.to_dataframe()
# View top few rows of results
print(prolific_commenters.head())
# Check your answer
q_1.check()
```
For the solution, uncomment the line below.
```
#q_1.solution()
```
### 2) Deleted comments
How many comments have been deleted? (If a comment was deleted, the `deleted` column in the comments table will have the value `True`.)
```
# Write your query here and figure out the answer
num_deleted_posts = 227736
# Check your answer
q_2.check()
```
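As a hint for the exercise above, the hard-coded count corresponds to a simple filter on the `deleted` column. The sketch below builds such a query string (running it requires BigQuery credentials, and the exact count depends on the dataset snapshot) and applies the same filter to a toy stand-in table:

```python
# Query one could run against BigQuery (not executed here)
deleted_comments_query = """
SELECT COUNT(1) AS num_deleted_posts
FROM `bigquery-public-data.hacker_news.comments`
WHERE deleted = True
"""

# The same filter applied to a toy stand-in for the comments table (illustration only)
toy_comments = [{"id": 1, "deleted": True}, {"id": 2, "deleted": None},
                {"id": 3, "deleted": True}, {"id": 4, "deleted": None}]
num_deleted = sum(1 for row in toy_comments if row["deleted"] is True)
print(num_deleted)  # → 2
```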
For the solution, uncomment the line below.
```
#q_2.solution()
```
# Keep Going
**[Click here](https://www.kaggle.com/dansbecker/order-by)** to move on and learn about the **ORDER BY** clause.
---
**[SQL Home Page](https://www.kaggle.com/learn/intro-to-sql)**
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161314) to chat with other Learners.*
## Logistic Regression in Plaintext : Training and Evaluation
The file Plaintext_train_eval.ipynb shows the implementation and evaluation of Logistic Regression using Nesterov's Accelerated Gradient (NAG) method.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import sklearn
from sklearn.linear_model import LogisticRegression
## Read data
inputData = pd.read_csv("./creditcard.csv")
data = inputData.iloc[:,[4,10,14,16]]
y = inputData.iloc[:,30]
data = (data - data.mean())/data.std()
data[data > 2.5] = 2.5  # cap standardized values at 2.5
data.describe()
```
### Training data preparation
Before running gradient descent, we need to split the data into train and test sets, calculate the pre-computed weights, and define the required functions.
```
# Sample Selection
from sklearn.model_selection import train_test_split
from imblearn.under_sampling import NearMiss
X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.3, random_state=2)
nm = NearMiss(random_state = 2, sampling_strategy=0.025)
x_res, y_res = nm.fit_resample(X_train, y_train.ravel())
print('Shape of training data after Near Miss: {}'.format(x_res.shape))
x_res = pd.DataFrame(x_res)
# Weights precomputation
temp = (x_res.loc[y_res==0,:].mean() + x_res.loc[y_res==1,:].mean() )/2
weight_precomp = [-1*np.log(0.66)/x for x in temp]
bias = [0.6]
beta = pd.Series(bias + weight_precomp)
print(weight_precomp)
## Define functions
def getLinResponse(train_data, beta):
## replace with pd dot function
lin_response = pd.Series(np.dot(train_data, beta))
return lin_response
def getSigmoid(lin_response):
sigmoid = (1 / (1 + np.exp(-1*(lin_response))))
return sigmoid
def getPolySigmoid(lin_response):
N = 4.265
C = 15/4*N**-5
p_sigmoid = C/20*lin_response**5 - C/6*N**2*lin_response**3 + C/4*N**4*lin_response + 0.5
return p_sigmoid
def lsa3Sigmoid(lin_response):
sigmoid = 0.5 + ((1.20096/8) * lin_response) + ((-0.81562/pow(8,3)) * lin_response**3)
return sigmoid
def getBatchGradient(sigmoid, y, train_data):
fungradient = []
for j in range(len(train_data.columns)):
temp = 0
for k in range(len(train_data)):
temp += (sigmoid.iloc[k] - y.iloc[k]) * train_data.iloc[k,j]
fungradient.append(temp)
m = len(train_data)
fungradient[:] = [x/m for x in fungradient]
return fungradient
def getMomentumGradient(sigmoid, y, train_data, mGradient):
bGradient = getBatchGradient(sigmoid, y, train_data)
for index in range(len(bGradient)):
mGradient[index] = ((1 * bGradient[index])/20) + (19 * mGradient[index]/20)
# mGradient[index] = (bGradient[index]) + (9 * mGradient[index]/10)
return mGradient
def getError(sigmoid, y, t_size):
## vectorized cross-entropy; the +0.01 guards against log(0)
err = (-1*y * np.log(sigmoid + 0.01) - (1-y) * np.log(1 - sigmoid + 0.01)).sum()
err = err/t_size
return err
## Run Gradient Descent with NAG
y_res = pd.Series(y_res)
xconst = pd.DataFrame(np.ones(len(x_res)))
train_data = pd.concat([xconst, x_res], axis = 1)
y = y_res
t_size = len(train_data)
iter_num= []
alphaCol = []
error = []
# Stop if error goes below 0.1
thresh = pow(10,-1)
# Stop if not converged in max iterations
max_iter = 10
alpha = 0.1
mGradient = np.zeros(len(data.columns)+1)
for i in range(max_iter):
print("starting iter: ", i)
iter_num.append(i)
alphaCol.append(alpha)
step_temp = pd.Series([val * alpha * -1 for val in mGradient])
beta_temp = beta.add(step_temp)
lin_response = getLinResponse(train_data, beta_temp)
sigmoid_l = lsa3Sigmoid(lin_response)
mGradient = getMomentumGradient(sigmoid_l, y, train_data, mGradient)
step = pd.Series([val * alpha * -1 for val in mGradient])
error.append(getError(sigmoid_l, y, t_size))
if error[i] < thresh:
print("Converged in iteration: ",i)
break
elif i == max_iter-1:
print("failed to converge in max iterations")
break
else:
print("update beta with updated step")
beta = beta.add(step)
continue
```
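The training loop above evaluates `lsa3Sigmoid`, a degree-3 least-squares polynomial approximation of the logistic function (useful where the exact sigmoid cannot be evaluated, e.g. under homomorphic encryption). A standalone sketch comparing it against the exact sigmoid on $[-8, 8]$:

```python
import math

def sigmoid(x):
    # exact logistic function
    return 1.0 / (1.0 + math.exp(-x))

def lsa3_sigmoid(x):
    # degree-3 least-squares approximation used above (intended range roughly [-8, 8])
    return 0.5 + (1.20096 / 8) * x + (-0.81562 / 8**3) * x**3

# exact at 0, antisymmetric around 0.5, largest error at the interval ends
max_err = max(abs(sigmoid(x / 10) - lsa3_sigmoid(x / 10)) for x in range(-80, 81))
print(round(lsa3_sigmoid(0.0), 3), round(max_err, 3))  # → 0.5 0.114
```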
## Evaluation on Training Data
The output of the model on the training data is simply the sigmoid output from the last iteration of gradient descent.
We apply a decision threshold (0.7 on the training data below; 0.6 on the test data) to further reduce false positives.
This maps the model output into {0, 1} in the vector 'out'.
It is compared with the training target vector, y_res, from which accuracy and the other metrics are calculated.
```
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import confusion_matrix
out = sigmoid_l > 0.7
nag_train_accuracy = sum(out == y)/len(y)
nag_train_auc = roc_auc_score(y_res, out)
nag_train_fpr, nag_train_tpr, _ = roc_curve(y_res, sigmoid_l)
nag_train_precision, nag_train_recall, _ = precision_recall_curve(y_res, sigmoid_l)
nag_train_mcc = matthews_corrcoef(y_res, out)
tn, fp, fn, tp = confusion_matrix(y_res, out).ravel()
(tn, fp, fn, tp)
print(nag_train_accuracy, nag_train_auc, nag_train_mcc)
```
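The Matthews correlation coefficient reported above can also be computed directly from the confusion-matrix counts. A sketch with illustrative counts (not the notebook's actual results):

```python
import math

def mcc_from_confusion(tn, fp, fn, tp):
    # Matthews correlation coefficient from confusion-matrix counts
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# illustrative counts only
print(round(mcc_from_confusion(tn=50, fp=10, fn=5, tp=35), 4))  # → 0.6975
```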
## Train sklearn Logistic Regression model and calculate metrics.
The sklearn model is trained for comparison with our algorithm.
The 'liblinear' solver is used, which is considered appropriate for small datasets.
```
clf = LogisticRegression(random_state=42, solver = 'liblinear').fit(x_res, y_res)
y_pred = clf.predict(x_res)
y_pred_prob = clf.predict_proba(x_res)
y_pred_prob = y_pred_prob[:,1]
sk_train_accuracy = clf.score(x_res,y_res)
sk_train_auc = roc_auc_score(y_res, y_pred)
sk_train_fpr, sk_train_tpr, _ = roc_curve(y_res, y_pred_prob)
sk_train_precision, sk_train_recall, _ = precision_recall_curve(y_res, y_pred_prob)
sk_train_mcc = matthews_corrcoef(y_res, y_pred)
sk_train_auc, sk_train_mcc
# Evaluate NAG algorithm model for test data
X_test = np.array(X_test)
const1 = np.ones(len(X_test))
const1 = const1.reshape(len(X_test), 1)
test_data = np.concatenate((const1, X_test), axis = 1)
beta_test = [0.49,0.27,-0.12,-0.09,-0.15]
test_resp = getLinResponse(test_data, beta)
test_sig = lsa3Sigmoid(test_resp)
test_out = test_sig > 0.6
test_out = np.array(test_out)
y_test = np.array(y_test)
tn_test, fp_test, fn_test, tp_test = confusion_matrix(y_test, test_out).ravel()
nag_test_accuracy = sum(test_out == y_test)/len(y_test)
nag_test_auc = roc_auc_score(y_test, test_out)
nag_test_mcc = matthews_corrcoef(y_test, test_out)
nag_test_fpr, nag_test_tpr, _ = roc_curve(y_test, test_sig)
nag_test_precision, nag_test_recall, _ = precision_recall_curve(y_test, test_sig)
print(tn_test, fp_test, fn_test, tp_test)
## Evaluate sklearn model for test data
test_pred = clf.predict(X_test)
test_pred_prob = clf.predict_proba(X_test)
test_pred_prob = test_pred_prob[:,1]
sk_test_accuracy = clf.score(X_test,y_test)
sk_test_auc = roc_auc_score(y_test, test_pred)
sk_test_fpr, sk_test_tpr, _ = roc_curve(y_test, test_pred_prob)
sk_test_precision, sk_test_recall, _ = precision_recall_curve(y_test, test_pred_prob)
sk_test_mcc = matthews_corrcoef(y_test, test_pred)
tn_test, fp_test, fn_test, tp_test = confusion_matrix(y_test, test_pred).ravel()
print(tn_test, fp_test, fn_test, tp_test)
## Print Evaluation and plot
fig = plt.figure(figsize = (15,7))
plt.subplot(1, 2, 1)
plt.plot(sk_test_fpr,sk_test_tpr, label = 'sklearn model')
plt.plot(nag_test_fpr, nag_test_tpr, label = 'NAG plaintext')
plt.legend(loc='best')
plt.title("AUC curves comparison")
plt.subplot(1, 2, 2)
plt.plot(sk_test_recall, sk_test_precision, label = 'sklearn model')
plt.plot(nag_test_recall, nag_test_precision, label = 'NAG plaintext')
plt.legend(loc='best')
plt.title("Precision-Recall curves comparison ")
plt.savefig('./out/sk_pt_evaluation.png', dpi = 100)
plt.show()
print("sklearn AUC = ", sk_test_auc, " NAG AUC = ", nag_test_auc,"\n")
print("sklearn MCC = ", sk_test_mcc, " NAG MCC = ", nag_test_mcc)
```
RMinimum : Full - Test - Case: $k(n) = \log(n)/\log(\log(n))$
```
import math
import random
import queue
```
Test case: $k(n) = \log(n)/\log(\log(n))$
```
# User input
n = 2**22
# Automatic generation: k = log(n)/loglog(n), X = [0, ..., n-1]
lgn = math.log(n) / math.log(2)
k = int(lgn / (math.log(lgn)/math.log(2)))
X = [i for i in range(n)]
# Show Testcase
print('')
print('Input tuple : (n, k)')
print('============')
print('(', n, ',', k, ')')
```
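For concreteness, the group size chosen above evaluates as follows for $n = 2^{22}$:

```python
import math

n = 2**22
lgn = math.log2(n)             # 22.0
k = int(lgn / math.log2(lgn))  # 22 / log2(22) ≈ 4.93, truncated to 4
print(lgn, k)  # → 22.0 4
```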
Algorithm: Full
```
def rminimum(X,k, cnt = []):
k = int(k)
n = len(X)
if cnt == []:
cnt = [0 for _ in range(len(X))]
if len(X) == 3:
if X[0] < X[1]:
cnt[X[0]] += 2
cnt[X[1]] += 1
cnt[X[2]] += 1
if X[0] < X[2]:
X = X[0]
else:
X = X[2]
else:
cnt[X[0]] += 1
cnt[X[1]] += 2
cnt[X[2]] += 1
if X[1] < X[2]:
X = X[1]
else:
X = X[2]
return cnt
W, L, cnt = RMinimum_step1(X, cnt)
minele, cnt = RMinimum_step2(L, k, cnt)
res3, cnt = RMinimum_step3(W, k, minele, cnt)
res4, cnt = RMinimum_step4(res3, k, n, cnt)
return cnt
# ==================================================
def RMinimum_step1(lst, cnt):
random.shuffle(lst)
W = [0 for _ in range(len(lst) // 2)]
L = [0 for _ in range(len(lst) // 2)]
for i in range(len(lst) // 2):
if lst[2 * i] > lst[2 * i + 1]:
W[i] = lst[2 * i + 1]
L[i] = lst[2 * i]
else:
W[i] = lst[2 * i]
L[i] = lst[2 * i + 1]
cnt[lst[2 * i + 1]] += 1
cnt[lst[2 * i]] += 1
return W, L, cnt
# ==================================================
def RMinimum_step2(L, k, cnt):
random.shuffle(L)
res = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
minele = [0 for _ in range(len(res))]
var = list(res)
for i in range(len(var)):
q = queue.Queue()
for item in var[i]:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
minele[i] = q.get()
return minele, cnt
# ==================================================
def RMinimum_step3(lst, k, minele, cnt):
random.shuffle(lst)
var = [lst[i * k:(i + 1) * k] for i in range((len(lst) + k - 1) // k)]
res = [0 for _ in range(len(var))]
for i in range(len(var)):
res[i] = [elem for elem in var[i] if elem < minele[i]]
cnt[minele[i]] += len(var[i])
for elem in var[i]:
cnt[elem] += 1
res = [item for sublist in res for item in sublist]
return res, cnt
# ==================================================
def RMinimum_step4(newW, k, n, cnt):
if len(newW) <= (math.log(n)/math.log(2))**2:
q = queue.Queue()
var = list(newW)
for item in var:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
res = q.get()
else:
res = rminimum(newW,k, cnt)
return res, cnt
# ==================================================
# Test case
cnt = rminimum(X, k)
```
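The first step of RMinimum above is a single pairwise round that splits the input into winners and losers while charging one comparison to each element. A deterministic sketch of that step (the random shuffle, which the probabilistic guarantees rely on, is omitted here):

```python
def tournament_round(lst, cnt):
    # one pairwise round: each element is compared exactly once;
    # the smaller element of each pair goes to W, the larger to L
    W, L = [], []
    for i in range(len(lst) // 2):
        a, b = lst[2 * i], lst[2 * i + 1]
        cnt[a] += 1
        cnt[b] += 1
        if a > b:
            W.append(b)
            L.append(a)
        else:
            W.append(a)
            L.append(b)
    return W, L

X = [3, 1, 4, 2]          # keys double as cnt indices, as in the notebook
cnt = [0] * (max(X) + 1)
W, L = tournament_round(X, cnt)
print(W, L, sum(cnt))  # → [1, 2] [3, 4] 4
```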
Algorithm: recursion cutoff $\log^2(n)$ $\rightarrow$ $\log(n)$
```
def rminimum(X,k, cnt = []):
k = int(k)
n = len(X)
if cnt == []:
cnt = [0 for _ in range(len(X))]
if len(X) == 3:
if X[0] < X[1]:
cnt[X[0]] += 2
cnt[X[1]] += 1
cnt[X[2]] += 1
if X[0] < X[2]:
X = X[0]
else:
X = X[2]
else:
cnt[X[0]] += 1
cnt[X[1]] += 2
cnt[X[2]] += 1
if X[1] < X[2]:
X = X[1]
else:
X = X[2]
return cnt
W, L, cnt = RMinimum_step1(X, cnt)
minele, cnt = RMinimum_step2(L, k, cnt)
res3, cnt = RMinimum_step3(W, k, minele, cnt)
res4, cnt = RMinimum_step4(res3, k, n, cnt)
return cnt
# ==================================================
def RMinimum_step1(lst, cnt):
random.shuffle(lst)
W = [0 for _ in range(len(lst) // 2)]
L = [0 for _ in range(len(lst) // 2)]
for i in range(len(lst) // 2):
if lst[2 * i] > lst[2 * i + 1]:
W[i] = lst[2 * i + 1]
L[i] = lst[2 * i]
else:
W[i] = lst[2 * i]
L[i] = lst[2 * i + 1]
cnt[lst[2 * i + 1]] += 1
cnt[lst[2 * i]] += 1
return W, L, cnt
# ==================================================
def RMinimum_step2(L, k, cnt):
random.shuffle(L)
res = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
minele = [0 for _ in range(len(res))]
var = list(res)
for i in range(len(var)):
q = queue.Queue()
for item in var[i]:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
minele[i] = q.get()
return minele, cnt
# ==================================================
def RMinimum_step3(lst, k, minele, cnt):
random.shuffle(lst)
var = [lst[i * k:(i + 1) * k] for i in range((len(lst) + k - 1) // k)]
res = [0 for _ in range(len(var))]
for i in range(len(var)):
res[i] = [elem for elem in var[i] if elem < minele[i]]
cnt[minele[i]] += len(var[i])
for elem in var[i]:
cnt[elem] += 1
res = [item for sublist in res for item in sublist]
return res, cnt
# ==================================================
def RMinimum_step4(newW, k, n, cnt):
if len(newW) <= (math.log(n)/math.log(2)):
q = queue.Queue()
var = list(newW)
for item in var:
q.put(item)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
res = q.get()
else:
res = rminimum(newW,k, cnt)
return res, cnt
# ==================================================
# Test case
cnt2 = rminimum(X, k)
```
Result: comparison of $|W| < \log^2(n)$ vs. $|W| < \log(n)$
```
def test(X, k, cnt, cnt2):
# cnt: log^2, cnt2: log
n = len(X)
lgn = int(math.log(n) / math.log(2))
lgk = int(math.log(k) / math.log(2))
f_min = cnt[0]
f_rem = max(cnt[1:])
work = int(sum(cnt)/2)
f_min2 = cnt2[0]
f_rem2 = max(cnt2[1:])
work2 = int(sum(cnt2)/2)
print('')
print('Test case n / k:', n, '/', k)
print('====================================')
print('Attribute : log^2 | log')
print('------------------------------------')
print('f_min :', f_min, '|', f_min2)
print('E[f_min] :', k)
print('------------------------------------')
print('f_rem :', f_rem, '|', f_rem2)
print('E[f_rem] :', k)
print('------------------------------------')
print('log(n) :', lgn)
print('------------------------------------')
print('Work :', work, '|', work2)
print('O(n) :', n)
print('====================================')
return
# ==================================================
# Test case
test(X, k, cnt, cnt2)
```
# Introduction to pysptk
This notebook shows a few typical usages of pysptk, with a focus on spectral parameter estimation. The steps are composed of:
- windowing
- mel-generalized cepstrum analysis
- visualize spectral envelope estimates
- F0 estimation
## Requirements
- pysptk: https://github.com/r9y9/pysptk
- seaborn: https://github.com/mwaskom/seaborn
- scipy: https://github.com/scipy/scipy
```
%pylab inline
import matplotlib
import seaborn
seaborn.set_style("dark")
rcParams['figure.figsize'] = (16, 6)
import numpy as np
import pysptk
from scipy.io import wavfile
```
## Data
```
fs, x = wavfile.read(pysptk.util.example_audio_file())
assert fs == 16000
plot(x)
xlim(0, len(x))
title("raw waveform of example audio file")
```
## Windowing
```
# Pick a short segment
pos = 40000
frame_length = 1024
xw = x[pos:pos+frame_length] * pysptk.blackman(frame_length)
plot(xw, linewidth=3.0)
xlim(0, frame_length)
title("a windowed time frame")
# plotting utility
def pplot(sp, envelope, title="no title"):
plot(sp, "b-", linewidth=2.0, label="Original log spectrum 20log|X(w)|")
plot(20.0/np.log(10)*envelope, "r-", linewidth=3.0, label=title)
xlim(0, len(sp))
xlabel("frequency bin")
ylabel("log amplitude")
legend(prop={'size': 20})
```
## Spectral parameter estimation and visualization of its spectral envelope estimate
```
# Compute spectrum 20log|X(w)| for a windowed signal
sp = 20*np.log10(np.abs(np.fft.rfft(xw)))
mgc = pysptk.mgcep(xw, 20, 0.0, 0.0)
pplot(sp, pysptk.mgc2sp(mgc, 0.0, 0.0, frame_length).real, title="Linear frequency cepstrum based envelope")
mgc = pysptk.mcep(xw, 20, 0.41)
pplot(sp, pysptk.mgc2sp(mgc, 0.41, 0.0, frame_length).real, title="Mel-cepstrum based envelope")
mgc = pysptk.mgcep(xw, 20, 0.0, -1.0)
pplot(sp, pysptk.mgc2sp(mgc, 0.0, -1.0, frame_length).real, title="LPC cepstrum based envelope")
mgc = pysptk.mgcep(xw, 20, 0.41, -1.0)
pplot(sp, pysptk.mgc2sp(mgc, 0.41, -1.0, frame_length).real, title="Warped LPC cepstrum based envelope")
mgc = pysptk.gcep(xw, 20, -0.35)
pplot(sp, pysptk.mgc2sp(mgc, 0.0, -0.35, frame_length).real, title="Generalized cepstrum based envelope")
mgc = pysptk.mgcep(xw, 20, 0.41, -0.35)
pplot(sp, pysptk.mgc2sp(mgc, 0.41, -0.35, frame_length).real, title="Mel-generalized cepstrum based envelope")
```
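For reference, the $(\alpha, \gamma)$ pairs passed to `mgcep` above select well-known special cases of mel-generalized cepstrum analysis ($\alpha = 0.41$ is a commonly used frequency-warping value for 16 kHz speech). A small summary table:

```python
# (alpha, gamma) pairs used above and the classical analysis each reduces to
mgc_settings = {
    (0.0, 0.0): "linear-frequency cepstrum",
    (0.41, 0.0): "mel-cepstrum",
    (0.0, -1.0): "LPC cepstrum",
    (0.41, -1.0): "warped LPC cepstrum",
    (0.0, -0.35): "generalized cepstrum",
    (0.41, -0.35): "mel-generalized cepstrum",
}
for (alpha, gamma), name in mgc_settings.items():
    print(f"alpha={alpha:>5}, gamma={gamma:>5}: {name}")
```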
## F0 estimation
pysptk supports two F0 estimation algorithms: SWIPE' and RAPT. For more detailed information, please see the SPTK reference manual.
```
f0_swipe = pysptk.swipe(x.astype(np.float64), fs=fs, hopsize=80, min=60, max=200, otype="f0")
plot(f0_swipe, linewidth=3, label="F0 trajectory estimated by SWIPE'")
f0 = pysptk.rapt(x.astype(np.float32), fs=fs, hopsize=80, min=60, max=200, otype="f0")
plot(f0, linewidth=3, label="F0 trajectory estimated by RAPT")
xlim(0, len(f0_swipe))
legend(prop={'size': 18})
```
```
#IMPORT ALL LIBRARIES
#PANDAS
import pandas as pd
#POSTGRESQL ACCESS
from sqlalchemy import create_engine
import psycopg2
#CHARTING
from matplotlib import pyplot as plt
from matplotlib import style
#BASE PATH AND IN-MEMORY IO
import os
import io
#PDF GENERATION
from fpdf import FPDF
#BASE64 ENCODING FOR CHART IMAGES
import base64
#EXCEL GENERATION
import xlsxwriter
#FUNCTION TO UPLOAD DATA FROM A CSV FILE INTO POSTGRESQL
def uploadToPSQL(columns, table, filePath, engine):
#READ THE CSV FILE
df = pd.read_csv(
os.path.abspath(filePath),
names=columns,
keep_default_na=False
)
#FILL ANY EMPTY FIELDS HERE
df = df.fillna('')
#DROP THE COLUMNS THAT ARE NOT USED
del df['kategori']
del df['jenis']
del df['pengiriman']
del df['satuan']
#MOVE THE DATA FROM THE CSV INTO POSTGRESQL
df.to_sql(
table,
engine,
if_exists='replace'
)
#RETURN TRUE IF THE UPLOAD SUCCEEDED, FALSE OTHERWISE
if len(df) == 0:
return False
else:
return True
#FUNCTION TO BUILD THE CHARTS; DATA IS FETCHED FROM THE DATABASE ORDERED BY DATE, WITH A LIMIT
#THIS FUNCTION ALSO CALLS MAKEEXCEL AND MAKEPDF
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
#TEST THE DATABASE CONNECTION
try:
#CONNECT TO THE DATABASE
connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
cursor = connection.cursor()
#FETCH THE DATA FROM THE TABLE DEFINED BELOW, ORDERED BY DATE
#A LIMIT CAN BE ADDED SO THE QUERY DOES NOT PULL TOO MUCH DATA
postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
cursor.execute(postgreSQL_select_Query)
mobile_records = cursor.fetchall()
uid = []
lengthx = []
lengthy = []
#LOOP OVER THE FETCHED ROWS
#AND APPEND THE VALUES TO THE VARIABLES ABOVE
for row in mobile_records:
uid.append(row[0])
lengthx.append(row[1])
if row[2] == "":
lengthy.append(float(0))
else:
lengthy.append(float(row[2]))
#BUILD THE CHARTS
#bar
style.use('ggplot')
fig, ax = plt.subplots()
#PLOT THE ROW IDS FROM THE DATABASE AGAINST THE TOTALS
ax.bar(uid, lengthy, align='center')
#CHART TITLE
ax.set_title(judul)
ax.set_ylabel('Total')
ax.set_xlabel('Tanggal')
ax.set_xticks(uid)
#USE THE DATES FETCHED FROM THE DATABASE AS TICK LABELS
ax.set_xticklabels((lengthx))
b = io.BytesIO()
#SAVE THE CHART AS PNG
plt.savefig(b, format='png', bbox_inches="tight")
#CONVERT THE PNG CHART TO BASE64
barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
#DISPLAY THE CHART
plt.show()
#line
#PLOT THE DATABASE DATA
plt.plot(lengthx, lengthy)
plt.xlabel('Tanggal')
plt.ylabel('Total')
#CHART TITLE
plt.title(judul)
plt.grid(True)
l = io.BytesIO()
#SAVE THE CHART AS PNG
plt.savefig(l, format='png', bbox_inches="tight")
#CONVERT THE PNG CHART TO BASE64
lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
#DISPLAY THE CHART
plt.show()
#pie
#CHART TITLE
plt.title(judul)
#PLOT THE DATABASE DATA
plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
shadow=True, startangle=180)
plt.axis('equal')
p = io.BytesIO()
#SAVE THE CHART AS PNG
plt.savefig(p, format='png', bbox_inches="tight")
#CONVERT THE PNG CHART TO BASE64
pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
#DISPLAY THE CHART
plt.show()
#READ THE CSV AGAIN TO GET THE TABLE HEADER FOR THE EXCEL AND PDF OUTPUT
header = pd.read_csv(
os.path.abspath(filePath),
names=columns,
keep_default_na=False
)
#FILL ANY EMPTY FIELDS AND DROP THE COLUMNS THAT ARE NOT USED
header = header.fillna('')
del header['tanggal']
del header['total']
#CALL THE EXCEL FUNCTION
makeExcel(mobile_records, header, name, limit, basePath)
#CALL THE PDF FUNCTION
makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
#IF THE DATABASE CONNECTION FAILS, PRINT THE ERROR HERE
except (Exception, psycopg2.Error) as error :
print (error)
#CLOSE THE CONNECTION
finally:
if(connection):
cursor.close()
connection.close()
#MAKEEXCEL TURNS THE DATABASE DATA INTO AN EXCEL TABLE (FORMAT F2)
#THE PLUGIN USED IS XLSXWRITER
def makeExcel(datarow, dataheader, name, limit, basePath):
#CREATE THE EXCEL FILE
workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorFiskal/excel/'+name+'.xlsx')
#ADD A WORKSHEET TO THE FILE
worksheet = workbook.add_worksheet('sheet1')
#CELL FORMATS: BORDERS EVERYWHERE, BOLD FONT FOR THE HEADER ROW
row1 = workbook.add_format({'border': 2, 'bold': 1})
row2 = workbook.add_format({'border': 2})
#CONVERT THE DATA TO LISTS
data=list(datarow)
isihead=list(dataheader.values)
header = []
body = []
#LOOP OVER THE DATA AND COLLECT IT INTO THE VARIABLES ABOVE
for rowhead in dataheader:
header.append(str(rowhead))
for rowhead2 in datarow:
header.append(str(rowhead2[1]))
for rowbody in isihead[1]:
body.append(str(rowbody))
for rowbody2 in data:
body.append(str(rowbody2[2]))
#WRITE THE COLLECTED DATA INTO THE EXCEL ROWS AND COLUMNS
for col_num, data in enumerate(header):
worksheet.write(0, col_num, data, row1)
for col_num, data in enumerate(body):
worksheet.write(1, col_num, data, row2)
#CLOSE THE EXCEL FILE
workbook.close()
#MAKEPDF TURNS THE DATABASE DATA INTO A PDF TABLE (FORMAT F2)
#THE PLUGIN USED IS FPDF
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
#PAGE SETUP: A4 PAPER IN LANDSCAPE ORIENTATION
pdf = FPDF('L', 'mm', [210,297])
#ADD A PAGE TO THE PDF
pdf.add_page()
#FONT SIZE AND PADDING SETTINGS
pdf.set_font('helvetica', 'B', 20.0)
pdf.set_xy(145.0, 15.0)
#WRITE THE TITLE INTO THE PDF
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
#FONT SIZE AND PADDING SETTINGS
pdf.set_font('arial', '', 14.0)
pdf.set_xy(145.0, 25.0)
#WRITE THE SUBTITLE INTO THE PDF
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
#DRAW A LINE UNDER THE SUBTITLE
pdf.line(10.0, 30.0, 287.0, 30.0)
pdf.set_font('times', '', 10.0)
pdf.set_xy(17.0, 37.0)
#FONT SIZE AND PADDING SETTINGS
pdf.set_font('Times','',10.0)
#GET THE PDF HEADER DATA DEFINED ABOVE
datahead=list(dataheader.values)
pdf.set_font('Times','B',12.0)
pdf.ln(0.5)
th1 = pdf.font_size
#BUILD THE PDF TABLE AND SHOW THE DATA PASSED IN
pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
pdf.ln(2*th1)
#PADDING SETTINGS
pdf.set_xy(17.0, 75.0)
#FONT SIZE AND PADDING SETTINGS
pdf.set_font('Times','B',11.0)
data=list(datarow)
epw = pdf.w - 2*pdf.l_margin
col_width = epw/(lengthPDF+1)
#PADDING SETTINGS
pdf.ln(0.5)
th = pdf.font_size
#WRITE THE HEADER DATA PASSED IN ABOVE INTO THE PDF
pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
for row in data:
pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
pdf.ln(2*th)
#WRITE THE BODY DATA PASSED IN ABOVE INTO THE PDF
pdf.set_font('Times','B',10.0)
pdf.set_font('Arial','',9)
pdf.cell(50, 2*th, negara, border=1, align='C')  #NOTE: RELIES ON THE MODULE-LEVEL VARIABLE 'negara'
for row in data:
pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
pdf.ln(2*th)
# Take the chart data, convert each chart to a PNG, and save it in the directory below
# Bar chart
bardata = base64.b64decode(bar)
barname = basePath+'jupyter/BLOOMBERG/SektorFiskal/img/'+name+'-bar.png'
with open(barname, 'wb') as f:
    f.write(bardata)
# Line chart
linedata = base64.b64decode(line)
linename = basePath+'jupyter/BLOOMBERG/SektorFiskal/img/'+name+'-line.png'
with open(linename, 'wb') as f:
    f.write(linedata)
# Pie chart
piedata = base64.b64decode(pie)
piename = basePath+'jupyter/BLOOMBERG/SektorFiskal/img/'+name+'-pie.png'
with open(piename, 'wb') as f:
    f.write(piedata)
# Set the font size and the padding position
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
widthcol = col/3
# Place the saved images from the directory above into the PDF
pdf.image(barname, link='', type='', x=8, y=100, w=widthcol)
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(linename, link='', type='', x=103, y=100, w=widthcol)
pdf.set_xy(17.0, 75.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(piename, link='', type='', x=195, y=100, w=widthcol)
pdf.ln(2*th)
# Write the PDF file
pdf.output(basePath+'jupyter/BLOOMBERG/SektorFiskal/pdf/'+name+'.pdf', 'F')
# Define the variables here before they are passed to the functions.
# uploadToPSQL is called first; only if it succeeds is makeChart called,
# and makeChart in turn calls makeExcel and makePDF.
# Define the columns based on the CSV fields
columns = [
    "kategori",
    "jenis",
    "tanggal",
    "total",
    "pengiriman",
    "satuan",
]
# File name
name = "SektorFiskal2_2"
# Database connection settings
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_SektorFiskal"
table = name.lower()
# Title and subtitle for the PDF and Excel output
judul = "Data Sektor Fiskal"
subjudul = "Badan Perencanaan Pembangunan Nasional"
# Row limit for the database SELECT
limitdata = int(8)
# Country name shown in the Excel and PDF output
negara = "Indonesia"
# Base directory path
basePath = 'C:/Users/ASUS/Documents/bappenas/'
# CSV file
filePath = basePath + 'data mentah/BLOOMBERG/SektorFiskal/' + name + '.csv'
# Connect to the database
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
# Call the uploadToPSQL function
checkUpload = uploadToPSQL(columns, table, filePath, engine)
# If the upload succeeded, build the charts; otherwise print an error message
if checkUpload == True:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error When Upload CSV")
```
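The chart-export steps in the cell above (decode the base64 payload, write it out as a PNG) are repeated three times for the bar, line, and pie charts. They can be factored into one small helper; the function name and the in-memory example below are illustrative, not part of the original script:

```python
import base64

def save_chart(b64_data, path):
    """Decode a base64-encoded chart image and write it to `path`."""
    with open(path, 'wb') as f:
        f.write(base64.b64decode(b64_data))
    return path
```

With this helper, `save_chart(bar, barname)`, `save_chart(line, linename)`, and `save_chart(pie, piename)` would replace the three near-identical blocks.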
TSG050 - Cluster create hangs with “timeout expired waiting for volumes to attach or mount for pod”
===================================================================================================
Description
-----------
The controller gets stuck during the `bdc create` process.
> Events:
>
>     Type     Reason            Age                From                               Message
>     ----     ------            ----               ----                               -------
>     Warning  FailedScheduling  12m (x7 over 12m)  default-scheduler                  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
>     Normal   Scheduled         12m                default-scheduler                  Successfully assigned bdc/mssql-monitor-influxdb-0 to aks-nodepool1-32258814-0
>     Warning  FailedMount       1m (x5 over 10m)   kubelet, aks-nodepool1-32258814-0  Unable to mount volumes for pod "mssql-monitor-influxdb-0_bdc(888fb098-4857-11e9-92d1-0e4531614717)": timeout expired waiting for volumes to attach or mount for pod "bdc"/"mssql-controller-0". list of unmounted volumes=[storage]. list of unattached volumes=[storage default-token-pj765]
NOTE: This warning often appears during a normal deployment, but it
should clear up within a couple of minutes.
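One way to tell a benign, transient `FailedMount` warning from a genuinely stuck pod is to check how long the warning has persisted. A minimal sketch of that check (the 5-minute threshold is an assumption for illustration, not an official cutoff):

```python
from datetime import datetime, timedelta, timezone

def warning_is_stale(last_seen, now=None, threshold=timedelta(minutes=5)):
    """Return True if a FailedMount warning has persisted longer than `threshold`."""
    now = now or datetime.now(timezone.utc)
    return (now - last_seen) > threshold
```

If the warning's last-seen timestamp is older than the threshold, proceed with the steps below; otherwise give the deployment a little more time.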
Steps
-----
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
    """Run shell command, stream stdout, print stderr and optionally return output

    NOTES:

    1.  Commands that need this kind of ' quoting on Windows e.g.:

            kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}

        Need to actually pass in as '"':

            kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}

        The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:

            `iter(p.stdout.readline, b'')`

        The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
    """
    MAX_RETRIES = 5
    output = ""
    retry = False

    global first_run
    global rules

    if first_run:
        first_run = False
        rules = load_rules()

    # When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
    #
    # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
    #
    if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
        cmd = cmd.replace("\n", " ")

    # shlex.split is required on bash and for Windows paths with spaces
    #
    cmd_actual = shlex.split(cmd)

    # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
    #
    user_provided_exe_name = cmd_actual[0].lower()

    # When running python, use the python in the ADS sandbox ({sys.executable})
    #
    if cmd.startswith("python "):
        cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)

        # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
        # with:
        #
        #    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
        #
        # Setting it to a default value of "en_US.UTF-8" enables pip install to complete
        #
        if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
            os.environ["LC_ALL"] = "en_US.UTF-8"

    # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
    #
    if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
        cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")

    # To aid supportability, determine which binary file will actually be executed on the machine
    #
    which_binary = None

    # Special case for CURL on Windows.  The version of CURL in Windows System32 does not work to
    # get JWT tokens, it returns "(56) Failure when receiving data from the peer".  If another instance
    # of CURL exists on the machine use that one.  (Unfortunately the curl.exe in System32 is almost
    # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
    # look for the 2nd installation of CURL in the path)
    if platform.system() == "Windows" and cmd.startswith("curl "):
        path = os.getenv('PATH')
        for p in path.split(os.path.pathsep):
            p = os.path.join(p, "curl.exe")
            if os.path.exists(p) and os.access(p, os.X_OK):
                if p.lower().find("system32") == -1:
                    cmd_actual[0] = p
                    which_binary = p
                    break

    # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
    # seems to be required for .msi installs of azdata.cmd/az.cmd.  (otherwise Popen returns FileNotFound)
    #
    # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
    #
    if which_binary == None:
        which_binary = shutil.which(cmd_actual[0])

    # Display an install HINT, so the user can click on a SOP to install the missing binary
    #
    if which_binary == None:
        print(f"The path used to search for '{cmd_actual[0]}' was:")
        print(sys.path)

        if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
            display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))

        raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
    else:
        cmd_actual[0] = which_binary

    start_time = datetime.datetime.now().replace(microsecond=0)

    print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
    print(f"       using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
    print(f"       cwd: {os.getcwd()}")

    # Command-line tools such as CURL and AZDATA HDFS commands output
    # scrolling progress bars, which causes Jupyter to hang forever, to
    # workaround this, use no_output=True
    #
    # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
    #
    wait = True

    try:
        if no_output:
            p = Popen(cmd_actual)
        else:
            p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
            with p.stdout:
                for line in iter(p.stdout.readline, b''):
                    line = line.decode()
                    if return_output:
                        output = output + line
                    else:
                        if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
                            regex = re.compile(' "(.*)"\: "(.*)"')
                            match = regex.match(line)
                            if match:
                                if match.group(1).find("HTML") != -1:
                                    display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
                                else:
                                    display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))

                                    wait = False
                                    break # otherwise infinite hang, have not worked out why yet.
                        else:
                            print(line, end='')
                            if rules is not None:
                                apply_expert_rules(line)

        if wait:
            p.wait()
    except FileNotFoundError as e:
        if install_hint is not None:
            display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))

        raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e

    exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()

    if not no_output:
        for line in iter(p.stderr.readline, b''):
            try:
                line_decoded = line.decode()
            except UnicodeDecodeError:
                # NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
                #
                #   \xa0
                #
                # For example see this in the response from `az group create`:
                #
                # ERROR: Get Token request returned http error: 400 and server
                # response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
                # The refresh token has expired due to inactivity.\xa0The token was
                # issued on 2018-10-25T23:35:11.9832872Z
                #
                # which generates the exception:
                #
                # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
                #
                print("WARNING: Unable to decode stderr line, printing raw bytes:")
                print(line)
                line_decoded = ""
                pass
            else:

                # azdata emits a single empty line to stderr when doing an hdfs cp, don't
                # print this empty "ERR:" as it confuses.
                #
                if line_decoded == "":
                    continue

                print(f"STDERR: {line_decoded}", end='')

                if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
                    exit_code_workaround = 1

                # inject HINTs to next TSG/SOP based on output in stderr
                #
                if user_provided_exe_name in error_hints:
                    for error_hint in error_hints[user_provided_exe_name]:
                        if line_decoded.find(error_hint[0]) != -1:
                            display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))

                # apply expert rules (to run follow-on notebooks), based on output
                #
                if rules is not None:
                    apply_expert_rules(line_decoded)

                # Verify if a transient error, if so automatically retry (recursive)
                #
                if user_provided_exe_name in retry_hints:
                    for retry_hint in retry_hints[user_provided_exe_name]:
                        if line_decoded.find(retry_hint) != -1:
                            if retry_count < MAX_RETRIES:
                                print(f"RETRY: {retry_count} (due to: {retry_hint})")
                                retry_count = retry_count + 1
                                output = run(cmd, return_output=return_output, retry_count=retry_count)

                                if return_output:
                                    if base64_decode:
                                        import base64
                                        return base64.b64decode(output).decode('utf-8')
                                    else:
                                        return output

    elapsed = datetime.datetime.now().replace(microsecond=0) - start_time

    # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
    # don't wait here, if success known above
    #
    if wait:
        if p.returncode != 0:
            raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
    else:
        if exit_code_workaround != 0:
            raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')

    print(f'\nSUCCESS: {elapsed}s elapsed.\n')

    if return_output:
        if base64_decode:
            import base64
            return base64.b64decode(output).decode('utf-8')
        else:
            return output
def load_json(filename):
    """Load a json file from disk and return the contents"""
    with open(filename, encoding="utf8") as json_file:
        return json.load(json_file)

def load_rules():
    """Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""

    # Load this notebook as json to get access to the expert rules in the notebook metadata.
    #
    try:
        j = load_json("tsg050-timeout-expired-waiting-for-volumes.ipynb")
    except:
        pass # If the user has renamed the book, we can't load ourself.  NOTE: Is there a way in Jupyter, to know your own filename?
    else:
        if "metadata" in j and \
           "azdata" in j["metadata"] and \
           "expert" in j["metadata"]["azdata"] and \
           "expanded_rules" in j["metadata"]["azdata"]["expert"]:
            rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
            rules.sort() # Sort rules, so they run in priority order (the [0] element).  Lowest value first.
            # print (f"EXPERT: There are {len(rules)} rules to evaluate.")
            return rules

def apply_expert_rules(line):
    """Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
    inject a 'HINT' to the follow-on SOP/TSG to run"""

    global rules

    for rule in rules:
        notebook = rule[1]
        cell_type = rule[2]
        output_type = rule[3] # i.e. stream or error
        output_type_name = rule[4] # i.e. ename or name
        output_type_value = rule[5] # i.e. SystemExit or stdout
        details_name = rule[6]  # i.e. evalue or text
        expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!

        if debug_logging:
            print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")

        if re.match(expression, line, re.DOTALL):
            if debug_logging:
                print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{3}'".format(output_type_name, output_type_value, expression, notebook))
            match_found = True
            display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
```
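As an illustration of how `run` consumes the hint tables defined above, the lookup reduces to a substring scan of each stderr line against the per-executable trigger texts. The dictionary below is a trimmed copy for demonstration, not the live table:

```python
# Trimmed copy of the error_hints table, for illustration only
sample_error_hints = {'kubectl': [['no such host',
                                   'TSG010 - Get configuration contexts',
                                   '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb']]}

def hints_for(exe, stderr_line, hints=sample_error_hints):
    """Return (title, link) pairs whose trigger text appears in the stderr line."""
    return [(title, link)
            for trigger, title, link in hints.get(exe, [])
            if trigger in stderr_line]
```

A `kubectl` failure containing "no such host" would therefore surface the TSG010 hint, while output from an executable with no registered hints matches nothing.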
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
    from kubernetes import client, config
    from kubernetes.stream import stream

    if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
        config.load_incluster_config()
    else:
        try:
            config.load_kube_config()
        except:
            display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
            raise
    api = client.CoreV1Api()

    print('Kubernetes client instantiated')
except ImportError:
    display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
    raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
    namespace = os.environ["AZDATA_NAMESPACE"]
else:
    try:
        namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
    except IndexError:
        from IPython.display import Markdown
        display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
        display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
        raise

print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get the name of controller pod
```
label_selector = 'app=controller'
name=api.list_namespaced_pod(namespace, label_selector=label_selector).items[0].metadata.name
print ("Controller pod name: " + name)
```
### Set the text to look for in pod events
Set the text to look for in pod events that indicates this TSG is
applicable to the current cluster state.
```
kind="Pod"
precondition_text="timeout expired waiting for volumes to attach or mount for pod"
```
### Get events for a Kubernetes resource
Get the events for a namespaced Kubernetes resource:
```
V1EventList=api.list_namespaced_event(namespace)
for event in V1EventList.items:
    if (event.involved_object.kind==kind and event.involved_object.name==name):
        print(event.message)
```
### PRECONDITION CHECK
```
precondition=False

for event in V1EventList.items:
    if (event.involved_object.kind==kind and event.involved_object.name==name):
        if event.message.find(precondition_text) != -1:
            precondition=True

if not precondition:
    raise Exception("PRECONDITION NON-MATCH: 'tsg050-timeout-expired-waiting-for-volumes' is not a match for an active problem")

print("PRECONDITION MATCH: 'tsg050-timeout-expired-waiting-for-volumes' is a match for an active problem in this cluster")
```
Resolution
----------
Delete the pod that is stuck trying to mount a PV (Persisted Volume),
the higher level kubernetes resource (statefulset, replicaset etc.) will
re-create the Pod.
```
run(f'kubectl delete pod/{name} -n {namespace}')
```
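If you prefer to stay inside the Kubernetes Python client rather than shelling out to `kubectl`, the same deletion can be done with `CoreV1Api.delete_namespaced_pod`. A sketch, assuming `api`, `name`, and `namespace` are the variables created in the cells above:

```python
def delete_stuck_pod(api, name, namespace, grace_period_seconds=0):
    """Delete a pod stuck mounting its PV; the owning statefulset/replicaset re-creates it."""
    return api.delete_namespaced_pod(name, namespace,
                                     grace_period_seconds=grace_period_seconds)
```

`delete_stuck_pod(api, name, namespace)` is then equivalent to the `kubectl delete pod` call above; `grace_period_seconds=0` forces an immediate delete rather than the default graceful shutdown.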
### Get the name of the new controller pod
Get the name of the new controller pod, and view its events to ensure
the issue has cleared up.
```
name=api.list_namespaced_pod(namespace, label_selector=label_selector).items[0].metadata.name
print("New controller pod name: " + name)
```
### Get events for a Kubernetes resource
Get the events for a namespaced Kubernetes resource:
```
V1EventList=api.list_namespaced_event(namespace)
for event in V1EventList.items:
    if (event.involved_object.kind==kind and event.involved_object.name==name):
        print(event.message)
```
### Validate the new controller pod gets into a ‘Running’ state
```
run(f'kubectl get pod/{name} -n {namespace}')
print('Notebook execution complete.')
```
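Rather than re-running the cell above by hand until the pod reports ‘Running’, the wait can be scripted. A sketch with the polling logic factored out so it can be exercised without a cluster (the 300-second timeout is an arbitrary choice, not an official value):

```python
import time

def wait_for_phase(get_phase, wanted="Running", timeout_s=300, interval_s=5, sleep=time.sleep):
    """Poll get_phase() until it returns `wanted` or the timeout elapses."""
    waited = 0
    while waited <= timeout_s:
        if get_phase() == wanted:
            return True
        sleep(interval_s)
        waited += interval_s
    return False
```

In this notebook, `get_phase` could plausibly be `lambda: api.read_namespaced_pod(name, namespace).status.phase`, using the `api` object instantiated earlier.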